There are people out there who are not the biggest fans of automated tests. They think tests cause more trouble than they're worth, and I can definitely sympathize with that perspective. In fact, I'll readily admit it can be completely true: a bad automated test can be worse than no test at all. But I've worked on projects that had a lot of good automated tests, and I think they were better off for them.
The usual alternative these people propose to automated tests is to “run your software frequently in a live scenario to test the features as they’re being deployed.” But I have some questions for the people who feel this way:
Are you advocating for a team of people (perhaps QA) constantly testing the same functionality of the system over and over again, as frequently as possible? That’s possible if your program is simple in nature, but if your program is complex, it could take hours to test even a fraction of the functionality. It seems to me that to do this constantly, you would need many people working all the time, testing every feature that exists. Otherwise, won’t you constantly be falling behind?
Let’s say you’re really ambitious and you want to know at the end of every single day whether every feature still works. How many people would you estimate you’d need to achieve that? Keep in mind that new features are being added every day. As soon as you commit, you’ll have to wait at least 24 hours before you know if your change works, and QA will need time to get up to speed on what’s new and how to test it. When I think of the projects I’ve worked on, I’d estimate there would need to be at least four times as many people searching for bugs as there are people developing features.
OK, maybe testing every feature at the end of every day is too ambitious. Let’s reduce the workload and say that you test every feature before every major release, but on a daily basis you only test the new features. That’s pragmatic, but the problem is that it takes longer to notice when new features break old ones. You’re unlikely to notice, because you’re not actively checking whether the old features still work.
Let’s say it takes a week to completely test every single feature; that’s realistic when I consider some of the places I’ve worked in the past. Let’s also assume you’re about to make a release and you want to be very confident there are no obvious bugs in it. Doesn’t this mean you have to stop adding features to the release’s code base for a whole week? Doesn’t it mean that if bugs are discovered and fixed, you have to start the week of testing over to be really sure that none of the old features broke? And if it turns out something did break, that’s another week down the drain. This just doesn’t seem like the right approach to me.
Most importantly, even in the most idealistic of worlds, the feedback would still be much slower than automated tests! I’ve written automated test suites made up of thousands of checks that run in under a second. Compare that to our “overly ambitious” plan to manually test the whole application in a day. My automated test suite can run tens of thousands of times each day. A manual test suite can’t compete with that.
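To give a sense of the scale involved, here’s a minimal sketch of what “thousands of checks in under a second” looks like. The function and test names are hypothetical, not from any real project; the point is just that small, in-memory assertions are effectively free to run:

```python
import time

def slugify(title):
    # Tiny function under test: lowercase, spaces to hyphens.
    return title.strip().lower().replace(" ", "-")

def run_checks():
    # Ten thousand small assertions, all pure in-memory work.
    checks = 0
    for i in range(10_000):
        assert slugify(f"Post {i}") == f"post-{i}"
        checks += 1
    return checks

start = time.perf_counter()
total = run_checks()
elapsed = time.perf_counter() - start
print(f"{total} checks in {elapsed:.3f}s")
```

On any modern machine this finishes in milliseconds, which is what lets a suite like it run on every commit rather than once a day.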
Going 100% manual just doesn’t seem sustainable to me. It’s a time sink, and you’d need a lot of money to throw at the problem.
A lot of people dislike automated tests because they can’t catch everything. Well, I’ve seen bugs occur on manually tested projects, too. These people don’t seem to compare the two approaches on an even playing field.
But that’s not even the point. I would never advocate automated tests instead of manual tests; I advocate automated tests to offset manual tests. For example, manual tests are great at finding bugs in new features, and automated tests are great at catching regression bugs. You can’t be confident you’re catching everything if you only have one piece of the picture.
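The regression-catching role can be sketched concretely. Suppose a bug was once found manually, then fixed; an automated test pins the fixed behavior so it can’t silently break again. The helper below is a made-up example, not from any real codebase:

```python
def parse_price(text):
    # Hypothetical helper: parse a price string like "$1,234.56"
    # into an integer number of cents. An earlier version forgot
    # to strip the thousands separator; this regression test now
    # pins the fixed behavior.
    cleaned = text.replace("$", "").replace(",", "").strip()
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

def test_parse_price_handles_thousands_separator():
    # This exact input once failed; keeping the check here means
    # the old bug can't quietly come back with a future change.
    assert parse_price("$1,234.56") == 123456

test_parse_price_handles_thousands_separator()
print("regression check passed")
```

Manual QA found the bug once; the automated test makes sure no one ever has to find it twice.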
When you think about this in terms of code coverage, the manual tests check the tip of the iceberg and the automated tests check the rest. Forcing a QA team to carry the whole iceberg on their shoulders is a waste of time and money, given the relative ease with which most of it could be automated.