How to write your automated tests so they don’t fail when you refactor

The main benefit of a good automated test suite is that it gives you confidence that your code is working correctly.  It helps you find bugs as soon as you introduce them instead of at 3 a.m. in production.

If you have a solid test suite, you can feel confident that the change you made is safe.  That confidence lets you change the nastiest parts of your code without wondering whether you broke something.

But sometimes tests can get in your way.  Sometimes you’ll want to refactor a group of classes and tests will fail even though nothing actually broke.  This is a bad smell and it should be avoided.  In this article I’m going to teach you how.  But, before I do that, I want to make sure we all agree on the definition of “refactoring”.

Code refactoring is the process of restructuring existing computer code – changing the factoring – without changing its external behavior. –Wikipedia

In other words, refactoring means cleaning up the code without changing functionality.  If you refactored correctly, every single feature in your application should work exactly the same as it did before you refactored.

Here’s how you know you’ve got a problem

Pretend you have two classes, X and Y.  X calls Y.  Each is heavily tested by an XTest and a YTest, respectively.  But now you realize that Y shouldn’t exist anymore: it should be merged into X, and only X should remain.
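To make the scenario concrete, here’s a minimal sketch of that setup.  The method names and the discount logic are hypothetical, invented purely for illustration:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Y is a small collaborator that X delegates to.
    class Y {
        int discount(int price) {
            return price / 10; // a flat 10% discount, for illustration
        }
    }

    class X {
        private final Y y = new Y();

        int finalPrice(int price) {
            return price - y.discount(price);
        }
    }

    // One test class per production class: YTest targets Y directly.
    class YTest {
        @Test
        void discountIsTenPercent() {
            assertEquals(10, new Y().discount(100));
        }
    }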

If your tests are pulling their weight, you should be able to refactor this code without needing to change the tests.  But good luck getting away with that: as soon as Y disappears, YTest will refer to a class that no longer exists, and it will fail.  It’ll fail even if your code works correctly.  You shouldn’t have to refactor your tests before you can refactor your production code.

This isn’t exclusive to merging one class into another.  It’ll also occur if you move a method between classes, inline a public method, push down a method, etc.

Here’s how you avoid the issue in the previous scenario: skip writing YTest altogether.  Instead of writing tests directly for Y, test Y indirectly by calling methods on X.  Now if you decide to merge Y into X, none of your tests have to change.  The only time a test should fail is when you introduce a change in functionality, and avoiding YTest is how you achieve that goal.
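With the hypothetical classes from the earlier sketch, that looks like this: YTest is gone, and Y’s discount logic is covered through X’s public API instead:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // No YTest.  Y's behavior is exercised through X, so merging Y into X
    // later won't break this test -- only a change in functionality will.
    class XTest {
        @Test
        void finalPriceAppliesTheDiscount() {
            // 100 - (100 / 10) = 90; this fails only if the behavior changes.
            assertEquals(90, new X().finalPrice(100));
        }
    }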

Keep in mind that when you throw out YTest, you end up with far fewer tests overall.  That means less time writing tests and more time writing actual functionality.  This is a huge bonus.

But wait… if you look at the dependency graph, there’s probably a class calling X, too.  Let’s call that class W.  What if one day you realize that X should be merged into W and X should no longer exist?  Then all of your XTest tests are going to fail even though you didn’t change any functionality.  This is just as bad as the first scenario.

How are we supposed to know in advance which classes we will merge and which ones will stay?  There’s really no way to know, and if you guess wrong, you’ve hamstrung yourself later.  Fortunately, there’s a pretty good solution to this problem: test from as far outside as possible.  If you can test your code by calling the same API that your client does, that’s the best way to prevent your tests from getting in your way when you refactor.
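As a sketch of what that looks like: imagine a CheckoutApi class sitting at the boundary your client calls.  CheckoutApi is entirely made up for this example, but the point is that everything below it (W, X, Y) is free to be reshuffled without touching the test:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Hypothetical facade: the outermost API a client would call.
    class CheckoutApi {
        private final X x = new X(); // wires up the W -> X -> Y graph internally

        int checkout(String itemId, int price) {
            return x.finalPrice(price);
        }
    }

    // Tests written against the outermost API survive any merge or split
    // of W, X, and Y underneath it.
    class CheckoutApiTest {
        @Test
        void checkoutChargesTheDiscountedPrice() {
            assertEquals(90, new CheckoutApi().checkout("item-1", 100));
        }
    }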

People usually bring up two concerns when I tell them this: the difficulty of troubleshooting and the speed of the test suite.  I’ll address both next:

Are these tests more difficult to troubleshoot?

In some ways, yes.  The advantage of testing every single class is that when an error occurs, the test points you directly to it.  This is a very useful quality.  The technique I use will not point you directly to the problem.  For example, if you have an XTest and it uses functionality in X, Y, and Z, then when a test fails, the problem could be in X, Y, and/or Z.  This takes extra time to debug.

But if you can get your test suite to run really fast (more about that below), it’s not nearly as big an issue as you might think.  Imagine you have thousands of tests and the whole suite runs in one second.  If you could get your tests running that fast, you could run the whole suite after every single change you make.  And if you run all your tests after every change, you know exactly where the problem was introduced: you broke something when you changed the previous line.

In my opinion, this is almost as good as having a stack trace point you to the exact line of the problem.  I have decided that I’m willing to sacrifice a little bit of debugging info for the ability to refactor freely.  Overall, it has made me more productive.

Besides, the whole point of the tests is to give me confidence in my code base.  Testing every class has the opposite effect for me.  How confident can I really be when my tests lie and say I introduced a problem just because I refactored?

Aren’t these tests slow?

Not if you write them the way I do.  Speed is essential, and I wouldn’t advocate any automated testing technique that wasn’t fast.  You want to test from as far outside as possible, but if your test ends up triggering IO (network activity, database reads and writes), you should mock that part out.  For example, if your XTest calls X, then Y, then Z, and Z uses a DatabaseConnection, you want to use Dependency Injection to substitute a mock DatabaseConnection so it doesn’t really connect to a database.  If you mock out the slow parts, you can have really good coverage, really good speed, and the ability to refactor without your tests lying to you.
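Here’s one way that might look, as a minimal sketch.  The shape of DatabaseConnection is my assumption (the article doesn’t show it): it’s modeled as an interface, Z receives it through its constructor, and the test injects an in-memory fake:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Assumed shape: Z depends on an interface rather than a concrete
    // database class, so a test can hand it a fake.
    interface DatabaseConnection {
        int fetchPrice(String itemId);
    }

    class Z {
        private final DatabaseConnection db;

        Z(DatabaseConnection db) { // Dependency Injection: the connection comes from outside
            this.db = db;
        }

        int priceOf(String itemId) {
            return db.fetchPrice(itemId);
        }
    }

    class FastOutsideInTest {
        @Test
        void fetchesPricesWithoutTouchingARealDatabase() {
            // In-memory fake: no network, no disk, so the test stays fast.
            DatabaseConnection fake = itemId -> 100;
            assertEquals(100, new Z(fake).priceOf("item-1"));
        }
    }

In a real suite you’d wire the fake in wherever the object graph is built and still drive the test through X’s API; this sketch just isolates the injection seam.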

Where to go to learn more

I hope this has given you some insight into a way of testing that you probably aren’t using but probably should be.  For what it’s worth, when I first started I wrote a test class for each production class.  But after realizing the pain I was putting myself through, I decided that either automated testing was just hype or I was doing something wrong.  I’m glad I stuck with it and discovered some content that taught me that I was, in fact, doing something wrong.  As a result, I have without a doubt become more productive.

If you would like to see a detailed talk about this technique, you can watch this video:

And to be fair and balanced, here’s a video expressing the opposite but more popular opinion:

As I have expressed in the rest of the article, I don’t subscribe to the techniques in the second video.  But, I think it’s very important to know they exist.

