I was working on some legacy code (“legacy” meaning it has no automated test coverage) and I really wanted to write some automated acceptance tests for it. The problem was that I could not think of a way to do that without refactoring the code first, and I did not want to refactor because “it was already working” and the refactoring could have introduced new bugs. I had a chicken-and-egg problem: I didn’t want to modify the code until I had automated tests, but I had to modify the code to write the automated tests. Which comes first?
I ended up using a trick that minimally changes the production code, actually improves the production code base, and allows me to write automated tests. It’s these two simple steps:
- Add and use a logger in the production code you want to test.
- Write an automated test that mocks the logger.
In the rest of the article I’ll give an example that was close to my situation. Here’s what the code looked like:
```java
package com.sleepeasysoftware.mockalogger;

/**
 * Created by Daniel Kaplan on behalf of Sleep Easy Software.
 */
public class AnnoyingToTest {

    //more code

    public <T> void deepInTheBowelsOfTheProject(T object) {
        //lots of logic here

        //lots of guard statements that might return before it reaches the next line

        RemoteApi.getInstance().save(object);  //I want to check that this is called here

        //lots more logic here
    }
}
```
I specifically wanted to check whether RemoteApi.getInstance().save(object) was called in this deepInTheBowelsOfTheProject method. Your reaction might be, “Why not just mock out the RemoteApi if that’s what you want to check?” I agree that would be the most straightforward way to do it, but it wouldn’t work in this situation: the entry point of my test ends up calling RemoteApi.getInstance().save(object) in many places, and I only care whether it was called in this method.
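For context, here’s a minimal sketch of what a singleton like RemoteApi typically looks like. I never show RemoteApi itself in this article, so the exact shape below is an assumption for illustration:

```java
// Hypothetical sketch of the RemoteApi singleton (assumed shape, not the actual source).
public class RemoteApi {

    private static final RemoteApi INSTANCE = new RemoteApi();

    private RemoteApi() {
    }

    public static RemoteApi getInstance() {
        return INSTANCE;
    }

    public <T> void save(T object) {
        // Sends the object to a remote service. Because many code paths end up
        // here, verifying save() on a mocked RemoteApi can't tell you *which*
        // call site triggered it.
    }
}
```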
I struggled with this for a while and came up with the solution I summarized above: I added a logger to this class and, for the purposes of my test, mocked it out. The class changed to look like this:
```java
package com.sleepeasysoftware.mockalogger;

import java.util.logging.Logger;

/**
 * Created by Daniel Kaplan on behalf of Sleep Easy Software.
 */
public class AnnoyingToTest {

    public static Logger logger = Logger.getLogger(AnnoyingToTest.class.getName());

    //more code

    public <T> void deepInTheBowelsOfTheProject(T object) {
        //lots of logic here

        //lots of guard statements that might return before it reaches the next line

        RemoteApi.getInstance().save(object);  //I want to check that this is called here
        logger.info("I called save in deepInTheBowelsOfTheProject with this object class: " + object.getClass());

        //lots more logic here
    }
}
```
With that logging statement in place, I could mock out the logger and run an acceptance test to check whether that line of code ran. There’s very little risk to this change, and it has the side effect of adding better logs for debugging in the future. Here’s how my test looks:
```java
package com.sleepeasysoftware.mockalogger;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.util.logging.Logger;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

public class ImportantIntegrationTest {

    @Before
    public void setUp() throws Exception {
        AnnoyingToTest.logger = mock(Logger.class);
    }

    @After
    public void tearDown() throws Exception {
        AnnoyingToTest.logger = Logger.getLogger(AnnoyingToTest.class.getName());  //put things back the way they are
    }

    @Test
    public void testObjectICareAboutWasSaved() throws Exception {
        Important important = new Important();
        important.run("some input");

        verify(AnnoyingToTest.logger).info("I called save in deepInTheBowelsOfTheProject with this object class: " + ObjectICareAbout.class);
    }
}
```
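One caveat: verifying the exact log string makes the test brittle, because any rewording of the message breaks it. If that bothers you, one way to loosen it (a sketch, not something I used in this project) is Mockito’s ArgumentCaptor, asserting only on the parts of the message that matter:

```java
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.verify;

import org.mockito.ArgumentCaptor;

// Inside the same ImportantIntegrationTest class as above.
@Test
public void testObjectICareAboutWasSaved_looserMatch() throws Exception {
    Important important = new Important();
    important.run("some input");

    // Capture whatever message was logged and assert on its substance
    // instead of matching the whole string verbatim.
    ArgumentCaptor<String> message = ArgumentCaptor.forClass(String.class);
    verify(AnnoyingToTest.logger).info(message.capture());
    assertTrue(message.getValue().contains("deepInTheBowelsOfTheProject"));
    assertTrue(message.getValue().contains(ObjectICareAbout.class.toString()));
}
```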
This is a very nontraditional approach to testing, but if the alternative adds a lot of risk and takes a lot of time, it’s a very pragmatic thing to do.
Non-traditional? We’ve been debugging and testing using printf for decades.
Hi Jon,
Thanks a lot for your comment, I really appreciate it. You’ve shown me that I articulated my thoughts poorly in this article and I need to rewrite some of it. This article isn’t about debugging or testing with printf. Like you said, that has been around for decades and there isn’t really anything unique about saying that. But that’s not what I’m trying to convey. I’m trying to say this: by mocking a logger, you can put an automated safety net around legacy code without having to refactor it first.
As a supplement, you can also use Stateprinter to generate your asserts for your resulting objects (object graph). This will not capture behaviour, but it enables you to easily capture the resulting state of your black box. Check it out at https://github.com/kbilsted/StatePrinter/blob/master/doc/AutomatingUnitTesting.md
Hi Kasper,
I skimmed the documentation, but I was having a little trouble understanding it. A lot of the time I could not tell whether it was pointing out a good way or a bad way to do things. Maybe that’s because this documentation seems to start on chapter 3 instead of chapter 1. Where does this documentation start?
Hi Daniel,
Many thanks for your comments. The documentation starts at https://github.com/kbilsted/StatePrinter
But you are right, it may be a bit confusing the way things are set up now. Maybe section 3.2 should go into a separate document on a critique of contemporary unit testing?
I’d say make that the first section. Start with “this is the problem”, then “this is the solution”. Not the other way around.
Hi, interesting post, thank you!
I wonder whether it would be more convenient to use something other than a logger: a kind of hand-created mock. Obviously it would take more time, but it would result in cleaner tests, since you wouldn’t have to assert against log messages but could simply verify whether the mock is in a given state, e.g. assertThat(someMock.someMethodWasDone()).
I’m not saying this would be better, only thinking aloud. :)
Cheers!
Hi Tomek,
I think you’re absolutely right that a traditional mock would be better. The problem is that for legacy code, it’s rare to find a safe way to put that mock into the production code.
Mocking a logger for a test doesn’t make a “good” test. But it provides a safety net: it can help you see that you didn’t break anything when you refactor. Once you refactor, you can use the mock you suggested and delete the logger test. But if you skip straight to the good mock, it’s too risky: you’ll have to change too much code that you don’t understand yet, and you don’t understand it yet because it’s a legacy system without tests.
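To make your suggestion concrete, here’s roughly where I’d expect that refactoring to end up. The Saver interface and RecordingSaver spy below are hypothetical names for illustration, not code from this project:

```java
// Hypothetical end state after refactoring: the save dependency is injected,
// so a test can use a hand-created spy instead of mocking the logger.
public interface Saver {
    <T> void save(T object);
}

// A hand-rolled spy in the spirit of your suggestion: it records the last
// object saved so a test can simply ask about the spy's state.
public class RecordingSaver implements Saver {

    private Object lastSaved;

    @Override
    public <T> void save(T object) {
        lastSaved = object;
    }

    public boolean saveWasCalledWith(Class<?> expectedClass) {
        // isInstance(null) is false, so this also covers "save was never called".
        return expectedClass.isInstance(lastSaved);
    }
}
```

A test could then construct the class under test with a RecordingSaver and assert saveWasCalledWith(ObjectICareAbout.class), with no log messages involved. But that refactoring only becomes safe once the logger-based test is already in place.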