“It depends” is the answer to every interesting question. But why?

I’ve heard it said that every interesting question in software engineering has an answer that says, “It depends”. What I’ve realized recently is that this applies to a lot of questions that I would have normally considered uninteresting. Because of this, it’s taken me longer than it should have to see that past solutions to problems aren’t working.

For example, a good rule of thumb is that you shouldn’t have to change production code solely to make it more testable. Making a private method public just because you’d like to test it makes people uneasy, and it’s usually a smell: there’s a hidden class waiting to be extracted, or you should be writing a higher-level test instead.

But why do we dislike changing production code just to make it testable? Well, public/private is a form of documentation. The former says, “There could be some code you can’t see that uses this.” This will slow you down or make you reconsider refactoring the code later. Worse, some other production code could end up calling the new public method when it really shouldn’t.

I, like most, take this rule for granted. But, “it depends”. If you’re working in a legacy code base with few tests and lots of Singletons, I have no qualms about adding a setInstanceFieldForTests method so I can start writing tests. In this context, I gain much more value than I lose by changing the production code solely for testability, because it makes the code base easier to understand and refactor. It’s the easiest way to be able to write integration tests around the code that uses the Singleton. But in a green field application, the same change can make the code base more difficult to understand and refactor.
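As a minimal sketch of what that test seam might look like, here is a hypothetical legacy Singleton (the class name and its contents are invented for illustration; only the setInstanceFieldForTests idea comes from the text above):

```java
// Hypothetical legacy Singleton with a test-only seam.
class AppConfig {
    private static AppConfig instance = new AppConfig();

    public static AppConfig getInstance() {
        return instance;
    }

    // Added solely for testability: lets an integration test swap in
    // a stub before exercising production code that calls getInstance().
    static void setInstanceFieldForTests(AppConfig replacement) {
        instance = replacement;
    }

    public String environment() {
        return "production";
    }
}
```

A test can then subclass the Singleton, override the behavior it cares about, and install the stub before exercising the code under test.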

Problems and their solutions rarely exist in a vacuum. There is usually a context where the solutions make sense and a context where they do more harm than good. I find it very difficult to recognize this context. I usually throw it out when I find a good solution to a problem and when I see a similar problem I try to apply it without recognizing how the previous context invalidates the solution in the second case.

I wonder if it’s human nature to be bad at this, or if it’s a skill one can improve at. I’d like to hear others’ techniques for deciding what to pick when the solution “depends”. I’d also like to hear your stories about situations you’ve experienced where “it depends”.

Intro to the Extreme Single Responsibility Principle

I’m going to talk about a technique I picked up that I’ve never seen done anywhere except for where I currently work. Because I’ve been programming professionally for over a decade and this is the first time I’ve seen this, I figure it must be relatively unfamiliar to others. This is a technique I’ve been calling “Extreme SRP”. But before I explain what the Extreme version of SRP is, I should explain what SRP is:

SRP stands for the Single Responsibility Principle. It means that a class should only have one reason to change. If you want to learn more, this link is a good reference.

Extreme SRP, surprisingly enough, is taking SRP to the extreme. Before I talk about what it is, I want to preface the description with a suggestion: Keep an open mind. When I first saw this technique my immediate thought was, “That’s not idiomatic”. But, I went with it to see how it’d turn out. In the end I had few complaints but I thought I would have many. It’s easy to think, “That looks weird so it must be bad” but unless you try it, you don’t really know if that’s true.

I’m going to describe Extreme SRP with a set of rules:

  1. Every class can only have one method.
  2. That method must be public.
  3. Every field of the class must be dependency injected.

Here’s an example class following this practice:
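(A minimal sketch, with all names invented, of what a class following these three rules might look like. ParseRequest and BuildResponse stand in for other single-method classes this one delegates to.)

```java
// A leaf: translates raw input into convenient data.
class ParseRequest {
    public String parse(String raw) {
        return raw.trim();
    }
}

// Another leaf: contains the "meat" of the feature.
class BuildResponse {
    public String build(String parsed) {
        return "Hello, " + parsed + "!";
    }
}

// A node following the three rules of "Extreme SRP".
class HandleRequest {
    // Rule 3: every field is dependency injected.
    private final ParseRequest parseRequest;
    private final BuildResponse buildResponse;

    HandleRequest(ParseRequest parseRequest, BuildResponse buildResponse) {
        this.parseRequest = parseRequest;
        this.buildResponse = buildResponse;
    }

    // Rules 1 and 2: the class's only method, and it is public.
    public String handle(String rawRequest) {
        return buildResponse.build(parseRequest.parse(rawRequest));
    }
}
```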

And that’s pretty much how all the code in your code base ends up looking when you practice “Extreme SRP”. This class is what I call a “node” because it’s only responsible for delegating to other nodes/leaves. Eventually it will delegate to a leaf. A leaf is a node that doesn’t delegate to other nodes. This is usually where the “meat” of your feature goes. The nodes are responsible for delegating to the leaves, translating inconvenient parameters from the outside world into convenient data that they prefer to work with.

As a result, your code base ends up looking like this:

[Figure: a tree of nodes, with leaves at the bottom]

In other words, your code becomes a tree data structure.

“Extreme SRP” has all the advantages of SRP:

  • Code complexity is reduced by being more explicit and straightforward
  • Loose coupling
  • Improved readability

The methods are very small and easy to understand. But if you’re looking for a specific leaf and you don’t know its name, you have to traverse the tree to get there. With traditional techniques, many of these classes would be inlined into a single method, so the code would be easier to find. This is one of the downsides you have to live with when you practice “Extreme SRP”.

As I said above, this code is not very idiomatic, at least for most object oriented languages. It’s more procedural/functional in nature. But because of the rule above that says every field must be dependency injected, the entire code base is extremely easy to test. The non-idiomatic code threw me off originally, but the ease of testability makes it worth it, in my opinion.
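To illustrate why the injected fields make testing so easy, here is a hypothetical node whose collaborator is an interface (all names invented): a test can hand it a stub with no mocking framework at all.

```java
// Hypothetical collaborator, expressed as an interface so tests
// can substitute any implementation.
interface FetchPrice {
    double fetch(String sku);
}

// A node whose single dependency is injected.
class ComputeTotal {
    private final FetchPrice fetchPrice;

    ComputeTotal(FetchPrice fetchPrice) {
        this.fetchPrice = fetchPrice;
    }

    public double compute(String sku, int quantity) {
        return fetchPrice.fetch(sku) * quantity;
    }
}
```

In a test, the real FetchPrice leaf can be replaced with a lambda, e.g. `new ComputeTotal(sku -> 2.5)`, and the node is tested in complete isolation.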

The way you go about creating a system like this from start to finish has a lot of advantages. You think about a node and what it directly delegates to instead of the whole tree at once. How you go about thinking this way is documented in another article I wrote.

There are no cyclical dependencies where A depends on B and B depends on A. This simplifies a lot in your code base and makes it easier to modularize your code. The code base requires minimal refactoring. Usually I can throw away a node and rewrite it instead of refactoring it into what I want. Or I simply have to connect one node to a different one, etc. It requires a lot less thought and it rarely feels necessary. Whereas using traditional object oriented techniques, refactoring feels built into the process and is the first and last step of adding a new feature.

This technique does create an explosion of classes, and you have to come up with names for all of them. It’s not always easy, but whenever I struggle with this I feel like there is a concept that needs to be named; I just can’t think of the name for it. It’s rarely a situation where I feel like the concept doesn’t exist.

Because of this class explosion, you have to be really good with your namespaces to organize your code. Otherwise, you’ll get lost and it’ll slow you down to browse your code base.

Other than that, I’m really happy with “Extreme SRP”. I recommend putting your biases aside, giving it a try on your next hobby project, and seeing how it turns out for you. I’d love to hear your feedback on how it went!

Prioritize first, estimate second

It’s not uncommon to hear a PM say, “I’d like the development team to estimate how long it will take to implement these features so I can use that to determine how to prioritize them”. Earlier in my career, I would have considered this completely reasonable. After all, if the PM expected a feature to take a few days and the developers estimate that it will take a few months, the feature may not be worth doing.

But now I consider this mentality to be a serious smell that should be investigated. If a PM expects their stories to be estimated at anywhere from days to months, the stories are not broken down into small enough pieces. The stories should be broken down into pieces so small that it would be surprising for any of them to span an iteration/sprint/etc.

Although splitting stories into manageable pieces is the PM’s responsibility, the PM is not a developer. They usually don’t know the code base intimately. For this reason, it usually makes sense to get a second opinion from a developer on whether the stories are reasonably sized before the development team points them. Sometimes a developer may know that the prioritization of a few stories needs to be changed for technical reasons, but the PM should have an overall sense of prioritization without getting any feedback from a developer first. Prioritization is the PM’s responsibility, not the developers’.

Once you assume that stories are going to be appropriately sized, it’s easy to see why “I want to estimate before I prioritize” is a smell. It means, “I don’t really know the most important thing we should be working on next, so I’ll choose the low hanging fruit”. It either means that stories are not broken down enough or that the PM doesn’t really know what’s important.

As a final note, if it’s safe to assume that the stories are going to be appropriately sized, a level of effort of a few days should not make the return on investment questionable. A feature that would provide mere days’ worth of return on investment should never have been considered a priority in the first place.

Step 1: Developers Agree to Deadlines

Imagine you wanted someone to build you a house and they told you that it would take a year to build. “Unfortunately”, you tell them, “I already told my family that we’d be living in our new home in 3 months”. Of course they tell you they can’t build it on such an aggressive timeline. But let’s say, hypothetically, you had the power to convince this person to agree to your deadline. Would you feel comfortable living in that house?

Imagine the amount of corners the builder would have to cut to meet your deadline. Building a house that way would probably break all sorts of laws. Yet, we do this on a regular basis in software: A manager tells developers about a deadline that they don’t believe they can meet and the development team has to scramble to try to meet it anyway.

Developers have to have a say in deadlines or you will never be able to build high quality software. This is step 1. I don’t use the phrase “step 1” lightly. I literally mean that this is the whole foundation that quality software is built on top of. You must tackle this first.

If developers don’t have a say in deadlines, any attempt at quality will be undermined by unrealistic schedules. Corners will have to be cut to meet these deadlines (AKA technical debt), but the developers will never have any time to go back and pay off this technical debt. Quality can only get worse in such an environment. No process, including Agile, can overcome this.

The only people that know how realistic a deadline is are the people that know the inner workings of the project: The developers. And if they aren’t involved in setting deadlines, an unrealistic deadline will be picked most of the time. That’s because, all things being equal, delivering a feature sooner is better than later.

Now you may be asking, what if the developers don’t agree with the deadline? At this point the managers and the developers should negotiate.

One thing to negotiate on is scope. e.g., “What if the building didn’t have AC by the deadline but we installed it later?” This often goes back and forth many times as the manager and the developer come to some agreement.

Another thing to negotiate on is the deadline itself. e.g., “Realistically, we’d probably complete that 2 weeks after the deadline. Is that ok?”

And third, quality can be negotiated on. e.g., “These windows are poor quality but they’ll save me a ton of time to install at first. Then we can go back and put some better quality windows in later.” Technical debt is an example of this. But quality can only be negotiated on if there is trust between management and the developers. The trust needs to be that the developers will be allowed to go back and put some better quality windows in later.

By the way, this shouldn’t be unquestioned trust. The manager should ask why the developers won’t meet the deadline. Maybe the developers will say, “Because it takes 5 weeks to lay a foundation for an apartment complex”. Then the manager could say, “We don’t need that. We only need the foundation for a single family home”. This communication is very important.

But, this whole system falls apart when developers have no say in deadlines. The end result is poor quality software every time.

Commit messages: Don’t just document “what”, document “why”

Imagine you’re working in a well tested project and you’re assigned a new bug:

GIVEN there's a button to send a notification
WHEN I click that button
THEN I receive a "button clicked" notification that has an "id" and "username"

Actual: The "button clicked" notification only has an "id".

You work on this bug and run all your tests. Unfortunately, you notice that a test you didn’t even know existed has failed. The test has a title like, “The button clicked notification should NOT have a username”.

Now you start to wonder which is right: the bug you worked on or the test that contradicts it. You check the commit history. The commit message says something like “Ensure button notifications don’t include username”. The person who wrote the original test either no longer works at the company or simply doesn’t remember why they wrote it.

This is not a common problem, but it happens enough to slow me down occasionally and make me question if I’m doing the right thing. It’s not enough to document “what” was worked on. The reason “why” the work was done needs to be documented, too. A good place to document “why” is in the commit message itself.

Ideally, the developer shouldn’t have to come up with the “why” themselves. It should come from the Product Manager and be known to the developer before they even start changing the code.

Where I work, we make this easy to follow by using a template for stories. It looks like this:

AS A _____ I WANT TO ______ SO THAT _______ //most important part

GIVEN _____ WHEN _____ THEN _____ //2nd most important part

I’ll go into the details of this template in another article, but the “SO THAT ____” is the most important part because it explains the “why”. If you can tie each change back to the “… SO THAT …”, you’ll have a much easier time when you need to figure out which of the two contradicting changes is correct.