It is my firm belief that you can be risk averse while constantly improving. I worked with someone a long time ago who said that they were “risk averse”. What this meant was that they didn’t like the way I would refactor code I had to maintain (after I put it under automated tests). They pushed back on my suggestions to improve deploy times because it would require changing things that “already worked”. Those deploys took up to thirty minutes and sometimes happened multiple times a day. That got me thinking, “are you risk averse, or are you improvement averse?”
I understand that people have been burned in the past. Maybe they changed something that seemed harmless but turned out to take down a production environment. There are two extreme ways you can react to this:
- “I’m never changing anything again unless it directly fixes the problem at hand”
- “I’m going to do whatever I can to prevent this error from ever happening again”

If you never change anything, you can’t practice the boy scout rule. You’ll look down on people who do. You’ll live in a constant state of fear because you’ll think sneezing will break the build. That means things never improve unless you rewrite them or they get so terrible that you have to refactor. Once you get to that point, the process is going to be so expensive it will practically put the business on hold.
On the other hand, those who work to prevent the error from ever happening again will be better off for it in the long run. They will have to come up with a design or a process to prevent the error, and this will provide incidental benefits to the company in ways you cannot predict today. For example, if you prevent the error from recurring by writing an automated test, that will make the code more testable in other ways in the future. Or maybe you’ll set up better monitoring for the production environment so you can notice these issues (and others like them) before they occur.
If you think you have to choose between a path of “risk averse” or “improvement”, go with “improvement”. Because even if improvement did increase risk (and it doesn’t), you will be better prepared to handle any risk that comes along if you have a long history of improvement.
For example, let’s say every time you deploy, a new bug is introduced (high risk), but you also write a lot of automated tests (improvement). Even though that’s a bad situation, you can write an automated test for the thing that breaks and never have to worry about it breaking again. On the other hand, when you choose to be “risk averse” but don’t improve anything, you’re going to hit the same regression bugs over and over again. I’ve seen the end result at a company that chose “risk averse”: fixing one bug makes two new ones appear.
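That “pin the bug with a test” pattern is simple enough to sketch. Everything here is hypothetical (the `parse_port` function and the empty-input bug are invented for illustration); the point is that once a deploy breaks on a specific case, you encode that exact case as a test so it can never silently regress.

```python
def parse_port(value):
    """Parse a port number from config text, defaulting to 8080.

    Hypothetical scenario: a deploy once crashed in production because
    this function called int("") on an empty config value. The fix below
    handles that case, and the tests pin it down.
    """
    if value is None or value.strip() == "":
        return 8080
    return int(value.strip())

# Regression test for the exact input that broke the deploy.
def test_empty_input_returns_default():
    assert parse_port("") == 8080
    assert parse_port(None) == 8080

# Sanity check that normal input still works after the fix.
def test_normal_input_still_parses():
    assert parse_port(" 9000 ") == 9000

test_empty_input_returns_default()
test_normal_input_still_parses()
```

The test costs a few minutes to write once; the alternative is rediscovering the same breakage on some future deploy.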
Change is inevitable. You’re going to have to maintain the code you’ve written in the past. That’s why being “risk averse” with a “don’t touch anything” attitude can’t work in the long run. Eventually, you’re going to have to maintain the code one way or another. Focus on improvement and you’ll have less risk as a side effect.