Broken code gets fixed; poorly-designed, hard-to-maintain, not-quite-bad-enough-to-refactor code is left alone. Modifying it is like placing a rusty fork a centimeter away from your eye – impossible to ignore, emotionally painful to endure, and something you try to make end as quickly as possible. The code is harder to understand, easier to break, more frustrating to modify, and everything about it takes far more time than it should. But the cost to refactor was initially deemed too high, and so it remains, like a broken bottle on the beach.
Tools that report on data incorrectly or break the schema get put out to pasture or fixed; generally, though, internal tools are given short shrift (as they aren’t “customer-facing”), designed by engineers (not designers), and do the right thing through an obscure and overly complicated interface. They frequently run on slow hardware, and can also be non-performant due to hasty programming. Although users can learn how to use them, and even get used to them, this imposes a tax on every interaction, in terms of time, insight, and happiness.
Processes that fail to deliver results are thrown away or revised; but the extra time, effort, or frustration caused by a not-quite-bad-enough process is invisible to everyone outside it, and frequently even to those who have to endure it. When you join a company, you take a lot at face value – you want to get your sea legs before rocking the boat, and accepting the existing process is a natural part of that. You don’t want to come into an organization and immediately start demanding that they change their coding standard, merge process, QA pipeline, etc. But once you’ve habituated to a new environment, it becomes harder to see what’s wrong with it, and easier to accept process pain as normal.
If you don’t know how to do something, you learn; if you know a bad way to do it, and the more effective/efficient way would take time to learn, the most common path is to use the existing method instead of investing the additional time to learn a new skill. If you’re only doing something once, this is fine. If you have to do it many times, knowing the crappy method will quickly start costing you time and unnecessary effort. John Cook has a good post on this called bicycle skills, and xkcd has a handy graph to illustrate the point.
Tradeoffs
It isn’t always possible or desirable to do the right thing. Hence “worse is better,” “done is better than perfect,” and “perfect is the enemy of good.” Shipping is sometimes the most important feature, and the landscape is littered with the wreckage of companies that descended into endless spirals of refactoring and tweaking, never shipping anything. <cough>Netscape 4.0</cough>
Of course, too much of a bad thing can be deadly. Technical debt, inefficient processes, and bad tools are all going to add sand to the gears of your team and/or company. As an example, when I was in the video game industry, tools were frequently the last thing to be prioritized on a project, which led to the designers and artists being frustrated and under-productive for a significant amount of time. Putting the tools first almost always had a strong positive effect, both for productivity and morale, which of course led to a better end-product.
Unfortunately, there’s no “right” answer. When you’re small, everyone has to be a star, your processes have to be maximally effective, and you have to make the right tradeoffs for code, tools, and skills. The bigger you get, the more you can absorb frictional losses due to things that aren’t quite bad enough. It’s tempting to take a hard line on quality, but sometimes you just need to ship the product. Or you need a tool, and something is better than nothing.
There aren’t enough hours in the day to be constantly evaluating everything, all the time. Knowing what needs to be fixed, what can be left alone, and what can’t be allowed to get into a bad state in the first place, is what separates the great from the rest. It’s easy to habituate to a minor pain, hard to view it with new eyes, and extremely difficult to know that this specific pain, of all the ones you feel, is a key problem that needs to be addressed. Being able to figure this out is one of the most important things you can do in an organization, and one of your most powerful points of leverage.
A good way to think about fixing the “crappy code” is to figure out the return on investment. For example, if every C++ source file relies on a frequently changing “config.h” – forcing frequent 20-minute rebuilds – you can figure out how quickly your refactoring will pay for itself in recovered productivity when you remove that file and use runtime configuration instead.
It’s not an exact science, but getting a ballpark figure will help you decide what’s worth doing and what’s not.
Similarly, when you add new feature “C” to codebase “A”, it’s like planning a route: refactoring code “A” to “B” and then building “C” may be more efficient than trying to take the shortest path to “C” (“B” gets you on the freeway). This is where knowing the scope of your project helps in the first place, because maybe you can avoid “A” altogether. An awesome programmer would even plan for “C”, “D”, and “E” if possible, without over-engineering in the first place.
Turning awful code into beautiful code can be very satisfying.
Often, the awful code masks a broader design or architectural flaw that is the real problem. Fixing that can yield many other benefits – e.g., functions that were previously impossible or costly become easy, fast, and simple.