Looking at the current state of the industry, you might think that software development is entirely about the next revolution, the one new thing you must have to be successful. Today it is AI; yesterday it was Web3. Flashy trends come and go like fast fashion.
But when you hand a product to customers, their priorities are entirely different. They have something that works: a system they've integrated into their workflows, processes they've built around it, trust they've placed in its reliability. What they want, more than anything, is for it to keep working.
Stability as a Service
No surprises, no disruptions, no learning curves for things that used to be simple. The exciting new code that energises developers can be irrelevant or even threatening to users who simply need their tools to remain dependable. In mature software, progress is mostly the art of changing things without surprising anyone. In a way, the main task for enterprise software is to manage legacy code.
An important skill for engineers is knowing how to maintain and care for existing code: how to navigate around and through it when adding new features or improving existing ones. No breakage allowed, no unintentional behaviour changes. And let's not even mention performance regressions. As engineers, it's as if we're living in an ancient house. Hopefully it's built of stone, not cards. It has holes in the roof, a door that doesn't quite close, and an empty hearth. But the house is still standing, and you can and have to - or, more positively, choose to - live in it.
Now, if you stumble into such a codebase, where everything is barely working and the developers who wrote most of the code are long gone, you may proclaim, grandly and irrevocably, that the whole thing is bad and hopeless. Free from the burden of having anyone answer back, you may ask yourself: what were they thinking? Did they not know better? How could anyone even write code like this?
In my shoes (what it's like, to be me)

Maybe we can try to gain some understanding of their actions. Let's close our eyes and imagine a world from the past, so that we can put ourselves in the shoes of our predecessors. You're walking down the street, your feet expertly navigating the uneven cobblestones. The city buzzes with life, workers coming and going, a blacksmith hammering away nearby. Suddenly, a shutter bursts open above. A bucket emerges, leaving you just a heartbeat to glimpse its contents crashing down toward you… Perhaps we've gone too far back.
Well, there are no crucial differences between software written today and software written fifteen years ago. Cloud infrastructure may have changed how it is deployed, but not how it is written. Sure, there are changes if you look around. Some are somewhat cosmetic: new web frameworks, new programming languages, trends in how to write code (I still remember a phase where BNF-based grammars were being replaced by parser combinators; while those have their place, the old-fashioned way is still alive and kicking). Some offer ways to improve performance: new methods to interact with drivers, bypassing the Linux kernel - really impressive stuff, but very localised to one part of the stack. Some stem from new uses: GPU software has evolved significantly, driven by game development and LLM proliferation. Depending on your domain, these can dramatically change how you structure code.
Nothing ever changes
However, imperative programming - and single-threaded programming in particular - has remained largely unchanged for a long time. The code from these 'elders' isn't alien script; it is code that could have been written yesterday. They had similar educations, similar tools, and faced similar constraints. Legacy code, in most cases, is not obsolete code. Treating it as inadequate and belittling its authors would be a disservice to maintaining the codebase. They knew what they were doing, at least as well as you would if you had to write the feature yourself. They weren't missing some life-changing technology that would revolutionise software. They did their best - or at least what they were paid for, no need to always romanticise everything - with the constraints they had at the time.
Digging into the archives
With that said, the fact remains: there is code to maintain, and knowing how to code does not make that code clear. Making sense of what is there can be, for some, a dull and uncreative task. Faced with this unknown, it is the historians who are going to survive, even thrive: the ones who can see the legacy codebase as a mystery, an enigma with its clues and holes, a challenging puzzle to be solved.
In the quest for explanations, you can hope something remains from the old guard. Look for existing documentation everywhere you think it may be: comments in the code, commit messages in the git history, wiki articles, external documentation sites. If the company changed infrastructure, like moving to a new repository or a different wiki, look into the old one. Sometimes you may find video recordings containing hints. Other times, people you can still speak to will have insights. You want to collect everything that may be useful, while starting to organise and summarise, so that you are not overwhelmed with secondary information.
When you find an area with nothing, no more visible handholds, be it a full module or a single function, it is time to add your own. The goal is to pave the way for future engineers, so that they do not have to repeat the work you did. Document and comment everything you think could be useful: the architecture used, how the different parts interact, the choice of a third-party library, why code that looks unsafe at first glance is actually correct.
It can be a scary step; what if you were wrong and did not understand what the code is doing? Well, you can start by crafting a safety net, built from assertions and tests. When a pre-condition is implicit, make it explicit with an assert, and then make sure it is correct by stress testing that particular path. Defensive programming - the art of assertions - is a powerful tool. It ensures the program works as expected while documenting why it works that way. Again, it is nothing new; you can find it put to practical use in the 2006 rules for developing safety-critical code.
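A minimal sketch of this move, using an invented `merge_window` helper: the assert turns a pre-condition that the legacy callers only respected implicitly into an explicit, self-documenting check, and a quick randomised loop stress tests that the assumption actually holds.

```python
import random

def merge_window(samples: list[float], width: int) -> list[float]:
    """Average `samples` over consecutive non-overlapping windows.

    Implicit pre-condition made explicit: existing callers always pass
    a sample count that is a multiple of the window width. Asserting it
    documents the contract and loudly catches any new caller breaking it.
    """
    assert width > 0, "window width must be positive"
    assert len(samples) % width == 0, "samples must fill whole windows"
    return [
        sum(samples[i:i + width]) / width
        for i in range(0, len(samples), width)
    ]

# Stress test the asserted path with many random but valid inputs.
for _ in range(1_000):
    width = random.randint(1, 8)
    samples = [random.random() for _ in range(width * random.randint(0, 16))]
    assert len(merge_window(samples, width)) == len(samples) // width
```

If the stress test survives, you have evidence for your reading of the code; if an assert fires in production later, it points directly at the violated assumption instead of at a corrupted result three modules away.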
When all is done, if we don't want to repeat history and have another engineer cursing the same code two years from now, all of this needs to be shared; otherwise your work will simply get lost. Knowing that a documentation file exists somewhere is far less memorable than spending half an hour talking about your discoveries: when someone needs that information in the future, they're more likely to remember that help is available. I also find this useful during the discovery period, usually in informal ways. Discussions with others often reveal where your knowledge is still weak - because you cannot explain things clearly - and other points of view may surface new problems. When someone asks you "Have you thought about foo?" and you hesitate, you probably didn't think enough about that point. Depending on the importance of that piece of legacy code, knowing everything by heart may not be necessary, but that's where "real-world" priorities come into the picture.
Snap back to reality (ope, there goes gravity)
And that is one of the limits of all this: legacy code is often not your main job. Your title may not be Maintainer, yet people expect you - sometimes implicitly - to maintain some code. The calm, isolated walk through the code of the past clashes with today's responsibilities: producing something new on top of the existing foundation. In that reality, compromises must be made. You need to find an equilibrium between preserving what's there and setting it aside to build that new highway. The right choice is specific to each situation, and hard to unpack without diving into concrete examples.
That, however, is a story for another day.