Modernize Legacy Code in Production — Rebuild Your Airplane Midflight Without Crashing | by Shai Almog | May, 2022

Migrating old code is always a challenge, and agile practices are essential when taking on this endeavor

I spent over a decade as a consultant working for dozens of companies in many fields and pursuits. The variety in each code base is tremendous. This article will try to define general rules for modernizing legacy code that will hopefully apply to all, but it comes from the perspective of a Java developer.

When writing this, my main focus is on updating old Java 6-era style J2EE code to more modern Spring Boot/Jakarta EE code. However, I don't want to go into the code and will try to keep this generic. I discuss COBOL and similar legacy systems too. Most of the overarching guidelines should work for migrating any other type of codebase as well.

Rewriting a project isn't an immense challenge, mostly. But doing it while users are actively banging against the existing system, without any service disruption?

That requires a lot of planning and coordination.

I don't think we should update projects for the sake of the "latest and greatest." There's a reason common legacy systems like COBOL are still used. Valuable code doesn't lose its shine just because of age. There's a lot to be said for "code that works," especially if it was built by hundreds of developers decades ago. There's a lot of hidden business-logic knowledge modeled in there.

However, maintenance can often become the bottleneck. You might need to add features that make the process untenable. It's hard to find anything in millions of lines of code. The ability to leverage newer capabilities might be the final deciding factor. Thanks to newer frameworks and tools, it might be possible to create a similar project without the same complexities.

We shouldn't make the decision to overhaul existing production code lightly. You need to create a plan, evaluate the risks, and have a way to back out.

Other reasons include security, scalability, end of life of systems we rely on, a lack of skilled engineers, etc.

Usually, you shouldn't migrate just for better tooling, but better observability, orchestration, and the like are tremendous benefits.

Modernization gives you the opportunity to rethink the original system design. However, this is a risky proposition, as it makes it pretty easy to introduce subtle behavioral differences.

Before we head into preparations, there are a few deep challenges we need to review and mitigate.

Access to legacy source code

Sometimes, the source code of the legacy code base is no longer workable. This might mean we can't add even basic features/functionality to the original project. This can happen for many reasons (legal or technical) and makes migration harder. Unfamiliar code is an immense problem and makes the migration challenging, but possible.

It's very common to expose internal calls in the legacy system to enable a smooth migration. For example, we can provide fallback capabilities by checking against the legacy system. An old product I worked on had custom in-house authentication. To keep compatibility during the migration, we used a dedicated web service. If user authentication failed on the current server, the system checked against the old server to provide a "seamless" experience.
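A minimal sketch of this fallback pattern, assuming the new system keeps its own user store and can call the legacy auth service (the names and the simplified plain-text credentials here are hypothetical, not from the product described above):

```java
import java.util.Map;
import java.util.function.BiPredicate;

// Sketch: authenticate against the new system first; if the user is unknown,
// fall back to the legacy auth service and migrate the account on success.
public class FallbackAuthenticator {
    private final Map<String, String> newUserStore;        // user -> credential (simplified)
    private final BiPredicate<String, String> legacyCheck; // call into the legacy auth web service

    public FallbackAuthenticator(Map<String, String> newUserStore,
                                 BiPredicate<String, String> legacyCheck) {
        this.newUserStore = newUserStore;
        this.legacyCheck = legacyCheck;
    }

    public boolean authenticate(String user, String password) {
        String stored = newUserStore.get(user);
        if (stored != null) {
            return stored.equals(password);   // user already lives in the new system
        }
        if (legacyCheck.test(user, password)) {
            newUserStore.put(user, password); // migrate the account on first success
            return true;
        }
        return false;
    }
}
```

In a real system the credential would be hashed and the legacy check would be an HTTP call, but the shape of the fallback stays the same.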

This is important during the migration phase but can't always work. If we don't have access to the legacy code, tools such as scraping might be the only recourse for getting perfect backward compatibility during the migration period.

Sometimes, the source is no longer available or was lost. This makes preparation harder.

Inability to isolate the legacy system

In order to analyze the legacy system, we need the ability to run it in isolation so we can test it and verify its behaviors. This is a common and important practice, but it isn't always possible.

For example, a COBOL code base running on dedicated hardware or a dedicated operating system. It might be difficult to isolate such an environment.

This is probably the biggest problem/challenge you can face. Sometimes an external contractor with domain expertise can help here. If so, it's worth every penny!

Another workaround is to set up a tenant for testing. For example, if a system manages payroll, set up a fake employee for testing and perform the tasks discussed below against production. This is an enormous hazard and a problem, so this situation is far from ideal, and we should take it only if no other option exists.

Odd formats and custom stores

Some legacy systems might rely on deeply historical approaches to coding. A great example is COBOL. In it, numbers are stored based on their type and are closer to BCD (Java's BigDecimal is the closest analog). This isn't bad. For financial systems, this is actually the right way to go. But it might introduce incompatibilities when processing numeric data, which could prevent the systems from running in parallel.
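The kind of incompatibility this causes is easy to demonstrate: a new system naively using binary floating point will drift away from a BCD/decimal legacy system, even for trivial sums. A small illustration:

```java
import java.math.BigDecimal;

// Why numeric representation matters when comparing output of a BCD-based
// legacy system against a new one: binary floating point accumulates
// rounding error, decimal arithmetic does not.
public class DecimalDemo {
    // Summing 0.1 ten times in double arithmetic does NOT yield exactly 1.0.
    public static double floatSum() {
        double total = 0;
        for (int i = 0; i < 10; i++) {
            total += 0.1;   // 0.1 has no exact binary representation
        }
        return total;
    }

    // The same sum with BigDecimal is exact, matching decimal/BCD behavior.
    public static BigDecimal decimalSum() {
        BigDecimal total = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            total = total.add(new BigDecimal("0.1"));
        }
        return total;
    }
}
```

If the legacy side rounds like `decimalSum()` and the new side computes like `floatSum()`, parallel runs will disagree on cent-level values.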

Even worse, COBOL has a complex file storage solution that isn't a standard SQL database. Moving away from something like that (or even from some niche newer systems) can be challenging. Luckily, there are solutions, but they might limit the practicality of running both the legacy and new products in parallel.

Before we even consider an endeavor of this sort, we need to evaluate and prepare for the migration. The migration will be painful regardless of what you do, but this stage lets you shrink the size of the band-aid you need to pull off.

You need to follow many general rules and set-ups before undergoing a code migration. Each one of these is something you need to be deeply familiar with.

Feature extraction

When we have a long-running legacy system, it's almost impossible to keep track of every feature it has and the role it plays in the final product. There are documents, but they're hard to read and go through when reviewing. Issue trackers are great for follow-up, but they aren't great maps.

Finding the features in the system, and those that are "actually used," is problematic. Especially when we want to deal with minutiae. We want every small detail. This isn't always possible, but if you can use observability tools to indicate what's used, it will help a great deal. Migrating something that isn't used is frustrating, and we'd want to avoid that if we can.

This isn't always practical, as most observability tools that provide very fine-grained details are designed for newer platforms (e.g., Java, Python, Node, etc.). But if you have such a platform, like an old J2EE project, using a tool like Lightrun and placing a counter on a specific line can tell you what's used and what probably isn't. I discuss this further below.

I often use a spreadsheet where we list each feature and minor behavior. These spreadsheets can be enormous, and we might divide them by sub-module. This process can take weeks because there are many steps: going over the code, documentation, and usage. Then iterating with users of the application to verify that we didn't miss an important feature.

Cutting corners is easy at this stage. You might pay for it later. There were times I assigned this requirement to a junior software developer without properly reviewing the output. I ended up regretting that, as there were cases where we missed nuances within the documentation or code.

Compliance tests

This is the most important aspect of the migration process. While unit tests are good, compliance and integration tests are crucial for a migration.

We need the feature extraction for compliance. We need to go over every feature and behavior of the legacy system and write a generic test that verifies this behavior. This is important both to verify our understanding of the code and to make sure the documentation is correct.

Once we have compliance tests that verify the existing legacy system, we can use them to test the compatibility of the new codebase.

The fundamental challenge is writing code that you can run against two completely different systems. For example, adapting these tests might be difficult if you intend to change the user interface.

I'd suggest writing the tests using an external tool, maybe even in a different programming language. This encourages you to think about external interfaces instead of language- and platform-specific issues. It also helps in discovering "weird" issues, like minute differences in the HTTP protocol implementation between the new and legacy systems that lead to incompatibilities.

I'd also suggest using a completely separate "thin" adapter for the UI differences. The tests themselves must be identical when running against the legacy and the current codebase.
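The structure might look like the following sketch: the test body never changes, and only a thin adapter knows which deployment it is talking to. The order-related names here are hypothetical placeholders, not an API from the article's product:

```java
// Sketch: the same compliance test runs against both the legacy and the new
// system; only the thin adapter differs per deployment.
public class ComplianceTest {

    // Thin adapter: the only piece that knows which system it targets
    // (legacy UI/API vs. the new one).
    public interface SystemAdapter {
        String createOrder(String item, int quantity); // returns an order id
        int orderQuantity(String orderId);
    }

    // Identical test body for both systems: create an order, read it back,
    // and verify the behavior matches.
    public static void verifyOrderRoundTrip(SystemAdapter system) {
        String id = system.createOrder("widget", 3);
        if (system.orderQuantity(id) != 3) {
            throw new AssertionError("order quantity mismatch for " + id);
        }
    }
}
```

You would then run the suite twice, once with a `SystemAdapter` backed by the legacy endpoints and once with one backed by the new endpoints, and diff the results.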

The approach we take for test authoring is to open an issue in the issue tracker for every feature/behavior in the spreadsheet from the previous step. Once this is done, we color the spreadsheet row yellow.

Once we integrate a test and the issue is closed, we color the row green.

Notice that we still need to test components in isolation with unit tests. The compliance tests help verify compatibility. Unit tests check quality and also complete much faster, which is important for productivity.

Code coverage

Code coverage tools might not be available for your legacy system. However, if they are, you need to use them.

One of the best ways to verify that your compliance tests are extensive enough is through these tools. You need to do code reviews on every coverage report. We should validate every line or statement that isn't covered to make sure there's no hidden functionality that we missed.

Recording and backup

If it's possible, record network requests to the current server for testing. You can use a backup of the current database and the recorded requests to create an integration test of "real-world usage" for the new version. Use live data as much as possible during development to prevent surprises in production.
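In its simplest form, record-and-replay means capturing each request/response pair at the current server and replaying the requests against the new version, flagging any response that differs. A minimal in-memory sketch of that idea (real traffic capture would sit at a proxy or servlet filter, and this stores exchanges as plain strings for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Sketch of record-and-replay: capture requests hitting the current server,
// then replay them against the new version and report mismatched responses.
public class RequestRecorder {
    public record Exchange(String request, String response) {}

    private final List<Exchange> recorded = new ArrayList<>();

    // Wraps the live handler so every request/response pair is recorded.
    public Function<String, String> recording(Function<String, String> liveHandler) {
        return request -> {
            String response = liveHandler.apply(request);
            recorded.add(new Exchange(request, response));
            return response;
        };
    }

    // Replays captured traffic against a candidate handler; returns the
    // requests whose responses differ from the recorded ones.
    public List<String> replayMismatches(Function<String, String> candidate) {
        List<String> mismatches = new ArrayList<>();
        for (Exchange e : recorded) {
            if (!e.response().equals(candidate.apply(e.request()))) {
                mismatches.add(e.request());
            }
        }
        return mismatches;
    }
}
```

An empty mismatch list after replaying a day's traffic is a strong compatibility signal; a non-empty one gives you concrete failing inputs to debug.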

This might not be tenable. Your live database might be access restricted, or it might be too big for use during development. There are obviously privacy and security issues related to recording network traffic, so this is only applicable when it can actually be done.


One of the great things about migrating an existing project is that we have a perfect sense of scale. We know the traffic. We know the volume of data, and we understand the business constraints.

What we don't know is whether the new system can handle the peak load throughput we require. We need to extract these details and create stress tests for the critical portions of the system. Ideally, we want to verify performance and compare it to the legacy system, to make sure we aren't going backwards in terms of performance.
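A dedicated load-testing tool is the right answer for real stress tests, but the core measurement is simple enough to sketch: fire a known number of concurrent calls at an operation and compute completed calls per second, then run the same harness against both systems and compare:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Minimal throughput-measurement sketch: run an operation from several
// threads and report completed calls per second. Real stress testing
// belongs in a proper load-testing tool; this only shows the idea.
public class StressTest {
    public static double throughput(Runnable operation, int threads, int callsPerThread)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicLong completed = new AtomicLong();
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                for (int i = 0; i < callsPerThread; i++) {
                    operation.run();
                    completed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
        return completed.get() / seconds;   // calls per second
    }
}
```

Running this against the legacy endpoint first gives you the baseline number the new system must meet or beat.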

Which parts should we migrate, and in what way?

What should we target first, and how should we prioritize this work?

Authentication and authorization

Many older systems embed the authorization modules as part of a monolithic process. This will make your migration challenging regardless of the strategy you take. Migration is also a great opportunity to refresh these old concepts and introduce a more secure/scalable approach to authorization.

A common strategy in cases like this is to send users to "sign up again" or "migrate their accounts" when they need to use the new system. This is a tedious process for users and will trigger a lot of support issues, e.g., "I tried password reset, and it didn't work."

These sorts of failures can happen when a user in the old system doesn't perform the migration and tries to reset the password on the new system. There are workarounds, such as explicitly detecting a specific case like this and redirecting to the "migration process" seamlessly. But friction is to be expected at this point.

Still, the benefit of separating authentication and authorization will help with future migrations and modularity. User details in the shared database are typically one of the hardest things to migrate.


When dealing with the legacy system, we can implement the new version on top of the existing database. This is a common approach and has some advantages:

  • Instant migration — this is probably the biggest advantage. All the data is already in the new system with zero downtime
  • Simplicity — this is probably one of the easiest approaches to migration, and you can use existing "real-world" data to test the new system before going live

There are also a few serious disadvantages:

  • Data pollution — the new system might insert problematic data and break the legacy system, making reverting impossible. If you intend to provide a staged migration where both the old and new systems run in parallel, this might be an issue
  • Cache issues — if both systems run in parallel, caching might cause them to behave inconsistently
  • Persisting limits — this carries limitations of the old system over into the new system

If the storage system is modern enough and powerful enough, migrating the data this way makes sense. It removes, or at least postpones, a problematic part of the migration process.


The following three tips are at the root of application performance. If you get them right, your apps will be fast:

  1. Caching
  2. Caching
  3. Caching

That's it. Yet very few developers use enough caching. That's because proper caching can be very complicated and can break the single source of knowledge principle. It also makes migrations challenging, as mentioned in the section above.

Disabling caching during the migration might not be a realistic option, but reducing retention might mitigate some of the issues.
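One way to make "reducing retention" operationally cheap is to make the TTL a runtime knob rather than a constant. A small sketch (assuming a simple read-through cache; production code would use a library like Caffeine instead):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a read-through cache whose retention can be dialed down during a
// migration window, so the legacy and new systems observe fresh data sooner.
public class TtlCache<K, V> {
    private record Entry<V>(V value, long expiresAt) {}

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private volatile long ttlMillis;

    public TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // During migration, shrink the TTL instead of disabling caching outright.
    public void setTtlMillis(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public V get(K key, Function<K, V> loader) {
        Entry<V> e = entries.get(key);
        if (e != null && e.expiresAt() > System.currentTimeMillis()) {
            return e.value();   // still fresh
        }
        V value = loader.apply(key);
        entries.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
        return value;
    }
}
```

Flipping `setTtlMillis` to a small value for the migration window, then restoring it afterwards, limits how long the two systems can disagree without giving up caching entirely.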

There are several ways to approach a large-scale migration. We can look at the "big picture" of a migration, e.g., monolith to microservices. But more often than not, there are more nuanced distinctions during the process.

I'll skip the obvious "full rewrite," where we instantly replace the old product with the new one. I think it's pretty self-explanatory, and we all understand the risks/implications.

Module by Module

If you can pick this strategy and slowly replace individual pieces of the legacy code with new modules, then this is the ideal way to go. This is also one of the biggest selling points behind microservices.

This approach can work well if there's still a team that manages and updates the legacy code. If one doesn't exist, you might have a serious problem with this approach.

Concurrent deployment

This can work for a shared database deployment. We can deploy the new product to a separate server, with both products using the same database as mentioned above. This approach has many challenges, but I pick it often, as it's probably the simplest one to start with.

Since the old product is still available, there's a mitigation workaround for existing users. It's often advisable to plan downtime for the legacy servers to force existing users to migrate. Otherwise, in this scenario, you might end up with users who refuse to move to the new product.

Hidden deployment

In this strategy, we hide the existing product from the public and set up the new system in its place. In order to ease migration, the new product queries the old product for missing information.

For example, if a user tries to log in and isn't registered in the system, the code can query the legacy system to migrate the user seamlessly. This is challenging and, ideally, requires some modifications to the legacy code.
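The lookup side of this strategy can be sketched as a store that lazily copies records from the legacy product on first access (the user-record-as-string representation here is a deliberate simplification):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of the hidden-deployment lookup: the new system is the only one the
// public sees; records missing from it are pulled from the legacy product on
// demand and migrated in place.
public class LazyMigratingStore {
    private final Map<String, String> newStore = new ConcurrentHashMap<>();
    private final Function<String, Optional<String>> legacyLookup;

    public LazyMigratingStore(Function<String, Optional<String>> legacyLookup) {
        this.legacyLookup = legacyLookup;
    }

    public Optional<String> find(String userId) {
        String local = newStore.get(userId);
        if (local != null) {
            return Optional.of(local);   // already migrated
        }
        // Not migrated yet: ask the legacy system and copy the record over.
        Optional<String> legacy = legacyLookup.apply(userId);
        legacy.ifPresent(record -> newStore.put(userId, record));
        return legacy;
    }

    public int migratedCount() {
        return newStore.size();
    }
}
```

Over time, active users migrate themselves through normal usage, and only dormant records remain to be bulk-migrated (or retired) later.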

The big benefit is that we can migrate the database while maintaining compatibility, and without moving all the data in one fell swoop.

A significant downside is that this might perpetuate the legacy code's existence. It might work against our development goals as a result.

You finished writing the code. We're ready to pull the trigger and do the migration. Now we need to inform the users that the migration is going to take place. You don't want an angry customer complaining that something suddenly stopped working.


If possible, perform a dry run and prepare a script for the migration process. When I say "a script," I don't mean code. I mean a script of responsibilities and tasks that need to be performed.

We need to verify that everything works as the migration completes. If something is broken, there needs to be a script to undo everything. You're better off retreating and redeploying another day. I'd rather have a migration that fails early, one we can walk back from, than have something half-baked in production.


In my opinion, you should use a smaller team for the actual deployment of the migrated software. Too many people can create confusion. You need the following personnel on board:

  • IT/OPS — to handle the deployment and revert it if necessary
  • Support — to field user questions and issues, and raise flags if a user reports a critical error
  • Developers — to figure out whether there are deployment issues related to the code
  • Manager — we need someone with instant decision-making authority. No one wants to pull a deployment. We need someone who understands what's at stake for the company

There's a tendency to make a quick code fix to get the migration through. This works OK for smaller startups, and I'm quite guilty of it myself. But if you're working at scale, there's no way to do it. A code change made on the spot can't pass the tests and might introduce terrible problems. It's probably a bad idea.


The axiom "don't deploy on a Friday" might be a mistake in this case. I find Fridays are a great migration period when I'm willing to sacrifice a weekend. Obviously, I'm not advocating forcing people to work the weekend. But if there's interest in doing this (in exchange for vacation time), low-traffic days are ideal for making major changes.

If you work across multiple time zones, developers in the least active time zone might be best suited to handle the migration. I'd suggest having teams in all time zones to keep track of any possible fallout.

Agility in these situations is crucial. Responding to changes quickly can make the difference between reverting a deployment and soldiering on.

Staged Rollout

We can stage our releases with small updates and push the update to a subset of users. Unfortunately, I find this more of a hindrance when making a major change. The source of errors becomes harder to distinguish when both systems are running. Both systems have to run concurrently, and it might cause additional friction.

A couple of weeks passed, things calmed down, and the migration worked. Eventually.

Now what?

Retirement plan

As part of the migration, we brought over a large set of features from the legacy system. We probably need some of them, while others might not be necessary. After finishing the deployment, we need to decide on the retirement plan. Which features that came from the legacy system should be retired, and how?

We can easily see whether a specific method is used or unused in the code. But are the users using a specific line of code? A specific feature?

For that, we have observability.

We can go back to the feature extraction spreadsheet and review every potential feature. Then use observability systems to see how many users actually invoke each one. We can easily do that with tools like Lightrun by placing a counter metric in the code (you can download it for free here).

Based on that information, we can start narrowing the scope of features exposed by the product. I discussed this before, so this might be less applicable if the functionality was already verified as used in the legacy system.

Even more important is the retirement of the running legacy system. If you chose a migration path in which the legacy implementation is still running, this is the time to decide when to pull the plug. Besides the costs, the security and maintenance issues make keeping it running impractical in the long run. A common strategy is to shut the legacy system down periodically for an hour to detect dependencies/usage we might not be aware of.

Tools such as network monitors can also help gauge the level of usage. If you have the ability to edit the legacy system, or to place a proxy in front of it, this is the time to collect data about its usage. Detect the users that still rely on it and plan the email campaign/process for moving them on.

Use tooling to avoid future legacy

A modern system can enjoy many of the newer capabilities at our disposal. CI/CD processes include sophisticated linters that detect security issues and bugs, performing reviews far more thorough than their human counterparts. A code quality tool can make a big difference to the maintainability of a project.

Your product needs to leverage these new tools so it won't deteriorate back to legacy-code status. Security patches get delivered "seamlessly" as pull requests. Changes get implicit reviews that eliminate common mistakes. This enables easier long-term maintenance.

Maintaining the compliance testing

After the migration process, people often discard the compliance tests. It makes sense to convert them to integration tests if possible/necessary, but if you already have integration tests, they might be redundant and harder to maintain than your standard testing.

The same is true for the feature extraction spreadsheet. It isn't something that's maintainable and is only a tool for the migration period. Once we're done with that, we should discard it and stop treating it as authoritative.

Migrating old code is always a challenge, and agile practices are essential when taking on this endeavor. There are so many pitfalls in the process and so many points of failure. This is especially true when the system is in production and the migration is essential. I hope this list of tips and approaches will help guide your development efforts.

I think the pain in this process is unavoidable. So is some failure. Our engineering teams need to be agile and responsive in such situations: detect potential issues and address them quickly during the process. There's a lot more I could say about this, but I want to keep it general enough that it applies to a wide range of circumstances.

Want to Connect? If you have thoughts on this, reach out to me on Twitter (@debugagent) and let me know.
