The word “legacy” evokes mixed reactions from the IT and business crowds alike. For those who depend on legacy systems such as the zSeries and iSeries platforms, it is a way to deliver stable and reliable applications into production. Business stakeholders who may not know the intricacies of the technology often take the view: “Why fix it when it ain’t broken?”
On the other hand, for those who haven’t experienced the joys and pains of legacy systems, the legacy world will continue to defy their logic and assumptions, because their view is shaped by the open systems world, which has completely different characteristics: it is more malleable to change and easier to maintain. Hence there is a great divide between the legacy and new worlds. Add Agile and DevOps to the equation, with their lofty goals of increased release frequency and reduced lead times, and we end up with a divide that seems too far to bridge.
The likes of Gartner have come up with constructs such as two-speed delivery to help deal with this legacy challenge. In my view, there is a problem with the idea of dual-speed delivery because it is based on the wrong analogy to start with: that of allowing vehicles to move at different speeds depending on whether they are on a motorway or a city road. That analogy works well for traffic management but not for the complex IT architectures we see in the enterprise. The dependencies across systems in the enterprise landscape are too intricate and too tightly coupled to be captured by such binary logic. In my experience, many Agile and DevOps transformations hit a roadblock and never achieve the desired levels of productivity and effectiveness because of an oversimplified view of these complex dependencies across legacy and open systems.
The three biggest challenges that hold back a legacy DevOps transformation are a mindset problem, architectural constraints and tooling limitations.
The Mindset Problem
“There are no legacy systems, just legacy thinking.” That’s a great quote I heard recently from an engineer in a large organization trying to adopt DevOps at enterprise scale. The initial resistance from teams to adopting DevOps ways of working comes from a reluctance to change, born of the inertia of doing things a certain way for years, if not decades. The mindset problem is as much a problem for those outside the legacy world as for those inside it. The “outsiders” (typically leadership, management teams and consultants) tend to oversimplify the problem and, too often, rely on examples from the open systems world. This only alienates the incumbent legacy teams and makes them even more skeptical of DevOps-driven change. The outsiders need a genuine appreciation of the challenges of legacy architecture and need to work in collaboration with the legacy teams if they really want to transform their organization.
The Architecture Challenge
Legacy systems have certain characteristics: they are static, tightly coupled, monolithic blocks of procedural code. These systems were not built for modern Agile workflows that focus on incremental and iterative delivery. A significant majority of them are written to operate in batch mode. Testing such systems requires cycling through multiple business days, which limits the level of agility you can introduce in the organization. I am currently working with a legacy engineering team in the financial services space where you have to cycle through a minimum of 20 business days to simulate even 70 percent of the test scenarios. There are, however, a number of solution options to overcome the architectural challenges; they range from killing applications to migrating to modern platforms or re-engineering legacy code. Not surprisingly, the right approach depends very much on your context.
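To make the business-day cycling concrete, here is a minimal sketch of the idea in Python. It is not from any real system: the `run_batch` callback stands in for whatever nightly batch step your environment runs, and the calendar logic skips only weekends (real test environments would also account for holidays). The point is that advancing the simulated system date one business day at a time, and running the batch step for each date, is what makes a 20-day cycle so expensive to execute manually and why automating it matters.

```python
from datetime import date, timedelta

def next_business_day(d: date) -> date:
    """Return the next weekday (skips Saturday/Sunday; holidays omitted for brevity)."""
    d += timedelta(days=1)
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d += timedelta(days=1)
    return d

def cycle_business_days(start: date, days: int, run_batch):
    """Advance the simulated system date one business day at a time,
    invoking the (caller-supplied) nightly batch step for each date."""
    current = start
    results = []
    for _ in range(days):
        results.append(run_batch(current))
        current = next_business_day(current)
    return results

if __name__ == "__main__":
    # Stand-in batch step that just records each processing date.
    log = cycle_business_days(date(2018, 1, 1), 20, lambda d: d.isoformat())
    print(len(log), log[0], log[-1])  # 20 business days spans four calendar weeks
```

In practice the interesting work hides inside `run_batch` (submitting jobs, waiting for completion, capturing output), but even this skeleton shows why a 20-business-day scenario cannot be compressed without either virtualizing the system date or decoupling the batch dependencies.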
The Tooling Challenge
Tooling to support DevOps ways of delivery in the legacy world has always been a challenge. But that is fast changing now, thanks to a number of startups and even old-school IT product companies investing in this space. Take zSeries as an example: there are better IDEs (integrated development environments), such as IBM’s RDz, that can replace the traditional “green screen” code editors. These IDEs provide features such as better navigation, debugging, split views and search across the legacy code. One of the CIOs I worked with made it clear to his team that he wanted to get rid of green screens from the IT floor. Tools such as SmartBear Collaborator support a great peer review experience and help bring one of the original XP practices, pair programming, to the legacy world. Tools such as CA Endevor promise automated builds and deployments in the zSeries world. Similarly, there are tools that support test automation of mainframe applications, and tools for code coverage and code quality checks on legacy programming languages such as COBOL. XATester, from a Danish startup called Xact, looks like a promising tool for unit test automation.
While this write-up is not intended to be a compendium of all DevOps and engineering tools, the central message is that there are a number of DevOps tools emerging on the legacy landscape which can help your legacy delivery teams get on to the DevOps and Agile journey.
What is more important is for organizations to realize that many of these legacy systems will remain part of their IT landscape for the next foreseeable future and it is time to invest in these core systems. They may not look as appealing as the digital platforms, but they are fundamental for businesses and you cannot ignore them on your DevOps journey. DevOps in the legacy world is not going to be smooth, but it is certainly not a mission impossible!