Mainframe Innovation, Reimagined
Until recently, the mainframe environment at most organizations was treated like a black box: executives knew it accomplished significant work and consumed significant budget, but they preferred to leave it to its own devices. Dramatic changes are afoot, however. Consider three snapshots of the mainframe environment at different points in time:
The mainframe development staff is extremely stable: turnover is rare and new hires come with the esoteric mainframe skillset. People are assigned to specific applications and may very well have been with that application for decades, possibly even as the original author.
Mainframe work generally falls into three categories: ad-hoc maintenance when bugs occur; planned maintenance to support new IBM subsystems or new regulations; and minor development to support distributed groups that need access to mainframe data. A substantial share of this work, often a majority, is farmed out to overseas developers under the supervision of corporate managers. Expectations on the mainframe development staff are low. That is, until …
The mainframe workforce is in flux. The entire mainframe staff is approaching retirement at roughly the same time, and the corporation has little or no control over when, and who, retires. Knowledge of the intellectual property embodied in the mainframe applications vanishes with each retirement.
Organizations find it difficult, if not impossible, to hire replacements at an adequate pace. The entire mainframe industry has similar demographics, and the poor quality of code from overseas is hurting the business. Schools are not producing mainframe developers in adequate numbers, and talented younger programmers have no affinity for mainframe work. But every crisis is a combination of danger and opportunity, and this situation leads to …
There is no longer a specific mainframe development staff. The entire development staff is ready, willing and able to work on mainframe tasks. The onus is on the development tools to provide both a consistent experience for the programmer regardless of the platform and, in the case of the mainframe, application understanding, serving as a virtual mentor for programmers coming into complex applications from the cold. This setup provides the maximum flexibility for the organization, allowing it to maintain a consistent cadence of work and to set reliable timetables for new features, regardless of the platform(s) involved.
Why does this matter?
Organizations have their own inertia; it often takes an outside impetus to drive change. One response to this tension has been an organizational theory called bimodal IT, in which an organization innovates at two different speeds: slow and stability-focused for the mainframe, fast and experimental for distributed systems.
Of course, that theory presupposes that your competition is also willing to move at the same pace. And who is your competition? They might be well known today, but it’s just as likely that two guys in a garage are building your competition as we speak. The theory also ignores the demographic challenges facing today’s mainframe workforce.
So what if we started from a blank slate? Re-read the three snapshots above. Which represents the healthiest ecosystem? Which places you in the best possible position to accomplish the innovation today’s competitive marketplace demands? And which leaves you open to new innovation, perhaps even leveraging today’s high-powered mainframe hardware?
Is there any other option?
By James Liebert