I’m sure most of you remember what it was like working in IT during the late 90s. The buzz of Y2K was everywhere. While some prepared for the end of the world and for planes to fall from the sky, software vendors and IT organizations worked tirelessly to ensure that come the new millennium, applications would continue to correctly calculate things like social security payments, insurance claims, and credit card bills. And this work began well in advance of January 1, 2000.
However, to my dismay, I still hear people make chiding remarks such as, “Well, that Y2K thing was never a problem.” While those who thought the world would end were certainly wrong, those who thought a 2-digit year field would incorrectly process a 4-digit year were right.
It “wasn’t a problem” because IT leaders understood the problem and took action to prevent it from becoming a major issue. A bazillion lines of code were analyzed, modified, and tested to make sure “00,” for example, wasn’t processed as 1900 instead of 2000.
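For readers who never touched that remediation work, here is a minimal sketch of the “windowing” technique many shops used to expand two-digit years. The pivot value of 50 and the function names are illustrative assumptions for this sketch; real systems picked their own cutoffs based on the dates in their data.

```python
# Illustrative sketch of two-digit-year "windowing" (pivot is an assumption).
PIVOT = 50

def naive_year(two_digit_year: int) -> int:
    """Unpatched logic: always assume the 1900s -- the Y2K bug."""
    return 1900 + two_digit_year       # "00" becomes 1900

def windowed_year(two_digit_year: int) -> int:
    """Patched logic: values below the pivot map to the 2000s."""
    if two_digit_year >= PIVOT:
        return 1900 + two_digit_year   # e.g. 75 -> 1975
    return 2000 + two_digit_year       # e.g. 00 -> 2000
```

So under the naive logic `naive_year(0)` yields 1900, while the windowed version maps the same “00” to 2000 — the difference all that analysis, modification, and testing was buying.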
I was reminded of Y2K as I was reviewing results from a recent survey of CIOs. Vanson Bourne gathered data on their views of the imminent mainframe skills shortage and what steps were being taken to prepare for it. Not surprisingly, 71% of CIOs are worried about the shortage. What was surprising was that only 46% are doing something about it.
This raises the question: why was IT so proactive in preparing for Y2K, yet so many organizations seem to be waiting until half their z/OS/COBOL/DB2/IMS-knowledgeable staff is gone before doing something?
There are certainly differences between the two issues that could provide some explanation:
- Y2K had a specific deadline; the skills drain doesn’t. The oldest baby boomers, age 66 today, are just starting to retire, and the trend will continue for the next 20 years or so. Maybe IT believes the drain will be slow enough that they’ll have time to respond.
- The approaching new millennium and the “Y2K bug” had the world’s attention. Hence, businesses put enormous (some would argue too much) pressure on IT organizations to make sure application failures didn’t result in massive meltdowns of critical systems. Maybe there just isn’t enough pressure on IT from the executive level.
- Budgets were very different. Analysts put the total cost of Y2K remediation at $134 billion. That kind of money doesn’t just float around these days.
Here are some other factors that may be impacting the lack of planning:
- Other skills taking priority – Perhaps recruiting departments are overwhelmed looking for expertise in areas such as virtualization, network administration, and security.
- Taking workers for granted – Companies may believe that since many of their mainframe-savvy employees have worked for them for years, they’re likely to work past 65.
- Underestimating skill sets – Some IT managers may believe “parts are parts,” thinking they can just take a developer from here and put him or her over there.
- Underestimating the role of z/OS applications – In a recent conversation with an industry analyst about the role of the mainframe, he said, “There is a global misunderstanding of how computers work.”
- “If it ain’t broke” mentality – Perhaps businesses and their IT leaders simply think they should wait until something breaks.
While I don’t know what’s specifically causing some to not take action, I do know that mainframe applications are humming along, carrying the load of billions of transactions a day. And not having a plan in place to replace employees who have the knowledge to maintain, integrate and protect those applications seems especially risky. While the world won’t end and planes won’t (hopefully) fall from the sky, waiting until the problem is staring you in the face could mean missed delivery dates, poor service quality, and a huge increase in costs.
We’re lucky that the new millennium hit when the baby boomers were in full force. Can you imagine what we’d go through if the new millennium were going to hit in, say, 2015? Take cover!