IBM's System/360 mainframe will be fifty years old next year. I was in grade school when it was announced in April of 1964, and my wife was just an infant. I wasn’t aware of IBM’s game-changing announcement and I didn’t care, but little did I know how important the mainframe would be to my career. I started working with mainframes after graduating from York University (Toronto) with a History degree in 1981. At the time, IBM Canada did not require any formal computer science education; rather, they trained on the job. We used punch cards for payroll, and programmers shared 3270 green screens. You knew you had it made when you had a 3279 color monitor on your desk and didn’t have to share it.
I have held many positions in the past thirty-two years, including production control, internal technical support, programming, field technical support and product management. But the most fun and interesting roles I’ve had are in the area of mainframe application performance. And I can say unequivocally that, after fifty years, nothing has changed.
When the mainframe was new, programmers were tasked with making applications available to users as quickly as possible. Performance didn’t matter that much, and programmers inadvertently introduced performance defects into their applications. Simple things like allocating datasets with inefficient block sizes and insufficient buffering had a huge negative impact on I/O subsystem performance. Opening and closing files too often and placing date routines inside a loop (how many times do you need to get the date in a batch job?) resulted in excessive CPU consumption. Certain COBOL constructs, like the INSPECT verb, proved too expensive. Over time, application performance became more and more important, as end users began to demand faster response times and capacity planners watched MIPS growth with a keen eye to saving money. As a result, the concept of application performance management (APM) was born.
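The date-routine-in-a-loop defect described above isn't specific to batch COBOL; it shows up in any language. A minimal Python sketch of the anti-pattern and its fix (the function and record names are hypothetical, for illustration only):

```python
import datetime

def stamp_records_slow(records):
    """Anti-pattern: call the date routine once per record, just like
    putting a date routine inside a batch job's main loop."""
    return [(rec, datetime.date.today().isoformat()) for rec in records]

def stamp_records_fast(records):
    """Fix: a batch job's run date doesn't change mid-run, so fetch it
    once before the loop and reuse it for every record."""
    run_date = datetime.date.today().isoformat()
    return [(rec, run_date) for rec in records]
```

The output is identical either way; only the CPU cost differs, and on a job processing millions of records that difference is exactly the kind of excess consumption the early tuners hunted down.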
Through trial and error, the performance defects were corrected. Programmers shared their insights with their less-experienced comrades. But just when we thought that structured programming and code reviews would solve our problems, along came DB2. I remember hearing a presenter at an IBM performance conference years ago state: “If you want to make IMS run faster, disconnect it from DB2.” Over the years DB2 has become much faster and more efficient, but SQL still drives the workload, and if the SQL is poorly coded, performance will suffer.
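The column doesn't show any SQL, but one classic example of poorly coded SQL is the non-sargable predicate: wrapping an indexed column in a function so the optimizer cannot use the index. A sketch using SQLite rather than DB2 (the table and index names are made up, but the optimizer principle carries over):

```python
import sqlite3

# In-memory database with an indexed date column (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, order_date TEXT)")
conn.execute("CREATE INDEX idx_date ON orders(order_date)")

# Non-sargable: the function applied to the column forces a full scan.
bad = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM orders WHERE substr(order_date, 1, 4) = '2013'"
).fetchall()

# Sargable: a range predicate on the bare column can use the index.
good = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM orders WHERE order_date >= '2013-01-01' "
    "AND order_date < '2014-01-01'"
).fetchall()

# The first plan reports a SCAN; the second reports a SEARCH on the index.
```

Both queries return the same rows; only the access path differs, which is why badly coded SQL can hide in plain sight until the workload grows.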
Applications in 2013 are much more complicated than in 1964 — and now the end users are consumers, not employees of the company for which the developers work. Finding performance problems today is like finding a needle in the proverbial haystack. We need software tools to help identify the root cause of performance issues, which can occur anywhere from the browser to the database in the glass house.
A wise man once said that those who don’t study history (my major, remember?) are doomed to repeat it. Just as performance tuning is becoming more critical but also more complicated, the first mainframers (a.k.a. Boomers) are beginning to retire in large numbers, taking their experience and knowledge with them. The IT industry is trying to attract a new generation of programmers, and these programmers expect graphical user interfaces and programming frameworks. Unfortunately, even when you use a GUI and a framework, the old mistakes are still lurking. At Compuware we still see poorly chosen block sizes and buffer settings, date routines in loops, and everything else that Murphy predicted in his famous law.
At this critical juncture in the ongoing life of the mainframe, our industry needs best practices, techniques that can be used and shared with the next generation of practitioners. More on that next time.