CPU Time: Think Outside the Toolbox
It’s inevitable. As a sysprog, you run a complex analysis job in production and suddenly realize that your batch job is running forever and eating up CPU time. Higher-priority work will most likely still get served first, but you don’t want anything running out of control. A poorly designed SAS job can cause real problems, and a new production batch job can do the same. So what can you do to prevent it? How do you actually manage batch CPU time?
One can code a CPU time limit per step in the JCL, but who does? In a production job, CPU demand can vary widely from run to run of the same job, depending on the file(s) read in, the time of year, or the time of month. Anyone who values their job would be afraid to hard-code such a parameter because, let’s face it, things change. And the PROCLIBs governing production jobs may not be libraries you can modify. It’s a crude tool, and one most of us probably never consider.
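For reference, this is what a hard-coded step limit looks like in standard JCL via the TIME parameter (the job, step, and program names here are hypothetical):

```jcl
//* Job-level cap: the whole job may use at most 5 minutes of CPU
//MYJOB    JOB (ACCT),'ANALYSIS',CLASS=A,TIME=(5,0)
//* Step-level cap: limit this step to 2 minutes 30 seconds of CPU
//ANALYZE  EXEC PGM=MYSAS,TIME=(2,30)
```

If the step exceeds its limit, the system abends it (S322), which is exactly the blunt, all-or-nothing behavior that makes people reluctant to code it.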
Another option is a default CPU time value per step. Of course, this only applies when the user hasn’t coded a step CPU time in the JCL. This tool is even cruder, since jobs may have many steps. It’s not uncommon for complex batch jobs to run 10 or more steps, so even a modest 10-second default per step lets a job consume 100 seconds of CPU. Execution time is hard to predict, but you still need to stay on top of CPU utilization.
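As a sketch of where such a default typically lives: in JES2 shops it can be set per job class in the initialization deck, for example with the TIME parameter on a JOBCLASS statement (the class and values below are purely illustrative; check your own JES2 initialization documentation before changing anything):

```jcl
/* JES2 initialization deck (illustrative):                    */
/* default CPU time limit for jobs in class A, applied when    */
/* the JCL does not code its own TIME= value                   */
JOBCLASS(A)  TIME=(0,10)
```

With a multi-step job, that per-step default multiplies out exactly as described above: 10 steps at 10 seconds each is 100 seconds of CPU before anything trips a limit.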
The Tool You Really Need
Compuware ThruPut Manager provides an answer. Using its Job Action Language (JAL), you have the flexibility to honor an existing JCL-coded CPU time parameter, or to specify and override a step CPU limit for a given job or type of job. ThruPut Manager inserts the limit value into the job automatically, so you don’t have to manage it in the JCL. This can be very useful for limiting the potential damage of your large analysis jobs and thus could be considered a career-enhancement tool. Business users also don’t like being surprised by huge, unexpected bills; this might very well help you avoid that.
But wait! You’re going to tell me that your CPUs run at different speeds, and if the job is dispatched on a slower box, a fixed limit won’t work. Fortunately, the ThruPut Manager developers foresaw that situation and also provide a CPU Normalization service. Within a JAL SET statement, you can request normalization, so the CPU time limit you set for a job is adjusted for processor speed. Consult the ThruPut Manager manuals for how to set this up. Note that this lets you control production jobs even when you can’t edit the PROCLIBs that govern them.
CPU time is a valuable resource. When the basic tools in your toolbox don’t give you the controls you need, consider a solution that offers a better selection. This treasure chest is full of unmined value. Dig in and see how you can gain better control over your environment.
Denise Kalm