Getting the Most Out of Your Mainframe: Driving Down TCO Through Expert Mainframe Performance Analysis
Oliver Presland, Vice President, Global Consulting Services Portfolio
Is your mainframe working for you? Does it deliver the value you need to execute your business strategy? Are you looking to drive down total cost of ownership across your mainframe estate?
If you answered “yes” to any of the above, here’s one final question: when was the last time you did an end-to-end assessment of your mainframe to identify optimisation opportunities?
It’s not an easy task, but a deep-dive assessment of your mainframe environment, application workloads, system configuration and performance can support your business’s goals: driving down costs, delivering a better experience to users and making sure your mainframe is optimised to meet business demand.
An assessment can deliver a range of outcomes, and usually identifies optimisation opportunities across a variety of components within your mainframe: understanding whether you can delay an upgrade, finding the cause of underperformance against SLAs, identifying how to alleviate capacity constraints during peak periods, and working out how to drive down peak CPU consumption to deliver potential cost reductions.
At Ensono we work with organisations that have very different ambitions and therefore different reasons to assess their mainframes. Some want to cut costs, others are actively planning modernisation, some are planning for business growth. None of these goals can be effectively achieved without a thorough understanding of your mainframe configuration and whether workloads are truly optimised today.
Why might an assessment be needed?
The pace of change within business has not slowed. Every sector is being subjected to rapid shifts in the market, be it from disruptive new business models or geopolitical challenges that modern businesses must now operate within.
Retail and insurance companies were a stark illustration of this rapid transformation during COVID-19. Customers now expect to interact with businesses digitally, on their terms, and company systems must be responsive to that.
It’s been a time when many new client-facing applications have been deployed, either from third parties or in-house. These applications place new demands on mainframe processing to deliver the core business processes and data that underpin those new client experiences. And like any system reacting to an unexpected increase in demand, the questions are the same: will it scale to the challenge cost-effectively, without the need to upgrade? Is it optimised to deliver the fastest, most efficient application and data processing, and the customer experience users expect?
Over time as applications, software configurations and data structures change, all systems tend to be sub-optimal without regular evaluation. Properly assessed, optimised, and modernised, the mainframe can – and should be – the cornerstone of a business’s technology infrastructure.
What does an assessment involve?
How can a mainframe environment be effectively and comprehensively assessed? It’s a specialist job, requiring both expertise and dedicated tooling. For instance, Pivotor is a software-as-a-service (SaaS) application that provides deep analysis of, and insight into, mainframe performance data. It digs into demand peaks and the workloads that contribute to them, highlighting areas for further analysis and opportunities to optimise the underlying workloads.
Other tools are just as forensic in helping performance experts identify optimisation opportunities across the system. Sampling tools can be used to analyse the heaviest mainframe workloads and assess where pressure can be released. By analysing how efficiently data access is performed and reviewing monitoring and trace overheads, subsystem configuration, performance settings and the other aspects covered by a comprehensive assessment, it’s not uncommon to find optimisations of 8% to upwards of 25% in peak processing periods.
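As a purely illustrative sketch, the core of a peak-demand analysis like the one described above can be thought of as: given CPU consumption per measurement interval broken down by workload, find the busiest interval, then rank the workloads driving it. This is not Pivotor’s actual method or data format; all workload names and figures below are hypothetical toy data.

```python
from collections import defaultdict

# Hypothetical performance records: (interval, workload, cpu_seconds).
# Real assessments draw on system measurement data (e.g. SMF-style
# records); this toy data only illustrates the shape of the analysis.
records = [
    ("09:00", "BATCH01", 120.0), ("09:00", "CICSPRD", 310.0),
    ("09:15", "BATCH01", 150.0), ("09:15", "CICSPRD", 520.0),
    ("09:15", "DB2PRD",  240.0),
    ("09:30", "CICSPRD", 280.0), ("09:30", "DB2PRD",  110.0),
]

def peak_interval(recs):
    """Return the interval with the highest total CPU consumption."""
    totals = defaultdict(float)
    for interval, _, cpu in recs:
        totals[interval] += cpu
    return max(totals, key=totals.get)

def top_contributors(recs, interval, n=3):
    """Rank workloads by CPU within the given interval, highest first."""
    by_workload = defaultdict(float)
    for ivl, workload, cpu in recs:
        if ivl == interval:
            by_workload[workload] += cpu
    return sorted(by_workload.items(), key=lambda kv: -kv[1])[:n]

peak = peak_interval(records)
drivers = top_contributors(records, peak)
```

In this toy data the 09:15 interval is the peak, driven chiefly by the hypothetical CICSPRD workload; in practice it is exactly these dominant contributors that become candidates for the deeper sampling and tuning work described above.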
Armed with this intelligence and solid recommendations from mainframe performance experts, a series of forward-looking configuration changes can be planned to reduce peak processing, shorten batch windows and increase transaction efficiency. These adjustments often yield ongoing reductions that improve how mainframe workloads scale under growing demand. Even a modest level of peak-processing optimisation has an immediate positive impact, not just in more efficient processing of load but on the balance sheet: there’s money to be saved, future unnecessary expense to be avoided, and often both.
Keeping an assessment focused
For a mainframe assessment to be effective, eyes need to be kept on the metaphorical ball. Mainframes are complex IT systems, and both challenges and expectations can vary wildly. Different businesses have their own databases, application stacks, languages, software tools, external interfaces and overall demands. No two projects are ever the same, and that comes through not just in the tailoring of the assessment scope, but in the subsequent strategic recommendations.
Assessments can deliver commonly sought-after benefits such as ongoing cost savings and process optimisation. The focus areas may differ dramatically by project and by where specific pain points exist, but a full, objective assessment report will detail exactly what can be optimised.
End-to-end assessment of a mainframe system involves multiple perspectives. Employees at the coal face will have views specific to what the mainframe does for them and how they interface with it. But at the top level, it’s crucial to take a broader view to fully capture what the mainframe delivers to the business and how its different workloads interact and combine to deliver those outcomes.
With performance and capacity insight across the mainframe in hand, it then becomes possible to plan for effective optimisation, and all the benefits that brings with it.