The mainframe has come a long way since it entered mainstream enterprise IT in the 1960s with the IBM S/360. Even in 1996, when we were being told the last mainframe was long overdue for switch-off, it still took a chartered jumbo jet to ship a new one.
In a recent interview with mainframe expert Paul Knight of TES, which you can watch here, we examined some of these oft-repeated claims that the mainframe is yesterday’s technology and has no role in the modern IT environment.
In reality, today's machines have 20 times the power of those early mainframes, yet they are the size of a small fridge, sit in a standard rack and consume little power. It's not only the physical side that has changed, either: support for Linux is driving new digital workloads onto the mainframe platform.
It’s fair to say the death of the mainframe has been somewhat overstated – in fact, research shows that 93 per cent of executives continue to believe in the mainframe for the long term. As Paul explains in the interview: “It's scalable vertically. It delivers everything that it needs to do when you do high transaction processing. A lot of banks, insurance companies, airlines, all of those still rely on the mainframe.”
The problem is that this reliable workhorse tag also creates an image of complex, ageing legacy infrastructure. As a result, many organisations are afraid to tackle the issue of modernisation because of the perceived complexity and cost. For example, recent research by Ensono and the Cloud Industry Forum found that 89 per cent of organisations surveyed said they struggled to modernise their IT because they have a mainframe. And half said they were so dependent on the mainframe that they could not change it.
Some organisations take a halfway house approach where they work around the core, leaving the mainframe relatively unchanged and building new services around it like an outer skin. It doesn’t need to be this way, however. The mainframe is far simpler to manage and operate if we update this core to make it more efficient and provide greater functionality at a lower price point.
Where organisations have undertaken an IT economics study across their mainframe estate, they often realise that they are getting great value from this ‘legacy’ environment. They then begin to look at running other elements on the same platform – Linux, Kubernetes and Docker – and at using it for high-intensity AI workloads.
Take the example of a US-based organisation that runs a financial crimes analytics solution based on Docker. The solution analyses 30 years of financial crimes history and applies that analysis to current transactions to spot new crimes, new behaviours and new patterns as they emerge. It can cut the number of crimes to 10 per cent of previous levels and accurately predict false positives, significantly reducing the number of cases to investigate. Using Docker and Kubernetes, the organisation has cut development cycles from six-to-eight months to one day, and virtualisation on the mainframe allows two million Docker containers to run on a single server.
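As a sketch of how containers like these end up on mainframe hardware, the hypothetical Kubernetes manifest below (the names `crime-analytics` and `myorg/analytics` are illustrative, not taken from the organisation described above) uses a `nodeSelector` on `kubernetes.io/arch: s390x` – the standard architecture label Kubernetes reports for IBM Z Linux nodes – so the pods are scheduled onto mainframe workers in a mixed-architecture cluster:

```yaml
# Hypothetical manifest: pins the analytics containers to
# IBM Z (s390x) worker nodes in a mixed-architecture cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: crime-analytics
spec:
  replicas: 4
  selector:
    matchLabels:
      app: crime-analytics
  template:
    metadata:
      labels:
        app: crime-analytics
    spec:
      nodeSelector:
        kubernetes.io/arch: s390x    # well-known label set by the kubelet
      containers:
      - name: analytics
        image: myorg/analytics:latest   # image must be built for s390x
```

Beyond the node selector and an s390x-built image, nothing here is mainframe-specific – the manifest is the same one you would write for any other Kubernetes cluster.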
The mainframe can run Red Hat, SUSE and Ubuntu too; there is no longer any requirement to run a proprietary operating system, which makes the platform far more accessible.
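To illustrate how ordinary this has become: the major distributions publish s390x (IBM Z) variants of their official container base images, so a team can build the same image for both x86 and mainframe Linux from a single, unremarkable Dockerfile – for example with `docker buildx` and its `--platform` flag (the application files and image name below are illustrative):

```dockerfile
# An ordinary Dockerfile -- nothing mainframe-specific in it.
# Because Ubuntu publishes s390x variants of its base images, the same
# file can be built for both architectures, e.g.:
#   docker buildx build --platform linux/amd64,linux/s390x -t myorg/app:latest .
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends python3
COPY app.py /opt/app/app.py
CMD ["python3", "/opt/app/app.py"]
```

The point is that the skills, tooling and artefacts are the same ones teams already use everywhere else.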
The message here is that organisations that regard the mainframe as a troublesome legacy need to look at the mainframe itself rather than trying to work around it. The underlying reliability, security and scalability along with the blisteringly fast processing power should be harnessed and modernised rather than just built around.
Watch the full interview here – Putting the Mainframe at the Heart of Digital Transformation