I’m often a little apologetic about my liking for the mainframe. After all, I used to be told (mostly by people who knew nothing about mainframes) that it was only as powerful as an Intel 486 desktop, which made it appalling value for money.

True, it had an impressive throughput all the same, largely through parallel processing (including “channels” handling IO in parallel); but who wanted parallel processing, when serial processing was easier to code and, if a program was a bit slow, next week’s chip would be fast enough to fix that anyway? The mainframe had rock-solid virtualisation too, but who needed virtualisation when you could buy the real thing for peanuts and have a PC all to yourself? The mainframe also had rock-solid security and reliability, but that’s a bit boring. And the mainframe definitely wasn’t suitable for playing computer games; the clock speed was too slow.

That was then. Now, as the business realises that it depends on software, security and reliability are looking rather attractive. The proliferation of unmanaged servers has started to make virtualisation and server consolidation seem like a good idea. And Intel has hit a brick wall with increasing clock speed, so you now have to understand coding for parallel (multi-core) processing anyway. The “value for money” question largely depends on IBM’s pricing policies (and the amount of work you need to get through), so thank God it’s still a bit slow, CPU-wise, because the mainframe is still outside most people’s comfort zone.
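Since “coding for parallel” is the new fact of life the paragraph above points at, here is a minimal sketch of what it means in practice: plain Java fork/join parallel streams on an ordinary multi-core box, with nothing IBM- or z-specific assumed.

```java
import java.util.stream.LongStream;

public class ParallelSum {
    public static void main(String[] args) {
        long n = 100_000_000L;

        // Serial: one core does all the work, waiting for a faster chip that no longer arrives.
        long serial = LongStream.rangeClosed(1, n).sum();

        // Parallel: the same reduction, split across whatever cores the machine has.
        long parallel = LongStream.rangeClosed(1, n).parallel().sum();

        System.out.println(serial == parallel); // same answer; the speed now comes from using more cores
    }
}
```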

But that’s all about to end, perhaps. If CA’s Mainframe Madness month was all about rather successfully bringing mainframe management and operational tools into the 21st century, with GUI interfaces and integral knowledge-bases, IBM is having a bit of mainframe madness of its own this month, around the release of what used to be called “zNext”, which now appears to be officially known as “zEnterprise”.

According to Ray Jones, VP of System z Software, IBM’s “zEnterprise” strategy is, first, to capitalise on its traditional strengths in batch and transaction processing, messaging, quality of service and reliability, and serving up huge volumes of data; and to allow all this to take full advantage of a new z hardware design.

However, the second part of the strategy is perhaps more exciting: IBM is extending z to new and mixed workloads, partly through virtualised Linux and partly through integration with blade servers (but only specific models) for traditional distributed-systems workloads. That said, the speed of the new quad-core z processor, 5.2 GHz, means that more workloads may now be suitable for virtual machines actually running on z itself.

We now seem to have a mainframe which represents a “system of systems”: an ultra-reliable supervisor (with downtime measured in minutes per year, and IBM investigates even these downtimes with the aim of eliminating them) which can schedule workloads on itself and on other platforms reliably, and manage monitoring and security at the same time.
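For a rough sense of scale (my arithmetic, not an IBM figure), downtime measured in minutes per year corresponds to roughly “five nines” availability. A quick back-of-envelope check, assuming the five-minute figure mentioned later in this piece:

```java
public class Availability {
    public static void main(String[] args) {
        // Back-of-envelope only: convert minutes of downtime per year into an availability percentage.
        double minutesPerYear = 365.0 * 24 * 60;   // 525,600 minutes in a (non-leap) year
        double downtimeMinutes = 5.0;              // illustrative figure quoted in this article
        double availability = 1.0 - downtimeMinutes / minutesPerYear;
        System.out.printf("%.5f%%%n", availability * 100); // prints 99.99905%, i.e. roughly "five nines"
    }
}
```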

This isn’t the first time I’ve been presented with such a picture, but this time I really believe it could work, because the mainframe now genuinely seems to be part of an integrated enterprise computing platform. In support of this, the release includes:

  • Interesting hardware innovations for sheer speed and power efficiency;
  • A re-engineered software stack;
  • Genuine integration with modern application development environments. You don’t have to develop and maintain in a traditional mainframe environment (IBM’s zPDT provides the flexibility of a System z operating environment on an x86 PC running Linux);
  • Simplification and automation of mainframe operations.

And, let’s face it, if IBM doesn’t (as it puts it) reinvigorate the System z ecosystem, attract new System z customers and ISV application workloads, enable new hybrid and cloud environments and “make System z relevant to the new IT generation”, the mainframe might actually die at last. That would be bad for its customers, who need mainframe levels of reliability and throughput but are losing traditional mainframe skills; and also bad for IBM, which I’m sure makes a decent margin on mainframe sales.

Of course, another factor in redesigning the mainframe as an equal partner in the distributed world might be the possible interest of the regulators in IBM as a monopoly supplier of mainframe hardware. And another part of “reinvigorating the mainframe” might be a need to bring prices in line with other platforms, perhaps reducing margin a little, although mainframe pricing and what people actually pay for one is always a pretty complex issue.

This is a huge announcement and this isn’t the place to cover it in full; but the devil will be in the technical detail, and I’d like to cherry-pick a few points which caught my eye.

Data centre power and space efficiency seems to be a design objective. The new z mainframes fit into more or less the same space as their predecessors and use more or less the same electrical power, but deliver much more processing capacity from a faster processor. And optional water cooling is available, not because it is necessary for high-power configurations (it isn’t) but because it makes them more heat-efficient. This is going to be very important in many installations.

Fundamentally, however, the z chip design is much improved, with a better-optimised on-chip cache that can hold more of the working set, so as to exploit the faster processor fully; and it has new hardware instructions too. If you like chip technology, I suggest taking a look at this one. It might also be an idea to take a look at IBM’s new compilers, which appear to be necessary to exploit the new hardware architecture fully.
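As a generic, non-z-specific illustration of why on-chip cache and working sets matter so much to a fast processor, consider the same matrix sum traversed two ways; the arithmetic is identical, but the cache behaviour is not (a sketch in ordinary Java, assuming nothing about IBM’s chip or compilers):

```java
public class WorkingSet {
    public static void main(String[] args) {
        int n = 2048;
        double[][] m = new double[n][n];

        // Row-order traversal: consecutive memory accesses, so the working set stays in cache.
        double rowSum = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                rowSum += m[i][j];

        // Column-order traversal: each access jumps to a different row, so the cache keeps
        // evicting and refetching lines; same arithmetic, far more time stalled on memory.
        double colSum = 0;
        for (int j = 0; j < n; j++)
            for (int i = 0; i < n; i++)
                colSum += m[i][j];

        System.out.println(rowSum == colSum); // identical result; only the memory behaviour differs
    }
}
```

A bigger, smarter cache simply means more real workloads behave like the first loop without the programmer, or the compiler, having to work at it.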

Starting from the user interface, IBM is also addressing the usability of its management, operational and development tools for the mainframe environment. About time, many will say, but how will this affect a vendor like CA, which appears to be pitching on the increased usability of its mainframe monitoring and operational tools (see here)? Time will tell, but IBM probably needs healthy competitors in order to stave off any accusations of monopoly practices from the regulators, so I very much doubt that it wants to put CA out of the mainframe business.

Running a mainframe shop is all about identifying different workloads and running them in the most appropriate place; IBM’s mainframe scheduler is a key enabler of the immense throughputs a mainframe can deliver. Now, however, although the mainframe is still most appropriate for certain kinds of very high-volume, business-critical workloads, the choice of workload may be a bit less critical: if your workload really does work best on a distributed platform, you can simply run it on z’s integrated blade servers, optimised for the “distributed computing” type of workload. I wonder whether this sort of capability is going to make Neon’s zPrime software (which moves workload onto the mainframe’s zIIP and zAAP speciality processors) look less attractive.
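To make the placement idea concrete, here is a toy sketch of the sort of decision being described. This is emphatically not IBM’s scheduler or the zEnterprise management firmware; the workload names and routing rules are my own hypothetical examples:

```java
// A toy illustration only; not IBM's workload management software.
// The workload names and routing rules below are hypothetical.
public class WorkloadRouter {
    enum Target { Z_CENTRAL_PROCESSOR, Z_SPECIALTY_ENGINE, INTEGRATED_BLADE }

    static Target placeFor(String workload) {
        switch (workload) {
            case "high-volume-batch":
            case "core-transactions":
                return Target.Z_CENTRAL_PROCESSOR;  // the classic mainframe strengths stay on z
            case "java-app-server":
                return Target.Z_SPECIALTY_ENGINE;   // the kind of work speciality engines are sold for
            default:
                return Target.INTEGRATED_BLADE;     // "distributed computing" workloads go to the attached blades
        }
    }

    public static void main(String[] args) {
        System.out.println(placeFor("core-transactions")); // Z_CENTRAL_PROCESSOR
        System.out.println(placeFor("web-front-end"));     // INTEGRATED_BLADE
    }
}
```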

I’ve always said that mainframe processing isn’t inherently expensive, given the efficiency and reliability possible; it’s the vendor’s software and hardware pricing models which can make it expensive in practice, or not. Real-world pricing of zEnterprise will have to be looked at very carefully.

Nevertheless, at the moment, I no longer feel at all apologetic about liking the mainframe. It still promises ultimate reliability (even 5 minutes’ downtime a year, with no “planned downtime”, still concerns mainframe designers; they insist on working to get this figure down). But it now delivers energy efficiency even with a fast processor, and a Linux VM potentially as fast as anything you can find elsewhere. It’s a really exciting platform for enterprise computing; all IBM has to do to ensure that it really takes off is to port Grand Theft Auto, so the operators have something to do in the long nights when nothing is going wrong. Except that you largely don’t need operators any more in today’s mainframe data centre.