Hardware-assisted virtualization: Hardware-assisted virtualization was first introduced on the IBM System/370 in 1972, for use with VM/370, the first virtual machine operating system. Virtualization was eclipsed in the late 1970s, with the advent of minicomputers that allowed for efficient timesharing.
George Santayana: "Those who cannot remember the past are condemned to repeat it."
My first I.T. job right out of university was as a DBA on an IBM mainframe for a large British retailer. That mainframe ran at near 100% CPU all day. The thing was amazing. It ran lots of different workloads. It ran web-like applications (CICS), which got the highest priority, and response times were great. Any spare capacity was devoted to batch jobs. These could be batch jobs that the business needed, or developers submitting compilation jobs for code they were working on. The point is that it managed this mix of workloads beautifully: it ensured that the interactive, end-user-driven transactions got priority, and it soaked up any spare capacity with batch jobs that could be swapped in and out at will. Which brings me to the cloud.
There are many reasons that folks are advocating the cloud, but the most important would seem to be cost savings. The case is made in detail in this Booz Allen Hamilton paper, which rests on this eye-popping assumption: all the cost savings in their model are based on "Our analysis assumes an average utilization rate of 12 percent of available CPU capacity in the status quo environment and 60 percent in the virtualized cloud scenarios". I've done a bit of digging, and those two numbers seem to represent reality, which is mind-blowing to me on both fronts. The average server utilization in data centers today is a meager 12%, which is terrible: it means a bunch of servers are sitting there running apps that hardly anyone uses, or apps that are seasonal and so sit idle most of the time. It is also surprising that they assume only 60% utilization in the virtualized cloud environment. Given that the mainframe could crank at 100% all day, why isn't that number higher?
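A quick back-of-the-envelope sketch shows why those two percentages drive the whole cost model. The aggregate workload figure below is my own illustrative number, not from the paper; the only inputs that matter are the utilization rates.

```python
import math

def servers_needed(total_workload: float, avg_utilization: float) -> int:
    """Servers required to carry a fixed aggregate workload at a given
    average utilization, assuming one unit of capacity per server."""
    return math.ceil(total_workload / avg_utilization)

# Hypothetical aggregate demand: the equivalent of 120 fully busy servers.
workload = 120.0

status_quo = servers_needed(workload, 0.12)  # 12% average utilization
cloud      = servers_needed(workload, 0.60)  # 60% virtualized cloud
mainframe  = servers_needed(workload, 1.00)  # the near-100% mainframe case

print(status_quo, cloud, mainframe)  # 1000, 200, 120
```

Under these assumptions, moving from 12% to 60% utilization cuts the server count fivefold, which is where essentially all of the modeled savings come from; closing the remaining gap to mainframe-style 100% utilization would shave off another 40%.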
I am speculating now, but I assume it isn't higher because they are factoring in a significant overhead for hardware-assisted virtualization (the kind that powers EC2). The very same overhead that caused hardware-assisted virtualization to be eclipsed, in the late 1970s, by computers that could do much more efficient timesharing and hence save a tonne of money. The key questions for anyone investing in building cloud infrastructure are whether economic factors will again render this technology obsolete, and whether there is something with similar characteristics but without all the overhead. Something that could, say, run 1000 Linux instances on a single host? Hardware-assisted virtualization isn't even the only game in town.