Fundamental Question on Virtualization

Since I first learned a bit about virtualization, there’s been one question that’s kept nagging me: isn’t this what operating systems were originally supposed to do? Back in my undergraduate days in the Computer Science department at the University of Illinois at Urbana-Champaign, I took a course in operating systems, and I seem to recall it being all about the allocation of memory, I/O, storage, and processor cycles among processes. This seems to be the exact same problem that virtualization is trying to solve. About the only differences I can see are that virtualization, at least on the server side, does try to go across physical boundaries with things like VMware’s VMotion, and it also allows us to avoid adding physical resources just because one system requires Windows Server while another requires SuSE Linux.

So, back to the question. Did we simply screw up our operating systems so badly with so much bloat that they couldn’t effectively allocate resources? If so, you could argue that a new approach that removes all the bloat may be needed. That doesn’t necessarily require virtualization, however. There’s no reason why better resource management couldn’t be placed directly into the operating system. Either way, this path at least has the potential to provide benefits, because the value depends more heavily on the technology’s capabilities than on how we choose to leverage that technology.

In contrast, if the current state has little to do with the operating system’s capabilities, and more to do with how we choose to allocate resources among our systems, then will virtualization make things any better? Put another way, how much of the potential value of applying virtualization depends on our ability to properly configure the VMs? If that share is significant, we may be in trouble.

This is also a key point of discussion as people look into cloud computing. The arguments are again based on economies of scale, but the value is heavily dependent on the ability to efficiently allocate resources. If the fundamental problem is in the technology capabilities, then we should eventually see solutions that allow for both public-cloud computing and private-cloud computing (treat your internal data center as your own private cloud). If the problem is not the technology, then we’re at risk of taking our problems and making them someone else’s problems, which may not actually lead to a better situation.

What are your thoughts on this? Virtualization isn’t something I think about a lot, so I’m open to input on this. So far, the most interesting thing for me has been hearing about products that are designed to run on a hypervisor directly, which removes all of the OS bloat. The risk is that 15 years from now, we’ll repeat this cycle again.

3 Responses to “Fundamental Question on Virtualization”

  • Perhaps part of the problem is that today’s server and desktop OSes aren’t really pure operating systems, but are actually application platforms. .NET, standard libraries, IE, Gnome, etc. etc. etc. are human-focused applications that sit on and interact with the OS functions, just like any other application or application library. VMs, on the other hand, are pure servers with just a few small management tools. So they look completely different to your average sysadmin. Plus, they allow several of these OS “application platforms” to run together on the same box, something none of the OS vendors themselves were motivated to implement.

    Make sense?

  • Oh, I certainly understand the fact that the operating systems we have today are application platforms. My concern is that today’s hypervisors are really yesterday’s operating systems. How do we prevent hypervisors from going down the same ugly path of bloated platforms that operating systems did, and how much of it was simply poor policies/processes on our part?

  • Yes, most of today’s OSes should be able to provide virtualization, but they are broken so they don’t. Some are less broken than others; e.g. Solaris can do pretty good virtualization with Zones. As you guessed, all the features that we currently associate with hypervisors could be provided by OSes; there were OSes in the past that provided process motion, a single system image over a cluster, or multiple personalities (effectively multiple OSes sharing a common pool of hardware resources). Unfortunately, if you try to provide all these features in one OS it usually collapses under its own complexity. I don’t blame bloat for this complexity; we just ask OSes to solve many problems. Thus the OS must be decomposed into multiple layers; the lower layer has gone by different names over the years: microkernel, exokernel, hypervisor, etc.

    You’re also right that virtualization can be a way of moving your problems around instead of solving them. Hardware was cheap, so we created server sprawl, then we used virtualization to convert it into VM sprawl, saving money on hardware but doing little for system management.

    It would be nice if the hypervisors stuck to resource management, but I suspect they’ll feel the urge to try to take over (i.e. duplicate) functionality from the OS, the network, or the SAN.

