Many IT professionals think of virtualization in terms of virtual machines (VMs) and their associated hypervisors and operating-system implementations, but that only skims the surface. An increasingly broad set of virtualization technologies is redefining major elements of IT in organizations everywhere.

What is virtualization?

Examined in a broader context, virtualization is the art and science of using software to simulate or emulate the function of an object or resource so that it behaves identically to its physically realized counterpart. In other words, we use an abstraction to make software look and behave like hardware, with corresponding benefits in flexibility, cost, scalability, reliability, and often overall capability and performance, across a broad range of applications. Virtualization, then, makes "real" that which is not, applying the flexibility and convenience of software-based capabilities and services as a transparent substitute for the same functions realized in hardware.

Roots in mainframes

Virtual machines trace their roots back to a small number of mainframes from the 1960s, most notably the IBM 360/67, and became an established essential in the mainframe world during the 1970s. With the introduction of Intel's 386 in 1985, VMs took up residence in the microprocessors at the heart of personal computers. Contemporary VMs, implemented in microprocessors with the requisite hardware support and with the aid of both hypervisors and OS-level implementations, are essential to the productivity of computation everywhere, most importantly capturing machine cycles that would otherwise be lost in today's highly capable 3-plus-GHz processors.

VMs also provide additional security, integrity, and convenience, and with very little computational overhead. Moreover, we can also extend the concept (and implementation) of VMs to include emulators for interpreters like the Java Virtual Machine, and even full simulators. Running Windows under MacOS? Simple. Commodore 64 code on your modern Windows PC? No problem.
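To make the idea of software standing in for hardware concrete, here is a toy stack-based "virtual machine" in Python. The three-instruction set and the `run` function are entirely hypothetical, chosen only to illustrate the principle that an emulator or interpreter executes "machine" instructions purely in software:

```python
# A toy stack-based virtual machine: software emulating a simple
# "processor" with a hypothetical three-instruction set.

def run(program):
    """Execute a list of (opcode, argument) pairs and return the result."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack.pop()

# Compute (2 + 3) * 4 on the "virtual" processor:
prog = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
print(run(prog))  # 20
```

The program has no idea it is running on an interpreter rather than silicon, which is precisely the property that makes the JVM, full-system simulators, and hardware VMs interchangeable from the software's point of view.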

What's most important here is that the software running within a VM has no knowledge of that fact – even a guest OS otherwise designed to run on bare metal thinks its "hardware" platform is exactly that. Herein lies the most important element of virtualization itself: an incarnation of the "black box" approach to the implementation of information systems that relies on the isolation enabled by APIs and protocols. Think of this in the same context as the famous Turing Test of machine intelligence: applications – which are, after all, the reason we implement IT infrastructure of any form in the first place – are none the wiser about exactly where they're running. And they don't need to be, enhancing flexibility, lowering costs, and maximizing IT RoI in the bargain.

We can in fact trace the roots of virtualization to the era of timesharing, which also began to appear around the late 1960s. While mainframes certainly weren't portable, the rapidly increasing quality and availability of dial-up and leased telephone lines and advancing modem technology enabled a virtual presence of the mainframe in the form of a (typically dumb alphanumeric) terminal. Virtual machine, indeed: This model of computing led – via advances in both the technology and economics of microprocessors – directly to the personal computers of the 1980s, with local computation in addition to the dial-up communications that eventually evolved into the LAN and ultimately into today's transparent, continuous access to the Internet.

Virtualized memory

Also evolving rapidly in the 1960s was the concept of virtual memory, arguably just as important as virtual machines. The mainframe era featured remarkably expensive magnetic-core memory, and mainframes with even a single megabyte of memory were rare until well into the 1970s. As with VMs, virtual memory is enabled by relatively small additions to a machine's hardware and instruction set, allowing portions of memory, usually called segments and/or pages, to be written out to secondary storage, and the memory addresses within these blocks to be dynamically translated as they are paged back in from disk. Voilà – a single real megabyte of core memory on an IBM 360/67, for example, could support the full 24-bit address space (16 MB) enabled by the machine's architecture – and, properly implemented, each virtual machine could in addition have its own full complement of virtual memory. As a consequence of these innovations, still hard at work today, hardware otherwise designed to run a single program or operating system could be shared among users, even with multiple simultaneous operating systems and massive memory requirements beyond the real capacity provisioned. As with VMs, the benefits are numerous: user and application isolation, enhanced security and integrity, and, again, much improved RoI. Sounding familiar yet?
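The dynamic address translation described above can be sketched in a few lines of Python. This is a deliberately simplified model – the names (`PAGE_SIZE`, `PageTable`, `translate`) are illustrative, there is no eviction to secondary storage, and real MMUs do this in hardware with OS-managed tables – but it shows how a virtual address splits into a page number and an offset, and how a "page fault" maps in a new physical frame:

```python
# A toy model of virtual-to-physical address translation, assuming
# 4 KB pages and a dictionary-based page table (hypothetical names).

PAGE_SIZE = 4096  # bytes per page

class PageTable:
    def __init__(self):
        self.entries = {}    # virtual page number -> physical frame number
        self.next_frame = 0  # next free physical frame (no eviction here)

    def translate(self, vaddr):
        """Split a virtual address into page number and offset, then map it."""
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn not in self.entries:
            # Page fault: real hardware traps to the OS, which loads the
            # page from secondary storage and updates the page table.
            self.entries[vpn] = self.next_frame
            self.next_frame += 1
        return self.entries[vpn] * PAGE_SIZE + offset

pt = PageTable()
# Two virtual addresses on the same page land in the same physical frame:
a = pt.translate(0x1000)  # page 1, offset 0    -> frame 0, address 0
b = pt.translate(0x1FFF)  # page 1, offset 4095 -> frame 0, address 4095
print(a, b)  # 0 4095
```

Because translation happens on every access, the program can use any address in its virtual space while the machine provisions real frames only as they are touched – the mechanism that let one real megabyte back a 16 MB address space.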