1. Virtualization. Why do we do it?
All modern operating systems are designed around a one-to-one relationship with the hardware beneath them (the CPU and its cores, cache, physical memory, storage, network, and so on). That one-to-one relationship made economic sense when a server had a single-core CPU, a single level of cache, and a few hundred MB of physical memory. With the rapid advances in chip fabrication over the last two decades, the popular off-the-shelf commodity servers on the market today, say an HP ProLiant Gen9, ship with multi-core CPUs (12-18 cores) and 1.5 TB of memory. Run in a one-OS-to-one-machine configuration, these hyper-powered machines use only 10-20% of their full computing capacity. How can we maximize the utilization of this precious and abundant computing resource to get full ROI? Hypervisor technology. It helps extract every ounce of capacity and get much higher utilization out of our physical server resources.
A hypervisor is also known as a Virtual Machine Monitor (VMM), and its sole purpose is to allow multiple "virtual machines" to share a single hardware platform. In the simplest terms, the VMM is the piece of software responsible for monitoring and enforcing policy on the virtual machines for which it is responsible. This means that the VMM keeps track of everything happening inside a virtual machine, and when necessary, provides resources, redirects the virtual machine to resources, or denies access to resources (different VMM implementations provide or redirect resources to varying degrees).
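To make that mediation role concrete, here is a toy sketch in Python of a VMM-like component granting, capping, or denying a VM's memory requests. The class, the quota policy, and the numbers are invented for illustration; no real hypervisor exposes this interface.

    # Toy sketch (illustrative only): a VMM mediating resource requests from VMs.
    class ToyVMM:
        def __init__(self, total_memory_mb):
            self.free_memory_mb = total_memory_mb
            self.allocations = {}          # vm_id -> MB granted so far

        def request_memory(self, vm_id, amount_mb, quota_mb=4096):
            """Grant, cap, or deny a VM's memory request per a simple policy."""
            already = self.allocations.get(vm_id, 0)
            if already + amount_mb > quota_mb:
                return 0                   # deny: request would exceed the VM's quota
            granted = min(amount_mb, self.free_memory_mb)
            self.free_memory_mb -= granted
            self.allocations[vm_id] = already + granted
            return granted                 # may be less than requested

    vmm = ToyVMM(total_memory_mb=16384)
    print(vmm.request_memory("vm-1", 2048))   # 2048: fully granted
    print(vmm.request_memory("vm-1", 8192))   # 0: denied, over the 4096 MB quota

Real VMMs enforce far richer policies (CPU scheduling shares, memory ballooning, I/O throttling), but the grant/redirect/deny shape is the same.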
Types of Hypervisors
A Type 1 hypervisor is one that runs directly on the hardware without the need of a hosting operating system. There is no host OS, and the hypervisor has direct access to all hardware and features. The main reasons to install a Type 1 hypervisor are to run multiple operating systems on the same physical hardware without the overhead of a host OS, and to take advantage of the portability and hardware abstraction it provides. Bare-metal hypervisors are most often used for servers because of their security and the ease of moving workloads from one piece of hardware to another in case of a crash. Examples of Type 1 VMMs include the mainframe virtualization solutions offered by companies such as Amdahl and IBM, and, on modern computers, solutions such as VMware ESX, Citrix XenServer and Microsoft Hyper-V.
A Type 2 hypervisor is one that runs on top of a hosting operating system and spawns virtual machines above it. These hypervisors run on a conventional operating system just as other computer programs do, abstracting guest operating systems from the host operating system. VMware Workstation, VMware Player, VirtualBox and QEMU are examples of Type 2 hypervisors.
2. Elements of Virtualization
CPU Virtualization: A virtual CPU (vCPU), also known as a virtual processor, is a slice (a transient unit) of a physical central processing unit (CPU) that is assigned to a virtual machine (VM). In other words, a virtual processor assigned to a virtual machine is renting a slice of computing time from the physical processor. By default, each virtual machine is allocated one vCPU. If the physical host has multiple CPU cores at its disposal, a vCPU essentially becomes a series of time slots on the logical processors assigned to the VM.
Logical Processor Math: Inside your physical processor you can have more than one execution unit, called a core. Normally a physical processor core can only handle one thread (i.e., one stream of operations) in a given processor time slot. But when Hyper-Threading is supported and activated, each core can handle two hardware threads at once. The number of hardware threads per core therefore translates into the number of logical processors the physical CPU can serve to VMs. So in short:
Cores Count = Processor Count * Cores Per Processor
Logical Processor Count = Cores Count * Threads Per Core
So for a quad-core processor with Hyper-Threading, say a server with an Intel Xeon E3-1230:
Logical Processor Count = 4 * 2 = 8
An Intel Xeon E3-1230 processor with four physical cores and HT enabled thus presents 8 logical processors, across which a virtual machine's 8 vCPUs can be scheduled.
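The arithmetic above is simple, but a tiny Python sketch makes the relationship explicit. The socket/core/thread counts are the Xeon E3-1230 figures from the example; the function name is ours.

    # Minimal sketch of the logical-processor math described above.
    def logical_processor_count(sockets, cores_per_socket, threads_per_core):
        cores = sockets * cores_per_socket      # Cores Count
        return cores * threads_per_core         # Logical Processor Count

    # Intel Xeon E3-1230: 1 socket, 4 cores, 2 hardware threads per core (HT on)
    print(logical_processor_count(sockets=1, cores_per_socket=4, threads_per_core=2))
    # -> 8 logical processors available to schedule vCPUs on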
Memory Virtualization
A guest operating system that executes within a virtual machine expects a zero-based physical address space, as provided by real hardware. Memory virtualization gives each VM this illusion, virtualizing physical memory by adding an extra level of address translation. In traditional computer architecture, the OS is responsible for allocating physical memory to processes, each of which runs inside its own virtual address space. When a process accesses memory, the hardware memory management unit (MMU) translates the process virtual address by walking the corresponding page table set up by the OS. When running inside a hypervisor environment, one more level of indirection in address translation is performed.
When running on a hypervisor, the guest OS maintains a mapping from guest virtual addresses (GVA) to guest physical addresses (GPA), and the hypervisor translates from guest physical to machine physical addresses (MPA), i.e., the real physical addresses used to access memory. Accordingly, a Machine Frame Number (MFN) is a page frame number in the machine physical address space, whereas a Guest Frame Number (GFN) is a page frame number in the guest physical address space. Normally, each GFN of a guest OS is mapped to a unique MFN allocated by the hypervisor. The table used by the hypervisor to translate from guest physical to machine physical addresses is usually referred to as the guest physical to machine physical (P2M) table. Most modern CPUs have hardware support for memory virtualization. Taking Intel VT (Virtualization Technology) [1] as an example, when the Extended Page Table (EPT) feature is enabled, the hardware MMU performs two walks for each memory access from a guest VM:
1. Walk the page tables maintained by the guest OS to translate from GVA to GPA.
2. Walk a separate page table, the EPT, set up by the hypervisor to translate from GPA to MPA.
Thus, the conceptual P2M table mentioned above maps to the EPT table in the case of the Intel architecture.
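A toy model in Python may help fix the two stages in mind. Real page tables are multi-level radix trees walked in hardware; plain dictionaries stand in for the guest page table and the P2M/EPT here, assuming 4 KiB pages. All mappings are invented for illustration.

    # Toy model of the two-stage translation described above (4 KiB pages).
    PAGE_SHIFT = 12                       # low 12 bits of an address are the page offset

    guest_page_table = {0x42: 0x10}       # GVA page -> GPA page (GFN), set up by guest OS
    p2m_table        = {0x10: 0x77}       # GFN -> MFN, set up by the hypervisor
                                          # (the EPT in the Intel VT-x case)

    def translate(gva):
        """Translate a guest virtual address to a machine physical address."""
        offset = gva & ((1 << PAGE_SHIFT) - 1)
        gfn = guest_page_table[gva >> PAGE_SHIFT]   # stage 1: GVA -> GPA
        mfn = p2m_table[gfn]                        # stage 2: GPA -> MPA
        return (mfn << PAGE_SHIFT) | offset

    print(hex(translate(0x42ABC)))        # -> 0x77abc

In hardware, a miss in either stage raises a fault handled by the guest OS (stage 1) or the hypervisor (stage 2).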
Storage Virtualization/Hypervisor
A storage hypervisor is a supervisory program that manages multiple pools of storage as virtual resources. It treats all the storage hardware it manages as generic, even though that hardware includes dissimilar and incompatible platforms. To do this, a storage hypervisor must understand the performance, capacity, and other service characteristics of the underlying storage, whether that represents the physical hardware, such as solid-state disks or hard disks, or the storage architecture such as storage-area network (SAN), network-attached storage (NAS) or direct-attached storage (DAS).
In other words, a storage hypervisor is more than just a combination of a storage "supervisor" and storage virtualization features. It represents a higher level of software intelligence that controls device-level storage controllers, disk arrays and virtualization middleware. It also provisions storage, provides services such as snapshots and replication, and manages policy-driven service levels. The storage hypervisor provides the technology on which software-defined storage can be built.
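As a rough sketch of the pooling idea, the following Python toy presents dissimilar devices as one generic pool and provisions a volume by service tier, as a storage hypervisor's policy engine might. The device list, tiers, and provisioning policy are invented for illustration.

    # Toy sketch: dissimilar storage presented as one pool, provisioned by tier.
    devices = [
        {"name": "ssd-array-1", "capacity_gb": 2000, "tier": "fast"},   # SSD / SAN
        {"name": "hdd-nas-1",   "capacity_gb": 8000, "tier": "bulk"},   # NAS
        {"name": "hdd-das-1",   "capacity_gb": 4000, "tier": "bulk"},   # DAS
    ]

    def provision(size_gb, tier):
        """Carve a virtual volume out of any device matching the requested tier."""
        for dev in devices:
            if dev["tier"] == tier and dev["capacity_gb"] >= size_gb:
                dev["capacity_gb"] -= size_gb
                return {"volume_gb": size_gb, "backed_by": dev["name"]}
        raise RuntimeError(f"no {tier}-tier capacity for {size_gb} GB")

    print(provision(500, "fast"))   # consumer never sees which hardware backs it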
Network virtualization
Network virtualization involves dividing available bandwidth into independent channels, which are assigned, or reassigned, in real time to separate servers or network devices.
Network virtualization is accomplished by using a variety of hardware and software to combine network components. It is categorized as either external virtualization or internal virtualization. External network virtualization combines or subdivides one or more local area networks (LANs) into virtual networks to improve the efficiency of a large network or data center. A virtual local area network (VLAN) and a network switch comprise the key components. Using this technology, a system administrator can configure systems physically attached to the same local network into separate virtual networks. Conversely, an administrator can combine systems on separate LANs into a single VLAN spanning segments of a large network. Internal network virtualization configures a single system with software containers, such as a VNIC (virtual network interface card), to emulate a physical network in software. A toy model of the external case follows.
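The sketch below, in Python, models a single switch whose ports are assigned VLAN IDs, so that systems on the same physical network land in isolated virtual networks. The port-to-VLAN table is invented for illustration.

    # Toy model of external network virtualization via VLANs on one switch.
    port_vlan = {1: 100, 2: 100, 3: 200, 4: 200}   # switch port -> VLAN ID

    def can_communicate(port_a, port_b):
        """Frames are only forwarded between ports in the same VLAN."""
        return port_vlan[port_a] == port_vlan[port_b]

    print(can_communicate(1, 2))   # True: both in VLAN 100
    print(can_communicate(1, 3))   # False: VLAN 100 vs. VLAN 200, isolated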
Shak Kathirval
Application Architect, Architecture
__________________________________________________