Creating simulated environments has become common now that virtualization plays a major role in computing. Virtualization is the technology that lets you create a simulated environment, or a set of environments, that is not physically real but represents the underlying object. This blog gives an introduction to virtualization and its basic concepts, covering core components such as the hypervisor along with some related use cases. The blog is structured as follows.
What is Virtualization
A Brief History on Virtual Machines
Why Do We Need Virtualization
What is a Hypervisor
Two Types of Hypervisors
Hardware Virtualization
Different Types of Virtualization
1) What is Virtualization
Virtualization is the technology that allows you to create simulated environments using a single physical hardware system. The term virtual describes something that does not have a physical presence, or that is not what it appears to be. For example, "Virtual Reality" refers to technology that displays a simulated environment mimicking reality, even though that environment has no physical presence of its own. Similarly, virtual chat rooms, virtual games and the like are examples of how the term virtual has spread across today's industry.
But the term virtualization is primarily associated with computer science and operating systems. Virtualization refers to the act of creating a virtual version of something: virtual hardware, storage devices, file systems and so on. This allows a single machine or server to be split into multiple virtual machines and virtual servers. It is made possible by a piece of software called the hypervisor, which will be discussed later in this blog. The hypervisor connects to the hardware and distributes it appropriately, splitting it into virtual machines.
Each of these virtual machines believes it has its own storage, its own memory, its own network card and so on. To its software, it feels like an isolated computer.
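The idea of one physical host being carved into isolated slices can be sketched in a few lines of code. This is purely illustrative: the classes, numbers and the simple subtraction of resources below are invented for this example, whereas a real hypervisor schedules CPU time and maps memory pages rather than handing out fixed integers.

```python
class Host:
    """The single physical machine with its real hardware."""
    def __init__(self, cpus, memory_gb):
        self.cpus = cpus
        self.memory_gb = memory_gb

class VirtualMachine:
    """Each VM sees only the slice it was given, as if it were
    a complete, isolated computer."""
    def __init__(self, name, cpus, memory_gb):
        self.name = name
        self.cpus = cpus
        self.memory_gb = memory_gb

class Hypervisor:
    """Toy stand-in for the hypervisor: it owns the host's hardware
    and hands out non-overlapping slices of it to VMs."""
    def __init__(self, host):
        self.host = host
        self.vms = []

    def create_vm(self, name, cpus, memory_gb):
        used_cpus = sum(vm.cpus for vm in self.vms)
        used_mem = sum(vm.memory_gb for vm in self.vms)
        if used_cpus + cpus > self.host.cpus or used_mem + memory_gb > self.host.memory_gb:
            raise ValueError("not enough free resources on the host")
        vm = VirtualMachine(name, cpus, memory_gb)
        self.vms.append(vm)
        return vm

hv = Hypervisor(Host(cpus=16, memory_gb=64))
web = hv.create_vm("web", cpus=4, memory_gb=16)
db = hv.create_vm("db", cpus=8, memory_gb=32)
print(web.cpus, db.memory_gb)   # each VM only "sees" its own slice
```

The key point the sketch captures is that the hypervisor, not the guest, decides how the one set of physical resources is divided, and that it refuses to promise more than the host actually has.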
There are many benefits to using virtualization: it saves resources such as money, time and hardware in day-to-day operations. Due to its popularity and long-term savings, many organizations are moving to virtual environments and eliminating the use of expensive dedicated hardware.
2) A Brief History on Virtual Machines
The early releases of OS/360 were strictly batch systems. Many System/360 users wanted time sharing, which led IBM to develop TSS/360, its official time-sharing system. It was eagerly anticipated, but by the time it came out it had cost over $50 million and was big and slow. IBM later developed CP/CMS, subsequently renamed VM/370, which was based on two observations drawn from time-sharing systems. The essence of VM/370 was to separate the two concepts below.
Multi-programming
Extended Hardware
The heart of the system, known as the Virtual Machine Monitor (also called the hypervisor), ran on the bare hardware and did the multi-programming, providing not one but several virtual machines, as shown in the figure below. Because each virtual machine was identical to the true hardware, each one could run any operating system that would run directly on the bare hardware, so different virtual machines could run different operating systems.
Another area where virtual machines are used is Java programs. When Sun Microsystems created the Java programming language, it also created a virtual machine, the Java Virtual Machine (JVM). The Java compiler translates Java code into bytecode, which the JVM interprets and executes. The advantage is portability: a compiled Java program can run on any machine that has a JVM.
3) Why Do We Need Virtualization
The main goal of virtualization is to manage workloads by transforming traditional machines for better scalability and utilization. A simple analogy helps. Imagine your friend Bob has a huge house with 10 rooms, kitchens, garages and attached bathrooms, yet he is the only person living there, in a single room. Bob utilizes only 10% of his house, and all the houses in the area are just as large.
Now imagine Peter also wants a place to stay and is thinking of buying another new house with around 10 rooms. If he does, he too will utilize only 10% of it. If everyone does this, a lot of space and resources will be under-utilized and wasted: think of the cost of purchasing the new houses, the cost of maintaining them all, and the resources thrown away.
After a while Bob realizes this and decides to fit a main door to every room, so that each can be entered from outside. He then gives away the extra rooms to other people, who can use them without spending money on new houses. Because each room has everything a house needs, these people feel as though they are living in separate houses of their own, even though they are not. This is virtualization: everyone feels like they are living in their own home. Bob's house is referred to as the Host House, and the other people's rooms as Guest Houses.
The same concept applies to the machines running in organizations. If an organization expands its business and scales its network, it needs more machines to handle the workload. But what if we could take an existing machine and separate it into different virtual machines, letting this host machine provide guest machines that run on top of it? Then we don't have to spend money on new machines: each guest looks and behaves like an actual machine to us. This lets you run the physical machine at its fullest capacity by distributing its resources as needed.
4) What is a Hypervisor
In general, a hypervisor is what provides the ability to run virtual machines on top of an actual machine. A hypervisor, also known as a virtual machine monitor (VMM), creates and runs virtual machines; it can be software, firmware or even hardware. The hypervisor provides a virtual operating platform for the guest operating systems and manages their execution.
The term hypervisor is derived from the word supervisor. Hyper means super, so it is more like a super-supervisor. The following diagram depicts the major role the hypervisor plays in virtualization.
Hardware: The lowest level is the underlying hardware, i.e. the physical components of the host machine.
Control Domain: The blue component is known as the Parent Partition, Control Domain or Dom0. It is the controller/monitor of the hypervisor. It does not sit on top of the hypervisor but inside it, with one side touching the hardware and the other the top. It has control over all the hardware beneath the hypervisor, and it is what creates, manages, connects and destroys the virtual machines. When we interact with the hypervisor's management interface, we are actually interacting with this Control Domain (Dom0).
Virtual Machine: On top of that sit the virtual machines, which run on the hypervisor. They can be of different sizes and run different operating systems; the Control Domain is responsible for managing all of them.
Virtual Network: Another thing that lies on top of the hypervisor is the virtual network, or virtual switch, which links everything together. The VMs and the Control Domain all attach to it. If any of the VMs needs to talk to the outside world, the virtual network must be connected to the network interface card (NIC) in the underlying hardware; once connected, the virtual network becomes an extension of the physical network card.
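The wiring described above can be sketched as a small model. Everything here is invented for illustration (there is no real `VirtualSwitch` API being shown): the point is simply that VMs and the control domain attach to the virtual switch, and that outside connectivity exists only once the switch is bridged to the physical NIC.

```python
class VirtualSwitch:
    """Toy virtual switch: VMs and the control domain plug into it,
    and it may optionally be bridged to the host's physical NIC."""
    def __init__(self):
        self.ports = []        # attached endpoints (VMs, control domain)
        self.uplink = None     # name of the physical NIC, if bridged

    def attach(self, endpoint):
        self.ports.append(endpoint)

    def bridge_to_nic(self, nic_name):
        # Once bridged, the virtual switch acts as an extension
        # of the physical network card.
        self.uplink = nic_name

    def can_reach_outside(self):
        return self.uplink is not None

vswitch = VirtualSwitch()
for endpoint in ("dom0", "vm1", "vm2"):
    vswitch.attach(endpoint)

print(vswitch.can_reach_outside())  # False: internal-only network so far
vswitch.bridge_to_nic("eth0")
print(vswitch.can_reach_outside())  # True: traffic can now leave via eth0
```

Before the bridge is made, the attached VMs can still talk to each other and to Dom0 over the virtual network; the uplink only matters for traffic leaving the host.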
5) Two Types of Hypervisors
There are two types of Hypervisors, namely:
Type 1: Native or bare-metal hypervisors - These hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems.
Type 2: Hosted hypervisors - These hypervisors run on a conventional operating system (OS) just as other computer programs do. A guest operating system runs as a process on the host. Type-2 hypervisors abstract guest operating systems from the host operating system.
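The distinction above can be summarized with a few well-known products. The dictionary below is just a convenient way to group them; the classification of each product as bare-metal or hosted is standard, but the code itself is only an illustrative sketch.

```python
# Well-known hypervisors grouped by type.
# Type 1 runs on bare metal; type 2 runs as an app on a host OS.
HYPERVISORS = {
    "VMware ESXi": 1,
    "Xen": 1,
    "Microsoft Hyper-V": 1,
    "Oracle VirtualBox": 2,
    "VMware Workstation": 2,
}

def bare_metal(catalog):
    """Return the type-1 (native) hypervisors, sorted by name."""
    return sorted(name for name, t in catalog.items() if t == 1)

print(bare_metal(HYPERVISORS))
```

In practice the line can blur (some hypervisors have characteristics of both types), but the type-1 vs type-2 split remains the most common way to categorize them.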
6) Hardware Virtualization
Hardware virtualization simply means the creation of virtual machines with their own virtual hardware, file systems and operating systems. The concept is based on a host operating system running on the actual machine. On top of this operating system run guest operating systems, each provided with independent resources, so that different operating systems can run side by side.
The words host and guest are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor, or virtual machine monitor.
7) Different Types of Virtualization
Server Virtualization
This is the type of virtualization that partitions a physical server into multiple virtual servers, each of which can serve a separate function or purpose. A server is a unit capable of handling many tasks and requests, so partitioning it allows its capabilities to be utilized far more broadly.
Network Function Virtualization
Here, the idea is to partition a network's key functions, such as directory services, file sharing and IP configuration, so that they can be distributed as virtual partitions across different environments. Virtualizing networks reduces the number of physical components (switches, routers, servers, cables and hubs) needed to create multiple independent networks, and it is particularly popular in the telecommunications industry.
Operating System Virtualization
Operating system virtualization happens at the kernel, the central task manager of the operating system. It is a useful way to run Linux and Windows environments side by side.