Since I’d also like to make this blog more accessible to people with different skill sets, here is a short introduction to virtualization, a technology that has made great strides over the last few years.
Servers are machines that, under ideal conditions, operate 24 hours a day, 7 days a week, 365 days a year. For stability and compatibility reasons, they often run only one application or service to achieve the highest possible uptime. However, we know that maintaining this level of service is very hard. In an age where multi-core, multi-threaded processors are the norm, leaving a quad-core processor sitting there as a file server or a simple application server is a waste of resources. Here is a short list of the costs involved.
Server operation cost:
- Electricity that the servers consume and energy used for cooling the server room.
- Actual physical space where 1 server = 1 slot in a rack.
- Time and money if anything decides to break down at any time, requiring a technician’s visit to replace a faulty motherboard or a memory module that either crashed the server completely or caused an avalanche of BSODs (or any other kernel panic colors).
- Money for space rental (if you don’t have your own data center).
- In most cases, the physical hardware runs at only a small percentage of its capacity.
- Storage may not be easily expanded.
All of this is remedied with one very effective solution: virtualization.
This technology enables multiple virtual machines running various operating systems to share one physical server under a so-called hypervisor. Depending on the server’s hardware capabilities and the applications’ needs, one physical server can run a few resource-demanding machines or many lightweight ones.
The hypervisor is a bare-bones OS (in the case of VMware, a POSIX-like system with a Unix-like command line thanks to the BusyBox libraries) installed directly on bare metal. Once installed, configured, and managed properly, it handles all the virtual machines’ requests for CPU, memory, and I/O, scheduling resources when and where they are needed. In effect, it translates the hardware instructions the virtual machines generate into instructions for the physical hardware. Think of it as a robot that catches requests from multiple sources and sorts them through a single output path.
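The robot analogy can be sketched in a few lines of Python. This is only a toy illustration of the multiplexing idea (the VM names and requests are made up), not how a real hypervisor schedules anything:

```python
from collections import deque

def multiplex(vm_queues):
    """Toy hypervisor-style multiplexer: take pending requests from
    several VMs and serialize them through one output path, round-robin.
    vm_queues: dict mapping a VM name to its list of pending requests."""
    queues = {name: deque(reqs) for name, reqs in vm_queues.items()}
    output = []  # the single output path to the physical hardware
    while any(queues.values()):
        for name in sorted(queues):  # deterministic round-robin order
            if queues[name]:
                output.append((name, queues[name].popleft()))
    return output

requests = {
    "vm1": ["read disk", "send packet"],
    "vm2": ["read disk"],
}
print(multiplex(requests))
# → [('vm1', 'read disk'), ('vm2', 'read disk'), ('vm1', 'send packet')]
```

No VM can monopolize the single output path: each one gets a turn per round, which is the essence of the sorting the analogy describes.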
A virtual machine is a set of files stored in a folder on a datastore that uses a special file system called VMFS (Virtual Machine File System). These files contain the VM configuration, the BIOS NVRAM, the virtual machine disks, the virtual machine swap file, snapshot deltas, and more.
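As a rough sketch, those files can be told apart by their extensions. The file names below are hypothetical, but the extensions (`.vmx` configuration, `.nvram` BIOS state, `.vmdk` virtual disk, `.vswp` swap, `-delta.vmdk` snapshot delta) are the ones VMware actually uses:

```python
# Map VMware file extensions to the role each file plays in a VM's folder.
VM_FILE_ROLES = {
    ".vmx": "configuration",
    ".nvram": "BIOS NVRAM",
    ".vmdk": "virtual disk",
    ".vswp": "swap file",
}

def classify(filename):
    """Return the role of a file in a VM folder based on its extension."""
    if filename.endswith("-delta.vmdk"):  # check before plain .vmdk
        return "snapshot delta"
    for ext, role in VM_FILE_ROLES.items():
        if filename.endswith(ext):
            return role
    return "other"

for f in ["web01.vmx", "web01.nvram", "web01.vmdk",
          "web01-000001-delta.vmdk", "web01.vswp"]:
    print(f, "->", classify(f))
```

Because a VM is nothing more than this set of files, copying or moving the folder effectively copies or moves the whole machine.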
- One physical server runs multiple virtual machines, so the hardware is used to its full potential when scaled well.
- The hypervisor (an ESXi host) can run from local storage, a USB drive, an SD card, or directly from memory (with the Enterprise version of VMware).
- VMs can communicate with each other and can be assigned physical NICs through virtual network switches.
- VMs on shared storage (FCoE / iSCSI / NFS) with clustered ESXi hosts can use High Availability or Fault Tolerance (VM mirroring) to survive host failures.
- How much memory and/or CPU a VM may use can be reconfigured dynamically.
- Devices attached to the computer you run the management console from (the VMware vSphere Client) can be attached to the VMs.
- VMs can be moved from one host to another without the guest OS being affected by the change of underlying hardware.
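To make the “full potential” point concrete, here is a back-of-the-envelope consolidation sketch. All the numbers are made up; memory overcommit works because idle VMs rarely touch all of their configured memory at the same time:

```python
def vms_per_host(host_ram_gb, vm_ram_gb, overcommit_ratio=1.0):
    """How many VMs of a given memory size fit on one host.
    overcommit_ratio > 1.0 models memory overcommitment."""
    return int(host_ram_gb * overcommit_ratio // vm_ram_gb)

# A host with 128 GB of RAM and 8 GB VMs:
print(vms_per_host(128, 8))                        # → 16
print(vms_per_host(128, 8, overcommit_ratio=1.5))  # → 24
```

In practice the hypervisor also weighs CPU, storage, and network load, so real capacity planning is more involved than this one-line division, but the arithmetic shows why one well-scaled server can replace many idle ones.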
And the eventual outcome? Virtualization saves resources: power, space, and time, most notably the downtime that a failure of a single physical host would otherwise cause by rendering an application unusable.