Enabling vSGA on an nVidia GRID Powered ESXi Server

We have a new lab environment, and we were lucky enough to have an nVidia GRID K2 included in one of the servers for testing out its rendering capabilities in a virtualized environment. When I had some time to play around, I made a first step towards drawing on the GRID’s power and deployed a VM using the shared 3D acceleration method, also known as vSGA.

First, let’s talk about the types of graphics rendering we have in a virtualized environment:

Rendering techniques in ESXi

  • Software 3D – This is the basic graphics adapter that gets installed on all VMs by default if no 3D acceleration capabilities exist on the ESXi host. Everything you see is computed by the CPU – this is just enough for classic VM administration, data manipulation, etc.
  • Virtual Shared Graphics Acceleration (vSGA) – This method aggregates the available GPU power and lets it be split between virtual machines using 3D hardware acceleration. A vSGA-enabled VM can have up to 512MB of video memory and can be snapshotted, vMotioned and use all the other fancy features like any other VM – provided the other hosts in the cluster have an nVidia GRID present as well. Several workstations can use the GPU at the same time. vSGA mode is good for basic drafting and power users requiring a bit of 3D acceleration – for example accelerating Windows Aero, 1080p video and multi-monitor usage – with OpenGL 2.1 and DirectX 9 available. The 3D acceleration is done via the nVidia VIB and an Xorg server handled by the hypervisor (see the .vmx sketch after this list).
  • Virtual Dedicated Graphics Acceleration (vDGA) – This is basically PCI passthrough of a GPU, allowing you to fully draw upon one of its cores. It requires no nVidia driver to be installed on the ESXi host, nor the Xorg server to be running. Once one of the GPUs is passed to the VM, it is constrained like all other PCI-passthrough enabled VMs: no snapshots, no vMotion, and all memory has to be reserved for the machine. The only thing required is a driver for the GPU itself inside the guest OS, letting you draw on its raw processing power, including DirectX 11, OpenGL 4.3 and CUDA.
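
To make the renderer choice more concrete, here is a minimal .vmx sketch for the vSGA case – the values are illustrative assumptions, not taken from my lab config:

# enable 3D support for the VM
mks.enable3d = "TRUE"
# pin the renderer to the GPU ("automatic" would allow a fallback to software 3D)
mks.use3dRenderer = "hardware"
# video memory in bytes – 128MB here; vSGA tops out at 512MB
svga.vramSize = "134217728"

With vDGA there is nothing of the sort to tune – the GPU simply shows up among the VM’s PCI passthrough devices and the guest driver takes over.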

Below you will find a diagram that summarizes the rendering methods described above:

[Diagram: Software 3D vs. vSGA vs. vDGA rendering paths]

About nVidia GRID

nVidia GRID is a special line of engineering-class GPUs. These are capable of handling high-resolution multi-monitor imaging, combined with the advanced rendering techniques OpenGL provides for a detailed picture.

These GPUs are used in various engineering and drafting applications (CATIA, SolidWorks, etc.), medical and geological rendering, various research simulations (the ANSYS implicit solver, GROMACS molecular dynamics, climate change modelling, …) and much more.

Their power is also drawn from GPGPU (General-Purpose computing on Graphics Processing Units) capabilities thanks to CUDA, a highly parallel programming platform that runs its code in parallel on so-called CUDA cores – the building blocks of the physical chips themselves.

What makes GRID so special is its multi-GPU architecture, which is ideal for use with virtualization – there are two types of GRID GPUs:

  • nVidia GRID K1 – has 4 Kepler GPUs with a total of 16 GB DDR3 VRAM – 4GB for each GPU – and a TDP of 130W. It is designed more towards dense user configurations in a vSGA environment (up to 8 users per GPU, 32 in total per card) or can be designated for power users and entry-level engineering tasks with vDGA.
  • nVidia GRID K2 – is equipped with 2 high-grade Kepler GPUs containing more CUDA cores and a total of 8GB GDDR5 VRAM, again with 4GB per chip. The TDP is 225W – 95W more than the K1 has. This card is more suited for straight GPU passthrough to two powerhouse-performance VMs.

You can read up more on these GPUs in the nVidia GRID datasheet.

Enabling vSGA on the VM

You need to have the nVidia VIB installed and a Windows 7 (or Windows Server 2008 R2+) guest OS. If you have decided to run the Windows Server platform, you must choose the desktop Windows edition as its guest OS in the VM settings, or work around it by editing the .vmx file and changing the guestOS parameter (if your guest OS is Windows Server 2008) to:

guestOS="windows7-64"

and then re-registering the machine in vCenter. This is what makes the hypervisor enable the 3D acceleration tickbox for you.
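
If you would rather do the re-registration from the ESXi shell than from the vSphere Client, here is a quick sketch – the Vmid, datastore and VM name are placeholders you need to substitute:

vim-cmd vmsvc/getallvms                 # note the Vmid of your VM
vim-cmd vmsvc/unregister 42             # unregister it (42 is an example Vmid)
vim-cmd solo/registervm /vmfs/volumes/datastore1/Win2008R2/Win2008R2.vmx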

[Screenshot: the 3D support tickbox enabled in the VM’s video card settings]

Enabled 3D Support + nVidia VIB = vSGA
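
To confirm the host side of that equation, you can check from an SSH session that the VIB is installed and that the Xorg service is running (paths as on ESXi 5.x):

esxcli software vib list | grep -i nvidia   # the GRID driver VIB should be listed
/etc/init.d/xorg status                     # vSGA needs the Xorg service running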

After deploying the server and VMware Tools, which provide the enhanced 3D driver used by vSGA, you can turn on the Aero desktop acceleration to verify that everything works as intended – add the Desktop Experience feature in Server Manager, set the Themes service startup to Automatic and reboot the server. Now you will have support for DirectX 9, OpenGL 2.1 and multiple monitors for your VM in a Virtual Desktop Infrastructure. You can see that in DXdiag as well:

[Screenshot: DXdiag showing the VMware SVGA 3D display driver]
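
If you prefer to script the Desktop Experience step above instead of clicking through Server Manager, this PowerShell sketch should do it on Server 2008 R2 – I have not run it in this lab, so treat it as an assumption:

Import-Module ServerManager
Add-WindowsFeature Desktop-Experience       # installs the Aero bits
Set-Service Themes -StartupType Automatic   # Aero needs the Themes service running
Restart-Computer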

Unfortunately I can’t tell you much about the acceleration of high-definition content or the fluidity of basic 3D applications, because I was accessing this VM over a WAN with considerable latency. But don’t worry, I have already prepared something for you – in a future post 🙂

A few shell commands

If you SSH into the ESXi host where such activity is present, there are a few useful commands for you to check out. The first is gpuvm – it shows you which VMs are using the GPU at the moment.

gpuvm

The second, much more interesting one is nvidia-smi – you can monitor and configure your GPU with this utility.

nvidia-smi
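
A couple of invocations I find handy – these are standard nvidia-smi flags, so they should behave the same on the ESXi build of the tool:

nvidia-smi -q        # full query: temperatures, memory and per-GPU utilization
nvidia-smi -l 5      # keep refreshing the summary every 5 seconds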

I will be experimenting with this GPU in the coming days. Thanks for visiting and see you soon!
