Category Archives: Virtualization

Everything related to virtualization will be collected in this category.

VMware Workstation-Powered Lab Part 2: Installing ESXi

In the previous part of this series, I showed you where to download the packages required for the upcoming laboratory. In this part I will guide you through a very basic VMware ESXi installation on VMware Workstation. For this you will need VMware Workstation installed on your system, along with the hypervisor ISO. Without further ado, let's begin.

Continue reading

VMware Workstation-Powered Lab Part 1: Preparing the Lab

Hello and a hearty welcome to my series on building a virtualization lab! In this first part of the lab preparation I'll show you where to get the needed binaries, how to set up networking in VMware Workstation, and how to create a software RAID0 out of three SATA HDDs. I have been wanting to do this for quite some time – and finally, there is no time better than the present, because the lab can run on the cutting-edge software that was provided to me in the form of vExpert licenses.

On the hardware side – because the budget for this vSphere lab is constrained (it equals zero at the moment) – I have decided to make the most of what I have available. This means I will be utilizing my whitebox's hardware with VMware Workstation 11 and creating a basic lab from the resources at my disposal.

Lab Specifications

At first we’ll take a look on what I have available – only the compute resources used for virtualization will be noted below:

CPU – Intel Core i7-2600K @ 4.05 GHz – 4 cores, 8 threads.
RAM – 16 GB of 1866 MHz DDR3 memory.
Storage – a software RAID0 of three SATA drives, plus perhaps a small 60 GB LUN from my 120 GB SSD – presented to the hosts by StarWind Virtual SAN, a software iSCSI solution I got an NFR license for (again, thank you, vExpert Programme!).

This hardware configuration will enable me to run two ESXi hosts, each with 2 vCPUs and 6 GB RAM. As far as software goes, I will be using Windows Server 2012 R2 – both Core (for the domain controller) and the full "GUI" (non-Core) edition, which will serve as a management console for the other servers.
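For reference, nested ESXi needs hardware virtualization exposed to the guest. A minimal sketch of the relevant .vmx entries for one such Workstation VM follows – the guestOS value shown is illustrative, since Workstation's "VMware ESXi" guest type writes an equivalent for you:

vhv.enable = "TRUE"
guestOS = "vmkernel5"
numvcpus = "2"
memsize = "6144"

The vhv.enable key corresponds to the "Virtualize Intel VT-x/EPT or AMD-V/RVI" checkbox in the VM's processor settings.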

Continue reading

Stress Testing an ESXi Host – CPU and MCE Debugging

I needed to stress test a component inside a physical server – this time it was the CPU – and I'd like to share my method here. I did a memory stress test using a Windows VM in a previous article. I will be using a Windows VM again, but this time it will be Windows Server 2012 Standard Edition, which can handle up to 4 TB of memory and up to 64 sockets containing 640 logical processors – a very nice bump from Windows Server 2008 R2 Standard, which had a compute configuration maximum of 4 sockets and 32 GB RAM.

The host had crashed several times into a PSOD with uncorrectable Machine Check Errors (MCEs). From the start I had a hunch that the second physical CPU or the system board was faulty – but these had already been replaced, and the host crashed yet again. I took a closer look at the matter and went on to stress test this ill host's CPUs. Continue reading
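While the load runs inside the VM, the host side can be watched for machine check events from the ESXi shell – a minimal sketch (log paths and rotation naming differ slightly between ESXi versions):

# follow the live VMkernel log for machine check entries:
tail -f /var/log/vmkernel.log | grep -i mce
# after a crash, search the rotated logs as well:
zcat /var/run/log/vmkernel.*.gz | grep -i mce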

VMware vCloud Air Private Cloud OnDemand: Internet Connectivity Test

I have tested the Internet connectivity that is provided with the VMware vCloud Air OnDemand service – and the results are pretty impressive! I went through assigning a public IP address, interconnecting it with the internal gateway, did some NAT and firewall configuration, and eventually tested the connection speeds with speedtest.net. See for yourself in the screenshots, accompanied by a few words, below. Continue reading

nVidia GRID K2 vDGA with VMware Horizon View using PCoIP

Today I'd like to share a very interesting lab session with you all – the result will be a dedicated virtual machine with one of the nVidia GRID K2's two GPUs passed through to it and accessed via Horizon View's PCoIP protocol, where we'll take a small look at its performance and parameters.

Prerequisites

You will not need much for this very basic VMware Horizon environment – a VM where either vDGA or vSGA is present, with the Horizon View Agent and the Horizon View Agent Direct-Connection Plugin installed to allow you to connect to it via PCoIP. You will be connecting to this VM via the VMware Horizon Client. Continue reading
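As a quick sanity check before any of this, you can confirm that the host actually sees the GRID card from the ESXi shell – a minimal sketch:

lspci | grep -i nvidia
# a GRID K2 should appear as two GK104GL display controllers,
# one per GPU on the board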

Running 3DMark 2000 In a vSGA-Enabled Virtual Machine

Since I got the vSGA feature working in a virtual machine, I wanted to see how powerful this rendering technology would be. Sure, there is software dedicated to workstation performance benchmarks, but my mind came around to one application that was widely used to compare rigs (and still is, although in a much newer version – Futuremark's 3DMark 2013). It was MadOnion's (love that company's name) 3DMark 2000, and I remember running it countless times after swapping my 4 MB graphics adapter for a 16 MB nVIDIA TNT2 Ultra, just to see the new, fluid FPS.

This post is a little nostalgic, sure, but seeing this benchmark run in a virtual environment, actually using a fraction of a GPU that is vastly superior to the PCI and AGP-powered adapters that were available back then, evoked a smile on my face – and memories, oh the memories.

Anyway, without further ado, I'd like to share the 3DMark results with you – 1024×768 at 32 bpp – nothing too fancy by today's standards (and the most I could squeeze out of the settings). I didn't expect an outright blast from the vSGA technology, mainly because the GPU is being partitioned, and also because the maximum you can get out of vSGA is DirectX 9 and OpenGL 2.1.

[Screenshot: 3DMark 2000 result summary]

Wow, almost 30k 3DMarks, good job!

Although many may view this as a redundant thing to do – it's these little things that occasionally brighten my profession 🙂 I'll be digging around vSGA and vDGA in the coming weeks, so this is just a little taste of things to come.

Enabling vSGA on an nVidia GRID Powered ESXi Server

We have a new lab environment and were lucky enough to have an nVidia GRID K2 included in one server for testing out its rendering capabilities in a virtualized environment. When I had some time to play around, I made a first step towards drawing on the GRID's power and deployed a VM that uses the shared 3D acceleration method, also known as vSGA. Continue reading
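In broad strokes, vSGA needs the NVIDIA VIB driver installed on the host first – a minimal sketch from the ESXi shell, where the datastore path and VIB filename are illustrative placeholders for whatever driver package matches your ESXi build:

esxcli system maintenanceMode set -e true
esxcli software vib install -v /vmfs/volumes/datastore1/NVIDIA-VMware-GRID.vib
esxcli system maintenanceMode set -e false
# after a host reboot, verify the driver and the Xorg service vSGA relies on:
nvidia-smi
/etc/init.d/xorg status

Later on, the gpuvm utility on the host will list which VMs are consuming the GPU.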

VMware vCloud Air: Virtual Private Cloud OnDemand Impressions

Foreword

I have noticed that there is a hybrid cloud offering called VMware vCloud Air – Virtual Private Cloud OnDemand. This Infrastructure-as-a-Service (IaaS) offering allows you to have your own environment in VMware's data centers. Everything is metered on a pay-as-you-go basis – you pay for each resource used: vCPUs, RAM, hard drive storage (you can choose between SSD-accelerated and "platter-based" only), plus licensing fees for the Windows OS family. It has a simple, friendly user interface, but your VM infrastructure administrators will want to use the integrated vCloud Director interface that is also included.

The current promotion gives everyone €300 of credit to spend during the first 90-day trial – if you are interested, go and check it out. Since I like trying out new things, I'd like to share my first moments with this brand new service. Continue reading

HyperThreading: What is it and does it benefit ESXi?

I often come across the question of HyperThreading and its benefits – in personal computing, but more importantly over the last few years, in virtualization. I'd like to talk about what HyperThreading is for a moment, and show you whether it benefits a virtualized environment.

What is HyperThreading?

Today, HyperThreading (HT) technology is present on almost every Intel processor, be it the Xeon or Core i3/i5/i7 series. Basically, it splits one physical core into two logical cores, but the term "splitting" is somewhat inaccurate and confuses many consumers into thinking that when they run a 2.5 GHz 4-core HyperThreaded CPU, they immediately have 8 effective cores carrying a full processing capability of 20 GHz – mainly because when you say you split something, you imply it has been divided into two equal parts (or at least that's what I think, anyway). Continue reading
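Incidentally, you can see how ESXi counts packages, cores and logical threads straight from the shell – a minimal sketch, with output abbreviated from a hypothetical two-socket host (field layout may differ slightly between ESXi versions):

~ # esxcli hardware cpu global get
   CPU Packages: 2
   CPU Cores: 12
   CPU Threads: 24
   Hyperthreading Active: true
   Hyperthreading Supported: true
   Hyperthreading Enabled: true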

1GbE Intel NIC Throttled to 100Mbit By SmartSpeed

We had a case on one of our ESXi hosts equipped with an Intel Corporation 82571EB Gigabit Ethernet Controller – although it was rated at 1 Gbit, we were unable to autonegotiate anything higher than 100 Mbit. When we set it manually to 1 Gbit, the NIC disconnected itself from the network. Every other setting worked – 10 Mbit and 100 Mbit, both half and full duplex. We investigated with our network team and tried forcing 1 Gbit on the switch, which also brought the NIC down.
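For reference, this is roughly how the speed changes were made from the ESXi shell (vmnic6 was the adapter in question; yours will differ):

# force 1 Gbit full duplex – in our case this dropped the link:
esxcli network nic set -n vmnic6 -S 1000 -D full
# return the NIC to autonegotiation:
esxcli network nic set -n vmnic6 -a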

I delved deeper into this issue and observed the VMkernel log via tail -f while I forcibly disconnected the NIC and reconnected it again via esxcli.
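A minimal sketch of that capture, assuming two SSH sessions to the host:

# in the first session, follow the VMkernel log:
tail -f /var/log/vmkernel.log
# in the second, bounce the NIC to force renegotiation:
esxcli network nic down -n vmnic6
esxcli network nic up -n vmnic6

Among the messages that scrolled by, one line caught my attention: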

vmnic6 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
e1000e 0000:07:00.0: vmnic6: Link Speed was downgraded by SmartSpeed
e1000e 0000:07:00.0: vmnic6: 10/100 speed: disabling TSO

I immediately picked up on SmartSpeed and tried to find a way to disable it – that is, until I found out, many discussion threads later, that SmartSpeed is an intelligent throttling mechanism that is supposed to keep the connection running at various link speeds when an error is detected somewhere on the link path. The switches were working fine and the NIC didn't detect any errors, so the next thing to check was the cabling.

I arranged a cable check with the data center operators and, what do you know – replacing the cables with brand new ones solved the issue! Sometimes the failing component that causes you a headache for a good few hours can be a "mundane" piece of equipment such as a patch cable.