Desktop Virtualization Part 2 – Pieces of the Puzzle

 

Desktop virtualization is a complex topic with many options and paths, each with different benefits and drawbacks. Below, you will find a graphic with a high-level view of each piece of the virtual desktop puzzle, and I intend to discuss each piece briefly over the next few posts. You can think of the graphic as a roadmap: it starts with a physical server sitting on your desk, works through the decisions you would need to make, and ends with the end user connecting to a virtual desktop hosted on that server. Keep in mind that I have a full-featured virtualization strategy in mind and that a greatly simplified solution is possible; I will point out where you could trim steps for a more basic setup as I go. Most of the options I describe will work for a basic design, but once graphics requirements or expectations come into play, these decisions become more important. I will not give an exhaustive explanation of every piece, but rather highlight the key decisions in the process, and I will also describe my own experience testing these different technologies.

Server Hardware

The first topic I want to discuss is the hardware that will host your virtual desktops. Many of the considerations are the same as for hosts that run standard virtual servers, but a few things bear mentioning. Storage I/O is even more critical for virtual desktops because there are so many virtual machines and a high likelihood of many of them starting at the same time (the classic "boot storm"). Because of this, all-flash or tiered storage is highly recommended. On the topic of CPU (and memory), I would recommend caution when reading vendor whitepapers. There are some extraordinary density claims out there that suggest allocating ¼ of a CPU per user or even less. Remember that these come from highly tuned environments running basic tasks; for more general-purpose use or higher-end graphics, they are completely unreasonable. You do not want training videos to bring your infrastructure to a crawl. This does not preclude high densities (many desktops hosted on one server), but the solution needs to be validated. The next few posts will be a little on the technical side, but they won't all be this deep.
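To make the density warning concrete, here is a rough, back-of-the-envelope sizing sketch in Python. All of the per-desktop figures (vCPUs, RAM, boot IOPS) and the host specifications are assumptions for illustration, not vendor guidance; substitute numbers measured from your own pilot workloads.

```python
import math

# Rough capacity sketch for a virtual desktop host.
# Every figure below is an assumed, illustrative value -- not a benchmark.

def hosts_needed(users, vcpus_per_user, ram_gb_per_user,
                 host_cores, overcommit_ratio, host_ram_gb):
    """Estimate how many hosts a desktop pool needs, bounded by CPU and RAM."""
    effective_vcpus = host_cores * overcommit_ratio
    by_cpu = users * vcpus_per_user / effective_vcpus
    by_ram = users * ram_gb_per_user / host_ram_gb
    # Whichever resource runs out first sets the host count.
    return max(by_cpu, by_ram)

# A whitepaper-style estimate (~1/4 vCPU per user) vs. a more conservative
# plan for mixed workloads with occasional video playback.
optimistic   = hosts_needed(200, 0.25, 2, host_cores=32, overcommit_ratio=4, host_ram_gb=512)
conservative = hosts_needed(200, 1.0,  4, host_cores=32, overcommit_ratio=4, host_ram_gb=512)
print(math.ceil(optimistic), "hosts (whitepaper-style density)")
print(math.ceil(conservative), "hosts (conservative, mixed workloads)")

# Boot storms: if most desktops start in a short window, peak IOPS can dwarf
# steady-state load -- one reason all-flash or tiered storage is recommended.
boot_iops = 200 * 300   # ~300 IOPS per booting desktop is an assumed figure
print(boot_iops, "peak IOPS during a simultaneous boot of 200 desktops")
```

With these assumed numbers, the whitepaper-style ratio squeezes 200 users onto a single host, while the conservative plan calls for two, and the gap only widens once video or graphics-heavy workloads enter the picture.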

 

Although you can host virtual desktops on the same type of hardware as standard virtual servers, graphics requirements may force you to consider other details. If you will need physical GPUs (which I think provide a lot of value in virtual desktop environments), you will need to consider the form factor of your servers. You can get one or two GPUs into a 2U server, but these configurations are somewhat uncommon. There are more GPU options in the 4U and 5U form factors because there is simply more space for these add-in cards, and there can be more room on the motherboard for additional PCIe slots. The larger form factors also tend to allow better ventilation, which becomes a big factor with the high heat output of GPUs.
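A quick way to see why form factor matters is to compare GPU-accelerated desktop capacity per rack unit across chassis sizes. The sketch below uses assumed card counts and an assumed users-per-GPU figure purely for illustration; supported GPU configurations vary by vendor and model.

```python
# Back-of-the-envelope GPU density per rack unit for two chassis options.
# Card counts and users-per-GPU are assumptions for illustration only.

chassis_options = [
    {"name": "2U server", "rack_units": 2, "gpus": 2},
    {"name": "4U server", "rack_units": 4, "gpus": 6},
]

users_per_gpu = 16  # assumed shared-GPU density; varies widely by profile and workload

for c in chassis_options:
    users = c["gpus"] * users_per_gpu
    print(f'{c["name"]}: {users} GPU-accelerated desktops '
          f'({users / c["rack_units"]:.0f} per rack unit)')
```

Even with these rough numbers, the larger chassis comes out ahead per rack unit, before the ventilation advantage mentioned above is even considered.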

 

The last area of hardware I want to discuss is the graphics hardware itself. Again, you can do without it, but for the best user experience I recommend pursuing this route. There are two ways to do graphics in virtual desktops: either a dedicated graphics card is passed through directly to each virtual machine (no sharing), or a single GPU is shared among multiple virtual machines. Dedicated GPUs are a viable option, but only at very low densities (I have never found a server with more than 11 PCIe slots). Shared or “virtualized” GPUs scale much better, but have their own considerations. NVIDIA has a special line of “GRID” cards designed for this, and it has collaborated with both Citrix and VMware to provide this solution. Each GRID card can be divided among anywhere from 2 to 32 virtual desktops, depending on performance needs. While this has some big names behind it, the price is very high. Microsoft has its own implementation of graphics virtualization called “RemoteFX”. This can be a little confusing because the name “RemoteFX” refers to a set of features that encompasses graphics virtualization as well as a portion of Microsoft's remote desktop protocol. RemoteFX received significant improvements in Server 2016 and is worth another look if you haven't seen it since 2012. RemoteFX can run on lower-cost cards than the GRID line, but I haven't seen performance benchmarks comparing the two.
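For shared GPUs, density usually comes down to how the card's frame buffer is carved into per-desktop slices. The sketch below is illustrative only: the 8 GB card size and the profile names and sizes are assumptions, not actual GRID or RemoteFX specifications, which should come from NVIDIA's or Microsoft's documentation.

```python
# Sketch of how a shared GPU's frame buffer limits vGPU density.
# The card size and profile sizes are assumed values for illustration.

CARD_FRAMEBUFFER_GB = 8

profiles_gb = {
    "light task worker": 0.5,   # smallest slice -> highest density
    "knowledge worker":  1.0,
    "designer":          2.0,
    "power CAD user":    4.0,   # largest slice -> lowest density
}

for name, fb in profiles_gb.items():
    print(f"{name}: {int(CARD_FRAMEBUFFER_GB // fb)} desktops per card")
```

The smaller the slice each desktop receives, the more desktops fit on a single card, which is exactly the trade-off behind the 2-to-32 range mentioned above.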

 

My intention here isn’t necessarily to give a “how to” for choosing server hardware, but rather to highlight some of the considerations. In summary: don’t overlook storage performance, be cautious with the whitepapers, and consider physical GPUs for a better user experience, even though they will also mean planning for more rack space. The next post will dive into virtualization platforms, which is the most important decision in this process.

 

Michael Richardson – I’m an IT Systems Analyst / Project Manager for Midwest Data Center, a former IT Salesman, and a Youth Pastor at my local church. I live in Maryville, MO and I enjoy learning and implementing new technologies for businesses, solving problems and puzzles, and teaching about my faith.