This is the second post in my series about Windows Multipoint Server (WMS). If you missed Part 1 (Overview), please feel free to read it HERE. In this installment, I’ll be taking a look at some sizing and performance considerations.
When planning a WMS deployment, you first need to weigh two major variables: workgroup vs. domain, and physical vs. virtual server. In this series I’ll be focusing primarily on domain-joined virtual servers, but know that WMS works well in a workgroup and can be installed on bare metal. There is very little difference in setting up and sizing a Multipoint Server in a workgroup as opposed to a domain environment. When it comes to virtual servers over physical servers, however, there are several benefits I want to cover.
First of all, I generally push for centralizing technology and management. I would rather not have an expensive piece of equipment out where students can access it. While WMS can be placed on a physical box directly in a classroom, I tend to shy away from this; put it in a server rack where it is safe. This does mean IT staff will need to manage the server, but even though the built-in dashboards are relatively straightforward to use, it seems best to let IT staff handle management instead of delegating it to a teacher or lab attendant.
Secondly, while WMS can be installed on bare metal, it has some finicky graphics requirements. You might suggest researching which graphics cards are and are not supported and buying accordingly, but the requirements are murky. Normally, picky graphics requirements would suggest virtualizing is out of the question; in this instance, it removes the issue altogether. Virtualizing WMS seems to bypass the graphics hardware requirements entirely, and things just work.
Finally, when you virtualize WMS, you gain all the normal benefits of virtualization: flexibility/scalability of resources, mobility across hosts, hardware cost savings, etc.
LICENSING – I don’t want to go in depth on the licensing requirements, but know that there are two versions of WMS 2012: Standard and Premium. Premium covers more users per server (20 vs 14), allows joining a domain, and supports dual-processor configurations, where Standard only allows a single processor. In addition to licensing each server, you will need to license the users or devices that access the server with CALs. There is a combo-CAL pack specifically for WMS.
LIMITATIONS – The first real sizing consideration I want to mention is that each WMS server can only support up to 20 users. This is a hard limit specified as a licensing requirement, but it may also have to do with the number of USB devices Windows allows. For deployment, this means any lab with more than 20 stations will need to be split across two or more virtual servers. This isn’t as bad as it sounds, since it is possible to manage multiple WMS servers from within the WMS Manager. There is also a cool benefit here: the latest version of Dell’s WYSE network zero-client driver supports a failover configuration.
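To make the split concrete, here is a back-of-the-envelope sketch in Python (my own illustration, not anything shipped with WMS) for working out how many virtual servers a lab needs under the 20-user cap:

```python
import math

USER_CAP = 20  # hard per-server user limit in WMS 2012

def servers_needed(stations: int) -> int:
    """Minimum number of WMS virtual servers to cover one lab."""
    return math.ceil(stations / USER_CAP)

# A 25-station lab needs two VMs (e.g., split 13 and 12 stations).
print(servers_needed(25))  # -> 2
```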
Before I go too in-depth with hardware requirements, I want to mention some limitations. WMS is not intended for high-end graphics work. It does alright with low-resolution video (480p), Flash games, and the like, but don’t expect to play high-definition video or do any sort of video editing on it. It was not designed for this and it will not work well. That being said, it is a very good fit for Office software, web browsing, testing, and other programs you would expect to work well on a Terminal Server.
NETWORKING – Most of the hardware sizing follows standard Terminal Server sizing very closely, but there is one big departure: networking. When using network zero-clients, bandwidth usage is quite high. The technology is essentially USB over IP; when you play a video, you are sending display data over the network using something akin to a USB display adapter. In my testing, this can max out the 100Mbps NIC that many of these zero-clients have. Extrapolate that out and you very quickly saturate any 1Gbps link. For the schools I have this in, it provided a reason to upgrade their network backbone to 10Gbps. This may sound scary to smaller schools or businesses, but the cost of 10Gbps has come down significantly and, factored in with the other cost savings, is not prohibitive. Bandwidth requirements could also be met with NIC teaming/bonding if needed. My recommendation is to start with 100Mbps per client for the first lab, then add 50Mbps per client for each additional lab.
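Here is that rule of thumb as a quick Python sketch (my own illustration, using the numbers above), showing how fast the backbone requirement grows:

```python
def backbone_mbps(labs: int, clients_per_lab: int) -> int:
    """Rule of thumb: 100Mbps per client for the first lab,
    50Mbps per client for each additional lab."""
    if labs < 1:
        return 0
    return clients_per_lab * 100 + (labs - 1) * clients_per_lab * 50

# Four labs of 25 clients: 2500 + 3 * 1250 = 6250Mbps. A 1Gbps
# backbone saturates almost immediately, which is why 10Gbps
# (or NIC teaming) makes sense.
print(backbone_mbps(4, 25))  # -> 6250
```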
MEMORY – This is going to be very similar to sizing a terminal server. I would start with 4GB for the system and add another 512MB per concurrent user. For most labs, that lands you at 12-16GB per VM. This may sound a little high for a terminal server with so few users, but students tend to open a lot more applications than you might see in a call center or similar environment.
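The memory rule works out like this (again, my own sketch of the formula above):

```python
def vm_memory_gb(concurrent_users: int) -> float:
    """4GB for the system plus 512MB per concurrent user."""
    return 4 + concurrent_users * 0.5

# A full 20-user server: 4 + 10 = 14GB; I would round up to 16GB
# for headroom.
print(vm_memory_gb(20))  # -> 14.0
```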
DISK – There are two things to keep in mind when choosing a storage solution: capacity and I/O (speed). For WMS, capacity is not a big concern, as I recommend storing user files elsewhere and using the disk protection feature. One thing to be aware of with disk protection is that it creates an additional partition roughly 2x the size of your memory (e.g., a server with 16GB of memory will get a partition of about 32GB). In my experience, this means you should plan on about 120GB for each VM, which gives you enough room for the OS, apps, temporary user data, and the extra partition for disk protection. In this environment, I/O matters more than storage space: you want users to log in quickly and access programs quickly, so I would definitely recommend a RAID solution aimed at I/O (RAID 10 is probably best here). With the falling costs of flash storage, that may be the way to go.
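Here is how the ~120GB figure breaks down in a quick sketch. The OS/apps and temp-data allowances are my own rough assumptions for illustration; only the 2x-memory protection partition comes from observed behavior:

```python
def vm_disk_gb(memory_gb: float) -> float:
    """Disk protection adds a partition roughly 2x the size of RAM."""
    os_and_apps = 60     # assumed: Windows, Office, other lab software
    temp_user_data = 25  # assumed: temporary user data between resets
    return os_and_apps + temp_user_data + 2 * memory_gb

# A 16GB VM: 60 + 25 + 32 = 117GB, in line with planning ~120GB per VM.
print(vm_disk_gb(16))  # -> 117
```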
CPU – As I’ve already said, processor utilization will be very similar to a Terminal Server, though there is some additional overhead for the Multipoint services. I would aim for one core for every 2-4 users depending on your workload, which puts you at 6-10 cores for a 20-user deployment. I wouldn’t be surprised to see that approach full utilization if all the students opened a visually heavy educational webpage at the same time. That being said, this is where virtualization can really shine: while one lab is running at high utilization, another lab may not be, which lets you scale out without dedicating that much processing power to each server.
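As a final sketch (my own illustration of the rule above), the core estimate before Multipoint overhead looks like this:

```python
import math

def core_estimate(users: int) -> tuple:
    """One core per 2-4 users, before Multipoint services overhead."""
    light = math.ceil(users / 4)  # light workloads: Office, browsing
    heavy = math.ceil(users / 2)  # heavy workloads: media-rich sites
    return light, heavy

# 20 users -> (5, 10); add a core or so for Multipoint overhead and
# you land in the 6-10 core range mentioned above.
print(core_estimate(20))
```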
REAL WORLD – In the screenshot above, you can see several VMs running on this one host. The host has 2x 12-core processors, 256GB of memory, 8x 300GB 15K SAS drives, and 10Gbps networking. When we sized this server, we were still new to the product and wanted to be very sure we had ample resources and room for growth. We are running 4 labs of 25 users each off this host, with 2 virtual servers per lab. Based on our resource utilization, it looks like we could hit double that capacity (200 users) on this host, thanks to the beauty of virtualization.
I hope you have enjoyed this post on how to size a Windows Multipoint server. While every situation will be different, I have tried to give some concrete numbers and direction on sizing. I’ll be following this post with a few more about installation, the zero-clients, common issues, and the future of WMS.
Michael Richardson – I’m an IT Systems Analyst / Project Manager for Midwest Data Center, a former IT Salesman, and a Youth Pastor at my local church. I live in Maryville, MO and I enjoy learning and implementing new technologies for businesses, solving problems and puzzles, and teaching about my faith.