Virtual Desktop Infrastructure (VDI) is a technology that has been around for over a decade now. Though most IT engineers know the term and what the technology does, not many have actually implemented such an environment. Most associate it with vendors such as Citrix, VMware or Microsoft.
Every VDI Environment is unique. Each setup needs to be tailored to your customer’s specific needs.
A VDI environment can be built around Remote Desktop Services (RDS) farms or linked-clone desktops (floating or dedicated), or a combination of the two, possibly combined with hosted applications. As you probably know by now, there is no ‘one size fits all’ approach here. The different possibilities can be combined in a variety of ways; it all comes down to what your customer/your boss/the business requires…
My advice would be to get a good grip on the basics of the technology you’re going to use to set up the VDI environment, whether it is VMware, Microsoft, Citrix, Dell vWorkspace, XenDesktop or any other vendor.
The other part of the story of course is what your customer wants to use the VDI environment for.
It’s clear that this second angle, the requirements, is a lot more difficult to prepare for than the first. Project scope changes will happen. In my experience, however, a good grip on the technology enables you to handle changing requirements as they come along. That makes meeting the requirements more challenging, but not impossible!
Let me show you an example of an environment we set up for a customer.
A customer wanted to use VMware Horizon View 6.2 to set up a VDI infrastructure. The environment is expected to host up to 400 floating linked clones. You might think, “that’s a basic setup, not really challenging.”
The main customer requirements for the solution we built were that the environment had to span the customer’s two data centers and that the virtual desktops had to be accessible from the Internet.
And of course, in the event something goes wrong, the impact on the end users of the virtual desktops should be kept to a minimum.
Be careful what you wish for… we asked for a challenge and got one.
Step 1: Brainstorming and designing the architecture
After carefully reading the VMware documentation, we quickly found out that the two-data-center requirement made understanding and reviewing the firewall port configuration a key starting point.
As our customer was already using VMware in both data centers, we investigated how they were interconnected. It turned out that in our use case only linked vCenter Servers were in place, meaning shared storage, cross-site vMotion and Fault Tolerance were not available. Each data center therefore needed its own dedicated hosts and storage for this VDI environment. The decision was made to double every component of the Horizon View environment: 2 security servers, 2 connection servers, 2 composer servers, 2 database servers…* (these are explained in detail at the bottom of this page) The Internet accessibility requirement implied that we had to place the security servers in a load-balanced DMZ.
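To make the firewall review concrete, here is a minimal sketch of a connectivity checker. The port numbers listed are the commonly documented Horizon View 6.x ones, but treat them as assumptions and verify them against the VMware network-port documentation for your exact version; the host names are made up for the example.

```python
import socket

# Commonly documented Horizon View 6.x flows (illustrative, not complete;
# verify against the official VMware port documentation for your version):
REQUIRED_PORTS = {
    ("internet", "security-server"): [443, 4172, 8443],      # HTTPS, PCoIP, Blast
    ("security-server", "connection-server"): [8009, 4001],  # AJP13, JMS
    ("connection-server", "composer"): [18443],              # Composer API
    ("connection-server", "vcenter"): [443],                 # vSphere API
}

def ports_for(source: str, dest: str) -> list:
    """Look up which TCP ports must be open from source to dest."""
    return REQUIRED_PORTS.get((source, dest), [])

def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage: walk every flow and report unreachable ports.
# for (src, dst), ports in REQUIRED_PORTS.items():
#     for p in ports:
#         print(src, "->", dst, p, "open" if check_port(dst, p) else "CLOSED")
```

A script like this, run from each server role before the actual installation, saves a lot of back-and-forth with the firewall team.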
Step 2: The Implementation
Each VDI implementation makes use of databases. Since SQL Server is supported by Horizon View, we set up two database servers, one at each site, and created a SQL Server AlwaysOn Availability Group (AG) to host the databases. Because we use an AG, a single AG listener IP can receive all database traffic for the Events DB, the Composer DBs and the Identity Manager DBs. The DBs are replicated and synchronized by the AG. If the primary database server goes down, the secondary takes over with no impact on the VDI environment, as all traffic still passes through the same AG listener IP.
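The key to making the failover transparent is that every component points at the listener, never at an individual database server. A minimal sketch of how such a connection string could be built (listener and database names here are invented for the example, not our customer's actual names):

```python
def ag_connection_string(listener: str, database: str) -> str:
    """Build an ODBC connection string that targets the AG listener.

    MultiSubnetFailover=Yes lets the client try all IP addresses
    registered for the listener in parallel, which speeds up reconnects
    after a cross-site failover.
    """
    return (
        "Driver={ODBC Driver 17 for SQL Server};"
        f"Server=tcp:{listener},1433;"
        f"Database={database};"
        "MultiSubnetFailover=Yes;"
        "Trusted_Connection=Yes;"
    )

# e.g. for the View events database (hypothetical names):
events_dsn = ag_connection_string("vdi-ag-listener", "ViewEvents")
```

Whichever replica currently holds the primary role, the string stays the same, which is exactly why the outage of one database server has no visible impact.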
Step 3: User settings retention
Last but not least: the View Persona Management implementation. This feature redirects the user profile to a file server, so users keep their files and settings across different VDI desktops. To safeguard the user profiles, we work with two file servers, again one at each site, and created a domain-based Distributed File System (DFS) namespace with these two file servers. To keep the profiles available, we created a replication group that copies the profile data between the two file servers with DFS Replication.
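The point of the domain-based namespace is that the profile path never names a specific file server, so it keeps resolving at either site. A small sketch of the path construction (domain, namespace and folder names are illustrative, not the customer's real ones):

```python
def persona_profile_path(domain: str, namespace: str, username: str) -> str:
    """Return the domain-based DFS path for a user's persona profile.

    Because the namespace is domain-based (\\domain\namespace, not
    \\server\share) and the folder target is replicated by DFS-R to
    both file servers, the same path works no matter which site, or
    which file server, is currently serving it.
    """
    return rf"\\{domain}\{namespace}\profiles\{username}"

# This is the kind of value you would put in the Persona Management
# "persona repository location" policy setting, with %username% expanded
# by the desktop at logon.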
Once all the firewall ports were opened, the different servers and components were set up, configured and linked, we created the VDI template and installed our apps. We took a snapshot of the template and created the desktop pools.
All that is left at this stage is to test and resolve any issues that might arise.
The way it is written here, it might seem like it’s a task you can do in a couple of days. But make no mistake, setting this up, troubleshooting issues, making the necessary change requests and taking care of changing requirements took a lot more time than you would imagine. My advice is, start well in advance, take your time and enjoy the challenge…
I conclude with an illustration of our setup
An illustration of our setup will make things a little less abstract. I also added some clarifications about the components used in the Horizon View environment.
View Connection Server: this software acts as a broker for client connections. The connection server authenticates users through Windows Active Directory and directs each request to the appropriate virtual machine, physical or blade PC, or Windows Terminal Services server.
View Security Server: a security server is a special instance of the connection server that provides a layer of protection between the Internet and the actual connection servers; it forwards authentication requests to the connection servers.
The implemented solution is set up as follows: we built the connection servers, a primary at one site and a replica at the other. The replica takes over all settings of the primary and steps in when the primary has an issue. The two security servers sit in a load-balanced DMZ behind a virtual IP address (VIP), so the virtual desktops are accessible from the Internet; the VIP load-balances across the two security servers. We linked the security servers and the connection servers to each other in a 1-to-1 relationship.
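The 1-to-1 pairing is easy to get wrong when servers are rebuilt, so a sanity check is worth automating. A tiny sketch, with invented server names, of validating that no connection server ends up paired with two security servers:

```python
def is_one_to_one(pairs: dict) -> bool:
    """Validate a security-server -> connection-server pairing map.

    Dict keys (security servers) are unique by construction, so the
    mapping is 1-to-1 exactly when no connection server appears twice
    among the values.
    """
    return len(set(pairs.values())) == len(pairs)

# Hypothetical example: one security server per connection server, per site.
pairing = {
    "sec-dc1": "con-dc1",
    "sec-dc2": "con-dc2",
}
```

If `is_one_to_one(pairing)` ever returns False after a change, a security server has been repointed at the wrong connection server.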
View Composer: this software connects to vCenter Server and orchestrates the creation of linked-clone desktop pools from a specified parent virtual machine, according to the settings specified by the admin.
We built two composer servers and linked each to the vCenter Server at its site, again in a 1-to-1 relationship.
This composer setup comes with a potential risk. As it was decided to divide the load of the desktop pools over both data centers, we could lose part of the desktop pools when there is an issue (data center outage, composer server outage, vCenter outage: no more provisioning possible). That would definitely have an impact on the end users. To mitigate this risk, we created all desktop pools in both data centers. We still divide the load of the active desktop pools over both data centers, but we create a copy of each active desktop pool with the same settings (and a slightly different name) in the other data center. We disable this copy, disable provisioning for it, and only activate it when there is an issue. Granted, it’s not an automated solution, but it meets what our customer wanted: low impact on end users in case of a composer failure.
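The manual failover step boils down to one question: which disabled standby pools at the surviving site need to be switched on? A minimal sketch of that selection logic (pool names, the "-copy" suffix and the site labels are assumptions for the example; our naming differed only slightly from the active pool names):

```python
from dataclasses import dataclass

@dataclass
class DesktopPool:
    name: str
    site: str
    enabled: bool  # active pools serve users; standby copies stay disabled

def pools_to_enable(pools: list, failed_site: str) -> list:
    """Return the names of the disabled standby pools at the surviving
    site that should be enabled when `failed_site` goes down."""
    return [p.name for p in pools
            if p.site != failed_site and not p.enabled]

# Hypothetical inventory: each active pool has a disabled copy at the other site.
inventory = [
    DesktopPool("pool1", "dc1", True),
    DesktopPool("pool1-copy", "dc2", False),
    DesktopPool("pool2", "dc2", True),
    DesktopPool("pool2-copy", "dc1", False),
]
```

With an inventory like this, `pools_to_enable(inventory, "dc1")` tells the on-call engineer exactly which copies to enable, which keeps the manual procedure short and mistake-proof.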