VDI, Storage and the IOPS that Come with It – Part 2 of 2

Welcome back. In part one, we covered VDI basics including some of the most common issues that could interfere with building a solid, fast and reliable virtual desktop infrastructure. I must note that most of the technology we talked about is applicable to other sorts of infrastructures as well.

In part two, I’ll discuss some interesting solutions that give us several options to overcome, or at least greatly reduce, the IOPS bottleneck, complex image management and some of the other challenges we might face.

Personal vDisks

XenDesktop offers us Personal vDisks, or PvDs for short. With Personal vDisks, we still use a base (master) image just like before, differencing disk included, but we now also get an extra virtual disk (the Personal vDisk) attached to our VM which will hold all of our ‘personal’ changes. These include, but are not limited to, all file-level and registry changes, like installed or streamed applications provisioned by SCCM, App-V (cache) or XenApp for example, but also things like desktop wallpapers, start menu settings, favourites and other user ‘profile’ related settings.

Smaller companies might even use PvDs as their user profile solution, either alongside or instead of a dedicated profile management tool. I’m not saying that’s ideal, but it can be done.

Although it doesn’t directly solve our IOPS problem, it’s a step in the right direction. At least, from a management perspective, it will make life a bit easier. Again, I’m just focusing on XenDesktop for now; other vendors might, or will, have their own solutions.

When creating dedicated VMs with PvDs you’ll still see a differencing (and ID) disk attached to the VM as well, but, just as with ‘normal’ pooled desktops, it’s stateless, meaning that it will be emptied on every reboot or log-off. The best thing is that it holds all delta writes made to the underlying base (master) image OS, keeping the PvD, which only holds our personal changes as explained above, small(er) in size, just the way we want it.
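To make the layering a bit more concrete, here’s a minimal Python sketch of the copy-on-write idea behind it: reads fall through from the Personal vDisk, to the differencing disk, to the base image, while writes land either on the (stateless) differencing disk or on the (persistent) PvD. It’s purely conceptual; the class and block layout are my own and have nothing to do with how XenDesktop implements this under the hood.

```python
# Conceptual sketch of layered (copy-on-write) disks with a Personal vDisk.
# Illustrative only; not how XenDesktop implements PvDs internally.

class LayeredDisk:
    def __init__(self, base_image):
        self.base = base_image   # read-only base (master) image: {block_id: data}
        self.delta = {}          # stateless differencing disk: wiped on reboot
        self.pvd = {}            # Personal vDisk: persistent personal changes

    def read(self, block_id):
        # Reads fall through the layers: personal changes first,
        # then the session delta, then the shared base image.
        for layer in (self.pvd, self.delta, self.base):
            if block_id in layer:
                return layer[block_id]
        raise KeyError(block_id)

    def write(self, block_id, data, personal=False):
        # 'Personal' changes (installed apps, wallpaper, profile settings)
        # land on the PvD; everything else lands on the differencing disk.
        (self.pvd if personal else self.delta)[block_id] = data

    def reboot(self):
        # On reboot or log-off the differencing disk is emptied; the PvD survives.
        self.delta.clear()


disk = LayeredDisk(base_image={0: "os-block", 1: "os-block"})
disk.write(1, "patched-os-block")            # OS-level delta write
disk.write(2, "wallpaper", personal=True)    # personal change
disk.reboot()
print(disk.read(1))   # 'os-block'  -> the delta was discarded
print(disk.read(2))   # 'wallpaper' -> the PvD survived the reboot
```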

Another (big) advantage is the ability to update the underlying base master image without destroying or losing any personal data or settings whatsoever; it will just blend in after being applied, and the user won’t notice a thing.

What more can we do?

There must be something else we can do, right? A lot of organizations don’t consider storage to be an issue, thinking there’s plenty to go around. And with technologies advancing, think of data de-duplication (at the storage level) for example, some, or most, of the scenarios described above are getting easier and more realistic to implement by the day. The one thing we still have trouble with is IOPS; yes, there they are again. Even though block-level SANs offer great performance, in many cases it still isn’t enough. If we really want to make a difference when it comes to IOPS, we need to address (or relieve) the storage layer, since that’s where it all happens.

Citrix provisioning services (PVS)

PVS uses a software streaming technology where a base image (just like with XenDesktop as described in part one) gets streamed over the network to multiple VM’s at the same time. Although this is fundamentally different than the pooled or dedicated desktop model, there are some similarities as well.

Just as with pooled and dedicated desktops, there needs to be some way that we can store our writes to the base image, since it’s read-only. PVS has something called write cache to handle these writes, comparable to the differencing disk technology explained earlier.

Write cache can either be stateless or persistent just as with the pooled and dedicated models. Note that PVS can also be used to provision XenApp servers using a standard image. In fact, this is probably one of the most common use cases out there. Now for the interesting part: we can choose where, and how, we want to create the write cache, and we have the following options.

- Cache on the device’s hard drive (stateless or persistent)
- Cache in device RAM
- Cache in device RAM with overflow on hard disk (only available for Windows 7 and Windows Server 2012 and later)
- Cache on server disk, stateless
- Cache on server disk, persistent

The above methods offer us a lot of flexibility, but again, it all depends on the use case you’re presented with as to which will work best for you. Since we are looking to eliminate IOPS as much as possible, you can probably guess which of the above methods we don’t want to use.

Remember that we’re talking about (VDI) VMs here, so if we choose to cache on the device’s hard drive, either stateless or persistent, the write cache will be placed on the virtual hard disk of the VM, and thus on the SAN (in most cases, anyway) where the VM is provisioned. We could also choose to store the write cache on the PVS server’s hard disk, again either stateless or persistent, and although this would relieve the SAN of extra IOPS, it would also increase the I/O and network load on the PVS server.

That leaves us with the ‘cache in device RAM’ option, and on Windows 7 and Windows Server 2012 or later we can also select the ‘overflow on hard disk’ feature, which makes sense since you’ll probably see some blue screens if you run out of memory to store your writes.

Using RAM for write cache will speed up operations immensely and will free your SAN and PVS server of those IOPS, but this technique is really only useful for pooled desktops, since RAM isn’t designed to store information permanently. Also, when we say ‘cache in device RAM’ we’re talking about the memory of the target device, which in the case of a VM is the RAM of the hypervisor host the VM is running on, so you need to size accordingly.
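To give you an idea of what ‘sizing accordingly’ looks like, here’s a quick back-of-the-envelope calculation. All numbers (VMs per host, write cache per VM, hypervisor overhead) are assumptions for illustration only; your own use case will dictate the real values.

```python
# Back-of-the-envelope sizing for 'cache in device RAM': the write cache lives
# in the RAM of the hypervisor host running the VMs, so it has to be added to
# your per-VM memory budget. All numbers below are illustrative assumptions.

vms_per_host = 60
vm_ram_gb = 2.0             # assigned to each Windows 7 VM (assumption)
write_cache_mb = 512        # assumed RAM write cache per VM (use case dependent)
hypervisor_overhead_gb = 4  # assumed reservation for the hypervisor itself

cache_gb = vms_per_host * write_cache_mb / 1024
total_gb = vms_per_host * vm_ram_gb + cache_gb + hypervisor_overhead_gb

print(f"RAM spent on write cache per host: {cache_gb:.0f} GB")
print(f"Minimum host RAM to size for:      {total_gb:.0f} GB")
```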

Another thing to keep in mind: if your hypervisor host crashes, another host will most likely take over, but the writes held in RAM will be lost, meaning that your users might lose some work in the process. Something to consider. The same applies when you choose to store your write cache on the PVS host’s local hard disks: if the PVS server dies, you lose your write cache along with it. Using this solution leaves us with only the base image (and user profile data), which also needs to be stored somewhere.

PVS is smart about reads: when it reads the master image and streams it out to your VMs (on request of the VM), it caches those reads in memory, but this time using the RAM of the PVS server itself. So when it needs to read and stream the exact same blocks of data to VM 2, 3, 4 and so on, it reads them from RAM again: no extra IOPS, and extremely fast. Of course it goes without saying that your network needs to be able to handle the PVS stream, but as long as you keep it local, preferably on your private LAN, you should be fine in most cases.
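Conceptually, that server-side read caching behaves something like the sketch below. In practice it is largely the Windows system cache on the PVS host doing the work rather than anything you code or configure block by block, so treat this purely as an illustration of why VM 2, 3 and 4 don’t generate extra read IOPS.

```python
# Conceptual sketch of the read caching a PVS server benefits from: the first
# time a block of the vDisk is streamed it comes off disk; every identical
# request from the next VM is served straight from server RAM.

class StreamingServer:
    def __init__(self, vdisk_blocks):
        self.vdisk = vdisk_blocks   # the master image on disk: {block_id: data}
        self.ram_cache = {}         # blocks already read into PVS server RAM
        self.disk_reads = 0

    def stream_block(self, vm_name, block_id):
        if block_id not in self.ram_cache:
            self.disk_reads += 1                      # one real read IOP on the PVS server
            self.ram_cache[block_id] = self.vdisk[block_id]
        return self.ram_cache[block_id]               # VMs 2, 3, 4... are served from RAM


pvs = StreamingServer({b: f"block-{b}" for b in range(4)})
for vm in ("vm1", "vm2", "vm3"):
    for block in range(4):
        pvs.stream_block(vm, block)

print(pvs.disk_reads)   # 4 -> the image was read from disk only once, not once per VM
```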

This should give you a high-level overview of the possibilities of PVS when it comes to eliminating IOPS and other storage-related issues. Just as with the pooled and dedicated desktops that use differencing disks or similar technology, PVS also has some pros and cons when it comes to updating the master base image, especially if it’s used to provision a dedicated desktop, as we saw earlier, but for now I’ll leave it at this.

PernixData

PernixData offers us FVP (and that’s their whole portfolio) – check out their Data Sheet here, it’s awesome! Their main focus is to reduce the IOPS bottleneck and improve overall storage performance where they can, basically by using one big server-side caching mechanism built out of fast, SSD-like storage.

If you go to PernixData.com they’ll tell you that administrators need a way to efficiently scale storage performance using virtualization, much in the same way they scale server compute and memory, and that PernixData FVP does just that. Their revolutionary hypervisor software aggregates server-side flash (SSDs, for example) across an entire enterprise to create a scaled-out data tier for the acceleration of primary storage. By optimizing reads and writes at the host level, PernixData FVP reduces application latency from milliseconds to microseconds.

It’s easy to install and manage, it supports all major storage vendors and it can be installed on all known hypervisors. It accelerates both read and write operations (IOPS). FVP can be configured to write changes to flash first and to persistent back-end storage later, while in the meantime data loss is prevented by synchronizing all flash devices on peer servers. It’s fully compatible with almost all existing infrastructures, and believe me, I’ve seen it in action, it really works.
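As a rough illustration of that write-back behaviour, the sketch below acknowledges a write once it sits on local flash and on a peer host’s flash, and destages it to the back-end array afterwards. This is emphatically not PernixData’s code or API, just the general technique in a few lines of Python, with made-up host names.

```python
# Simplified illustration of write-back caching with peer replication:
# acknowledge a write once it is on local flash and a peer host's flash,
# then flush ('destage') it to the SAN in the background.

class Host:
    def __init__(self, name):
        self.name = name
        self.flash = {}              # server-side flash cache: block_id -> data


class WriteBackCluster:
    def __init__(self, san):
        self.san = san               # persistent back-end storage (the SAN)
        self.dirty = {}              # written to flash, not yet destaged

    def write(self, block_id, data, local, peer):
        local.flash[block_id] = data     # fast write to local server-side flash
        peer.flash[block_id] = data      # synchronous copy to a peer host's flash
        self.dirty[block_id] = data
        return "ack"                     # the VM sees flash latency, not SAN latency

    def destage(self):
        # Later, in the background, flush the dirty blocks to the SAN.
        self.san.update(self.dirty)
        self.dirty.clear()


h1, h2 = Host("esx-01"), Host("esx-02")
cluster = WriteBackCluster(san={})
cluster.write(42, "user-data", local=h1, peer=h2)
print(cluster.san)    # {}  -> nothing on the SAN yet, but the write is already acknowledged
cluster.destage()
print(cluster.san)    # {42: 'user-data'}
```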

Atlantis ILIO


Their portfolio is a little more advanced, so to speak. They offer Atlantis ILIO for persistent VDI, stateless VDI (XenDesktop and VMware View) and XenApp, plus Atlantis ILIO Center, their central management solution.

Here’s some information from their website, atlantiscomputing.com: Atlantis Computing’s unique In-Memory Storage technology forms the foundation for all Atlantis ILIO products. In virtual desktop environments, Atlantis ILIO delivers better-than-PC performance while enabling linearly scalable VDI and XenApp environments that are fast and easy to deploy and do not require any changes to existing desktop images.

Although the infrastructure needed to support these kinds of deployments is a bit more complex when compared to the PernixData solution, it also offers some huge additional advantages. Besides eliminating IOPS almost completely, it also reduces your storage needs by up to 95% by leveraging their unique In-Memory Storage technology, thereby eliminating the use of differencing disks, linked clones and/or PvDs. This leaves us with just user profile data, our master images and some persistent VDI data (when applicable), all managed by the so-called Replication Host, a central VM that maintains a master copy of each user’s data blocks.

On top of that, in-line deduplication, wire-speed compression and real-time write coalescing are some of the technologies used to shrink and speed up the data. As far as the infrastructure goes, Brian Madden wrote an excellent article discussing their persistent VDI solution, giving you a basic explanation of the technology used and the infrastructure needed. He also briefly discusses their diskless VDI solution. If you want to know more (and yes, you do) make sure you read his article here. There is only one drawback: the licenses needed don’t come cheap, although I guess this also depends on your reseller, something to keep in mind before getting too enthusiastic. Nevertheless, it’s innovative and miles ahead of its competition, an excellent technology.
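To show why deduplication shrinks VDI storage so dramatically, here’s a tiny sketch of in-line, content-addressed deduplication: every block is fingerprinted, and identical blocks, which near-identical Windows desktops produce by the million, are stored only once. Again, this is a conceptual illustration, not Atlantis’ implementation.

```python
# Minimal sketch of in-line block deduplication: blocks are fingerprinted on
# write and identical data is stored only once. Illustrative only.

import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}      # fingerprint -> unique block data
        self.volumes = {}     # (volume, block_id) -> fingerprint

    def write(self, volume, block_id, data: bytes):
        fp = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(fp, data)            # store identical data only once
        self.volumes[(volume, block_id)] = fp

    def read(self, volume, block_id) -> bytes:
        return self.blocks[self.volumes[(volume, block_id)]]


store = DedupStore()
os_block = b"identical Windows system file data"
for desktop in range(100):                          # 100 desktops writing the same block
    store.write(f"vm-{desktop}", 0, os_block)

print(len(store.volumes))   # 100 logical blocks...
print(len(store.blocks))    # ...backed by 1 physical copy
```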

Conclusion

Wrapping up parts one and two, we again discussed quite a few concepts and technologies. Although one is perhaps more advanced than the other, the fact is, we’re moving forward at warp speed. All of the products discussed offer free evaluation or demo licenses for you to give them a try, XenDesktop and VMware View included, so I suggest you do just that.

If you want to keep going, I’ve published a third article on VDI and data deduplication. You can find that here.

I already highlighted some of the possible pitfalls and possibilities that each product brings, and to be honest there’s not that much to add. Below you’ll find some of the references I used in putting together the above, so make sure to pay them a visit as well, there’s so much more to explore!

References used:

www.vmware.com
www.microsoft.com
www.citrix.com
www.basvankaam.com
http://blog.synology.com
http://recoverymonkey.org
www.atlantiscomputing.com
www.pernixdata.com
www.ngn.nl


Bas van Kaam

Bas van Kaam has been part of the IT industry for just short of 15 years now. He is currently employed as a Senior (Pre-Sales) Consultant / Engineer at Qwise, one of the leading SBC (Citrix) & Microsoft consultancy companies in the Netherlands. He is also the Citrix Product Lead for his company, a role in which he organizes and hosts technical sessions on a regular basis, advises his CTO and keeps in touch with other (pre-)sales colleagues and third-party partners. He is an enthusiastic blogger and, as such, loves to share knowledge. He specializes in Citrix technologies with a strong focus on (partly) designing, building, maintaining, troubleshooting and optimizing Microsoft & SBC oriented infrastructures for mid-sized companies. You’ll find Bas on www.basvankaam.com where he tries to share some of his knowledge.
