High Performance Clients and VMWare View

Another customer exchange, this one relating to high performance user experiences in VDI environments, that I thought I would share…

The company develops graphical software for the engineering industry. Having successfully virtualized the datacenter with vSphere, they're now considering some sort of virtual desktop initiative. Because they are very protective of their intellectual property and have remote development sites, they are concerned not only with moving data over large distances but also with losing control of those resources. The developers typically rely on high-end systems that leverage GPU cycles on graphics cards, and therein lies the question, in the words of their systems administrator:

Knowing what we know about ESX, since it’s doing all of the hardware abstraction for the OS’s, even if you had a great video card in a host, it would never see it or utilize it because vSphere only presents the generic VGA experience with emulated OpenGL, etc. My question to you is, does View work differently with this? How can I bring high-end graphics to an end user with View?

The Connection Manager performs several functions in a View environment. It coordinates the building and configuration of client pools by integrating with vSphere and Orchestrator to create VMs, Linked Clones, and pools of desktop resources for consumption by entitled users. The most prevalent paradigm is a base desktop image with linked clones, connected to thin clients at the endpoints. These VMs are good all-around performers, but not really suited for high-end graphics. Each VM uses a generic VMware graphics adapter, usually with no more than 128 MB of video RAM. That's fine for office workers, but not for intense graphics workloads. There is another way to get a high-end user experience using View. You just need to think in terms of virtual desktops, not just virtual machines.

Another primary function of the Connection Manager is… well, to connect clients with desktop resources. The Connection Manager is a broker. It is the proxy that connects an endpoint running the View Client software, such as a thin client, workstation, or laptop, to an OS that is running the View Agent. The client request is brokered through the View Manager, which establishes the relationship between client device and desktop resource. While in most cases that desktop resource is a VM, it can also be a Terminal Server session or a physical PC, provided these have the View Agent software installed.

Looking at it from a performance perspective, a standalone desktop with the View Agent installed allows a client to leverage all of that machine's physical resources through the View infrastructure, out to the endpoint. You could have a farm of high-end workstations in your datacenter and provision them to clients all over the world. All of the heavy lifting and rendering would be done locally, and the end user would see the result on the thin client. Blade PCs, intended for desktop users, can also be provisioned via View in this manner. You could look at these for dedicated desktops, or even purchase some Dell Precisions loaded with memory and graphics cards and stand them up in racks in your datacenter. People could connect via a thin client from anywhere using View and have access to physical resources.

More importantly, the PC is in the datacenter. This means that the software, rendering, and CPU horsepower, and above all the company's intellectual property, are retained in the datacenter. This protects both the hardware and software corporate assets. There is a much higher level of security when you retain control of all assets within your datacenter.

This question spurred a great deal of discussion internally, with fantastic results. Hopefully this will get some of you thinking of View as not just VMs connected to Thin Clients, but as more of a true Virtual Desktop solution for the workplace.

Posted in Desktops, performance, VDI, View, virtualization, vmware | Leave a comment

Every dedupe rose has some thorns

I recently had a customer ask me about migrating backup data out of an old Data Domain appliance.  As this sort of question has come up before, I will share my thoughts with the rest of you as well.

First of all, I want to be very clear that I believe backups are the ‘killer app’ for deduplication, and moving forward, file sharing as well.  That being said, there are a couple of ugly truths that nobody in the dedupe business talks about.  The one we will discuss here is Exit Strategy.

When considering deduplication as part of an upgrade or change to your current backup design, there are several options to weigh: software or hardware, integrated or stand-alone, and then the various offerings of each type. There are many factors surrounding those decisions. I could go deeper into each of them, but let's save that for another day.

When it comes down to your choice, you will most likely have a simple disk-to-disk (D2D) target for your backup vendor of choice, where the save sets are stored in a compressed and deduplicated manner.  However this is done, either by the backup software or the target device, there is metadata associated with this backup that stores the compression algorithm and the deduplicated block index.  From an external ‘point of view’ the data looks normal, but is stored in the compressed/deduplicated format on the file share based on the metadata.  This is pretty much standard across the board, and is the magic behind the curtain.
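To make the "metadata plus block index" idea concrete, here is a toy Python sketch of an inline-dedupe store. The class name, chunk size, and structure are all illustrative assumptions, not any vendor's actual on-disk format: unique chunks are stored once, keyed by hash, and each save set is just a recipe of hashes held in the metadata.

```python
import hashlib

class DedupeStore:
    """Toy inline-dedupe target: unique chunks stored once, keyed by hash."""

    def __init__(self, chunk_size=8):
        self.chunk_size = chunk_size
        self.chunks = {}   # digest -> chunk bytes (the deduplicated block store)
        self.recipes = {}  # save-set name -> ordered digest list (the metadata)

    def write(self, name, data):
        """Split a save set into chunks; duplicate chunks are stored only once."""
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # repeats land here only once
            digests.append(digest)
        self.recipes[name] = digests

    def read(self, name):
        # A restore MUST walk back through the same metadata chain
        return b"".join(self.chunks[d] for d in self.recipes[name])
```

Note that the blocks on disk are meaningless without the recipe metadata, which is exactly why you cannot simply copy the raw store somewhere else and expect usable files.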

Here is where it gets tricky.  You can have multiple dedupe engines working on the same piece of data.  For example, if your backup application can dedupe and compress while writing to a D2D target, it has a ‘point of view’.  If your backup application doesn’t do dedupe, the D2D target could be an appliance such as Data Domain, which natively performs inline deduplication on anything stored within it, creating another ‘point of view’.  You probably aren’t going to dedupe at multiple layers, but it is possible because of the separate ‘points of view’ established.  When extracting data, you need to follow the same chain back out through each dedupe layer to get back to the flat files that you backed up originally.

As an example, you could use your backup software to write your save sets to a D2D target, and it will write that data in a way that is compatible with the backup application.  If that D2D target happens to be a volume on a Data Domain appliance, the data written into the device will be deduplicated inline as it is written to disk, therefore being reduced in size significantly.  Those save sets, and others added afterward, are contained in the same ‘point of view’ allowing you to store significantly more data than physically available on the disks natively.

Now we come to the ugly side of dedupe, which is what to do when the honeymoon is over.  How do you break up with a dedupe device?  The customer in question was a long-term DD customer, and was facing a large increase in maintenance fees post-EMC acquisition.  The cost/benefit had fallen to the point where they wanted to go in a new direction.  They wanted to know if they could just export the data on the Data Domain box to another location.  I wish it was that easy, but alas… it was not meant to be.

As I mentioned earlier, to get out of a dedupe data set, you need to unwrap the data in the same manner that you put it into the store.  The customer needs to open up their backup application, browse to the save sets, and clone them off to another location.  This location could be tape, another dedupe device, or a simple D2D target.  In their case, they had been cloning out monthly sets to tape for archiving purposes, and simply needed to step up the process for weekly and daily tapes until their migration to the new backup application was completed.

In conclusion, deduplication is a great thing for storage and backup vendors, but it can create additional complexities when you try to migrate away from a specific technology.  Make sure you ask about exit and upgrade strategies while shopping around for your next backup appliance.  It could save you headaches down the road.

Posted in backups, DR | Leave a comment

EqualLogic MEM and vStorage APIs

One of the major features of the vSphere release 16 months ago was the rebuild of the IP stack and iSCSI initiators. With this change came the concept of multiple vmkernel ports on the same network, allowing you to multipath your iSCSI connections. Shortly after this change, Dell released a tech paper describing how to create a dedicated iSCSI vSwitch, create multiple vmkernel ports, assign IPs to those ports, set all of them to use Jumbo Frames, and finally enable the iSCSI initiator to use the ports concurrently. This became the standard for multipath I/O with iSCSI in general and EqualLogic in particular. Within your vSphere infrastructure, you could now set your datastore connections to use Round Robin pathing and increase both your bandwidth and redundancy on SAN connections.

Fast-forward another year, and VMware released its latest version, vSphere 4.1, last summer. Again, there are major changes to the way vSphere handles storage. With the release of the vStorage APIs, VMware allowed storage vendors to write hooks directly between their products and vSphere. New features include offloaded storage transactions, faster backups, and direct cloning and copying support. However, to get these new features you normally need three things: Enterprise Plus licensing on your ESX hosts, vSphere 4.1, and the software from your third-party storage vendor.

With EqualLogic, this is not necessarily the case. EqualLogic released its vSphere API software, the “MEM” (Multipathing Extension Module), in September 2010 for use in customers’ virtual infrastructures. If you are currently on vSphere 4.1 and at firmware v4.3.5 (?), preferably 5.0.2, you can install and use the EqualLogic MEM today regardless of your ESX licensing level. You will see some performance increase, all EqualLogic datastores will automatically be configured for multipathing, and datastore pathing will be set to a new “Dell_PSP_EQL_ROUTED” option.
This pathing option is optimized for EqualLogic storage, and some of our customers have seen up to a 15% increase in performance over the previous manual multipathing with Round Robin connections. Additionally, when installing and configuring the MEM, the setup script automatically builds, configures, and connects your vNICs and IPs, vSwitches, and vmkernel ports. What is important to know is that the MEM works at ANY vSphere licensing level. What doesn’t work is the higher-level vStorage API hooks into EqualLogic. If you are at the Enterprise Plus licensing level, you can also leverage those APIs within vSphere for your day-to-day operations. Traditionally, copying files from one datastore to another, for operations such as moving VMs, cloning, and snapshots, was resource intensive on the ESX hosts. Data would need to move from the datastore up to the ESX host and then back down to the new datastore location, even if the destination was on the same array. With the MEM in place and Enterprise Plus licensing, the vStorage APIs allow those transactions to happen directly on the array without traversing up to the ESX host. This can significantly decrease the back-end load on your ESX hosts, increasing the resources available for supporting your VMs and improving overall efficiency.

So what is needed to install the MEM? First, install the vSphere CLI tools on a management host or your vCenter server, or download and deploy the vMA (vSphere Management Assistant) appliance to allow CLI access to your vSphere infrastructure. You need a current support contract with Dell to download the MEM to your vCenter server or CLI host. For simplicity, we will go with the CLI from here on. Download and unzip the MEM and open up the vSphere CLI interface. Set your ESX host into Maintenance Mode, browse to the MEM folder, and run ‘setup.pl --install --server="192.168.xxx.xxx"’ and watch the magic happen. Actually, there isn’t much magic. The script will ask for a username and password, and after a few minutes it completes.
Reboot your ESX host, and once it comes online again, open up the CLI, browse to the folder, and run ‘setup.pl --configure --server="192.168.xxx.xxx"’. This time, there is actual magic happening. The script asks several questions as part of the configuration, such as the vSwitch name, which vmnics to use, IPs for the vmkernel ports, frame length, and Group IP. At the end of the conversation, it shows the actual configuration commands to be run, and once you answer ‘YES’, it builds the vSwitch as indicated, rescans the iSCSI bus, and sets the connections to the new “Dell_PSP_EQL_ROUTED” option. With this software available for free (as long as you are on maintenance) from Dell|EqualLogic, why wouldn’t you use it to increase performance and available resources within your existing infrastructure?
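Pulling the whole MEM workflow together, a sketch of the CLI session might look like this. The IP addresses are placeholders, and the vicfg-hostops maintenance-mode steps are my assumption for how you would script it from the vSphere CLI; adapt to your environment.

```shell
# From a vSphere CLI / vMA host, with the MEM bundle unzipped locally

# 1. Put the host into Maintenance Mode (assumes vSphere CLI's vicfg-hostops)
vicfg-hostops --server 192.168.xxx.xxx --operation enter

# 2. Install the MEM, then reboot the host
setup.pl --install --server="192.168.xxx.xxx"
vicfg-hostops --server 192.168.xxx.xxx --operation reboot

# 3. Once the host is back online, run the interactive configuration:
#    vSwitch name, vmnics, vmkernel IPs, frame length, Group IP
setup.pl --configure --server="192.168.xxx.xxx"

# 4. Exit Maintenance Mode; paths should now show Dell_PSP_EQL_ROUTED
vicfg-hostops --server 192.168.xxx.xxx --operation exit
```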

Posted in equallogic, iops, iscsi, virtualization, vmware | 5 Comments

Planning for VDI has little to do with the desktop

It isn’t easy to think that way, but if you start planning your Virtual Desktop initiative at the end point, you are fighting a losing battle.  This is because it isn’t the Thin Client that is important, but rather the delivery and support infrastructure behind that Thin Client that will make or break your VDI project in the long run.
When virtualization went mainstream a few years ago, the prime mover was datacenter consolidation.  This provided SysAdmins with savings in rack space, power, and cooling, as well as increased resilience and responsiveness in the IT infrastructure as a whole.  By compartmentalizing your servers into VMs, putting them on shared storage, placing them into clusters, and leveraging HA and DRS, you added protection for your servers that was extremely costly and rare in a physical environment.  But most importantly, SysAdmins got their lives back.  When virtualized, there is a significant drop in hardware-related downtime and outages, quicker response to new project requests, and fewer calls in the middle of the night.  By virtualizing, we as SysAdmins became proactive rather than reactive in our day-to-day affairs.  However, there is little to be gained by the SysAdmin when it comes to VDI; the huge benefit goes to the organization as a whole, and to the HelpDesk in particular, when desktops are virtualized.
So why not just spin up a bunch of VMs as desktops and roll them out to all of our users?  Because server VMs and desktop VMs have different use cases and workload profiles.  Let's look at an example using some basic generalities and industry standards.
You have a user on a desktop, working at XYZ company.  According to average usage, he is utilizing around 10 I/O operations to the hard drive per second, or 10 IOPS.  The standard SATA hard drive in a desktop or laptop is capable of 80 IOPS, so there is little chance that he will saturate the hard drive’s performance envelope at any given time.  Now take 100 average users, and you find that they will require, on average 1000 IOPS to sustain their performance.  In order to virtualize those 100 desktops, you will need to provide 1000 IOPS from your storage array.  Additionally, you may find that the read/write mix is around 60/40 (on average) from these users.
As I mentioned earlier, a SATA disk (7.2k) can support 80 IOPS.  A 10k-SAS disk delivers around 140 IOPS, and a 15k-SAS disk supports 180 IOPS.  As you can see, the 15k-SAS disk is a much higher performer than the SATA disk.  In a standard 14-disk storage enclosure, you will see significantly more IOPS from SAS drives than from SATA drives.  We also need to take into consideration the “RAID penalty”, the cost in back-end IOPS incurred by the RAID policy on an array.  While reads are free (no penalty), writes to a RAID set incur a penalty due to mirroring or parity: each single write request requires multiple writes to the array.  For example, RAID 1(0) requires two writes to the array for the mirror, RAID 5(0) requires 4 writes because of parity, and RAID 6(0) requires 6 writes due to double parity.
In our example above, 1000 IOPS at a 60/40 read/write mix will have 600 reads and 400 writes required per second.  In the case of a RAID 10 array, you will need

600 (reads) + [400 (writes) * 2 (RAID Penalty)] = 1400 IOPS.

What this means in practical sense is that to support the 1000 IOPS at 60/40 on RAID 10, you need a disk array capable of 1400 IOPS.
Take that further and calculate out a RAID 5 array.

600 (reads) + [400 (writes) * 4 (RAID Penalty)] = 2200 IOPS on the back end.
RAID 6 will look like this:

600 (reads) + [400 (writes) * 6 (RAID Penalty)] = 3000 IOPS,

which is over twice as much I/O as required by the RAID 10 array.
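The three worked examples above all follow one formula, which is easy to capture in a few lines of Python (a sketch; the penalty values are the write multipliers from the text):

```python
# Write penalty per RAID level, as described above
RAID_WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def backend_iops(front_end_iops, read_fraction, raid_level):
    """Back-end IOPS needed to serve a front-end load at a given read/write mix."""
    reads = front_end_iops * read_fraction
    writes = front_end_iops * (1 - read_fraction)
    return reads + writes * RAID_WRITE_PENALTY[raid_level]
```

For our 1000 IOPS at a 60/40 mix, `backend_iops(1000, 0.6, "RAID 10")` gives 1400, matching the arithmetic above.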
With all of this IOPS information in hand, we can now calculate what hardware will be needed to support the virtual desktops.  How many disks, and of what type, will be needed to provide the required 1000 IOPS?  Will 8 * 15k-SAS drives in RAID 10 suffice, or will we have to go with 28 * 7.2k-SATA drives in RAID 5?  Possibly it will come down to a mix of the two types.  And yet another card to play is solid state drives.  Enterprise SSDs can support 4000 IOPS per disk, which can be a perfect match for some VDI deployments.
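Continuing the sketch, turning a back-end IOPS requirement into a spindle count is a simple ceiling division (the per-disk figures are the rough numbers quoted above):

```python
import math

# Rough per-spindle IOPS figures from the text
IOPS_PER_DISK = {"7.2k SATA": 80, "10k SAS": 140, "15k SAS": 180, "SSD": 4000}

def disks_needed(required_backend_iops, disk_type):
    """Minimum number of spindles of one type to deliver the back-end IOPS."""
    return math.ceil(required_backend_iops / IOPS_PER_DISK[disk_type])
```

This reproduces the two candidate configurations: 1400 back-end IOPS (RAID 10) needs 8 15k-SAS spindles, while 2200 back-end IOPS (RAID 5) needs 28 7.2k-SATA spindles.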
Now that you understand the relationship between IOPS on the desktop and back-end storage requirements, what is the best way to figure out exactly what YOUR virtual desktop project will need?  The best tool I have found is Stratusphere from Liquidware Labs.  The LWL server ships as an OVF, downloadable and deployable into your existing VMware infrastructure.  From there, you generate an installable agent that can be deployed to your existing desktops.  Once installed, the agent collects performance information from each desktop relating to CPU, memory, network, and disk, at both the user and application level, and sends it back to the LWL server for analysis.  After several weeks of collection, we start to see a usage profile for your users based on time of day, applications used, and the overall workload of each desktop.  With this data in hand, we can profile your user base and size your desktop needs.  While the average user consumes 10 IOPS, your users may only require 6 or 7 IOPS on average, with a subset of users that require 12-15 IOPS.  We can also track application usage and determine how best to deploy applications to virtual desktops, either installed in the base image or ‘ThinApped’ and streamed to clients as needed.
So what have we learned today?  First, that it is very important to plan ahead when considering a VDI deployment, and that benchmarking and analysis with tools such as LWL’s Stratusphere can help you profile your infrastructure.  Second, that we need to correctly size the back-end storage supporting any VDI deployment, and why that sizing is so important to success.  And third, that when it comes to hard drives, size really DOESN’T matter, at least not where IOPS are concerned!  It is all about controller type and spindle speed.

Posted in iops, iscsi, VDI, virtualization, vmware | 4 Comments

Lights Out

A customer recently asked me about power outages, and what she could do to prepare her virtual infrastructure for fluctuations in the power grid.  It is an interesting topic, as not everyone can afford generated power for the datacenter.  Most (hopefully all) SMB IT shops have a UPS to condition and support systems in case of an outage.  However, when the external power is cut there is a limited time window in which to shut down your systems gracefully before the batteries die.
The big question is, how do we shut down our infrastructure gracefully?  Several UPS vendors offer management utilities.  For example, the APC InfraStruxure software allows you to monitor the UPS and, upon an outage, trigger scriptable events.  Your servers, physical or virtual, can be shut down via such scripts.  These can be as simple as the Windows ‘shutdown -s -m \\servername’ command, or something from vCenter using the remote CLI to shut down the VM.
Now that your servers are shut down gracefully, what do you do with your ESX hosts and storage?  In this customer’s case, she was using EqualLogic storage and Dell servers.  The ESX hosts can be shut down from the vCenter CLI.  The EqualLogic storage can be shut down via the command line as well, either over a serial connection or remotely via SSH.  At this point, you should have only a single server still running: the one hosting the APC software.
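As a sketch, the UPS-triggered script might chain these steps together like the following. Every hostname, IP, and flag here is a placeholder or assumption, and this is an illustration of the sequence rather than a tested runbook.

```shell
# Triggered by the UPS management software when an outage is detected

# 1. Gracefully shut down the remaining Windows servers (repeat per server)
shutdown -s -m \\fileserver01 -t 60

# 2. Shut down each ESX host via the remote CLI
#    (assumes vSphere CLI's vicfg-hostops; --force skips Maintenance Mode)
vicfg-hostops --server 192.168.xxx.xxx --operation shutdown --force

# 3. Finally, halt the EqualLogic group cleanly over SSH
ssh grpadmin@192.168.xxx.yyy "shutdown"
```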
This is all well and good, but what do you do when the lights come back on?  If the power went out completely, you can rely on the EqualLogic storage to boot and come online automatically.  Switches follow suit, booting automatically when powered on.  Most servers have a BIOS setting that lets them boot automatically when power is restored, and virtual machines can likewise be configured to start automatically.
The only eventuality this doesn’t cover is if the power never actually goes out.  In that case, you will want to look at managed PDUs for your server racks.  These allow you to remotely turn on and off each outlet on the strip.  With this you can cycle the power to your storage and physical servers, causing them to boot automatically.  Once your hardware is back online, your virtual infrastructure will restart and you will be back in business.

Posted in DR, equallogic, virtualization | Leave a comment

Welcome to the whiteboardninja blog

Welcome to my new blog, whiteboardninja.  After much encouragement from friends and colleagues, I have finally decided to jump into the blogosphere and share some of the random thoughts that buzz around my head from time to time.  Why would anybody be interested in reading about the voices in my head?  I am currently a Solutions Architect specializing in virtualization at Mosaic Technology, a VAR with offices in Salem, NH and Seattle, WA.  I have been working with VMware since early 2004 and have come across many different configurations and challenges in my travels around the country.  Am I the ultimate source of knowledge on this subject?  Not even close.  However, I think I can offer a little perspective that some of you may find valuable.  As things arise, I will make an effort to share some of my interactions and experiences here for your entertainment.  Maybe reading this will waste a few minutes of your life you can’t get back.  On the other hand, you just might find something in here that could save you a few minutes, solve a problem you are dealing with, or just leave you saying “wow, I didn’t know that.”

Anyhow, kick back and take a few minutes to see what happens here.  It might just be worthwhile.


Posted in virtualization | 3 Comments

Changing up things for lunch

Many of you out there in the IT world have been invited to, and have attended, a vendor-sponsored ‘lunch and learn’.  These typically take place at a high-end steakhouse such as Ruth’s Chris, Abe & Louie’s, Capital Grille, or some other fine dining venue.  While it is an opportunity to learn something new in the IT world over a nice steak luncheon, the price you pay is sitting through an hour or two of boring, sanitized PowerPoint presentations on the topic or product of the day.  In the past I sat through these in the audience, and now, working for a VAR, I have had to stand up and present those awful slides to a captive audience.  Regardless of how interesting the subject (virtualization, iSCSI storage, performance tuning, DR), it is still a PowerPoint presentation force-feeding info to the audience.  I can honestly tell you that it is as painful for me standing up there presenting as it is for all of you in the audience.

Recently, we decided to change the format.  Instead of the regular PPT slides, we left the projector and screen back at the office and just brought along a few whiteboards.  No cookie-cutter slides, no canned presentation, just an engineer standing up front with a few dry-erase markers.  Our topic of the day was virtual desktops, and without a formal slide deck to run through, the format was left wide open.  Standing in front of the audience, I just started to talk… and talk, drawing on my whiteboards, scribbling words, boxes, and lines, running back and forth between them.  After about 10 minutes of talking and drawing, I had to erase one of the whiteboards.  While my back was turned, someone asked a question.  Nothing earth-shattering, just a simple question to clarify a point.  As I turned around and looked out, I saw that EVERYONE in the audience was sitting forward in their chairs.  People were taking notes, trying to replicate my chicken scratch on paper.  And it was quiet.  Nobody was checking their phones for email or texts, talking to neighbors, or nodding off in the dark.  They were engaged and interested in what was going on.

Well, the question was answered, and soon followed by another… and another.  This ‘steak and storage’ event morphed from a presentation into a conversation.  As we moved from one topic to another, people even stopped raising their hands and simply started firing off questions, as if we were doing this in their office instead of a dining room.  Questions were asked and answered, I went off on tangents based on questions, and we all went down the rabbit hole a few times.  I think a few guys asked questions simply to ‘stump the chump’ and see if I really knew what I was talking about.  Apparently I did, as nobody got up and left.  It became very personal and direct.  It became fun.  Finally, we had an event format that was not only entertaining and educational, but engaged both the audience and the presenter.

When we finally finished up the event, it was another 40 minutes before we could pack up and leave, as I was swamped with follow-up conversations.  People that would normally be bolting for the door at the finish stayed late to come up and introduce themselves, ask a question that related directly to their environment, or simply to say thanks for the great lunch and presentation.

If I have my way, I will never give another PPT at this sort of event.  Give me a dry-erase marker and stand back… it could get messy!


Posted in equallogic, iscsi, virtualization, vmware | Leave a comment