Overcoming PowerPoint – Virtual Whiteboarding

How do you whiteboard to an audience of 200+ people without a whiteboard?
As a Solutions Architect, my job entails giving presentations not only to customers, but also at Lunch & Learn sessions and at conferences. It is very important for me to be able to pass on information, concepts, and design decisions to groups of various sizes. I noticed that the more I used PowerPoint, the less engaged my audiences became. Somewhere, there was a disconnect between content and delivery. People glanced at the screen, then checked their phones, answered email, or whispered among themselves. At first I thought it was me, but then I noticed the same behavior in sessions where I sat in the audience, and realized I was guilty of it as well. I needed to mix things up and get the audience engaged again.
Last year I changed things up by getting rid of PowerPoint and bringing a couple of whiteboards to a lunch presentation. With no slides to fall back on, the audience had to pay attention while I was talking. I used the whiteboard to illustrate my ideas and to expand the discussion. It was a resounding success, with attendees telling me that it was one of the best presentations they had ever seen.
Since that event, I have not delivered a PowerPoint presentation. I have exclusively used whiteboards when speaking to an audience. While there have been a few logistical issues, such as getting whiteboards where none exist, the results have been outstanding. Yet my greatest obstacle to date was approaching.
I was scheduled to present at a conference in Seattle to an audience of over 200 people, in a room that was too large for all attendees to see a whiteboard. If I couldn’t overcome this obstacle, I would be forced back to a slide deck.
I have an iPad, and occasionally use a whiteboarding application in small meetings. If I could figure out a way to extend this to a larger audience, I might still have a chance to do things my way. I purchased an AppleTV, and by using the AirPlay feature I was able to connect my iPad and my television via my home WiFi. I found a new whiteboard application, 2screens, that was AirPlay compatible. Bang! I could now project my iPad whiteboard to an audience. Since AirPlay requires the iPad and AppleTV to be on the same network, and I could not be assured of WiFi at the conference, I decided to use my iPhone as an access point to connect the two. It was now time to put my design to the test in front of a live audience.


I was slated to give a presentation on VDI Performance in Seattle last week, and brought along my AppleTV, iPhone, and iPad. With everything set up and the projector showing my whiteboard app for all to see, I was mic’ed up and started speaking while carrying my iPad. I was able to walk out among the audience, side to side, taking questions during the session. As I walked and talked, I used the whiteboard to illustrate my points. My drawings appeared instantaneously on the screen at the front of the theatre. As with my other whiteboarding sessions, I captured the attention of everybody in the room. My new “digital whiteboard” worked like a charm.
One other thought that came to me a few minutes ago. I am writing this post during a layover in Detroit heading home to Maine after a week in Seattle. I started this blog with a whiteboarding post almost a year ago… during a Detroit layover heading home from Seattle. Funny how things work out when you aren’t paying attention.


Building a portable vSphere lab

When you are in the business of delivering virtualization solutions to customers, it helps if you know what you are talking about. While that is pretty obvious in and of itself, there is a great deal of work required to stay on top of the technology and trends. In most cases, you need a lab to work in. Having a sandbox in which you can try out new technology becomes a critical piece to the training and educational process. More importantly, if you are studying for any advanced certifications, access to a lab is pretty much a requirement for success. This was where I did most of my studying and practice while preparing for my VCAP-DCA and VCAP-DCD exams.

I am fortunate enough to have two different labs in which to play. First is my Demo Lab at the Mosaic Technology office. I was granted an isolated port off our firewall directly to my lab, and a private VPN into that network. On that network I have a Dell M1000 blade chassis with two M610 blades, two R710 and one 2950 rack servers, several Dell switches, and two EqualLogic iSCSI SANs. This affords me enough hardware to physically configure just about any environment or scenario. However, the nature of my job keeps me away from the office most of the time, with only 2-3 days per month onsite. I can still VPN into my lab and make changes as needed, test new solutions or offerings, and even integrate new hardware and products into the rack. But as we added features and products, it became less of a sandbox and more of a showcase. There wasn’t enough flexibility in the environment for me to extend my knowledge as needed.

As a result, I decided to take matters into my own hands. When it came time for me to get a new laptop, I leveraged our position as a Dell partner to select a Precision 6500 mobile workstation for my new PC. A large 17-inch display, dual i7 processors, and 4 memory slots gave me great flexibility. Additionally, there is room for a second internal hard drive. I purchased it with the standard 4 GB of RAM and a 250 GB SATA drive. I installed another 8 GB of RAM to bring it up to 12 GB, and added a Seagate Momentus XT hybrid drive in the second slot.

Installation of VMware Workstation 7 gave me the horsepower I needed to build out a lab internally. To begin with, I configured the virtual networks to use two VLANs: VLAN8 was set for NAT to the host as the LAN connection, and VLAN9 was configured as host-only for the storage network. I copied over a basic Win2k3 and Win2k8R2 image, creating two VMs from the base images. On the Win2k3 server I installed Active Directory and DNS. The Win2k8R2 server became my vCenter server. I then created two VMs, 5 GB in size, and installed ESXi 4.1 on each. Once configured, I joined them to vCenter. For shared storage, I chose NFS. From the Virtual Appliance Marketplace, I downloaded a Fedora VM, added a 100 GB second drive, configured it to export the second disk via NFS, and then mounted it to the ESXi hosts. I also downloaded the Celerra Simulator from EMC to act as an iSCSI target.
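If you want to recreate the NFS piece, here is a minimal sketch of what it looks like. The subnet, export path, and datastore name are placeholders I made up for illustration, not the exact values from my lab:

```
# On the Fedora VM: export the 100 GB second disk (mounted at /nfs) to the
# host-only storage network (assuming 192.168.9.0/24 for VLAN9).
echo '/nfs 192.168.9.0/24(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra
service nfs start

# On each nested ESXi 4.1 host (Tech Support Mode) or via the vMA/vCLI,
# mount the export as a datastore named "nfs_lab" and verify it:
esxcfg-nas -a -o 192.168.9.10 -s /nfs nfs_lab
esxcfg-nas -l
```

Once the datastore shows up on both nested hosts, you have the shared storage you need to play with things like vMotion and Storage vMotion, which is most of the point of the exercise.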

Here is what the network diagram looks like:

With this in place, I now have a vCenter lab that I can take along with me wherever I go. Since the Public network is NAT to the host, I have a virtual NIC on my laptop on VLAN8. Once the VMs are powered up, I can use the tools on my laptop to interface with the lab. I can SSH to the hosts or the NFS server, use my vSphere Client, or RDP into the vCenter server or DC as needed. This is a great little lab configuration that I carry with me all the time. If I need to troubleshoot a problem, or try out a new tool or application, I can install it into my laptop lab easily. While I won’t win any awards for performance, as everything is a VM running on a single hard drive, at least I can bring it up anywhere. On several occasions, I have fired it up onsite with a customer to try out a new configuration or demonstrate a concept. Nothing beats a live demo!

Anyhow, I hope that this inspires some of you to give it a try. Do you need to buy a couple of servers to build out a home lab? Not if you don’t want to. In most cases, a good laptop or a desktop will suffice.


My VCAP-DCA experience

There is an old joke that starts out “What do you call the guy that finishes last in his class at Medical School?”  The answer is “Doctor”.

As bad as that joke is, it is kind of how I feel about my VCAP-DCA result.  In a previous post, I wrote about my VCAP-DCD testing experience.  I was pretty confident going into that test because I have been designing customer virtualization solutions for the past three years.  Design is a daily function of my job, and the DCD was a natural extension of this.  However, since leaving Bowdoin College and joining Mosaic Technology, I have been removed from the day-to-day grind of administering said virtual infrastructure.  And like most things in life, there is a bit of “use it or lose it” when it comes to the DCA side of things.  I have my test labs, both at home and at the office, but without daily reinforcement, things can slip.

With that in mind, passing the DCD was validation, but I approached the DCA in a different light.  I studied the VCAP-DCA blueprint, which I recommend highly.  More importantly, take a look at Sean Crookson’s VCAP-DCA index.  It not only follows the blueprint, but also outlines where to find the subject matter for each topic.  Read it, study it, practice it, and then do it all over again.  Take some time to review the BrownBag sessions that Cody Bunch hosts on http://professionalvmware.com/.  They were of immense help as well.  I took all of this into consideration and prepared for the exam, but went into it with low expectations.  I didn’t expect to pass; instead, I wanted to get a good feel for where my weak spots were and where I would need to improve.  Try my best, but prepare for the worst.

As to the actual exam… While I can’t divulge the questions or content, I can discuss the nature of what you will need to know in order to prepare yourself.  The exam is only 34 questions long, as opposed to the 113 questions in the VCAP-DCD, but it is a completely different format.  You are presented with a virtual lab environment, and from question 1 all the way to question 34, you are building upon that same vCenter datacenter.  Tasks you perform early are built upon as you progress, and early mistakes can multiply quickly and spin out of control.  Some tasks are simple, such as creating a cluster, performing basic storage and VM configuration tasks, setting up switches, and such.  However, the tasks become more difficult as the exam rolls along.  You will be asked to perform some operations that you may have never done before, or in a specific manner you are not accustomed to.  Be aware of what you are doing, and make sure that you FOLLOW ALL DIRECTIONS!  If it says to perform the task via the vMA, don’t do it via the vSphere Client GUI!  You will notice that you can move forward and back between the questions without switching into the lab environment.  I discovered this about 10 questions in, when I needed to go back and fix something that I had missed in an earlier question.  Once I did this, I quickly moved forward through the questions and wrote on my dry erase board things such as 15-performance, 16-network, 17-storage… (not real, but you get the picture).  I broke down what was needed, then went back and tackled the infrastructure pieces in order.  It was important to do this in order to ensure that I had enough time to finish the exam.  I then went back and performed the tasks that were ancillary, such as generating reports, logging, and making specific changes.  Core infrastructure first, data and tweaks later was my mantra.  I don’t know if the score was weighted, but if it all works at the end, you must have done something right.  As it was, I did not complete all of the tasks, and hoped that my jumping around would count for something.

As far as what to expect, all I can say is know everything on the blueprint.  More importantly, if there is a task on the blueprint, make sure you know how to do it not only from the GUI, but also via the command line, the vMA, and, if possible, PowerCLI.  Don’t underestimate your ability to perform operations via the command line.  Know how to do it without a GUI, and you will drastically improve your chances of passing the VCAP-DCA.
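To give a concrete flavor of what I mean, here is the kind of task you should be able to rattle off from the vMA or an ESXi shell without touching the vSphere Client. The host, uplink, and port group names below are invented for illustration, not taken from the exam:

```
# Create a vSwitch, attach an uplink, add a port group, then list the result,
# all from the vMA with the vicfg commands (host and names are made-up examples).
vicfg-vswitch --server esx01.lab.local -a vSwitch1
vicfg-vswitch --server esx01.lab.local -L vmnic1 vSwitch1
vicfg-vswitch --server esx01.lab.local -A Production vSwitch1
vicfg-vswitch --server esx01.lab.local -l
```

If you can work through the blueprint tasks this way in your own lab, the GUI-only habits stop being a crutch.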

And in reference to the joke at the top of this post… I passed my VCAP-DCA exam on the first try.  With a passing score of…300!  Exactly what was needed to pass.  Therefore, as with number 100 out of a class of 100 at medical school, all that matters is that we both passed.  I am now certified as a VCAP-DCA to go with my VCAP-DCD.  Next on board is preparing a VCDX design submission, and hopefully defending in Frankfurt in February, 2012.

 


My VCAP-DCD experience

After much procrastination, I decided to pursue my VCDX4 certification.  As most of you know, this means completing the VCAP-DCD and VCAP-DCA exams first.  I have been working with VMware for over 7 years and hold my VCP on v3 and v4, so I am familiar with the VMware certification path.  I moved from the end-user community to a VAR as a consultant and engineer a few years ago, and have been architecting solutions for customers for several years now.  While I enjoy keeping up with the latest and greatest technology available, maintaining the base certifications for work has made it difficult to pursue my personal goal of advanced certifications such as the VCAPs and VCDX.  With the announcement of a final VCDX4 defense in Frankfurt in February, I realized that I needed to get myself in gear to have a chance at v4, or else throw everything into getting up to speed on v5 and lose the advantage of my current work with v4.x.

With this in mind, the first step was to get my VCAPs out of the way.  I decided on the VCAP-DCD first, as my focus these days tends to be more on designing new environments or upgrading existing installations of vSphere.  My administration skills are good, but without a functioning production environment to maintain on a daily basis, I can only beat up on my test lab so much.  I figured that leading with my strong suit would give me an advantage.

The first step after scheduling my exam was to ask my good friend Google for references from people who had been through the DCD before.  Forewarned is forearmed.  Some good write-ups come from Eiad Al-Aqqad, Sean Crookson, and Gregg Robertson, to name a few.  And, of course, the VCAP-DCD Blueprint!  While the NDA prohibits us from discussing the actual information on the exam, I can pass along some of the info regarding the setup and organization.  Most of this info is out there either in blogs or on the official VMware websites, so I believe I can speak freely.

To begin with, this exam is NOT to be taken lightly.  Most of the recommended study information is the same as what is offered for the VCP exams.  While this information is important, it is not really what this exam is about.  Where the VCP was very technically detailed, focusing on numbers such as maximum memory, hosts, CPUs, and other bits of esoteric knowledge, the VCAP-DCD has very little to do with that.  Rightly so: the powers-that-be assume that since the VCP is a prerequisite, you have already memorized all of that, and they want to test your brain in a different way.  The VCAP-DCD is all about design.  They want to know how you think when you are putting all of the pieces together.  It is good to know the maximum number of hosts in an HA/DRS cluster, but more importantly, given a set of customer requirements… how would you design an environment?  What kind of decisions would you make, and why would you choose A over B?  Man on second with no outs in the 8th… do you sacrifice him to third, and why?  Carolina whole pig BBQ or Texas beef ribs?  Those are the kinds of decisions you will be faced with during the DCD, only about virtual environments and how to build them.

With several years under my belt designing customer environments, I was comfortable with the questions and my answers as the test went along.  Technically, my hands-on experience carried me through the exam.  If you are looking for a class or book to give you the answers you seek, you are looking in the wrong place.  Course work and memorization will get you only so far with the VCAP-DCD.  Without a good working knowledge of vSphere and experience with design decisions, you will be at a disadvantage.

The actual questions and answers were not the difficult part of the exam.  The time limit was the greatest challenge.  This exam is 113 questions long, with 5 design questions in the mix.  Those 5 questions are a huge hurdle.  If you haven’t done it yet, make sure you use the VMware VCAP4-DCD Design Tool Simulator before you take the exam.  These questions have a Visio-type interface, and without prior experience with the tool, you will waste valuable time familiarizing yourself.  The design questions take 10-15 minutes each, and with only 225 minutes for the whole exam, that leaves roughly 150-175 minutes for the remaining 108 questions, or a little over a minute per question.  There are several ‘drag and drop’ questions that can be time sinks as well if you are not careful.  In a nutshell, you need to stay on task and mind your time during the exam.  Don’t get hung up on a difficult question; flag it for review and move along.  If you get through in time, go back during the review period and make your decisions.  If you don’t make it to the end, at least you will have gotten through more questions.  I can’t state this enough… you need to make quick decisions.  I have a sneaking suspicion that one of the factors you are being judged on is not only your knowledge, but how quickly you think on your feet (so to speak).  You either know it or you don’t, and putting you in a pressure situation without enough time to think things out will weed out some folks who don’t have the base design experience.  I was able to make it through with 9 minutes to spare, and didn’t run into any overly difficult time-sink questions.  There was a minute or so after the exam where I stared at the blue background screen… and then the window popped up congratulating me for passing the VCAP-DCD.  I got a 321, with 300 being the passing mark.  I took a deep breath, not realizing until afterward that I had been holding it in anticipation.  With the VCAP-DCD behind me, I am now studying for the VCAP-DCA.  With any luck, I will get through it and begin my design submission for a VCDX defense in February.


DRaaS? The way of things to come

My week here at VMworld 2011 has been quite hectic. With the recent release of vSphere 5, SRM 5, View 5, and all of the other Cloud offerings from VMware, it has been a whirlwind of activity. As I fly between breakout sessions and briefings, I try to keep things in order by tweeting. You can follow me at @timantz for more info. That being said, a colleague asked me, “What is the most significant thing you have learned so far at the conference?” Even with all of the new information, it wasn’t a difficult choice.

With the new SRM 5 release, VMware now offers vSphere Replication. This is a host-based replication service, installed as part of SRM, that uses a Linux vAppliance, the SRMC, to coordinate replication of VMs from existing datastores to a datastore at a paired vCenter installation. This process seeds the initial data to the target datastore, and then uses changed block tracking within the VMkernel to pass only the changed blocks across to the DR site. Why is host-based replication an important feature? Because DR with SRM is now available to everyone by leveraging it. With host-based replication, there is no longer a need to rely on the storage vendor for replication. Users with different storage at two datacenters, say EqualLogic and EMC, can now protect their virtual environments through SRM. Heterogeneous storage infrastructure is no longer an obstacle. We can offer protection to every VM, regardless of the underlying storage platform of choice.

What is even more important… if SRM is storage agnostic now, why do you even need a datacenter of your own as the DR site? With the addition of so many cloud providers in the virtualization space, you can now create a virtual datacenter in the cloud, provision tiered storage as needed, and replicate your VMs securely to your own public cloud. As long as you have the infrastructure in place, you can dial up the resources in the event of an actual disaster. Today at VMworld, Dell announced that they can now provide cloud virtualization resources. What could be better than ordering up a DR site from Dell, using host-based replication to protect your VMs with SRM, and knowing that in the event of a disaster a simple phone call can provide you with as much compute power as needed to run your infrastructure in the cloud? Can we say DR as a Service (DRaaS)?


Hybrids are Bleeping Magic

I can’t state it enough.  When you are designing a virtual desktop environment, you need to correctly size the back-end storage.  I have been involved with several VDI deployments, VMware View in particular, and storage performance can make or break a deployment.  If you don’t believe me, take a look at one of my other posts here.  That being said, how are you supposed to scale your virtual desktop POC from 100 users to 1000, knowing that the load on your storage I/O will be going through the roof?  The quick and unqualified answer is Solid State Drives.  With ratings over 4000 IOPS per disk, they easily outpace 15k SAS drives rated for 180 IOPS and can take that tenfold jump in I/O without blinking.  The downside is that most storage vendors are still only supporting 100 GB SSD drives, so capacity is a problem.  How does EqualLogic overcome the performance/capacity issue with SSD drives?  It’s Magic… It’s Bleeping Magic.
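To put some rough numbers behind that SSD-versus-SAS comparison (ballpark rules of thumb, not measurements from any particular deployment): at a commonly quoted 10 to 20 steady-state IOPS per virtual desktop, 1000 users lands somewhere between 10,000 and 20,000 IOPS before you even think about boot or login storms.  Sixteen 15k SAS drives at 180 IOPS each give you under 3,000 IOPS raw, while just eight SSDs at 4,000 IOPS apiece are good for roughly 32,000.  That is the gap the hybrid array described below is built to bridge.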

Last summer, EqualLogic released their Hybrid PS6000XVS.  This array combines eight 100 GB SSD drives and eight 600 GB 15k SAS drives within a single 16-drive enclosure.  A proprietary RAID type allows each volume to be stretched across both sides of the array, both SSD and SAS.  The two sides of the array have different I/O and performance profiles, and the EqualLogic controllers are aware of the differences in capacity and performance.  When data is written to the Hybrid array, incoming data pages are tagged as ‘hot’ by the system.  This is a rating of the relative activity of pages within the array.  Recent reads and writes mark a page as ‘hot’.  As read/write activity tails off, the page becomes ‘cool’, and even ‘cold’ if it hasn’t been accessed recently.  Write activity automatically marks a page as ‘hot’.

Since the EqualLogic controllers know that the SSD drives have a higher performance profile than the 15k SAS drives, the data is written to the highest performing area within the array, which is the SSD drive set.  Continued reads will keep the data page ‘hot’, but inactivity for a specific page will cause it to ‘cool’ off relative to other pages.  Once a threshold is reached within the array, the data page is then moved (on the fly) over to the 15k SAS side of the array but still within the same volume.

This process continues to occur as new data is written to the volume, leveraging the high performance of the SSD drives and the higher capacity of the 15k SAS drives to optimize data page placement within the volume.  If a data page that resides within the 15k SAS side of the volume is accessed for reads or writes, the temperature of that data page increases.  Once the temperature of a page on the 15k SAS drives rises above a page that resides on the SSD side, the two pages are swapped (on the fly), therefore optimizing data page placement within the volume.

The net effect is that the ‘colder’ pages move to the lower tier of storage, while the ‘hotter’ pages are migrated to the highest-performing tier.  This optimizes data placement within the array and grants SSD-level performance for all of the data within the volume, while extending capacity with the 600 GB SAS drives.

When considering VDI rollouts and the increased performance demands that a View infrastructure requires, it is easy to see that the PS6000XVS Hybrid array is a welcome addition to the storage toolkit.  By leveraging a Hybrid array, we can easily provide 30,000+ IOPS within a single enclosure, along with a couple of TB of capacity.  This is a game changer in the VDI world, and one you should consider, especially if you are using EqualLogic storage.  If you are deploying VDI without considering one of these Hybrid arrays, Caveat Emptor.


High Performance Clients and VMware View

Another customer exchange that I thought I would share, this one relating to high-performance user experiences in VDI environments…

The company develops graphical software for the engineering industry. Having successfully virtualized the datacenter with vSphere, they are currently considering some sort of virtual desktop initiative. Because they are very sensitive about their intellectual property and have remote development sites, there are concerns not only with moving data over large distances but also with losing control of those resources. The developers typically rely on high-end systems that leverage GPU cycles on graphics cards, and therein lies the question – in the words of their systems administrator:

Knowing what we know about ESX, since it’s doing all of the hardware abstraction for the OS’s, even if you had a great video card in a host, it would never see it or utilize it because vSphere only presents the generic VGA experience with emulated OpenGL, etc. My question to you is, does View work differently with this? How can I bring high-end graphics to an end user with View?

The Connection Manager performs several functions in a View environment. It coordinates the building and configuration of the client pools by integrating with vSphere and Orchestrator to create VMs, Linked Clones, and pools of desktop resources for consumption by entitled users. The most prevalent paradigm is the base desktop image with thin clones, connected to thin clients at the end points. These VMs are good all-around performers, but not really suited for high-end graphics. The VM uses a generic VMware graphics adapter, with usually no more than 128 MB of video RAM. Good for office workers, but not for intense graphics workloads. There is another way to get a high-end user experience using View. You just need to think in terms of virtual desktops, not just virtual machines.
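If you are curious what that looks like under the covers, the video memory for a desktop VM is just a setting in its .vmx file. The VM name and datastore path below are hypothetical, and whether the entry appears explicitly depends on how the pool’s display settings were configured:

```
# Hypothetical VM name and datastore path: inspect a desktop VM's video RAM
# from the ESXi shell. 134217728 bytes works out to the 128 MB mentioned above.
grep svga.vramSize /vmfs/volumes/datastore1/Win7-pool-01/Win7-pool-01.vmx
# Expected output, if the pool sets it explicitly:
#   svga.vramSize = "134217728"
```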

Another primary function of the Connection Manager is… well, to connect clients with desktop resources. The Connection Manager is a broker. It is the proxy that connects an end point running the View Client software, such as a thin client, workstation, or laptop, to an OS that is running the View Agent. This client request is brokered through the View Manager and establishes the relationship between client device and desktop resource. While in most cases that desktop resource is a VM, it can also be a Terminal Server session or a physical PC, as long as it has the View Agent software installed.

When looking at it from a performance perspective, a standalone desktop with the View Agent installed allows a client to leverage all of those physical resources through the View infrastructure out to the end point. You could have a farm of high-end workstations in your datacenter, and provision them to clients all over the world. All of the heavy lifting and rendering would be done on the workstation in the datacenter, and the end user would see the result on the thin client. Blade PCs, intended for desktop users, can be provisioned via View in this manner as well. You could look at these for dedicated desktops, or even purchase some Dell Precisions loaded with memory and graphics cards, and stand them up in racks in your datacenter. People could connect via a thin client from anywhere using View and have access to physical resources.

More importantly, the PC is in the datacenter. This means that all of the software, rendering, and CPU horsepower, and most importantly the intellectual property of the company, are retained in the datacenter. This protects both the hardware and software corporate assets. There is a much higher level of security within your environment when you can retain control of all assets within your datacenter.

This question spurred a great deal of discussion internally, with fantastic results. Hopefully this will get some of you thinking of View as not just VMs connected to Thin Clients, but as more of a true Virtual Desktop solution for the workplace.
