vKernel Adds Tools to Free vOPS Server Explorer 6.3


When it comes to gaining insight into and understanding of your virtual environment, vKernel has some nifty products. They just added two new utilities, Storage Explorer and Change Explorer, to their free vOPS™ Server Explorer, giving you more management capabilities with SCOM/SCVMM or vCenter. Sure, it’s meant to get you looking into and considering buying the paid stuff with more functionality and remediation, but as is it does provide you with tools to rapidly assess your virtualization environment for free. So what did they add?

Storage Explorer

  • Gain insight into storage performance and capacity via views across data stores and VMs
  • Identifies critical storage issues such as overcommitment, low capacity, high latency and VMFS version mismatch
  • Alerts you to critical VM issues such as low disk space, latency and throughput issues
  • There’s sorting and searching support

Change Explorer

  • You get a listing of the changes to resource pools, hosts, data stores and VMs within the past week. They also indicate a risk associated with that change
  • You can search & filter to find specific changes
  • There is a graphical mapping of changes over a timeline for rapid reporting/assessment

So if you need some free tools to help you get quick insight into your environment, or need to be informed about changes or performance issues, you can try these out. The press release is here: http://www.vkernel.com/press-kits/vops-server-explorer-6-3. We have a smaller environment at work, next to our main production infrastructure, where we’d like to test this out. So they need to add support for SCVMM 2012 SP1 a.s.a.p., I think :)

In a world where complexity reduction is paramount and TCO/ROI needs to be good from day one, competition is heating up between the third-party vendors active in this arena providing tools to make that happen. This is especially true as they add more and more Hyper-V support. It also doesn’t hurt to push Microsoft or VMware to make their solutions better.

Experts2Experts Virtualization Conference London 2011 Selling Out Fast!


It seems a lot of Hyper-V expertise is converging on London in November 2011 for the small but brilliant Experts2Experts Virtualization Conference. I’m looking forward to learning a lot from them and listening to the real world experiences of people who deal with these technologies on a daily basis. It will also be nice to meet up with a lot of online acquaintances from the blogosphere and Twitter. The conference is selling out fast. That’s due to the quality, the small scale and the very economical attendance fee. So if you want to meet up with and listen to the expertise the likes of Aidan Finn, Jeff Wouters, Carsten Rachfal, Ronnie Isherwood and hopefully Kristian Nese have to share, you’d better hurry up and register right now.

I’ll be sharing some musings on “High Performance & High Availability Networks for Hyper-V Clusters”.

Perhaps we’ll meet.

Virtualization with Hyper-V & The NUMA Tax Is Not Just About Dynamic Memory


First of all, to be able to join in this little discussion you need to know what NUMA is and what it does. You can read up on that on the Intel (or AMD) websites, for example http://software.intel.com/en-us/blogs/2009/03/11/learning-experience-of-numa-and-intels-next-generation-xeon-processor-i/ and http://software.intel.com/en-us/articles/optimizing-software-applications-for-numa/. Do have a look at the following SQL Skills blog post http://www.sqlskills.com/blogs/jonathan/post/Understanding-Non-Uniform-Memory-AccessArchitectures-(NUMA).aspx, which has some great pictures to help visualize the concepts.

What Is It And Why Do We Care?

We all know that a CPU contains multiple cores today: 2, 4, 6, 8, 12, 16, etc. So in terms of a physical CPU we tend to talk about a processor that fits in a socket, and about cores for logical CPUs. When hyper-threading is enabled you double the number of logical processors seen and used. It is said that Hyper-V can handle hyper-threading, so you can leave it on, the logic being that it will never hurt performance and can help to improve it. I suggest you test it :) as there was a performance bug with it once. A processor today contains its own memory controller, and access to memory from that processor is very fast. The NUMA node concept is older than multi-core processor technology, but today you can state that a NUMA node translates to one processor/socket and that all cores contained in that processor belong to the same NUMA node. Sometimes a processor contains two NUMA nodes, like the AMD 12-core processors. In the future, with the ever increasing number of cores, we’ll perhaps see even more NUMA nodes per processor. You can state that all Intel processors since Nehalem with QuickPath Interconnect and AMD processors with HyperTransport are NUMA processors. But to be sure, check with your vendor before buying. Assumptions, right?

Beyond NUMA nodes there is also a thing called processor groups, which help Windows use more than 64 logical processors (its former limit) by grouping logical processors; Windows handles up to 4 such groups, meaning Windows today can support 4*64=256 logical processors in total. Due to the fact that memory access within a NUMA node is a lot faster than between NUMA nodes, you can see where a potential performance hit is waiting to happen. I tried to create a picture of this concept below. Now you know why I don’t make my living as a graphical artist.

[Diagram: NUMA nodes and processor groups]
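If you want a quick look at what a given host has under the hood, a little WMI query from PowerShell will do. This is just a minimal sketch, assuming Windows Server 2008 R2 or later where the Win32_Processor class exposes core and logical processor counts; the socket-to-NUMA-node mapping in the comments is only a rule of thumb, so check with your hardware vendor.

```powershell
# Enumerate sockets, cores and logical processors on the host.
$cpus    = Get-WmiObject -Class Win32_Processor
$sockets = ($cpus | Measure-Object).Count
$cores   = ($cpus | Measure-Object -Property NumberOfCores -Sum).Sum
$logical = ($cpus | Measure-Object -Property NumberOfLogicalProcessors -Sum).Sum

"{0} socket(s), {1} cores, {2} logical processors" -f $sockets, $cores, $logical

# Rule of thumb: one socket is roughly one NUMA node (some CPUs, like the
# 12-core AMD parts, expose two nodes per socket). Windows packs at most 64
# logical processors into a processor group and handles up to 4 groups,
# hence the 4*64=256 logical processor ceiling.
"Processor groups needed (max 64 LPs each): {0}" -f [Math]::Ceiling($logical / 64)
```

On a dual socket hex core box with hyper-threading enabled this would report 2 sockets, 12 cores and 24 logical processors, all of which fit comfortably in a single processor group.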

     

To make it very clear: NUMA is great and helps us in a lot of ways. But under certain conditions and with certain applications it can cause us to take a (serious) performance hit. And if there is anything certain to ruin a system administrator’s day, it is a brand new server with a bunch of CPUs and loads of RAM that isn’t running any better (or worse?) than the one you’re replacing. Current hypervisors like Hyper-V are NUMA aware, and the better server products like SQL Server are as well. That means that under the hood they are doing their best to optimize CPU & memory usage for performance. They do a very good job actually, and depending on your environment you might never, ever know of any issue or even of the existence of NUMA.

But even with a NUMA-aware hypervisor and NUMA-aware applications you run the risk of having to go to remote memory. The introduction of Dynamic Memory in Windows 2008 R2 SP1 even increases this likelihood, as there is a lot of memory reassigning going on. Dynamic Memory actually educated a lot of Hyper-V people on what NUMA is and what to look out for. Until Dynamic Memory came on the scene, and the evangelizing that came with it by Microsoft, it was "only" the people virtualizing SQL Server, Exchange and other big, hungry applications who were very aware of NUMA with its benefits and potential drawbacks. If you’re lucky the application is NUMA aware, but not all of them are, even the big names.

A Peek Into The Future

As it bears on this discussion, what is interesting is that leaked screenshots from Hyper-V 3.0 or vNext have NUMA configuration options for both memory and CPU at the virtual machine level! See Numa Settings in Hyper-V 3.0 for a picture. So the days when you had to script WMI calls (see http://blogs.msdn.com/b/tvoellm/archive/2008/09/28/looking-for-that-last-once-of-performance_3f00_-then-try-affinitizing-your-vm-to-a-numa-node-.aspx) to assign a VM to a NUMA node might be over soon (speculation alert), and it seems like a natural progression from the ability to disable NUMA spanning with W2K8R2 SP1 Hyper-V in case you need to avoid NUMA issues at the Hyper-V host level. Hyper-V today is already pretty NUMA aware, and as such it will try to get all memory for a virtual machine from a single NUMA node; only when that can’t be done will it span across NUMA nodes. So as stated, Hyper-V with Windows Server 2008 R2 SP1 can prevent this from happening, as we can now disable NUMA spanning for a Hyper-V host. The downside is that a virtual machine then can’t get more memory, even if it’s available on the host.

[Image: NumaSpanning]

A working approach to reduce possible NUMA overhead is to limit the number of sockets: with 2 sockets each CPU controls the largest possible share of the memory, in this case 50%; with 4 sockets each one only controls 25%, etc. So with more CPUs (and NUMA nodes) the risk of NUMA spanning grows very fast. For memory intensive applications scaling out is the way to go. Actually, you could state that we scale up the NUMA nodes per socket (lots of cores with the largest amount of directly accessible memory possible) and as such do not scale up the server. If you can, keep your virtual machines tied to a single CPU on a dual socket server to try and prevent any remote memory access and thus a performance hit. But that won’t always work. If you ever wondered when an 8/12/16-core CPU comes in handy, well voila, here’s a perfect case: packing as many cores as possible on a CPU becomes very handy when you want to limit the number of sockets to prevent NUMA issues but still need plenty of CPU cycles. This works as long as you can address large amounts of RAM per socket at fast speeds and the CPU internally isn’t cut up into too many NUMA nodes, which would be scaling out NUMA nodes within the same CPU, and we don’t want that or we’re back to a performance penalty.
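To put some numbers on that, here is a back-of-the-envelope sketch. It simply divides total host memory evenly across the sockets (a simplification, since the real split depends on how the DIMM slots are populated) to show how the directly accessible share per NUMA node shrinks as the socket count grows; the 96 GB figure is just a hypothetical example.

```powershell
# Back-of-the-envelope: memory that is local to one NUMA node, assuming one
# NUMA node per socket and an even memory split across the sockets.
$totalMemoryGB = 96   # hypothetical host
foreach ($sockets in 2, 4, 8) {
    $perNodeGB = $totalMemoryGB / $sockets
    "{0} sockets: {1} GB ({2:P0}) directly accessible per NUMA node" -f $sockets, $perNodeGB, (1 / $sockets)
}
# Any VM that needs more memory than one node holds locally (or more vCPUs
# than one node has cores) has to span nodes and pays the remote access penalty.
```

For the 96 GB example that is 48 GB per node on a dual socket box but only 24 GB per node on a quad socket one, which is exactly why those big SQL Server VMs end up spanning nodes.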

Stacking The Deck

One way of stacking the deck in your favor is to keep the heavy apps on their own Hyper-V cluster. Then you can tweak it all you want to optimize for SQL Server, Exchange, etc. When you throw these virtual machines into your regular clusters, or for crying out loud onto a VDI cluster, you’re going to wreak havoc on the performance. Just like mixing server virtualization & VDI is a bad idea (don’t do it), throwing vCPU hungry, memory hogging servers on those clusters just kills the performance and capacity of a perfectly good cluster. I have gotten into arguments over this as some think one giant cluster for whatever need is better. Well no, you’ll end up micro managing placement of VMs with very different needs on that cluster, effectively “cutting” it up into smaller “cluster parts”. Now, are separate clusters for different needs always the better approach? No, it depends. If you only have some small SQL Server needs you can get away with one nice cluster. It depends, I know, the eternal consultant’s answer, but I have to say it. I don’t want to get angry mails from managers because someone set up a 6 node cluster for a couple of SQL Server Express databases ;-) There are also concepts called testing, proof of concept, etc. It’s called evidence based planning. Try it, it has some benefits that become very apparent when you’re going to virtualize beefy SQL Server, SharePoint and Exchange servers.

How do you even know it is happening, apart from empirical testing? Aha, excellent question! Take a look at the "Hyper-V VM Vid Numa Node" counter set and read this blog entry on the subject: http://blogs.msdn.com/b/tvoellm/archive/2008/09/29/hyper-v-performance-counters-part-five-of-many-hyper-vm-vm-vid-numa-node.aspx. And keep an eye on the event log for http://technet.microsoft.com/hi-in/library/dd582929(en-us,WS.10).aspx (for some reason there is no comparable entry for W2K8R2 on TechNet).
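A quick, hedged sketch of what that check could look like in PowerShell; Get-Counter ships with PowerShell 2.0 and up, but the exact counters in the "Hyper-V VM Vid Numa Node" set can differ between Hyper-V versions, so list them first rather than take my word for it.

```powershell
# See which counters the set actually contains on this host.
Get-Counter -ListSet "Hyper-V VM Vid Numa Node" |
    Select-Object -ExpandProperty Counter

# Sample every counter in the set once, for all instances (one per running VM),
# and show the highest values first.
$counters = (Get-Counter -ListSet "Hyper-V VM Vid Numa Node").Counter
Get-Counter -Counter $counters |
    Select-Object -ExpandProperty CounterSamples |
    Sort-Object CookedValue -Descending |
    Format-Table InstanceName, Path, CookedValue -AutoSize
```

Combined with the explanation in the blog post above, that should tell you whether a virtual machine is getting memory from more than one NUMA node.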

Conclusions

To conclude, all of the above, people, is why I’m interested in some of the latest generation of servers. The architecture of the hardware allows a processor to address twice the "normal" amount of memory when you only put two CPUs on a quad socket motherboard. The Dell PowerEdge R810 and the M910 have this, it’s called a FlexMem Bridge, and it allows more memory to be available without a performance hit. They also allow for more memory per socket at higher speeds. If you make a lot of memory directly addressable to one CPU you see a speed drop: a DELL R710 with 48 GB of RAM runs at 1066 MHz, but put 96 GB in there and you fall back to 800 MHz. So yes, bring on those new quad socket motherboards with just 2 sockets used, a bunch of fast, directly accessible memory, in a neat 2 unit server package with lots of space for NIC cards & FC HBAs if needed. Virtualization heaven :-) That’s what I want, so I can give my VMs running SQL Server 2008 R2 & "Denali" (when can I call it SQL Server 2012?) a bigger amount of directly accessible memory from their NUMA node. This can be especially helpful if you need to run NUMA-unaware applications like SAP or such.

Testing is the way to go for knowing how well a NUMA-aware hypervisor and a NUMA-aware application figure out the best approach to optimize the NUMA experience together. I’m sure we’ll learn more about this as more information becomes available and as technology evolves. For now we optimize for performance with NUMA where we can, when we can, with what we have :-)

For Exchange 2010 (we even have virtualization support for DAG mailbox servers now as well) scaling out is easier, as we have all the neatly separated roles and control just about everything down to the mail client. With SQL Server applications this is often less clear. There is a varied selection of commercial and home grown applications out there and a lot of them can’t even scale out, only up. So your mileage may vary. But for resource & memory heavy applications under your control, for now, scaling out is the way to go.

I’m Attending The E2E Virtualization Conference


Well, I’ve just finished doing the paperwork for attending the Experts 2 Experts conference in London http://www.pubforum.info/pubforum/E2E2011London.aspx. It runs from 18th to 20th November 2011. I’m looking forward to this one as I’m going to meet up with a lot of people from my online network and have a chance to discuss our virtualization experiences and share information in real life, face to face.

It’s good to get to attend vendor independent events to exchange information and to enrich and extend our networks. I already know several people from my Twitter/blogging network will be attending, and I’m happy to meet up with you if you’re there. Just let me know via e-mail, the feedback option on this blog or via Twitter (@workinghardinit). Well, I’ll see you there!

Hyper-V Is Right Up There In Gartner’s Magic Quadrant for x86 Server Virtualization Infrastructure


So how do you like THEM apples?

Well, take a look at this, people. Gartner published the following on June 30th: Magic Quadrant for x86 Server Virtualization Infrastructure.


Figure 1: Magic Quadrant for x86 Server Virtualization Infrastructure (Source: Gartner 2011)

That’s not a bad spot if you ask me. And before the “they paid their way up there” remarks flow in: Gartner works how Gartner works, and it works like that for everyone (read: the other vendors on there), so that remark could fly right back into your face if you’re not careful. To get there in 3 years’ time is not a bad track record, and if you believe some of the people out there this can’t be true. Remember that they only had Virtual Server to offer before Hyper-V was available, and I was not using that anywhere. No, not even for non-critical production and testing, as the lack of x64 VM support made it a “no go” product for me. So the success of Hyper-V is quite an achievement. But back in 2008 I did go with Hyper-V as a highly available virtualization solution, after having tested and evaluated it during the Beta & Release Candidate time frame. Some people thought I was making a mistake.

But the features in Hyper-V were “good enough” for most of the needs I had to deal with, and yes, I knew VMware had a richer offering and was proven technology, something people never forgot to mention to me for some reason. I guess they wanted to make sure I hadn’t been living under a rock the last couple of years. They never mentioned the cost and some trends, however, or looked at the customers’ real needs. Hyper-V was a lot better than what most environments I had to serve had running at the time. In 2008 those people I needed to help were using VMware Server or Virtual Server. Both were/are free, but for anything more than lightweight applications on the “not that important” list they are not suitable. If you’re going to do virtualization structurally you need high availability to avoid the risks associated with putting all your eggs in one basket. However, as you might have guessed, these people did not use ESX. Why? In all honesty: the associated cost.

In the 2005-2007 time frame servers were not yet at the cost/performance ratio they reached in 2008, and a far cry from where they are now. Those organizations didn’t do server virtualization because, from the cost perspective of both licensing fees for functionality and hardware procurement, it just didn’t fit in yet. The hardware cost barrier had come down, and with Hyper-V 1.0 we got a hypervisor that we knew could deliver something good enough to get the job done at a cost they could cover. We also knew that Live Migration and Dynamic Memory were in the pipeline and that the product would only get better. Having tested Hyper-V, I knew I had a technology to work with at a very reasonable price (or even for free), and that included high availability.

Combine this with the notion at the time that hypervisors are becoming commodities and that people are announcing the era of the cloud. Where do you think the money needs to go? Management & applications. Where did Microsoft help with that? The System Center suite: System Center Virtual Machine Manager and Operations Manager. Are those perfect in their current incarnations? Nope. But have you looked at SCVMM 2012 Beta? Do you follow the buzz around Hyper-V 3.0 or vNext? Take a peek and you know where this is going. Think private & hybrid cloud. The beef with the MS stack lies in the hypervisor & management combination: management tools and integration capability to help with application delivery, and hence with the delivery of services to the business. Even if you have no desire or need for the public cloud, do take a look. Having a private cloud capability enhances your internal service delivery. Think of it as “Dynamic IT on steroids”. Having a private cloud is a prerequisite for having a hybrid cloud, which aids in the use of the public cloud when that time comes for your organization. And if it never comes, no problem, you have gotten the best internal environment possible, no money or time lost. See my blog for more Private Clouds, Hybrid Clouds & Public Clouds musings on this.

Is Hyper-V and System Center the perfect solution for everyone in every case? No sir. No single product or product stack can be everything to everyone. The entire VMware versus Hyper-V mud slinging contests are at best amusing when you have the time and are in the mood for it. Most of the time I’m not playing that game. The consultant’s answer is correct: “It depends”. And very few people know all virtualization products very well and have equal experience with them. But when you’re looking to use virtualization to carry your business into the future you should have a look at the Microsoft stack and see if it can help you. True objectivity is very hard. We all have our preferences and monetary incentives, and there are always those who’ll take it to extreme levels. There are still people out there claiming you need to reboot a Windows server daily and have BSODs all over the place. If that is really the case they should not be blaming the technology. If the technology was that bad they would not need to try and convince people not to use it; they would run away from it by themselves and I would be asking you if you want fries with your burger. Things go “boink” sometimes with any technology, really, you’d think it was created by humans, go figure. At BriForum 2011 in London this year it was confirmed that more and more we’re seeing multiple hypervisors in use at medium to large organizations. That means there is a need for different solutions in different areas, and that Hyper-V is doing particularly well in greenfield scenarios.

Am I happy with the choices I made? Yes. We’re getting ready to do some more Hyper-V projects, and those plans even include SCVMM 2012 & SCOM 2012 together with an upgrade path to Hyper-V vNext. I mailed the Gartner link to my manager, pointing out that my obstinate choice back then turned out rather well ;-)

A VDI Reality Check @ BriForum 2011 For Resource Hungry Desktops In A Demanding Environment


So what did we notice? VDI generates enough interest from various angles, that is for sure, both on the demand side and on the (re)seller & integrator side. Most storage vendors are bullish enough to claim that they can handle whatever IOPS are required to get the most bang for the buck, but only the smaller or newest players were present and engaged in interaction with the attendees. One thing is for sure: VDI has some serious potential, but it has to be prepared well and implemented thoroughly. Don’t do it over the weekend and see if it works out for all your users :)

The amount of tools & tactics for VDI, on both the storage side and the configuration/management side, is more complex and diverse than with server virtualization. The possible variations on how to tackle a VDI project are almost automatically more numerous as well. This is due to the fact that desktops are often a lot more complex and heterogeneous in nature than server side apps. On top of that the IO on a desktop can be quite high. Some of it can be blamed on the client OS, but a lot of it has to do with the applications and utilities used on desktops. I think that developers had so many resources at their disposal that there wasn’t too much pressure on optimization there. The age of multi-core and x64 will help in thinking more about how an application uses CPU cycles, but virtualization might very well help in abstracting that away. When a PC has one vCPU and the host has 4*8 cores, how good is that hypervisor at using all that pCPU power to address the needs of that one vCPU? But I digress.

All in all it takes more effort and complexity to do VDI than server virtualization. So there is a higher cost, or at least the CAPEX isn’t such a convincing, clear cut story as it is with server virtualization. If you’re not doing the latter today, when and where you can, you are missing out on a major number of benefits that are just too good to ignore. I wouldn’t dare say that for VDI. Treating VDI just like server virtualization is said to be one of the main reasons for VDI failing, being put on hold or being limited to a smaller segment of the desktop population.

My experience with server virtualization is also with rather heterogeneous environments, where we have VMs with anything between 1 and 4 virtual CPUs and 2 to 12 GB of RAM. And yet I have to admit it has been a great success. Nevertheless I can’t say that helped me much in my confidence that a large part of our desktop environment can be virtualized successfully and cost effectively, as I think that our desktops are such vicious resource hogs they need another step forward in raw power and functionality versus cost. Let me briefly describe the environment. 85% of the workforce at my current gig have dual 24” wide screens, with anything between 4 GB and 8 GB of RAM, quad core CPUs and SCSI/SATA 10,000 RPM disks with anything between 250 GB and 1 TB of local storage, in combination with very decent GPUs. The employees run Visual Studio, SQL Server, multiple CAD & GIS packages and various specialized image processing software that works on image and other files that can be 2 GB or even larger. And if they aren’t that large, then they are still very numerous. On top of that, 1 Gbps network to the desktop is the only thing we offer anymore. So this is not a common office suite plus a couple of LOB applications order; this is a large and rich menu for a very hard to please audience. That means that if you ask them what they want, they only answer more, more, more … And I won’t even mention 3D screens & goggles.

Now I know that X amount of time the machines are idle or doing a lot less, but in the end that’s just a very nice statistic. When a couple of dozen users start playing around with those tools and throw that data around, you still need them and their colleagues to be happy customers. Frankly, even with the physical hardware that they have now that can be a challenge. And please don’t start about better, less resource wasting applications and such. You can’t just f* the business and tell them to get or wait for better apps. That flies in the face of reality. You have to be able to deliver the power where and when needed with the software they use. You just can’t control the entire universe.

I heard about integrators achieving 40-60 VMs per host in a VDI project. Some customers can make do with Windows 7 and 1 GB of RAM. I’m not one of those. I think the guys & gals of the service desk would need armed escorts if we rolled that out to the employees they care for. One of the things I noticed is that a lot of people choose to implement storage just for VDI. I’m not surprised. But until now I’ve not needed to do that, not even for databases and other resource hogs. Separate clusters, yes, as the pCPU/vCPU ratio and memory requirements differ a lot from the other servers. The fact that the separate cluster uses its own HBAs and LUNs also helps.

Next to SANs, local storage for VDI is another option for both performance and cost, but for recovery this isn’t quite as good a solution. The idea of having non-persistent disks (in a pool), or a combination of that with persistent disks, is not something I can see fly with our users. And frankly, a show of hands at BriForum seems to indicate that this isn’t very widespread. VDI takes really high performance storage, isolated from your server virtualization, to make it a success. On top of that you need control, rapid provisioning, user virtualization & workspace management in a layered/abstracted way. Lots of interest there, but again, yet more tools to get it done. Then there is also application virtualization, terminal service based solutions, etc. So we get a more involved, diverse and expensive solution compared to server virtualization. Now, to offset these costs we need to look at what we can gain. So where are the benefits to be found?

With non-persistent disks you have rapid provisioning of known good machines in a pool, but your environment must accept this, and I don’t see this flying well in the face of the reality of the consumerization of ICT. De-duplication and thin provisioning help to get the storage needs under control, but the bigger and more diverse the client side storage needs are, the less gains can be found there. Better control, provisioning, resource sharing, manageability, disaster recovery: it is all possible, but it is all so very specific to the environment compared to server virtualization, and some solutions contradict gains that might have been secured with other approaches (disaster recovery and business continuity with SAN versus local storage). One of the most interesting possibilities for the environment I described was perhaps doing virtualization on the client. I look at it as booting from VHD in the Windows 7 era, but on steroids. If you can safeguard the images/disks on a SAN with de-duplication & thin provisioning you can have high availability & business continuity, as losing the desktops is a matter of pushing the VM to other hardware, which, due to the abstraction by virtualization, shouldn’t be a problem. It also deals with the network issues of VDI, a hidden bottleneck, as most people focus on the storage. Truth be told, the bandwidth we consume is so big that VDI might bring its best improvements for us on that front.

Somewhat surprising was that Microsoft, whilst being really present at PubForum in Dublin, was nowhere to be seen at BriForum. Citrix was saving its best for their own conference (Synergy), I think. Too bad; I mean, when talking about VDI in 2011 we’re talking about Windows 7 for the absolute majority of implementations, and Citrix has a strong position in VDI, really giving VMware a run for their money. Why miss the opportunity? And yesterday at TechEd USA we heard about the HSBC story of a 100,000 seat VDI solution on Hyper-V http://www.microsoft.com/Presspass/press/2011/may11/05-16TechEd11PR.mspx.

On a side note, I wish I would/could have gone to PubForum as well. Should have done that :) Now, these musings are based upon what I see at my current place of endeavor. VDI has a time and place where it can provide significant operational and usage advantages that make the business case for it. Today, I’m not convinced this is the case for our needs at this moment in time. Looking at our refresh schedule, we’ll probably pass on a VDI solution for the coming one. But booting from VHD as a standard in the future… I’m going to look into that; it will be a step towards the future, I think.

To conclude, BriForum 2011 was a good experience, and its smaller scale makes for plenty of good opportunities for interaction and discussion. A very positive note is that most vendors & companies present were discussing real issues we all face. So it was more than just sales demos. Brian, nice job.

Hyper-V 3 & Windows 8, Musings on Hypervisors & Crystal Ball Time


I think Microsoft sales might be getting a headache from the ever increasing speed with which people are looking and longing for features in the “vNext” version of their products whilst they are still just getting people to adopt the current releases, but I like the insights and bits of information. It helps me plan better in the long term.

A lot of you, just like me, have been playing around with Hyper-V since the betas of Windows 2008. As I run Windows Server tweaked to act and look like a workstation, I wanted to move my virtualization solution on the desktop to Hyper-V as well. I use Windows Server as a desktop because it allows me to install the server roles and features for quick testing, screenshot taking, managing the lab, etc. during writing and documenting.

Now a lot of you will have run into some performance issues on the host related to the video card, the GPU. Ben Armstrong mentioned it on his blog and wrote a Knowledge Base article on it (http://support.microsoft.com/kb/961661). He later provided more insight into the cause of this behavior in the following blog post: http://blogs.msdn.com/b/virtual_pc_guy/archive/2009/11/16/understanding-high-end-video-performance-issues-with-hyper-v.aspx. It’s a good write-up explaining why things are the way they are and why this cannot be “fixed” easily.

For me this was a bummer, as I had a decent GPU in my workstation and I sometimes do need the advanced graphics capabilities of the card.

So when the first rumors about “Windows 8″ & “Hyper-V version 3” hit the internet, I was very happy to see the mention of Hyper-V being used in Windows 8 as a client hypervisor virtualization solution. See http://virtualization.info/en/news/2010/07/first-details-about-hyper-v-3-0-appear-online.html; this link was brought to my attention by Friea Berg from NetApp on Twitter (@friea). Now there is more to it than just my tiny needs and wishes: there is the integration with App-V and other functionality that the integration of Hyper-V in “MinWin” can offer. Have a look at the link and follow the source links if you can read French.

The thing is that Hyper-V in the client would mean that they will have fixed this GPU performance issue by then. They have to; otherwise those plans can’t work. As the code bases of the Windows client and server run in parallel, it should also be fixed on the server side. We’re used to richer functionality in desktop virtualization from VMware Workstation and Virtual PC. Fixing this also makes sense in another way: Microsoft could be moving forward with one virtualization solution on both the server and the desktop and gradually phase out Virtual PC. They can opt to provide richer functionality with extra features that might be unnecessary or even undesirable on a server but are very handy on a workstation or on a lab server. This is all pure speculation (crystal ball time) by me, but I’m pretty convinced this is where things are heading.

Combine this with the fact that by the time “Windows 8″ arrives most hardware in use will be much more capable of providing advanced virtualization features and enhancements, and in all aspects things are looking bright. So now I can dream of affordable 32 GB laptops with dual 8-core CPUs and a wicked high end GPU running Hyper-V.

By the way, VMware is also working on similar ideas to provide a true hypervisor on the desktop, I guess, as they seem to be abandoning VMware Server (no enhancements, no fixes, etc.), and I can also imagine them making VMware Workstation a true hypervisor to reduce product line development and support costs. Pure speculation, I know, especially given the confusing message around offline VDI, but never underestimate the ability of a company to change its mind when it’s practical for them. ;-)

Someone at Sun/Oracle must be smiling at all of this, especially as VirtualBox is getting richer and richer with memory ballooning, hot-add CPU capability (I like this and I want it in Hyper-V), etc., unless Microsoft and VMware totally succeed in making hosted virtualization a thing of the past. In the type 1 hypervisor space Oracle is consolidating what it bought: Virtual Iron (Xen) was killed almost immediately and the Sun xVM hypervisor is also dead. Both have been replaced by Oracle VM (Xen).

So as everyone seems to have good type 1 hypervisors that are ever improving, it might become less and less of a differentiator and more of a commodity that one day will be totally embedded in the hardware by Intel and AMD. The OS and software vendors then provide the management, the high availability features and the integration with their products. And if that is the evolution of things, where does that leave KVM (Linux) in the long run? Probably the world is big enough for both types. For the moment both types seem to be doing fine.

As I said, all of this is musings and crystal ball time. Dreaming is allowed on sunny, lazy Sunday afternoons. :-D