Hyper-V 3.0 Leaked Screen Shots From Windows 8 Create A Buzz


Well, last Monday, June 20th 2011, was quite an active day on Twitter thanks to some leaked Windows 8 screen shots that lifted a tip of the veil on Hyper-V 3.0 (also referred to as Hyper-V vNext). You can take a peek here (Windows Now by Robert McLaws) and here (WinRumors) to see for yourself.

Now Scott Lowe also blogged on this, but with some more detail. The list below is the one from Scott Lowe’s blog http://blogs.virtualizationadmin.com/lowe/2011/06/20/hyper-v-30-%e2%80%93-what%e2%80%99s-coming/ but I added some musings and comments to certain items.

  • Virtual Fibre Channel Adapter  ==> nice, I guess the competition from iSCSI was felt. How this will turn out with regards to SAN/driver/HBA support will be interesting, and there is a mention of a virtual Fibre Channel SAN in the screenshots …
  • Storage Resource Pools  & Network Resource Pools   ==> this could become sweet … I’m dreaming about my wish list feedback to Microsoft but without details I won’t speculate any further.
  • New .VHDX virtual hard drive format (Up to 16TB + power failure resiliency) ==> This is just plain sweet, we’re no longer bound by 2TB LUNs on our physical storage (SAN), now we can take that to the next level.
  • Support for more than 4 cores! (My machine has 12 cores) ==> I say “Bring it on!”
  • NUMA – Memory per Node, Cores per Node, Nodes per Processor Socket ==> Well, well … what will this translate into? Will it help deal with Dynamic Memory? Will it aid in the virtualization of SQL Servers (i.e. better support for scaling up; for now, scaling out works better there)?
  • Hardware Acceleration (Virtual Machine Queue & IPsec Offload)
  • Bandwidth Management ==> Ah, that would be nice :-)
  • DHCP Guard  ==> This is supposed to drop DHCP traffic from VMs “masquerading” as DHCP servers. Could be very useful, but we need details. Will a DHCP server need to be authorized? What about non-Windows VMs, do you add “good” DHCP servers to an allow list?
  • Router Guard  ==> same as above but for rogue routers. Drops router advertisement and redirection messages from unauthorized virtual machines pretending to be routers. So this sounds like an allow list as well.
  • Monitor Port (provides for monitoring of network traffic in and out of a virtual machine and forwards the information to a monitoring virtual machine)  ==> Do I hear some cheering network engineers?
  • Virtual Switch Extensions. So far, there appear to be two filters added: NDIS Capture LightWeight Filter and WFP vSwitch Layers LightWeight Filter.

All of this is pretty cool stuff and has many of us wanting to get our hands on the first beta :-) I’ve been running Windows Server tweaked as a desktop since Windows 2003, so I already have Hyper-V in that role, but hey, bring it on. I’m very eager to get started with this. I have visions of System Center Virtual Machine Manager 2012 and Hyper-V 3.0 combined with very capable recent SAN technology … :-D

Hyper-V Team at Microsoft Rocks


The Hyper-V team at Microsoft and the Belgian Hyper-V contact at Microsoft are a very communicative and responsive group of people. Every time I have brought a need, a question or a problem to their attention, they have taken it up and delivered a solution, an answer or a fix respectively. An example of this can be found in this blog post http://workinghardinit.wordpress.com/2010/01/29/microsoft-really-listens-enhances-nvspbind/ on functionality they added to nvspbind at our request.

More recently we ran into an issue with the Windows 2008 SP2 SKUs “Without Hyper-V”, which I blogged about in KB2230887 Hotfix for Dynamic Memory with Windows 2008 Standard & Web edition does not apply to without Hyper-V editions?. I also brought this to the attention of John Howard. The reply was swift and positive. The Dynamic Memory owner, Serdar, was on the case and worked with the Windows Sustained Engineering group to provide a re-release of the hotfix so it would support those “Without Hyper-V” SKUs. This has now been fixed, as announced in the TechNet forum thread here (just scroll to the latest post). The only negative points are that they forgot to mention it on the blog post and that you now need to call them to get the hotfix; why that is, I do not know.

UPDATE: Within 24 hours the hotfix became downloadable again at http://support.microsoft.com/kb/2230887 (v2).

Over the years I’ve been impressed with what they have delivered with Hyper-V out of the box. They’ve had some issues, but they recognized them early and fixed them fast. It has been a pleasure working with this technology for the last three years and I am now enjoying the benefits of Dynamic Memory with Windows Server 2008 R2 SP1. All I can say is that if you have a genuine interest in the technology and communicate clearly and politely with Microsoft personnel, they are very engaged to help you. I’m pretty pleased :-)

Consider CPU Power Optimization Versus Performance When Virtualizing


Over the past couple of years I’ve read, heard, seen and responded to reports of users dealing with performance issues when trying to save the planet with the power saving options on CPUs. As this is often enabled by default, they often don’t even realize it is in play. Now for most laptop users this is fine, and even for a lot of desktop users it delivers on the promise of less energy consumption. Sure, there are always some power users and techies who need every last drop of pure power, but on the whole life is good this way. So you reduce your power needs, help save the planet and hopefully some money along the way as well. And even when you’re filthy rich and money is no object to you whatsoever, you could still be in a place where no extra watts are available, due to capacity being maxed out or because they have been reserved for special events like the London Olympics, so keeping power consumption in check becomes a concern for you as well.

Now this might make good economic sense in a lot of environments (mobile computing), but in other places it might not work out that well. So when you have all this cool & advanced power management running in some environments, you need to take care not to turn your virtualization hosts into underachievers. Perhaps that’s putting it too strongly, but hey, I need to wake you up to get your attention. The more realistic issue is that people are running more and more heavy workloads in virtual machines, and that the hosts used for that contain more and more cores per socket, using very advanced CPU functionality and huge amounts of RAM. Look at these KB articles: KB2532917: Hyper-V Virtual Machines Exhibit Slow Startup and Shutdown and KB 2000977: Hyper-V: Performance decrease in VMs on Intel Xeon 5500 (Nehalem) systems. All this doesn’t always compute (pun intended) very well.

Most Hyper-V consultants will also be familiar with the blue screen bugs related to C-states, like You receive a "Stop 0x0000007E" error on the first restart after you enable Hyper-V on a Windows Server 2008 R2-based computer and Stop error message on a Windows Server 2008 R2-based computer that has the Hyper-V role installed and that uses one or more Intel CPUs that are code-named Nehalem: "0x00000101 – CLOCK_WATCHDOG_TIMEOUT", on top of the KB articles mentioned above. I got bitten by the latter one a few times (yes, I was a very early adopter of Hyper-V). Don’t start bashing Microsoft too hard on this; VMware and other vendors are dealing with their own C-state (core parking) devils (just Google for it), and read the articles to realize this is sometimes a hardware/firmware issue. A colleague of mine told me that some experts are advising to just turn C-states off in a virtualization environment. I’ll leave that to the situation at hand, but it is an area that you need to be aware of and watch out for. As always, and especially if you’re reading this in 2014, realize that all information has a limited shelf life based on the technology at the time of writing. Technology evolves, and who knows what CPUs & hypervisors will be capable of in the future? Also, these bugs have been listed on most Hyper-V blogs as they emerged, so I hope you’re not totally surprised :-)

It’s not just the C-states we need to watch out for; the P-states have given us some performance issues as well. I’ve come across some “strange” results in virtualized environments, ranging from “merely confused” system administrators to customers suffering from underperforming servers, both physical and virtual actually. All those fancy settings like SpeedStep (Intel) or Cool’n’Quiet (AMD) might cause some issues. Perhaps not in your environment, but it pays to check it out and be aware of them, as servers arrive with those settings enabled in the BIOS and Windows 2008 R2 uses them by default. Oh, if you need some reading on what C-states and P-states are, take a look at: C-states and P-states are very different

Some confusion can arise when virtual machines report less speed than the physical CPUs can deliver, worsened by the fact that sometimes it varies between VMs on the same host. As long as this doesn’t cause performance issues, this can be lived with by most people but the most inquisitive minds. When performance takes a dive, servers start to respond more slowly and apps wind down to a glacial pace; you see productivity suffer, which causes people to get upset. To add to the confusion, SCVMM allows you to assign a CPU type to your VMs as a hint to SCVMM to help out with intelligent placement of the virtual machines (see What is CPU Type in SCVMM 2008 R2 VM Processor Hardware Profile?), which confuses some people even more. And guess on whose desk that all ends up :-S
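If you want to take a quick look at that raw clock speed information yourself without firing up extra tools, a rough sketch like the one below (run it inside a guest and again on the host) compares what the OS reports as the current clock speed against the maximum, using the WMI Win32_Processor class. Treat it as a quick sanity check, not a full performance analysis.

```powershell
# Quick sanity check: compare current versus maximum clock speed as the OS sees it.
# Run this inside a guest and on the Hyper-V host to spot P-state trimming at a glance.
Get-WmiObject Win32_Processor |
    Select-Object Name, CurrentClockSpeed, MaxClockSpeed
```

If CurrentClockSpeed sits well below MaxClockSpeed while a workload is complaining, power management is worth a closer look.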

When talking performance on servers, we see issues that pitch power (and money, and penguin) savings against raw performance. We’ve seen some SQL Servers and other CPU-hungry GIS application servers underperform big time (15% to 20%) under certain conditions. How is this possible? Well, CPUs are trimmed down in voltage and frequency to reduce power consumption when the performance is not needed. The principle is that they will spring back into action when it is needed. In reality, this “springing back” into action isn’t that responsive. It seems that gradually trimming down or beefing up the CPU’s voltage and frequency isn’t that transparent to the processes needing it, probably because constant, real-time, atomic adjustments aren’t worth the effort or are technically challenging. For high performance demands this is not good enough and can lead to more money spent on extra servers and time spent on different approaches (code, design, and architecture) to deal with a somewhat artificial performance issue. The only time you’re not going to have these issues is when your servers are either running apps with mediocre to low performance needs, or when they are so hungry for performance that those CPUs will never be trimmed down; they just don’t get the opportunity to do so. There is a lot to think about here, and now add server virtualization into the mix. No, my dear application owner, Task Manager’s CPU information is not the real raw info you can depend on for the complete truth and nothing but the truth ;-) Many years ago CPU-Z was my favorite tool to help tweak my home PC. Back then I never thought it would become part of my virtualization toolkit, but it’s easy and faster than figuring it out with all the various performance counters.

Now don’t think this is an “RDBMS only” problem and that, since you’re a VDI guy or a GIS or data crunching guy, you’re out of the woods. VDI and other resource-hungry applications (like GIS and data crunching) that show heterogenic patterns in CPU needs can suffer as well, and you’d do well to check on your vCPUs and pCPUs and how they are running under different loads. I actually started looking at SQL Server after first seeing the issue with a freaked-out GIS application running at 100% vCPU while the pCPUs were all relaxed about it. It made me go … “hang on, I need to check something”. That’s when I ran into a TechNet forum post on Hyper-V Core Parking performance issues, leading to some interesting articles by Glenn Berry and Brent Ozar, who are dealing with this on physical servers as well. The latter article even mentions an HP iLO card bug that prevents the CPU from throttling back up completely. Ouch!

Depending on your findings and needs, you might just want to turn SpeedStep or Cool’n’Quiet off, either in the BIOS or in Windows. Food for thought: what if one day some vendors decide you don’t need to be able to turn that off, and it disappears from your view and ultimately from your control … The “good enough is good enough” world can lead to a very mediocre world. Am I being paranoid? Nope, not according to Ron Oglesby (you want VDI reality checks? Check him out) in his blog post SpeedStep and VDI? Is it a good thing? Not for me., where Cisco UCS 230 blades are causing him problems.

So what do I do? Well, to be honest, when the need for stellar and pure raw performance is there, the power savings go out the window whenever I see that they’re causing issues. If they don’t, fine, then they can stay. So yes, this means no money saved, no reduction of cooling costs, and penguins (not Linux, but those fluffy birds at the South Pole that can’t fly) losing square footage of ice surface. Why? Because the business wants and needs the performance, and they are nagging me to deliver it. When you have a need for that performance you’ll make that trade-off, and it will be the correct decision. Their fancy new servers performing worse than, or no better than, what they replaced and that virtualization project getting bashed for failing to deliver? Ouch! That is unacceptable. But, to tell you the truth, I kind of like penguins. They are cute. So I’m going to try and help them with Dynamic Optimization and Power Optimization in System Center Virtual Machine Manager 2012. Perhaps this has a better chance of providing power savings in performance-critical setups than the advanced CPU capabilities do. With this approach you have nodes running at full power, while distributing the load and shutting down entire nodes when there is overcapacity. I’ll be happy to report how this works out in real life. But do mind that this is very environment dependent and you might not have any issues whatsoever, so don’t try to fix what is not broken.

The thing is, in most places you can’t hang around for many weeks fine-tuning every little configuration option in the CPUs in collaboration with developers & operations. The production needs, costs and time constraints (by the time they notice any issues, “play time” has come and gone) just won’t allow for it. I’m happy to have those options where I have the opportunity to use them, but in most environments I’ll stick with easier and faster fixes due to those constraints. Microsoft also tells us to keep an eye on power saving settings in the KB article Degraded overall performance on Windows Server 2008 R2 and offers some links to more guidance on this subject. There is no “one size fits all” solution. By the way, some people claim that the best performance results come from leaving SpeedStep on in the BIOS and disabling it in Windows. Others swear by disabling it in the BIOS. I just tend to use what I can where I can and go by the results. It’s all a bit empirical, and this is a cool topic to explore, but as always time is limited and you’re not always in a position where you can try it all out at will.
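One of those easier and faster fixes is simply switching the host to the High Performance power plan with powercfg; the GUID below is the well-known built-in High Performance plan, but run powercfg -list first to confirm what is available on your box.

```powershell
# List the available power plans, then activate the built-in High Performance plan.
powercfg -list
powercfg -setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
```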

In the end it comes down to making choices. This is not as hard as you think, as long as you make the right choices for the right reasons. Even with physical desktops that are Wake-on-LAN (WOL) enabled, to allow users to remotely boot them when they want to work from home or while traveling, I’ve been known to tell the bean counters that they had to pick one of two options: have all options available to their users, or save the penguins. You see, WOL with a machine that has been shut down works just fine. But when machines go into hibernation or standby, you have to allow the NICs to wake the computer from hibernation or standby, or WOL won’t work and the users won’t be able to remotely connect to them. See more on this at http://technet.microsoft.com/en-us/library/ee617165(WS.10).aspx. But this means they’ll wake up a lot more often than necessary, triggered by non-targeted network traffic. So what? Think of the benefits! An employee wanting to work a bit at 20:00 on her hibernating PC at work, so she can take a couple of hours to take her kid to the doctor the next morning, can do so. That’s priceless, as that mother knows what a great boss and company she works for.
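For the curious, a WOL magic packet is nothing more than six 0xFF bytes followed by the target’s MAC address repeated 16 times, sent to the broadcast address (typically on UDP port 7 or 9). A minimal PowerShell sketch, with a made-up MAC address you would replace with the real one, could look like this:

```powershell
# Sketch: build and send a Wake-on-LAN magic packet.
# The MAC address below is a placeholder; use the target NIC's real address.
$mac    = "00-11-22-33-44-55"
$bytes  = $mac -split "-" | ForEach-Object { [byte]("0x" + $_) }
$packet = (,[byte]0xFF * 6) + ($bytes * 16)   # 6 x 0xFF + MAC x 16 = 102 bytes

$udp = New-Object System.Net.Sockets.UdpClient
$udp.Connect([System.Net.IPAddress]::Broadcast, 9)
[void]$udp.Send($packet, $packet.Length)
$udp.Close()
```

Remember that this only works if the BIOS and NIC are configured to honor the packet in the power state the machine is in.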

Heads Up On Microsoft Security Bulletin MS11-047: Vulnerability in Hyper-V Could Allow Denial of Service (KB2525835)


Well, it’s Patch Tuesday again, and here’s a quick heads-up for all people using Hyper-V. I would like to point your attention to http://www.microsoft.com/technet/security/bulletin/MS11-047.mspx. This security bulletin deals with a vulnerability in Hyper-V that could allow a denial of service, as described in knowledge base article 2525835, which can be found here: http://support.microsoft.com/kb/2525835. As you can read, the severity rating is important, not critical. If you want to manually download the update you can get it here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=c9c6c36d-a455-42f7-b7d4-9fb9824c07cb

This is, if I’m not mistaken, only the third security fix for Hyper-V since the Windows 2008 era. That is not a bad track record at all! Now look at the information available under mitigating factors: an attacker must have valid logon credentials and be able to log on locally to exploit this vulnerability. The vulnerability could not be exploited remotely or by anonymous users. Now, that isn’t too much to ask from your virtualization infrastructure, I hope. If it is, we need to talk :-) At the time of writing, no known exploits are out in the wild.

So review this and plan to deploy it at your earliest available maintenance window. While the update requires a restart, when you’re running a cluster with Live Migration you can deploy it with no downtime for the guests whatsoever by moving them off each node before you patch and reboot it.
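If you want to verify afterwards that a host actually received the update, Get-HotFix (available on Windows Server 2008 R2) can check for the KB number. This is just a quick sketch; WSUS or System Center reporting is the proper way in larger shops.

```powershell
# Check whether the MS11-047 update (KB2525835) is installed on this host.
# Returns nothing if the hotfix is absent.
Get-HotFix -Id KB2525835 -ErrorAction SilentlyContinue
```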

Microsoft Offers Operations Manager Community Evaluation Program (2012 CEP)


At TechEd 2011 Microsoft announced the OpsMgr 2012 Community Evaluation Program (CEP) and is now inviting everyone to apply to take part in it during the public beta time frame. They position a CEP as follows:

Many of you are likely familiar with Microsoft TAP’s, Technology Adoption Programs, where a small pool of customers partner with our engineering teams to preview and provide feedback on pre-beta software. TAP participants provide our engineers with some early guidance and validation of next generation software, prior to us releasing publicly-available beta software. TAP is a great program, but it starts very, very early on and usually fills up quick (and waay before beta). The OpsMgr 2012 TAP has been very active in helping us with early builds, but it is unfortunately full.

The Community Evaluation Program (CEP) has recently been created to provide a broader range of customers with an in-depth experience with our upcoming beta software.

Essentially, a CEP is an organized way of bringing our subject matter experts (SMEs) from our product teams, our community (like MVPs and experienced users) and those interested in taking a deep look at our v.Next software for evaluation and preparation for deployment purposes.

This is good news. We’ve got the SCVMM 2012 beta running in the lab, and it will be nice to get our hands on the SCOM 2012 beta as well. For an overview of the Operations Manager 2012 CEP, take a look at the TechNet blog post http://blogs.technet.com/b/momteam/archive/2011/06/02/now-enrolling-for-the-operations-manager-2012-cep.aspx and the OM12 CEP overview datasheet.

If this is to your liking, you can get all the information you need here and follow this link to apply for the CEP: Apply for the OpsMgr 2012 CEP. Somewhere in June the accepted participants will get the SCOM 2012 topic schedule & access to the CEP discussion forums. If you have questions about all this, you can send them to OMCEP@microsoft.com.

Some Feedback On How to defrag a Hyper-V R2 Cluster Shared Volume


Hans Vredevoort recently posted a nice blog entry on the defragmentation of Cluster Shared Volumes and asked for some feedback & experiences on this subject. He describes the process used and the steps taken to defrag your CSV storage, and notes that there may be third-party products that can handle this automatically. Well yes, there are. Two of the best-known defragmentation products support Cluster Shared Volumes and automate the process described by Hans in his blog. Calvin made a very useful suggestion to use Redirected Access instead of maintenance mode. This is what commercial tools like Raxco PerfectDisk and Diskeeper also do.

As the defragmentation of Cluster Shared Volumes requires them to be put into Redirected Access, you should not have “always on” defragmentation running on a clustered Hyper-V node. Sure, the software will take care of it all for you, but the performance hit is there and is considerable. I might just use this point as yet another plug for 10 Gbps networks for CSV :-) Also note that the defragmentation has to run on the current owner or coordinator node. Intelligent defragmentation software should know which node to run the defrag on and move the ownership to the node that is running the defragmentation, or just run it on all nodes and skip the CSV storage a node isn’t the coordinator for; the latter isn’t that intelligent. John Savill did a great blog post on this for Windows IT Pro Magazine before Windows 2008 R2 went RTM, where he uses PowerShell scripts to move the ownership of the storage to the node where he’ll perform the defragmentation and retrieves the GUID of the disk to use with the defrag command. You can read his blog post here and see how our lives have improved with the commands he mentioned would be available in the RTM version of W2K8R2 (Repair-ClusterSharedVolume with the –Defrag option).
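To make that concrete, here is a minimal sketch of the RTM-era PowerShell flow: move the CSV’s ownership to the node you are on and then kick off the defragmentation, which puts the volume into Redirected Access. “Cluster Disk 1” is a placeholder for your own CSV name.

```powershell
# Sketch: defragment a CSV from its coordinator node (W2K8 R2 RTM cmdlets).
Import-Module FailoverClusters

# Make this node the owner/coordinator of the CSV, then defrag it.
Move-ClusterSharedVolume "Cluster Disk 1" -Node $env:COMPUTERNAME
Repair-ClusterSharedVolume "Cluster Disk 1" -Defrag
```

Run it in a maintenance window; the volume stays online but in Redirected Access, with the performance hit that implies.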

For more information on Raxco PerfectDisk you can take a look at the Raxco support article, but the information is rather limited. You can also find some more information from Diskeeper on this subject here. I would like to add that you should use defragmentation intelligently and not blindly. Do it with a purpose and in a well-thought-out manner to reap the benefits. Don’t just do it out of habit because you used to do it in DOS back in the day :-)

To conclude, I’ll leave you with some observations from my lab, taken during the defragmentation of a Hyper-V cluster node.

As you can see in the first screenshot, the CSV storage is put into Redirected Access, and yet our machines remain online and available. This is because we started the defrag on the Hyper-V cluster node that owns the CSV. In the last screenshot you can see that the guest files are indeed being defragmented, in this case the VHD for the guest server Columbia (red circle at the bottom).