Carsten Rachfahl Interviews Me On Windows Server 2012 Storage Improvements


Carsten Rachfahl, a German Hyper-V Expert, friend and fellow MVP, interviewed me after the joint MVP effort at TEC 2012 in Barcelona. The subject was storage in Windows Server 2012. We found a great setting in the garden and got into quite a nice discussion on the subject.

It’s no surprise to anyone, I guess, that I’m very enthusiastic about what Microsoft is doing with storage at all levels in Windows Server 2012, and about what it’s trying to achieve for us customers in terms of cost, performance and reliability. It was a lot of fun to do, and you can see the sparkle in our eyes at many moments during this interview. Yes, working is important for many reasons, but when you can enjoy your work and have fun whilst doing it, life is pretty good :-). So enjoy, we certainly did.


Altaro Backup for Hyper-V Has Gifts for the Festive Season


Here’s an early X-Mas gift from Altaro. They are giving away 50 free licenses of their desktop backup solution to all Hyper-V admins until December 24th, 2012. Altaro is better known for their cost-effective and capable Hyper-V Backup product.

There is no catch. Now, there is no such thing as a free lunch in life, but there are some very decent meals to be had at very democratic pricing. This is one such case. All you need to do is send them a screenshot of Hyper-V in your environment that proves you’re really using Hyper-V. I guess that means I qualify, given the number of Hyper-V related screenshots on my blog ;-). I’m going to check it out for sure.

What do you get? 50 licenses of their desktop backup solution ($2,000 worth of software). You’re free to use them in your company, at home, or as a gift to friends and family. 50 licenses is something that a lot of companies using Hyper-V in the SMB market can leverage to protect their desktops, so that’s a pretty nice gift.

If you’re interested you can go to http://www.altaro.com/hyper-v/50-free-pc-backup-licenses-for-all-hyper-v-admins

There is more information about Altaro Hyper-V Backup at http://www.altaro.com/hyper-v/ and http://www.altaro.com/hyper-v-backup/?LP=Xmas. If you’re an SMB shop in need of easy to use, affordable backup software for Hyper-V, and you want one with full support for all features in Windows Server 2012, you should try them out. In that respect they were very fast to market, beating most if not all competitors I know of (a lot of them still don’t have that support). They are also a non-aggressive vendor, which is something I appreciate.

Microsoft Management Summit 2013 Registration opens on December 3rd, 2012


Just as a heads up to all people planning to attend the Microsoft Management Summit 2013 (MMS 2013): this blog post is to let you know that registration opens on December 3rd, 2012.

[screenshot]

So, I’d keep an eye out for the MMS 2013 site and register as soon as you get the opportunity. This event has the tendency to sell out fast.

Shared Nothing Live Migration Leverages SMB 3.0 Under the Hood


Shared Nothing Live Migration

By now most of you will have heard about the Shared Nothing Live Migration capabilities introduced with Windows Server 2012 Hyper-V. If not, I suggest you check it out over here and then come back for some extra insight into how it works.

Shared Nothing Live Migration is not magic, however. It is made possible by the fact that it relies on some of the new capabilities SMB 3.0 in Windows Server 2012 brought us. Once you know this, you also realize that this can be quite fast. The reason is that you can design the network for Shared Nothing Live Migration with 10Gbps or better, SMB Multichannel and RDMA for unprecedented throughput. Yup :-), if you invest in setting up the networking right, the remaining bottlenecks might be the amount of storage IO you can handle while reading from the source and writing to the target, or the CPU load you put on your hosts. Windows will protect you from draining your host beyond reason, by the way.
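For reference, once the prerequisites below are in place, the whole move can be kicked off from PowerShell as well. A minimal sketch (the VM name, host name and path are hypothetical examples, and the Hyper-V module is assumed):

  # Move a running VM, including all its storage, to another Windows Server 2012 host
  Move-VM -Name "TestVM01" -DestinationHost "TARGETHOST" `
          -IncludeStorage -DestinationStoragePath "D:\HyperV\TestVM01"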

Making Shared Nothing Live Migration Work

You need to set it up, of course, and do it right. Here’s a list of steps you need to perform or check on every Hyper-V host involved; a PowerShell sketch follows the list.

  1. Enable incoming and outgoing live migrations on all involved Hyper-V hosts, otherwise it will not work. If your hosts are part of a cluster, this is taken care of for you.
  2. Select an authentication protocol (CredSSP or Kerberos).
    Kerberos authentication allows you to live migrate VMs without having to log on to the source host itself. Kerberos does require you to configure constrained delegation in Active Directory (don’t go for "Trust this computer for delegation to any service"; follow the principle of least privilege).
  3. Set the number of simultaneous live migrations. Experiment to find the best value for your environment.
  4. Set the network(s) for incoming live migrations. It’s best to design this and not just use any network.

See Keith Mayer’s excellent blog for more details.
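If you’d rather script those four steps than click through the GUI, the Hyper-V module has you covered. A sketch with illustrative values, to be run on every host involved:

  Enable-VMMigration                                                 # step 1: allow live migrations
  Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos     # step 2: CredSSP or Kerberos
  Set-VMHost -MaximumVirtualMachineMigrations 4 `
             -MaximumStorageMigrations 4                             # step 3: tune to your environment
  Add-VMMigrationNetwork "10.10.180.0/24"                            # step 4: designated network(s)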

Constrained Delegation

Shared Nothing Live Migration needs some prep work, security wise, before it will work. In Active Directory you need to set up some constrained delegation permissions. To some people the concept of constrained delegation is brand new, but if you’ve been deploying multi-tiered web applications in your environment, this is a cookie you’ve dealt with many times before. It’s the same approach you need to get a web client using Windows authentication to talk via an IIS web app or service to a SQL Server database and/or read file data from somewhere; if so, you’ve configured this plenty of times.

Use an account to perform the Shared Nothing Live Migration that has administrator privileges on all computers involved. While you can use groups in AD to make your life and permission management easier when it comes to granting share permissions & NTFS rights on folders, it doesn’t work that way with constrained delegation. Groups cannot be used here, so you’ll need to use individual accounts. PowerShell scripting can help lessen the work if you have many Hyper-V hosts involved. In large environments (up to 64 nodes!) this inundates the constrained delegation tab with computer names, so PowerShell really is your friend here.

On each computer object you need to set the delegation permissions for the CIFS service and the Microsoft Virtual System Migration Service to all other computers you want to involve in Shared Nothing Live Migration as a source or a target.
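With many hosts, a sketch like the one below beats clicking through the delegation tab. It assumes the ActiveDirectory module; the host names are hypothetical and you’d repeat this for every source/target combination:

  Import-Module ActiveDirectory
  # Allow host TANGO to delegate to the CIFS and migration services on host ZULU
  $services = "cifs/ZULU.contoso.com", "cifs/ZULU",
              "Microsoft Virtual System Migration Service/ZULU.contoso.com",
              "Microsoft Virtual System Migration Service/ZULU"
  $tango = Get-ADComputer "TANGO"
  Set-ADObject -Identity $tango.DistinguishedName -Add @{"msDS-AllowedToDelegateTo" = $services}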

IMPORTANT! Hey, why do we need CIFS constrained delegation here? Well, indeed, because Shared Nothing Live Migration under the hood leverages SMB 3.0. It creates a temporary file share on the target to get the job done :-)! So once you realize that Shared Nothing Live Migration uses SMB 3.0 shares to do its magic, it becomes obvious why these constrained delegation permissions for CIFS are needed in Active Directory.

Visualizing the SMB 3.0 share in action

At the source server (ZULU), after starting the Shared Nothing Live Migration, we can see that we have a connection to a share on the target server. That share is named after the source server, with an ID appended, like ZULU.3341302342$. So it’s a hidden share.

[screenshot]

On the target server we run Get-SmbSession | fl and see that the source computer indeed has two sessions open on the target server.

[screenshot]

Let’s see if a share is created, using Get-SmbShare on the target. Yes, there is:

[screenshot]

In Computer Management it shows up like this on the target server:

[screenshot]

In Explorer you can see this as a $VSM$ folder in the root of C:, which has a subfolder with the name of the source server and an ID, like ZULU.2541288334$. This subfolder is shared (hidden) and contains a shortcut to the volume where the selected target folder resides; this could be C: or D: on local storage (DAS), shared storage (CSV) or an SMB 3.0 share. In the screenshot below the folder doesn’t match up to the share name, as they were taken from different Shared Nothing Live Migrations.

[screenshot]

Security wise we’re meant to keep our hands off, and the security settings reflect this ;-). But if you take ownership you can go peek at what’s in there, when writing a blog post for example. We indeed saw the copied disk of the VM being live migrated grow in the selected target folder.

[screenshot]

[screenshot]
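If you want to catch this in the act yourself, kick off a Shared Nothing Live Migration and run the in-box SMB cmdlets while the copy is in flight (Get-SmbConnection is the obvious candidate for the source side; output will obviously differ per environment):

  Get-SmbConnection          # on the source: the connection to the hidden share on the target
  Get-SmbSession | fl        # on the target: the sessions the source server has open
  Get-SmbShare               # on the target: the temporary hidden share, e.g. ZULU.3341302342$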

Conclusion

I find it pretty cool to see how this all works under the hood. I hope you found this educational and interesting as well. It’s a testimonial to how SMB 3.0 can be leveraged for all kinds of interesting scenarios.

KB2770917 Updating Host & Guest Integration Services Components – Most Current Version Depends on Guest OS


After installing http://support.microsoft.com/kb/2770917 on Windows Server 2012 Hyper-V hosts, the integration services components are upgraded from 6.2.9200.16384 to 6.2.9200.16433. Windows Server 2012 guests get that same update and as such also the newer integration services components. Guests with an older OS need a different approach, so I turned to all the great PowerShell support now available for Hyper-V to automate this. Pretty pleased with the results of our adventures in PowerShell scripting, I let the script loose on a Hyper-V cluster dedicated to test & development. There are some virtual machines on there running Windows 2003 SP2 (x64) and Windows XP SP3 (x86). Guess what: after running my script and verifying the integration services version, I saw that those VMs still report version 6.2.9200.16384. No update. Didn’t my new scripting achievement "take" on those older guests?
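As an aside, checking what each VM reports is a one-liner, as Get-VM exposes an IntegrationServicesVersion property in Windows Server 2012:

  Get-VM | Select-Object Name, State, IntegrationServicesVersion |
      Sort-Object IntegrationServicesVersion | Format-Table -AutoSize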

So I try the install manually and this is what I get:

[screenshot]

Why is there no upgrade for these guests? Is it not needed, or do I have an issue? So I mount the ISO and dig around in the files to find a clue in the dates:

[screenshot]

It looks like there are indeed no updated components in there for Windows XP/W2K3. So then I look at the following registry key on the host, where I normally use the Microsoft-Hyper-V-Guest-Installer-Win6x-Package value to find out what integration services version my hosts are running:

[screenshot]

Bingo, there it seems indicated that we indeed need version 6.2.9200.16384 for XP/W2K3 and version 6.2.9200.16433 for W2K8(R2)/W2K12 and Vista/Windows 7/Windows 8. Cool, but I had to check whether this was indeed as it should be, and I’m happy to confirm all is well. Ben Armstrong (http://blogs.msdn.com/b/virtual_pc_guy/) confirmed that this is how it should be. There was an update needed for backup that only applied to Windows 8 / Windows Server 2012 guests. As this fix was in a component common to Windows Server 2008 and later, those all got the update. But for the older OS versions this was not the case and hence no update is needed, which is reflected in all of the above. In short, this means your XP SP3 & W2K3 SP2 VMs are just fine running the 6.2.9200.16384 version of the integration services and are not in any kind of trouble.

This does leave me with another task. I was planning to do enhancements to my script, like feedback on progress, some logging, and better logic for clustered versus non-clustered environments, but now I also have to address this possibility and verify, using the registry keys on the host, which IC version I should check against per guest OS version. Checking just against the one related to the host isn’t good enough :-).
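As a rough sketch of that extra check: read the expected versions from the host registry and flag any VM that matches neither. The registry path below is the key from the screenshot above; verify it (and the Win5x value name) on your own hosts before trusting this:

  $key = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\GuestInstaller\Version"
  $expected = (Get-ItemProperty -Path $key)."Microsoft-Hyper-V-Guest-Installer-Win5x-Package",
              (Get-ItemProperty -Path $key)."Microsoft-Hyper-V-Guest-Installer-Win6x-Package"
  # Flag VMs whose reported IC version matches neither of the expected versions
  Get-VM | Where-Object { $_.IntegrationServicesVersion -and
                          "$($_.IntegrationServicesVersion)" -notin $expected } |
      Select-Object Name, IntegrationServicesVersion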

Windows Server 2012 VHDX Thin Provisioning Benefits Explored


Thin Provisioning With Hyper-V

Windows Server 2012 provides thin provisioning at the virtual layer via the VHDX file format. It also provides it at the physical storage layer when your storage supports it. For the latter, don’t forget that this also includes Storage Spaces! So even in environments where budgets are really tight you can leverage this at the physical storage layer now. It’s not just for the feature-rich SAN owners anymore :-).

Even if you use a storage subsystem that does not support thin provisioning at the physical layer, you will benefit from this mechanism when you use dynamic VHDX files. Not only will these grow less, but during shutdown they shrink by the size of the empty blocks. Pretty cool! I do however see a potential risk of increased fragmentation. This has a negative impact on performance and needs defragmentation to remediate, which also has an impact on IO performance. How much of a concern this is depends on your environment and needs. We’ll also have to see in real life how well dynamic VHDX files live up to the performance improvements they got in Windows Server 2012, which should entice more people to use them. You have proponents and naysayers. I’m selective and let the circumstances and needs/requirements decide.

Thin Provisioning at the Virtual Layer

You can take a look at TechEd 2012 session VIR301 by Senthil Rajaram to see how VHD versus VHDX behaves in regard to thin provisioning. I will not repeat all of that here. What I am going to do is look at some other situations.

Important note: You get this UNMAP feature automatically in Windows. There’s no need to manually run the Optimize-Volume command we’ll use in the scenarios below. It runs automatically for us when the standard defrag scheduled task executes, or via the NTFS checkpointing mechanism that sends the info down every 5 minutes. These will normally take care of all this. But as the defrag task "only" runs weekly by default, you might want to tweak it or create your own scheduled task if your environment needs that. In demos and labs we’re rather impatient geeks, so even the 5-minute interval of the checkpointing mechanism is too long; there we run "Optimize-Volume -DriveLetter X -ReTrim" to get immediate gratification while testing. In real life it’s a zero-touch feature; you don’t need to babysit it.
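For completeness, here’s the impatient-geek command, plus a way to peek at the weekly task that normally takes care of this (drive letter illustrative):

  Optimize-Volume -DriveLetter X -ReTrim -Verbose                      # send the UNMAP info down right now
  Get-ScheduledTask -TaskName "ScheduledDefrag" | fl TaskPath, State   # the built-in weekly maintenance task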

Fixed VHDX Disk

Apart from the fact that you’ll have no shrink on shutdown, this optimization does nothing for the file size of a fixed VHDX. The only benefit here is that the UNMAP can be passed down to the physical storage, where it helps if that storage supports it. At the virtual layer it doesn’t matter for a fixed-size VHDX disk.

Dynamic VHDX Disk

You’ll profit from the savings in storage when the dynamically expanding VHDX file doesn’t need to grow as much. This reduces the overhead of expanding the disk, which is a performance benefit, and it even helps your non-thin-provisioning-capable storage go further.
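As a quick illustration (path and size are arbitrary): create a dynamic VHDX and compare its virtual size to what it actually consumes on disk via Get-VHD:

  New-VHD -Path "D:\VHDs\Test.vhdx" -Dynamic -SizeBytes 50GB
  Get-VHD -Path "D:\VHDs\Test.vhdx" | Select-Object VhdType, Size, FileSize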

Watch Senthil’s presentation (from around minute 20) to see the benefits in action. With VHDX, if you "shift delete" the files inside the VM, then run "Optimize-Volume -DriveLetter X -ReTrim" or the defrag job, and then copy new files, you’ll see that there is no additional file growth as long as you don’t exceed the current size of the VHDX. If you don’t do this, both the VHD and the VHDX file will grow.

But there is another potential benefit that makes this important. Even with the block sizes that were increased to cause less overhead when growing dynamic VHDX files, we still have to deal with fragmentation of the VHDX files on the storage where they live. The better the empty blocks are reused, the less the dynamic files have to grow, which means less opportunity for fragmentation. Whether this compensates for the potential extra fragmentation caused by the shrinking at shutdown, I don’t know. Whether the performance improvements for dynamic disks are good enough will depend on your environment and needs. Defragmentation can help mitigate this, but IO performance suffers during the defragmentation process. Do it, or better, schedule it, wisely!

Virtual SCSI controller attached versus virtual IDE controller attached

What about a guest (boot) VHDX disk attached to an IDE controller? I see a lot of one-disk virtual machines out there, so it would be a pity if this didn’t work for those and only for the ones that have extra vSCSI disks attached. So let’s test this.

[screenshot]

Below you see the disk size of the VHD and VHDX files and what type of controller they are attached to. As you can see, they each had one or two 3.3 GB ISO files copied to them, which were then "shift deleted". The size of the VHD(X) files still reflects the amount of data they stored.

[screenshot]

Now, after running the defrag job or executing "Optimize-Volume -DriveLetter X -ReTrim" inside the VM, you’ll see the results below once you shut down the VM:

[screenshot]

So as it turns out, the thin provisioning benefits work with IDE attached VHDX files as well! Yes, inside a Windows Server 2012 virtual machine you get UNMAP support with IDE attached VHDX disks too. Think of hosting companies with many thousands of single-disk virtual machines that can leverage this as well. This is something you might not expect after having watched the video, as there they only talk about virtual SCSI/FC controllers.
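If you want to reproduce the test, the sequence boils down to this (drive letters and paths are illustrative for a lab, not a prescription):

  # Inside the guest: fill the volume, delete the data, send the UNMAP info down
  Copy-Item "C:\ISOs\Big.iso" -Destination "X:\"
  Remove-Item "X:\Big.iso"
  Optimize-Volume -DriveLetter X -ReTrim
  # On the host, after shutting down the VM: check what the VHDX file now consumes
  Get-VHD -Path "D:\HyperV\TestVM\Disk0.vhdx" | Select-Object VhdType, FileSize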

Conclusion

Tests like these are a bit artificial, but they do demonstrate how the technology works. In real life it will translate into efficiencies over time, based on the data creation and deletion in your VHDX files. Think about hundreds or thousands of virtual machines in your environment leveraging this mechanism. Over time, at that scale, the amount of storage consumed will be reduced, which results in better economics. Now leverage that together with thin provisioning support in Storage Spaces and you see that there are some very interesting scenarios to investigate. Somehow it’s starting to look like you can have your cookie and eat it too :-). You don’t need an expensive SAN to get these efficiencies at the physical storage layer, but if you have one and used to have to mess around with sdelete or agents, it’s easy to see the benefit you get here as well.

First Windows Server 2012 Cluster/Hyper-V related Patches


With November 2012 Patch Tuesday having come and gone, the first hotfixes (it’s a cumulative update) related to Windows Server 2012 are available. These are relevant to both Hyper-V & Failover Clustering (Scale-Out File Server). There is also an older hotfix, brought to our attention, that relates to certain versions of Windows Server 2008/R2 domain controllers and is also important for Windows Server 2012 clustering. None of these are urgent/critical and they only apply in specific circumstances, but it’s good to keep up with these and protect your environment.

Windows 8 and Windows Server 2012 cumulative update: November 2012

http://support.microsoft.com/kb/2770917: A collection of small changes. For HA VMs (Hyper-V on a cluster) there are three minor CSV file system fixes in this hotfix. It improves clustered server performance and reliability in Hyper-V and Scale-Out File Server scenarios, and improves SMB service and client reliability under certain stress conditions.

Error code when the kpasswd protocol fails after you perform an authoritative restore: “KDC_ERROR_S_PRINCIPAL_UNKNOWN”

http://support.microsoft.com/kb/976424: Install this on every domain controller running Windows Server 2008 Service Pack 2 or Windows Server 2008 R2 in order to be able to add a Windows Server 2012 failover cluster. It is included in Windows Server 2008 R2 Service Pack 1, so just check whether you need this fix in your environment or not.

I’m happy to see Microsoft acting fast on these issues, even the non-critical ones, to serve & protect their customers’ deployments.

Dell Storage Forum 2012 Paris – Fluid Forward Think Tank


Thanks to some great people at Dell in Germany (yes, you Florian) and Belgium, and of course Alison Krause (@AlisonatDell), Maryna Frolova (@MarineroF) and Stephanie Woodstrom, I got invited to attend the "Fluid Forward Think Tank" at the Dell Storage Forum in Paris.


We had a healthy variation of customers, partners, consultants and Dell employees discussing various aspects of IT related to storage. The task of herding the cats fell upon the shoulders of Simon Robinson (@simonrob451), an analyst and VP at 451 Research, a firm that deals with storage and information management. I for one think he did so brilliantly. The interactive discussion was streamed live, and if you missed it you can click on this live stream link to look at our ramblings :-)

I pitched some of my dreams about leveraging all the new mobility features, as well as the high to continuous availability that is being enabled with Windows Server 2012 Hyper-V on inherently unreliable components, and the opportunities these present to us customers and to storage vendors.

Here’s the gang around the table:

It was a fun, educational discussion, as the mix of backgrounds, industries and job functions was diverse enough to address all sides of the storage story: the good, the bad and the ugly. We gave them some food for thought, I think. The folks at Dell can now take this back to Austin and reflect on it all. If need be, I’ll drop by some day to provide some feedback, and remember @WarrenByle, I’d like to try out that STI of his ;-). After an interview I ran off to a Compellent customer panel to learn something and provide some feedback on our first experiences.

Windows Server 2012 Deduplication Results In A Small Environment


There is a small environment that provides web presence and services. In total there are about 20 production virtual machines. These are all backed up to a Transparent Failover File Share on a Windows Server 2012 cluster that is used to host all the infrastructure and management services.

The LUN/volume for the backups has about 5.5 TB of storage available. The folder layout is shown in the screenshot below. The backups are run "in guest" using native Windows Backup, which has the WindowsImageBackup subfolder as its target. Those backups are then archived to an "Archives" folder. That archive folder is the one that gets deduplicated; the WindowsImageBackup folder is excluded.

[screenshot]

This means that basically the most recent version is not deduplicated, guaranteeing the fastest possible restore times at the cost of some disk space. All older (> 1 day) backup files are deduplicated. We achieve the following with this approach (a configuration sketch follows the list):

  • It provides us with enough disk space savings to keep archived backups around for longer in case we need ’em.
  • It also provides enough storage to back up more virtual machines while still maintaining a satisfactory number of archived backups.
  • Any combination of the above two benefits can be balanced against the business needs.
  • It’s a free, zero-cost solution.
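For the record, such a setup takes only a few of the in-box cmdlets once the Data Deduplication feature is installed. A minimal sketch (drive letter and folder path are illustrative for the layout described above):

  Enable-DedupVolume -Volume "E:"
  Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 1 `
                  -ExcludeFolder "E:\Backups\WindowsImageBackup"
  Start-DedupJob -Volume "E:" -Type Optimization    # or just wait for the built-in schedule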

The Results

About 20 virtual machines are backed up every week (small delta and lots of stateless applications). As the optimization runs, we see the savings grow. That’s perfectly logical: the more backups we make of virtual machines with a small delta, the more deduplication can shine. So let’s look at the results using Get-DedupStatus | fl

[screenshot]

A couple of weeks later it looks like this.

[screenshot]

Give it some more months, with more retained backups, and I think we’ll keep this around 88%-90%. From tests we have done (ddpeval.exe) we think we’ll max out at around 80% savings rate, but it’s a bit less here overall because we excluded the most recent backups. Guess what, that’s good enough for us ;-). It beats buying extra storage or paying a wad of money for disk deduplication licenses from some backup vendor or appliance. Just using the built-in deduplication mechanism in Windows Server 2012 saved us a bunch of money.

The next step is to also convert the production Hyper-V cluster to Windows Server 2012, so we can do host-based backups with the native Windows Backup that now supports Cluster Shared Volumes (another place where that 64TB VHDX size can come in handy, as Windows Backup now writes to VHDX).

Some interesting screenshots

[screenshot]

The volume reports we’re using 3TB in data. So 2.4TB is free.

[screenshot]

Looking at the backup folder, you see 10.9TB of data stored on 1.99TB of disk.

So the properties of the volume report more disk space used than the actual folder containing the data shows. Let’s use WinDirStat to have a look.

[screenshot]

The above agrees with the volume properties. In the details of this volume we again see about 2TB of consumed space.

[screenshot]

Could it be that the volume is reserving some space to ensure proper functioning?

When you dive deeper you get some cool views of the storage space used. Where Windows Explorer is aware of deduplication and shows the non-deduplicated size for the VHD file, WinDirStat does not; it always shows the size on disk, which is a whole lot less.

[screenshot]

This is the same as when you ask for the properties of a file in Windows Explorer.

[screenshot]

Discussion

Is it the best solution for everyone? Not always, no. The deduplication is done on the target, after the data is copied there. So in environments where bandwidth is seriously constrained, and there is absolutely no technical and/or economical way to provide the needed throughput, this might not be a viable solution. But don’t dismiss this option too fast; in a lot of scenarios it is a very good and cost-effective feature. Technically & functionally it might be wiser to do it on the target anyway, as you don’t consume too much memory (deduplication is a memory hog) and CPU cycles on the source hosts. Also nice is that these deduped files are portable across systems. VEEAM has demonstrated some nice examples of combining their deduplication with Windows dedupe, by the way. So this might also be an interesting scenario.

Financially, the cost of deduplication functionality in hardware appliances or backup software hurts like the kick of a horse straight to the head. So even if you have to invest a little in bandwidth and cabling, you might be a lot better off. Perhaps, as you’re replacing older switches with new 1Gbps or 10Gbps gear, you can repurpose the old ones as dedicated backup switches. We’re using mostly repurposed switch ports and native Windows NIC teaming, and it works brilliantly. I’ve said this before: saving money whilst improving operations rarely gets you fired. The sweet thing is that this is achieved by building good & reliable solutions, which means they are efficient even if they cost some money to achieve. Some managers focus way too much on efficiency from the start, as to them it means nothing more than a euphemism for saving every € possible. Penny wise and pound foolish. Bad move. Efficiency, unless it is the goal itself, is a side effect of a well-designed and optimized solution. A very nice and welcome one for that matter, but it’s not the end all be all of a solution, or you’ll have the wrong outcome.

The Microsoft Management Summit 2013


MMS 2013 is in Las Vegas, Nevada, USA

Time flies fast and it’s time to look ahead to 2013. My continuing investment in myself is part of that. Despite a lot of rumors about big changes to MMS (its future, location, timing, etc.), things will go forward as they have in past years. That includes the location. As you probably already heard, it’s back in Las Vegas, Nevada, USA. So after the, for many people, somewhat disconcerting announcement at MMS 2012 indicating the above-mentioned changes, MMS 2013 will once again be held in Las Vegas. As before, it will be focused on the entire System Center suite. That was confirmed by a recent mail from the MMS conference team and a TechNet blog post.

[screenshot]

Recently it was announced that the MMS 2013 content survey is now open. They’re planning the Microsoft Management Summit 2013 content and they’d like to hear from us. Why? Well, the better they align the content of the conference to our needs, the better the experience will be. This means our return on investment will be bigger, which is always a good thing. So if you’re going, or thinking of going, this is the place, the MMS 2013 Content Survey, to voice your opinion on what it should look like content wise. You have two more weeks to fill it out and then it’s scheduled to close down.

Why Attend?

It’s great to have an event focused on managing, deploying and protecting the infrastructure we’ve spent so much time, effort and money building. This conference is dedicated to exactly that. Smaller in scale but very focused. All together in the same hotel/conference center for 5 long days, living in System Center and nothing else. As the world’s top operators in this space are there, the networking opportunities are excellent as well. I can still remember the amount of talking and discussing I did with my colleagues in 2012; that was stimulating.

It’s also the place to provide feedback to Microsoft about System Center. Things you like, don’t like, things that are missing etc. I most certainly have some feedback for them.

Will I attend?

I’ll most certainly try to attend, that’s for sure. So it’s time to fill out the request form and start cutting through the red tape. Let’s hope the economy doesn’t tank completely and that we can go. The chips might be down right now, but let’s not cost-cut ourselves out of skills, education, opportunities and a future. Remember: keep moving forward and don’t quit yet, you can always give up later ;-).