SMB 3, ODX, Windows Server 2012 R2 & Windows 8.1 perform magic in file sharing for both corporate & branch offices


SMB 3 for Transparent Failover File Shares

SMB 3 gives us lots of goodies, and one of them is Transparent Failover, which allows us to make file shares continuously available on a cluster. I have talked about this before in Transparent Failover & Node Fault Tolerance With SMB 2.2 Tested (yes, that was with the developer preview bits after BUILD 2011; I was hooked fast and early) and in Continuously Available File Shares Don't Support Short File Names – "The request is not supported" & "CA failure – Failed to set continuously available property on a new or existing file share as Resume Key filter is not started."

image

This is an awesome capability to have. It also made me decide to deploy Windows 8, and now 8.1, as the default client OS. The fact that maintenance (it's the Resume Key filter that makes this possible) can now happen during the day and that patching can be done via Cluster Aware Updating is such a win-win for everyone that it's a no-brainer. Just do it. Even better, it's continuous availability thanks to the Witness service!

When the node running the file share crashes, the clients experience a somewhat long delay in responsiveness, but after about 10 seconds they continue where they left off once the role has resumed on the other node. Awesome! Learn more about this here: Continuously Available File Server: Under the Hood and SMB Transparent Failover – making file shares continuously available.
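To make this tangible, here's a minimal sketch of creating such a continuously available share on a clustered file server role with PowerShell; the share name, path and scope name "FS01" are hypothetical, adapt them to your environment:

# Run on a cluster node; "FS01" is the (hypothetical) file server role / scope name
New-SmbShare -Name "Data" -Path "C:\ClusterStorage\Volume1\Data" -ScopeName "FS01" -ContinuouslyAvailable $true

# Verify the continuous availability property on the existing shares
Get-SmbShare | Select-Object Name, ScopeName, ContinuouslyAvailable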

Windows clients also benefit from ODX

But there is more: it's SMB 3 & ODX that bring us even more goodness, namely the offloading of reads & writes to the SAN, saving CPU cycles and bandwidth. Especially in the case of branch offices this rocks. When SMB 3 clients copy data between file shares on Windows Server 2012 (R2) servers whose storage lives on an ODX-capable SAN, the transfer request is translated to ODX by the server, which gets a token that represents the data. Windows uses this token to do the copying and hands it to the storage array, which internally does all the heavy lifting and tells the client when the job is done. No more reading data from disk, translating it into TCP/IP, moving it across the wire to reassemble it on the other side and writing it to disk.

image

To make ODX happen we need a decent SAN that supports this well; a DELL Compellent shines here. Next to that, you can't have any filter drivers on the volumes that don't support offloaded reads and writes. This means we need to make sure that features like data deduplication support this, but also that 3rd party vendors for anti-virus and backup don't ruin the party.

image

In the screenshot above you can see that Windows data deduplication supports ODX. And if you run antivirus on the host, you have to make sure that the filter driver supports ODX. In our case McAfee Enterprise does, so we're good. Do make sure to exclude the cluster related folders & subfolders from on-access and scheduled scans.
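If you want to verify this on your own hosts, a quick sketch using the in-box tooling (the volume path is just an example):

# 0 means ODX support is enabled on the host, 1 means it has been disabled
Get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem -Name "FilterSupportedFeaturesMode"

# List the filter drivers attached to a volume; the SprtFtrs column indicates
# which offload features (ODX among them) each filter instance supports
fltmc instances -v C:\ClusterStorage\Volume1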

Do not run DFS Namespace servers on the cluster nodes. The DfsDriver does not support ODX!

image

The solution is easy: run your DFS Namespace servers separate from your cluster hosts, somewhere else. That's not a show stopper.

The user experience

What does it look like to a user? Totally normal, except for the speed at which the file copies happen.

Here's me copying an ISO file from a file share on server A to a file share on server B from my Windows 8.1 workstation at the branch office in another city, 65 km away from our data center and connected via a 200 Mbps (MPLS) pipe.

image

On average we get about 300 MB/s, or 2.4 Gbps, which "over" a 200 Mbps WAN is a kind of magic. I assure you that they're not complaining and are getting used to this quite (too) fast.
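If you want to measure this yourself, here's a rough sketch (the paths are made up) that times a copy and does the same MB/s to Gbps conversion:

# Time the copy and compute throughput; 300 MB/s x 8 = 2.4 Gbps
$file = '\\ServerA\ISO\Win2012R2.iso'
$duration = Measure-Command { Copy-Item $file '\\ServerB\ISO\' }
$MBps = ((Get-Item $file).Length / 1MB) / $duration.TotalSeconds
'{0:N0} MB/s = {1:N1} Gbps' -f $MBps, ($MBps * 8 / 1000)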

The IT Pro experience

Leveraging SMB 3 and ODX means we avoid people consuming tons of bandwidth over the WAN and make copying large data sets a lot faster. On top of that, CPU cycles and bandwidth on the server are conserved for other needs. All this while we can fail over the cluster nodes without our business users being impacted. Continuous to high availability, speed, and less bandwidth & fewer CPU cycles needed. What's not to like?

Pretty cool, huh! These improvements help out a lot, and we've paid for them via Software Assurance, so why not leverage them? Light up your IT infrastructure and make it shine.

What’s stopping you?

So what are your plans to leverage your Software Assurance benefits? What's stopping you? When I ask that, I get a couple of answers:

  • I don't have money for new hardware. Well, my SAN is also pre-Windows 2012 (DELL Compellent SC40 controllers). I just chose based on my own research, not on what VARs like to sell to get maximal kickbacks. The servers I used are almost 4 years old but fully up to date DELL PowerEdge R710s, recuperated from their duty as Hyper-V hosts. These servers easily last us 6 years, and over time we collected some spare servers for parts or replacement after the support expires. DELL doesn't take away your access to firmware & drivers like some do, and their servers aren't artificially crippled in feature set.
  • Skills? Study, learn, test! I mean it, no excuse!
  • Bad support from ISVs and OEMs for recent Windows versions holding you back? Buy other brands, vote with your money and do not accept their excuses. You pay them to deliver.

As IT professionals we must and we can deliver. This is only possible as the result of sustained effort & planning. All the labs, testing and studying help out when I'm designing and deploying solutions. As I take the entire stack into account in designs and we do our due diligence, I know it will work. Being active in the community also helps me know early on which vendors & products have issues, so we can avoid the "marchitecture" solutions that don't deliver when deployed. You too can achieve this, you just have to make it happen. That's not too expensive or time consuming, at least a lot less than being stuck after you've spent your money.

Copy Cluster Roles Hyper-V Cluster Migration Fails at Final Step with error Virtual Machine Configuration 'VM01' failed to register the virtual machine with the virtual machine service


I was working on a migration of a nice two-node Windows Server 2012 Hyper-V cluster to Windows Server 2012 R2. The cluster consists of 2 DELL R610 servers and a DELL MD3200 shared SAS disk array for the shared storage. It runs all the virtual machines with infrastructure roles etc.; it's a Cluster-In-A-Box-like setup. This had been doing just fine for 18 months, but the need for the features in Windows Server 2012 R2 became too much to resist. As the hardware needs to be recuperated and we had a maintenance window, we used the Copy Cluster Roles scenario that we have used so many times before with great success. It's the "Perform an in-place migration involving only two servers" scenario documented on TechNet and described, for your convenience, in one of my previous blogs: Migrating a Hyper-V Cluster to Windows 2012 R2.

Virtual Machine Configuration 'VM01' failed to register the virtual machine with the virtual machine service

As the source host was running Windows Server 2012 we could have done the live migration scenario, but the downtime would be minimal and there was a maintenance window, so we chose this path.

So we performed a good health check of the source cluster and made sure we had no snapshots left hanging around. Yes, it's supported now for this migration scenario, but I like to have as few moving parts as possible during a migration.

It all went smooth as silk. After shutting down the VMs on the source cluster node, bringing the CSV offline (and un-presenting the LUN from the source node for good measure), we presented that LUN to the target host. We brought the CSV online, and when that had completed successfully we were ready to bring the virtual machines online. And that failed …

Log Name:      Microsoft-Windows-Hyper-V-High-Availability-Admin
Source:        Microsoft-Windows-Hyper-V-High-Availability
Date:          4/02/2014 19:26:41
Event ID:      21102
Task Category: None
Level:         Error
Keywords:     
User:          SYSTEM
Computer:      VM01.domain.be
Description:
'Virtual Machine Configuration VM01' failed to register the virtual machine with the virtual machine management service.

image

image

 

Let's dive into the other event logs. On the host, the application, security and system event logs are squeaky clean. The Hyper-V event logs are pretty empty or clean too, except for these events in the Hyper-V-VMMS Admin log.

Log Name:      Microsoft-Windows-Hyper-V-VMMS-Admin
Source:        Microsoft-Windows-Hyper-V-VMMS
Date:          4/02/2014 19:26:40
Event ID:      13000
Task Category: None
Level:         Error
Keywords:     
User:          SYSTEM
Computer:      VM01.domain.be
Description:
User 'NT AUTHORITY\SYSTEM' failed to create external configuration store at 'C:\ClusterStorage\HyperVStorage\VM01': The trust relationship between this workstation and the primary domain failed. (0x800706FD)

 

image

Bingo. It must be the fact that no domain controller is available. It's a completely self-contained cluster and both domain controller virtual machines are highly available and reside on the CSV. Now, the CSV does come online without a DC since Windows Server 2012, so that's not the issue. It's the process of registering the VMs that fails without a DC in an Active Directory environment.

Getting past this issue

There are multiple ways to resolve this and move ahead with our cluster migration. As the environment was still fully functional on the source cluster, I just removed a DC virtual machine from high availability on the cluster. I shut it down and exported it. I then copied it over to the node of the new cluster (we're going to nuke the source host afterwards and install W2K12R2, so we moved it to the new host where it could stay), where I put it on local storage and imported it. For this I used the "Register the virtual machine in-place" option. I did not make it highly available.

image

After verifying that we could ping the DC and that it was up and running well, we tried the final phase of the migration again. It went as smoothly as we have come to expect!
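For reference, that DC shuffle can also be scripted. A rough sketch with hypothetical names and paths, not the exact commands we ran:

# On the source W2K12 node: take the DC role out of the cluster, shut it down, export it
Remove-ClusterGroup -Name "DC01" -RemoveResources -Force   # removes high availability; the VM itself remains
Stop-VM -Name "DC01"
Export-VM -Name "DC01" -Path "E:\Export"
# ... copy the export folder over to local storage on the new W2K12R2 node ...

# On the new node: register the VM in place from local storage and start it (not made highly available)
Import-VM -Path "D:\LocalVMs\DC01\Virtual Machines\<GUID>.xml"
Start-VM -Name "DC01"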

Other options would have been to host the DC virtual machine on a laptop or another server. If you can no longer get to the DC for an export & import, heck, even a shared nothing migration can, depending on your environment, help you out of this pickle. A restore from backup would also work. But here, in this two-node all-in-one cluster, our approach was fast and efficient.

So there you go; a tip to remember. Virtualizing domain controllers is fully supported, no worries there, but you need to make sure that if you have a dependency on a DC, the DC itself doesn't depend on that dependency. It's a chicken and egg thing.

Migrating A Windows Server 2008 R2 Hyper-V Cluster To Windows Server 2012 R2 Hyper-V Cluster In Another Active Directory Domain – PART 2


Introduction

In this blog series we’ll walk you through the process of migrating a Windows Server 2008 R2 Hyper-V Cluster to a Windows Server 2012 R2 Hyper-V Cluster in another Active Directory domain. You are now reading part 2.

  1. Migrating A Windows Server 2008 R2 Hyper-V Cluster To Windows Server 2012 R2 Hyper-V Cluster In Another Active Directory Domain – PART 1
  2. Migrating A Windows Server 2008 R2 Hyper-V Cluster To Windows Server 2012 R2 Hyper-V Cluster In Another Active Directory Domain – PART 2

The source W2K8R2 Hyper-V cluster is a production environment. To test the procedure for the migration we created a new CSV on the source cluster with some highly available test virtual machines with production-like network configurations (multi-homed virtual machines). This allows us to demonstrate the soundness of the process on one CSV before we tackle the 4 production CSVs.

We left off in part 1 with the virtual machines shut down on the CSV LUN we are going to migrate. We'll now continue the process of moving the CSV LUN from the old Windows Server 2008 R2 SP1 cluster to the new Windows Server 2012 R2 cluster. After that we can import the virtual machines, start them up, test that all is well and finally make them highly available in the cluster. Don't forget to upgrade the integration components when all is done.

Removing the CSV LUN from the source W2K8R2 Hyper-V Cluster

Just leave the VMs where they are on the LUN, un-present that LUN from the old source W2K8R2 Hyper-V cluster and present it to the new W2K12R2 Hyper-V cluster. In our case we are dealing with a cluster, so we use a CSV: when the LUN is presented and added to the new cluster, don't forget to add it to the Cluster Shared Volumes.

In Failover Cluster Manager, bring the CSV that you are migrating offline. Make sure you have the correct one (green circles/arrow) to avoid downtime in production.

image

When asked if you're sure, confirm this.

image

The CSV will be brought offline, which you can verify in Disk Management.

image

We're going to do our clean-up already. You could wait until after the migration, but we want the old cluster to look as clean and healthy as possible for the operations people, so they don't worry. So we go and remove this LUN from the Cluster Shared Volumes.

image

Which you’ll need to confirm

image

after which your disk will be moved to Available Storage

image

Do note that doing this brings the LUN back online. As it's still a clustered disk and there is no IO (all VMs are shut down), that's OK. We'll remove it from the available cluster storage ("Delete" isn't as bad as it sounds in this context).

image

The storage will be gone from the cluster and offline in Disk Management.
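For those who prefer PowerShell over the GUI, the same three steps sketched with a hypothetical disk name:

Stop-ClusterResource -Name "Cluster Disk 2"            # take the CSV offline
Remove-ClusterSharedVolume -Name "Cluster Disk 2"      # move it back to Available Storage
Remove-ClusterResource -Name "Cluster Disk 2" -Force   # remove it from the cluster entirely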

On the SAN / Shared Storage

We create a SAN snapshot for fallback purposes (we throw it away after all has gone well). If you have this option, I highly advise you to use it. It's not easy to move back from Windows Server 2012 R2 to W2K8R2 in the unlikely event you would need to do so. It also protects the VMs against any errors & mishaps that might occur, if you understand how to use the snapshot to recover.

On the SAN we unmap the CSV LUN from the old cluster. We could wait, but this is an extra protection against two clusters seeing the same storage.

On the SAN we map that CSV LUN to the new cluster. It will appear in Disk Management.

image

We add this disk to the new cluster

image

image

We add it to the Cluster Shared Volumes on the new cluster, which brings it online.

image

It uses the default naming convention for clustered disks, so this is the moment to change the name if you need or want to do so.

image
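In PowerShell, adding the disk, turning it into a CSV and renaming it would look roughly like this sketch (the disk and CSV names are hypothetical; run it on the new cluster):

# Add the newly presented LUN to the cluster, then make it a CSV (which brings it online)
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# Optionally rename the resource to something more meaningful
(Get-ClusterResource -Name "Cluster Disk 1").Name = "CSV-Migrated"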

So now it's time to go to Hyper-V Manager and do the actual import.

image

Navigate to the folder where your Hyper-V virtual machine configuration lives. This location can be central for all VMs or individual per VM, depending on how the virtual machines were organized on the old source cluster. In our example it is the latter. Also note that we only have one CSV involved per VM here, so it's easy. Otherwise you will need to move multiple CSVs across together, all the ones the VM or VMs depend on.

image

It has found a virtual machine to import.

image

This is important: select "Register the virtual machine in-place (use the existing unique ID)".

image

Click "Next" to confirm your actions.

If anything about your virtual machine is not compatible with your host, the GUI allows you to fix this. Here we have to select the correct virtual switch, as the switches differ from the source host.

image

When done, just click Next, and in the blink of an eye your machine will be imported. You can start it up right now to see if all went well.

image

As in Windows Server 2012 (R2) we can add running virtual machines to the cluster for high availability, that's the final step.

image
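If you have many virtual machines to do, the import and the high availability step can be scripted too; a minimal sketch, assuming the VM's configuration file sits on the migrated CSV (the path, VM name and <GUID> are placeholders):

# Register the VM in place (no -Copy), keeping its existing unique ID
Import-VM -Path "C:\ClusterStorage\Volume1\VM01\Virtual Machines\<GUID>.xml"
Start-VM -Name "VM01"

# Make the running virtual machine highly available on the new cluster
Add-ClusterVirtualMachineRole -VMName "VM01"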

We can import all virtual machines on the demo CSV in the same manner. Congrats: if you set up network connectivity right and followed this manual migration procedure correctly, you have now migrated a first CSV with VMs to the new cluster in another AD domain, and they can talk to the VMs that are still on the old cluster. Cool, huh! What scenarios? Well, a hoster that has clusters in a management domain running different workloads for different customers (multiple ADs), or a company consolidating multiple environments on a common Hyper-V cluster or clusters in a management domain, etc.

You need to update the integration components of the virtual machines now running, but other than that, you're all set. Just move along with the next CSVs / virtual machines until you're done.

Closing comments

Note: what if you don't have shared storage? Move the disks to the new host/cluster, copy the data over (do NOT export the VMs, as that will not work in this scenario, see part 1) or … use VEEAM Replica. It will do the heavy lifting for you and help minimize downtime. Read the blog post by our fellow MVP Silvio Di Benedetto for more information: Veeam Backup & Replication: Migrate VM from Hyper-V 2008 R2 to 2012 R2.

Good luck. And remember if you need any assistance, there are many highly experienced Hyper-V MVPs /consultants out there. They can always help you with your migration plans if you need it.

Migrating A Windows Server 2008 R2 Hyper-V Cluster To Windows Server 2012 R2 Hyper-V Cluster In Another Active Directory Domain – PART 1


Introduction

In this blog we’ll walk you through the process of migrating a Windows Server 2008 R2 Hyper-V Cluster to a Windows Server 2012 R2 Hyper-V Cluster in another Active Directory domain. You are reading part 1.

  1. Migrating A Windows Server 2008 R2 Hyper-V Cluster To Windows Server 2012 R2 Hyper-V Cluster In Another Active Directory Domain – PART 1
  2. Migrating A Windows Server 2008 R2 Hyper-V Cluster To Windows Server 2012 R2 Hyper-V Cluster In Another Active Directory Domain – PART 2

The source W2K8R2 Hyper-V cluster is a production environment. To test the procedure for the migration we created a new CSV on the source cluster with some highly available test virtual machines with production-like network configurations (multi-homed virtual machines). This allows us to demonstrate the soundness of the process on one CSV before we tackle the 4 production CSVs. Do note that in this case the two clusters share the same SAN. If not, we can move the storage, copy the data, replicate between SANs or use VEEAM Replica (see part 2 for more info).

Preparing the source W2K8R2 Hyper-V Cluster virtual machines & Cluster

Before we begin, I always make sure I have no Hyper-V snapshots left on virtual machines I migrate. It prevents any issues on that front, and while Windows Server 2012 R2 deals with snapshots better than before, I prefer to have as few possible points of concern as I can before I start such an operation.

Go to Failover Cluster Manager

image

and shut down the virtual machines on the CSV you want to migrate.

image

You’ll see them pending whilst they are shutting down …

image

And when they are fully stopped we'll remove them from the cluster.

image

To do so, delete (scary word) the virtual machines on our CSV that's going to be migrated from the cluster; this makes them no longer highly available.

image

You'll need to confirm that this is what you want to do.

image

In Hyper-V Manager we see that the virtual machines are indeed offline. As the virtual machines reside on the cluster / CSV, the path to the hard disk, config files etc. is indeed under C:\ClusterStorage.

image

We just close the Hyper-V Manager GUI. We will NOT export the VMs to import them on the new cluster. Why?

  1. This is not necessary as since Windows Server 2012 and as such also in R2 we can import them with the option to register them in place. No export is needed for this.
  2. You cannot import virtual machines that were exported from Windows 2008 R2 directly into Windows Server 2012 R2, because the namespace those exports rely on is no longer there: the WMI v1 namespace was deprecated in Windows Server 2012 and then removed in Windows Server 2012 R2. When exporting a VM from Windows 2008 R2, the WMI v1 namespace was used, which resulted in an .exp file representing the exported virtual machine. In Windows Server 2012 (R2) a new WMI namespace (version 2, or root\virtualization\v2) leverages an improved import/export model, which allows for registering the VMs in place as said in point 1 (see the sketch right after this list). In Windows Server 2012 the version 1 WMI namespace was still present, which allowed importing Windows Server 2008/R2 VMs. In Windows Server 2012 R2 the version 1 namespace has been removed, so YOU CANNOT import virtual machines that were exported from Windows Server 2008/R2 into Windows Server 2012 R2. The workarounds are described here: http://blogs.technet.com/b/rmilne/archive/2013/10/22/windows-hyper-v-2012-amp-8-1-hyper-v-did-not-find-virtual-machine-to-import.aspx.
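You can see the namespace difference for yourself; a quick illustration using the documented Hyper-V WMI classes:

# The v2 namespace is present on Windows Server 2012 (R2) ...
Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_ComputerSystem | Select-Object ElementName

# ... while this v1 query fails on Windows Server 2012 R2, as the namespace was removed
Get-WmiObject -Namespace root\virtualization -Class Msvm_ComputerSystem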

Now the combination of points 1 and 2 is what the Copy Cluster Roles wizard in Windows Server 2012 R2 uses. That works within a domain, but not across separate AD domains, as in our case. But don't worry: all this means is that we need to do some work manually, and that's it. That's what we'll describe in part 2 of this blog. Do realize you want to do this in one go, as that ensures the least possible downtime. In production, don't do part 1 of the blog on Monday and part 2 on Thursday or so.

Read on here Migrating A Windows Server 2008 R2 Hyper-V Cluster To Windows Server 2012 R2 Hyper-V Cluster In Another Active Directory Domain – PART 2

Unable to retrieve all data needed to run the wizard. Error details: "Cannot retrieve information from server "Node A". Error occurred during enumeration of SMB shares: The WinRM protocol operation failed due to the following error: The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol."


I was recently configuring a Windows Server 2012 file server cluster to provide SMB transparent failover with continuously available file shares for end users. So we're not talking about a Scale-Out File Server here.

All seemed to go pretty smoothly until we hit a problem. When the role is running on Node A and you are using the GUI on Node A, this is what you see:

image

When you try to add a share you get this

"Unable to retrieve all data needed to run the wizard. Error details: "Cannot retrieve information from server "Node A". Error occurred during enumeration of SMB shares: The WinRM protocol operation failed due to the following error: The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol.”

image

When you fail over the file server role to the other node, things seem to work just fine. So this is where you run the GUI from Node A while the file server role resides on Node B.

image

You can add a share; it all works. You notice the exact same behavior on the other node. So as long as the role is running on another node than the one on which you use Failover Cluster Manager, you're fine. Once you're on the same node, you run into this issue. So what's going on?

So what to do? It's related to WinRM, so let's investigate that.

image

So the WinRM config comes via a GPO. The local GPO for this is not configured, so that's not the one; it must come from the domain. The IP addresses listed are the node IP and the two cluster networks. What's not there: localhost 127.0.0.1, the cluster IP address or any of the IPv6 addresses.

I experimented with a lot of settings. First we ended up creating an OU in the OU where the cluster nodes reside, on which we blocked inheritance. We then ran gpupdate /target:computer /force on both nodes to make sure WinRM was no longer configured by the domain GPO. As the local GPO was not configured, it reverted back to the defaults. The listener showed up as listening on all IPv4 and IPv6 addresses. Nice, but the GPO was now disabled.

image
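To see what the listener is actually bound to at each step, the in-box tooling suffices; two equivalent ways to check:

# Dump the effective WinRM listener configuration
winrm enumerate winrm/config/listener

# Or browse it via the WSMan provider
Get-ChildItem WSMan:\localhost\Listener | Format-Table -AutoSize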

This is interesting, but things still don't work. For that we needed to disable and re-enable WinRM remote management:

Configure-SMRemoting -disable
Configure-SMRemoting -enable

or via Server Manager

image

That fixed it, and it seems to be a necessity too. Do note that to disable/enable remote management, it should not be configured via a GPO, or it throws an error like

image

or

image

Some more testing

We experimented by adding 127.0.0.0-172.0.0.1 and enabling the GPO again. We then saw that the listener did show the localhost, cluster & file role IP addresses, but the issue was back. Using * for just IPv4 did not do the trick either.

image

What did do the trick was to use * in the filter for IPv6 and keep our original filters on IPv4. The good news is that, having removed the GPO and disabled/enabled WinRM, the cluster IP address & file role IP address are now in the list. That could be good for other use cases.

This is not ideal, but it all works now.

What we settled for

So we ended up still restricting the GPO settings for IPv4 to subnet ranges and allowing * for IPv6. This made sure that even when we run the Failover Cluster Manager GUI from the node that owns the file server role, everything still works.

One workaround is to work from a remote host, not from a cluster member, which is a good practice anyway.

The key takeaway is that when Microsoft says they test with IPv6 enabled they literally mean for everything.

Note

There is a TechNet article on WinRM GPO settings for SCVMM 2012 RC where they advise setting both IPv4 and IPv6 to * to avoid issues with SCVMM operations: How to Add Trusted Hyper-V Hosts and Host Clusters in VMM.

However, we found that IPv6 is the key requirement here; * for just IPv4 alone did not work.

Windows Server 2012 R2 Cluster Reset Recent Events With PowerShell


I blogged before about the fact that since Windows Server 2012 we have the ability to reset the recent events shown, so that the state of the cluster is squeaky clean with no warnings or errors. You can read up on this here: Windows Server 2012 Cluster Reset Recent Events Feature.

You can also do this in PowerShell like in the example below:

#Connect to cluster & get current RecentEventsResetTime value
$MyCluster = Get-Cluster -Name "W2K12R2RTM"
$MyCluster.RecentEventsResetTime

#Reset recent events
$MyCluster.RecentEventsResetTime = get-date
$MyCluster.RecentEventsResetTime


As you may notice, the RecentEventsResetTime is displayed in UTC when read from the cluster after connecting to it. Right after you set it, it displays the time respecting the time zone you're in, right until you connect to the cluster again. We demonstrate this in the 2 screenshots below (I'm at GMT+1).

image

image

This comes in handy when writing test, comparison & demo scripts. Often you do things with the network that cause network connectivity to be lost when the NIC gets reset (disabled/enabled) and such. Also, when something fails as part of the demo or test scripts, it's nice to start the rerun or the next part of the demo/test with a clean cluster GUI when you're showcasing stuff. Unfortunately, an already open GUI doesn't refresh these settings if the reset is not done in the GUI, so you need to open a new one. For scripting you don't have this issue. EDIT: In Windows Server 2012 R2 you can use $MyCluster.Update() to reflect the new value of RecentEventsResetTime in UTC without having to reconnect to the cluster. In Windows Server 2012 this Update method isn't available, but it seems to happen automatically.
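Putting that EDIT into a snippet (Windows Server 2012 R2 only):

#Reset recent events, then refresh the cluster object to read the stored (UTC) value
$MyCluster.RecentEventsResetTime = Get-Date
$MyCluster.Update()
$MyCluster.RecentEventsResetTime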

House Keeping In The Cluster Aware Updating GUI


When you work in an environment with multiple clusters, and some of them get replaced, destroyed etc., you'll end up with stale clusters in the "Recent Clusters" list of the Cluster Aware Updating GUI. In the example below the red entry (had to obfuscate, sorry) is a no longer existing cluster, but it's very similar to a new one that was created to fix a naming standard error. So we'd like to get rid of those, to prevent mistakes and to avoid cluttering up the GUI with irrelevant information.

image

The Recent Clusters list is tied to your user profile, and you can end up with a list polluted with stale entries of no longer existing clusters. To clean them out, you can dive into the registry and navigate to:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\ClusterAwareUpdating\ClusterMRU

image

Simply delete the entries that contain the values of the old clusters that are no longer in existence.
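A quick PowerShell way to inspect and clean those values. The value name ClusterMRU0 is just a hypothetical example; check what Get-ItemProperty returns on your machine first:

$key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\ClusterAwareUpdating\ClusterMRU'
Get-ItemProperty -Path $key                          # list the MRU values and the clusters they hold
Remove-ItemProperty -Path $key -Name 'ClusterMRU0'   # delete the value holding the stale cluster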

Close the Cluster Aware Updating GUI if it is still open and reopen it. You'll see that the stale entries for the one or more no longer existing clusters in "Recent Clusters" are gone.

image

Migrating a Hyper-V Cluster to Windows 2012 R2


Introduction

We'll walk through a transition of a Windows Server 2012 Hyper-V cluster to R2. For this we'll use the Copy Cluster Roles Wizard (that's what the Migration Wizard is now called). You have two approaches: you can start with an R2 cluster on new hardware, and you might even use new storage. For the process in my lab I evicted the nodes one by one and did a node-by-node migration to the new cluster. How you'll do this in your environment depends on how many nodes you have, the number of CSVs and what workload they run. This blog post is an illustration of the process, not a detailed migration plan customized for your environment.

Step by Step

1. Preparing the Target node(s)/cluster

Install the OS, add the Hyper-V role & the Failover Clustering feature on the new or the evicted node. You'll already have some updates to install only a week after the preview bits became available. Microsoft is on top of things, that's for sure.

clip_image002

Configure the networking for the virtual switch, CSV, LM, management and iSCSI (that’s what I use in the lab) on the OS. Nothing you don’t know yet for this part.

Create a virtual switch in Hyper-V Manager. This is important: if you can, you should give it the same name as the ones on the old cluster. Also note that the virtual switch name is case sensitive. If you forget to create one, you will not be able to copy the Hyper-V virtual machine roles.
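A sketch of that step in PowerShell; the switch and adapter names here are made up, what matters is that the switch name matches the old cluster's switch exactly, including case:

# The virtual switch name must match the source cluster's switch, case included
New-VMSwitch -Name "LAN" -NetAdapterName "NIC1" -AllowManagementOS $true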

Create a new cluster & configure networking. One of the nice things about R2 is that it is intelligent enough to see what type of connectivity suits the networks best and defaults to that. It sees that the iSCSI network should be excluded from cluster use and that the CSV/LM network doesn't need to allow client access.

clip_image004

On the iSCSI target (a W2K12 RTM box) I remove the evicted node from the iSCSI initiators and add the newly installed cluster node. You can wait to do this out of precaution, but the cluster itself leaves the newly detected LUNs offline by default. On top of that it detects that the LUNs are in use (reservations) and you can't even bring them online.

clip_image006

Right click the cluster and select "Copy Cluster Roles".

clip_image008

Follow the wizard instructions.

clip_image010

Click Next and then click Browse to select the source cluster (W2K12).

clip_image012

Select "Warrior" and click OK.

clip_image014

Click Next and see how the connection to the old cluster is being established.

clip_image016

The wizard scans for roles on the old cluster that can be migrated.

clip_image018

In this example the results are the roles per CSV and the Replication Broker. For the migration you can select one of the CSVs and work LUN per LUN, select multiple of them, or even all. It all depends on your needs and environment. I migrated them all over at once in the lab. In real life I usually do only 1 or a couple of interdependent LUNs (for example OS, DATA, LOGS, TempDB on separate CSVs of virtualized SQL Servers).

clip_image020

Note that if you did not specify a virtual switch, you will not see any roles on the CSV listed. That's why defining a virtual switch is important.

clip_image022

As we did it right, we do see all the virtual machines. As we are not migrating to new storage, we have to move all VMs on a LUN in one go. That's because you cannot expose a LUN to multiple clusters.

clip_image024

So we select all the clustered roles and click Next. As we used all capitals in the new virtual switch for the guests, we are asked to map it manually. When the names are identical one on one AND in the same case, the mapping happens automatically. You can see this for the virtual switches for guest clustering CSV & iSCSI network connectivity.

clip_image026

We view the report before we click Next and see exactly what will be copied.

clip_image028

Close the report and click Next.

clip_image030

Click Next again to kick off the migration.

clip_image032

When done, you can view a report of the results. Click Finish to close the wizard.

clip_image034

You now have copied your virtual machine cluster roles and the Replication Broker role.

clip_image036

Even the storage is already there in the cluster, but offline, as it's not yet available to the new cluster. Remember that your workload is still running on the old cluster, so everything you have done until now is a zero downtime exercise.

clip_image038

2. Preparing to make the switch on the source cluster node(s)

On the source cluster we shut down all the virtual machines and the Replication Broker role.

clip_image040

We then take the CSVs offline.

clip_image042

If you did not (or could not, due to lack of disk space) create a new witness disk, you can recuperate this LUN as well. Not ideal in a production migration, but dynamic quorum will help keep you protected. Not so with Windows 2008 R2. But you fight as you are; things are not always perfect.

clip_image044

3. Swapping The Storage From the Source To The Target

We'll now disconnect the iSCSI initiator from the target on the old cluster node and remove the target.

clip_image045

clip_image046

On the iSCSI target we'll also remove the old nodes from the list of iSCSI initiators to keep the environment clean. You could wait until you see the VMs up and running in the new environment, to facilitate putting the old environment back up if the migration should fail.

clip_image048

Wait until the process has completed and click Apply.

clip_image050

4. Bringing the new cluster into production

Bring the CSV LUNs online on the new cluster.

clip_image052

You'll see them become available / online in the cluster storage.

clip_image054

You are now ready to start up your virtual machines on the new cluster. With careful planning, the downtime for the services running in the virtual machines can be kept as low as 10 to 15 minutes, depending on how fast you can deal with the storage switch-over. Pretty neat.

clip_image056

Conclusion

You can now keep adding new nodes as you evict them from the old cluster. As said, the exact scenario will vary based on the workload, the number of nodes and the CSVs in your environment, but this should give you a good head start to work out your migration path. Keep in mind this was all done using the R2 Preview bits. Things are looking pretty good.

In a future blog I'll discuss some best practices, like making sure you have no snapshots on the machines you migrate, as this still causes issues like before. At least in this R2 preview.

Shared Virtual Disks in Windows Server 2012 R2 Hyper-V Maximizes TCO/ROI


One of the great additions to Hyper-V in Windows Server 2012 R2 is shared virtual disks. TechEd 2013 is disclosing a lot of new and improved features, and this is one of them!

image

This single feature brings benefits that I can use to solve business issues today:

Ease of guest clustering

How easy is it? Look at this:

New-VHD -Path C:\ClusterStorage\Volume1\Shared.VHDX -Fixed -SizeBytes 30GB

Add-VMHardDiskDrive -VMName Node1 -Path C:\ClusterStorage\Volume1\Shared.VHDX -ShareVirtualDisk

Add-VMHardDiskDrive -VMName Node2 -Path C:\ClusterStorage\Volume1\Shared.VHDX -ShareVirtualDisk

That's it, basically. No fabrics to extend to the guest, no vFC needed. In simplicity it looks a lot like SMB 3.0. A major improvement.

To the guest the shared storage has become abstracted

With a shared VHDX I get the mobility and flexibility I'm used to with VHDX files & virtualization. FC, iSCSI, SMB 3.0, Storage Spaces, PCI RAID, Shared SAS: whatever happens to the underlying storage infrastructure no longer matters when doing guest clustering this way. That's sweet!

Fast Backups

We have a lot of large LUNs, 2-16 TB. We want to virtualize all of these, as the speed of backing up one large VHDX file is a LOT better than backing up a LUN with millions of smaller files. But when we need high availability we have to go for vFC or iSCSI and we lose that benefit. Yes, SMB 3.0 already gave us a helping hand in some scenarios (SQL Server guest clustering if you don't or can't do "AlwaysOn"), but it's not the major storage deployment out there (not yet) AND we're talking about file server workloads. Now with shared VHDX we can have our cookie and eat it too. Or better: 2 cookies!

Conclusion

This just rocks. My life just got better and easier. So can yours: moving to Windows Server 2012 R2 is all that's needed. For more information, look at Application Availability Strategies for the Private Cloud (speakers: Jose Barreto, Steven Ekren).

Windows Server 2012 Cluster Aware Updating In Action


You might have noticed that Microsoft recently released some important hotfixes for Windows Server 2012 Hyper-V clusters: "You cannot add VHD files to Hyper-V virtual machines in Windows Server 2012" and "Update that improves cluster resiliency in Windows Server 2012 is available".

So how do you deploy these easily and automatically to your Windows Server 2012 clusters? Cluster Aware Updating! Here's a screenshot of Cluster Aware Updating in action, deploying these hotfixes without a single interruption to the business services.

image
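If you prefer kicking off an updating run from PowerShell instead of the GUI, a minimal sketch (the cluster name is hypothetical; this requires the CAU tools that come with the failover clustering management tools):

# Run a one-off Cluster Aware Updating pass against the cluster
Invoke-CauRun -ClusterName "HVCLUSTER01" -RequireAllNodesOnline -Force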

So what are you waiting for? Start using it! It will make your life easier, save time, and help you towards a continuously available infrastructure.

Here’s a link to the slide deck of a presentation I did on Cluster Aware Updating in a TechNet webcast http://www.slideshare.net/technetbelux/hands-on-with-hyperv-clustering-maintenance-mode-cluster-aware-updating

We’ve been enjoying the benefits of Windows Server 2012 since we got the RTM bits in August 2012. I can highly recommend it to everyone.