SMB 3, ODX, Windows Server 2012 R2 & Windows 8.1 perform magic in file sharing for both corporate & branch offices


SMB 3 for Transparent Failover File Shares

SMB 3 gives us lots of goodies and one of them is Transparent Failover, which allows us to make file shares continuously available on a cluster. I have talked about this before in Transparent Failover & Node Fault Tolerance With SMB 2.2 Tested (yes, that was with the developer preview bits after BUILD 2011; I was hooked fast and early) and in Continuously Available File Shares Don’t Support Short File Names – "The request is not supported" & “CA failure – Failed to set continuously available property on a new or existing file share as Resume Key filter is not started.”

image

This is an awesome capability to have. It also made me decide to deploy Windows 8 and now 8.1 as the default client OS. The fact that maintenance (it’s the Resume Key filter that makes this possible) can now happen during daytime and that patching can be done via Cluster Aware Updating is such a win-win for everyone it’s a no-brainer. Just do it. Even better, it’s continuous availability thanks to the Witness service!

When the node running the file share crashes, the clients experience a somewhat long delay in responsiveness, but after about 10 seconds they continue where they left off once the role has resumed on the other node. Awesome! Learn more about this here: Continuously Available File Server: Under the Hood and SMB Transparent Failover – making file shares continuously available.
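If you want to script the share creation, here’s a minimal sketch in PowerShell. The share name, path and file server role name are made-up examples, and if I recall correctly shares created on a clustered file server in Windows Server 2012 get continuous availability by default anyway:

    # Create a clustered file share with transparent failover enabled.
    # "Data", the path and "FileServerRole" are example values only.
    New-SmbShare -Name "Data" -Path "C:\ClusterStorage\Volume1\Data" `
        -ScopeName "FileServerRole" -ContinuouslyAvailable $true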

Windows clients also benefit from ODX

But there is more: it’s SMB 3 & ODX together that bring us even more goodness, the offloading of reads & writes to the SAN, saving CPU cycles and bandwidth. Especially in the case of branch offices this rocks. SMB 3 clients that copy data between file shares on Windows Server 2012 (R2) with storage on an ODX-capable SAN get the benefit that the transfer request is translated to ODX by the server, which gets a token that represents the data. This token is used by Windows to do the copying and is handed to the storage array, which internally does all the heavy lifting and tells the client the job is done. No more reading data from disk, translating it into TCP/IP, moving it across the wire to reassemble it on the other side and write it to disk.

image

To make ODX happen we need a decent SAN that supports this well. A DELL Compellent shines here. Next to that you can’t have any filter drivers on the volumes that don’t support offloaded read and write. This means we need to make sure that features like data deduplication support this, but also that 3rd party vendors for anti-virus and backup don’t ruin the party.

image

In the screenshot above you can see that Windows data deduplication supports ODX. And if you run antivirus on the host you have to make sure that the filter driver supports ODX. In our case McAfee Enterprise does, so we’re good. Do make sure to exclude the cluster related folders & subfolders from on-access scans and scheduled scans.
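If you want to check what the filter drivers on a volume claim to support, a quick way, as far as I can tell from the Windows Server 2012 bits, is fltmc; the SprtFtrs column indicates whether a filter supports offloaded read & write:

    # List filter driver instances per volume; the SprtFtrs column shows
    # the offload features each filter driver supports.
    fltmc instances

    # The registry value that governs filter supported-features checking
    # (0 is the default on Windows Server 2012).
    Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
        -Name "FilterSupportedFeaturesMode"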

Do not run DFS Namespace servers on the cluster nodes. The DfsDriver does not support ODX!

image

The solution is easy: run your DFS Namespace servers separately from your cluster hosts, somewhere else. That’s not a show stopper.

The user experience

What does it look like to a user? Totally normal, except for the speed at which the file copies happen.

Here’s me copying an ISO file from a file share on server A to a file share on server B from my Windows 8.1 workstation at the branch office in another city, 65 km away from our data center and connected via a 200 Mbps pipe (MPLS).

image

On average we get about 300 MB/s or 2.4 Gbps, which “over” a 200 Mbps WAN is a kind of magic. I assure you that they’re not complaining and get used to this quite (too) fast ;)

The IT Pro experience

Leveraging SMB 3 and ODX means we avoid people consuming tons of bandwidth over the WAN and make copying large data sets a lot faster. On top of that, the CPU cycles and bandwidth on the server are conserved for other needs as well. All this while we can fail over the cluster nodes without our business users being impacted. Continuous to high availability, speed, less bandwidth & fewer CPU cycles needed. What’s not to like?

Pretty cool huh! These improvements help out a lot and we’ve paid for them via Software Assurance, so why not leverage them? Light up your IT infrastructure and make it shine.

What’s stopping you?

So what are your plans to leverage your Software Assurance benefits? What’s stopping you? When I asked that I got a couple of answers:

  • I don’t have money for new hardware. Well, my SAN is also pre-Windows 2012 (DELL Compellent SC40 controllers). I just chose based on my own research, not on what VARs like to sell to get maximal kickbacks ;) The servers I used are almost 4 years old but fully up to date DELL PowerEdge R710s, recuperated from their duty as Hyper-V hosts. These servers easily last us 6 years, and over time we collected some spare servers for parts or replacement after the support expires. DELL doesn’t take away your access to firmware & drivers like some do, and their servers aren’t artificially crippled in feature set.
  • Skills? Study, learn, test! I mean it, no excuse!
  • Bad support from ISVs and OEMs for recent Windows versions holding you back? Buy other brands, vote with your money and do not accept their excuses. You pay them to deliver.

As IT professionals we must and we can deliver. This is only possible as the result of sustained effort & planning. All the labs, testing and studying help out when I’m designing and deploying solutions. As I take the entire stack into account in designs and we do our due diligence, I know it will work. Being active in the community also helps me know early on which vendors & products have issues, so we can avoid the “marchitecture” solutions that don’t deliver when deployed. You can achieve this as well, you just have to make it happen. That’s not too expensive or time consuming, at least a lot less than being stuck after you’ve spent your money.

Traveling To MMS 2013


Well, I’ll be traveling towards MMS 2013. A big thank you, by the way, to the team back home for keeping an eye on things while reading the Windows Server 2012 Hyper-V Installation and Configuration Guide.

I’m attending this conference for the great networking opportunities and to establish the role System Center will have in our future. Many thousands of us will be attending MMS 2013 in Las Vegas (Nevada, USA) once again for that very same reason. I’m travelling via LHR to LAS with the help of British Airways, as one of their Boeing 747s does the job quite adequately.

image

System Center 2012 SP1 has been released with full support for Windows Server 2012, whilst Windows 8 is gaining traction and the BYOD & hybrid trends are ever increasing the challenges for management & support. Meanwhile we’re faced with ever bigger challenges keeping up with Private, Hybrid & Public cloud efforts and trends while maintaining our “legacy” systems.

I’m looking forward to discussing some serious issues we’re dealing with in managing an ever more varied ecosystem. Things are moving fast in technology. This means we need to adapt and move even faster with the flow. My friends, colleagues, fellow MEET members & MVPs, business partners, Microsoft employees: I’m looking forward to meeting up at the Summit in Mandalay Bay!

Next to the sessions I have meetings lined up with vendors, friends & colleagues from around the globe, as we optimize our time when we can meet face to face to talk shop and provide feedback. If you can’t attend, follow some of the action here at MMS 2013 Live!

image

If you read my blog or follow me on twitter and are attending, be sure to let us know so we can meet & greet.

Failover Cluster Node Names in Upper & Lower Case In Windows 2012 with Cluster.exe, PowerShell & GUI


 

Cluster Node Names Can Be Inconsistently Named

A lot of us who build failover clusters are bound to run into the fact that the node names as shown in the Failover Cluster Management GUI are not always consistent in the name format given to the nodes. Sometimes they are lower case, sometimes they are upper case. See the example below of a Windows Server 2008 R2 SP1 cluster.

image2

Many a system administrator has some slight neurotic tendencies. And he or she can’t stand this. I’ve seen people do crazy things trying to fix it, up to renaming a node in the registry. Do NOT do that. You’ll break that host. People check whether the computer object in AD is lower or upper case, whether the host name is lower or upper case, check how the nodes are registered in DNS, etc. They try to keep ‘m all in sync, at sometimes high cost :) But in the end you can never be sure that all nodes will have the same case using the GUI.

So what can you do?

  1. Use cluster.exe to add the node to the cluster. That enforces the case you type in the name! An example of this is when you’d like upper case node names:
    cluster.exe /cluster:CLUSTER-NAME /add /node:UPPERCASENODE1
  2. Some claim that when you add all nodes at the same time they will all be the same. But I’m not too sure this will always work.

Windows 2012

In Windows 2012 PowerShell replaces cluster.exe (it is still there for backward compatibility, but for how long?) and it doesn’t seem to enforce the case of the names of the nodes. For more info on failover clustering PowerShell look at Failover Clusters Cmdlets in Windows PowerShell, it’s a good starting point.
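To illustrate, adding a node with the PowerShell module looks like this. The cluster and node names are examples, and in my experience the case you type here is not guaranteed to be reflected in the GUI:

    # Add a node to an existing cluster. Unlike cluster.exe, this does not
    # enforce the case of "UPPERCASENODE1" in the node name shown later.
    Add-ClusterNode -Cluster "CLUSTER-NAME" -Name "UPPERCASENODE1"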

Don’t despair, my fellow IT Pros. Learn to accept that failover clustering is case insensitive and you’ll never run into any issue. Let it go … Well, unless you get a GUI bug like we had with Exchange 2010 SP1 or any other kind of bug that has issues with the case of the nodes :)

If you want to use cluster.exe (or MSClus for that matter) you’ll need to add it via the Add Roles and Features Wizard / Remote Administration Tools / Feature Administration Tools / Failover Clustering Tools. Note that these are not present by default.

clusterdotexe

image
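You can also add these tools from PowerShell; a small sketch, assuming I recall the feature name correctly:

    # Install the legacy cluster command interface (cluster.exe & MSClus).
    Install-WindowsFeature RSAT-Clustering-CmdInterface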

On an upgraded node I needed to uninstall failover clustering and reinstall it to get it to work, so even in that scenario they are gone and I needed to add them again.

MSClus and Cluster.EXE support Windows Server 2012, Windows 2008 R2 and Windows 2008 clusters. The Windows Server 2012 PowerShell module for clustering supports Windows Server 2012 and Windows Server 2008 R2, not Windows Server 2008.

For more information see the relevant section at Remote Server Administration Tools (RSAT) for Windows Vista, Windows 7, Windows 8 Consumer Preview, Windows Server 2008, Windows Server 2008 R2, and Windows Server “8” Beta (dsforum2wiki). You’ll have to live with the fact that a lot of documentation still refers to Windows Server 8. As of this post, it’s only been a week since the final name of Windows Server 2012 was announced.


Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 3


This is a multipart series based on some lab tests & work I did.

  1. Part 1 Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 1
  2. Part 2 Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 2
  3. Part 3 Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 3

And we have arrived at part three of my adventures while “transitioning” my Hyper-V cluster nodes to Windows Server 2012. I prefer the term transition as it is more correct. We still cannot do a rolling upgrade of a cluster. We still need to create a new cluster and recuperate the evicted nodes.

I’ll repeat myself here (again) by stating I did not reinstall the evicted nodes but upgraded them. Why? Because I can and I wanted to try it out and see what happens. For production purposes I do advise you to rebuild nodes from scratch using a well defined and, if possible, automated plan. I already mentioned this in Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 1.

Moving the Storage & Hyper-V Guests

So we stopped Part 2 at a newly created cluster without any storage. That’s what we’ll be taking care of in this part.  Let’s recap what we already mentioned at the end of Part 2.

We have several options for storage here. We could assign new storage, but a Quick Storage Migration between clusters using SCVMM 2008 R2 doesn’t fly as SCVMM 2008 R2 can’t manage Windows 2012 clusters and I don’t know if it ever will. We can do a good old manual or scripted export to and import from the new storage of the VMs, which takes a considerable amount of time. You also need to have the extra storage available.

We can also recuperate the old storage with the VMs still on there. This could get tricky as no two clusters should be able to see & use the storage at the same time. The benefit is that we can just use the Windows Server 2012 import type “Register the virtual machine in-place” (use the existing unique ID) and be done with it. We’ll try that one. We’ll still have some downtime but it should be pretty fast. It’s only from Windows Server 2012 on that we’ll be able to do Shared Nothing Live Migrations between clusters :) and life will be good. If you have a SAN you could also use clones to get this job done with less risk: you work on cloned data and keep the original around instead of using it for the process described below.

So how do we approach this?

Since Windows Server 2008, storage & clustering isn’t the pain it could be in earlier versions. It’s the disk manager handling all that, and it makes life a lot easier. All disks presented to a cluster node are offline to the operating system until you bring them online, even if they contain data or are presented to another host, whether that is a member of another cluster or a standalone host. Pretty cool. It also means you can have all your nodes online during the process. The process of bringing the disk online, formatting it with NTFS if needed, and then adding it to the cluster as storage can be done on just one of the nodes.
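A quick way to see this from PowerShell, using the Windows Server 2012 storage cmdlets, is to list the disks the node sees and check which ones are offline:

    # Show all disks presented to this node; LUNs recuperated from the old
    # cluster will show up as offline until you bring them online.
    Get-Disk | Format-Table Number, FriendlyName, Size, IsOffline, OperationalStatus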

As you recall, I unplugged the evicted node from the iSCSI storage (you could also disable the ports) before I upgraded it. The entire iSCSI configuration got upgraded perfectly, so all I needed to do was plug the iSCSI cables in and the storage appeared offline. My old cluster node was up and running, still accessing it. Pretty slick! And great as a demo, but you can play it safer. That was fun :) but perhaps we won’t be that brave in a production environment.

Options

You could decide to bring all LUNs over at once or one at a time. The process is the same. If you do it one by one you’ll have to rely on the above behavior to protect the LUNs against corruption, or you can un-present the LUNs remaining on the old cluster from the new cluster so you’ll never have an issue. We’ve done both and it works out rather fine in testing. Windows clustering is really doing its best to prevent you from shooting yourself in the foot :)

Let’s say I go LUN by LUN. I can just remove the VMs from the old cluster using the Failover Cluster GUI so they are no longer highly available on that node. When I have no more clustered VMs on a CSV LUN I can shut down all the guests in Hyper-V Manager and stop right there.

On the old cluster I remove that LUN from the CSV storage and from the cluster storage. At that moment the LUN is already taken offline for you!

image

Pardon the silly size, but I didn’t have space left to make a realistic screenshot :)
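For reference, the same step scripted on the old cluster would look something like this; “Cluster Disk 1” is an example name, yours will differ:

    # Remove the LUN from CSV; it returns to Available Storage.
    Remove-ClusterSharedVolume -Name "Cluster Disk 1"

    # Remove it from cluster storage altogether; the disk goes offline.
    Remove-ClusterResource -Name "Cluster Disk 1"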

Great, Windows is protecting us against any possible data corruption! So now I can un-present the LUN from the old cluster nodes. The next step is to enable the iSCSI ports and present that LUN to the new cluster node or nodes (depends on where in the x-number-of-nodes process you are), or just plug in the cable.

You’ll see the new LUN offline on the new cluster. We can then bring the LUN online so it will be available to add to the cluster. Just right click that disk and select “Online”.

image

image

 

Right click on storage

image

 

Select a disk that’s available to add to the cluster.

02

 

Things have gotten a lot simpler with CSV in Windows Server 2012. No more enabling it with a funky warning message that’s well meant but rather confusing and annoying. You just right click the disk and choose “Add to Cluster Shared Volumes” and that’s it.

image

 

And there it is. That disk in our new cluster is ready to use as a CSV.

image
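Scripted, the sequence on the new cluster boils down to a few lines; the disk number and cluster disk name below are made-up examples:

    # Bring the presented LUN online on one of the new cluster nodes.
    Set-Disk -Number 2 -IsOffline $false

    # Add it to the cluster's available storage.
    Get-ClusterAvailableDisk | Add-ClusterDisk

    # Promote it to a Cluster Shared Volume.
    Add-ClusterSharedVolume -Name "Cluster Disk 1"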

 

So we can now use a nifty new capability in Windows Server 2012 Hyper-V: “Register the virtual machine in-place” (use the existing unique ID).

05

 

The wizard starts.

06

 

Select the folder where your VM or VMs live. Yes, you can do multiple, given that your folder structure allows for this.

07

 

It’s found one VM in our folder.

08

 

We click Next

10

 

We select “Register the virtual machine in-place” (use the existing unique ID) and click Next.

11

If something is not right, like some forgotten “saved” states, you’ll get a chance to dump those or cancel the process to deal with it properly before trying again.

12

 

If virtual network names do not match you’ll get the opportunity to correct that by specifying which virtual switch to use.

13

 

If all was well in the first place, or after you’ve fixed any issues like the ones demonstrated above, you’re good to go. Click Finish and enjoy your Windows Server 2012 Hyper-V guest.

18

 

At this point you can already start your VMs. I know the next step is to make all these VMs highly available, but here we have some good news as well: you can now make running VMs highly available. Yeah! They no longer need to be shut down. All this is done via the well known process, so I’m not going to walk through the entire process here. But the screenshot of making a running VM highly available is worth posting :)

addrunningvm
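If you’d rather script the import and the step of making the running VM highly available, a rough sketch looks like this; the path and VM name are made up, and Import-VM registers the VM in-place (existing unique ID) by default:

    # Register the VM in-place from its existing configuration file.
    Import-VM -Path "C:\ClusterStorage\Volume1\DemoVM\Virtual Machines\<GUID>.xml"

    # Make the (running) VM highly available on the new cluster.
    Add-ClusterVirtualMachineRole -VirtualMachine "DemoVM"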

Upgrading Hyper-V Cluster Nodes to Windows 8 (Beta) – Part 2


This is a multipart series based on some lab tests & work I did.

  1. Part 1 Upgrading Hyper-V Cluster Nodes to Windows Server 8 (Beta) – Part 1
  2. Part 2 Upgrading Hyper-V Cluster Nodes to Windows 8 (Beta) – Part 2
  3. Part 3 Upgrading Hyper-V Cluster Nodes to Windows 8 (Beta) – Part 3

Here’s part two of my adventures while upgrading, or rather “transitioning”, my Hyper-V cluster nodes to Windows 8. Transition is more correct as you cannot upgrade a cluster; you create a new cluster and recuperate the nodes. I did however not reinstall them but upgraded them. Why? Because I can and I wanted to try it out to see what happens. For production purposes I do advise you to rebuild nodes from scratch using a well defined and automated plan if possible. I already mentioned this in Upgrading Hyper-V Cluster Nodes to Windows Server 8 (Beta) – Part 1.

So we stopped Part 1 with an evicted and upgraded node. We’ll want to create a new cluster with that node and then transition the other nodes over to the new Windows 8 cluster one by one, or in batches, depending on how many you can afford to take down at one time. In this part we’ll just build our new Windows 8 cluster with a single node. It’s a good thing this is possible, as we can start a transition with just one node. This is an easy part.

First of all we create a new cluster. It will all look very familiar if you’ve ever created a Windows 2008 (R2) cluster.

image

 

The Create Cluster Wizard appears, read all the advice you want and click “Next”

02

 

We select the node that we evicted from the old cluster and upgraded to Windows 8

03

04

 

You now run the validation test for your cluster

06

Let’s run ‘m all and see what it has to say.

07

 

We get a summary of which nodes will be tested and what tests will be run. Click “Next”

08

 

The tests are running.

09

 

We get a pass with some warnings, so we click “View Report” to take a look. It’s OK: we only have one node, we don’t have storage yet, and networking wise we still need to configure some things, but we can create a one node cluster. So click “Finish”

10

image

 

I named my new cluster “warriors”, the old one was called “warrior”.

12

 

I define the IP Address for the Access Point for administering the cluster

13

 

We’re ready to create the cluster so we click “Next” and the creation process starts

15

16

 

And we’re informed that we have successfully created a cluster. Click Finish. Any experienced cluster builder should find this process very familiar, without surprises.

17

 

So now we have a cluster consisting of one node, and we haven’t got any storage assigned yet.
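By the way, the same one node cluster can be stood up in a couple of lines of PowerShell; the node name and IP address are examples, the cluster name is the one from this walkthrough:

    # Validate the node, the scripted equivalent of the wizard's tests.
    Test-Cluster -Node "node1"

    # Create the one node cluster with an administrative access point.
    New-Cluster -Name "warriors" -Node "node1" -StaticAddress "192.168.1.100"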

We have several options for storage here. We could assign new storage, but a Quick Storage Migration between clusters using SCVMM 2008 R2 doesn’t fly as SCVMM 2008 R2 can’t manage Windows 8 clusters and I don’t know if it ever will. We can do a good old manual or scripted export and import of the VMs, which takes a considerable amount of time.

We can recuperate the old storage with the VMs still on there. This could get tricky as no two clusters should be able to see & use the storage at the same time. The benefit is that we can just use the Windows 8 import type “Register the virtual machine in-place” (use the existing unique ID) and be done with it. We’ll try that one. We’ll still have some downtime but it should be pretty fast. It’s only from Windows 8 on that we’ll be able to do Shared Nothing Live Migrations between clusters :) We’ll address that in Part 3.

I’m off to Attend MMS 2012 In Las Vegas


image

Life is good, people. I have the good fortune to work in an interesting industry, doing great projects with modern technologies. On top of that my employer allows me to fully develop my skills. In that respect it makes a serious difference to have a good boss & management that understand the benefits of ongoing education. They look at both the short & long term value of people educating & developing themselves a lot more than at that nagging Excel sheet on the screen. Professional development is not just a cookie cutter 4 day training course once or twice a year but real opportunities to become a better professional if and when you’re willing to put in the effort. They’ve figured out that you cannot just use utmost cost reduction to catapult both your business and employees into prosperity & wellbeing. You need to keep learning, evolving, networking, … The contacts I make and the education I get by working with and learning alongside very smart & motivated people are priceless. Sure it costs money and effort from everyone involved but it beats doing nothing and saving a few € as a long term strategy for growth & success. On top of that I feel appreciated & valued for my contributions and the efforts I put in. So to the tunes of some eighties rockers I’m off again.

Here I go again on my own, goin’ down the only road I’ve ever known.
Like a drifter I was born to walk alone. An’ I’ve made up my mind, I ain’t wasting no more time. I’m attending MMS 2012

Alone? Heck no, many thousands of us will be descending on Las Vegas (Nevada, USA) to attend the summit. This event sells out fast each year. A friend told me to register a.s.a.p. or miss out, so I did as soon as I got the go-ahead to attend, securing my spot. So now I’m travelling via LHR to LAS, following my buddies’ & other attendees’ journeys from their respective countries to Las Vegas online, mostly via Twitter.

If you can’t come, whatever the reason, you can always enjoy a good number of sessions here MMS 2012 goes digital: LIVE streaming and On-Demand for attendees AND non-attendees! 48 hours after the live presentation.

I don’t have to tell you what System Center 2012 means to the IT Pro in the Microsoft ecosystem. Combine that with the RTM of Windows 8 later this year and I just had to go and attend the Microsoft Management Summit 2012 in Las Vegas. It’s more than training. It’s networking and an education.

Apart from the formal agenda & sessions I already have some meetings lined up with vendors and colleagues from around the globe. We’re making the most of this opportunity to meet face to face with people we otherwise only get to talk to online, often across huge time zone differences.

MMS2012_Server

If you’re going and you read my blog or follow me on twitter, give us a shout out and perhaps we can have a meet & greet.

To all my geek & nerd friends, colleagues, MEET members, business partners, Microsoft employees & MVPs en route to Vegas & the Summit at The Venetian: I’m looking forward to seeing you all again! But first I have some traveling to do in the next 24 hours to make my way over there.

NIC Teaming in Windows 8 & Hyper-V


One of the many new features in Windows 8 is native NIC teaming, or Load Balancing and Fail Over (LBFO). This is, amongst many others, a most welcome and long awaited improvement. Now that Microsoft has published a great whitepaper (see the link at the end) on this, it’s time to publish this post that has been simmering in my drafts for too long. Most of us dealing with NIC teaming in Windows have a lot of stories to tell about incompatible modes depending on the type of teaming, vendors and what other advanced networking features you use. Combine that with the fact that this is a moving target due to a constant trickle of driver & firmware updates to rid us of bugs or add support for features, and what works and what doesn’t changes over time. So you have to keep an eye on this. And then we haven’t even mentioned whether it is supported or not and the hassle & risk involved with updating a driver :)

When it works it rocks and provides great benefits (if not it would have been dead). But it has not always been a very nice story. Not for Microsoft, not for the NIC vendors and not for us IT Pros. Everyone wants things to be better and finally it has happened!

Windows 8 NIC Teaming

Windows 8 brings in-box NIC teaming, also known as Load Balancing and Fail Over (LBFO), with full Microsoft support. This makes me happy as a user. It makes the NIC vendors happy to get out of needing to supply & support LBFO. And it makes Microsoft happy because it was a long missing feature in Windows that made things more complex and error prone than they needed to be.

So what do we get from Windows NIC teaming?

  • It works both in the parent & in the guest. This comes in handy, read on!

image

  • No need for anything else but NICs and Windows 8, that’s it. No 3rd party driver software needed.
  • A nice and simple GUI to configure & manage it.
  • Full PowerShell support for the above as well so you can automate it for rapid & consistent deployment.
  • Different NIC vendors are supported in the same team. You can create teams with NICs from different vendors in the same host, and you can also use different NICs across hosts. This is important for Hyper-V clustering, where you don’t want to be forced to use the same NICs everywhere. On top of that you can live migrate transparently between servers that have different NIC vendor setups. The fact that Windows 8 abstracts this all for you is just great and gives us a lot more options & flexibility.
  • Depending on the switches you have it supports a number of teaming modes:
    • Switch Independent: This uses algorithms that do not require the switch to participate in the teaming. This means the switch doesn’t care what NICs are involved in the teaming and that those teamed NICs can be connected to different switches. The benefit of this is that you can use multiple switches for fault tolerance without any special requirements like stacking.
    • Switch Dependent: Here the switch is involved in the teaming. As a result this requires all the NICs in the team to be connected to the same switch, unless you have stackable switches. In this mode network traffic travels at the combined bandwidth of the team members, which act as a single pipeline. There are two variations supported.
      1. Static (IEEE 802.3ad) or Generic: The configuration on the switch and on the server identify which links make up the team. This is a static configuration with no extra intelligence in the form of protocols assisting in the detection of problems (port down, bad cable or misconfigurations).
      2. LACP (IEEE 802.1ax, also known as dynamic teaming). This leverages the Link Aggregation Control Protocol on the switch to dynamically identify links between the computer and a specific switch. This can be useful to automatically reconfigure a team when issues arise with a port, cable or a team member.
  • There are 2 load balancing options:
    1. Hyper-V Port: Virtual machines have independent MAC addresses which can be used to load balance traffic. The switch sees a specific source MAC address connected to only one network adapter, so it can and will balance the egress traffic (from the switch) to the computer over multiple links, based on the destination MAC address for the virtual machine. This is very useful when using Dynamic Virtual Machine Queues. However, this mode might not be specific enough to get a well-balanced distribution if you don’t have many virtual machines. It also limits a single virtual machine to the bandwidth that is available on a single network adapter. Windows Server 8 Beta uses the Hyper-V switch port as the identifier rather than the source MAC address. This is because a virtual machine might be using more than one MAC address on a switch port.
    2. Address Hash: A hash (there are different types; see the whitepaper mentioned at the end for details) is created based on components of the packet. All packets with that hash value are assigned to one of the available network adapters. The result is that all traffic from the same TCP stream stays on the same network adapter; another stream will go to another NIC team member, and so on. That is how you get load balancing. As yet there is no smart or adaptive load balancing available that optimizes distribution by monitoring traffic and reassigning streams when beneficial. For a quick look at configuring these options, see the PowerShell sketch after this list.
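As a quick sketch, creating such a team with PowerShell looks like this in the current bits; the team and adapter names are made up, check yours with Get-NetAdapter:

    # Create a switch independent team with Hyper-V port load balancing.
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

    # Inspect the team and its members.
    Get-NetLbfoTeam -Name "Team1"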

Here’s a nice overview table from the whitepaper:

image

Microsoft stated that this covers the most requested types of NIC teaming, but vendors are still capable & allowed to offer their own versions, like they have for many years, when they find that adds value.

Side Note

I wonder how all this relates/works with Windows NLB, not just on a host but also in a virtual machine in combination with Windows NIC teaming in the host (let alone the guest). I already noticed that Windows NLB doesn’t seem to work if you use Network Virtualization in Windows 8. Combined with the fact that there is not much news on any improvements in WNLB (it sure could use some extra features and service monitoring intelligence), I can’t really advise customers to use it any more if they want to future proof their solutions. The Exchange team already went that path 2 years ago. Luckily there are some very affordable & quality solutions out there. Kemp Technologies comes to mind.

  • Scalability. You can have up to 32 NICs in a single team. Yes, those monster setups do exist, and it provides a nice margin to deal with future needs :)
  • There is no THEORETICAL limit on how many virtual interfaces you can create on a team. This sounds reasonable, as otherwise having an 8 or 16 member NIC team makes no sense. But let’s keep it real: there are other limits across the stack in Windows, but you should be able to get up to at least 64 interfaces generally. Use your common sense. If you couldn’t put 100 virtual machines in your environment on just two 1Gbps NICs due to bandwidth concerns & performance reasons, you shouldn’t do that on two teamed 1Gbps NICs either.
  • You can mix NICs of different speeds in the same team. Mind you, this is not necessarily a good idea; the best option is to use NICs of the same speed, due to failover and load balancing needs and the fact that you’d like some predictability in a production environment. In the lab this can be handy when you need to test things out, or when you’d rather have this than no redundancy.

Things to keep in mind

SR-IOV & NIC teaming

Once you team NICs they do not expose SR-IOV on top of that. Meaning that if you want to use SR-IOV and need resilience for your network, you’ll need to do the teaming in the guest. See the drawing higher up. This is fully supported and works fine. It’s not the easiest option to manage, as it’s on a per guest basis instead of just on the host, but the tip here is to use the NIC Teaming UI on a host to manage the VM teams at the same time. Just add the virtual machines to the list of managed servers.

image

Do note that teams created in a virtual machine can only run in the Switch Independent configuration with Address Hash distribution mode. Only teams where each of the team members is connected to a different Hyper-V switch are supported. Which is very logical: as the picture below demonstrates, you wouldn’t have a redundant solution otherwise.

image

Security Features & Policies Break SR-IOV

Also note that any advanced feature like security policies on the (virtual) switch will disable SR-IOV; it has to, or SR-IOV could be used as an effective security bypass mechanism. So beware of this when you notice that SR-IOV doesn’t seem to be working.

RDMA & NIC Teaming Do Not Mix

Now you also need to be aware of the fact that RDMA requires that each NIC has a unique IP address. This excludes NIC teaming being used with RDMA. So in order to get more bandwidth than one RDMA NIC can provide you’ll need to rely on SMB Multichannel. But that’s not bad news.

TCP Chimney

TCP Chimney is not supported with network adapter teaming in Windows Server “8” Beta. This might change but I don’t know of any plans.

Don’t Go Overboard

Note that you can’t team teamed NICs, whether in the host or parent or in the virtual machines themselves. There is also no support for using Windows NIC teaming to team two teams created with 3rd party (Intel or Broadcom) solutions. So don’t stack teams on top of each other.

Overview of Supported / Not Supported Features With Windows NIC Teaming

image

Conclusion

There is a lot more to talk about and a lot more to be tested and learned. I hope to get some more labs going and run some tests to see how things all fit together. The aim of my tests is to be ready for prime time when Windows 8 goes RTM. But buyer beware, this is still “just” Beta material.

For more information please download the excellent whitepaper NIC Teaming (LBFO) in Windows Server "8" Beta