ODX Speeds Up VHDX Creation Times On Windows Server 2012 (R2)


Some technologies you just need to see in action instead of reading about them. I have posted a video on Vimeo that shows ODX in action on Windows Server 2012 R2 and a DELL Compellent SAN running Storage Center 6.3.10 firmware, which supports UNMAP & ODX. Watch the video here or on Vimeo itself for a better experience. It’s a rerun of the demo scripts used in my TechNet Belux Live Meeting of this week.

We demonstrate the amazing speeds at which we can create VHDX files on both a traditional clustered disk and a Cluster Shared Volume. If you have ever tried to create a lot of fixed VHD/VHDX files, especially larger ones, then you really need to check out ODX and its potential. If you have a SAN, or are thinking about acquiring one, make sure you get this feature and be sure that it works as advertised.
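
If you want to get a feel for the difference yourself, timing the creation of a large fixed VHDX is an easy test. Below is a minimal sketch; the path and size are just examples, and it assumes the volume lives on an ODX capable SAN:

# Time how long it takes to create a 100GB fixed VHDX (path and size are examples)
Measure-Command { New-VHD -Path C:\ClusterStorage\Volume1\ODXTest.vhdx -SizeBytes 100GB -Fixed }

With ODX doing the work on the array this completes in seconds, where zeroing out 100GB over the wire would otherwise take many minutes.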

I hope you enjoy it and that it inspires you to look at where you can leverage this technology in your own environments.

Hands on with Hyper-V Clustering Maintenance Mode & Cluster Aware Updating TechNet Screencast


I’ve blogged and given some presentations on Cluster Aware Updating before, and I also did a web cast on this subject on TechNet. You can find the video of that screencast right here: Hands on with Hyper-V Clustering Maintenance Mode & Cluster Aware Updating.

image

I hope you get something out of it. Once I got my head wrapped around the XML needed to make the BIOS, firmware & driver updates from DELL work as well as the preconfigured inbox functionality (GDR & QFE updates), it has proven equally valuable for those kinds of updates.
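
For reference, a minimal sketch of kicking off a CAU run with the inbox Windows Update plugin follows; the cluster name is an example, and for the DELL BIOS, firmware & driver updates you would point CAU at the hotfix plugin with your own hotfix root and XML instead:

# Run Cluster Aware Updating against a cluster using the inbox Windows Update plugin (names are examples)
Invoke-CauRun -ClusterName MyCluster -CauPluginName Microsoft.WindowsUpdatePlugin -MaxFailedNodes 1 -RequireAllNodesOnline -Force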

Teamed NIC Live Migrations Between Two Hosts In Windows Server 2012 Do Use All Members


Introduction

Between the blog post NIC Teaming in Windows Server 2012 Brings Simple, Affordable Traffic Reliability and Load Balancing to your Cloud Workloads, which states: “TCP/IP can recover from missing or out-of-order packets. However, out-of-order packets seriously impact the throughput of the connection. Therefore, teaming solutions make every effort to keep all the packets associated with a single TCP stream on a single NIC so as to minimize the possibility of out-of-order packet delivery. So, if your traffic load comprises of a single TCP stream (such as a Hyper-V live migration), then having four 1Gb/s NICs in an LACP team will still only deliver 1 Gb/s of bandwidth since all the traffic from that live migration will use one NIC in the team. However, if you do several simultaneous live migrations to multiple destinations, resulting in multiple TCP streams, then the streams will be distributed amongst the teamed NICs”, and other information out there such as support forum replies, the common wisdom is that when you live migrate between two nodes in a cluster only one stream is active and you will never exceed the bandwidth of a single team member. When running some simple tests with a 10Gbps NIC team this seems true. We also know that you can consume nearly all of the aggregated bandwidth of the members in a NIC team for live migration if these conditions are met:

  1. The live migrations must not all be destined for the same remote machine. Live migration will only use one TCP stream between any pair of hosts. Since neither Windows NIC Teaming nor the adjacent switch will spread traffic from a single stream across multiple interfaces, live migration between host A and host B, no matter how many VMs you’re migrating, will only use one NIC’s bandwidth.

  2. You must use Address Hash (TCP ports) for the NIC teaming. Hyper-V Port mode will put all the outbound traffic, in this case, on a single NIC. A sketch of a team set up this way follows below.
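
As a sketch, such a team could be set up as follows; the adapter names are examples and TransportPorts is the Address Hash (TCP ports) load balancing mode:

# Create an LACP team that load balances on TCP/UDP ports (adapter names are examples)
New-NetLbfoTeam -Name LMTeam -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts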

When we look at these conditions and compare them to the behavior we expect from the various forms of NIC teaming in Windows Server 2012, this is a bit surprising, as one might expect all members to be involved. So let’s take a look at some of the different NIC teaming setups.

Any form of NIC teaming with Hyper-V Port Mode

This one is easy, as condition 2 above very much holds true. In all my testing with any NIC team configuration using the Hyper-V Port traffic distribution mode I have not been able to exceed 10Gbps. I have seen no difference between switch dependent static or LACP mode and switch independent (active-active) for this condition. As you can see in the screenshot below, the traffic maxes out at 10Gbps.

clip_image002

clip_image004

This is also demonstrated in the following screenshots taken with Resource Monitor, where you can see only half of the bandwidth of the team is being used.

clip_image006

clip_image008

Exceeding a single NIC team member’s bandwidth when migrating between 2 nodes

The first condition from the previous heading turns out not to be entirely true. In some easy testing with a low number of virtual machines and not too much memory assigned you never exceed the bandwidth of one 10Gbps NIC team member, so on the surface, with some quick testing, it might seem to hold.

But during testing on a 2 node cluster with dual port 10Gbps cards I have found the following:

Switch Dependent LACP and Static

  1. Take a sufficient number of large memory virtual machines to exceed the capacity of a single 10Gbps pipe for a longer time (that way you’ll see it in the GUI).
  2. Live migrate them all from host A to host B (“Pause” with “Drain Roles” or “select all” + “Move”)
  3. Note that with a 2 node cluster there is no possibility to live migrate to multiple nodes simultaneously. It’s A to B or B to A, or both at the same time.

Basically it didn’t take long to see well over 10Gbps being used. So the information out there seems to be wrong. Yes, we can leverage the aggregated bandwidth when we migrate from host A to host B, as long as we have enough memory assigned to the VMs and we migrate a sufficient number of them. Switch dependent teaming, whether it is static or LACP, does its job as you would expect.

Let’s think about this. The number of VMs you need to live migrate to see > 10Gbps used is not fixed in stone. Could it be that there is some intelligence in the live migration algorithm where it decides to set up multiple streams when a certain number of virtual machines with sufficient memory are migrated, the extra effort being justified by the amount of bandwidth that can be leveraged? Perhaps VMMS.EXE kicks off more streams when needed/beneficial? Further experimenting indicates that this is not the case. All you need is > 1 VM being live migrated. When looking at this in Task Manager you do need them to be of sufficient memory size and/or migrate enough of them to make it visible. I have also tried playing with the number of allowed simultaneous live migrations (i.e. 4, 6 or 12) to see if this has an effect, but I did not find one.
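
For completeness, this is how that setting can be varied (a sketch, to be run on each Hyper-V host):

# Allow up to 6 simultaneous live migrations on this host
Set-VMHost -MaximumVirtualMachineMigrations 6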

It looks like it is more a case of one TCP/IP connection per live migration, each of which is indeed tied to one NIC member. So when you live migrate VMs between two hosts you see one VM’s live migration go over one member and the other over the other, as static/LACP switch dependent teaming does its job. When you do enough live migrations of large VMs simultaneously you see this in Task Manager as shown below. As each VM’s live migration stream sticks to a NIC team member you do not need to worry about out-of-order packets impacting performance.

clip_image010

But to make sure, and to avoid falling victim to the limits of the Task Manager GUI while testing this behavior, we also used Performance Monitor to see what’s going on. This confirms we are indeed using both 10Gbps NIC team members on both the target and the source host server. This is even the case with a live migration of just 2 virtual machines. As long as it’s more than one and the memory assigned is enough to make the live migration last long enough you can see it in Task Manager; otherwise you might miss it. Performance Monitor, however, does not.

clip_image012

clip_image002[4]

clip_image004[4]
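
If you would rather sample those counters from PowerShell than watch the Performance Monitor graphs, something like this does the trick (this is the standard Network Interface counter set):

# Sample outgoing bytes per NIC once a second for 30 seconds
Get-Counter -Counter "\Network Interface(*)\Bytes Sent/sec" -SampleInterval 1 -MaxSamples 30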

This is interesting and frankly a bit unexpected, as the documentation on this subject does not reflect it. However, it IS in agreement with the documented NIC teaming behavior for traffic other than live migration. We took a closer look and can reproduce this over and over again. Again, we tested both switch dependent static and LACP modes and we found the behavior to be the same.

Switch Independent with Address Hash

Let’s test live migration over switch independent teaming with Address Hash. Here we see that the source server sends on the two members of the NIC team, but that the target server receives on only one. This is normal behavior for switch independent teaming. But from the documentation we would expect that one member on the source server would send and one member on the target server would receive. Not so.

Basically with Windows Server 2012 this doesn’t give you any benefit for throughput. You are limited to the bandwidth of one member, i.e. 10Gbps.

clip_image018

clip_image020

Red is Total Bytes Received on the target host. It’s clear only one member is being used. Green is Bytes Sent/sec on the source server. As you can see, both team members are involved. In a switch independent scenario the receiving side limits the throughput. This is in agreement with the documented behavior of switch independent NIC teaming with Address Hash.

Helpful documentation on this is Windows Server 2012 NIC Teaming (LBFO) Deployment and Management (A Guide to Windows Server 2012 NIC Teaming for the novice and the expert).

Hope this helps sort out some of the confusion.

Fixing Event ID 2002 “The policy and configuration settings could not be imported to the RD Gateway server "%1" because they are associated with local computer groups on another RD Gateway server”


Introduction

I was working on a little project for a company that was running TS Gateway on 32-bit Windows 2008. The reason they did not go for x64 at the time was that they had used Virtual Server as their virtualization platform for some years, not Hyper-V. One of the drawbacks was that they could not use x64 guest VMs. Since then they have moved to Hyper-V and now also run Windows Server 2012. So after more than 5 years of service, and to make sure they did not keep relying on aging technology, it is time to move to Windows Server 2012 RD Gateway and reap the benefits of the latest OS.

All in all the Microsoft documentation is not too bad, albeit that the information is a bit distributed, as you need to use various tools to complete the process. Basically, depending on the original setup of the source server, you’ll need to use the TS/RD Gateway Export & Import functionality, Web Deploy (we’re at version 3.0 at the time of writing) and the Windows Server Migration Tools that were introduced with Windows 2008 R2 and are also available in Windows Server 2012.

In a number of posts I’ll be discussing some of the steps we took. You are reading Part 3.

  1. x86 Windows Server 2008 TS Gateway Migration To x64 Windows Server 2012 RD Gateway
  2. Installing & using the Windows Server Migration Tools To Migrate Local Users & Groups
  3. TS/RD Gateway Export & Import policy and configuration settings, a.k.a. Fixing “The policy and configuration settings could not be imported to the RD Gateway server "TARGETSERVER" because they are associated with local computer groups on another RD Gateway server”

The Migration

There is no in-place upgrade from an x86 to an x64 OS. So this has to be a migration. No worries, this is supported. With some insight, creativity and experience you can make this happen. The process is reasonably well documented on TechNet, but not perfectly, and your starting point is right here: RD Gateway Migration: Migrating the RD Gateway Role Service. These docs are for Windows Server 2008 R2 but still work for Windows Server 2012. Another challenge was that we also needed to migrate their custom website used by the employees to check whether their PC is still on and, if not, wake it up or start it up remotely.

As you read in the previous part, we had to migrate the local users and groups that are also used by the x86 Windows 2008 TS Gateway server, as we still need those on the Windows Server 2012 RD Gateway. The Active Directory users and groups used in Connection Authorization Policies (CAP) and Resource Authorization Policies (RAP) require no further work.

TS/RD Gateway Export & Import

I’m not going to write about how to install a brand new RD Gateway. That’s been done just fine by Microsoft and many others. I’ll just discuss the import and export functionality in the TS/RD Gateway Manager and help you with a potential issue.

Export

This is easy. On the source TS/RD Gateway server you just right click the server in TS/RD Gateway Manager and select Export policy and configuration settings. In our case this is a Windows Server 2008 TS Gateway, x86, so 32-bit. But that doesn’t matter here.

image

Give the export file a name and choose a location.

image

You’ll get a notification of a successful export.

image

Import

Ordinarily you’ll launch the RD Gateway Manager Import policy and configuration settings feature and follow the wizard.

image

Select an export file (from the old TS Gateway server) to import.

 image

image

image

But instead of getting a success message you get an error.

image

If you are moving the TS/RD Gateway to a new server and will not reuse the name, you’ll have to deal with the following issue: The policy and configuration settings could not be imported to the RD Gateway server "TARGETSERVER" because they are associated with local computer groups on another RD Gateway server.

This also manifests itself as an error in the TerminalServices-Gateway Admin log with Event ID 2002.

image

“The policy and server configuration settings for the TS Gateway server "%1" could not be imported. This problem might occur if the settings have become corrupted.”

What? Corrupt? The export went fine!? Now if you start researching this error you’ll end up here: http://technet.microsoft.com/en-us/library/cc727351(v=ws.10).aspx, which will tell you what to do if you get this error due to a bad export, but basically tells you you’re stuck otherwise. Not so! The solution to this is very easy, you just have to know that it works. I found out by testing & verifying this. All you have to do is edit the source TS/RD Gateway export XML file.

Open up the XML file in Notepad. Select Edit/Replace from the menu, do a Find on "SOURCESERVER" with Replace All set to "TARGETSERVER", and save the file. Use that edited XML file for the import.

image
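
If you prefer to script the edit, a quick PowerShell sketch does the same thing; the file path and server names are examples:

# Swap the source server name for the target server name in the export file
(Get-Content C:\Temp\TSGatewayExport.xml) -replace "SOURCESERVER","TARGETSERVER" | Set-Content C:\Temp\TSGatewayExport.xml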

So now start the import again with your edited file and after a while you’ll see that you have been successful this time.

image

If you are reusing the name you will not have this issue, as the name in the export file will match the host name. However, as this server is domain joined to the same domain as the original one, you’ll have to respect the order of taking down the original one, resetting its AD computer account and reusing it for the new RD Gateway server. This is more risky, as you take down the service before you switch over. With a new server and a DNS alias you can just swap between the old and the new one by simply updating the DNS record(s), or even reuse the old IP address; that switch can go fast.
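
As a sketch, the DNS alias swap itself is quick work on a Windows DNS server; the zone, alias and host names below are examples:

# Point the existing alias at the new RD Gateway (zone/alias/host names are examples)
Remove-DnsServerResourceRecord -ZoneName company.com -RRType CName -Name rdgateway -Force
Add-DnsServerResourceRecordCName -ZoneName company.com -Name rdgateway -HostNameAlias newrdgw.company.com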

Installing & using the Windows Server Migration Tools To Migrate Local Users & Groups


Introduction

I was working on a little project for a company that was running TS Gateway on 32-bit Windows 2008. The reason they did not go for x64 at the time was that they had used Virtual Server as their virtualization platform for some years, not Hyper-V. One of the drawbacks was that they could not use x64 guest VMs. Since then they have moved to Hyper-V and now also run Windows Server 2012. So after more than 5 years of service, and to make sure they did not keep relying on aging technology, it is time to move to Windows Server 2012 RD Gateway and reap the benefits of the latest OS.

All in all the Microsoft documentation is not too bad, albeit that the information is a bit distributed, as you need to use various tools to complete the process. Basically, depending on the original setup of the source server, you’ll need to use the TS/RD Gateway Export & Import functionality, Web Deploy (we’re at version 3.0 at the time of writing) and the Windows Server Migration Tools that were introduced with Windows 2008 R2 and are also available in Windows Server 2012.

In a number of posts I’ll be discussing some of the steps we took. You are reading the second post.

  1. x86 Windows Server 2008 TS Gateway Migration To x64 Windows Server 2012 RD Gateway
  2. Installing & using the Windows Server Migration Tools To Migrate Local Users & Groups
  3. TS/RD Gateway Export & Import (Fixing Event ID 2002 “The policy and configuration settings could not be imported to the RD Gateway server "%1" because they are associated with local computer groups on another RD Gateway server”)

As discussed in the first part, we need to migrate some local users & groups on the TS Gateway (source) server as they are also being used for some special cases of remote access, next to the Active Directory users & groups for the Resource Authorization Policies (RAPs) & Connection Authorization Policies (CAPs). The tool to use is the Windows Server Migration Tools. These were introduced with Windows 2008 R2 and are also available in Windows Server 2012.

Some people seem to get a bit confused about the installation of the Server Migration Tools, but it’s not that hard. I have used these tools several times in the past and they work very well. You just need to read up a bit on the deployment part; once you have that figured out, the rest is straightforward.

Installing the Windows Server Migration Tools on the DESTINATION Server

First we have to install them on the DESTINATION host (W2K12 in our case, the server to which you are migrating). For this we launch Server Manager and on the dashboard select Manage and choose Add Roles & Features.

clip_image001

Navigate through the wizard until you get to Features. Find and select Windows Server Migration Tools. Click Next.

clip_image001[4]

Click Install to kick off the installation.

clip_image001[9]

After a while your patience will be rewarded.

clip_image001[11]

Installing the Windows Server Migration Tools on the SOURCE Server

To install the Windows Server Migration Tools on the SOURCE server, you need to run the appropriate PowerShell command on the DESTINATION server. This is what trips people up a lot of the time. You deploy the correct version of the tools from the destination server to the source server, where you will then register them for use. Do this with an account that has admin privileges on both the DESTINATION & SOURCE computers.

Start up the Windows Server Migration Tools from Server Manager, Tools.

image

This launches the Windows Server Migration Tools PowerShell window.

image

Our SOURCE server here is the 32-bit (x86) Windows 2008 TS Gateway server. The documentation tells us the correct values to use for the /architecture and /OS parameters.

SmigDeploy.exe /package /architecture X86 /os WS08 /path \\SourceServer\c$\sysadmin

Now, before you run this command, be sure to go to the ServerMigrationTools folder, as the UI fails to do that for you.

Also, this is PowerShell, so use .\ in front of the command, otherwise you’ll get the error below.

image

While you want this:

image
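
Putting both gotchas together, the full sequence looks like this; the install path is the default one and the share is an example:

# Change to the Server Migration Tools folder, then deploy the package to the source server
Set-Location $env:windir\System32\ServerMigrationTools
.\SmigDeploy.exe /package /architecture X86 /os WS08 /path \\SourceServer\c$\sysadmin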

Now you have deployed the correct tools to the SOURCE server, our old legacy TS Gateway server. Next we need to register these tools on the SOURCE server to be able to use them. You might have gotten the message already: you need PowerShell deployed on the SOURCE server, as documented.

If you have PowerShell, launch the console with elevated permissions (Run As Administrator) and run the following command: .\SmigDeploy.exe

image

Congratulations, you are now ready to use the Windows Server Migration Tools! That wasn’t so hard, was it?

Using the Windows Server Migration Tools To Migrate Local Users & Groups

To export the local users and groups from the source TS/RD Gateway server you start up the Windows Server Migration Tools on the SOURCE server (see the documentation for all ways to achieve this) and run the following PowerShell command:
Export-SmigServerSetting -User All -Group -Path C:\SysAdmin\ExportMigUsersGroups -Verbose

image

As you can see, I elected to migrate all user accounts, not just the enabled or disabled ones. We’ll sort those out later. Also note that the command will create the folder for you.

To import the local users and groups to the target RD Gateway server you start up the Windows Server Migration Tools on the DESTINATION server (see the documentation), i.e. our new Windows Server 2012 RD Gateway VM.

image

and run the following PowerShell command:

Import-SmigServerSetting  -User Enabled  -Group -Path C:\SysAdmin\ExportMigUsersGroups -Verbose

Do note that the migrated user accounts will be disabled and have “User must change password at next logon” set. This means you will have to deal with this accordingly, depending on the scenario, and communicate new passwords & actions to take to the users.

image

image
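
To quickly see which local accounts arrived disabled you can query them via ADSI. This is a sketch; Get-LocalUser does not exist yet on Windows Server 2012, hence the WinNT provider:

# List every local user and whether the account is disabled (UserFlags bit 2)
([ADSI]"WinNT://$env:COMPUTERNAME").Children |
    Where-Object { $_.SchemaClassName -eq "User" } |
    ForEach-Object { "{0} disabled: {1}" -f $_.Name, [bool]($_.UserFlags.Value -band 2) }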

Do note that the local groups have had the local or domain groups/users added to them by the import command. Pretty neat.

image

You’re now ready for the next step. But that’s for another blog post.

x86 Windows Server 2008 TS Gateway Migration To x64 Windows Server 2012 RD Gateway


Introduction

I was working on a little project for a company that was (still) running TS Gateway on a 32-bit (x86) version of Windows 2008. The reason they did not go for x64 at the time of deployment was that they then used Microsoft Virtual Server as their virtualization platform, and had been for some years.

In a number of posts I’ll be discussing some of the steps we took. You are reading the first one.

  1. x86 Windows Server 2008 TS Gateway Migration To x64 Windows Server 2012 RD Gateway
  2. Installing & using the Windows Server Migration Tools To Migrate Local Users & Groups
  3. TS/RD Gateway Export & Import (Fixing Event ID 2002 “The policy and configuration settings could not be imported to the RD Gateway server "%1" because they are associated with local computer groups on another RD Gateway server”)

In those early days of W2K8 they had not yet switched to Hyper-V. As an early adopter I was able to show them the reliability of Hyper-V, so later they did.

One of the drawbacks of using Microsoft Virtual Server was that they could not use x64 guest VMs, and that’s how they ended up with x86, which was still available for a server OS with W2K8. Since then they have moved to Hyper-V and now also run Windows Server 2012. Happy customers! So after more than 5 years of service, and to make sure they did not keep relying on aging technology, it is time to move to Windows Server 2012 RD Gateway and reap the benefits of the latest OS.

The Migration

There is no in-place upgrade from an x86 to an x64 OS. So this has to be a migration. No worries, this is supported. With some insight, creativity and experience you can make this happen. The process is reasonably well documented on TechNet, but not perfectly, and your starting point is right here: RD Gateway Migration: Migrating the RD Gateway Role Service. These docs are for Windows Server 2008 R2 but still work for Windows Server 2012. Another challenge was that we also needed to migrate their custom website used by the employees to check whether their PC is still on and, if not, wake it up or start it up remotely.

There are some things to take care of, and I’ll address these in some later blog posts, but I want you to take this message to heart. While an in-place upgrade of a 32-bit x86 operating system to the x64 version of that OS is not possible, that doesn’t mean you’re in a pickle and will have to start over from scratch. For many scenarios there are migration paths, and this is just one example of them, or better, two combined: TS Gateway and a website.

If You Can, You Should Attend TechEd 2013 Europe


It’s that time of the year again, when TechEd is coming closer. I’m attending the European edition in Madrid, Spain. But I can guarantee you I will be online a lot during the USA edition as well. I’d be attending the USA edition this year if I could, but work, time and budget wise I can’t make that happen. This isn’t because the European edition is less, absolutely not. The reason is that at MMS2013 in Las Vegas last month we got the heads up that Microsoft will start talking publicly about the new version of Windows, and I’m game for that. Windows Server 2012 is the best Windows version ever, but I know what I’d like to see in there to make it even better. I’m kind of curious if anyone at MSFT follows my thinking on this subject. I hope so!

TechEdEU_250x250_7

So yes I’m a TechEd advocate, you bet! If you want to know why, read my blog post here on http://workinghardinit.wordpress.com/2010/06/05/why-i-find-value-in-a-conference/.

Come and learn amongst your peers, network with them and with industry experts. To become competent and gain expertise you are going to have to get out there and expose your ideas, insights and thinking to your peers around the globe. That’s how it works. To those who dismiss quality conferences like this I can only say that you are wrong. To those who claim it’s a paid holiday I can only say that to a liar all other men are liars and to a thief all other men are thieves. Enough said. Invest in knowledge and competence development, it will pay off better than some extra thousands of € in the bank!

So if you can, please join me and attend TechEd. It’s a blast and a tremendous learning experience. I never ever miss attending TechEd, not even at times when it wasn’t easy for me to do so. You can register here. I hope to see you there!

SMB Direct RoCE Does Not Work Without DCB/PFC


Introduction

SMB Direct RoCE does not work without DCB/PFC. “Yes”, you say, “we know, this is well documented. Thank you.” But before you sign off, hear me out.

Recently I plugged two RoCE cards into some test servers and linked them to a couple of 10Gbps switches. I did some quick large file copy testing and to my big surprise RDMA kicked in with stellar performance even before I had installed the DCB feature, let alone configured it. So what’s the deal here? Does it work without DCB? Does the card fall back to iWARP? Highly unlikely. I was expecting it to fall back to plain vanilla 10Gbps with RDMA not being used at all, but it was. A short shout-out to Jose Barreto to discuss this helped clarify things.

DCB/PFC is a requirement for RoCE

The busier the network gets, the faster the performance will drop. Now in our test scenario we had two servers, for a total of 4 RoCE ports, on a network consisting of beefy 48 port 10Gbps switches. So we didn’t see the negative results of this here.

DCB (Data Center Bridging) and Priority Flow Control are considered a requirement for any kind of RoCE deployment. RDMA with RoCE operates at the Ethernet layer. That means there is no overhead from TCP/IP, which is great for performance; this is the reason you want to use RDMA in the first place. It also means it’s left on its own to deal with Ethernet-level collisions and errors. For that it needs DCB/PFC, otherwise you’ll run into performance issues due to a ton of retries at the higher network layers.

The reason that iWARP doesn’t require DCB/PFC is that it works at the TCP/IP level, also offloaded by using a TCP/IP stack on the NIC instead of in the OS. So errors are handled by TCP/IP, at a cost: iWARP delivers the same benefits as RoCE but it doesn’t scale as well. Not that iWARP performance is lousy, far from it! Mind you, for bandwidth management reasons, you’d be better off using DCB or some form of QoS there as well.

Conclusion

So no, not configuring DCB on your servers and the switches isn’t an option, but apparently it isn’t blocked either, so beware of this. It might appear to be working fine, but it’s a bad idea. Also, don’t think it defaults back to iWARP mode; it doesn’t, as one card does one thing, not both. There is no shortcut. RoCE RDMA does not work error free out of the box, so you do have to install the DCB feature and configure it together with the switches.
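
For completeness, a minimal sketch of the server side of that configuration; priority 3 is a common choice for SMB Direct, the NIC names are examples, and your switches must be configured to match:

# Install the DCB feature, tag SMB Direct traffic and enable PFC for it
Install-WindowsFeature Data-Center-Bridging
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
New-NetQosTrafficClass "SMB" -Priority 3 -Algorithm ETS -BandwidthPercentage 50
Enable-NetAdapterQos -Name "SLOT 6 Port 1","SLOT 6 Port 2"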

MVP Carsten Rachfahl Visits & Interviews Me On Networking & Storage in Windows Server 2012


Last month Carsten (MVP – Virtual Machine) & Kerstin Rachfahl (MVP – Office 365) visited me in my home town. Apart from a short visit to the historic center & a sushi dinner amongst friends, we also did an interview where we discussed our ongoing Windows Server 2012 Hyper-V activities. We’re trying to leverage as much of the product as we can to get the best TCO & ROI, and as early adopters we’ve been reaping the benefits from the day the RTM bits were available to us. So far that has been delivering great results. Funny to hear me mention the Fast Track designs, as a week later we saw version 3 of those at MMS2013. The most interesting thing to me about those was the fact that the small & medium sizes focus on Cluster in a Box and Storage Spaces!

While we were having fun talking about the above, we also enjoyed some of the most beautiful landmarks of the City of Ghent as a backdrop for the interview. It was filmed in a meeting room at AGIV, to whom I provide infrastructure services with a great team of colleagues. Just click the picture to view the video.

Videointerview_with_Didier_Van_Hoye_Storage_Networking_and_other_Stuff-Thumb2

You can also enjoy the video on Carsten’s blog: http://www.hyper-v-server.de/videos/interview-mit-didier-van-hoye-ber-seinen-storage-netwerk-und-mehr/ All I need to do now is arrange for Carsten to physically touch the Compellent storage, I think.

SMB 3.0 Multichannel Auto Configuration In Action With RDMA / SMB Direct


Most of you might remember this slide by Jose Barreto on SMB Multichannel Auto Configuration in one of his many presentations:

image

  • Auto configuration looks at NIC type/speed => Same NICs are used for RDMA/Multichannel (doesn’t mix 10Gbps/1Gbps, RDMA/non-RDMA)
  • Let the algorithms work before you decide to intervene
  • Choose adapters wisely for their function

You can fine-tune things if and when needed (only do this when it is really necessary), but let’s look at this feature in action in real life.

For this test we have 2 * X520 DA 10Gbps ports using 10.10.180.8X/24 IP addresses and 2 * Mellanox 10Gbps RDMA adaptors with 10.10.180.9X/24 IP addresses. No teaming involved, just multiple NIC ports. Do note that these IP addresses are on a different subnet than the LAN of the servers. Basically only the servers can communicate over them; they don’t have a gateway or DNS servers and as such are not registered in DNS either (life is easy for simple file sharing).

image

Let’s try and copy a 50GB fixed VHDX file from server1 to server2 using the DNS name of the target host (the host name is pixelated in the screenshots), meaning it will resolve to that host via DNS and use the LAN IP address 10.10.100.92/16. In the screenshot below you see that the two RDMA capable cards are put into action. The servers are not using the 1Gbps LAN connection. Multichannel looked at the options:

  • A 1Gbps RSS capable Link
  • Two 10Gbps RSS capable Links
  • Two 10Gbps RDMA capable links

Multichannel concluded the RDMA cards are the best ones available and, as we have two of those, it uses both. In other words, it works just as described.

image
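
You do not have to rely on Task Manager to see what Multichannel picked; SMB will tell you itself:

Get-SmbMultichannelConnection    # the connections SMB 3.0 actually set up
Get-SmbClientNetworkInterface    # RSS/RDMA capability per NIC as the SMB client sees it
Get-NetAdapterRdma               # which adapters expose RDMA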

Even if we try to bypass DNS and copy the files explicitly via the IP address (10.10.180.84) assigned to the Intel X520 DA cards, Multichannel intelligence detects that it has two better, RDMA capable cards available, and as you can see it uses the same NICs as in the demo before. Nifty, isn’t it?

 image

If you want to see the other NICs in action we can disable the Mellanox cards, and then Multichannel will choose the two X520 DA cards. That’s fine for testing, but in real life you need a better solution when you need to manually define which NICs can be used. This is done using PowerShell (take a look at Jose Barreto’s blog The basics of SMB PowerShell, a feature of Windows Server 2012 and SMB 3.0 for more info).

New-SmbMultichannelConstraint -ServerName SERVER2 -InterfaceAlias "SLOT 6 Port 1", "SLOT 6 Port 2"

This tells a server that it can only use these two NICs, which in this example are the two Intel X520 DA 10Gbps cards, to access Server2. So basically you configure/tell the client what to use for SMB 3.0 traffic to a certain server. Note the difference in send/receive traffic between RDMA and native 10Gbps.

On Server1, the client, you see this:

image

On Server2, the server, you see this:

image

Which is indeed the constraint we set up, as we can verify with:

Get-SmbMultichannelConstraint

image

We’re done playing so let’s clean up all the constraints:

Get-SmbMultichannelConstraint | Remove-SmbMultichannelConstraint

image

Seeing this technology, it’s now up to the storage industry to provide the needed capacity and IOPS in a lot more affordable way. Storage Spaces have knocked on your door; that was the wake-up call. In an environment where we throw lots of data around we just love SMB 3.0.