Windows XP Clients Cannot Execute Logon Scripts against a Windows Server 2012 R2 Domain Controller – Workaround


The issue

The real issue is that you are still running Windows XP. The secondary issue is that you have Windows XP clients that cannot connect to a file share (NETLOGON) on a Windows Server 2012 R2 domain controller. If you try manually via \\domaincontroller\Netlogon it will throw an error like "The specified network name is no longer available". Security-wise and moral-pressure-wise I kind of think this drives home the message that you need to get off Windows XP. But I realize you're in a pickle, so here's the workaround/fix.

Root Cause & Fix

Windows XP talks SMB 1.0 and that's it. If that is not offered by the server (file server or domain controller) we have a problem. Now, if you installed new Windows Server 2012 R2 servers they do not deploy the SMB 1.0 feature by default. If you upgraded from Windows Server 2008 R2 (perhaps even via Windows Server 2012) to get to Windows Server 2012 R2, this feature was kept in place. Otherwise you'll need to make sure SMB 1.0 is installed; it often (always?) is. Just check.


However, there is a big change compared to Windows Server 2008 R2/Windows Server 2012: in Windows Server 2012 R2 the LanmanServer service has a dependency on SMB 2.0 and no longer on SMB 1.0.

This is what it looks like on a Windows Server 2012 (or lower) domain controller:


This is what it looks like on a Windows Server 2012 R2 domain controller:


So we need to change that on Windows Server 2012 R2 to support Windows XP. We can do this in the registry, in the DependOnService multi-string value under

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer

  1. Change SamSS Srv2 to SamSS Srv
  2. Restart the Server (LanmanServer) service (it will restart dependent services such as Netlogon and DFS Namespace as well); if you prefer to script this, see the PowerShell sketch below
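
A minimal PowerShell sketch of that change (run elevated on the domain controller; check the existing value on your own system first, and remember to revert it once XP is gone):

$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer'
# Show the current dependency list (expected: SamSS, Srv2)
(Get-ItemProperty -Path $key -Name DependOnService).DependOnService
# Swap the SMB 2.0 dependency (Srv2) for the SMB 1.0 one (Srv)
Set-ItemProperty -Path $key -Name DependOnService -Value @('SamSS','Srv')
# Restart the Server service; -Force also restarts dependent services (Netlogon, DFS Namespace, ...)
Restart-Service -Name LanmanServer -Force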

Your XP clients should be able to authenticate again. You can test this by navigating to \\domaincontroller\Netlogon on an XP client. This should succeed again.

If you have issues with Windows Server 2012 R2 file servers, this workaround is also valid. When you do get rid of Windows XP, please go back to the original settings.

If you want to read more on SMB, read the blog post Windows Server 2012 R2: Which version of the SMB protocol (SMB 1.0, SMB 2.0, SMB 2.1, SMB 3.0 or SMB 3.02) are you using? by Jose Barreto (File Server team at Microsoft).

Finally, get off XP!

I think I said it enough on Twitter and on my blog (Legacy Apps Preventing Your Move From Windows XP to Windows 8.1?). Are you worried about Heartbleed? Good! Are you worried about still being on XP? No? Well, dump SSL and use clear-text authentication then, because XP is a free-fire zone anyway (as of April 8th 2014) and it's just a matter of time before you're roadkill. Any company with CIOs/CTOs/IT managers and other well-paid functions that has let its organization be held hostage on XP (I'm not talking about a few PCs or VMs left and right) by legacy apps & ISVs should realize they are the ones who let this happen. Your watch. Your responsibility. No excuses.

Adventures In RDMA – The RoCE Path Over DCB To Windows Server 2012 R2 SMB 3.0 Glory


Prologue

On a gloomy day, it was dark, grey and cold, we gave battle with RoCE & DCB (PFC/ETS). The fight was a long one, the battlefield uncharted, and we had only our veteran attitude towards adversity to guide us through the switch configurations. It seemed that no man had gone that far to the edges of the Windows Server 2012 empire. And when it came to RoCE & DCB versus Didier, I needed to show them they had been conquered, and I was reminded of a quote from Gladiator:

Quintus: People (read: RoCE/DCB configs) should know when they are conquered.
Maximus: Would you, Quintus? Would I?


After many, many lonely & unsuccessful hours dealing with Performance monitor, switch configurations, reloads, firmware, drivers & Windows we got results:

… "it's working" … "holy s* look at those numbers" …

On that dark day, in a scarcely illuminated room, in the faint glare of the monitors, even the CLI of the switches in PuTTY felt like a grim, cold place. But all that changed as the impressive results brightened up the day and made all the effort seem worthwhile. "Didier Victor," I thought as I looked away from the screen, "Once more."

But it has been a hard-won victory. And should you fight this battle? Well, let's discuss that a bit now that we've got your attention. RDMA is a learning process for many of us and neither InfiniBand, iWARP nor RoCE is the one that needs to win at this game. It's you, via the knowledge you'll gain working with RDMA technologies.

SMB Direct or SMB over RDMA comes in flavors

Infiniband (Mellanox)

That's been around for a while. It has a high cost associated with it (depends on where you come from) and also a psychological barrier to it. Try discussing buying 10Gbps versus InfiniBand with semi-technical managerial types. You'll know what I mean.

Deploying Windows Server 2012 with SMB Direct (SMB over RDMA) and the Mellanox ConnectX-2/ConnectX-3 using InfiniBand – Step by Step

iWARP (Chelsio / Intel)

RDMA but it’s TCP/IP offloaded to the card. It can leverage DCB but doesn’t require it.

Deploying Windows Server 2012 with SMB Direct (SMB over RDMA) and the Chelsio T4 cards using iWARP – Step by Step

RoCE (Mellanox)

"InfiniBand over Ethernet", so you "NEED" (no, not a real hard requirement) DCB with PFC/ETS (DCBx can be handy) for it to work best. No need for Congestion Notification as that's for TCP/IP, but it could be nice with iWARP (see above). Do note that you'll need to configure your switches for DCB and that's highly dependent on the vendor and even the type of switches.

Deploying Windows Server 2012 with SMB Direct (SMB over RDMA) and the Mellanox ConnectX-3 using 10GbE/40GbE RoCE – Step by Step

Here's an older overview of the RDMA flavors' pros & cons:

Please see Jose Barreto’s excellent work on explaining SMB 3.0 over RDMA in his presentations at SNIA, TechEd and on his blog.

While I know of two people in my network working with InfiniBand for SMB Direct and Windows Server 2012 (R2), most of us are doing 10Gbps. Pricing for InfiniBand has a bad reputation. Not because InfiniBand is super costly compared to 10/40Gbps (I'm told most people who ask for quotes are positively surprised), but when you can't afford a Porsche you're not shopping for a Ferrari either. Especially not when a mid-size sedan will serve all of your needs above and beyond the call of duty. On top of that you might have bought all that nice "converged network ready" 10Gbps gear some years ago. Some of us may be working towards 40Gbps but most are 10Gbps shops. My 40Gbps is "limited" to the interlinks & uplinks. Meaning that we either go for iWARP or RoCE.

RoCE or iWARP

Which one of those two is best? Well, the line is drawn between vendors. RoCE today equals Mellanox (yes, the InfiniBand vendor; RoCE is sometimes called "InfiniBand on layer 4 over Ethernet layer 2") and iWARP means Chelsio or Intel (their cards look a bit long in the tooth however).

You'll find comparisons by both vendors claiming superiority for varied reasons. Here's the Mellanox side http://www.mellanox.com/pdf/whitepapers/WP_RoCE_vs_iWARP.pdf & here's Chelsio's take http://www.chelsio.com/roce/ & http://www.moderntech.com.hk/sites/default/files/whitepaper/V09_iWAR_Summary_WP_0.pdf. It's good to look at your needs and map them. But I cannot declare a winner. I did notice that at least one vendor of SOFS/CiB uses iWARP. Is that a statement? And if so, about what? Price? Ease of use? Performance/Cost?

What I do find is that Chelsio is really hacking away at RoCE, as you can see here http://www.chelsio.com/wp-content/uploads/2011/05/RoCE-The-Grand-Experiment1.pdf, http://www.chelsio.com/roce-whitepaper/, http://www.chelsio.com/wp-content/uploads/2011/05/RoCE-FAQ-1204121.pdf. So that begs the question: are they right, or are they scared of RoCE because the InfiniBand boys are out to eat their lunch?

My take on this for now

iWARP is way easier to get started with, that's for sure. RoCE is firmware sensitive (NIC, switches) and driver sensitive (NIC). Configuring your switches (DCB) is usually followed by a reboot of that switch, so you might not do that so easily in production, and depending on where in the stack those switches live you really need Force10 VLT, Cisco vPC, Arista MLAG or independent redundant switches to get away with it. RoCE loves greenfield. Stacking, I hear you say? I don't like stacking at that spot of the stack as firmware updates will make you suffer through a single point of failure.

Disclaimer: RoCE in itself does not DEMAND/REQUIRE DCB, but the consensus is that it will work better, especially under heavy load. Whether SMB Direct over RoCE requires DCB is another question. For all practical purposes I'm working from the prerequisite that it does for a production environment. But as you can do RoCE RDMA between two NICs with no DCB switch in between, this indicates that the hard requirement for DCB is not there. Mind you, not using DCB might not be smart in regards to QoS & error handling (no TCP/IP goodness handling this for you). But I'm no expert on this subject. Paul Grun however is, and he's involved with RoCE at https://www.openfabrics.org/component/search/?searchword=Paul+grun&ordering=&searchphrase=all They tend to know their stuff. Read some of the comments below this article and you'll know a lot http://www.hpcwire.com/hpcwire/2010-04-22/roce_an_ethernet-infiniband_love_story.html But PFC isn't Valhalla either and some claim you can just forget about it and build non-blocking networks. I guess you could if your pockets are deep enough. And you might go a very long way without the need for RDMA. Many do … and when you talk to some network people & vendors they can't agree either, as everyone is on the same learning curve but from a different perspective. There is no one size fits all & it all depends.

iWarp doesn’t require DCB so you can get away with cheaper switches. Or, not so cheap switches that don’t support DCB (choose wisely). So cheaper switches is probably true on the low end. But, even very economically priced switches from DELL have good DCB support. Some other vendors who are more expensive don’t.

DCB is uncharted terrain for SMB Direct purposes & new to many of us. So if you want to do RDMA the easy way … go iWARP. As said, the use of DCB for PFC/ETS is not mandatory in that case; you'll get great results and it's easy. Mind you, you'll still be dabbling with DCB if you want to do lossless magic in the switches. Why, you say? Well, that "converged network" story makes it kind of interesting to do so, and PFC, DCBx/TLV is generic and can be leveraged for other things than iSCSI or FCoE. And for all practical purposes SMB 3.0 with SMB Direct is a storage protocol since Windows Server 2012 made it so (CSV). Or do you do DCB for iSCSI/FCoE & iWARP for SMB Direct? After all, there are only 2 lossless queues to be had. But hey, how many do you need? Choices, choices, and no vast pool of experienced practitioners yet.

iWARP routes; it's not bound to a single Ethernet broadcast domain. That could be useful info depending on your environment & needs. I'll note that I leverage RDMA for east-west traffic, not north-south, and as such this isn't an issue for me. The time that I do "Shared Nothing Live Migration" from on-premises to the cloud has not arrived yet.

The Mellanox cards in my neck of the woods were 35% cheaper than Chelsio (SFP+)

What about the scalability? “iWarp doesn’t scale that well” is stated left and right but I think that might often be based on older information. Chelsio makes a strong case for iWARP scalability. Especially when it comes to long distances, multiple hops & routing.

Again, your mileage may vary. But for “the smaller environments” who want to leverage RDMA with SMB 3.0 I’d say that iWarp is the easiest path to go & will do just fine. Now if you’re already into lossless Ethernet for iSCSI or working with FCoE you might have all the hardware you need & the experience to deal with DCB. The latter might not always be true however. Most people have Lossless Ethernet for iSCSI or FCoE set up by the vendor or consultants who’ll use well defined step by step guides. These do not exist for the RoCE variant of SMB 3.0 over RDMA.

The case for RoCE can be made as well. Some claim that a high volume of connections consumes memory when using iWARP, and that TCP's flow and reliability controls are less suited for large-scale datacenters & cloud deployments due to performance issues. Where iWARP does not do multicast, RoCE does, and that could be important to you.

So why did I or still do RoCE?

So why did I walk the walk? Basically because just talking the talk isn't enough. We considered it an investment in our education. DCB is not going away (the abstraction isn't there yet and won't be for a while) and we need to gain knowledge of it to both handle it and make informed decisions. By the way, once you go lossless you might leverage DCB/PFC with iWARP as well, just like you do for iSCSI to make it lossless (leveraging DCBx/TLV). Keep in mind that DCB is key in converged networking and as such deserves your attention. That's why I chose not to avoid it but gave battle. DCB is all over the place when it comes to converged networking (iSCSI, FCoE), so we need to learn the good, the bad and the ugly. Until that day when, perhaps, the hardware stack is so good, so powerful & has so much bandwidth that TCP/IP never needs its built-in protection against packet loss. Hmmmmmm, I remember people saying that about 10Gbps, but then they wanted to send everything over 2*10Gbps pipes and it became an issue again.

It's early days yet, but you have to give credit to Microsoft for getting RDMA/DCB on the radar screen of more of the world's virtualization & storage admins than ever before. It's not a well-established segment yet and it will be interesting to see how this all turns out. I do know that now that I've figured out a thing or two about RoCE, I won't be intimidated & won't make choices out of fear. And do remember that if you have plenty of idle CPU cycles & 10Gbps you might not even need RDMA. The value for me and my employers is the knowledge gained. DCB has its role to play but we'll leverage iWARP or RoCE without a preference. Today you have 2 choices. RoCE is the newer one while iWARP has been around longer, and both have avid proponents it seems.

I know one thing. If you need or want RDMA in any existing 10Gbps environment with minimal effort & no risk to existing switch infrastructure, you’ll use iWarp it seems.

Epilogue

You sit there staring at a truckload of VMs with 120GB of memory assigned in total being evacuated in +/- 70 seconds, while doing a Shared Nothing Live Migration between the same hosts and without consuming CPU load … and you have DCB for SMB 3.0 running on your switches … Yes!


Remember, "What we do in life echoes in eternity." You might think now that I'm a bit nutty, but I assure you that in my quest to find someone who had hands-on experience configuring DCB on switches for SMB Direct with RoCE, I had to turn to myself as no one seems to have done it. I'll be sharing more info on our setup and configurations in the future. Once you wrap your head around the concepts, you understand why things are done and how. Therein lies the value for me.

TechEd 2013 Revelations for Storage Vendors as the Future of Storage lies With Windows 2012 R2


Imagine you're a storage vendor until a few years ago. Raking in the big money with profit margins unseen by any other hardware in the past decade and living it up in dreams along the Las Vegas Boulevard like there is no tomorrow. To describe your days only a continuous "WEEEEEEEEEEEEEE" will suffice.


Trying to make it through the economic recession with fewer Ferraris has been tough enough. Then in August 2012 Windows Server 2012 RTMs and introduces Storage Spaces, SMB 3.0 and Hyper-V Replica. You dismiss those as toy solutions while demos of a few hundred thousand to more than a million IOPS on the cheap, with a couple of Windows boxes and some alternative storage configurations, pop up left and right. Not even a year later Windows Server 2012 R2 is unveiled and guess what? The picture below is what your future dreams as a storage vendor could start to look like more and more every day, while an ice-cold voice sends shivers down your spine.


“And I looked, and behold a pale horse: and his name that sat on him was Death, and Hell followed with him.”

OK, the theatrics above got your attention I hope. If Microsoft keeps up this pace, traditional OEM storage vendors will need to improve their value offerings. My advice to all OEMs is to embrace SMB 3.0 & Storage Spaces. If you're not going to help and deliver it to your customers, someone else will. Sure, it might eat at the profit margins of some of your current offerings. But perhaps those are too expensive for what they need to deliver, and people buy them as there are no alternatives. Or perhaps they just don't buy anything as the economics are out of whack. Well, alternatives have arrived, and more than that: this also paves the path for projects that were previously economically unfeasible. So that's a whole new market to explore. Will the OEM vendors act & do what's right? I hope so. They have the distribution & support channels already in place. It's not a threat, it's an opportunity! Change is upon us.

What do we have in front of us today?

  • Read cache? We got it, it's called CSV Cache.
  • Write cache? We got it: shared SSDs in Storage Spaces.
  • Storage tiering? We got it in Storage Spaces.
  • Extremely good data protection, even against bit rot, and on-the-fly repairs of corrupt data without missing a beat. Let me introduce you to ReFS in combination with Storage Spaces, now available for clustering & CSVs.
  • Affordable storage, both in capacity and performance … again, meet Storage Spaces.
  • UNMAP down to the storage level. Storage Spaces has this already in Windows Server 2012.
  • Controllers? Are there still SAN vendors not using SAS for storage connectivity between disk bays and controllers?
  • Host connectivity? RDMA baby. iWARP, RoCE, InfiniBand. That PCIe 3 slot better move on to 4 if it doesn't want to melt under the IOPS …
  • Storage fabric? Hello 10Gbps (and better) at a fraction of the cost of ridiculously expensive Fibre Channel switches and at amazingly better performance.
  • Easy to provision and manage storage? SMB 3.0 shares.
  • Scale up & scale out? SMB 3.0 SOFS & the CSV network.
  • Protection against disk bay failure? Yes, Storage Spaces has this and it's not space-inefficient either. Heck, some SAN vendors don't even offer this.
  • Delegation capabilities for storage administration? Check!
  • Easy in-guest clustering? Yes, via SMB 3.0 but now also shared VHDX! That's a biggie, people!
  • Hyper-V Replica = free, cheap, effective and easy.
  • Total VM mobility in the data center, so SAN-based solutions become less important. We've broken out of the storage silos.

You can’t seriously mean the “Windoze Server” can replace a custom designed SAN?

Let's say that it's true and it isn't as optimized as a dedicated storage appliance. So what? Add another 10 commodity SSD units at the cost of one OEM SSD and make your storage fly. Windows Server 2012 can handle the IOPS, the CPU cycles and the memory demands in both capacity and speed, together with network performance that scales beyond what most people need. I've talked about this before in Some Thoughts Buying State Of The Art Storage Solutions Anno 2012. The hardware is a commodity today. What if Windows can do, and does, the software part? That will wake a storage vendor up in the morning!

Whilst not perfect yet, all Microsoft has to do is develop Hyper-V Replica further. Together with developing snapshotting & replication capabilities in Storage Spaces this would make for a very cost-effective and complete solution for backups & disaster recovery. Cheaper & cheaper 10Gbps makes this feasible. SAN vendors today have another bonus left: ODX. How long will that last? ASICs you say? Cool gear, but at what cost when parallelism & x64 8-core CPUs are the standard and very cheap. My bet is that Microsoft will not stop here but come back to throw some dirt on part of the classic storage world's coffin in vNext. Listen, I know about the fancy replication mechanisms, but in a virtualized data center the mobility of VMs over the network is a fact. 10Gbps, 40Gbps, RDMA & Multichannel in SMB 3.0 put this in our hands. Next to that, application-level replication is gaining more and more traction and many apps are providing high availability in a "shared nothing" fashion (SQL/Exchange with their database availability groups, Hyper-V, R-DFS, …). The need for the storage to provide replication for many scenarios is diminishing. Alternatives are here. Less visible than Microsoft, but there are others who know there are better economics to storage: http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/.

The days when storage vendors offered 85% discounts on hopelessly overpriced storage and still made a killing and a Las Vegas trip are ending. Partners and resellers who just grab 8% of that (and hence benefit from overselling as much as possible) will learn, just like with servers and switches, that they can't keep milking that cash cow forever. They need to add true and tangible value. I've said it before: too many VARs have left out the VA for too long now. Hint: the more they state they are not box movers, the bigger the risk that they are. True advisors are discussing solutions & designs. We need that money to invest in our dynamic "cloud"-like data centers, where the ROI is better. Trust me, no one will starve to death because of this; we'll all still make a living. SANs are not dead. But their role & position is changing. The storage market is in flux right now and I'm very interested in what will happen over the next years.

Am I a consultant trying to sell Windows Server 2012 R2 & System Center? No, I'm a customer. The kind you can't sell to that easily. It's my money & livelihood on the line and I demand Windows Server 2012 (R2) solutions to get me the best bang for the buck. Will you deliver them and make money by adding value, or do you want to stay in the denial phase? Ladies & gentlemen storage vendors, this is your wake-up call. If you really want to know for whom the bell is tolling, it tolls for thee. There will be a reckoning and either you'll embrace these new technologies to serve your customers or they'll get their needs served elsewhere. Banking on customers to be and remain clueless is risky. The interest in Storage Spaces is out there and it's growing fast. I know several people actively working on solutions & projects.



You like what you see? Sure IOPS are not the end game and a bit of a “simplistic” way to look at storage performance but that goes for all marketing spin from all vendors.


Can anyone ruin this party? Yes, Microsoft themselves perhaps, if they focus too much on delivering this technology only to the hosting and cloud providers. If on the other hand they make sure there are feasible, realistic and easy channels to get it into the hands of "on premise" customers all over the globe, it will work. Established OEMs could be that channel, but by the looks of it they're in denial and might cling to the past hoping things won't change. That would be a big mistake, as embracing this trend will open up new opportunities, not just threaten existing models. Asia Pacific is just one region that is full of eager businesses with no vested interest in keeping the status quo. Perhaps this is something to consider? And for the record, I do buy and use SANs (high-end, mid-market, or simple shared storage). Why? It depends on the needs & the budget. Storage Spaces can help balance those even better.

Is this too risky? No, start small and gain experience with it. It won't break the bank but might deliver great benefits. And if not … there are a lot of storage options out there, don't worry. So go on.

Verifying SMB 3.0 Multichannel/RDMA Is Working In Windows Server 2012 (R2)


So you have spent some money on RDMA cards (RoCE in this example), spent even more money on 10Gbps switches with DCB capabilities and, last but not least, you have struggled for many hours to get PFC, ETS, … configured. So now you'd like to see that your hard work has paid off; you want to see that RDMA power that SMB 3.0 leverages in action. How?

You could just copy files and look at the speed, but when you have sufficient bandwidth and the limiting factor is disk IO, for example, how would you know? Well, let's have a look below.

You can take a look at Performance Monitor for RDMA-specific counters like "RDMA Activity" and "SMB Direct Connection".
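
If you'd rather use PowerShell than the Performance Monitor GUI, a quick sketch like this also tells you whether the NICs and the SMB client actually report RDMA capability (counter names as they appear in Performance Monitor on my systems):

# Is RDMA enabled on the adapters?
Get-NetAdapterRdma | Format-Table Name, Enabled
# Does the SMB client see RDMA-capable interfaces?
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable, RssCapable, LinkSpeed
# Sample the RDMA Activity counters while a copy is running
Get-Counter '\RDMA Activity(*)\RDMA Inbound Bytes/sec','\RDMA Activity(*)\RDMA Outbound Bytes/sec'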

Whilst copying six 3.4GB ISO files over the RDMA connection we see a speed of 1.05 GB/s. Not too shabby. But hey, nothing a good 10Gbps NIC with TCP/IP can't handle under the right conditions.

It's the RDMA counters in Performance Monitor that show us the traffic that is going via SMB Direct.

Another giveaway that RDMA is in play comes from Task Manager's performance view for the RDMA NIC: 1.3Mbps of send traffic can't possibly give us 1.05GB/s in copy speed magically.


When you run netstat -xan (instead of the usual -an) you get to see the RDMA connections. The mode is "Kernel" instead of the usual "TCP" or "UDP" that -an shows for TCP/IP connections/listeners.


If you want to go all geeky, there is an event log where you can look at RDMA events, amongst others. Jose Barreto discusses this in Deploying Windows Server 2012 with SMB Direct (SMB over RDMA) and the Mellanox ConnectX-3 using 10GbE/40GbE RoCE – Step by Step, with instructions on how to use it. You'll need to go to Event Viewer. On the menu, select "View" and then "Show Analytic and Debug Logs". Expand the tree on the left: Applications and Services Logs, Microsoft, Windows, SMB Client, ObjectStateDiagnostic. On the "Actions" pane on the right, select "Enable Log". You then run your RDMA workload, and then disable the log to view the events. Some filtering & PowerShell might come in handy to comb through them.
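
If you'd rather do this from the command line, something along these lines should work (the log name below is the one Jose's post refers to; double-check it on your build):

# Enable the analytic log, run your RDMA workload, then disable it again
wevtutil sl Microsoft-Windows-SmbClient/ObjectStateDiagnostic /e:true
wevtutil sl Microsoft-Windows-SmbClient/ObjectStateDiagnostic /e:false
# Analytic logs have to be read with -Oldest; filter for RDMA-related events
Get-WinEvent -LogName Microsoft-Windows-SmbClient/ObjectStateDiagnostic -Oldest |
    Where-Object { $_.Message -match 'RDMA' } |
    Format-List TimeCreated, Id, Message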

Complete VM Mobility Across The Data Center with SMB 3.0, RDMA, Multichannel & Windows Server 2012 (R2)


Introduction

The moment I figured out that Storage Live Migration (in certain scenarios) and Shared Nothing Live Migration leverage SMB 3.0, and as such Multichannel and RDMA, in Windows Server 2012, I was hooked. I just couldn't let go of the concept of leveraging RDMA for those scenarios. Let me show you the value of my current favorite network design for some demanding Hyper-V environments. I was challenged a couple of times on the cost/port of this design, which is, when you really think of it, a very myopic way of calculating TCO/ROI. Really, it is. And this week at TechEd North America 2013 Microsoft announced that all types of Live Migration support Multichannel & RDMA (next to compression) in Windows Server 2012 R2. Watch that in action at minute 39 over here at Understanding the Hyper-V over SMB Scenario, Configurations, and End-to-End Performance. You should have seen the smile on my face when I heard that one! Yes, standard Live Migration now uses multiple NICs (no teaming) and RDMA for lightning-fast VM mobility & storage traffic. People, you will hit the speed boundaries of DDR3 memory with this! The TCO/ROI of our plans just became even better; just watch the session.

So why might I use more than two 10Gbps NIC ports in a team with converged networking for Hyper-V in Windows 2012? That's a great solution for sure, and a combined bandwidth of 2*10Gbps is more than what a lot of people have right now and can handle a serious workload. So don't get me wrong, I like that solution. But sometimes more is asked for, and warranted, depending on your environment.

The reason for this is shown in the picture below. Today there is no more limit on the VM mobility within a data center. This will only become more common in the future.


This is not just a wet dream of virtualization engineers, it serves some very real needs. Of course it does. Otherwise I would not spend the money. It consumes extra 10Gbps ports on the network switches, which need to be redundant as well, and you need to have 10Gbps RDMA-capable cards and DCB-capable switches. So why this investment? Well, I'm designing for very flexible and dynamic environments that have certain demands laid down by the business. Let's have a look at those.

The Road to Continuous Availability

All maintenance operations, troubleshooting and even upgrades/migrations should be done with minimal impact to the business. This means that we need to build for high to continuous availability where practical and make sure performance doesn't suffer too much, not noticeably anyway. That's where the capability to live migrate virtual machines off a host, clustered or not, rapidly and efficiently, with a minimal impact to the workload on the hosts involved, comes into play.

Dynamic environments won't tolerate downtime

We also want to leverage our resources where and when they are needed the most. And the infrastructure for the above can also be leveraged for that. Storage Live Migration and even Shared Nothing Live Migration can be used to place virtual machine workloads where they get the resources they need. You could see this as (dynamically) optimizing the workload both within and across clusters or amongst standalone Hyper-V nodes. This could be to an SSD-only storage array or a smaller but very powerful node or cluster in regards to CPU, memory and disk IO. This can be useful in those scenarios where scientific applications, number crunching or IOPS-intensive software or the like needs them, but only at certain times and not permanently.

Future proofing for future storage designs

Maybe you're an old-time Fibre Channel user, or iSCSI rules your current data center and Windows Server 2012 has not changed that. But that doesn't mean it will not come. The option of using a Scale-Out File Server and leveraging SMB 3.0 file shares to provide storage for Hyper-V deployments is a very attractive one in many aspects. And if you build the network as I'm doing, you're ready to switch to SMB 3.0 without missing a heartbeat. If you were to deplete the bandwidth that x number of 10Gbps ports can offer, no worries, you'll either use 40Gbps and up or InfiniBand. If you don't want to go there … well, since you just dumped iSCSI or FC you have room for some more 10Gbps ports.

Future proofing performance demands

Solutions tend to stay in place longer than envisioned, and if you need some longevity and a stable, standard way of doing networking, here it is. It's not the most economical way of doing things but it's not as cost prohibitive as you think. Recently I was confronted again with some of the insanities of enterprise IT. A couple of network architects costing a hefty daily rate stated that 1Gbps is only for the data center and not the desktop, while even arguing about the cost of some fiber cable versus RJ45 (CAT5E). Well, let's look beyond the north-south traffic and the cost of aggregating bandwidth all the way up the stack, shall we? Let me tell you that the money spent on such advisers can buy you 10Gbps capabilities in the server room or data center (and some 1Gbps for the desktops to go) if you shop around and negotiate well. The one-size-fits-all and the ridiculous economies-of-scale "to make it affordable" arguments in big central IT are not always the best fit in helping the customers. Think a little bit outside of the box, please, and don't say no out of habit or laziness!

Conclusion

In some future blog post(s) we'll take a look at what such a network design might look like and why. There is no one size fits all, but there are not too many permutations either. In our latest efforts we were specifically looking into making sure that a single rack failure would not bring down a cluster. So when thinking of the rack as a failure domain we need to spread the cluster nodes across multiple racks in different rows. That means we need the network to provide the connectivity & capability to support this, but more on that later.

SMB Direct RoCE Does Not Work Without DCB/PFC


Introduction

SMB Direct RoCE does not work without DCB/PFC. "Yes", you say, "we know, this is well documented. Thank you." But before you sign off, hear me out.

Recently I plugged two RoCE cards into some test servers and linked them to a couple of 10Gbps switches. I did some quick large file copy testing and to my big surprise RDMA kicked in with stellar performance even before I had installed the DCB feature, let alone configured it. So what's the deal here? Does it work without DCB? Does the card fall back to iWARP? Highly unlikely. I was expecting it to fall back to plain vanilla 10Gbps with RDMA not being used at all, but it was being used. A short shout-out to Jose Barreto to discuss this helped clarify things.

DCB/PFC is a requirement for RoCE

The busier the network gets, the faster the performance will drop. Now, in our test scenario we had two servers for a total of 4 RoCE ports on a network consisting of a couple of beefy 48-port 10Gbps switches. So we didn't see the negative results of this here.

DCB (Data Center Bridging) and Priority Flow Control are considered a requirement for any kind of RoCE deployment. RDMA with RoCE operates at the Ethernet layer. That means there is no overhead from TCP/IP, which is great for performance. This is the reason you want to use RDMA, actually. It also means it's left on its own to deal with Ethernet-level collisions and errors. For that it needs DCB/PFC, otherwise you'll run into performance issues due to a ton of retries at the higher network layers.

The reason that iWARP doesn't require DCB/PFC is that it works at the TCP/IP level, also offloaded, by using a TCP/IP stack on the NIC instead of in the OS. So errors are handled by TCP/IP, at a cost: iWARP gives the same benefits as RoCE but it doesn't scale as well. Not that iWARP performance is lousy, far from it! Mind you, for bandwidth management reasons you'd be better off using DCB or some form of QoS as well.

Conclusion

So no, not configuring DCB on your servers and the switches isn't an option, but apparently it isn't blocked either, so beware of this. It might appear to be working fine but it's a bad idea. Also, don't think it falls back to iWARP mode; it doesn't, as one card does one thing, not both. There is no shortcut. RoCE RDMA does not work error free out of the box, so you do have to install the DCB feature and configure it together with the switches.
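
For reference, the server-side part of that work boils down to something like the sketch below. Priority 3 and the 50% ETS share are common example values, not gospel, and your switch configuration has to match whatever you pick here:

# Install the DCB feature
Install-WindowsFeature Data-Center-Bridging
# Tag SMB Direct traffic (TCP port 445) with priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
# Enable Priority Flow Control only for that priority
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
# Reserve bandwidth for SMB via ETS and don't let the switch overrule the host settings (DCBx)
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Set-NetQosDcbxSetting -Willing $false
# Apply QoS/DCB on the RDMA-capable NICs (interface names are examples)
Enable-NetAdapterQos -Name "SLOT 6 Port 1","SLOT 6 Port 2"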

SMB 3.0 Multichannel Auto Configuration In Action With RDMA / SMB Direct


Most of you might remember this slide by Jose Barreto on SMB Multichannel auto configuration from one of his many presentations:

  • Auto configuration looks at NIC type/speed => Same NICs are used for RDMA/Multichannel (doesn’t mix 10Gbps/1Gbps, RDMA/non-RDMA)
  • Let the algorithms work before you decide to intervene
  • Choose adapters wisely for their function

You can fine tune things if and when needed (only do this when this is really the case) but let’s look at this feature in action.

So let's look at this in real life. For this test we have 2 * Intel X520 DA 10Gbps ports using 10.10.180.8X/24 IP addresses and 2 * Mellanox 10Gbps RDMA adapters with 10.10.180.9X/24 IP addresses. No teaming involved, just multiple NIC ports. Do note that these IP addresses are on a different subnet than the LAN of the servers. Basically only the servers can communicate over them; they don't have a gateway or DNS servers and as such are not registered in DNS either (life is easy for simple file sharing).


Let's try and copy a 50GB fixed VHDX file from server1 to server2 using the DNS name of the target host (pixelated), meaning it will resolve to that host via DNS and use the LAN IP address 10.10.100.92/16 (the host name is greyed out). In the screenshot below you see that the two RDMA-capable cards are put into action. The servers are not using the 1Gbps LAN connection. Multichannel looked at the options:

  • A 1Gbps RSS capable Link
  • Two 10Gbps RSS capable Links
  • Two 10Gbps RDMA capable links

Multichannel concluded the RDMA cards are the best ones available and, as we have two of those, it uses both. In other words, it works just like described.
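
You don't have to rely on the screenshots alone; while the copy is running you can ask SMB itself which interface pairs it picked, for example (run on the client):

# Shows which client/server interface pairs SMB Multichannel selected and whether they are RSS/RDMA capable
Get-SmbMultichannelConnection | Format-Table ServerName, ClientIpAddress, ServerIpAddress, ClientRSSCapable, ClientRdmaCapable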


Even if we try to bypass DNS and copy the files explicitly via the IP address (10.10.180.84) assigned to the Intel X520 DA cards, Multichannel intelligence detects that it has two better cards available that provide RDMA and, as you can see, it uses the same NICs as in the demo before. Nifty, isn't it?


If you want to see the other NICs in action we can disable the Mellanox cards and then Multichannel will choose the two X520 DA cards. That's fine for testing, but in real life you need a better solution when you need to manually define which NICs can be used. This is done using PowerShell (take a look at Jose Barreto's blog The basics of SMB PowerShell, a feature of Windows Server 2012 and SMB 3.0 for more info).

New-SmbMultichannelConstraint -ServerName SERVER2 -InterfaceAlias "SLOT 6 Port 1", "SLOT 6 Port 2"

This tells the client it can only use these two NICs, which in this example are the two Intel X520 DA 10Gbps cards, to access Server2. So basically you configure/tell the client what to use for SMB 3.0 traffic to a certain server. Note the difference in send/receive traffic between RDMA and native 10Gbps.

On Server1, the client, you see this:


On Server2, the server, you see this:


Which is indeed the constraint set up as we can verify with:

Get-SmbMultichannelConstraint


We’re done playing so let’s clean up all the constraints:

Get-SmbMultichannelConstraint | Remove-SmbMultichannelConstraint


Seeing this technology, it's now up to the storage industry to provide the needed capacity and IOPS in a lot more affordable way. Storage Spaces has knocked on your door; that was the wake-up call. In an environment where we throw lots of data around we just love SMB 3.0.

Shared Nothing Live Migration Leverages SMB 3.0 Under the Hood


Shared Nothing Live Migration

By now most of you must have heard about the Shared Nothing Live Migration capabilities introduced with Windows Server 2012 Hyper-V. If not I suggest you check it out over here and then come back here for some extra insights in how it works.

Shared Nothing Live Migration is not magic however. It is made possible by the fact that it relies on some of the new capabilities SMB 3.0 in Windows Server 2012 brought us. Once you know this, you also realize that this can be quite fast. The reason for this is that you can design your network for Shared Nothing Live Migration with 10Gbps or higher, Multichannel and RDMA for unprecedented throughput. Yup, if you invest in setting up networking right, the remaining bottleneck might be the amount of storage IO you can handle whilst reading from the source and writing to the target, or the CPU load you put on your host. Windows will protect you from draining your host beyond reason, by the way.

Making Shared Nothing Live Migration Work

You need to set it up of course, and do it right. Here's a list of steps you need to do / check on every Hyper-V host involved (a PowerShell sketch follows the list):

  1. Enable incoming and outgoing live migrations on all involved Hyper-V hosts, otherwise it will not work. If your hosts are part of a cluster this is taken care of for you.
  2. Select an authentication protocol (CredSSP or Kerberos).
    Kerberos authentication allows you to Live Migrate VMs without having to log in to the source host itself. Kerberos authentication does require you to configure constrained delegation in Active Directory (don't go for "Trust this computer for delegation to any services"; follow the principle of least privilege).
  3. Set the number of simultaneous Live Migrations. Experiment a little to find the best value for your environment.
  4. Set the network(s) for incoming Live Migrations. It's best to design this and not just use any network.
See Keith Mayer’s excellent blog for more details.

Constrained Delegation

Shared Nothing Live Migration needs some prep work security-wise before it will work. In Active Directory you need to set up some constrained delegation permissions. To some people the concept of constrained delegation is brand new, but if you've been deploying multi-tiered web applications in your environment before, this is a cookie you've dealt with many times. It's the same approach you need to get a web client using Windows Authentication to talk via an IIS web app or service to a SQL Server database and/or read file data from somewhere; you've configured this plenty of times.

Use an account to perform the Shared Nothing Live Migration that has administrator privileges on all computers that are involved. While you can use groups in AD to make your life and permission management easier when it comes to granting share permissions & NTFS rights on folders, it doesn't work that way with constrained delegation. Groups cannot be used here, so you'll need to use individual accounts. PowerShell scripting can help lessen the work if you have many Hyper-V hosts involved. In large environments (up to 64 nodes!) this inundates the constrained delegation tab with computer names, so PowerShell really is your friend here.

On each computer object you need to set the delegation permissions for the CIFS and the Microsoft Virtual System Migration Service services to all other computers you want to involve in Shared Nothing Live Migration as a source or a target.
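
To give you an idea, such a script could look like the sketch below. The host names and the domain suffix are made up, and it writes the msDS-AllowedToDelegateTo attribute directly, so test it in a lab and check it against your AD practices first:

Import-Module ActiveDirectory
# Hypothetical Hyper-V hosts taking part in Shared Nothing Live Migration
$hvHosts = 'HV01','HV02','HV03'
$domain  = 'domain.local'
foreach ($h in $hvHosts) {
    # CIFS and Microsoft Virtual System Migration Service entries for all *other* hosts
    $delegateTo = foreach ($target in ($hvHosts | Where-Object { $_ -ne $h })) {
        "cifs/$target", "cifs/$target.$domain"
        "Microsoft Virtual System Migration Service/$target"
        "Microsoft Virtual System Migration Service/$target.$domain"
    }
    $dn = (Get-ADComputer -Identity $h).DistinguishedName
    # -Add appends values, so run this once per host (or use -Replace to overwrite)
    Set-ADObject -Identity $dn -Add @{ 'msDS-AllowedToDelegateTo' = $delegateTo }
}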

IMPORTANT! Hey, why do we need CIFS constrained delegation here? Well, indeed, because Shared Nothing Live Migration under the hood leverages SMB 3.0. It creates a temporary file share on the target to get the job done! So once you realize that Shared Nothing Live Migration uses SMB 3.0 shares to do its magic, it becomes obvious why these constrained delegation permissions for CIFS in Active Directory are needed.

Visualizing the SMB 3.0 share in action

At the source server (ZULU), after starting the Shared Nothing Live Migration, we can see that we have a connection to a share on the target server. That share is named after the source server with an ID like ZULU.3341302342$. So it's a hidden share.

 

On the target server we run Get-SmbSession | fl and see that the source computer indeed has two sessions open on the target server.


 

Let's see if a share has been created, using Get-SmbShare on the target. Yes, there is:


 

In Computer Management it shows up like this on the target server:


In Explorer you can see this as a $VSM$ folder in the root of C:, which has a subfolder with the name of the source server and an ID like ZULU.2541288334$. This subfolder is shared (hidden) and contains a shortcut to the volume where the selected target folder resides; this could be C: or D: on local storage (DAS), shared storage (CSV) or an SMB 3.0 share as well. In the screenshot below the folder doesn't match up with the share name as they were taken from different Shared Nothing Live Migrations.


Security-wise we're supposed to keep our hands off, and the security settings reflect this. But if you take ownership you can go peek at what's in there, when writing a blog post for example. We indeed saw the copied disk size of the VM being live migrated increase in the selected target folder.



Conclusion

I find it pretty cool to see how this all works under the hood. I hope you found this educational and interesting as well. It's a testimonial to how SMB 3.0 can be leveraged for all kinds of interesting scenarios.

Windows Server Backup Benefits from Improvements in Windows Server 2012


Introduction

In certain environments we back up VMs and any remaining physical hosts using Windows Server Backup. Before you all think this is ridiculous, I advise you to think again. With some automation you can build a very reliable agentless backup solution with the built-in functionality. Windows Server 2012 brings good news for the smaller & perhaps low-budget environments: Windows Server Backup is now capable of doing host-level backups of the Hyper-V guests stored on Cluster Shared Volumes. This was not the case in Windows 2008 R2 and it is a vast improvement. This change is due to the fact that CSV has been changed to no longer require a specialized API capable of dealing with its intricacies. All backup products can now back up CSVs without specialized APIs.

This is linked to huge improvements in how a CSV behaves during a backup. In the past, when you started a backup, the CSV ownership would be moved to the node that runs the backup and all access by other nodes was in redirected mode for the duration of the backup, unless you used a hardware VSS provider, and those were not trouble free either. If your backup software did not understand CSVs and use the CSV APIs you were out of luck. From Windows Server 2012 on, you are only in redirected I/O mode for the time it takes to create the VSS snapshot. For the rest of the backup duration your nodes access the CSV disk in direct mode. So, back to Windows Server Backup. You cannot back up the CSV as a disk volume, but you can select Hyper-V from the items to include in the backup.


That will show all VMs running on that host, meaning you cannot back up VMs running on another host. Compared to Windows Server 2008 R2, where using the native Windows backups with VMs on a CSV LUN meant using in-guest backups, this is a major improvement.
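
For the PowerShell minded, the WindowsServerBackup module exposes the same choice. Something along these lines should work from Windows Server 2012 R2 on (verify the cmdlets on your version; the VM names and the backup target are just examples):

Import-Module WindowsServerBackup
# Build a one-off policy that includes specific VMs running on this host
$policy = New-WBPolicy
$vms = Get-WBVirtualMachine | Where-Object { $_.VMName -in 'DC01','FS01' }
Add-WBVirtualMachine -Policy $policy -VirtualMachine $vms
# Back up to a dedicated local volume and run it
Add-WBBackupTarget -Policy $policy -Target (New-WBBackupTarget -VolumePath 'E:')
Start-WBBackup -Policy $policy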

Some Approaches to Using Windows Server Backup

Sometimes we run the backups to local disk and regularly copy those off to a file share. This has the benefit of providing the backup versioning you get from using a local disk. The drawback is that the backups can be rather big.

In VMs, that is with backups in the guest, we run those backups to a file share over a 1Gbps management network. Performance is good, but it leaves us with the issue that there is no versioning.

For that reason our backup script copies the entire backup folder for a server to an archive folder on that same JBOD. Depending on how much space you have and need, you can configure the retention time of these older backups. This way you can keep a large number of backups over time. A script runs every day that deletes the older backups based on the chosen retention time so you don't run out of space.
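
The retention part of such a script is essentially just this kind of logic (the path and the 30-day retention are made-up examples):

# Delete archived backup folders older than the retention period
$archiveRoot   = 'E:\BackupArchive'
$retentionDays = 30
$cutoff = (Get-Date).AddDays(-$retentionDays)
Get-ChildItem -Path $archiveRoot -Directory |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    Remove-Item -Recurse -Force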

There is one way around the lack of versioning when writing backups to a share, and that is to mount a VHD residing on a file share locally on the host you are backing up, or to use a pass-through disk inside the VM. While you can get away with this, it becomes rather messy due to the management & flexibility drawbacks of pass-through disks. Mounting a VHD on a file share inside a VM is also a performance issue. So while possible and viable in certain scenarios, I don't use this for more than a few hosts and those are physical ones.

We had hoped that Windows Server 2012, with its support for VSS snapshots on SMB 3.0 shares, would have enabled backup versioning in Windows Server Backup, just like it can do for backups to local disk. Unfortunately this is not the case. You'll still get the same warning when backing up from a Windows Server 2012 host to an SMB 3.0 share as you used to get with previous versions:


How fast are backups and restores?

The largest environment is a couple of physical servers, a two-node Hyper-V cluster and about 22 virtual machines. That includes some with a larger amount of data, the biggest being 400 GB. Full backups are run weekly at night on all servers over 1Gbps and this works just fine.

We can back up a VM with a 50GB VHD (about 50% to 75% in use) and copy that backup to the archive folder in 20 minutes. We back up AND copy to archive a VM's C: drive (20GB of data) and D: drive (190GB of data), 2 separate VHDs, in 2.5 hours.

For some statistics: a bare metal restore of a VM or physical host over 1Gbps, with a single VHD or volume holding the OS, applications and some data, takes us 30 to 35 minutes in real life due to the overhead of setting a new VM up. If you just want to restore individual data you can do that as well. You can even mount the backup VHD and recover files via Windows Explorer.

This is what the logging we do and e-mail to the sysadmins looks like:


Where else do the Windows Server 2012 improvements help?

Data Deduplication

Now there is some good news. We ran ddpeval.exe against the JBOD LUNs where we store these archived backups and got some great results. We also copied such an archive folder to a Windows Server 2012 host and ran data deduplication against it. In that test we achieved an 84%-85% deduplication rate, depending on how many versions of the backups we archive and what the delta is during that time frame. The latter is important. If we run dedupe only against domain controller backup archives we get up to 94%. Deduplication should not impact restore performance too much because in 99% of the cases you revert to the last backup, which sits in the WindowsImageBackup folder. Only if you need older backups will you work against the deduped files, unless the archive folder is on the same LUN as the original backups. I don't have real-life info about restores yet, just a small lab test.
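
Turning deduplication on for such an archive volume is a small job in itself. A sketch, assuming the archive lives on an E: volume and you want files older than a few days optimized:

# Install the deduplication role service
Install-WindowsFeature FS-Data-Deduplication
# Enable dedup on the archive volume and only optimize files older than 3 days
Enable-DedupVolume -Volume 'E:'
Set-DedupVolume -Volume 'E:' -MinimumFileAgeDays 3
# Kick off an optimization job and check the savings afterwards
Start-DedupJob -Volume 'E:' -Type Optimization
Get-DedupStatus -Volume 'E:'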

SMB 3.0 & Multichannel

In Windows Server 2012 you also get all benefits of SMB 3.0 and MultiChannel for your backup traffic.

Not your grandfather's ChkDsk

The vastly improved ChkDsk is a comfort for worried minds when it comes to fixing potential corruption on a large LUN. Last but not least, the FLUSH command in NTFS makes using cheaper data disks safer.

VHDX

The VHDX format allows for 64TB. That means your Windows Server Backup can now handle more than 2TB LUNs. This should be adequate.

Conclusion

With some creativity and automation via scripting you can leverage Windows Server Backup as a nice and flexible solution. Although I feel that backup versioning to a file share is an improvement that is still missing, the new features help out a lot and, all in all, it's not bad at all! So you see, even smaller organizations can benefit seriously from Windows Server 2012 and get more bang for their buck.