DELL Enterprise Forum EMEA 2014 in Frankfurt

As you might have noticed on Twitter, I was in Frankfurt last week to attend DELL Enterprise Forum EMEA 2014. It was a great conference and very worthwhile going to. It was a week of multi-way communication between vendor, marketing, engineering, partners and customers. I learned a lot. And I gave a lot of feedback. As a Dell TechCenter Rockstar and a Microsoft MVP in Hyper-V I can build bridges to make sure both worlds understand each other better and we, the customers, get our needs served better.

Dell Enterprise Forum EMEA 2014 - Frankfurt

I’m happy I managed to go and I have some people to thank for me being able to grab this opportunity:

  • I cleared the time with my employer. This is great, this is a win-win situation, and I invested weekend time & extra hours for both my employer and myself.
  • I got an invite for the customer storage council where we learned a lot and got ample opportunity to give honest and constructive feedback directly to the people that need to hear it! Awesome.
  • The DELL TechCenter Rockstar program very generously invited me to come over at zero cost for the Enterprise Forum. Which is great and helped my employer and myself out. So, thank you so much for helping me attend. Does this color my judgment? 100% pure objectivity does not exist, but the ones who know me also know I communicate openly and directly. Look, I’ve never written positive reviews for money or kickbacks. I do not have sponsoring on my blog, even if that could help pay for conferences, travel expenses or lab equipment. Some say I should, but for now I don’t. I speak my mind and I have been a long-term DELL customer for some very good reasons. They deliver the best value for money with great support in a better way and model than others out there. I was sharing this info way before I became a Rockstar and they know that I tell the good, the bad and the ugly. They can handle it and know how to leverage feedback better than many out there.
  • Stijn Depril (@sdepril), Technical Datacenter Sales at RealDolmen, gave me a ride to Frankfurt and back home. Very nice of him and a big thank you for doing so. He didn’t have to and I’m not a customer of theirs. Thanks buddy, I appreciate it and it was interesting to learn the partner’s view on things during the drive there and back. Techies will always be checking out gear …

Dell Enterprise Forum EMEA 2014 - Frankfurt

What did all this result in? Loads of discussion, learning and sharing about storage, networking, compute, cloud, futures and community in IT. It was an 18-hour-per-day technology fest in a very nice and well-arranged fashion.

I was able to meet up with community members, twitter buddies, DELL employees and peers from all over EMEA to share experiences, learn together, talk shop and provide feedback, and I left with a better understanding of the complexities and realities they deal with on their side.

Dell Enterprise Forum EMEA 2014 - Frankfurt

It has been time very well spent. I applaud DELL for making their engineers and product managers available for this event. I thank them for allowing us this amount of access to their brains from breakfast till the moment we say goodnight after a nightcap. Well done, thank you for listening and I hope to continue the discussion. It’s great to be a DELL TechCenter Rockstar and work in this industry in these interesting times. To all the people I met again or for the first time, it was a great week of many interesting conversations!

For some more pictures and movies, visit the Dell Enterprise Forum EMEA 2014 from Germany photo album on Flickr.

Exchange 2010 SP3 Rollup 5 Added Support for Windows Server 2012 R2 Active Directory

6 weeks ago (February 25th 2014) Microsoft finally took away the last barrier to upgrading some of our Windows Server 2012 Active Directory environments to R2. Most of them are still running Exchange 2010 SP3 and not Exchange 2013. The reason Exchange 2013 was not deployed is a whole other discussion Eye rolling smile.

However, that did mean that until the release of Exchange Server 2010 SP3 Update Rollup 5 last month we could not upgrade Active Directory to Windows Server 2012 R2. Rollup 5 brought us support for exactly that. We can now:

  • Support Domain Controllers running Windows Server 2012 R2
  • Raise the Active Directory Forest Functional Level and Domain Functional Level to Windows Server 2012 R2 (a PowerShell sketch follows this list)
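
For those who want to script the functional level change, here’s a minimal sketch using the Active Directory PowerShell module; the forest name contoso.com is hypothetical and you’d obviously verify Rollup 5 is on all Exchange 2010 servers first:

    # Minimal sketch, assuming the AD PowerShell module and a hypothetical forest "contoso.com".
    # Only do this after Exchange 2010 SP3 RU5 is deployed everywhere.
    Import-Module ActiveDirectory
    Set-ADDomainMode -Identity contoso.com -DomainMode Windows2012R2Domain
    Set-ADForestMode -Identity contoso.com -ForestMode Windows2012R2Forest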

Please note that you cannot deploy Exchange Server 2010 (SP3 RU5) on Windows Server 2012 R2 and you probably never will be able to. As far as I know Microsoft has no plans for this.

Now that our office moves have been concluded, meaning I can get back to IT infrastructure instead of being a glorified logistics & facilities peon, we’re doing the upgrade.

This also means we can move the Active Directory environments to the latest version, so we have the best possible position for any future IT projects at very low risk. The environments are already at the W2K12 functional level. If budgets get so tight that they lose/scrap EA or volume licensing, it also allows them to run at this level for many years to come without causing any blocking issues.

The Hyper V Amigos Showcast Episode 2: Unmap

We’re back for our second episode of the Hyper-V Amigos showcast. In this episode we discuss and demonstrate UNMAP in Windows Server 2012 R2 a bit. As always it was fun to work with Carsten Rachfahl.

2 Hyper-V Amigos having fun discussing UNMAP


Here’s our fun and unscripted (other than the PowerShell used in the demos) attempt at showing you UNMAP behavior with Hyper-V and a DELL Compellent SAN.
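
If you want to poke at UNMAP yourself, a minimal sketch is to trigger a retrim pass from inside a guest (assuming a thin-provisioned virtual disk mounted as D:) and watch the space come back on the SAN:

    # Fire a manual retrim (UNMAP) pass against volume D: and report what gets reclaimed.
    Optimize-Volume -DriveLetter D -ReTrim -Verbose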

If you want to read more on our experiences with UNMAP, search my blog; I have prepared some links for you.

I still need to get the slides uploaded, but all that info is in the blog posts.



In relation to the question below about there not being much difference between dynamically expanding VHD/VHDX: that demo didn’t work out so well, so I include some screenshots of a comparison I just ran:

This is the dynamically expanding VHDX on an IDE controller, no ODX.


This is the dynamically expanding VHD on a vSCSI controller, with ODX.


So yes, losing ODX makes things slower for a dynamically expanding VHDX, but it still beats a dynamically expanding VHD that has ODX. A VHDX is a lot better at dynamically growing than a VHD.
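
If you want to rerun such a comparison, the test disks are quickly created; the paths and sizes below are just illustrative:

    # Create a dynamically expanding VHDX and VHD of equal size to compare their growth behavior.
    New-VHD -Path 'D:\Test\Dynamic.vhdx' -SizeBytes 100GB -Dynamic
    New-VHD -Path 'D:\Test\Dynamic.vhd' -SizeBytes 100GB -Dynamic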

Speaking At The ITPROceed Event–June 12th 2014, ALM Antwerp

The Belgian IT Pro community is organizing the ITPROceed event, a “technology geek fest” as they call it on their website.


It’s a joint venture between the IT pro community and Microsoft Belgium to help you all proceed in designing, deploying and operating Microsoft technologies.

The sessions will not only help you proceed but succeed as well. The speakers are Microsoft MVPs, MEET members & passionate community experts. They’ll share expertise & information gathered by using these technologies in real-life deployments.

A rich mix of technologies you have available and need today will be discussed, like the Cloud OS, System Center, SQL, Office 365, Windows 8, Unified Communications, Lync, Azure and SharePoint.

I’m speaking

I’ll be speaking about the features in Windows Server 2012 R2 that make it “The Scalable & Capable Cloud OS”.

Come see how you can leverage the capabilities of Windows Server 2012 R2, a true cloud OS, to achieve powerful and scalable solutions. We’ll demonstrate how to use technologies such as SMB Direct, DVMQ/vRSS, ODX, UNMAP, VHDX and Storage QoS. This will help you get the most out of commodity infrastructure and your investment in Windows today. We’ll share our experiences with you, based on real-life deployments, to help you proceed and succeed.

Join us!

Really, make time in your schedule and attend this event by registering here.


Attend the sessions, talk shop with your peers and discuss your questions with the experts. I’ll see you there.

Windows NLB On Windows Server 2012 R2 Hyper-V: A Personal Preferred Configuration Using IGMP With Multicast

To know and see the issues we are dealing with in this demo, you need to read this blog post first: Windows NLB Nodes “Misconfigured” after Simultaneous Live Migration on Windows Server 2012 (R2).

We were dealing with some issues on several WNLB clusters running on a Windows Server 2012 R2 Hyper-V cluster after a migration from an older cluster. So go read that and come back Smile.

Are you back? Good.

Let’s look at the situation we’ll use to showcase one possible solution to the issues. If you have a 2-node Hyper-V cluster and are using NIC teaming for the vSwitch, then depending on how teaming is set up you might run into these issues. Here we’ll use a single switch to mimic a stacked one (the model available to me is non-stackable and I only have one anyway).

  • Make sure you enable MAC spoofing on the appropriate vNIC or vNICs in the advanced settings (a PowerShell sketch follows this list)


  • Note that there is no need to use a static MAC address or copy your VIP MAC into the settings of your VM with Windows Server 2012 (R2) Hyper-V
  • Set up WNLB with IGMP multicast as the option. While changing this, some advisory warnings will be thrown at you Smile
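
For the MAC spoofing bit, here is a one-liner sketch per WNLB guest; the VM and adapter names are hypothetical:

    # Enable MAC address spoofing on the vNIC carrying the WNLB traffic.
    Set-VMNetworkAdapter -VMName 'NLB01' -Name 'NLB-NIC' -MacAddressSpoofing On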



I’m not going into the fact that since W2K8 the default network configurations are all about security. You might have to do some configuration work to get the network flow to do what it needs to do. There is lots on this weak host/strong host model behavior on the internet, even wild, messy ramblings by myself here.
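
If you do need to change that behavior, the knobs live in netsh; a hedged example, with a hypothetical interface name, run from an elevated prompt:

    # Allow the interface to receive/send packets for addresses it does not own (weak host model).
    netsh interface ipv4 set interface "LAN" weakhostreceive=enabled
    netsh interface ipv4 set interface "LAN" weakhostsend=enabled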

On to the switch itself!

Why IGMP multicast? Unicast isn’t the best option, plain multicast might not cut it or be the best option for your environment, and IGMP multicast is less talked about, yet it’s a nice solution with Windows NLB, bar replacing it with a hardware load balancer. For this demo I have a DELL PowerConnect 5424 at my disposal. Great little switch; many of them are still serving us well after 6 years on the job.

What MAC address do I feed my switch configurations?

Ah! You are a smart cookie, aren’t you? A mere ipconfig reveals only the unicast MAC address of the NIC. The GUI on WNLB shows you the MAC address of the VIP. Is that the correct one for my chosen option: unicast, multicast or IGMP multicast? No worries, the GUI indeed shows the one you need based on the WNLB option you configure. Also, take a peek at nlb.exe /? and you’ll find a very useful option called ip2mac.

Let’s run that against our VIP:


And compare it to what we see in the GUI; you’ll notice it shows the MAC to use with IGMP multicast as well.


You might want to get the MAC address before you configure WNLB from unicast to IGMP multicast. That’s where the ip2mac option comes in handy.
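
As a sketch, with a hypothetical VIP of 192.168.10.50:

    # Ask WNLB for the MAC addresses belonging to this VIP in the various cluster modes.
    nlb ip2mac 192.168.10.50

It lists the unicast, multicast and IGMP multicast MAC addresses, so you can prepare the switch configuration before you change the cluster mode.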

Configuring your switch(es)

We have a multicast IP address that we’ll convert into the one we need to use. Most switches, like the PowerConnect 5424 in this example, will do that for you by the way.

I’m not letting the joining of the members to the Bridge Multicast Group happen automatically, so I need to configure this. I actually have two VLANs; each Hyper-V host has a 2-NIC LACP team with Dynamic load balancing connected to an LACP LAG on this switch (it’s a demo, yes, I know, no switch redundancy). I have two as some WNLB nodes host multiple clusters and some of these are on another VLAN.

I create a Bridge Multicast Group. For this I need the VLAN, the IGMP multicast MAC address and the cluster IP address.

When I specify the IGMP multicast MAC I take care to format it correctly with “:” instead of “–“ or similar.

You can type in the VIP address or convert it per this KB yourself. If you don’t, the switch will sort you out.

The address range of the multicast group that is used is 239.255.x.y, where x.y corresponds to the last two octets of the Network Load Balancing virtual IP address.

For us this means that our VIP of … becomes 239.255.… The switch handles typing in either the VIP or the converted VIP equally well.
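
To make that conversion concrete with a hypothetical VIP (not the one used here): a VIP of 192.168.10.50 maps to multicast group 239.255.10.50, and the derived IGMP multicast MAC is 01:00:5E:7F:0A:32, i.e. 01:00:5E followed by the low 23 bits of the group address (0x7F from 255 with the top bit dropped, 0x0A from 10, 0x32 from 50).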


This is what it looks like; here two WNLB clusters are configured in IGMP multicast mode. There are more on the other VLAN.


We leave Bridge Multicast Forwarding here for what it is; no need in this small setup. Same for IGMP Snooping: it’s enabled globally and we’ve set the members statically.

We make sure unregistered multicast is set to forwarding (the default).


Basically, we’re good to go now. Looking at the counters of the interfaces & LAGs, you should see that the multicast traffic is targeted at the members of the LAGs and not at all interfaces of the switch. The difference should be clear when you compare how the counters add up before and after you configured IGMP.


The Results

No over-the-top switch flooding; I can simultaneously live migrate multiple WNLB nodes and have them land on the same switch without duplicate IP address warnings. Will this work for you? I don’t know. There are so many permutations that I can’t tell you what to do in your particular situation to make it work well. I’ll just quote myself from my previous blog post on this subject:

“"If you insist you want my support on this I’ll charge a least a thousand Euro per hour, effort based only. Really. And chances are I’ll spend 10 hours on it for you. Which means you could have bought 2 (redundancy) KEMP hardware NLB appliances and still have money left to fly business class to the USA and tour some national parks. Get the message?”

But you have seen some examples on how to address issues & get a decent configuration to keep WNLB humming along for a few more years. I really hope it helps out some of you struggling with it.

Wait, you forgot the duplicate IP Address Warning!

No, I didn’t. We’ll address that here. There are several causes for this:

  • There is a duplicate IP address. If so, you need to address this.
  • A duplicate IP address warning is to be expected when you switch between unicast and multicast NLB cluster modes. Following the advice in the KB article and clearing the ARP tables on the switches can help, and you should get rid of it; it’s transient.
  • There are other causes described in Troubleshooting Network Load Balancing Clusters. All come down to the fact that somehow you’re getting multiple MAC addresses associated with the same IP address. One possible cause can be that you migrated from an old cluster to a new cluster, meaning that the pool of dynamic MAC addresses is different and hence the generated VIP MAC … aha!
  • Another reason, again with multiple MAC addresses associated with the same IP address, is that you have an old static ARP entry for that IP address somewhere on your switches. Do some housecleaning.
  • If all the above is perfectly fine and you’re certain this is due to some Hyper-V live migration, vSwitch, firmware or driver bug, you can get rid of the warning by disabling ARP checks on the cluster members. Under HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, create a DWORD value named “ArpRetryCount” and set it to 0. Reboot the server for this to take effect. In general this is not a great idea. But if you manage your IP addresses well and are sure no static entries are set on the switches, it can help avoid this issue (a sketch follows this list). But please, don’t just disable “ArpRetryCount” and ignore the root causes.
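
For completeness, a PowerShell sketch of that last resort, to be used only after ruling out real duplicates and stale ARP entries:

    # Create/overwrite ArpRetryCount (0 = skip gratuitous ARP duplicate address detection).
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters' `
        -Name 'ArpRetryCount' -PropertyType DWord -Value 0 -Force
    Restart-Computer  # the change only takes effect after a reboot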


You can still get WNLB to work for you properly, even today in 2014. But it’s time to start saying goodbye to Windows NLB. The way the advanced networking features are moving towards layer 3 means that “useful hacks” like MAC spoofing for Windows NLB are no longer going to work. But until you have implemented hardware load balancing, I hope this blog has given you some ideas & tips to keep Windows NLB running smoothly for now. I’ve done quite a few and, while it takes some detective work & testing, so far I have come out victorious. Eat that, Windows NLB! I have always enjoyed making it work where people said it couldn’t be done. But with the growing importance of network virtualization and layer 3 in our networks, this nice hack has had its time.

For some reason developers like Windows NLB as “it’s easy and they are in control as it runs on their servers”. Well … as you have seen, nothing comes free, and perhaps our time is better spent on some advanced health checking and failover in hardware load balancing. DevOps anyone?

Windows NLB Nodes Misconfigured after Simultaneous Live Migration on Windows Server 2012 (R2)

Here’s the deal. While Windows NLB on Hyper-V guests might seem to work OK, you can run into issues. Our biggest challenge was to keep the WNLB cluster functional when all or multiple nodes of the cluster are live migrated simultaneously. The live migration goes blazingly fast via SMB over RDMA, but afterwards we have a node or nodes in a problematic state and clients being sent to them are having connectivity issues.

After live migrating multiple or all nodes of the Windows NLB cluster simultaneously the cluster ends up in this state:


A misconfigured interface. If you click on the error for details you’ll see


Not good, and no, we did not add those IP addresses manually; we let the WNLB cluster handle that as it’s supposed to. We saw this with both fixed MAC addresses (old school WNLB configuration of early Hyper-V deployments) and with dynamic MAC addresses. On all the nodes MAC spoofing is enabled on the appropriate vNICs.

The temporary fix is rather easy. However, it’s a manual intervention and as such not a good solution. Open up the properties of the offending node or nodes (for every NLB cluster that is running on that node; you might have multiple).


Click “OK” to close it …


… and you’re back in business.



Scripting this out somehow with nlb.exe or PowerShell after a guest gets live migrated is not the way to go either.

But that’s not all. In some cases you’ll get an extra error you can ignore if it’s not due to a real duplicate IP address on your network:


We tried rebooting the guests, dumping and recreating the WNLB cluster configuration from scratch, and clearing the switches’ ARP tables. Nothing gave us a solid result.

Now you might say: who live migrates multiple WNLB nodes at the same time? Well, any two-node Hyper-V cluster that uses Cluster Aware Updating gets into this situation, and possibly bigger clusters as well when anti-affinity is not configured, or when you choose to keep guests online over enforcing said anti-affinity, during a drain for an intervention on a cluster perhaps, etc. It happens. Now, whether you’ll hit this issue depends on how you configure and use your switches and what LBFO configuration you use for the vSwitches in Hyper-V.

How do we fix this?

First we need some background, and there is way too much for one blog actually. So many permutations of vendors, switches, configurations, firmware & drivers …


  • Unicast

This is the default, and Thomas Shinder has an aging but great blog post on how it works and what the challenges are here. Read it. It is your least good option and if you can avoid it you should. With Hyper-V we add the inner workings and challenges of a vSwitch to the mix. Basically, in virtualization unicast is the least good option. Only use it if your network team won’t do the switch work and you can’t get to the switch yourself, or when the switch doesn’t support mapping a unicast IP address to a multicast MAC address. Some tips if you want to use it:

  1. Don’t use NIC teaming for the virtual switch.
  2. If you do use NIC teaming for the virtual switch you should (must):
    • use switch independent teaming on two different switches.
    • If you have a stack or just one switch use multicast or even better IGMP with multicast to avoid issues.

I know, don’t shout at me, teaming on the same switch, but it does happen. At least it protects against NIC issues which are more common than switch or switch port failures.


  • Multicast

Again, read Thomas Shinder’s great blog post on how it works and what the challenges are here.

It’s an OK option but I’ll only use it if I have a switch where I can’t do IGMP and even then I do hope I can do two things:

  1. Add a static entry for the cluster IP address / MAC address on your switch if it doesn’t support IGMP multicast:
    • arp [ip] [cluster multicast mac] ARPA  > arp  03bf.bc1f.0164 ARPA
  2. To prevent switch flooding, as with unicast, configure on your switch which ports to use for the multicast traffic:
    • mac-address-table static [cluster multicast mac] [vlan id] [interface]  > mac-address-table static 03bf.bc1f.0164 vlan 10 interface Gi1/0/1

The big rotten thing here is that this is great when you’re dealing with physical servers. They don’t tend to jump from switch port to switch port and switch to switch on the fly like a virtual machine live migrating. You just can’t hardcode all the vSwitch ports into the physical switches; for one, they move, and depending on the teaming choice there are multiple ports, switches, etc. … it’s not allowed and not possible. So when using multicast in a Hyper-V environment, stick to 1). But here’s an interesting fact: many switches that don’t support 1) do support 2). Fun fact is that most commodity switches do seem to support IGMP … and that’s your best choice anyway! Some high-end switches don’t support WNLB well, but in that category a hardware load balancer shouldn’t be an issue. But let’s move on to my preferred option.

  • IGMP With Multicast (see IGMP Support for Network Load Balancing)

    This is your best option and even on older, commodity switches like a DELL PowerConnect 5424 or 5448 you can configure this. It was introduced in Windows Server 2003 (did not exist in NT4.0 or W2K). It’s my favorite (well, I’d rather use hardware load balancing) in a virtual environment. It works well with live migration, prevents switch flooding and with some ingenuity and good management we can get rid of other quirks.

    So Didier, tell us, how do we get our cookie and eat it too?

    Well, I will share the IGMP with Multicast solution with you in a next blog post. Do note that, as stated above, there are so many permutations of Windows, teaming, WNLB, switches & firmware/drivers out there that I give no support and no guarantees. Also, I want to avoid writing a 100-page white paper on this subject. If you insist you want my support on this I’ll charge at least a thousand Euro per hour, effort based only. Really. And chances are I’ll spend 10 hours on it for you. Which means you could have bought 2 (redundancy) KEMP hardware NLB appliances and still have money left to fly business class to the USA and tour some national parks. Get the message?

    But don’t be sad. In the next blog post we’ll discuss NIC teaming for the vSwitch, NLB configuration with IGMP with Multicast, and show you a simple DELL PowerConnect 5424 switch example that makes WNLB work on a W2K12R2 Hyper-V cluster with NIC teaming for the vSwitch and avoids the following issues:

    • Messed up WNLB configuration after the simultaneous live migration of all or multiple NLB Nodes.
    • You avoid “false” duplicate IP address goof ups (at the cost of  IP address hygiene management).
    • You prevent switch port flooding.

    I’d show you on redundant Force10 S4810 but for that I need someone to ship me some of those with SFP+ modules for the lab, free of cost for me to keep Winking smile


    It’s time to start saying goodbye to Windows NLB. The way the advanced networking features are moving towards layer 3 means that “useful hacks” like MAC spoofing for Windows NLB are no longer going to work. But until you have implemented hardware load balancing, I hope this blog has given you some ideas & tips to keep Windows NLB running smoothly for now. I’ve done quite a few and, while it takes some detective work & testing, so far I have come out victorious. Eat that, Windows NLB!

  • Copy Cluster Roles Hyper-V Cluster Migration Fails at Final Step with error Virtual Machine Configuration ‘VM01’ failed to register the virtual machine with the virtual machine service

    I was working on a migration of a nice two-node Windows Server 2012 Hyper-V cluster to Windows Server 2012 R2. The cluster consists of 2 DELL R610 servers and a DELL MD3200 shared SAS disk array for the shared storage. It runs all the virtual machines with infrastructure roles etc. It’s a Cluster-In-A-Box-like setup. This has been doing just fine for 18 months, but the need for features in Windows Server 2012 R2 became too much to resist. As the hardware needs to be recuperated and we have a maintenance window, we used the copy cluster roles scenario that we have used so many times before with great success. It’s the “Perform an in-place migration involving only two servers” scenario documented on TechNet and described, for your convenience, in one of my previous blogs: Migrating a Hyper-V Cluster to Windows 2012 R2.

    Virtual Machine Configuration ‘VM01’ failed to register the virtual machine with the virtual machine service

    As the source host was running Windows Server 2012 we could have done the live migration scenario, but the downtime would be minimal and there was a maintenance window. So we chose this path.

    So we performed a good health check of the source cluster and made sure we had no snapshots left hanging around. Yes, snapshots are supported now for this migration scenario, but I like to have as few moving parts as possible during a migration.

    It all went smooth as silk. After shutting down the VMs on the source cluster node and bringing the CSV offline (and un-presenting the LUN from the source node for good measure), we presented that LUN to the target host. We brought the CSV online, and when that completed successfully we were ready to bring the virtual machines online … and that failed.

    Log Name:      Microsoft-Windows-Hyper-V-High-Availability-Admin
    Source:        Microsoft-Windows-Hyper-V-High-Availability
    Date:          4/02/2014 19:26:41
    Event ID:      21102
    Task Category: None
    Level:         Error
    User:          SYSTEM
    ‘Virtual Machine Configuration VM01’ failed to register the virtual machine with the virtual machine management service.




    Let’s dive into the other event logs. On the host, the application, security and system event logs are squeaky clean. The Hyper-V event logs are pretty empty or clean too, except for these events in the Hyper-V-VMMS Admin log.

    Log Name:      Microsoft-Windows-Hyper-V-VMMS-Admin
    Source:        Microsoft-Windows-Hyper-V-VMMS
    Date:          4/02/2014 19:26:40
    Event ID:      13000
    Task Category: None
    Level:         Error
    User:          SYSTEM
    User ‘NT AUTHORITY\SYSTEM’ failed to create external configuration store at ‘C:\ClusterStorage\HyperVStorage\VM01’: The trust relationship between this workstation and the primary domain failed. (0x800706FD)



    Bingo. It must be the fact that no domain controller is available. It’s a completely self-contained cluster and both domain controller virtual machines are highly available and reside on the CSV. Now, the CSV does come online without a DC since Windows Server 2012, so that’s not the issue. It’s the process of registering the VMs that fails without a DC in an Active Directory environment.

    Getting passed this issue

    There are multiple ways to resolve this and move ahead with our cluster migration. As the environment was still fully functional on the source cluster, I just removed a DC virtual machine from high availability on the cluster. I shut it down and exported it. I then copied it over to the node of the new cluster (we’re going to nuke the source host afterwards and install W2K12R2, so we moved it to the new host where it could stay), where I put it on local storage and imported it. For this I used the “Register the virtual machine in-place” option. I did not make it highly available.
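
    Roughly, the PowerShell equivalent of those manual steps looks like this; the VM name, paths and configuration file are hypothetical:

        # On the source cluster: take the DC out of high availability, shut it down and export it.
        Remove-ClusterGroup -Name 'DC01' -RemoveResources -Force
        Stop-VM -Name 'DC01'
        Export-VM -Name 'DC01' -Path 'E:\Export'

        # On the new host, after copying the export to local storage: register in place and start.
        Import-VM -Path 'D:\VMs\DC01\Virtual Machines\<GUID>.xml' -Register
        Start-VM -Name 'DC01'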


    After verifying that we could ping the DC and that it was up and running well, we tried the final phase of the migration again. It went as smooth as we have come to expect!

    Other options would have been to host the DC virtual machine on a laptop or another server. If you can no longer get to the DC for export & import, heck, even a shared nothing migration can, depending on your environment, help you out of this pickle. A restore from backup would also work. But here, in that 2-node all-in-one cluster, our approach was fast and efficient.

    So there you go. Tip to remember: virtualizing domain controllers is fully supported, no worries there, but you need to make sure that if you have a dependency on a DC, the DC itself doesn’t depend on that dependency. It’s a chicken and egg thing.

    Hot Iron, Cold Steel & Cables Are Still Paramount In The Era Of The Cloud

    Cloud, virtualized, on premises, hosted … the people in the field offices need to connect to them, and as such hardware is not dead yet Winking smile. Commodity doesn’t mean obsolete or “in the cloud” only.


    Some nice DELL PowerConnect 5548P switches. We’ve been using this line of switches (since the 53XX series) for many years now, with great success, in the datacenter (before we switched to 10Gbps) and for campus/client access. They’ve never let us down, at a price/value point that makes the economics of using them too good to ignore.

    Once in a while we’re out in the field making sure the people can access their apps, services and servers in the cloud, the data center or at a hosting provider. Meaning we get to play with some hardware, and we all still enjoy that Smile. While at work at several sites I was once again confronted with commodities being treated like specialties, with the following results:

    • Overly expensive
    • Very little value & capabilities (under delivery)
    • Slow delivery
    • Churning

    To avoid wasting your money or allowing it to be wasted, you need to use common sense. If you use advisors, get a consigliere, not a racketeer.

    1Gbps to the desktop and get some extra ports

    I’ve talked about getting affordable 10Gbps without compromising capabilities before, so here I’ll look at the access/campus side of the story. I still find many organizations rolling out 100Mbps to the desktop for cost reasons and counting ports in orders of one. Two things to keep in mind: buy 1Gbps and buy some extra.

    Buying vast quantities of something you don’t use but that does draw power is not a good idea. But being a complete scrooge and not having some extra ports is ridiculous. I have seen many thousands of € wasted in meetings about 10 to 40 switch ports too few in new building projects that have > 5000 outlets. The only real saving I see is in electricity used, if that is a major concern where you are. Organizations spend tens of thousands of € discussing something that would be fixed by spending a few thousand, which would give extra benefits on top. That’s churning, people: creating work and billable hours by overinflating issues & crying wolf to justify the expenditure that’s supposedly needed to stave off disaster.

    On top of that, when you do ask those architects to do some modern designs like SMB Direct & DCB, they freak out & repeat the above ritual. Chances are you’ll spend 20,000 to 30,000 euro on a 6-month study that says it can’t be done because of cost & the probability the sky will fall on your head, leaving you empty-handed and poorer. You should have taken the money and just done it. Their scams defer responsibilities to untraceable entities, line the pockets of consulting houses and, as no one is going to take responsibility to stop this madness, it just goes on forever whilst on paper everything is done by the book and compliance with the rules is achieved. Until the day some joker, frustrated at the lack of a few ports, attaches a cheap 8-port switch to the outlet, creates a loop and brings down the building’s network, affecting many thousands. Because the design didn’t handle that too well … been there, seen it.

    I also disagree with the practice of dropping in 100Mbps unless you have really good reasons. Structural cabling is being put in at Cat6A specifications nowadays, and CAT5E has been put in for many years. 1Gbps is not a luxury if you do lots of data transfers within an office and have image-intensive needs (more and more that is all of us, with video and images, all in high res). Google Fiber is coming to residential homes … guess what that could mean for the services that can be delivered … Heaven forbid you buy 100Mbps because those fancy overpriced VOIP phones only do 100Mbps & you can’t afford to replace them.

    With QoS for VOIP and other use cases, some extra bandwidth comes in handy as well. Also, don’t forget software installations & automated rollouts of desktops & laptops. Last but not least, it helps deal with the crappy network behavior of way too many software packages.

    On the number of ports and the price per port: we buy the most minimal support on switches possible. They hardly ever die on you, and if something goes bad it’s perhaps a port, and even that is rare. So don’t waste money on support contracts; buy some extra ports. For one, you need some wiggle room, and you have spare capacity to deal with port or even switch failures. If you need 400 ports, buy ten 48-port switches. You have spare capacity and can even afford to lose a switch. If one really fails, most of these have a “lifetime warranty” anyway. You finance 1Gbps to the desktop by dumping support you won’t need, buying value commodity switches and avoiding the racketeers mentioned above. If you need a network engineer, hire one, a good one.

    Then inevitably the cry comes: “you’ll saturate the uplinks!” Not a big issue for the small office (+/- 60 people) setup we did recently, but what about a bit larger environments? Today’s commodity switches have dual fiber uplink ports, 10Gbps capable, for a redundant LAG. Build a star design, not a cascade, to a more capable core/top switch & you’re golden. It’s also great future-proofing, as we use access switches for a long time (over 7 years is not an exception), so give yourself some wiggle room.

    Cost, you say? Again, forgo the expensive market leaders and you’ll get better value for less money that gets the job done very well. Cabling, even OM3 fiber, is affordable compared to the labor, construction and maintenance of a > 1000 employee building. Put in enough cabling to allow for 21st-century network traffic and make sure working on it is easy. Good principles used in the wrong place in the wrong way are no good to anyone except the ones making money off this scam.

    Some Insights Into How Windows 2012 R2 Hyper-V Backups Work

    How Windows Server 2012 R2 backups differ from Windows Server 2012 and earlier

    You’ll remember our previous blog about an error when backing up a virtual machine on Windows Server 2012 R2, throwing this error:

    Dealing With Event ID 10103 “The virtual machine ‘VM001’ cannot be hot backed up since it has no SCSI controllers attached. Please add one or more SCSI controllers to the virtual machine before performing a backup. (Virtual machine ID DCFE14D3-7E08-845F-9CEE-21E0605817DC)” In Windows Server 2012 R2

    The fix was easy enough, adding a virtual SCSI controller to the virtual machine. But why does it need that now?

    Well, this all has to do with the changed way Windows Server 2012 R2 backups work. Before Windows Server 2012 R2, the VSS provider created a VSS snapshot inside the guest virtual machine. That snapshot was exposed to the host to create a volume snapshot for backup purposes. Right after the volume snapshot was taken, the VSS snapshot inside the guest virtual machine needed to be reverted. The backup then runs against that volume snapshot and is consistent thanks to both host & guest VSS capabilities.

    For an overview of the VSS-based backup process in general, take a peek at Overview of Processing a Backup Under VSS.

    Now it is the “Hyper-V Integration Services Shadow Copy Provider” that is being used. When the host initiates a volume snapshot (Microsoft or hardware VSS provider), the host VSS writer goes into freeze. This process leverages the Hyper-V Integration Services Shadow Copy Provider to create a virtual machine checkpoint. After that, the volume/LUN/CSV snapshot is taken. When that is done, the host VSS writer goes into thaw and the virtual machine checkpoint is deleted. After that, the backup runs against the volume snapshot, and at the end that is also deleted. You can follow this process quite nicely in the GUI of your Hyper-V host and of your SAN (if you use a hardware VSS provider).
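
    You can follow along in the event logs as well; a small sketch to peek at the most recent VMMS events on the host during a backup run:

        # Show the latest Hyper-V VMMS admin events, where the checkpoint create/delete dance is logged.
        Get-WinEvent -LogName 'Microsoft-Windows-Hyper-V-VMMS-Admin' -MaxEvents 25 |
            Select-Object TimeCreated, Id, Message | Format-Table -AutoSize -Wrap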

    Dear storage vendors: a great, reliable, fast VSS Hardware Provider is paramount to success in a Microsoft environment. You need to get this absolutely right and out of the door before spending any more time and money on achieving yet more IOPS. Keep scalability in mind when doing this.

    Dear backup software vendors: think about the scalability when designing your products. If we have 200 or 500 or a thousand VMs … can we leverage CSV based backups to protect every VM on the LUN or do we need to snap the LUN for every VM backed up? Choice there is good for both data protection schemes and scalability.

    At this stage the hardware VSS snapshot is being taken …


    Contrary to common belief, this means that the backup will indeed be application consistent to the time of the checkpoint, as the CSV snapshot being taken is of a consistent checkpoint. It’s the delta in the active avhdx that is only crash consistent, like any running VM by the way. Now pay attention to the screenshot below. The two red arrows indicate two ntfs source events; two volumes seem to be exposed to the next free drive letters, E: and F: here, as C: is the virtual machine OS and D: the DVD.


    Look at the detail. Indeed, two. Well, in the previous screenshot we only saw one in the CSV path, but there are indeed two avhdx files.


    Exposing a snapshot on the SAN to a server actually shows us this much better … look here at the avhdx with the GUID and the one with “AutoRecovery” in the name. So that makes for two ntfs events … and as the backup needs to do this live, it requires a vSCSI controller to be present in the virtual machine … a vIDE controller can’t do this.


    Anyway, enough under-the-hood detective work for now. In VEEAM that stage looks like this:


    And on the Compellent it looks like this. The screenshots are from different backups at different times, so don’t get confused about the time stamps here. It’s just an illustration of what you can expect to see.


    Now, when the CSV snapshot has been taken, the virtual machine checkpoint is removed. At that time, the backup runs against the CSV snapshot. In our case (hardware VSS provider) this is a snapshot on the SAN that gets exposed in a view and mapped to the off-host backup proxy VEEAM server. On the DELL Compellent it looks like this.


    This takes a while to do … but after a while the backup will kick off. Do note that the checkpoint has merged and is no longer visible at this time.


    Once the backup is complete, the mapping is removed, the view deleted and the snapshot expired. So your SAN is left as the backup found it.

    There you go. I hope this helped clarify certain things on how Hyper-V guest backups work in Windows 2012 R2. So your backups are still application consistent, just not when you’re running Linux or DOS or NT 4.0, as there is no VSS support for those. However, they are based on a consistent virtual machine snapshot, which explains why Hyper-V backups can protect Linux guests very adequately!

    Dealing With Event ID 10103 “The virtual machine ‘VM001’ cannot be hot backed up since it has no SCSI controllers attached. Please add one or more SCSI controllers to the virtual machine before performing a backup. (Virtual machine ID DCFE14D3-7E08-845F-9CEE-21E0605817DC)” In Windows Server 2012 R2

    I was doing backups of a Windows 2012 R2 Hyper-V cluster recently; it runs only Windows Server 2012 R2 virtual machines. It’s a small but very modern and up-to-date cluster Smile.

    Using VEEAM as backup software I have high expectations and VEEAM did deliver. All went well except for one virtual machine.


    VEEAM states “Processing Error. Guest processing skipped (check guest OS VSS state and integration components version)”. Well, all virtual machines are W2K12R2, as are the cluster hosts, all IC components are up to date, and backup (volume checkpoint) is enabled.


    I dove into the Hyper-V log and sure enough I found the following event:

    The virtual machine ‘VM001’ cannot be hot backed up since it has no SCSI controllers attached. Please add one or more SCSI controllers to the virtual machine before performing a backup. (Virtual machine ID DCFE14D3-7E08-845F-9CEE-21E0605817DC).

    As it turns out, in Windows Server 2012 R2 the VM requires a SCSI controller for the backup to function. It doesn’t need to have any storage attached; it just needs one to be there (the default). So the fix is easy: just add one.



    Click “Apply” and “OK”. You can now start the virtual machine and that’s it. Once we fixed that it was a squeaky clean backup run.
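
    If you’d rather script the fix, here is a minimal sketch; note that the virtual machine must be shut down to add the controller, and the VM name is the one from this post:

        # Add an (empty) vSCSI controller so the host-level backup can mount the auto-recovery VHDX.
        Stop-VM -Name 'VM001'
        Add-VMScsiController -VMName 'VM001'
        Start-VM -Name 'VM001'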

    But why does it need to be there?

    Well, when we monitor the event logs inside a virtual machine we are backing up, we see that during the backup process a VHDX very briefly gets mounted inside the guest.


    To answer this question we need to dive into how Windows Server 2012 R2 backups work as that is different from how it used to be. You can read about that over here when it’s published.