DELL Enterprise Forum EMEA 2014 in Frankfurt


As you might have noticed on Twitter, I was in Frankfurt last week to attend DELL Enterprise Forum EMEA 2014. It was a great conference and very worthwhile going to. It was a week of multi-way communication between vendor, marketing, engineering, partners and customers. I learned a lot. And I gave a lot of feedback. As a Dell TechCenter Rockstar and a Microsoft MVP in Hyper-V I can build bridges to make sure both worlds understand each other better and we, the customers, get our needs served better.

Dell Enterprise Forum EMEA 2014 - Frankfurt

I’m happy I managed to go and I have some people to thank for me being able to grab this opportunity:

  • I cleared the time with my employer. This is great, it's a win-win situation, and I invested weekend time & extra hours for both my employer and myself.
  • I got an invite for the customer storage council where we learned a lot and got ample opportunity to give honest and constructive feedback directly to the people that need to hear it! Awesome.
  • The DELL TechCenter Rockstar program very generously invited me to come over to the Enterprise Forum at zero cost, which is great and helped my employer and myself out. So, thank you so much for helping me attend. Does this color my judgment? 100% pure objectivity does not exist, but the ones who know me also know I communicate openly and directly. Look, I've never written positive reviews for money or kickbacks. I do not have sponsoring on my blog, even if that could help pay for conferences, travel expenses or lab equipment. Some say I should, but for now I don't. I speak my mind and I have been a long-term DELL customer for some very good reasons. They deliver the best value for money with great support, in a better way and model than others out there. I was sharing this info way before I became a Rockstar and they know that I tell the good, the bad and the ugly. They can handle it and know how to leverage feedback better than many out there.
  • Stijn Depril ( @sdepril, http://www.stijnsthoughts.be/), Technical Datacenter Sales at RealDolmen, gave me a ride to Frankfurt and back home. Very nice of him and a big thank you for doing so. He didn't have to, and I'm not even a customer of theirs. Thanks buddy, I appreciate it, and it was interesting to learn the partner's view on things during the drive there and back. Techies will always be checking out gear …

Dell Enterprise Forum EMEA 2014 - Frankfurt

What did all this result in? Loads of discussion, learning and sharing about storage, networking, compute, cloud, futures and community in IT. It was an 18-hour-per-day technology fest in a very nice and well-arranged fashion.

I was able to meet up with community members, twitter buddies, DELL Employees and peers from all over EMEA and share experiences, learn together, talk shop, provide feedback and left with a better understanding of the complexities and realities they deal with on their side.

Dell Enterprise Forum EMEA 2014 - Frankfurt

It has been time very well spent. I applaud DELL for making their engineers and product managers available for this event. I thank them for allowing us this amount of access to their brains from breakfast till the moment we say goodnight after a nightcap. Well done, thank you for listening and I hope to continue the discussion. It's great to be a DELL TechCenter Rockstar and to work in this industry during these interesting times. To all the people I met again or for the first time, it was a great week of many interesting conversations!

For some more pictures and movies visit the Dell Enterprise Forum EMEA 2014 from Germany photo album on Flickr

Where Does Storage QoS Live In Windows Server 2012 R2 Hyper-V


Back to basics to explain where storage QoS lives and how it works

In Windows Server 2012 R2 Hyper-V (and earlier) we have Hyper-V components called the Virtualization Service Provider (VSP) and Virtualization Service Clients (VSC). In combination with the VMBus, the VSP and VSC components are what make virtualization perform well on Hyper-V. The Stor VSP/VSC are where the maximum IOPS functionality, aka the QoS Limit, lives.

In a hosted hypervisor like Virtual PC, or in a bare metal hypervisor without any "enlightenment", the operating system inside a virtual machine is blissfully unaware of the fact that it is virtualized. Basically it sends hardware access requests using native drivers, but the requests are received by the virtualization layer, which intercepts them on behalf of the host OS by emulating hardware devices. This comes at a cost, namely performance, latency and the loss of device-specific functionality.

In Hyper-V Microsoft provides the Integration Services (IS) for virtual machines running on Hyper-V which, in combination with the VMBus, avoid this overhead. So you should always use them where and when possible. Two of the components in the IS are the VSP and VSC. They are responsible for the communication between the Host OS or Parent Partition (where the VSP lives) and the Guest OS or Child Partition (where the VSC lives).

image

There are 4 VSP & VSC pairs: Network, Video, HID and Storage. As you probably guessed we're interested in the storage VSP & VSC (storVSP.sys & storvsc.sys) for the discussion at hand. The Stor VSP lives in the host OS and the Stor VSC in the guest OS of every VM running on the host; they communicate over the VMBus we mentioned, which is designed to make communication as fast as possible (it's a communication protocol that runs in memory, i.e. it's very fast).

image

The Minimum IOPS, also known as the Reserve, is set per virtual disk, but the threshold alerts for it are generated by the VHDMP. This is the VHD/VHDX parser and dependency property provider, which knows all about the VHD/VHDX format, which in itself is again a file on storage (DAS, CSV, SMB 3.0 file share). This also happens to be where the Storage IO Balancer, with which it collaborates, lives; more on that below. You now see why QoS is not available for pass-through disks or iSCSI/FC storage in a VM: it requires a VHDX and is implemented at the virtual disk layer.

The QoS Limit (Maximum IOPS) is set at the virtual disk level via the Stor VSC, and the QoS Limiter lives in the Stor VSP.

image

So what do we know:

The QoS Limit (Maximum IOPS) and QoS Reserve (Minimum IOPS) are implemented at the virtual disk layer, so per VHDX in a particular VM. It's not yet available for shared VHDX, whether on the same host or not.
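You can see these per virtual hard disk settings from PowerShell too. A minimal sketch to inspect them (DemoVM is just a placeholder name, run it on the host):

# Show the QoS settings that live on each virtual hard disk of a VM
Get-VMHardDiskDrive -VMName DemoVM |
    Select-Object VMName, Path, MinimumIOPS, MaximumIOPS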

Unlike the QoS Limit (Maximum IOPS), which is a hard cap, the QoS Reserve (Minimum IOPS) is a best effort, not a hard minimum. It's used to warn us, not as an enforcement. This works at the host level, where it detects whether the VHDX can get the configured minimum IOPS or not and can generate alerts if it can't. This is tied to the QoS IO Balancer, which is improved in R2, but it still only spreads IOPS across multiple VMs on the same host, making sure they all get a fair share.

The key point here is that this process doesn't work across multiple hosts in a cluster, across multiple clusters, or across stand-alone member servers that might all be attached to the same storage system. Meaning that on shared, multi-purpose storage we might have an issue. What if some VMs in a 4-node Hyper-V cluster dedicated to SQL Server virtualization are eating away all the IOPS? The QoS IO Balancer will give each SQL Server VM a fair share of the IOPS, but only within its host in that cluster. If a VM on another host is consuming all the IOPS, that's out of its scope. That's where the max cap comes to the rescue (at the virtual disk level) if you need it. Nice, but not perfect. You can see now why the storage QoS minimum is implemented at the VHDMP layer, as this is where the IO Balancer also lives. The fairness that the IO Balancer brings gives you a better chance that the minimal reserve will be met, and if it isn't, you'll get notified (you need to listen and react, I hope that's obvious).

Also don’t forget that if you still have other physical servers that run file services, SQL Server or some data crunching apps you will find that those are blissfully ignorant of your QoS IO Balancer at the Hyper-V host level and of your QoS at the Hyper-V virtual disk level.

There is no multi-host QoS, there is no cluster-wide QoS and there is no storage-wide QoS in Windows. Perhaps you have some QoS in your SAN, but most of the time it has no knowledge of Hyper-V, the cluster or the virtual machines.

So the above gives you an idea of where Microsoft might focus its attention in regards to storage IOPS management (there are many more storage capabilities on my wish list) in vNext.

Any other options available today?

Other options are storage solutions that are smart and have knowledge about the workload. This is nice, but it will come at a cost. For the moment GridStore with its virtual controller seems to be one of the better ones out there. Now, I have heard people say Microsoft doesn't get it and that they're doing a bad job, but I do not agree. I have spoken to many people in the community and at MSFT, and they have stated, even publicly, on stage, that they will keep investing in storage features to enhance them in the versions to come. Take a look at the TechEd 2013 session MDC-B345: Windows Server 2012 Hyper-V Storage Performance.

Why would I like Microsoft to keep improving storage

When talking to storage vendors serving our needs, I always have some feedback. A lot of the advanced storage features don't always work well in real life, especially if you combine a few. Don't believe me? Talk to some experienced Windows engineers about the sorry state of many hardware VSS providers. Or about how federation across storage systems falls apart the moment you combine it with application-consistent snapshots or put a really heavy load on it. Not too cool when you paid for all those licenses, which are turned into "lab only" toys. Yes, sometimes as a Windows user you feel like a second-class citizen in storage land. A lot of storage systems are still very much a silo. Attempts to do storage federation without a hit on performance, making it load balance across SAN building blocks whilst making all the advanced features that have knowledge of the OS and hypervisor work reliably, are not moving as fast as the race for ever more IOPS.

Sure I love the notion of 2 million IOPS, especially if you can get them with random write/read IO at super low latencies Smile. But there are other, sometimes more urgent needs and those seem to fall between the cracks as the storage vendors compete with each other and forget about the needs of their customers. If some storage vendors would shut up long enough to listen to customers they might be less surprised as to why those customers are interested in Storage Spaces.

So it would be kind of nice if Microsoft could work on this and include more evolved storage QoS capabilities in the box. I also like that approach for other reasons. Basically, we do everything we can with what Windows offers us in box. It's cost-effective as long as you keep the KISS principle in mind and design it consciously. I assure you that often too much money is spent on 3rd party software because people don't leverage what they have in box and drop the 20/80 rule. We do, and we get the best TCO/ROI possible for our licenses. We don't spend extra money on licenses, integration and support of third-party products, so we can spend it where it matters the most. It also makes upgrades easier, as the complexity and the number of dependencies are lower on a pure in-box solution. On top of that we minimize the distinct possibility that one or more 3rd party products will hold us hostage in an older infrastructure because they don't support new versions of Windows fast, well and completely enough for us to upgrade.

How To Monitor Storage QoS Minimum IOPS & Identify VM & The Virtual Hard Disk In Windows Server 2012 R2 Hyper-V


At TechEd 2013 John Matthew & Liang Yang presented the following session: MDC-B345: Windows Server 2012 Hyper-V Storage Performance. That was a great one. During it they demonstrated the use of WMI to monitor storage alerts related to Storage QoS in Windows Server 2012 R2. We're going to go further, dive in a bit deeper and show you how to identify the virtual hard disk and the virtual machine.

One important thing in all this is that we need the reserve or minimum IOPS not to be met, so we run IOMeter to make sure that's the case. That way the events we need will be generated. It's a bit of a tedious exercise.

So we start with a WMI notification query; this demonstrates that notifications are sent when the minimum IOPS cannot be met. The query is simply:

select * from Msvm_StorageAlert

image

instance of Msvm_StorageAlert
{
    AlertingManagedElement = "\\\\TESTHOST01\\root\\virtualization\\v2:Msvm_ResourcePool.InstanceID=\"Microsoft:70BB60D2-A9D3-46AA-B654-3DE53004B4F8\"";
    AlertType = 3;
    Description = "Hyper-V Storage Alert";
    EventTime = "20140109114302.000000-000";
    IndicationTime = "20140109114302.000000-000";
    Message = "The ‘Primordial’ Hard Disk Image pool has degraded Quality of Service. One or more Virtual Hard Disks allocated from the pool is not reporting sufficient throughput as specified by the IOPSReservation property in its Resource Allocation Setting Data.";
    MessageArguments = {"Primordial"};
    MessageID = "32930";
    OwningEntity = "Microsoft-Windows-Hyper-V-VMMS";
    PerceivedSeverity = 3;
    ProbableCause = 50;
    ProbableCauseDescription = "One or more VHDs allocated from the pool (identified by value of AlertingManagedElement property) is experiencing insufficient throughput and is not able to meet its configured IOPSReservation.";
    SystemCreationClassName = "Msvm_ComputerSystem";
    SystemName = "TESTHOST01";
    TIME_CREATED = "130337413826727692";
};
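By the way, you don't have to keep wbemtest open for this; the same notification query can be subscribed to from PowerShell. A minimal sketch (the action just echoes the alert message, adapt it to your needs):

# Subscribe to Hyper-V storage alerts in the virtualization v2 namespace
Register-WmiEvent -Namespace root\virtualization\v2 -Query "SELECT * FROM Msvm_StorageAlert" `
    -SourceIdentifier "StorageQoSAlert" `
    -Action { Write-Host $Event.SourceEventArgs.NewEvent.Message }

# Clean up the subscription when you're done
# Unregister-Event -SourceIdentifier "StorageQoSAlert"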

That's great, but which virtual hard disk of which VM is causing this? That's the question we'll dive into in this blog post. Let's go. In the MSDN docs on the Msvm_StorageAlert class we read:

Remarks

The Hyper-V WMI provider won’t raise events for individual virtual disks to avoid flooding clients with events in case of large scale malfunctions of the underlying storage systems.

When a client receives an Msvm_StorageAlert event, if the value of the ProbableCause property is 50 ("Storage Capacity Problem"), the client can discover which virtual disks are operating outside their QoS policy by using one of these procedures:

  1. Query all the Msvm_LogicalDisk instances that were allocated from the resource pool for which the event was generated. These Msvm_LogicalDisk instances are associated to the resource pool via the Msvm_ElementAllocatedFromPool association.
  2. Filter the result list by selecting instances whose OperationalStatus contains "Insufficient Throughput".
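For reference, that documented procedure boils down to a few lines of PowerShell. A minimal sketch that skips the pool association step and simply checks all virtual disks (32788 is the OperationalStatus value for insufficient throughput, as we'll see further down):

# Find virtual disks that report insufficient throughput
Get-CimInstance -Namespace root\virtualization\v2 -ClassName Msvm_LogicalDisk |
    Where-Object { $_.OperationalStatus -contains 32788 } |
    Select-Object ElementName, DeviceID, OperationalStatus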

In the screenshots below I stick with wbemtest: I query (NOT a notification query!) the Msvm_ElementAllocatedFromPool class, click through on a result and select Show MOF.

image

image

 

Let's look at that MOF … In yellow is the GUID of our VM. Hey, cool!

instance of Msvm_ElementAllocatedFromPool
{
    Antecedent = "\\\\TESTHOST01\\root\\virtualization\\v2:Msvm_ProcessorPool.InstanceID=\"Microsoft:B637F347-6A0E-4DEC-AF52-BD70CB80A21D\"";
    Dependent = "\\\\TESTHOST01\\root\\virtualization\\v2:Msvm_Processor.CreationClassName=\"Msvm_Processor\",DeviceID=\"Microsoft:b637f346-6a0e-4dec-af52-bd70cb80a21d\\\\6\",SystemCreationClassName=\"Msvm_ComputerSystem\",SystemName=\"
96CD7F7E-0C0A-42FE-96CB-B5550D937F27\"";
};

Now we want to find the virtual hard disk in question! So let's do what the docs say and query Msvm_LogicalDisk based on the VM GUID; we find the related results …

image

Look, we got OperationalStatus 32788, which means InsufficientThroughput. Cool, we're on the right track … now we need to find which virtual disk of our VM that is. Well, in the above MOF we find the device ID: DeviceID = "Microsoft:5F6D764F-1BD4-4C5D-B473-32A974FB1CA2\\\\L"

Well, if we then do a query for Msvm_StorageAllocationSettingData we find two entries for our VM GUID (it has two disks), and by looking for the InstanceID value that contains the above DeviceID we find the virtual hard disk info we needed to identify the one not getting its minimum IOPS.

image

HostResource = {"C:\\ClusterStorage\\Volume5\\DidierTest01\\Virtual Hard Disks\\DidierTest01Disk02.vhdx"};
HostResourceBlockSize = NULL;
InstanceID = "Microsoft:96CD7F7E-0C0A-42FE-96CB-B5550D937F27\\5F6D764F-1BD4-4C5D-B473-32A974FB1CA2\\\\L";
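If clicking through wbemtest is not your thing, this lookup can also be scripted. A rough sketch using the VM GUID from this example (swap in your own):

# Find the storage allocation setting data entries that belong to this VM GUID
$VMGuid = "96CD7F7E-0C0A-42FE-96CB-B5550D937F27"
Get-CimInstance -Namespace root\virtualization\v2 -ClassName Msvm_StorageAllocationSettingData |
    Where-Object { $_.InstanceID -like "*$VMGuid*" } |
    Select-Object InstanceID, HostResource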

Are you tired yet? Do you realize you need to do this while the disk's minimum IOPS is not being met in order to see the events? This is no way to do it in production. Not on a dozen servers, let alone on a couple of hundred to thousands or more hosts, is it? All the above did was give us some insight into where and how. But using wbemtest.exe to dive deeper into WMI notifications/events isn't really handy in real life. Tools will need to be developed to deal with larger deployments. They can be provided by your storage vendor, your VAR, your integrator or by yourself, if you're a large enough shop to make private cloud viable or if you are the cloud provider Smile.

To give you an idea of how this can be done, there is some demo code on MSDN over here and I have compiled it for demo purposes.

We have 4 VMs running on the host. One of them is being hammered by IOMeter while its minimum IOPS has been set to a number it cannot possibly get. We launch StorageQos to monitor our Hyper-V host.

image

Just let it run, and when the notification event fires because the minimum IOPS cannot be delivered by the storage, this monitor will query WMI further to tell us which virtual disk or disks are involved. Cool huh! A good naming convention can help identify the VM, and this tool works remotely against a node, so you can launch one for each node of the cluster. Time to fire up Visual Studio 2013, methinks, or go and chat to a good dev you might know to take this somewhere; some prefer this sort of work to the modern-day version of CRUD apps any day. If you buy monitoring tools you might want them to have this capability.

While this is just demo code, it gives you an idea of how tools and solutions can be developed & built to monitor the Minimum IOPS part of Storage QoS in Windows Server 2012 R2. Hope you found this useful!

Use Cases For Fluid Cache For SAN With DELL Compellent In High Performance Virtualization With Windows 2012 R2


Fluid Cache For SAN

At Dell World 2013 in Austin Texas I spent some time talking to engineers & managers about Fluid Cache For SAN. The demo in the keynote was enough to grab my distinct attention, especially as a Compellent customer.

What is it?

Dell already has Fluid Cache for DAS available in its PowerEdge servers. Now it's time to bring this to their best SAN offering, the Compellent, and make Fluid Cache suitable for shared storage clustering. The way to do that cost-effectively and with high performance is to build on the success of onboard (local to the server) high-performance storage and make that shared through software in a physical shared-nothing replication/sync model. To make this happen they use a 10/40Gbps Ethernet solution leveraging RoCE (RDMA over Converged Ethernet). Yes, the very technology I have been investing time & effort in for SMB Direct and which we leverage for CSV & Live Migration traffic and with SOFS in Windows 2012 R2.

Basically the super low latency and high throughput enable the cache memory to be synced across all nodes in a cluster, and as such each node sees all the cluster memory. For redundancy you will need at least 3 nodes in a cluster. Dell will scale Fluid Cache for SAN to 128 nodes. Windows Server 2012 R2 can handle 64 nodes, which some think is ridiculously high, but then again, Dell aims even higher, so it's not as weird as you think. Some people have really huge computing needs. Just remember that 10 years ago you probably found 16GB of RAM extravagant.

Why this architecture?

Dell uses server-based "shared nothing" flash storage & high-speed, low-latency synchronization to create a logical cluster-wide shared pool of flash memory. This means they achieve stellar low latency, as the flash storage is inside the servers, close to the processors, and as such delivers excellent performance for the workloads. Way better than a "just" flash-only SAN can. For data integrity they commit the data only when it is written to the Express Flash drive(s) of one server and then also to another and verified. This needs to happen very fast, and that's where the RoCE network comes into play. Later, at less speed-critical times, the data is pushed out to the Compellent SAN for storage. If that SAN is a flash-based setup, think about the capabilities this gives you in performance. Likewise, data reads from the SAN that are highly active are pushed from the Compellent SAN and cached (also in multiple copies) on the Express Flash modules. While two servers, each with a copy of the data on Express Flash modules, would suffice, DELL requires at least three. This is just a plain common-sense N+1 redundancy design to have high availability even when a node fails. A cool thing to note is that you can build larger clusters where 3 nodes each have one or more Express Flash modules and additional nodes don't need them, as long as they can read the cache of those 3. So the cost of this can be managed. The drawback is that you don't read & write to a local Express Flash module on those extra nodes. If you want that you'll need to put more $ on the table.

clip_image001

The thing to note here is that the servers/SAN are connected over RoCE/RDMA. Well, this looks familiar. What other technology leverages RDMA? SMB Direct in Windows Server 2012 R2! And where do we use that, amongst other things? Storage IO in Scale-Out File Server, CSV traffic, Live Migration …

The big benefit of this design is that it not only takes your SAN to the next level but also, if DELL does this right, doesn't break any of the good stuff like VSS-aware snapshots with Replay Manager, Automatic Data Tiering, Live Volumes, Live Migration etc. A lot of the high-IOPS/low-latency solutions out there based on fast local flash break a lot of the good stuff and reduce centralized storage management. What if you can have your cookie and eat it too?

Demo Time at Dell World

Dell demonstrated an Oracle database load on an eight-node cluster of PowerEdge R720 servers with Intel Xeon E5 processors, running Linux (no Windows Server 2012 R2 support yet Sad smile). These servers each used 350GB PCI-Express flash cards ("only" PCI-Express 2.0 capable, by the way). This cluster, using a Compellent SAN, managed to get a result of more than 5 million IOPS at 6 millisecond response times, delivering 12,000 tps for 14,000 client connections. This was read only. If they dropped the Fluid Cache for SAN they could "only" achieve 2,000 clients (6 times fewer clients due to 4 times fewer transactions and 99% slower responses). See this movie for more info: http://www.youtube.com/watch?v=uw7UHWWAtig and watch the keynote from Dell World 2013 here

clip_image003

Where would I use this?

Cost will determine use cases, and this is unknown for now. We can only look at what Fluid Cache for DAS costs right now and speculate. I for one hope/bet that DELL won't price itself out of the market (they have a lot of competition from big & small players in a "good enough is good enough" world with a cloud mindset all around). Make it too expensive and we might be happy with "just" 500,000 IOPS at much less cost. It's a fine line. Price it right, support it well and you might win the bulk of sales in the storage wars. Based on the DAS solution we're looking at at least $8,000 per server (the license is $3,500 for DAS => see http://www.theregister.co.uk/2013/03/05/dell_fluid_cache_server_acceleration/, plus the cost of a PCI-Express flash module (> $5,000 => see http://en.community.dell.com/techcenter/extras/chats/w/wiki/4480.3-5-2013-techchat-fluid-cache-for-das.aspx) & a yearly maintenance fee). Then we need to factor in the cost of the RDMA/RoCE-capable NICs & the (dedicated) Force10 switches – 2 for redundancy – that are at least 10Gbps (S4810?) or probably 40Gbps (S6000?) & cabling. So this is not a cheap solution and you won't just "throw it in" on a quiet afternoon to see what it does for you. Not that there will be a DIY "throw it in" kit I think; it's a step above plug and play. If they keep it affordable and do some other things for Windows Server 2012 R2 / Hyper-V they can be the absolute number one SAN vendor for any Microsoft customer. But that's another blog topic.

Cost is indeed something that might make it a show stopper for us; I just can't tell yet. One of the key factors is that, if affordable, it could give the point solutions we now see pop up more and more in storage a run for their money. While cheap and workable in "good enough is good enough" scenarios, those take some of the centrally shared storage advantages away. But if we ever do a stateful VDI project in an environment with high-end physical desktops (500GB or more local storage, SSD disks, 8-core CPU, 8-32GB DDR3, dual or more screens) that run ArcGIS, AutoCAD, Visual Studio, SQL Server, Outlook with 5GB mailboxes, large documents & huge files (images), this might be the enabler we need to make VDI happen & work as desired with our current all-purpose Compellent SANs. If the price is right it could enable VDI in what are now "NO GO" scenarios. And those are plentiful … Another use case I see is a virtualized SQL Server environment on Hyper-V with general-purpose shared storage. We're doing very well, but the day might arrive that we need those IOPS in order to take it even further. Don't laugh, but realize how many IOPS an SSD delivers to a workstation today; that's what your users expect & demand. Want to fail at VDI? Have it outperformed by a 4-year-old physical PC into which someone slapped an SSD.

Could it help in keeping excessive IOPS away from the SAN, making it capable of doing more over a longer lifetime? In other words, can it play a part in the storage QoS issue across servers/clusters/storage systems for non-workload-aware storage solutions?

So I might have some homework to do. For our next SQL Server cluster we'll look at the next generation of servers & start counting our PCI-Express slots. We already consume 4 PCI-Express slots (for 2*FC & 2*Dual Port 10Gbps) in our Hyper-V design. That's another discussion, but those hosts are built purposely for performance under any condition & to be highly redundant. A health check / improvement track by Microsoft for our SQL Server environment has proven this to be an outstanding setup (a nice e-mail for your bosses to get, by the way). I digress; free PCI-Express slots should not be an issue, as we also don't need the FC cards in the Fluid Cache nodes. The storage IO uses the RoCE network, to which the Compellent SAN attaches.

Cost is very important in determining if we’ll ever get to deploy it. The cloud is here, and while that is far from cheap either, it’s a lot easier to sell than internal IT for various reasons. That’s just how the powers that be roll right now & how things are.

What we’ll get in our hands

There was a lot of love between Dell & Samsung at Dell World. Talking to Dell at the server/storage/networking booths I understood that Samsung is going to produce flash modules for this that support PCI-Express 3.0 and the industry-backed NVM Express host interface for solid state drives, which will reduce latency by 1/3 compared to now. It seems they will produce higher-capacity cards than what was used in the demos (800 GB and 1.6 TB). So capacity will increase & latency will drop even more. They leverage the Force10 10Gbps or 40Gbps switches for the RoCE network. As Dell & Mellanox are cooperating heavily (Mellanox Collaborates with Dell to Deliver 10/40GbE Solution for Mainstream Servers and Networking Solutions) my bet is on Mellanox for the cards. Broadcom is not there yet for it to happen in time and Intel has no RoCE cards afaik. They seem to be playing the waiting game before they jump in.

Magic Ball Time, Speculation & Questions.

I'm not a DELL server / storage designer or architect, and those that are don't tell me things so I can plaster them all over the internet, so this really is magic ball time …

image

I'll show my ignorance of what Samsung does under the hood here. When I hear that the next generation of DELL servers can have 6TB of RAM, I can only speculate that with the advent of DDR4 in servers & ever-dropping cost, the path is open to leverage NV-RAM disks for the read/write cache in Fluid Cache for SAN as well, a bit like what IDT did: http://us.generation-nt.com/idt-announces-world-first-pci-express-gen-3-nvme-nv-dram-press-3732872.html. The persistence comes from writing the DRAM content to NAND at shutdown; can we do that fast enough with 1.6 TB sized caches? Can we fit enough of those modules on a card? What would that do for IOPS & latency? Does that even make sense at this moment in time?

What if we could leverage the DDR4 DIMMs in the server itself? This would perhaps cut costs and also save us some valuable PCI-Express 3.0 slots for our 10Gbps-or-better addiction Smile. Sure, there is no persistence then, but the content is distributed redundantly over the cluster anyway? Is that safe enough to make it feasible? What if we need to shut down the cluster? I guess it's not that easy and perhaps we just need to make sure future motherboards have 8 or more PCI-Express 3.0 slots & not worry about that. Or move to 40/100Gbps & have less need for NICs. Yeah, that's what was said of 10Gbps in the early days …

Support for Windows?

While it's not there yet, I have absolutely no doubt that they will bring it to Windows Server 2012 R2 and higher. Windows is a huge on-premises market for native workloads like SQL Server, VDI and Hyper-V. The number of sales opportunities in the Microsoft ecosystem is growing (despite cloud) while others are stagnant or dropping. On top of that, the low cost of Hyper-V leaves money to be spent on Fluid Cache for SAN. As Dell is in business to make money, they will not leave that big chunk of cash on the table.

When can we get our hands on this technology?

Timing-wise that will be early to late Q2 2014, which is my best guesstimate. Interesting times people, interesting times.

How To Measure IOPS Of A Virtual Machine With Resource Metering And MeasureVM


The first time we used the Storage QoS capabilities in Windows Server 2012 R2 it was done in a trial-and-error fashion. We knew that it was the new VM causing the disruption and kind of dropped the Maximum IOPS to a level that was acceptable. We also ran some PerfMon stats & looked at the IOPS on the HBA going to the host. It was all a bit tedious and convoluted. Discussing this with Senthil Rajaram, who's heavily involved with anything storage at Microsoft, he educated me on how to get it done fast & easily.

Fast & easy insight into virtual machine IOPS.

The fast and easy way to get a quick feel for what IOPS a VM is generating has become available via resource metering and Measure-VM. In Windows Server 2012 R2 we have new storage metrics we can use for that; it's not just cool for chargeback or showback Smile.

So what did we get extra in Windows Server 2012 R2? Well, some new storage metrics per virtual disk:

  1. Average Normalized IOPS (Averaged over 20s)
  2. Average latency (Averaged over 20s)
  3. Aggregate Data Written (between start and stop metric command)
  4. Aggregate Data Read (between start and stop metric command)

Well that sounds exactly like what we need!

How to use this when you want to do storage QoS on a virtual machine’s virtual disk or disks

All we need to do is turn on resource metering for the VMs of interest. The command below, run in an elevated PowerShell console, will enable it for all VMs on a host.

image
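The screenshot shows the command; it boils down to something along these lines (a minimal sketch):

# Turn on resource metering for every VM on this host
Get-VM | Enable-VMResourceMetering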

We now run Measure-VM DidierTest01 | fl and see that we have no values yet for the properties. Since we haven't generated any IOPS yet, this is normal.

image

So we now run IOMeter to generate some IOPS.

image

We then run Measure-VM DidierTest01 | fl again and see that the property values have risen.

image

The AggregatedAverageNormalizedIOPS and AggregatedAverageLatency values are averages measured over a period of 20 seconds at the moment of sampling. The AggregatedDiskDataRead and AggregatedDiskDataWritten values are running totals since we started counting (since we ran Enable-VMResourceMetering for that VM), so it's normal that they are lower initially than we expected, as the VM was idle between enabling resource metering and generating some IOPS.
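If you only want those four values, here is a small sketch that pulls just them out of the report (the VM name is the one from my lab):

# Sample the storage metrics for one VM
Measure-VM DidierTest01 |
    Select-Object AggregatedAverageNormalizedIOPS, AggregatedAverageLatency,
                  AggregatedDiskDataRead, AggregatedDiskDataWritten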

All we need to do is keep the VM idle, wait 30 seconds or so, and when we run Measure-VM DidierTest01 | fl again we see the following:

image

The values for AggregatedAverageNormalizedIOPS and AggregatedAverageLatency reflect a 20-second average collected at measuring time and as such drop to zero when the VM goes idle. The values for AggregatedDiskDataRead and AggregatedDiskDataWritten are a running sum; they stay the same until we disable or reset resource metering.

Let's generate some extra IO, after which we wait a while (> 20 seconds) before we run Measure-VM DidierTest01 | fl again to get updated information. We confirm that AggregatedDiskDataRead and AggregatedDiskDataWritten are indeed a running sum and that AggregatedAverageNormalizedIOPS and AggregatedAverageLatency have dropped to 0 again.

image

Anyway, it’s clear to you that the sampled value of AggregatedAverageNormalizedIOPS is what you’re interested in when trying to get a feel for the value you need to set in order to limit a virtual hard disk to an acceptable number of normalized IOPS.

But wait, that’s aggregated! I have SQL Server VMs with 4 virtual hard disks. How do I know what hard disk is generating what IOPS? The docs say the metrics are per virtual hard disk, right?! I need to know if it’s the virtual hard disk with TempDB or the one with the LOGS causing the IO issue.

Well the info is there but it requires a few more lines of PowerShell:

cls
$VMName = "Didiertest01"
# Make sure resource metering is enabled for this VM
Enable-VMResourceMetering -VMName $VMName
# Grab the metering report; HardDiskMetrics holds the per virtual hard disk values
$VMReport = Measure-VM $VMName
$DiskInfo = $VMReport.HardDiskMetrics
Write-Host "IOPS info VM $VMName" -ForegroundColor Green
$count = 1
foreach ($Disk in $DiskInfo)
{
    Write-Host "Virtual hard disk $count information" -ForegroundColor Cyan
    # Identify which virtual hard disk this is (path etc.)
    $Disk.VirtualHardDisk | fl *
    Write-Host "Normalized IOPS for this virtual hard disk" -ForegroundColor Cyan
    $Disk
    $count = $count + 1
}

Resulting in the following output:

image

Hope this helps! Windows Server 2012 R2 makes life as a virtualization admin easier with nice tools like this at our disposal.

Storage Quality of Service (QoS) In Windows Server 2012 R2


In Windows Server 2012 R2 Hyper-V we have the ability to set quality-of-service (QoS) options for a virtual machine at the virtual disk level. There is no QoS (yet) for shared VHDX, so for now it's a setting per individual VM, per virtual hard disk associated with that virtual machine.

What can we do?

  • Limit – Maximum IOPS
  • Reserve – Minimum IOPS threshold alerts
  • Measure – New Storage attributes in VM Metrics

image

Limit

Storage QoS allows you to specify a maximum input/output operations per second (IOPS) value for a virtual hard disk associated with a virtual machine. This puts a limit on what a virtual disk can use. This means that one or more VMs cannot steal away all the IOPS from the others (which might even belong to separate customers). So this is an automatic hard cap.

Reserve

We can also set a minimum IOPS value. This is often referred to as the reserve. This is not a hard minimum. Here's a word of warning: unless you hopelessly overprovision your physical storage capabilities (ruling out disk and controller issues, HBA problems & other risks that impact deliverable IOPS) and dedicate it to a single Hyper-V host with a single VM (ruling out the unknown), you cannot ever guarantee IOPS. It's best effort. It might fail, but then events will alert you that things are going south. We will be notified when the IOPS to a specified virtual hard disk drop below the reserve you specified as needed for its optimal performance. We'll talk more about this in another blog post.

Measure

The virtual machine metrics infrastructure has been extended with storage-related attributes so we can monitor the performance (and charge back or show back). To do this they use what they call "normalized IOPS", where every 8 KB of data is counted as one I/O. This is how the values are measured and set; it's just for that purpose alone (see the short sketch after the list below).

  • One 4K I/O = 1 Normalized I/O
  • One 8K I/O = 1 Normalized I/O
  • One 10K I/O = 2 Normalized I/Os
  • One 16K I/O = 2 Normalized I/Os
  • One 20K I/O = 3 Normalized I/Os
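To make the rounding rule concrete, here is a tiny sketch of how a normalized I/O count can be derived. This is my own illustration of the rule above, not Microsoft's code:

# Every started 8 KB chunk of an I/O counts as one normalized I/O
function Get-NormalizedIOCount {
    param ([int]$IOSizeKB)
    [math]::Ceiling($IOSizeKB / 8)
}
Get-NormalizedIOCount -IOSizeKB 20   # returns 3, matching the list above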

A Little Scenario

We take IOMeter and put it inside 2 virtual machines. These virtual machines reside on a Hyper-V cluster that is leveraging shared storage on a SAN. Let's say you have a VM that requires 45,000 IOPS at times, and as long as it can get that when needed, all is well.

All is well until one day a project goes into production that has not been designed/written with storage IOPS (real needs & effects) in mind. So while they have no issue, the application behaves like a scrounging hog, eating a humongous share of the IOPS the storage can deliver.

Now, you do some investigation (it pays to be buddies with a good developer and to own the entire infrastructure stack) and notice that they don't need those IOPS as they:

  1. Can do more intelligent data retrieval slashing IOPS in half.
  2. Waste 75% of the time in several suboptimal algorithms for sorting & parsing data anyway.
  3. Don't have that many users, and the impact of reducing storage IOPS is non-existent due to (2).

All valid reasons to take the IOPS away … You think: let's ask the PM to deal with this. They might, they might not, and if they do it might take time. But while that remains to be seen, you have a critical solution serving many customers who're losing real money because the drop in IOPS has become an issue with their application. So what's the easiest thing to do? Cap that IOPS hog! Here's the video on how you deal with this on Vimeo: http://vimeo.com/82728497

image

Now let’s enable QoS as in the screenshot below. We give it a best effort 2000 IOPS minimum and a hard maximum of 3000 IOPS.

image

The moment you click "Apply" it kicks in! You can do this live; no service interruption or system downtime is needed.
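You can do the same from PowerShell, also live. A minimal sketch, assuming the disk sits at SCSI controller 0, location 0 of a VM called DemoVM (placeholder names):

# Ask for a best effort 2000 IOPS reserve and cap the disk at 3000 IOPS
Set-VMHardDiskDrive -VMName DemoVM -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 2000 -MaximumIOPS 3000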

image

I actually have a hard cap of 50000 IOPS on the business-critical app as well, just to make sure the other VMs don't get starved. Remember that the minimum is a soft reserve. You get warned, but it can't give what potentially isn't available. After all, as always, it's technology, not magic.

In a next blog post we'll discuss QoS a bit more: what's in play with storage IO management in Hyper-V and what the limitations are, and as such we'll get an idea of what Microsoft should pay attention to in vNext.

Remarks

Well, doing this for a 24-node Hyper-V cluster with 500 VMs could be a bit of a challenge.

I’m In Austin Texas For Dell World 2013


This is the night-time skyline of where I'm at right now: Austin, Texas, USA. The famous "Lone Star State" that until now I only knew from the movies & the media. Austin is an impressive city in an impressive state and, like most US experiences I've had, isn't comparable with anything in my home country, Belgium. That works both ways naturally, and I'm lucky I get to travel a bit and see a small part of the world.

image

Dell World 2013

So why am I here?  Well I’m here to attend DELL World 2013, but you got that already Smile

image

That's nice Didier, but why DELL World? Well, several reasons. For one, I wanted to come and talk to as many product owners & managers, architects & strategists as I can. We're seeing a lot of interest in the new capabilities that Windows Server 2012 (R2) brought to the Microsoft ecosystem. I want to provide all the feedback I can on what I, as a customer, a Microsoft MVP and a technologist, expect from DELL to help us make the most of those. I'm convinced DELL has everything we need but can use some guidance on what to add or enhance. It would be great to get our priorities and those of DELL aligned. From them I expect to hear their plans, ideas and opinions, and to see how those match up. Dell has a cost/value leadership position when it comes to servers right now. They have a great line-up of economy switches that pack a punch (PowerConnect) & some state-of-the-art network gear with Force10. It would be nice to align these with guidance & capabilities to leverage SMB Direct and NVGRE network virtualization. Dell still has the chance to fill some gaps way better than others have. A decent Hyper-V network virtualization gateway that doesn't cost you your two first-born children and can handle dozens to hundreds of virtual networks comes to mind. That, and real-life guidance on SMB Direct with DCB configurations. Storage wise, the MD series, Equalogic & Compellent arrays offer great value for money. But we need to address the needs & interest that SMB 3.0, Storage Spaces and RDMA have awoken, and how Dell is planning to address those. I also think that OEMs need to pick up the pace & change some of their priorities when it comes to providing answers to what their customers in the MSFT ecosystem ask for & need; doing that can put them in a very good position versus their competitors. But I have no illusions about my place in & impact on the universe.

Secondly, I was invited to come. As it turns out, DELL has the same keen interest in talking to people who are in the trenches using their equipment to build solutions that address real-life needs in an economically feasible way. No, this is not "just" marketing. A smart vendor today communicates in many ways with existing & potential customers. Social media is a big part of that, but so is being present offline at conferences and events, as both contributor and sponsor. Feedback on how that works & is received is valuable for both parties as well. They learn what works & what doesn't, and we get the content we need. Now sure, you'll have the corporate social media types that are bound by legal & marketing constraints, but the real value lies in engaging with your customers & partners about their real technological challenges & needs.

Third is the fact that all these trends & capabilities, in both the Microsoft ecosystem and the hardware world, are not happening in isolation. They are happening in a world dominated by cloud computing in all its forms. This impacts everything from the clients and servers to the data centers, as well as the people involved. It's a world in which we need to balance existing and future needs with a mixture of approaches & where no one size fits all, even if the solutions come via commodity products & services. It's a world where the hardware & software giants are entering each other's turf. That's bound to cause some sparks Smile. Datacenter abstraction layer, Software Defined "anything" (storage, networking, …), converged infrastructure. Will they collaborate or fight?

So put these three together and here I am. My agenda is full of meetings, think tanks, panels, briefings and some downtime to chat with colleagues & DELL employees alike.

Why & How?

Some time ago I was asked why I do this and why I'm even capable of doing this. It takes time, money and effort. Am I some kind of hot-shot manager or visionary guru? No, not at all. Believe me, there's nothing "hot" about working on a business-down issue at zero dark thirty. I'm a technologist. I'm where the buck stops. I need things to work. So I deal in realities, not fantasies. I don't sell methods, processes or services, people; I sell results. That's what pays my bills long term. But I do dream and I try to turn those dreams into realities. That's different from a fantasy world where reality is an unwelcome guest. I'm no visionary, I'm no guru. I'm a hard-working IT Pro (hence the blog title and twitter handle) who realizes all too well he's standing on the shoulders not just of giants but of all those people who create the ecosystem in which I work. But there's more. Being a mere technologist only gets you so far. I also give architectural & strategic advice, as that's also needed to make the correct decisions. Solutions don't exist in isolation and need to be made in relation to trends, strategies and needs. That takes insight & vision. Those don't come to you by only working in the data center, at your desktop or in eternal meetings with the same people in the same place. My peers, employers and clients actively support this for the benefit of our business, customers, networks & communities. That's the what, why and who that are giving me the opportunities to learn & grow both personally & professionally. People like Arlindo Alves and many others at MSFT, my fellow MVPs (Aidan Finn, Hans Vredevoort, Carsten Rachfahl, …), Florian Klaffenbach & Peter Tsai. As a person you have to grab those opportunities. If you want to be heard you need to communicate. People listen, and if the discussions and subjects are interesting it becomes a two-way conversation and a great learning experience. As with all networking and community endeavors you need to put in the effort to reap the rewards in the form of information, insights and knowledge you can leverage both for your own needs and for those in your network. That means speaking your mind. Being honest and open, even if at times you're wrong. That's learning. That, to me, is what being in the DELL TechCenter Rock Star program is all about.

Learning, growing, sharing. That, and a sustained effort in your own development, slowly but surely makes you an "expert". An expert who realizes all too well how much he doesn't know & cannot possibly learn it all. Luckily, to help deal with that fact, you can turn to the community.

Upgrading The DELL PowerVault MD3600 Disk Firmware


In this post I'll walk you through a disk firmware upgrade of a PowerVault MD3600F with 7 MD1200 extension bays filled with disks. We have several of these attached to R710 servers that act as Disk2Disk backup media servers with 10Gbps networking, running W2K12R2 of course. At the moment of writing we're at PowerVault MD Series Storage HDD (SSD) Firmware A13 and in our case we'll be updating to Seagate_ST2000MN0001_PS08.

First of all, read the readme.txt and such to make sure you're not missing any special instructions for particular cases that might exist, such as being on controller firmware that is too old and needs to be upgraded first. Do your homework & due diligence. Of course I'm not responsible for whatever it is you do to your environment. The standard disclaimer of my blog applies Smile

I always make sure my controller firmware is up to date before I do disk firmware updates, unless I'm instructed to act otherwise. In general you should not be applying firmware updates and such constantly and all over the place just to keep busy. It is, however, a good practice to keep an eye on releases and see if they fix any risky bugs you might be vulnerable to. You also shouldn't get too far behind on firmware updates, as this can complicate things later in the useful service life of your hardware. Fear, doubt and only doing something when things have broken down are also not a good practice.

So first of all, get the latest version of the PowerVault Modular Disk Storage Manager software if you haven't already. Often you'll need to have at least a certain version before you can even do newer firmware updates. So just get this in order before starting; especially if they tell you to, don't try to outsmart the system. Then download the disk firmware from the DELL support website or via the DELL Storage Community Wiki http://en.community.dell.com/techcenter/storage/w/wiki/4234.dell-powervault-md-downloads.aspx?dgc=SM&cid=257966&lid=4630585 and save it to disk.

Fire up the PowerVault Modular Disk Storage Manager by clicking the icon …

image

Enjoy the splash screen during the launch …

image

Right-click your storage array and select "Manage Storage Array".

image

On the menu click Upgrade and select "Physical Disk Firmware…".

image

A wizard pops up with some friendly advice and you are warned to stop all I/O, which is something you should really do. If you can unmount the file system(s) – take them offline in Windows – you can do this to make sure no I/O kicks off during the upgrade. Depending on your workload this might or might not require a maintenance window. For us, just setting the backups to not run, stopping the agents/services of the backup product on these media servers and taking the storage LUNs offline is enough.

image

When you have done so, click Next. As you can see I have only one disk type in my MD3600F and its MD1200 extension disk bays, and they are all on the same firmware.

image

Click on Add and browse to the location where you unzipped the disk firmware package. There's a bunch of them, but it should be straightforward to select the correct one. If not, perhaps you shouldn't be doing this Smile

image

In this example you see one disk type all being moved to one new firmware version. If you have multiple disk types or brands you can select multiple firmware packages to be transferred. That's quite handy.

image

When done, click Next to continue. Note: don't select more than one version of the firmware for the same type of disk. That will just throw an error message at you telling you that's not your smartest move.

image

You can select all hard disks to deploy the firmware to. I use this, but I tend to do a test run on one disk (a hot spare) first. If that one is successful I choose "Select All" and click Finish.

image

To make sure that you know what you're doing, and probably to keep any lawyers at bay, you must type in yes before you can continue. And yes, making backups is always prudent. Always have at least one way out.

image

You can follow the firmware update process in the right-hand column, which displays the progress (Not attempted, In progress, Successful).

image

This process goes faster than you might think. When they are all done, close the wizard. Congratulations, you have just installed new disk firmware on 96 disks Smile.

image

If, for 100% safety, you have brought your storage offline, you can bring it online again and resume normal operations. I tend to give the hosts a reboot to see that all still works as it should.

Upgrading The DELL MD3600F Controller Firmware Using the Modular Disk Storage Manager


As part of our hardware maintenance we're deploying the updates of SUU 7.3.1 right now. As part of that effort we're also making sure the PowerVault disk bays are getting their updates. We've got a couple of PowerVault MD3600F arrays with 7 MD1200 extension bays filled with disks, attached to R710 PowerEdge servers that act as Disk2Disk backup media servers with 10Gbps networking, running W2K12R2.

In general you should not be applying firmware updates and such constantly and all over the place just to keep busy. It is, however, a good practice to keep an eye on releases and see if they fix any risky bugs you might be vulnerable to. You'll see that sometimes these fix issues you don't want to run into, and those are often marked urgent. You also shouldn't get too far behind on firmware updates, as this can complicate things later in the useful service life of your hardware. First of all, read the readme.txt and such to make sure you're not missing any special instructions for particular cases that might exist. The standard disclaimer of my blog applies Smile: you're responsible for your own actions.

First of all, get the latest version of the PowerVault Modular Disk Storage Manager software if you haven't already. Often you'll need to have at least a certain version before you can even do newer firmware updates. So just get this in order before starting; especially if they tell you to, don't try to outsmart the system. Download the disk firmware from the DELL support website or via the DELL Storage Community Wiki http://en.community.dell.com/techcenter/storage/w/wiki/4234.dell-powervault-md-downloads.aspx?dgc=SM&cid=257966&lid=4630585 and save it to disk.

Here's an example of a readme file you'd better not ignore. If you do, the Modular Disk Storage Manager will not allow you to upgrade anyway.

*** ATTENTION ***
If your PowerVault MD32xx/MD36xx series storage array is currently running a firmware version lower than 07.75.28.60, you cannot directly install the RAID controller firmware version 07.84.47.60 included in this release.
Instead, you must first install the bridge firmware version 07.75.28.60. After the bridge firmware is successfully installed, you can then install RAID controller firmware 07.84.47.60.

Specifically, follow these steps:
1. *** IMPORTANT *** Install Dell PowerVault MD Storage Manager (MDSM) software from the Resource DVD version 4.1.2.29 or higher.
2. Extract the following files from the Bridge_Firmware_07_75_28_60.zip archive:
* Bridge_Firmware_07_75_28_60.dlp
3. Download and activate RAID controller bridge firmware version 07.75.28.60 (contained in the Bridge_Firmware_07_75_28_60.dlp file)
4. Once the bridge firmware installation is successfully completed, wait at least 30 minutes to allow the attached hosts to re-discover the storage array.
5. Download and activate the RAID controller firmware version 07.84.47.60 (contained in the MD3xxx_MD3xxx_Firmware_07_84_47_60.dlp file) together with the NVSRAM configuration file version N26X0-784890-X04 (contained in the MD3xxx_MD3xxx_NVSRAM_N26X0-784890-X04.dlp file) using the Dell PowerVault MD Storage Manager for your corresponding array type.
* MD3200_MD3220_Firmware_07_84_47_60.dlp and MD3200_MD3220_N26X0_784890_004.dlp (6G Non-Dense SAS)
* MD3200i_MD3220i_Firmware_07_84_47_60.dlp and MD3200i_MD3220i_N26X0_784890_004.dlp (1G Non-Dense iSCSI)

* MD3600i_MD3620i_Firmware_07_84_47_60.dlp and MD3600i_MD3620i_N26X0_784890_904.dlp (10G Non-Dense iSCSI)
* MD3600f_MD3620f_Firmware_07_84_47_60.dlp and MD3600f_MD3620f_N26X0_784890_904.dlp (8G Non-Dense FC)
* MD3260_Firmware_07_84_47_60.dlp and MD3260_N26X0_784890_004.dlp (6G Dense SAS)
* MD3260i_Firmware_07_84_47_60.dlp and MD3260i_N26X0_784890_004.dlp (1G Dense iSCSI)
* MD3660i_Firmware_07_84_47_60.dlp and MD3660i_N26X0_784890_904.dlp (10G Dense iSCSI)
* MD3660f_Firmware_07_84_47_60.dlp and MD3660f_N26X0_784890_904.dlp (8G Dense FC)

*** WARNING ***
If you have a single controller PowerVault MD32/MD36 series storage array you must stop all I/O operations before starting the RAID controller firmware upgrade.

Launch the PowerVault Modular Disk Storage Manager, right-click your array and select Upgrade Raid Controller Module Firmware.

image

Wait while the status is being refreshed.

Image 017

You might run into an issue if the event log contains too many entries. In that case you'll be warned that you can't upgrade until you've cleared it, as shown below. Here are some screenshots from a previous upgrade where I ran into this issue.

Image 018

In that case, open up Manage Storage Array for your array.

image

Navigate to the Support tab of your array and go to "View Event Log".

image

It's always wise to have a look to see if you have any issues before upgrading anyway. If you need to save the log for some reason, do so; otherwise just clear it with the Clear All button.

image

The lawyers need assurance that you know what you're doing, so type yes in the warning form and click OK.

image 

You get an entry telling you the log has been cleared and you can close the form.

image

If you launch the Upgrade Raid Controller Module Firmware menu option again you'll now see that you're good to go with the upgrade.

Image 024

Click on Download, Firmware and browse to the firmware file and the NVSRAM file. I prefer to do both in one go, but you don't have to. Just make sure that when both need an upgrade you don't forget one of them when upgrading separately. You can opt to download the firmware and NVSRAM but activate them later. I normally do it all in one go (the default option).

image

They'll warn you not to do silly things, meaning you have to make sure that the firmware and NVSRAM versions are compatible. Read the documentation to make sure you're OK.

image

If you're good to go, click Yes and the update kicks off. First the firmware update runs.

image

When that's done you'll see the new firmware version in the Pending Version column. After that the NVSRAM update kicks in automatically. This takes a bit longer.

Image 013

Once it's done you get the green check indicating your firmware has been upgraded successfully. We chose to activate it in the same run, but that's not visible yet.

image

Now, to see this reflected in the version columns you'll need to close and reopen the Upgrade Raid Controller Module Firmware wizard.
image

Close the wizard. You're done upgrading the RAID controller firmware. The next things to do would be to upgrade the disk firmware and perhaps the EMM (Enclosure Management Module) firmware of the extension disk bays, if applicable.

Great Hardware Support Equals Fast Windows 2012 R2 Implementation


I love it when a plan comes together

We adopted Windows Server 2012 right after it went RTM in August 2012. Today we're already running Windows Server 2012 R2 and are ready to step up the pace. If you are a VAR/ISV that does not have fast & good support for Windows Server 2012 R2, consider this your notice. You can't lead from behind. Get your act together and take an example from Altaro. Small, sure, but good & fast. How do we get our act together so fast? Fast? Yes, but it does take time and effort.

As it turns out, we're pretty well off with the DELL hardware stack. The generation 11 and 12 servers are supported by DELL and are on the Windows Server Catalog for Windows Server 2012 R2.

image

For more information on Dell server inbox driver support see: Windows Server 2012 R2 RTM Inbox Driver Support on Dell PowerEdge Servers. By the way, I can testify that we've run Windows Server 2012 R2 successfully on 9th generation hardware (PowerEdge 1950/2950).

We've been running tests since the Windows Server 2012 R2 Preview on R710/R720 servers and it has been a blast. We've kept them up to date with the latest firmware & drivers via SUU. And for our Intel X520 and Mellanox ConnectX-3 cards we've had rapid support as well.

So what more could you want? Well, support from your storage array vendor, I would think. I'm happy to report that Storage Center 6.4 has been out since October 8th and it supports Windows Server 2012 R2: Dell Compellent Adds MLC SSD Tier – Bests 15K HDDs on Price and Performance. Mind you, on a lazy Sunday afternoon 2 quick e-mails to CoPilot got me the answer that Storage Center 6.3.10 also supports Windows Server 2012 R2. Sweet!

And that's not just DELL saying so; the Dell Compellent Storage Center 6.4 is fully Windows Server 2012 R2 logo certified! That's what you want to see from your vendor: fast & excellent support.

image

Here's the entire DELL hardware line-up with Windows Server 2012 R2 support. Happy upgrading & implementing! If you have Software Assurance you're set to reap the benefits of that investment today!

To all my employers / clients: you see now, told you so. Now, I have a thing in common with Col. Hannibal Smith: I love it when a plan comes together Open-mouthed smile.

image

I know some of you think that all the testing, breaking and wrecking of Preview bits, RTM & GA versions we do looks like chaos. Especially when you visually add the test server & switch configurations. But that's what it looks like to YOU. To the initiated this is a well-executed plan, dropping all assumptions, to establish what works & will hold up. The result is that we're ready today and, by extension, so are you.