Use Cases For Fluid Cache For SAN With DELL Compellent In High Performance Virtualization With Windows 2012 R2


Fluid Cache For SAN

At Dell World 2013 in Austin, Texas, I spent some time talking to engineers & managers about Fluid Cache For SAN. The demo in the keynote was enough to grab my attention, especially as a Compellent customer.

What is it?

Dell already has Fluid Cache for DAS available in its PowerEdge servers. Now it’s time to bring this to their best SAN offering, the Compellent, and make Fluid Cache shared storage suitable for shared storage clustering. The way to do that cost effectively and with high performance is to build on the success of on-board (local to the server) high performance storage and make it shared through software in a physical shared nothing replication/sync model. To make this happen they use a 10/40Gbps Ethernet solution leveraging RoCE (RDMA over Converged Ethernet). Yes, that’s the very technology I have been investing time & effort in for SMB Direct and which we leverage for CSV & Live Migration traffic and with SOFS in Windows Server 2012 R2.

Basically the super low latency and high throughput enable the memory to be synced across all nodes in a cluster and as such each node sees all the cluster memory. For redundancy you will need at least 3 nodes in a cluster. Dell will scale Fluid Cache For SAN to 128 nodes. Windows Server 2012 R2 can handle 64 nodes, which some think is ridiculously high, but then again, Dell aims even higher so it’s not as weird as you think. Some people have really huge computing needs. Just remember that 10 years ago you probably found that 16GB of RAM was extravagant.

Why this architecture?

Dell uses server based “shared nothing flash storage” & high speed, low latency synchronization to create a logical cluster wide shared pool of flash memory. This means they achieve stellar low latency, as the flash storage sits inside the servers, close to the processors, and as such delivers excellent performance for the workloads. Way better than “just” a flash only SAN can. For data integrity the data is only committed once it has been written to the Express Flash drive(s) of one server, then also to another, and verified. This needs to happen very fast and that’s where the RoCE network comes into play. Later, at less speed critical times, the data is pushed out to the Compellent SAN for storage. If that SAN is a flash based setup, think about the capabilities this gives you in performance. Likewise, data reads off the SAN that are highly active are pushed from the Compellent SAN and cached (also in multiple copies) on the Express Flash modules. While two servers, each with a copy of the data on Express Flash modules, would suffice, DELL requires at least three. This is just plain common sense N+1 redundancy design to keep high availability even when a node fails. A cool thing to note is that you can build larger clusters where 3 nodes each have one or more Express Flash modules and the additional nodes don’t need them, as long as they can read the cache of those 3. So the cost of this can be managed. The drawback is that you don’t read & write to a local Express Flash module on those extra nodes. If you want that you’ll need to put more $ on the table.

clip_image001

The thing to note here is that the Servers/SAN are connected over RoCE/RDMA. Well, this looks familiar. What other technology leverages RDMA? SMB Direct in Windows Server 2012 R2! And where do we use this amongst other things? Storage IO in Scale Out File Server, CSV traffic, Live Migration …

The big benefit of this design is that it not only takes your SAN to the next level but also, if DELL does this right, won’t break any of the good stuff like VSS aware snapshots with Replay Manager, Automatic Data Tiering, Live Volumes, Live Migration etc. A lot of the high IOPS/low latency solutions out there based on fast local flash break a lot of the good stuff and reduce centralized storage management. What if you could have your cookie and eat it too?

Demo Time at Dell World

Dell demonstrated an Oracle database load on an eight node cluster of PowerEdge R720 servers with Intel Xeon E5 processors, running Linux (no Windows Server 2012 R2 support yet Sad smile). These servers each used 350GB PCI-Express flash cards (“only” PCI-Express 2.0 capable by the way). This cluster, using a Compellent SAN, managed to get a result of more than 5 million IOPS at 6 millisecond response times, delivering 12,000 tps for 14,000 client connections. This was read only. If they dropped the Fluid Cache for SAN they could “only” serve 2,000 clients (6 times fewer clients due to 4 times fewer transactions and 99% slower responses). See this movie for more info: http://www.youtube.com/watch?v=uw7UHWWAtig and watch the keynote from Dell World 2013 here.

clip_image003

Where would I use this?

Cost will determine use cases and this is unknown for now. We can only look at what Fluid Cache for DAS costs right now and speculate. I for one hope/bet on the fact that DELL won’t price itself out of the market (they have a lot of competition from big & small players in a “good enough is good enough” world with a cloud mindset all around). Make it too expensive and we might be happy with “just” 500,000 IOPS at much less cost. It’s a fine line. Price it right, support it well and you might win the bulk of sales in the storage wars. Based on the DAS solution we’re looking at at least $8,000 per server (the license is $3,500 for DAS => see http://www.theregister.co.uk/2013/03/05/dell_fluid_cache_server_acceleration/) + the cost of a PCI-Express flash module (> $5,000 => see http://en.community.dell.com/techcenter/extras/chats/w/wiki/4480.3-5-2013-techchat-fluid-cache-for-das.aspx) & a yearly maintenance fee. Then we need to factor in the cost of the RDMA/RoCE capable NICs & the (dedicated) Force10 switches (2 for redundancy) that are at least 10Gbps (S4810?) or probably 40Gbps (S6000?) & cabling. So this is not a cheap solution and you won’t just “throw it in” on a quiet afternoon to see what it does for you. Not that there will be a DIY “throw it in” kit I think, it’s a step above plug and play. If they keep it affordable and do some other things for Windows Server 2012 R2 / Hyper-V they can be the absolute number one SAN vendor for any Microsoft customer. But that’s another blog topic.

Cost is indeed something that might make it a show stopper for us. I just can’t tell yet. One of the key factors is that, if affordable, it could give the point solutions we now see pop up more and more in storage a run for their money. While cheap and workable in good enough is good enough scenarios, those take some of the centrally shared storage advantages away. But if we ever do a stateful VDI project in an environment with high end physical desktops (500GB or more local storage, SSD disks, 8 core CPU, 8-32GB DDR3, dual or more screens) that run ArcGIS, AutoCAD, Visual Studio, SQL Server, Outlook with 5GB mailboxes, large documents & huge files (images), this might be the enabler we need to make VDI happen & work as desired with current all-purpose Compellent SANs. If the price is right it could enable VDI in what are now “NO GO” scenarios. And those are plentiful, … Another use case I see is a virtualized SQL Server environment on Hyper-V with general purpose shared storage. We’re doing very well but the day might arrive that we need those IOPS in order to take it even further. Don’t laugh but realize how many IOPS an SSD delivers to a workstation today and that’s what your users expect & demand. Want to fail at VDI? Have it outperformed by a 4 year old physical PC that you slapped an SSD into.

Could it help in keeping excessive IOPS away from the SAN, making it capable of doing more over a longer lifetime? In other words, can it play a part in the Storage QoS issue across servers/clusters/storage systems for non workload aware storage solutions?

So I might have some homework to do. For our next SQL Server cluster we’ll look at the next generation of servers & start counting our PCI-Express slots. We already consume 4 PCI-Express slots (2*FC & 2*Dual Port 10Gbps) in our Hyper-V design. That’s another discussion, but those hosts are built purposely for performance under any condition & to be highly redundant. A health check / improvement track by Microsoft for our SQL Server environment has proven this to be an outstanding setup (a nice e-mail for your bosses to get, by the way). I digress; free PCI-Express slots should not be an issue, as we also won’t need the FC cards in the Fluid Cache nodes. The storage IO uses the RoCE network, to which the Compellent SAN attaches.

Cost is very important in determining if we’ll ever get to deploy it. The cloud is here, and while that is far from cheap either, it’s a lot easier to sell than internal IT for various reasons. That’s just how the powers that be roll right now & how things are.

What we’ll get in our hands

There was a lot of love between Dell & Samsung at Dell World. Talking to Dell at the server/storage/networking booths I understood that Samsung is going to produce flash modules for this that support PCI-Express 3.0 and the industry backed NVM Express host interface for solid state drives, which will reduce latency by a third compared to now. It seems they will also produce higher capacity cards than what was used in the demos (800 GB and 1.6 TB). So capacity will increase & latency will drop even more. They leverage the Force10 10Gbps or 40Gbps switches for the RoCE network. As Dell & Mellanox are cooperating heavily (Mellanox Collaborates with Dell to Deliver 10/40GbE Solution for Mainstream Servers and Networking Solutions) my bet is on Mellanox for the cards. Broadcom is not there yet for it to happen in time and Intel has no RoCE cards afaik. They seem to be playing the waiting game before they jump in.

Magic Ball Time, Speculation & Questions.

I’m not a DELL Server / Storage designer or architect, and those that are don’t tell me things I can plaster all over the internet, so this really is magic ball time …

image

I’ll show my ignorance of what Samsung does under the hood here, but when I hear that the next generation of DELL servers can have 6TB of RAM, I can only speculate that with the advent of DDR4 in servers & ever dropping cost, the path is open to leverage NV-RAM disks for the read/write cache in Fluid Cache for SAN as well, a bit like what IDT did: http://us.generation-nt.com/idt-announces-world-first-pci-express-gen-3-nvme-nv-dram-press-3732872.html. The persistence comes from writing the DRAM content to NAND at shutdown. Can we do that fast enough at 1.6 TB cache sizes? Can we fit enough of those modules on a card? What would that do for IOPS & latency? Does that even make sense at this moment in time?

What if we could leverage the DDR4 DIMMs in the server itself? This would perhaps cut costs and also save us some valuable PCI-Express 3.0 slots for our 10Gbps or better addiction Smile. Sure, there is no persistence then, but the content is distributed redundantly over the cluster anyway. Is that safe enough to make it feasible? What if we need to shut down the cluster? I guess it’s not that easy and perhaps we just need to make sure future motherboards have 8 or more PCI-Express 3.0 slots & not worry about that. Or move to 40/100Gbps & have less need for NICs. Yeah, that’s what was said of 10Gbps in the early days …

Support for Windows?

While it’s not there yet I have absolutely no doubt that they will bring it to Windows Server 2012 R2 and higher. Windows is a huge on-premises market for native workloads like SQL Server, VDI and Hyper-V. The number of sales opportunities in the Microsoft ecosystem is growing (despite cloud) while others are stagnant or dropping. On top of that the low cost of Hyper-V leaves money to be spent on Fluid Cache for SAN. As Dell is in business to make money, they will not leave that big chunk of cash on the table.

When can we get our hands on this technology?

Timing wise, my best guesstimate is early to late Q2 2014. Interesting times people, interesting times.

ODX Speeds Up VHDX Creation Times On Windows Server 2012 (R2)


Some technologies you just need to see in action instead of reading about them. I have posted a video on Vimeo that shows ODX in action on Windows Server 2012 R2 and a DELL Compellent SAN running Storage Center 6.3.10 firmware that supports UNMAP & ODX. Watch the video here or on Vimeo itself for a better experience. It’s a rerun of the demo scripts used in my TechNet Belux Live Meeting of this week.

We demonstrate the amazing speeds at which we can create VHDX files on both a traditional clustered disk and a Cluster Shared Volume. If you have ever tried to create a lot of fixed VHD/VHDX files, especially larger ones, then you really need to check out ODX and its potential. If you have a SAN or are thinking about acquiring one, make sure you get this feature and be sure that it works as advertised.

I hope you enjoy it and that it inspires you to look at where you can leverage this technology in your own environments.

Windows Server 2012 R2 Unmap, ODX On A Dell Compellent SAN Demo


UNMAP & ODX Video

Some things are easier to show using a video so have a look at a video on UNMAP/ODX used with Windows Server 2012 R2 and Compellent SAN:

You can also go directly to the Vimeo page by clicking on the screen shot below.

image

We start out with a 10.5TB large thinly provisioned LUN that has about 203GB of space in use on the SAN. So while the LUN on the SAN is 10.5TB and Windows sees a volume that is 10.5TB, only the data effectively stored consumes storage space on the SAN. That ought to demonstrate the principle of thin provisioning adequately Smile. The nice PowerShell counter is made possible via the Compellent PowerShell Command Set.
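For illustration, here’s a minimal sketch of how such a counter could be put together. The Get-SanSpaceConsumed helper is purely hypothetical; in our case the real number comes from the Compellent PowerShell Command Set, so substitute whatever cmdlet your storage vendor provides:

# Minimal sketch of a polling "space consumed on the SAN" counter.
# Get-SanSpaceConsumed is a hypothetical stand-in for the vendor cmdlet
# (in our setup the value comes from the Compellent PowerShell Command Set).
function Get-SanSpaceConsumed {
    # Replace this body with the real call to your SAN's PowerShell module.
    return 203GB
}

while ($true) {
    $usedGB = [math]::Round((Get-SanSpaceConsumed) / 1GB, 1)
    Write-Host ("{0:HH:mm:ss}  Space consumed on SAN: {1} GB" -f (Get-Date), $usedGB)
    Start-Sleep -Seconds 10
}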

We then copy 42GB worth of ISO files inside a Windows Server 2012 virtual machine from a fixed VHD to a dynamically expanding VHDX. Those are nice speeds. And look at how the size of the VHDX file grows on the CSV volume and how the space used on the SAN is growing. That’s because the LUN is thinly provisioned.

Secondly we copy the same ISO files to a fixed size VHDX. Again, some really nice speeds. As the VHDX is fixed in size you do not see it grow. When looking at the little SAN counter however we do see that the thinly provisioned LUN is using more storage capacity.

Once that is done we see that the total space consumed on the SAN for that CSV LUN has risen to 284GB. We then delete the data from the dynamically expanding VHDX and are about to run the Optimize-Volume command when we notice that the SAN has already reclaimed the space. So we don’t run the optimize command. Keep that in mind. By the way, this process is done as part of standard maintenance (defrag) and an NTFS check pointing mechanism that runs every 5 minutes and sends down the info from the virtual layer to the physical layer to the SAN. During demos it’s kind of boring to sit around and wait for it to happen Smile. Just remember that in real life it’s a zero touch feature, you don’t need to babysit it.

We then also delete the ISO files from the fixed VHDX and run Optimize-Volume G –ReTrim, and as a result you see the space reclaimed on the SAN. As this is a fixed disk the size of the VHDX will not change. But what about the dynamically expanding VHDX? Well, you need to shut it down for that. But hey, nothing happens. So we fire it up again, do run Optimize-Volume H –ReTrim before shutting it down again, and voila.
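For reference, this is roughly what those steps look like in PowerShell inside the guest; the drive letters G and H match the demo, the rest is a minimal sketch:

# Send TRIM/UNMAP for the volume on the fixed VHDX (G:) so the thinly
# provisioned SAN LUN can reclaim the freed blocks.
Optimize-Volume -DriveLetter G -ReTrim -Verbose

# Same for the volume on the dynamically expanding VHDX (H:). Run this while
# the VM is still up, then shut the VM down so the dynamic VHDX itself can shrink.
Optimize-Volume -DriveLetter H -ReTrim -Verbose
Stop-Computer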

So what do you need for this?

Rest assured. You don’t need the most high end, most expensive, complex and proprietary SAN hardware to get this done. What you need is good software (firmware) on quality commodity hardware and you’re golden. If any SAN vendor wants to charge you a license fee for ODX/UNMAP just throw them out. If they don’t even offer it, walk away from them and just use Storage Spaces. There are better alternatives than overpriced SANs lacking features.

I’ve found that systems like EqualLogic & Compellent are in the sweet spot for 90% of their markets based on price versus capabilities and features. Let’s look at a Compellent for example. For all practical intents and purposes this SAN runs on commodity hardware. It’s servers & disk bays. SAS to the storage & FC, iSCSI or SMB/NFS for access. With capable hardware the magic is in the software. Make no mistake about it, commodity hardware, when done right, is very, very capable. You don’t need special proprietary hardware & processors except for some specialized niche markets. And if you think you do, what about buying commodity hardware anyway at 50% of the cost and replacing it with the latest and greatest commodity hardware after 4 years, still coming out on top cost wise whilst beating the crap out of that now 4 year old ASIC and reaping the benefits of the new capabilities the technology evolution offers? Things move fast and you can’t predict the future anyway.

Fixing A Little Quirk In Dell Compellent Replay Manager


If you’re running a DELL Compellent SAN you’re probably familiar with Replay Manager. It’s Compellent’s solution to take VSS based (and as such application consistent) snapshots.

image

When you’re running Replay Manager you might run into the following issue when trying to access a host.

image

Every time you access a host for the first time after opening Replay Manager, you’ll be prompted for your password, even if you selected Remember my password. You don’t need to retype it, so that’s fine, but you do need to click through it.

In the system log you’ll see the below error logged.

image

Log Name:      System
Source:        Microsoft-Windows-Security-Kerberos
Date:          7/08/2013 9:55:43
Event ID:      4
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      replayserver.test.lab
Description:
The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server replaymanagerservice. The target name used was HTTP/myhost.test.lab. This indicates that the target server failed to decrypt the ticket provided by the client. This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using. Ensure that the target SPN is only registered on the account used by the server. This error can also happen if the target service account password is different than what is configured on the Kerberos Key Distribution Center for that target service. Ensure that the service on the server and the KDC are both configured to use the same password. If the server name is not fully qualified, and the target domain (TEST.LAB) is different from the client domain (TEST.LAB), check if there are identically named server accounts in these two domains, or use the fully-qualified name to identify the server.

Well, this is a rather well known issue in the Microsoft world. Take a look here: IIS 7+ Kerberos authentication failure: KRB_AP_ERR_MODIFIED. Browse to the possible causes & solutions. You’ll find this situation right in there. So what we do is execute the following command to register the correct SPN for the host or hosts on the Replay Manager service account:

SetSPN -a HTTP/myhost.test.lab TEST\replaymanagerservice

Do note that you need to run this from an elevated command prompt using an account with sufficient permissions in AD. You’ll no longer have to click through the username/password prompt and you get rid of that error.

You can verify whether the SPN for your hosts exists on your Replay Manager Service account by running:

SetSPN -l  TEST\replaymanagerservice
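If you have a bunch of hosts to cover, a small PowerShell wrapper around setspn can save some typing. This is just a sketch; the host names and the TEST\replaymanagerservice account are the example values from above, so adjust them to your environment:

# Register the HTTP SPN for a list of hosts on the Replay Manager service
# account, skipping any SPN that is already present.
$serviceAccount = 'TEST\replaymanagerservice'                 # example account
$replayHosts    = 'myhost.test.lab', 'myotherhost.test.lab'   # example hosts

$existingSpns = setspn -l $serviceAccount

foreach ($h in $replayHosts) {
    $spn = "HTTP/$h"
    if ($existingSpns -match [regex]::Escape($spn)) {
        Write-Host "$spn already registered"
    } else {
        setspn -a $spn $serviceAccount
    }
}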

If this is the biggest issue you’ll ever have with a hardware snapshot service & hardware provider you know you’ve got a good solution.

Upgrading Your DELL Compellent Storage Center Firmware (Part 2)


This is Part 2 of this blog. You’ll find Part 1 over here.

In part 1 we prepared our Compellent SAN for the installation of Storage Center 6.3.10, which has gone public. As said, 6.3.10 brings interesting features like ODX and UNMAP to us Windows Server 2012 Hyper-V users. It also introduces some very nice improvements to synchronous replication and Live Volumes. But here we’ll just do the actual upgrade; the preparations & health check have been done in part 1 so we can get started right away.

Log in to your Compellent system and navigate to the Storage Management menu. Click on “System”, select Update and finally click on “Install Update”.  It’s already there as we downloaded it in Part 1. Click on “Install Now” to kick it all off.

image

Click on Install now to launch the upgrade.

image

After initialization you could walk away for 10 minutes, but you’ll probably want to keep an eye on the progress of the process.

image

So go have a look at your storage center. Look at the Alert Monitor for example and notice that the “System is undergoing maintenance”.

image

When the controller holding the VIP address of the SAN reboots, the VIP becomes unavailable. After a while you can log in again to the other controller via the VIP; if you can’t wait a few seconds, just use the IP address of the active controller. That will do.

image

When you log in again you’ll see the evidence of an ongoing SAN firmware upgrade. Nothing to panic about.

image

This is also evident in Alert Monitor. CoPilot knows you’re doing the upgrade, so no unexpected calls to make sure your system is OK will come in. They’re there every step of the way. The cool thing is that this is the very first SAN we ever owned for which we don’t need engineers on site or complex and expensive procedures to do all this. It’s all just part of the outstanding customer service Compellent & DELL deliver.

image

You can also take a peek at your Enterprise Manager software to see paths going down and so on. These are the artifacts of sequential controller failovers during an upgrade. Mind you, you’re not suffering downtime in most cases.

image

Just be patient and keep an eye on the process. When you log in again after the firmware upgrade and your system is up and running again, you’ll be asked to rebalance the ports & IO load between the controllers on the system. You do, so click yes.

image

image

When done you’ll return to the Storage Center interface. Navigate to “Help” and click on About Compellent Storage Center.

image

You can see that both controllers are running 6.3.10.

image

You’re rocking the new firmware. As you kept an eye on your hosts you should know these are fine. Send an e-mail to CoPilot support and they’ll run a complete health check on your system to make sure you’re good to go. Now it’s time to start leveraging the new capabilities you just got.

Upgrading Your DELL Compellent Storage Center Firmware (Part 1)


This is Part 1 of this blog. You’ll find Part 2 over here.

Well, the Compellent firmware 6.3.10 has gone public and it’s time to put it on our systems. 6.3 brings interesting features like ODX and UNMAP to us Windows Server 2012 Hyper-V users. It also introduces some very nice improvements to synchronous replication and Live Volumes. But those are matters for other blog posts. Here we’ll focus on the upgrade.

In part 1 we’ll look at how we prepare the Compellent to be ready for the upgrade. We make sure on our side we have no outstanding issues on the SAN. Then we upgrade Enterprise Manager and Replay Manager to the latest versions. At the time of writing that is EM 6.3.5.7 and RM 7.0.1.1. We could do this prior to the firmware upgrade because 6.2.2 is also supported by these versions. Once we established all was working well with this software we contacted CoPilot to check our systems (they check the system’s health and applicability as well). When all is in order they release the firmware to us. Then it’s time to run a check for updates on the systems.

Log in to your Compellent system and navigate to the Storage Management menu. Click on “System”, select Update and finally click on “Check for Update”.
image

The tool will check for updates.

image

If no new firmware has been released to your systems you’ll see this.

image

If new firmware has been released you see this in the update status.

image

This also shows in the Storage Center GUI

image

Downloading the update.

image

The download takes a while. Once it’s done you’ll see that the update is ready to install. Note that this update is non-service-affecting in OUR case (green arrow). We won’t install it yet however. We’ll look at the details & validate the components. Due diligence pays off Winking smile

image

Click on details to get some more information about what’s in the update.

image

You can see that our disk and enclosure firmware is already up to date from a previous update. The ones related to 6.3.10 are mandatory (required, not optional). When done, hit Return.

We now select “Validate Components” to make sure we’re good to go and won’t get any surprises. Trust but verify is one of our mantras.

image

image

So now we are ready to run the update.  We’ll leave that for Part 2.

Some ODX Fun With Windows Server 2012 R2 And A Dell Compellent SAN


I’m playing with and examining some of the ODX capabilities of our SANs (DELL Compellent) at the moment. It all seems pretty impressive in the demos. But how does it behave in real life on our gear? How impressive is ODX? Well, pretty darn impressive actually. And as with all great power, it needs to be wielded carefully, with insight and thought.

Let’s create some fixed virtual disks: 10 * 50GB VHDX and 10 * 475GB VHDX. We run a quick and simple PowerShell script:

image
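The screenshot above shows the script we used; as a reference, a minimal sketch of what such a script could look like (the CSV path and file names are assumptions, New-VHD comes with the Hyper-V PowerShell module):

# Create 10 * 50GB and 10 * 475GB fixed VHDX files on a CSV and time the run.
# The path below is an example; point it at your own (clustered) volume.
$path = 'C:\ClusterStorage\Volume1\ODXDemo'

$duration = Measure-Command {
    1..10 | ForEach-Object {
        New-VHD -Path "$path\Fixed50GB_$_.vhdx"  -SizeBytes 50GB  -Fixed | Out-Null
        New-VHD -Path "$path\Fixed475GB_$_.vhdx" -SizeBytes 475GB -Fixed | Out-Null
    }
}

"Created 20 fixed VHDX files in $($duration.TotalSeconds) seconds"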

You read this correctly, it’s 41.5088855 seconds. Let’s round up to 42 seconds. That’s 20 fixed VHDX files, 10 of 50GB and 10 of 475GB, in 42 seconds. That’s a total of 5.12TB of VHDX files.

image

Compared to creating a single 5TB VHDX file this isn’t too shabby, as that gets done in 26 seconds!

You can only dream of the kind of scenarios this kind of power enables. Woooot!!!

MVP Carsten Rachfahl Visits & Interviews Me On Networking & Storage in Windows Server 2012


Last month Carsten (MVP – Virtual Machine) & Kerstin Rachfahl (MVP – Office 365) visited me in my home town. Apart from a short visit to the historic center & a sushi dinner amongst friends we also did an interview where we discussed our ongoing Windows Server 2012 Hyper-V activities. We’re trying to leverage as much of the product as we can to get the best TCO & ROI, and as early adopters we’ve been reaping the benefits from the day the RTM bits were available to us. So far that has been delivering great results. Funny to hear me mention the Fast Track designs, as a week later we saw version 3 of those at MMS 2013. The most interesting thing to me about those was the fact that the small & medium sizes focus on Cluster in a Box and Storage Spaces!

While we were having fun talking about the above we also enjoyed some of the most beautiful landmarks of the City of Ghent as a backdrop for the interview. It was filmed in a meeting room at AGIV, to whom I provide infrastructure services with a great team of colleagues. Just click the picture to view the video.

Videointerview_with_Didier_Van_Hoye_Storage_Networking_and_other_Stuff-Thumb2

You can also enjoy the video on Carsten’s blog http://www.hyper-v-server.de/videos/interview-mit-didier-van-hoye-ber-seinen-storage-netwerk-und-mehr/ All I need to do now is to arrange for Carsten to physically touch the Compellent storage I think.

Trouble Shooting Windows Server 2012 host based CommVault Backups with DELL Compellent hardware VSS provider of Hyper-V guests: ‘Microsoft Hyper-V VSS Writer’ State: [5] Waiting for completion


We have been running CommVault Simpana 9.0 R2 SP7 in combination with the DELL Compellent Hardware VSS provider to do host based backups of the virtual machines on our Windows Server 2012 Hyper-V cluster hosts with great success and speed.

We’ve run into two issues so far. One, which I blogged about in DELL Compellent Hardware VSS Provider & Commvault on Windows Server 2012 Hyper-V nodes – Volume Shadow Copy Service error: Unexpected error querying for the IVssWriterCallback interface. hr = 0x80070005, Access is denied, was due to some missing permissions for the domain account we configured the Compellent Replay Manager Service to run with. The solution for that issue can be found in that same blog post.

The other one was that sometimes during the backup of a Hyper-V host we got an error from CommVault that put the job in a “pending” status, after which it kept trying and failing. The error is:

Error Code: [91:9], Description: Volume Shadow Copy Service (VSS) error. VSS service or writers may be in a bad state. Please check vsbkp.log and Windows Event Viewer for VSS related messages. Or run vssadmin list writers from command prompt to check state of the VSS writers.

clip_image001

When we look at the Compellent controller we see the following things happen:

  • The snapshots get made
  • They are mounted briefly and then dismounted.
  • They are deleted

The result at the CommVault end is that the job goes into a pending state with the above error. When we look at the state of the Microsoft Hyper-V VSS Writer by running “vssadmin list writers” …

image

… from an elevated command prompt we see:

Writer name: ‘Microsoft Hyper-V VSS Writer’
…Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
…Writer Instance Id: {2fa6f9ba-b613-4740-9bf3-e01eb4320a01}
…State: [5] Waiting for completion
…Last error: Retryable error
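If you need to check this repeatedly, a quick way to pull out just that writer’s block from an elevated PowerShell prompt (just a convenience sketch):

# Show the 'Microsoft Hyper-V VSS Writer' entry with its state and last error.
vssadmin list writers | Select-String -Pattern 'Hyper-V VSS Writer' -Context 0,4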

Note at this stage:

  1. Resuming the job doesn’t help (it actually keeps trying by itself but no joy).
  2. Killing the job and restarting brings no joy. On top of that our friendly error “Volume Shadow Copy Service error: Unexpected error querying for the IVssWriterCallback interface. hr = 0x80070005, Access is denied.“ is back, but this time related to the error state of the ‘Microsoft Hyper-V VSS Writer’. The error has now changed a little and has become:

clip_image002

 

 

Writer name: ‘Microsoft Hyper-V VSS Writer’
…Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
…Writer Instance Id: {2fa6f9ba-b613-4740-9bf3-e01eb4320a01}
…State: [5] Waiting for completion
…Last error: Unexpected error

To get rid of this one we can restart the host or, less drastically, restart the Hyper-V Virtual Machine Management Service (VMMS.exe), which will do the trick as well. Before you do this, drain the node by pausing it, then resume it afterwards with the option to fail back the roles. Windows Server 2012 makes it a breeze to do this without service interruption Smile
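If you prefer to do this from PowerShell rather than the GUI, a minimal sketch looks like this (the node name is an example, adjust it to your cluster):

# Drain the roles off the node before touching VMMS (node name is an example).
Suspend-ClusterNode -Name 'HV-NODE1' -Drain

# Restart the Hyper-V Virtual Machine Management Service to clear the writer state.
Restart-Service -Name vmms

# Resume the node and fail the roles back to it.
Resume-ClusterNode -Name 'HV-NODE1' -Failback Immediate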

image

clip_image003

image

The Cause: Almost or completely full partitions inside the virtual machines

Looking for solutions when CommVault is involved can be tedious, as their consultancy driven sales model isn’t focused on making information widely available. Troubleshooting VSS issues can also be considered a form of black art at times. Since this is Windows Server 2012 RTM and the date is September 20th 2012 at the moment of writing, there are not yet any hotfixes related to host level backups of virtual machines and such. CommVault Simpana 9.0 R2 SP7 is also fully patched.

This, combined with the fact that we did not see anything like this during testing (and we did a fair amount), makes us look at the guests. That’s the big difference on a large production cluster: all those unique guests with their own history. We also know from the past years with VSS snapshots in Windows 2008 (R2) that these tend to fail due to issues in the guests. Take a peek at Troubleshoot VSS issues that occur with Windows Server Backup (WBADMIN) in Windows Server 2008 and Windows Server 2008 R2 just for starters. As an example, we had already seen one guest (a dev/test server) that had 5 users logged in doing all kinds of reconfigurations and installs go into saved state during a backup, so it could be due to something rotten in certain guests. There is very much to consider when doing these kinds of backups.

By comparing successful & failed backups it really looks as if it is related to certain virtual machines. A lot of issues are caused by the VSS service not running or not being able to make snapshots because of a lack of space, so perhaps this was the case here as well?

We poked around a bit. First, let’s see what we can find in the Hyper-V specific logs like the Microsoft-Windows-Hyper-V-VMMS-Admin event log. Ah, lots of errors relating to a number of guests!

image

Log Name:      Microsoft-Windows-Hyper-V-VMMS-Admin
Source:        Microsoft-Windows-Hyper-V-VMMS
Date:          19/09/2012 22:14:37
Event ID:      10102
Task Category: None
Level:         Error
Keywords:     
User:          SYSTEM
Computer:      undisclosed server
Description:
Failed to create the volume shadow copy inside of virtual machine ‘undisclosedserver’. (Virtual machine ID 84521EG0G-8B7A-54ED-2F24-392A1761ED11)
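If you want to pull all of these out of the log at once instead of scrolling through Event Viewer, a quick sketch using Get-WinEvent:

# List all "Failed to create the volume shadow copy inside of virtual machine"
# errors (event ID 10102) from the Hyper-V VMMS admin log to see which guests are affected.
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Hyper-V-VMMS-Admin'
    Id      = 10102
} | Select-Object TimeCreated, Message | Format-List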

Well people, that is called a clue Winking smile. So we did some Live Migrations to isolate suspect VMs to a single node, ran backups, saw them fail, did the same with a new and clean VM and it all worked. And indeed … looking at the guests involved when the CommVault backup fails, we see that the VSS service is running and healthy, but we do see all kinds of badness related to disk space:

  • Large SQL Server backup files put aside on the system partition or other disks
  • Application & service pack installers left behind,
  • Log and tempdb volumes running out of space.
  • Application Logs running out of control

That latter one left 0MB of disk space on the system (the TFS Test Controller shitting itself), but we managed to clear just enough to get to just over 1GB of free space, which was enough to make the backup succeed.

clip_image001[8]

image

Servers, virtual or physical ones, should be locked down to prevent such abuse. I know, I know. Did I already tell you I do not reside in a perfect world? We cannot protect against dev and test server admins who act without much care on their servers. We’ll just keep hammering at it to raise their awareness I guess. End users and production servers we monitor well enough to proactively avoid issues. With dev & test servers we don’t, or the response team would have a day’s work reacting to all the alerts that daily dev & test usage on those servers generates.

The fix

  • Clear at least 1GB or a bit more inside each partition in the guests running on the host that has a failing backup. I prefer to have at least a couple of GB free (10% to 15% => give yourself some headroom, people). A quick way to spot the starved volumes is sketched after this list.
  • Then you can resume the backup job manually or let CommVault do that for you if it’s still in a pending state.
  • If you’ve killed the job, make sure you restore the Microsoft Hyper-V VSS Writer to a healthy state as described above. Thanks to Live Migration this can be achieved without any downtime.
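A minimal sketch of such a free space check, run inside a guest (assuming the guest runs Windows Server 2012 or later; the 1GB threshold matches the rule of thumb above):

# List volumes with less than 1GB free so you know which partitions to clean
# up before retrying the backup.
Get-Volume |
    Where-Object { $_.DriveLetter -and $_.SizeRemaining -lt 1GB } |
    Select-Object DriveLetter, FileSystemLabel,
        @{ Name = 'FreeGB'; Expression = { [math]::Round($_.SizeRemaining / 1GB, 2) } }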

Conclusion

There is experimenting, testing, production testing, production and finally real-life environments where not all is done as it should be. Yes, really, the world isn’t perfect. Managers sometimes think it’s click, click, Next, click and voila, we’ve got a complex multisite system running. Well, it isn’t like that and you need some time and skills to make it all work. Yes, even in today’s “cheap, fast, easy to run your business from your smartphone” ecosystem of the private, hybrid and public cloud, where all is bliss and world peace reigns.

The DELL Compellent Hardware VSS provider & Replay Manager service handle all this without missing a beat, which is very comforting, as previous experiences with hardware VSS providers from other vendors make us think those would probably have blown up by now.

DELL Compellent Hardware VSS Provider & Commvault on Windows Server 2012 Hyper-V nodes – Volume Shadow Copy Service error: Unexpected error querying for the IVssWriterCallback interface. hr = 0x80070005, Access is denied.


As you know by now I’ve been building a high throughput, large volume Disk2Disk backup solution and that has been rather successful. At optimal speed we get 2TB/hour per backup media server. As we currently have two, we can get to 4TB/hour at maximum throughput Smile. Currently that is, we’ll see if more is possible. The solution, which I’ll blog about later, is based on the Dell Compellent hardware VSS provider (Dell Compellent Replay Manager 6.2.0.9), Windows Server 2012, CommVault 9.0 R2 and PowerVault storage as the target disks, and as it works now it is saving us many hundreds of thousands of Euros compared to dedicated D2D backup appliances or solutions.

The entire process has been a fairly smooth one, which was a relief as decent hardware VSS providers are not easy to come by, based on our experience and that of many colleagues. So, once again the DELL Compellent choice is turning out to be a good one. We did have to fix one small issue along the way.

Whilst using the DELL Compellent Hardware VSS Provider with CommVault Simpana 9.0 R2 to do host level backups of the virtual machines on Windows Server 2012 Hyper-V cluster nodes, we ran into a small issue. The backups ran fine but we saw this error being logged:

Volume Shadow Copy Service error: Unexpected error querying for the IVssWriterCallback interface. hr = 0x80070005, Access is denied. This is often caused by incorrect security settings in either the writer or requestor process.

Operation:
   Gathering Writer Data

Context:
   Writer Class Id: {e8132975-6f93-4464-a53e-1050253ae220}
   Writer Name: System Writer
   Writer Instance ID: {3f4965d8-10ac-411b-bf6d-6a607f237775}

image

The description found here was not helpful in resolving this. This error is found all over the internet with just about any backup product. The possible causes and solutions that are suggested are as follows:

Change the Language for non-Unicode programs to English (United States)

This is not a solution in our case, which would have surprised me if it had been.

image

Eliminate the error condition by adding the access permissions for the domain account that the Compellent Replay Manager Service for Microsoft Servers (VSS) runs under to the COM Security of the affected server

As the domain account used for this service is a member of the local administrators group, which already has these permissions, this would also have surprised me, but we tested it anyway. It turns out this is not the cause either.

image

But for your information this is how it’s done:

You can eliminate the error condition by adding the access permissions for the Network Service account to the COM Security of the affected server. To add the access permissions for the Network Service account, do the following:

  • From the Start Menu, select Run
    The Run dialog opens. In the Open field, input dcomcnfg and click OK.
    The Component Services dialog opens.
  • Expand Component Services, Computers, and My Computer.
    Right-click My Computer and click Properties on the pop-up menu.
  • The My Computer Properties dialog opens.
  • Click the COM Security tab.
    Under Access Permission click Edit Default.
  • The Access Permissions dialog opens.
  • From the Access Permissions dialog, add the DOMAIN\compellentreplayservice account with Local Access & Remote Access allowed (cluster => not just the local host).
  • Close all open dialogs.
  • Restart the computer.

The real cause & solution

According to Microsoft support this issue occurs when using a 3rd party backup program that utilizes Windows VSS (Volume Shadow Copy Service) and has its own requestor. This is indeed the case here. “It looks like the requestor (the backup application) does not allow system writer to call back into their process and hence generates the error in the application log.” This sounds very plausible Winking smile

The fix:

  1. The following example grants access to the "DOMAIN\compellentreplayservice" account.
  2. Click on Start, type regedit in the search box.
  3. In the Registry Editor window, navigate to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VSS\VssAccessControl
  4. Add a DWORD value with the name "DOMAIN\compellentreplayservice" and set the value to “1” (a PowerShell equivalent is sketched below).
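If you’d rather script it than click through regedit, a minimal PowerShell sketch doing the same thing (the account name is the example from above, adjust it to your own service account):

# Grant the Replay Manager service account access to the VSS writers by
# adding it under the VssAccessControl key (run from an elevated prompt).
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\VSS\VssAccessControl'

New-ItemProperty -Path $key `
                 -Name 'DOMAIN\compellentreplayservice' `
                 -PropertyType DWord `
                 -Value 1 `
                 -Force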

image

image

image

And voila: the error has gone when running backups Open-mouthed smile

image

I assume no responsibility if you do this in your environment, but I can say that all this works perfectly in our CommVault Simpana 9.0 R2 setup whilst backing up the virtual machines on our Hyper-V cluster nodes at the host level using the DELL Compellent hardware VSS provider. And yes, this is Windows Server 2012; careful testing and planning help when being an early adopter.