This is going to be a busy night …
Excellent news. Windows Server 2012 Release Candidate is available for download at http://technet.microsoft.com/en-us/evalcenter/hh670538.aspx?wt.mc_id=TEC_108_1_3
I’m downloading it as I write this blog post.
I did not see it yet in the subscriber downloads for TechNet or MSDN, but I’m sure it will show up there soon as well.
Start your lab servers, we’re in for some serious upgrading and testing the next couple of days. I’ve been looking forward to this.
Update 2012/05/31 21:00: The downloads are now available for TechNet and MSDN subscribers as well.
In the grand effort to make Windows Server 2012 scale above and beyond the call of duty, Microsoft has been addressing (potential) bottlenecks all over the stack: CPU, NUMA, memory, storage and networking.
Data Center TCP (DCTCP) is one of the many improvements by which Microsoft aims to deliver much better network throughput with affordable switches. Switches that can manage large amounts of network traffic tend to have large buffers, and those push up the price a lot. The idea is that a large buffer creates the ability to deal with bursts and prevents congestion. Call it over-provisioning if you want. While this helps, it is far from ideal. Let’s call it a blunt instrument.
To mitigate this issue, Windows Server 2012 is now capable of dealing with network congestion in a more intelligent way. It does so by reacting to the degree, and not merely the presence, of congestion using DCTCP. The goals are:
- Achieve low latency, high burst tolerance and high throughput with small-buffer switches (read: cheaper).
- It requires Explicit Congestion Notification (ECN, RFC 3168) capable switches. You’d think this should be no showstopper, as it’s probably pretty common on most data center / rack switches, but that doesn’t seem to be the case for the really cheap ones where this would shine …
- The algorithm enables itself when it makes sense to do so (low round-trip times, i.e. it will be used inside the data center where it makes sense, not over a worldwide WAN or the internet).
To see if it is applied, run Get-NetTcpConnection:
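For instance (a sketch, assuming the Windows Server 2012 networking cmdlets; output fields may differ per build), you can list the TCP setting template applied to each connection. The Datacenter template is the one that uses the DCTCP congestion provider:

```powershell
# List connections and the TCP setting template applied to each.
# AppliedSetting = "Datacenter" indicates the DCTCP-enabled template
# (used for low round-trip-time, in-data-center traffic).
Get-NetTCPConnection |
    Select-Object LocalAddress, RemoteAddress, AppliedSetting |
    Format-Table -AutoSize

# Confirm which congestion provider the Datacenter template uses.
Get-NetTCPSetting -SettingName Datacenter |
    Select-Object SettingName, CongestionProvider
```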
As you can see, this is applied here on a DELL PC8024F switch for the CSV and LM networks. The internet-connected NIC (the connection of the RDP session) shows:
Yup, it’s East-West traffic only, not North-South where it makes no sense.
When I was prepping a slide deck for a presentation on what this is, does and means, I compared it to green wave traffic light control. The space between consecutive traffic lights is the buffer, and the red lights are stops the traffic has to deal with due to congestion. This leaves room for a lot of improvement, and the way to achieve it is traffic control that intelligently manages the incoming flow so that at every hop there is a green light and the buffer isn’t saturated.
Windows Server 2012 in combination with Explicit Congestion Notification (ECN) provides the intelligent traffic control to realize the green wave.
The result is very smooth, low-latency traffic with high burst tolerance and high throughput with cheaper small-buffer switches. To see the difference, look at the picture below (from Microsoft BUILD) of what this achieves. Pretty impressive. Here’s a paper by Microsoft Research on the subject.
I’m very excited about the TRIM/UNMAP support in Windows Server 2012 & Hyper-V with the VHDX file. Thin provisioning is a great technology, and there is more to it than just provisioning ahead of time. It also provides a way to make sure storage allocation stays thin by reclaiming freed-up space from a LUN. Until now this required either the use of sdelete on Windows or dd for the Linux crowd, or a disk defrag product like Raxco’s PerfectDisk. It’s interesting to note here that sdelete relies on the defrag APIs in Windows, so you can see how a defragmentation tool can pull off the same stunt. Take a look at Zero-fill Free Space and Thin-Provisioned Disks & Thin-Provisioned Environments for more information on this. Sometimes an agent is provided by the SAN vendor that takes care of this for you (Compellent), and I think NetApp even has plans to support it via a future ONTAP PowerShell toolkit for NTFS partitions inside the VHD (https://communities.netapp.com/community/netapp-blogs/msenviro/blog/2011/09/22/getting-ready-for-windows-server-8-part-i). Some cluster file system vendors like Veritas (Symantec) also offer this functionality.
A common “issue” people have with sdelete or the like is that it is rather slow, rather resource intensive, and it’s not automated unless you have scheduled tasks running on all your hosts to take care of it. sdelete has another issue as well: it can’t handle mount points. One trick is to use the now somewhat ancient SUBST command to assign a drive letter to the path of the mount point so you can use sdelete. Another trick is to script it yourself. Mind you, you can’t just create a big file in a script and delete it. That’s the same as deleting “normal” data and won’t do a thing for thin provisioning space reclamation. You really have to zero the space out. See “A PowerShell Alternative to SDelete” for more information on this. The script also deals with another annoying thing about sdelete: it doesn’t leave any free space, thereby potentially endangering your operations, or at least setting off all the alarms on the monitoring tools. With a homegrown script you can force a free percentage to remain untouched.
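As a rough illustration only (a homegrown sketch, not the linked script; the parameter names and defaults are my own), zeroing out free space while keeping a safety margin could look something like this in PowerShell:

```powershell
# Sketch: fill reclaimable free space on a volume with zeros, leaving a
# percentage untouched, then delete the file. The zeroed blocks can
# afterwards be reclaimed by a thin-provisioning-aware SAN.
param(
    [string]$DriveLetter = "D",   # illustrative default
    [int]$PercentFree    = 5      # free space to leave untouched
)

$drive   = Get-PSDrive -Name $DriveLetter
$size    = $drive.Free + $drive.Used
$reserve = [math]::Floor($size * $PercentFree / 100)
$toZero  = $drive.Free - $reserve

if ($toZero -gt 0) {
    $file   = Join-Path ($DriveLetter + ":\") "zerofill.tmp"
    $buffer = New-Object byte[] (1MB)     # a 1 MB block of zeros
    $stream = [System.IO.File]::Create($file)
    try {
        $written = 0
        while ($written -lt $toZero) {
            $stream.Write($buffer, 0, $buffer.Length)
            $written += $buffer.Length
        }
    }
    finally {
        $stream.Close()
        Remove-Item $file   # the blocks stay zeroed after the delete
    }
}
```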
With Windows Server 2012 and Hyper-V VHDX we get what is described in the documentation as “efficiency in representing data (also known as ‘trim’), which results in smaller file size and allows the underlying physical storage device to reclaim unused space. (Trim requires physical disks directly attached to a virtual machine or SCSI disks in the VM, and trim-compatible hardware.)” It also requires Windows Server 2012 on hosts & guests.
I was confused as to whether VHDX supports TRIM or UNMAP. TRIM is the specification for this functionality by Technical Committee T13, which handles all standards for ATA interfaces. UNMAP is the Technical Committee T10 specification for this and is the full equivalent of TRIM, but for SCSI disks. UNMAP is used to remove physical blocks from the storage allocation in thinly provisioned Storage Area Networks. My understanding is that what is used on the physical storage depends on what storage it is (SSD/SAS/SATA/NL-SAS, or a SAN with one or all of the above), and that for a VHDX it’s UNMAP (the SCSI standard).
Basically, VHDX disks report themselves as being “thin provision capable”. That means that any deletes, as well as defrag operations in the guests, will send down “unmaps” to the VHDX file, which are used to ensure that block allocations within the VHDX file are freed up for subsequent allocations. The same requests are also forwarded to the physical hardware, which can reuse the space for its thin provisioning purposes. Also see http://msdn.microsoft.com/en-us/library/hh848053(v=vs.85).aspx
So the unmap makes its way down the stack from the guest Windows Server 2012 operating system, through the VHDX and the hypervisor, to the storage array. This means that a VHDX will only consume storage for really stored data and not for the entire size of the VHDX, even when it is a fixed one. You can see that not just the operating system but also the application/hypervisor that owns the file system on which the VHDX lives needs to be TRIM/UNMAP aware to pull this off.
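As a quick sanity check inside the guest, you can verify whether the OS will actually send those delete notifications down the stack (fsutil has supported this query since Windows 7 / Windows Server 2008 R2):

```powershell
# Query whether delete notifications (TRIM/UNMAP) are enabled.
# DisableDeleteNotify = 0 means notifications are sent down the stack.
fsutil behavior query DisableDeleteNotify
```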
The good news here is that there is no more sdelete to run, no scripts to write, and no agents to install. It happens “automagically”, and as ease of use is very important, I for one welcome this! By the way, some SANs also provide the means to shrink LUNs, which can be useful when the space actually used by a volume is much lower than what is visible/available in Windows and you don’t want people to think you’re wasting space, or that all that extra space is freely available to them.
To conclude, I’ll be looking forward to playing around with this, and I hope to blog on our experiences with it later in the year. Until the Windows Server 2012 & VHDX specifications are RTM and fully public, we are working on some assumptions. If you want to read up on the VHDX format you can download the specs here. It looks pretty feature complete.
As a Microsoft MEET member and MVP, I’d like to invite you all to attend the Microsoft “Experience Days”.
There are several tracks at the Experience Days from which you can choose. The complete track information can be found here.
There are two tracks that are especially of interest to IT Pros: The Best of Microsoft Management Summit (MMS 2012) and Experience Windows Server 2012.
The Best of Microsoft Management Summit (MMS 2012)
During The Best of Microsoft Management Summit (MMS 2012), we will provide you with the best possible opportunity to learn about what’s new in System Center 2012. Led by experts who attended MMS 2012 in Las Vegas, you can expect in-depth sessions on infrastructure management, service delivery & automation, application management, and desktop & device management.
Discover the full program
Experience Windows Server 2012
At the Experience Windows Server 2012 day you will discover how Windows Server is going beyond virtualization by scaling and securing workloads, and how it will enable the modern work style by giving people access to information and data regardless of the infrastructure, network, device or application they use to access it. You will also discover the power of many servers with the simplicity of one, by efficiently managing infrastructure while maximizing uptime and minimizing failures and downtime.
Join us and learn more about:
I’ll be talking on June 7th at 15:00 – 16:00 about Windows Server 2012 Storage Evolved For Hyper-V in the Experience Windows Server 2012 track:
Windows Server 2012 is a very storage-centric release. We’ll cover the changes, improvements and additions to Windows Server 2012 storage capabilities and their impact on Hyper-V. We’ll talk about the enhancements with the new virtual disk format (VHDX), Offloaded Data Transfer (ODX), TRIM/UNMAP, large sector disks and the new storage options for Hyper-V, including Storage Spaces, ReFS, BitLocker, CSV 2.0, NTFS online scan/repair and SMB 3.0 file storage, and what the latter means for Live Migration & storage options for Hyper-V.
Virtualization with Windows Server 2012 Hyper-V is simply the best, bar none. If you watched Brad Anderson’s MMS 2012 keynotes you know what’s coming and that he encouraged you to take the lead in all this. Well, here’s your chance. If you agree that there is a war on for talent, you also know and understand that knowledge will give you opportunities and choices. Invest in your future, and as such in addressing and solving the business needs of both your clients and businesses. We all know it takes a serious effort in combination with a sustained commitment to become and stay competent in ICT. The TechNet BeLux team & the community are there to help you cultivate your talent and gain the knowledge you need.
Here’s a quick overview of my speaking engagements in the next two months. I encourage everyone to attend the smaller and often less expensive or even free events, as they provide a fast way to get up to speed with new technologies. Don’t be shy. Everyone is welcome and we’re all there to learn. It will not surprise you that all the sessions I’ll be presenting are on Windows Server 2012 & Hyper-V.
Experts2Experts Virtualization Conference (E2EVC) – Vienna 2012
I’ll be doing a session on Saturday the 26th on the advanced networking features in Windows Server 2012 Hyper-V. At this small-scale conference the interactive “chalk & talk” sessions never stop. They just go on and on at breakfast, lunch, dinner and the bar, until we need to get some sleep to repeat the process the day after. If you’re very lucky there might still be a spot open for which you can register.
Continued Education Day for IT Coordinators & Teachers in Education
At the end of May I’ll be presenting a session on what’s new in Windows Server 2012 Hyper-V, targeted at this audience. I’m convinced that the combination of the tremendous licensing efforts Microsoft makes in education and simply the best virtualization & cloud platform in existence will provide them with the right solution to get the job done.
TechNet BeLux - Experience Days
On June 7th I’ll be presenting a session at the “Experience Days” called “Windows Server 2012 Storage Evolved For Hyper-V” in the Experience Windows Server 2012 track. You can register here for this track or, if you think another track is more of interest to you, go to the links above to register for those. All tracks are free and open to all.
More to come
During the summer I’ll be doing a large storage migration project. That means I’ll be getting my hands on SMI-S support for System Center Virtual Machine Manager 2012 and ODX to use with Windows Server 2012 & Hyper-V. So I’ll be putting my money where my mouth is, so to speak. I’m looking forward to that for the learning experience alone, and it’s time to find my “No, I will not fix your computer” T-shirt, as I won’t have time for that. But rest assured, I will share my experiences through blogging, tweets and presentations with my fellow community members, for all mankind.
I’m attending and speaking at one of the best small-scale virtualization conferences out there: the Experts2Experts Virtualization Conference (E2EVC), organized by Alex Juschin for many years now. I’ll be speaking at the conference on “Making Sense of RSS, DMVQ, SR-IOV, RDMA and other advanced networking features”. We’ll see where Windows Server 2012 & the new generation of Hyper-V is at with regard to these technologies, how it stacks up against some other solutions, and what looks promising. In other words, what we are looking at using in real life once Windows Server 2012 goes RTM.
I have the good fortune to attend some pretty big, impressive & high-quality industry events. These are excellent places for networking and getting up to speed with the latest and greatest from the big vendors and the ecosystem around them. But they are pretty expensive and large scale, and most people are so crazy busy at those that you often miss out on some of the interaction; there is just too much going on.
E2EVC is special and adds a different kind of value that goes beyond its low cost. For one, nobody is trying to sell you anything. All attendees and all speakers are IT Pros who design, build, work with and support the technologies that are discussed. Hence the name, Expert 2 Expert. It’s a reality check on what people are really using, trying and evaluating. You’ll see what is really hurting us and what really works. An event like this isn’t driven by marketing. It’s driven by interest, passion for technology and, even more important from a business perspective, the solutions they can and do deliver in real life. This proves that you don’t need to charge premium prices to keep the riff-raff out. The fact that two days of this conference are in a weekend tells you the attendees are going there with intent and purpose.
The guys & gals attending & presenting are top notch. They don’t look like slick advisers and analysts. It’s all very informal and relaxed. But make no mistake, these people are sharp and at the top of their game. Discussion and interaction are stimulated and lively. The aim is not to breed or create rock star speakers but to get people to share their experiences and knowledge. And herein lies the value. I really commend Alex Juschin for having succeeded in this.
As you might very well know from experience, sometimes the System Center Virtual Machine Manager GUI and database get out of sync with what’s really going on on the cluster. I’ve blogged about this before in SCVMM 2008 R2 Phantom VM guests after Blue Screen and in System Center Virtual Machine Manager 2008 R2 Error 12711 & The cluster group could not be found (0x1395).
Recently I had to troubleshoot the “Missing” status of some virtual machines on a Hyper-V cluster in SCVMM 2008 R2. Rebooting the hosts and guests, restarting agents, … none of the usual tricks for this behavior seemed to do the trick. The SCVMM 2008 R2 installation was also fully up to date with service packs & patches, so the issue did not originate there.
Repair was greyed out and of no use. We could have removed the host from SCVMM and added it again. That resets the database entries for that host and can help fix the issues, but it still is not guaranteed to work, and you don’t learn what the root cause or solution is. We could also have deleted the VMs from the database, but we didn’t have duplicates. Sure, this doesn’t delete any files or VMs, so they should show up again afterwards, but why risk them not showing up again and having to go through fixing that?
The VMs were in a “Missing” state after an attempted live migration during a manual patching cycle where the host was restarted before the “start maintenance mode” had completed. A couple of those VMs were also live migrated at the same time with the Failover Cluster GUI. A bit of confusion all around, so to speak, but luckily all VMs were fully operational and servicing applications & users, so no crisis there.
I’m not telling you to use this method to fix this issue, but you can at your own risk. As always, please make sure you have good and verified backups of anything that’s of value to you.
We had to investigate. The good news was that all VMs were up and running, there was no downtime at the moment, and the cluster seemed perfectly happy.
But there we see the first clue: the virtual machines on the cluster are not running on the node SCVMM thinks they are running on, hence the “Missing” status.
First of all, let’s find out what host the VM is really running on in the cluster and what host SCVMM thinks the VM is running on. We run this little query against the VMM database, which gives us all hosts known to SCVMM:
SELECT [HostID],[ComputerName] FROM [VMM].[dbo].[tbl_ADHC_Host]
(9 row(s) affected)
Voilà, and now the fun starts. The SCVMM GUI tells us “MissingVM” is missing on node4.
We check this in the database to confirm:
SELECT Name, ObjectState, HostId FROM VMM.dbo.tbl_WLC_VObject WHERE Name = 'MissingVM'
GO
Which indeed points at node4:
Name      ObjectState HostId
--------- ----------- ------------------------------------
MissingVM 220         C2DA03CE-011D-45E3-A389-200A3E3ED62E
(1 row(s) affected)
In SCVMM we see that the move of the VM between node4 and node6 failed.
Now let’s take a look at what the cluster thinks … yes, there it is, running happily on node6 and not on node4. There’s the mismatch causing the issue.
So we need to fix this. We can live migrate the VM with the Failover Cluster GUI to the node SCVMM thinks the VM still resides on and see if that fixes it. If it does, great! You have to give SCVMM some time to detect everything and update its records.
But what to do if it doesn’t work out? We can get the HostId of the node where the VM is really running in the cluster (which we can see in the Failover Cluster GUI) from the query we ran above, and then update the record:
UPDATE VMM.dbo.tbl_WLC_VObject SET HostId = 'C0CF479F-F742-4851-B340-ED33C25E2013' WHERE Name = 'MissingVM'
GO
We then reset the ObjectState to 0 to get rid of the “Missing” status. SCVMM would do this automatically, but it takes a while.
UPDATE VMM.dbo.tbl_WLC_VObject SET ObjectState = '0' WHERE Name = 'MissingVM'
GO
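To double-check the fix (again at your own risk, as this is an unsupported poke at the database), rerunning the earlier SELECT should now show the VM with a clean state and the HostId the cluster reports:

```sql
-- Verify the record now references the host the cluster reports
-- (expected: ObjectState 0 and node6's HostId).
SELECT Name, ObjectState, HostId
FROM VMM.dbo.tbl_WLC_VObject
WHERE Name = 'MissingVM'
GO
```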
After some patience and refreshing, all is well again, and a test with live migrations proves that everything works.
As I said before, people get creative in how they achieve things, and the inconsistencies and differences in functionality between Hyper-V Manager, Failover Cluster Manager and SCVMM 2008 R2 can lead to some confusing situations. I’m happy to see that in Windows 8 the actions you should perform using the Failover Cluster GUI or PowerShell are blocked in Hyper-V Manager. But SCVMM really needs a “reset” button that makes it check & validate that what it thinks matches reality.