First we take Redmond, then we take Berlin & Ede: Summits & Conferences


The traveling & speaking MVP

MVPs are a busy lot. They work, learn, travel & talk a lot, sharing knowledge & experiences for the benefit of all. So in order to keep that reputation going I’ll be heading to SEA-TAC to attend the Global MVP Summit 2014 in Bellevue/Redmond. After that tech fest I return to Belgium, from where I’ll immediately head to Berlin to present at the Microsoft Technical Summit 2014. After a weekend of rest I head north to The Netherlands to present at Experts Live.

No rest for the wicked. There is a tremendous amount of things happening in IT right now. It takes a little bit of effort to keep up and assess the benefits and value, but once you’re doing that as part of your normal day to day operations it becomes a lot easier to map out why it’s useful to you and what to use where, when and how.

All this is happening at a time when information on “Threshold”, or Windows vNext, is becoming available, if we can believe the rumors and the buzz on the internet. Don’t forget that TechEd Europe 2014 is on in the last week of October in Barcelona, right before the MVP Summit in Redmond. Like Aidan Finn said, we could go on a 3 month presenting, training & consulting tour right now, as the need for insights & skills is growing with the growth in Hyper-V adoption and with that all related technologies, from networking and storage to Azure.

Now I can’t invite you to the MVP Summit but I can do that for both the Microsoft Technical Summit 2014 and Experts Live. There will be a lot of international expertise at both events, people with hands on experience with the technology in real life production environments. I always pick up knowledge from them myself.

The Global MVP Summit 2014

Every MVP on this planet tries to make it to the MVP Summit. The face time with and access to so many intelligent people at MSFT is invaluable. Combine that with the opportunity to network with experts from all over the globe and you realize why we spend the time, effort and money to attend.

747-BA-02

So once more we hop on that great Boeing 747 and let BA fly us to SEA-TAC airport from where we’ll head to Bellevue/Redmond as the summit is spread between both locations.

image

The MVP Summit is under NDA. So there will be tweets about the fun stuff with fellow MVPs but other than that we’ll be going dark. We have never breached NDA and never will. We’ll also make some time to meet up with old acquaintances, friends and fellow Belgians living & working around Seattle/Bellevue/Redmond.

Microsoft Technical Summit 2014

The moment I get back home I grab a change of clothes & a flight to Berlin.

The Microsoft Technical Summit 2014 is on in Berlin and together with a great number of fellow MVPs I have the distinct pleasure of presenting What’s new in Failover Clustering (Windows 2012 R2).

image

It’s amazing to see how many of our community experts are actually from Germany, Austria & Switzerland (DACH) and I’ll be happy to see so many familiar faces I just saw on the other side of the big pond a week before Smile

Experts Live 2014

On November 18th I’ll be in The Netherlands at Experts Live 2014 in Ede. This is a great event and if you know the brain power of the organizers & presenters this is no surprise. I’ll be presenting “The Capable & Scalable Cloud OS” and showing some of the scalable capabilities in Windows Server 2012 R2 when combined with great hardware.

image

So that’s the travelling & scheduling agenda for now. Perhaps I’ll see you at one of those events & if you’re a reader of this blog, ping us if you’ll be there for a meet and greet. Life is good Smile.

Dell generation 13 servers & Intel E5 v3 18 core CPUs are upon us in a world where per core licensing is reality


As I watched the Intel E5 v3 launch event & DELL releasing their next generation servers to the public for purchase, there is a clear opportunity for hardware renewal next year. I’m contemplating what the new Intel E5 v3 18 core processors

image

and the great DELL generation 13 PowerEdge Servers mean for the Hyper-V and SQL Server environments under my care.

image

For the Hyper-V clusters I’m in heaven. At least for now, as Windows is still licensed per socket at the time of writing. vNext has me worried a bit, thinking about what would happen if that changes to core based licensing too. Especially with SQL Server virtualization. I do hope that if MSFT ever goes for per core licensing for the OS they might consider giving us a break for dedicated SQL Server Hyper-V clusters.

image

For per core licensing with SQL Server Enterprise we need to run the numbers and be smart in how we approach this. Especially since you need Software Assurance to have license mobility & failover / high availability. All this at a time when you’re told significant cost cutting has to happen across the board.

So what does this mean? The demise of SQL Server in the Enterprise, like some suggest? Nope. The direct competitors of SQL Server in that arena are even more expensive. The alternatives to SQL are just that: alternatives. In certain scenarios you don’t need SQL (Server) or you can make do with SQL Server Express. But what about all the cases where you really do need it? You’ll just have to finance the cost of SQL Server. If that’s not possible, the business case justifying the tool is no longer there, which is valid. As the saying goes, if you can’t afford it, you don’t need it. A bit harsh, yes, I realize, but this is not a life saving medicine we’re talking about but a business tool. There might be another reason your SQL Server licensing has become unaffordable. You might be wasting money due to how SQL Server is deployed and used in your environment. To make sure you don’t overpay you need to evaluate whether SQL Server consolidation is what is really needed to save the budget.

Now please realize that consolidation doesn’t mean stupidly under provisioning hardware & servers to make the budget work out. That’s just plain silly. For some more information on this, please read Virtualizing Intensive Workloads on Hyper-V, Can It Be Done?

So what does smart consolidation look like (not all of it specific to SQL Server, by the way):

  • You have to avoid physical SQL Server sprawl with a vengeance.
  • You need to consolidate SQL Servers aggressively.
  • Virtualize on a dedicated SQL Server Hyper-V cluster if possible
  • Favor scale out over scale up in the Hyper-V scenario to keep node costs reasonable and allow for affordable expansion.
  • Use 2 socket servers and replace the hardware faster to keep the number of needed cores down.
    • This allows you to leverage modern commodity, high performance storage, networking and compute where you can in order to optimize workloads & minimize costs.
    • It helps save on power consumption & cooling
    • More nodes with fewer cores (the scale out approach) reduce VM density per node, but they also keep the cost of adding a node, which is your scaling block with a fixed cost, under control (with SQL Server per core licensing, and for the OS as well if it ever comes to that). It’s all about balance and it isn’t as easy as it seems.
  • Play the same game with storage. This can be a harder sell to make internally. Traditionally people hang on to storage longer due to the high CAPEX. I have said it before: storage vendors have to deliver more & better. Even the challengers & hyper converged systems are still too expensive to really get into a short renewal cycle for most organizations.

Be smart about it. A great DBA can make a difference here and some hard core performance tuning can save a serious amount of money. If on top of that you have some good storage & network skills around, you can achieve a lot. Next to the fact that you’ll have to spend serious money for serious workloads, the ugly truth is that consolidation requires you to find your peak loads and scale for those with a vengeance. Look, maxing out one server on which one SQL Server is running isn’t that bad. But three SQL Servers hitting peak load at the same time, spread over a 3 node Hyper-V cluster dedicated to SQL Server VMs, might kill performance all over!

The good news is I have solid ideas, visions, plans and options to optimize both the on premises & cloud parts of networking, storage & compute. Remember that there is no one size fits all. Execution follows strategy. The potential for very performant, cost effective & capable solutions is right there. I cannot give you a custom solution for your needs in a blog post. One danger of fast release cycles is that they require yearly OPEX, and the shift in design to solutions with less longevity could become problematic for organizations that cannot guarantee that money year after year. Cutting some of the “fat” means you will not be able to handle longer periods of budget drought very well. There is no free lunch.

So measure twice & cut once or things can go wrong very fast and become even more expensive.

You might think this sounds a bit pessimistic. No, this is an opportunity, especially for a Hyper-V MVP who happens to be an MCDBA Winking smile. The IT skills shortage is only growing bigger all over the planet, so not too many worries there, I won’t have to collect empty bottles for a living yet. The only so called “drawback” here could be that the environments I take care of have been virtualized and optimized to a high extent already. The reward for being good is sometimes not being able to improve things by orders of magnitude. Bad organizations living in a dream world, the ones without a solid grasp of the realities of functional IT in practice, might find that disappointing. Yes, the “perception is reality” crowd. Fortunately the good ones will be happy to be in the best possible shape and they’ll invest money to keep it that way. Interesting times ahead.

Migrate an old file server to a transparent failover file server with continuous availability


This is not a step by step “How to” but we’ll address some things you need to do, with tips and tricks that might make things a bit smoother for you.

1) Disable short file names & strip existing short names

Never mind that this is needed to be able to do continuous availability on a file share cluster. You should have done this a long time ago. For one, it enhances performance significantly. It also makes sure that no crappy apps that require short file names to function can be introduced into the environment. While I’m an advocate for mutual agreements, there are many cases where you need to protect the users and the business against themselves. Being too much of a politician as a technologist can be very bad for the company, as it allows bad workarounds and technology debt to be introduced. Stand tall!

Read up on this here: Windows Server 2012 File Server Tip: Disable 8.3 Naming (and strip those short names too). Next to Jose’s great blog, read Fsutil 8dot3name on how to do this.

If you still have applications that depend on short file names you need to isolate and virtualize them immediately. I feel sorry that this situation exists in your environment and I hope you get the necessary means to deal with it swiftly and decisively by getting rid of these applications. Please see The Zombie ISV® to be reminded why.

Some tips:

  • Only use the /F switch if it’s a non system disk and you can afford to do so, as you’re moving the data LUN to a new server anyway. Otherwise you might run into issues. See the example below. image
  • If you stumble on paths that are too long, intervene. Talk to the owners. We got people to trim “Human Resources Planning And Evaluations” sub folder & file names down to HRMPlanEval. You get the gist, trim them down.
  • You’ll have great success on most files & folders, but not if they are open. Schedule a maintenance window to make sure you can run without anyone connected to the shares (stop LanManServer during that maintenance window). image
  • Also verify no other processes are locking any files or folders (anti virus, backups, sync tools etc.)
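If you want a feel for what that looks like in practice, here’s a sketch from an elevated PowerShell prompt. The D: drive letter and the log path are just assumptions for this example, so adapt them to your own data LUN, and always do a test pass with /t first:

```powershell
# Check whether 8.3 name creation is currently enabled on the volume
fsutil 8dot3name query D:

# Disable 8.3 name creation on volume D: (per volume setting)
fsutil 8dot3name set D: 1

# Dry run first: /t only reports what would be stripped, /s recurses,
# /v is verbose and /l writes a log you can parse afterwards
fsutil 8dot3name strip /t /s /v /l C:\temp\8dot3strip.log D:\

# The real run; note /f is left out on purpose, as per the tips above
fsutil 8dot3name strip /s /v /l C:\temp\8dot3strip.log D:\
```

The log is worth keeping: it lists any registry values that still reference stripped short names, which is exactly where those crappy legacy apps will show themselves.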

2) Convert MBR disks to GPT if you can

With ever growing amounts of data to store and protect this makes sense. I’m not saying you need to start doing 64TB disks today, but making sure you can grow beyond 2TB is smart. It doesn’t cost anything when you use GPT disks from the start. If you have older LUNs you might want to use the migration as an opportunity to convert MBR LUNs to GPT. That means copying the data and all NTFS permissions.
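If the new target LUN is presented to a Windows Server 2012 (R2) host, you can initialize it as GPT with the in-box Storage cmdlets before copying the data over. A minimal sketch; the disk number 4, the V: drive letter and the volume label are assumptions for this example, so verify the disk number against the Get-Disk output before touching anything:

```powershell
# Identify the new, still raw LUN (double check the number and size!)
Get-Disk | Where-Object PartitionStyle -Eq 'RAW'

# Initialize it as GPT instead of the MBR default
Initialize-Disk -Number 4 -PartitionStyle GPT

# Create one partition using all available space and format it NTFS
New-Partition -DiskNumber 4 -UseMaximumSize -DriveLetter V |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "FileShares" -Confirm:$false
```

With the V: volume in place the Robocopy command below can do the heavy lifting of moving the data and the NTFS permissions.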

Please see NTFS Permissions On A File Server From Hell Saved By SetACL.exe & SetACL Studio for some tools that might help you out when you run into NTFS/ACL permission issues, and for parsing logs during this operation.

Here’s a useful Robocopy command to start out with:

ROBOCOPY L:\ V:\ /MIR /SEC /Z /R:2 /W:1 /V /TS /FP /NP /XA:SH /MT:16 /XD "System Volume Information" *RECYCLE* /LOG:"D:\RoboCopyLogs\MBR2GPTLUNL2V.txt"

3) Dump the existing shares on the old file server into a text file for documentation and use on the new file server

Pre Windows Server 2012 the new SMB Cmdlets don’t work, but no fear, we have some other tools to use. Using NET SHARE does work, and with it you can also show the hidden and system shares, but the layout is a bit of a mess. I prefer to use:

Get-WmiObject –class Win32_Share > C:\temp\OldFileServerShares

It’s fast, complete and the layout is very user friendly. Which is what I need for easy use with PowerShell on the W2K12R2 file server. Some of you might say, what about the share security settings? 1) We’re going to cluster, so exporting these from the registry doesn’t work, and 2) you should have kept this plain vanilla and done security via the NTFS permissions on the folder structure only. But hey, I’m a nice guy, so here’s a link to a community PowerShell script if you need to find out the share permissions: http://gallery.technet.microsoft.com/scriptcenter/List-Share-Permissions-83f8c419 I do however encourage you to use this time to consider just doing security via NTFS.

4) Create the clustered file shares

Amongst the many gems in Windows Server 2012 R2 are the new SMB PowerShell Cmdlets. They are a simple and great way to create clustered file shares. Read up on these SMB Share Cmdlets and especially New-SmbShare.

When we’ve unmapped the LUNs from the old file server and exposed them to the new file server cluster you’re ready to go. You can even reorganize the shares, consolidate to fewer but bigger LUNs and, by just adapting the path to the share in the script, make sure the users are not confused and don’t need to learn new shares or change how they connect to them. Here it goes:

New-SmbShare -Name "TEST2" -path "T:\Shares\TEST2" -fullaccess Everyone -EncryptData $True -FolderEnumerationMode AccessBased -ConcurrentUserLimit 0 -ScopeName TF-FS-MIG

First and foremost, this is where the good practice of not micro managing file share permissions will pay back big time. If you have done security via NTFS permissions with the AG(U)DLP principle on your folder structure, granting access should be a breeze, right?

Before you ask: no, you can’t do the old trick of importing the registry export of the shares and their security settings from the old file server when you’re going to cluster the file shares. That might sound bad, but with some preparation and the PowerShell I demonstrated above you’ll find it easy enough.
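To tie steps 3 and 4 together, you can also export the old shares to a CSV and replay them against the clustered file server in a loop. A hedged sketch, reusing the T:\Shares root and the TF-FS-MIG scope name from the example above; the CSV path is an assumption, and you’ll want to adapt the access and encryption settings to your own environment:

```powershell
# On the old file server: export name & path of the plain disk shares
Get-WmiObject -Class Win32_Share |
    Where-Object { $_.Type -eq 0 } |   # Type 0 = disk share, skips ADMIN$, IPC$ etc.
    Select-Object Name, Path |
    Export-Csv C:\temp\OldFileServerShares.csv -NoTypeInformation

# On the new cluster: recreate each share under the new root path
Import-Csv C:\temp\OldFileServerShares.csv | ForEach-Object {
    New-SmbShare -Name $_.Name -Path ("T:\Shares\" + $_.Name) `
        -FullAccess Everyone -EncryptData $true `
        -FolderEnumerationMode AccessBased -ScopeName TF-FS-MIG
}
```

Because the NTFS permissions travelled with the data, recreating the shares this way really is all that’s left to do.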

5) Recuperate the old file server name (Optional)

After you have decommissioned the old file server you could use a cluster alias to keep the old file server UNC path. This has the drawback that you’ll fall back to connecting to the SMB shares via NTLM, as aliases don’t support Kerberos authentication. But there is another trick. Once you’ve gotten rid of the old server object in AD you can rename the clustered file server role to the old name. If you can do this you’ll be able to keep Kerberos for authentication.

So after you’ve gotten rid of the old server in Active Directory, go to the file server role, select properties and rename it to recuperate the old file server name.

image

Now look at the resources tab. Right click and select the properties tab of “Server Name”. Rename the DNS Name. That will update the server name and the DNS record. This will cause the role to go down temporarily.

image

Right click and select the properties tab of “File Server”. Rename the UNC path to reflect the old file server name.

image

For good measure and to test that everything works: stop and restart the cluster role, connect to the shares and voilà, life should be good. Users can access the transparent failover file server like they used to do with the old non clustered file server and they don’t sacrifice Kerberos to be able to do so!

image

Conclusion

I hope you enjoyed the tips and pointers on migrating an old file server to a Windows Server 2012 R2 file share cluster. Remember that these tips apply to various permutations of P2V, V2V as well as P2P migrations.

DELL Has Great Windows Server 2012 R2 Feature Support – Consistent Device Naming – Which They Helped Develop


The issue

Plug ‘n Play enumeration of devices has been very useful for loading device drivers automatically, but it isn’t deterministic. As devices are enumerated in the order they are detected, the result differs from server to server and even within the same system. That means the enumeration and order of the NIC ports in the operating system may vary, and “Local Area Connection 2” doesn’t always map to port 2 on the on board NIC. It’s random. This makes scripting “rather hard” and even finding out what NIC name matches what port becomes a game of unplugging cables.

Consistent Device Naming is the solution

A mechanism, which has to be supported by the BIOS, was devised to deal with this and enable consistent naming of the NIC ports between the chassis and the operating system.

But it’s even better. This doesn’t just work with on board NICs. It also works with add on cards as you can see. In the name column it identifies the slot in which the card sits and numbers the ports consistently.

In the DELL 12th Generation PowerEdge Servers this feature is enabled by default. It is not in HP servers for some reason; there you need to turn it on manually.

I first heard about this feature even before Windows Server 2012 Beta was released but as it turns out Dell has been involved with the development of this feature. It was Dell BIOS team members that developed the solution to consistently name network ports and had it standardized via PCI SIG.  They also collaborated with Microsoft to ensure that Windows Server 2012 would support all this.

Here’s a screen shot of a DELL R720 (12th Generation PowerEdge Server) of ours. As you can see, Consistent Device Naming doesn’t only work for the on board NICs. It also does a fine job with the add on cards, of which we have quite a few in this server. image

It clearly shows the support for Consistent Device Naming for the add on cards present in this server. This is a test server of ours (until we have to take it into production) and it has a quad port 1Gbps Intel card, a dual port Intel X520 DA card and a dual port Mellanox 10Gbps RoCE card. We use it to test our assumptions & ideas. We still need a Chelsio iWARP card for more testing mind you Winking smile

A closer look

This solution is illustrated in the “Device Name” column in the screen shot below. It’s clear that the PnP enumerated name (the friendly name via the driver INF file) and the enumerated number value are very different from the numbers in the Name column (NIC1, NIC2, NIC3, NIC4), even if in this case, by chance, the order happens to be correct. If the operating system is reinstalled, or drivers are changed and the devices re-enumerated, these numbers may change as they did with previous operating systems.

image

The “Name” column is where the Consistent Device Naming magic comes to life. As you can see, you can easily identify the ports as they are numbered consistently, regardless of the “Device Name” column numbering and in accordance with the numbering on the chassis or add on card. This column will NEVER differ between identical servers or after reinstalling a server, because it is not dependent on PnP. Pretty cool, isn’t it! Also note that we can rename the Name column entries and, if we choose, keep the original name in them to preserve the mapping to the physical hardware location.

In the example below things map perfectly between the Name column and the Device Name column, but that’s pure luck. image

One of the other add on cards demonstrates this perfectly. image
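You can also see both names side by side from PowerShell on Windows Server 2012 (R2): Get-NetAdapter exposes the consistent name in the Name property and the PnP friendly name in InterfaceDescription, which makes scripting against the physical port layout a lot easier. A small sketch (the output naturally depends on your hardware):

```powershell
# Name = consistent device name (NIC1, SLOT 4 Port 1, ...),
# InterfaceDescription = the PnP friendly name from the driver INF file
Get-NetAdapter | Sort-Object Name |
    Format-Table Name, InterfaceDescription, MacAddress, LinkSpeed -AutoSize
```

Since the Name property survives reinstalls on CDN capable hardware, it is the one to key your NIC teaming and vSwitch scripts on.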

TechEd North America 2014 Session


There is something extremely rewarding about seeing your name on the intro slide of a TechEd North America presentation. I helped deliver What’s New in Windows Server 2012 R2 Hyper-V together with Ben Armstrong yesterday and it was quite the experience.

DSCN2817_280

image

A big thank you to Ben and Microsoft for the confidence they have shown in me and the opportunity to do this. A mention to our CEO, who has the ability to look beyond the daily needs and facilitates and encourages his employees to get out of the village to learn, grow and prosper. This is the principle one of my high school teachers lived and worked by: help people be all they can be.

The IT community around the Microsoft ecosystem is both a local and a global one. In this day and age knowledge gets shared and flows freely. People work with people and with organizations. No one gets anywhere in isolation. I’m happy to see so many of my buddies do so well. It’s great to see people succeed, grow, enjoy their work and reap the fruits of their efforts. Look at Benedict Berger, who was presenting in the room next to ours, or Aidan Finn, a long time community member and experienced speaker who won Speaker Idol and by doing so secured a speaker slot for next year. This has many reasons and one of them is people believing in you and giving you the chance to grab opportunities. To those I say, thank you very much!

Attending And Presenting at TechEd North America 2014


As you might well know, I’m attending TechEd North America right now. I blogged about that. But I have to correct this a bit: today I will also be presenting together with Ben Armstrong and help him deliver session DCIM-B380 What’s New in Windows Server 2012 R2 Hyper-V.

Ben Armstrong, Principal Program Manager on the Hyper-V team, will be showing you the wealth of features that provide capability, scalability, performance, availability and reliability in Windows 2012 R2 Hyper-V that make it THE capable and scalable cloud OS.

I’m honored to be able to showcase a few of the technologies in Windows 2012 R2 we are leveraging in production today. So can you, really!

image

Heading To TechEd North America 2014


Good times ahead, as today I’m making my way over to the USA (Houston, Texas) for TechEd 2014 North America. I’m in the good company of a few of my colleagues and I have a great number of my buddies & industry relations inbound as well.

Time for some serious education, networking & passionate discussions on the state of the industry with people from all over the globe. I’ll also make good use of my time over there to meet up with the people in my network that are US based.

I’ll be spending time in the cloud/hybrid/virtualization tracks and focus on networking and identity. That starts off very well with a pre conference track on hybrid identity on Sunday by John Craddock, a true scholar!

Network!

No need to bring SFP+ or RJ45, don’t worry. Next to sessions & labs don’t forget to connect with others. The ability to network with peers and industry experts is a great benefit of this conference so make the best of it. There are few events with this concentration of expertise & talent, tap into that resource.

To help all you shy people out there, Aidan Finn has launched the TechEd North America 2014 Hyper-V Amigo Selfie Game. You can read all about it over here and if you play, best of luck!

En Route

But first we need to get there. As I learned during a visit to the Boeing factory in Seattle: “If it’s not Boeing, I’m not going” Winking smile. No worries, it appears they’re using a 777.

british_airways-777-300er

So I’m getting out of the village and into the world, so tunnel vision and blinders can be avoided. See you all there.