HP Discover 2011 in Vienna


I’m off to Vienna (Austria) to attend HP Discover next week. The idea is to go look at their kit and learn a bit more about what’s available and possible. I hope to discuss their offerings with them, with storage as my main focus, because things are about to get busy in that area for us. I’ll also provide them with some feedback on our experiences: what we like, what we don’t, etc. Call it “the good, the bad and the ugly” of free customer feedback if you like (no, I don’t need or want a free Amazon gift card to provide it).

I appreciate a chance to provide feedback to vendors directly and I do think it is important. Not because I have that much to say or such a big impact, but because, apart from sales figures, it’s the best way to help them, and thus us as customers, get good things enhanced and broken stuff fixed.

If there is one thing a lot of vendors are missing, it is a better view of the opportunities in the SME market, which needs enterprise-level features on a smaller budget. That market does exist and it can be tapped more than it is now. Sometimes it seems like the commercial offerings are divided into small & large, and the sizes in between are crushed or forgotten between those two markets. I hear this a lot from colleagues & friends as well, so it’s not just me. We’re not the huge-budget crowd but we make up for that in numbers. We’ll see what HP has to offer that segment of the market in 2012.

The Private Cloud: A Profitable Future-Proofing Tactic?


The Current Situation

I’m reading up on the private cloud concept as Microsoft envisions we’ll be able to build them with the suite of System Center 2012 products. The definition of a private cloud is very flexible. But whether we’re talking about the private, hybrid or public cloud, there is a point of disagreement: some people don’t see self-service (via a portal, with or without an approval process) as a required element of a *cloud. I have to agree with Aidan Finn on this one. It’s a requirement. I know you could stretch the concept and build a private cloud to help IT serve its customers, but the idea is that customers can and will do it themselves.

The more I look into System Center 2012 and its advertised ability to provide private clouds, the more I like it. Whilst the current generation has some really nice features, I have found it lacking in many areas, especially when you start to cross security boundaries and still want to integrate the various members of the System Center suite. So the advancements there are very welcome. But there is a danger lurking in the shadows of it all: complexity, combined with the number of products needed. In this business things need to go fast without sacrificing or compromising on anything else. If you can’t do that, there is an issue. The answer to these issues is not always to go to the public cloud a hundred percent.

While the entire concept might seem very clear to us techies (i.e. there is still lots of confusion to be found) and the entire business crowd is talking about cloud as if it’s a magic potion that will cure all IT related issues (i.e. they are beyond confused, they are lost), there are still a lot of questions. Even when you have the business grasping the concept (which is great) and an IT team that’s all eager and willing to implement it (which is fabulous), things are still not that clear on how to start building and/or using one.

In reality some businesses haven’t even stepped into the virtual era yet, or only partially at best. Some people are a bit stuck in the past and still want to buy servers and applications with THEIR money that THEY own and that are ONLY for them. Don’t fight that too much. The economics of virtualization are so good (not just financially but also in flexibility & capabilities) that you can sell it to top management rather easily, no matter what. After that approval, just sell the business units their servers (which happen to be virtual machines), deliver whatever SLA they want to pay for and be done with it. So that problem is easily solved.

But that’s not a cloud yet. Now that I’m thinking of it, perhaps getting businesses to adopt the concept will be the hardest part. You might not think so reading about private clouds in the media, but I have encountered a lot of skepticism and downright hostility towards the concept. No, it’s not just from some weary IT Pros who are afraid to lose their jobs. Sometimes the show stoppers are the business and the users, who won’t have any of it. They don’t want to order apps or servers online, they want them delivered to them. I even see this with the younger workforce when the corporate culture is not very helpful. What’s up here? Responsibility. People are avoiding it and it shows in their behavior. As long as they are willing to take responsibility, things go well. If not, technical fear masked as “complexity” or excuses like “that’s not our job” suddenly appear.

There is more: a lot of people seem to be at the limit of what they can handle in information overload, and every extra effort is too much. Sometimes it’s because of laziness, or perhaps even stupidity? Perhaps it’s a side effect of what Nicholas Carr writes about: the internet is making us dumber and dumber as a species. But then again, we only have to look at history to learn that, perhaps, we’ve never been that smart. Sure, we have achieved amazing things, but that doesn’t mean we don’t act incredibly stupid as individuals or as a group. So perhaps things haven’t changed that much. It’s a bit like the “Meet the new boss, same as the old boss” sort of thing. But on the other hand, things are often too complex. When things are easy and become an aid in their work, people adopt technology fast and happily.

Sometimes the scale of the business is such that it isn’t worthwhile to deploy a cloud. The effort and cost are totally out of sync with the use and benefits.

That’s all nice and well, you tell me, but what are technologists to advise their customers?

Fire & Maneuver

The answer is in the subtitle. You can’t stand still and do nothing. That will get you killed (business is warfare with gloves on and some other niceties). Now that’s all good to know, but how do we keep moving forward and scoring? There will always be obstacles, risks, fears etc., but we can’t get immobilized by them or we come to a standstill, which means falling behind. The answer is quite simple: keep moving forward. But how? Do what you need to do. Here’s my approach. Build a private cloud. Use it to optimize IT and to be ready to make use of *clouds at every opportunity. And to put your mind at ease, you can do this without spending vast amounts of money that gets wasted. Just provide some scale-up and scale-out capacity & capability. The capability is free if you do it right. The capacity will cost you some money, but that’s your buffer to keep things moving smoothly. Done right, your CAPEX will be lower than if you don’t do this. How can that be?

Private Clouds enable Hybrid Clouds

The thing I like most about the private cloud is that it enables the use of hybrid cloud computing. On the whole, and in the long run, hybrid clouds might be a transition to the public cloud, but as I’ve written before, there are scenarios where the hybrid approach will remain. This might not be the case for the majority of businesses, but I still foresee a more permanent role for hybrid clouds, for a longer time than most trendy publications seem to indicate. I have no crystal ball, but if hybrid cloud computing does remain a long term approach to computing needs, we might well see more and better tools to manage it in the years to come. Cloud vendors who enable and facilitate this approach will have a competitive advantage. The only thing you need to keep in mind is that private and hybrid clouds should not be seen as replacements or alternatives for the public cloud. They don’t have the elasticity, scale and economics of a public cloud. They are, however, complementary. As such they enable and facilitate the management and consumption of IT services that have to remain on premises for whatever reason.

Selling The Public Cloud

Where the private cloud might help cloud-shy businesses warm up to the concept, I think the hybrid cloud, in combination with integrated and easy management, will help them make the jump to using public cloud services faster. That’s the reason this concept will get the care and attention of cloud vendors. It’s a stepping stone to the consumption of their core business (cloud computing) that they are selling to businesses.

What’s in it for the business that builds one?

But why would a business I advise buy into this? Well, a private cloud (even if used without the self-service component) is the Dynamic Systems Initiative (DSI) / Dynamic Data Center on steroids. And as such it delivers efficiency gains and savings right now, even if you never go hybrid or public. I’m an avid supporter of this concept, but it was not easy to achieve for several reasons, one of them being that the technologies used were missing some capabilities we needed. And guess what: the tools being delivered for the private cloud can fill those voids. By the way, I was in the room at IT Forum 2004 when Bill Gates came to explain the concept and launch that initiative. The demo back then was deploying hundreds of physical PCs. Times have changed indeed!

But back to selling the private cloud. Building a private cloud means you’ll be running a top-notch infrastructure ready for anything. Future proofing your designs at no extra cost and with immediate benefits is too good to ignore for any manager/CTO/CIO. The economics are just too good. If you do it for the right reason, that is, meaning you can’t serve all your needs in the public cloud as of yet. So go build that private cloud and don’t get discouraged by the fact that it won’t be a textbook example of the concept; as long as it delivers real value to the business you’ll be just fine. It doesn’t guarantee your business’s survival, but it won’t be for lack of trying.

The inertia some businesses in a very competitive world are displaying makes them look like rabbits trapped in the beam of a car’s headlights. Not to mention government administrations. We no longer seem to have the stability, or rather the slowness of change, needed to function effectively. Perhaps this has always been the case, I don’t know. We’ve never before in history had such a long period of peace & prosperity for such a broad section of the population. So how to maintain this long term is a new challenge in itself.

Danger Ahead!

As mentioned above, if there is one thing that can ruin this party, it’s complexity. I’m more convinced of this than ever before. I’ve been talking to some really smart people in the industry over the weekend and everyone seems to agree on that one. So if I can offer some advice to any provider of tools to build a private cloud: minimize complexity and the number of tools needed to get it set up and working. Make sure that if you need multiple building blocks and tools, their integration is top notch and second to none. Provide clear guidance and make sure it is really as easy to set up, maintain and adapt as it should be. If not, businesses are going to get a bloody nose and IT Pros will choose other solutions to get things done.

Experts2Experts Conference London (UK) 2011


I’m at the Experts2Experts Conference in London and I’m having a great time talking shop, tech & business with my fellow IT Pro colleagues from around Europe: Aidan Finn, Jeff Wouters, Carsten Rachfahl, Ronnie Isherwood.

It might be fun for Microsoft to join us for some of these lunch & dinner time discussions. It would provide them with great feedback, ideas and concerns. Very educational. While we’re discussing Citrix, VMware, Microsoft & ISV solutions (RES, AppSense), this is not a vendor-centric conference. Sure, we all work with these products, but we’re discussing them from our point of view. The challenges, the issues, the successes & failures are all discussed and mentioned.

There’s a high enough density of virtualization, private cloud and desktop virtualization (VDI, Terminal Servers, application virtualization, client-hosted virtual desktops etc.) expertise at the conference to make things interesting.

Tomorrow I’ll be sharing some musings on “High Performance & High Availability Networks for Hyper-V Clusters” during my session.

Direct Connect iSCSI Storage To Hyper-V Guest Benefits From VMQ & Jumbo Frames


I was preparing a presentation on highly available & high performance networking for Hyper-V clusters by, you guessed it, presenting it. During that presentation I mentioned Jumbo Frames & VMQ (VMDq in Intel speak) for the virtual machine, Live Migration and CSV networks. Jumbo frames are rather well known nowadays, but VMQ is still something people have read about or, at best, have tinkered with; not many are using it in production.

One of the reasons for this is that it isn’t explained and documented very well. You can find some decent explanations of what it is and does for you, but that’s about it. The implementation information is woefully inadequate and, as with many advanced network features, there are many hiccups and intricacies. But that’s a subject for another blog post. I need some more input from Intel and/or MSFT before I can finish that one.

Someone stated/asked that they knew Jumbo Frames are good for throughput on iSCSI networks and as such would also benefit iSCSI networks provided to the virtual machines. But how about VMQ? Does that do anything at all for IP based storage? Yes, it does. As a matter of fact, it’s highly recommended by MSFT IT in one of their TechEd 2010 USA presentations on Hyper-V and storage.

So yes, enable VMQ on both NIC ports used for iSCSI to the guest. Ideally these are two dedicated NICs connected to two separate switches to avoid a single point of failure. You do not need to team these on the host or have Multipath I/O (MPIO) running for this at the parent level. The MPIO part is done in the guest virtual machines themselves, as that’s where the iSCSI initiator lives with direct connect. And to address the question that followed: yes, you can also use Multiple Connections per Session (MCS) in the guest if your storage device supports it, but I must admit I have not seen that used in the wild. And then, finally coming to the point: both MPIO and MCS work transparently with Jumbo Frames and VMQ. So you’re good to go. :)
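For those who want to see what that looks like in practice, here’s a minimal sketch using the NetAdapter PowerShell cmdlets. Note these cmdlets ship with Windows Server 2012 and later; on 2008 R2 you set the same options in the NIC driver’s Advanced properties instead. The adapter names are just examples for this kind of setup, and the exact jumbo frame value is driver specific.

```powershell
# Sketch: enable VMQ and jumbo frames on the two guest-facing iSCSI NICs.
# Adapter names are examples; adjust them to your own naming convention.
foreach ($nic in 'iSCSI-Guest-1', 'iSCSI-Guest-2') {
    # Enable Virtual Machine Queue offloading on the physical NIC.
    Enable-NetAdapterVmq -Name $nic
    # Jumbo frame size is driver specific; 9014 bytes is a common maximum.
    Set-NetAdapterAdvancedProperty -Name $nic -RegistryKeyword '*JumboPacket' -RegistryValue 9014
}
# Verify the settings took effect.
Get-NetAdapterVmq -Name 'iSCSI-Guest-*'
Get-NetAdapterAdvancedProperty -Name 'iSCSI-Guest-*' -RegistryKeyword '*JumboPacket'
```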

KB2616676 Patching Hiccup Discovered by Out of Sync Cluster Nodes


I was investigating an issue on a Windows 2008 R2 SP1 cluster, and as part of my checklist I ran the cluster validation. That came out clean except for the fact that it complained about an update that was missing on some of the nodes.
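For those following along, running the validation from PowerShell is a one-liner with the FailoverClusters module (the cluster name below is a placeholder):

```powershell
# Run cluster validation; the cmdlet returns the generated HTML report file.
Import-Module FailoverClusters
Test-Cluster -Cluster 'MyCluster'
```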

That update was Microsoft Security Advisory: Fraudulent digital certificates could allow spoofing, or KB2607712. Not that these cluster nodes are web clients, but this is not good and we need to have it fixed for both security & cluster supportability reasons.

But neither WSUS nor Windows Update indicates that there is an update available for these nodes. So I download the patch manually and try to install it. Then I get the response: ‘This update is not applicable to your computer’.

No good! Now we need to find out what’s up. After some searching we find other people with this issue in the Microsoft forums: KB2607712 does not download to clients.

As it turns out, KB2607712 was erroneously marked as superseded by KB2616676. This means that if that update is approved, or installed, the download/installation of KB2607712 is blocked. I check this on the nodes involved and this is indeed the case.
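A quick way to do that check across nodes from PowerShell (Get-HotFix ships with PowerShell 2.0; the node names are placeholders):

```powershell
# List the install state of both updates on every cluster node.
$nodes = 'ClusterNode1', 'ClusterNode2'
foreach ($kb in 'KB2616676', 'KB2607712') {
    Get-HotFix -Id $kb -ComputerName $nodes -ErrorAction SilentlyContinue |
        Select-Object CSName, HotFixID, InstalledOn
}
# Nodes missing from the output don't have that update installed.
```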

Now please note that the forum reply states “erroneously marked as superseded”, which means that BOTH updates are needed. The workaround (a scripted sketch follows below the list) is to:

  • uninstall/unapprove KB2616676
  • install/approve KB2607712
  • reinstall/approve KB2616676 again after your clients/hosts have KB2607712 installed.
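On the nodes themselves that manual dance can be scripted. A rough sketch, run elevated and one node at a time so the cluster stays up (the .msu file names follow the standard x64 package naming for Windows Server 2008 R2 and are assumptions on my part):

```powershell
# 1. Uninstall KB2616676 (wusa supports /uninstall /kb: on Windows 7 / 2008 R2).
wusa.exe /uninstall /kb:2616676 /quiet /norestart
# ... reboot the node and let it rejoin the cluster ...

# 2. Install the manually downloaded KB2607712 package.
wusa.exe .\Windows6.1-KB2607712-x64.msu /quiet /norestart

# 3. Reinstall KB2616676 once KB2607712 is on the node.
wusa.exe .\Windows6.1-KB2616676-x64.msu /quiet /norestart
```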

There should be a revision of KB2616676 coming in the future that includes KB2607712, meaning that KB2607712 will truly be superseded by it. As of this writing that revised version has not been released yet, so you’re left with the workaround for now.

A piece of advice: keep your cluster nodes patched, but do it in a well organized manner so they remain in sync. Don’t just do half of the nodes. The good thing that came out of this is that we discovered some other servers/clients also did not get KB2607712 because of this. So now the company can address that issue using the workaround. I did the manual uninstall/reinstall workaround for the cluster nodes. For the clients and other servers I suggested they go the WSUS way.

Using Host Names in IIS in Combination with a KEMP LoadMaster


At a client, the changeover of a web site from old servers to new ones led to the investigation of an issue with the hardware load balancer. Since that web site is related to an existing surveyors solution suite that already had a KEMP LoadMaster 2200 in use, they figured we’d also use it for this web site and no longer use WNLB.

Now, the original web site had multiple DNS entries and host header names defined in IIS (see Configure a Host Header for a Web Site (IIS 7)). Host header names in IIS allow you to host multiple web sites on an IIS server using the same IP address and port. A small added security benefit is that surfing on the IP address fails, which means we marginally disrupt some script kiddies & get an extra security checkbox marked during an audit. ;)

In our example we needed the following (note: the real names have been changed, as have the reasons why; the business & historical justifications behind them don’t matter here):

  • ntrip.surveyor.lab needs to be handled by the load balanced web servers in the solution.
  • http://www.surveyor.lab needs to be redirected to another web server to keep the business happy. However, for political reasons, we have to keep the DNS record for http://www.surveyor.lab pointing to the load balanced servers, i.e. the LoadMaster VIP.

Now, without host names in IIS, all worked fine until we wanted to use HTTP redirect. As the web site has the same IP address for both names, we either redirected them both or neither. To fix this we needed two sites in IIS: the real one hosting ntrip.surveyor.lab and a “fake” one hosting the http://www.surveyor.lab site that we want to redirect. Well, as both are hosted on the same IP address and port on the IIS server, we need to use host names. But then the sites became unavailable.
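As a sketch, this is what that dual-site setup looks like with the WebAdministration PowerShell module (the host names are the lab examples from this post, the physical paths are made up; appcmd.exe can do the same):

```powershell
Import-Module WebAdministration
# Two sites sharing *:80 on the same server, told apart by host name.
New-WebSite -Name 'ntrip.surveyor.lab' -Port 80 -HostHeader 'ntrip.surveyor.lab' `
    -PhysicalPath 'C:\inetpub\ntrip'
# The "fake" site that only exists to redirect the www traffic elsewhere.
New-WebSite -Name 'www.surveyor.lab' -Port 80 -HostHeader 'www.surveyor.lab' `
    -PhysicalPath 'C:\inetpub\wwwfake'
```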

When checking the LoadMaster configuration, the virtual service for the web servers seemed fine.


Is this a limitation of hardware load balancing or of this specific LoadMaster? Some searching on the internet made it look like I was about the only one on the planet dealing with this issue, so no help there.

KEMP Support Rocks

I already knew this, but this experience reaffirms it: KEMP Technologies really does care about their customers and they are very fast & responsive. I threw a quick question to @KempTech on Twitter and they responded very fast with some pointers. After I replied with some more details, they offered to take it up via other means, as Twitter has its limits. OK, no problem. The next morning I got an e-mail from one of their engineers (Ekkehard) with more information and a request for more input from our side. I quickly made a Visio diagram of the current and the desired situation. Based on this he let me know it should work.


He asked for a copy of the configuration and already pointed to the solution:

And what exactly happens – does the RS turn “red” in the “View/Modify Services” view? That might be caused by the health check settings…
(Remember that a 302 is considered NOT ok, so you had to enter the proper check URL and/or HTTP1.1 hostname)

But at that moment I did not realize this yet. I saw no error, nor the real server turning red to indicate it was down. So we went through the configuration and decided to test without forcing layer 7 to see what happened. This didn’t make a difference, and it wouldn’t really have been a solution if it had, as we needed layer 7 and layer 7 transparency.

Ekkehard also noticed my firmware was getting rather old (don’t fix what isn’t broken :)) and suggested an upgrade (5.1-24 to 5.1-74). So I did, rebooted and tested some more settings. To make sure I didn’t miss anything I threw a network sniffer (Wireshark) against the issue. And guess what? As soon as I added a host name to the IIS web site bindings, I didn’t even get any requests from my client on that server anymore. So the traffic was definitely being stopped at the LoadMaster. Without a host name, requests from a client came through perfectly. That was not IIS’s doing, as with a host name nothing came into the server at all. So why would the LoadMaster stop traffic to a real server? Because it’s down, that’s why, just as Ekkehard had indicated in one of his mails, but we didn’t see it then.

Better check again, and sure enough, the health service told me the real servers were down. Hey… that’s new. Did the previous firmware not show this, or just show it more slowly? I can’t say for sure. It’s either me being too impatient, a hiccup, the firmware or premature dementia. :-/

Root Cause

So what happens? The default health check uses HTTP 1.0. You can customize it with a path like /owa or such, but in essence it uses the IP address of the real server. And guess what: with host header names in IIS that isn’t allowed, otherwise IIS couldn’t figure out which web site you want to go to when you use this feature to run multiple sites on the same IP address and port. So we need to do the health check based on the host name. Can the LoadMaster do that for us? Yes, it can!
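You can reproduce the difference between the two probe styles by hand. A quick sketch (the real server IP is a placeholder, the host name is the one from this post):

```powershell
# Send a raw HTTP request to a real server and print the status line it returns.
function Send-RawHttpHead {
    param([string]$Server, [string]$Request)
    $client = New-Object System.Net.Sockets.TcpClient($Server, 80)
    $stream = $client.GetStream()
    $bytes  = [System.Text.Encoding]::ASCII.GetBytes($Request)
    $stream.Write($bytes, 0, $bytes.Length)
    $reader = New-Object System.IO.StreamReader($stream)
    $reader.ReadLine()   # e.g. "HTTP/1.1 200 OK", or an error status
    $client.Close()
}

# What the default health check sends: an HTTP/1.0 HEAD, no Host header.
# With host-header-only bindings IIS can't match a site, so this probe fails.
Send-RawHttpHead '192.168.1.11' "HEAD / HTTP/1.0`r`n`r`n"

# What the fixed health check sends: HTTP/1.1 with the host name filled in.
Send-RawHttpHead '192.168.1.11' "HEAD / HTTP/1.1`r`nHost: ntrip.surveyor.lab`r`nConnection: close`r`n`r`n"
```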

The Fix

You need to enable HTTP 1.1 and fill out the host name you want to use for health checking. In our case that’s ntrip.surveyor.lab. That’s all there is to it. Easy as can be, if you know. And Ekkehard knew; he pointed to this in his quoted mail above.


Lessons Learned

So how did I not know this? Isn’t this documented? Sure enough, on page 56 of the LoadMaster manual it says the following:

7 HTTP: The LoadMaster opens a TCP connection to the Real Server on the Service port (port 80). The LoadMaster sends an HTTP/1.0 HEAD request to the server, requesting the page “/”. If the server sends an HTTP response with a status code of 200-299, 301, 302 or 401, the LoadMaster closes the connection and marks the server as active. If the server fails to respond within the configured response time for the configured number of times, or if it responds with a different status code, it is assumed dead. HTTP 1.0 and 1.1 support is available; using HTTP 1.1 allows you to check host header enabled web servers.

Typical: you read the exact line of information you need AND understand it only after having figured it out. Linking that information (yes, we always read all manuals completely, don’t we?) to the situation at hand isn’t always that fast a process, but I got there in the end with some help from KEMP Technologies.

One hint: perhaps mention this in the handy tips that pop up when you hover over a setting in the LoadMaster console. I rely on those a lot, and a mention of “HTTP 1.1 allows you to check host header enabled web servers” might have helped me out. But it’s not there. A very poor excuse, I know…


Host Header Names & HTTP Redirection

After having fixed this issue, I proceeded to configure HTTP redirect in IIS 7.5. For this I used two sites. One was just a fake site tied to the www.surveyors.lab host name in IIS on port 80.


For this site I created an HTTP redirect to www.bussines.lab/surveyors/services. This works just fine as long as you don’t forget the http:// in the redirect URL.
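As a sketch, the same redirect can be configured from PowerShell with the WebAdministration module (site and destination names as used in this post); note that the destination includes the http:// prefix:

```powershell
Import-Module WebAdministration
# Enable HTTP redirect on the "fake" site and point it at the full URL,
# including the http:// prefix (leaving it off causes the loop shown below).
Set-WebConfigurationProperty -PSPath 'IIS:\Sites\www.surveyors.lab' `
    -Filter 'system.webServer/httpRedirect' -Name enabled -Value $true
Set-WebConfigurationProperty -PSPath 'IIS:\Sites\www.surveyors.lab' `
    -Filter 'system.webServer/httpRedirect' -Name destination `
    -Value 'http://www.bussines.lab/surveyors/services'
```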


So it has to be http://www.bussines.lab/surveyors/services or you’ll get a funky loop effect looking like this:

http://www.surveyors.lab/www.bussines.lab/surveyors/services/www.bussines.lab/surveyors/services/www.bussines.lab/surveyors/services

Firefox will tell you you have a redirect loop that will never end, but Internet Explorer doesn’t; it just fails. You do get that URL as a pointer to the cause of the issue. That is, if you can relate it to the cause.

The other site was the real one, configured with its own host name binding and without redirection.


Don’t forget to do this on all real servers in the farm! The next thing I need to find out is how to health check two host names in the LoadMaster, as I have two web sites with the same IP address and port but different host names.