Quick Demo Video Of Site Failover With KEMP Loadmaster Global Balancing


Here’s a quick video that demonstrates how you can achieve site failover via the KEMP Loadmaster Global Balancing feature. As long as you know what this can do for you and realize that it is about site failover and high availability, not continuous availability without a second of service interruption, you can deliver nice results with this technology across city campuses or between cities.

In our scenario we normally connect to the primary data center (weighted round robin) and fail over to the disaster recovery center (DRC) when the primary site fails for some reason.

It’s very busy at the moment, but I hope to address this topic in a bit more detail in the future. All of this runs virtualized on Hyper-V and performs just fine.


Don’t Forget To Leverage The Benefits of RD Gateway On Hyper-V & RDP 8/8.1


So you upgraded your TS Gateway virtual machine on W2K8(R2) to RDS Gateway on W2K12(R2) to make sure you get the latest and the greatest functionality and cut off any signs of technology debt way in advance. Perhaps you were inspired by my blog series on how to do this, and maybe you jumped through the x86 to x64 bit hoop whilst at it. Well done.

Now, when upgrading or migrating from W2K8(R2), a lot of people forget about some of the enhancements in W2K12(R2). This is especially true if you don’t notice much difference by doing so. That’s why I see people forget about UDP. Why? Well, things will keep working as they did before Windows Server 2012: the RDS Gateway falls back to HTTP or to RPC over HTTP (legacy clients). I have seen deployments where both the Windows and the perimeter firewall rules to allow UDP over 3391 were missing, let alone port 3391 being allowed in the RAP. But then you miss out on the benefits it offers (a better user experience over less than great network connections and with graphics) as well as on those of that ever more capable thingy called RemoteFX, if you use that.

For those of you who don’t know yet: HTTP and UDP are the transports preferred by the RD Gateway, and they are more efficient than RPC over HTTP, which means better scaling and a better experience under low bandwidth and bad connectivity conditions. When the HTTP transport channels are up (incoming & outgoing traffic), two UDP side channels are set up that can be used to provide both reliable (RDP-UDP-R) and best-effort (RDP-UDP-L) delivery of data. UDP also leverages SSL via the RD Gateway because it uses Datagram Transport Layer Security (DTLS). For more info see RD Gateway Capacity Planning in Windows Server 2012. Furthermore, it proves you have no reason not to virtualize this workload, and I concur!

So why not set it up!? Check your firewall rules on the RD Gateway server and set the rules accordingly. Do the same for your perimeter firewalls or any others in between your users and your RD Gateway.
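If you prefer scripting over clicking through the firewall console, here’s a minimal sketch using the NetSecurity cmdlets that ship with W2K12(R2). The rule names are my own, and you should scope the rules to your environment; the RDS role normally creates them for you, but as said, I’ve seen deployments where they were missing:

  # Allow the RD Gateway HTTP transport (TCP 443) and the UDP side channels (UDP 3391)
  New-NetFirewallRule -DisplayName "RD Gateway HTTP (TCP 443)" -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow
  New-NetFirewallRule -DisplayName "RD Gateway UDP (UDP 3391)" -Direction Inbound -Protocol UDP -LocalPort 3391 -Action Allow

  # Quick check of what's already in place for port 3391
  Get-NetFirewallPortFilter | Where-Object { $_.LocalPort -eq 3391 }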


Under the properties of your RD Gateway server you need to make sure UDP is enabled and listening on the needed IP address(es).


A client who connects over your RDS Gateway server, Windows Server 2012(R2) that is, and checks the network connection properties (click the “wireless NIC” like icon in the connection bar) sees that UDP is enabled. If they don’t see UDP as enabled and they aren’t running Windows 8 or 8.1 (or W2K12R2), they can upgrade to RDP 8.1 on Windows 7 or Windows Server 2008 R2! When they connect to a Windows 7 SP1 or Windows Server 2008 R2 machine, make sure you read the blog post Get the best RDP 8.0 experience when connecting to Windows 7: What you need to know, as it contains some great information on what you need to do to enable RDP 8/8.1 when connecting to Windows 7 SP1 or Windows Server 2008 R2:

  1. “Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Remote Session Environment\Enable Remote Desktop Protocol 8.0” should be set to “Enabled”
  2. “Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Connections\Select RDP Transport Protocols” should be set to “Use both UDP and TCP” => Important: after the above 2 policy settings have been configured, restart your computer. (A registry sketch of these two settings follows after this list.)
  3. Allow port traffic: If you’re connecting directly to the Windows 7 system, make sure that traffic is allowed on TCP and UDP for port 3389. If you’re connecting via Remote Desktop Gateway, make sure you use RD Gateway in Windows Server 2012 and allow TCP port 443 and UDP port 3391 traffic to the gateway
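If you’d rather push these two settings with a script than via Group Policy, they correspond, as far as I know, to two registry values under the Terminal Services policy key. Consider this a hedged sketch and verify against your own GPO reporting:

  # Assumption: fServerEnableRDP8 = 1 maps to "Enable Remote Desktop Protocol 8.0"
  # and SelectTransport = 0 maps to "Use both UDP and TCP".
  $key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
  New-Item -Path $key -Force | Out-Null
  Set-ItemProperty -Path $key -Name fServerEnableRDP8 -Value 1 -Type DWord
  Set-ItemProperty -Path $key -Name SelectTransport -Value 0 -Type DWord
  # Remember: a restart is required before these settings take effect.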

Cool, you’ve done it, and you can verify it works. Under monitoring in the RD Gateway Manager you can see 3 connections per session: one is HTTP and the two others are UDP.


Life is good. But if you want to see the difference really well demonstrated, try connecting to a Windows 7 SP1 computer with RDP 8 enabled but UDP disabled and play a YouTube video, then do the same with both enabled; the difference is rather impressive. Likewise if you leverage RemoteFX in a VM. The difference in experience is very clear, just try it! While you’re doing this, look at the UDP “Kilobytes Sent” stats (refresh the monitoring tab); you’ll see UDP being put to work when playing a video in your RDP session.


Load balancing Hyper-V Workloads With High To Continuous Availability With a KEMP Loadmaster


I’m working on some labs and projects with KEMP Loadmaster load balancing appliances (LM 2400, LM-R320) that will lead to some blog posts on load balancing several workloads, which all run on Windows Server 2012 R2 Hyper-V or integrate into Azure. The load balancers used in the labs are the virtual appliances. Depending on the needs and environment these are a very good, cost effective option for production as well, and depending on the version you get they scale very well. Hence their use in cloud environments; they will not hold you back at all!

To stimulate your interest in load balancing and high availability I’ve put up a video on load balancing RD Gateway services. Consider it a teaser or introduction to more about the subject.

Why use an appliance (hardware/virtual)? Well let’s look at the 2 alternatives:

  • Round robin DNS, which is also sometimes used, is just too low tech for most real life scenarios. Sometimes it can’t be used at all, or it is less efficient, which impacts scalability and performance. On top of that it doesn’t provide health checking for failover purposes.
  • I’ve also said before that while Windows NLB provides layer 4 load balancing out of the box, it’s pretty basic. It also often causes a lot of network grief and the implementation can be tedious. This has not improved in an ever more virtualized & cloud based world. On top of that, when network virtualization comes into play you might paint yourself into a corner, as those two don’t mix. But if that’s not a concern and you’re on a budget, I’ve used it with success in the past as well.

Load Balancing In An Ever More Demanding Virtualized & Cloudy World


We’ve been using the Kemp Loadmasters for many years now and they have served us very well. You might know that Microsoft Azure has a partnership with Kemp Technologies to provide full featured load balancing in your public & hybrid cloud solutions. I’m pretty happy with that, as whenever we talk about load balancing with Microsoft we always end up discussing the need for more features and layer 7 support. I sometimes jokingly tease them that this is due to their Windows NLB legacy. While I have done some magic with that, it is way too limited for today’s (and yesterday’s) demands and needs. Also, the hacks used to get it to work can’t be used with network virtualization. In the cloud Microsoft has the Azure Load Balancer. Whilst nice when combined with availability sets, many of the current workloads need more. That’s exactly what the KEMP Virtual LoadMaster for Azure delivers in their partnership with Microsoft:

  • Layer 4, Layer 7 Load Balancing
  • Layer 7 (or Cookie) Persistence
  • SSL Offload/SSL Acceleration
  • Application Health Checking
  • Adaptive (Server Resource) Load Balancing
  • Layer 7 Content Switching
  • Application Acceleration: HTTP Caching, Compression & IPS

To me (and many other IT Pros) Kemp is the company that opened load balancing up to everyone on this planet with budget friendly but high value solutions. They took away the barrier to better & more capable load balancing for the masses. Furthermore, they keep improving, and I have seen many existing customers, myself included, get ever more benefits with the newer firmware releases, even on their entry level, older models like the LM2200 that are not for sale anymore. So you can keep using them or move them to the lab. They have great support and respond very quickly to vulnerabilities like Heartbleed, Shellshock and POODLE.


Another benefit of this partnership is that we can use the load balancing solution we know and trust in all our environments: on premises (physical or virtual appliance), in the cloud & at our hosting companies. Partnerships with OEMs ensure that you can use the hardware you prefer (the DELL R320 is a nice example) and their Virtual LoadMaster now even extends into the cloud. So our options are to …

… deploy an appliance …

… virtualize the LoadMasters …

… leverage Kemp in the cloud …

… or select your own preferred OEM …

They cover all our bases with that line up and it helps with operational ease & efficiencies.

As I’m investigating some scenarios with KEMP LoadMasters in a Hyper-V environment (on premises, multi site, Azure IaaS & Multifactor Authentication), you can expect to see some blog posts on this. Some of these will leverage technologies available in Windows Server vNext (Technical Preview). Lots of very interesting ideas to support high availability & flexibility that are affordable and not just point solutions.

Ah, the joy of being in virtualization is that one gets great exposure to storage, networking, cloud solutions and on premises deployments. The experience & knowledge of the entire stack isn’t just fun (yes, working can be fun) but it is also what allows you to build great solutions.

Using Host Names in IIS in Combination with a KEMP LoadMaster


At a client, the change over of a web site from old servers to new ones led to the investigation of an issue with the hardware load balancer. Since that web site is related to an existing surveyor’s solution suite that already had a KEMP LoadMaster 2200 in use, they figured we’d also use it for the web site and no longer use WNLB.

Now the original web site had multiple DNS entries and host header names defined in IIS (see Configure a Host Header for a Web Site (IIS 7)). Host header names in IIS allow you to host multiple web sites on an IIS server using the same IP address and port. A small added security benefit is that surfing on the IP address fails, which means we marginally disrupt some script kiddies & get an extra security checkbox marked during an audit ;-)

In our example we needed:

Note: The real names have been changed as well as the reasons why as this has some business & historical justifications that don’t matter here.

ntrip.surveyor.lab needs to be handled by the load balanced web servers in the solution. http://www.surveyor.lab needs to be redirected to another web server to keep the business happy. However, for political reasons, we have to keep the DNS record for http://www.surveyor.lab pointing to the load balanced servers, i.e. the LoadMaster VIP.

Now, without host names in IIS all worked fine, until we wanted to use HTTP redirect. As the web site has the same IP address for both names, we either redirected them both or none. To fix this we needed two sites in IIS: the real one hosting ntrip.surveyor.lab and a “fake” one hosting the http://www.surveyor.lab site that we want to redirect. As both are hosted on the same IP address and port on the IIS server, we need to use host names. But then the sites became unavailable.
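For reference, a minimal sketch of such a two-site setup with the WebAdministration module. The site names and physical paths are purely illustrative, not the client’s actual configuration:

  Import-Module WebAdministration

  # The real site, bound to the ntrip host name on the shared IP address and port
  New-Website -Name "ntrip" -Port 80 -HostHeader "ntrip.surveyor.lab" -PhysicalPath "C:\inetpub\ntrip"

  # The "fake" site whose only job is to redirect the www traffic elsewhere
  New-Website -Name "www-redirect" -Port 80 -HostHeader "www.surveyor.lab" -PhysicalPath "C:\inetpub\wwwredirect"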

When checking the LoadMaster configuration, the virtual service for the web servers seemed fine.


Is this a limitation of hardware load balancing or of this specific LoadMaster? Some searching on the internet made it look like I was about the only one on the planet dealing with this issue, so no help there.

Kemp Support Rocks

I already knew this, but this experience reaffirms it: KEMP Technologies really does care about their customers and is very fast & responsive. I threw a quick question to @KempTech on Twitter and they responded very fast with some pointers. After I replied with some more details, they offered to take it on via other means, as Twitter has its limits. OK, no problem. The next morning I got an e-mail from one of their engineers (Ekkehard) with more information and a request for more input from our side. I quickly made a VISIO diagram of the current and the desired situation. Based on this he let me know this should work.


He asked for a copy of the configuration and already pointed to the solution:

And what exactly happens – does the RS turn “red” in the “View/Modify Services” view? That might be caused by the health check settings…
(Remember that a 302 is considered NOT ok, so you had to enter the proper check URL and or / HTTP1.1 hostname)

But at that moment I did not realize this yet. I saw no error or the real server turning red indicating it was down. So we went through the configuration and decided to test without forcing layer 7 to see what happened. This didn’t make a difference, and it wouldn’t really have been a solution if it had, as we needed layer 7 and layer 7 transparency.

Ekkehard also noticed my firmware was getting rather old (don’t fix what isn’t broken ;-)) and suggested an upgrade (5.1-24 to 5.1-74). So I did, rebooted and tested some more settings. To make sure I didn’t miss anything I threw a network sniffer (WireShark) against the issue. And guess what? As soon as I added a host name to the IIS web site bindings I didn’t even get any request from my client on that server anymore. So it was definitely being stopped at the LoadMaster. Without it, requests from a client came through perfectly. That was not IIS’s doing, as with a host name nothing came into the server. So why would the LoadMaster stop traffic to a real server? Because it’s down, that’s why, just like Ekkehard had indicated in one of his mails, but we didn’t see it then.

Better check again, and sure enough, the health service told me the real servers were down. Hey … that’s new. Did the previous firmware not show this, or just show it more slowly? I can’t say for sure. It’s either me being too impatient, a hiccup, the firmware or premature dementia :-S

Root Cause

So what happens? The default health check uses HTTP 1.0. You can customize it with a path like /owa or such, but in essence it uses the IP address of the real server, and guess what: with a host header name in IIS that isn’t allowed, as otherwise IIS can’t figure out which web site you want to go to when you’re using this feature to run multiple sites on the same IP address and port. So we need to check the health based on the host name. Can the LoadMaster do that for us? Yes it can!
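You can reproduce what the health check sees from any machine with a bit of PowerShell. A sketch, with a placeholder IP address standing in for the real server: a request straight to the IP, like the HTTP 1.0 probe does, typically fails with “400 Bad Request (Invalid Hostname)” against a host header bound site, while the same request carrying the proper Host header succeeds:

  # Simulate the probe against a real server (192.0.2.10 is a placeholder)
  $req = [System.Net.WebRequest]::Create("http://192.0.2.10/")
  $req.Method = "HEAD"
  try { $req.GetResponse() } catch { $_.Exception.Message }  # fails: no binding matches the bare IP

  # Same request, but with the host name IIS is bound to (the Host property needs .NET 4.0 or later)
  $req = [System.Net.WebRequest]::Create("http://192.0.2.10/")
  $req.Host = "ntrip.surveyor.lab"
  $req.Method = "HEAD"
  ($req.GetResponse()).StatusCode  # OK (200), so the probe would pass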

The fix

You need to enable HTTP 1.1 and fill out the host name you want to use for health checking. In our case that’s ntrip.surveyor.lab. That’s all there is to it. Easy as can be, if you know. And Ekkehard knew; he pointed to this in his quoted mail above.


 

Lessons Learned

So how did I not know this? Isn’t this documented? Sure enough, on page 56 of the LoadMaster manual it says the following:

7  HTTP  The LoadMaster opens a TCP connection to the Real Server on the Service port (port 80). The LoadMaster sends an HTTP/1.0 HEAD request to the server, requesting the page “/”. If the server sends an HTTP response with a status code of 2xx (200-299, 301, 302, 401) the LoadMaster closes the connection and marks the server as active. If the server fails to respond within the configured response time for the configured number of times, or if it responds with a different status code, it is assumed dead. HTTP 1.0 and 1.1 support is available; using HTTP 1.1 allows you to check host header enabled web servers.

Typical: you read the exact line of information you need AND understand it after having figured it out. Now, linking that information (yes, we always read all manuals completely ;-)) to the situation at hand isn’t always that fast a process, but I got there in the end with some help from KEMP Technologies.

One hint: perhaps mention this in the handy tips that pop up when you hover over a setting in the LoadMaster console. I rely on those a lot, and a mention of “HTTP 1.1 allows you to check host header enabled web servers” might have helped me out. But it’s not there. A very poor excuse, I know … ;-)


Host Header Names & HTTP redirection

After having fixed this issue I proceeded to configure HTTP redirect in IIS 7.5. For this I used two sites. One was just a fake site tied to the www.surveyors.lab host name in IIS on port 80.


For this site I created an HTTP redirect to www.bussines.lab/surveyors/services. This works just fine, as long as you don’t forget the http:// in the redirect URL.


So it has to be http://www.bussines.lab/surveyors/services or you’ll get a funky loop effect looking like this:

http://www.surveyors.lab/www.bussines.lab/surveyors/services/www.bussines.lab/surveyors/services/www.bussines.lab/surveyors/services

Firefox will tell you that you have a loop that will never end, but Internet Explorer doesn’t; it just fails. You do get that URL as a pointer to the cause of the issue, that is, if you can relate it to that.
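For those who script their IIS configuration, a hedged sketch of that redirect with the WebAdministration module (“www-redirect” being my placeholder site name from earlier). Note the absolute http:// destination that avoids the loop:

  Import-Module WebAdministration

  # Enable the redirect on the fake site and point it at an absolute URL
  Set-WebConfigurationProperty -PSPath "IIS:\Sites\www-redirect" -Filter "system.webServer/httpRedirect" -Name enabled -Value $true
  Set-WebConfigurationProperty -PSPath "IIS:\Sites\www-redirect" -Filter "system.webServer/httpRedirect" -Name destination -Value "http://www.bussines.lab/surveyors/services"
  # Without the http:// prefix the destination is treated as relative, giving the endless loop shown above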

The other site was the real one, configured with the following bindings and without redirection.


Don’t forget to do this on all real servers in the farm! The next thing I need to find out is how to health check two host names in the LoadMaster, as I have two web sites with the same IP address and port but different host names.

Hyper-V, KEMP LoadMaster & DFS Replication Provide FTP Solutions For Surveyors Network


Remember the blog entry A Hardware Load Balancing Exercise With A Kemp Loadmaster 2200, about using a KEMP Loadmaster to provide redundancy for a surveyor’s GPS network? Well, last month we got commissioned to come up with a redundant FTP solution for their needs, and this blog is about what we came up with. The aim was to make do with what was already available.


FTP 7.5 in Windows 2008 R2

We use the FTP server available in Windows 2008 R2, which provides us with all the functionality we need: user isolation and FTP over SSL.

The data from all the GPS stations is sent to the FTP server for safe keeping and is used to overcome certain issues customers might have with missing data from the surveying solution. This data is not made available to customers by default; it’s only for special cases & purposes. So we collect the data in its own folder, named after its account, so we can configure user isolation. This also prevents the GPS stations from writing in locations where they shouldn’t.

As every GPS station logs in with the “Station” account, it ends up in the “Station” folder as its root FTP folder and can’t read or write outside of that folder. The survey solution service desk can FTP into that folder and access any data they want.

The data that’s being provided by the software solution (LanSurvey01 and LanSurvey02) is sent to its own folder “Data” that is also set up with user isolation to prevent the application from reading or writing anywhere else on the file system.

That data should be publicly available to the customers, and for this we created a separate FTP site called “Public” that is configured for anonymous access to the same Data folder, but with read permissions only. This way the customers can get all the data they need but have only read access to the required data and nothing more.
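As a sketch, the folder layout that FTP user isolation expects for local Windows accounts is a LocalUser\<account name> tree under the FTP root (the account names are the ones from this setup; the root path is a placeholder):

  # FTP root with per-account isolation folders
  New-Item -ItemType Directory -Path "D:\FtpRoot\LocalUser\Station" | Out-Null  # the GPS stations land here
  New-Item -ItemType Directory -Path "D:\FtpRoot\LocalUser\Data" | Out-Null     # LanSurvey01 & LanSurvey02 write here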

For more information on setting up FTP 7.5 and using FTP over SSL you might take a look at http://learn.iis.net/page.aspx/304/using-ftp-over-ssl/ and read my blog post on FTP over SSL, Pollution of the Gene Pool a Real Life “FTP over SSL” Story.

High Availability

In the section above we’ve taken care of the FTP needs. Now we still need redundancy. We could use Windows NLB, but this network already uses a KEMP Loadmaster because the surveyor’s software has some limitations in its configuration capabilities that don’t allow Windows Network Load Balancing to be used.

We want both the GPS stations and the surveyor’s application servers to be able to send FTP data when one of the receiving FTP servers is down for some reason (updates, upgrades, maintenance or failure). So we set up a VIP for use with FTP on the Kemp Loadmaster. This VIP is what is used by the GPS stations and the application to write, and by the customers to read, the FTP data.

DFS-R to complete the solution

But up until now we’ve been ignoring an issue. When using a load balancer to push data to hosts, we need to ensure that all the data is available on all the nodes all of the time. You could opt to have the users access the FTP service via the VIP address only and push the data to both nodes without load balancing. The latter might be done at the source, but then you have twice the amount of data to push out. It also means extra work to configure and maintain the solution. We could copy the data to one FTP node and copy it from there. That works, but it leaves you very vulnerable to a service outage when the node that gets the original copy is down: no new data will be available. Another issue is the fact that you need a rock solid way to copy the data and have it done in a timely manner, even after down time of one or more of the nodes.

As you read above, we provide a VIP as a target for the surveyor’s application and the GPS stations to send their data to. This means the data will be sent to the FTP farm even if one of the nodes is down for some reason. To get the data that arrives from 2 application servers and from 40 GPS stations synchronized and up to date on both nodes, we use Distributed File System Replication (DFS-R), built into Windows 2008 R2. We have no need for a DFS namespace here, so we only use the replication feature. This is easy and fast to set up (add the DFS service from the File Server role) and it doesn’t require any service down time (no reboot required). The fact that both FTP nodes are members of a Windows 2008 R2 domain does help with making this easy. To make sure we have replication in all directions we set it up as a full mesh, and the replication schedule is 24/7, no days off :-) Since we chose to replicate the FTP root folder, we have both the Data and the Station folders covered, as well as the folder structure needed for FTP user isolation to function.
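On Windows 2008 R2 we clicked this together in the DFS Management console, but for reference, here is a hedged sketch of the same full mesh replication group using the DFSR PowerShell module that ships with Windows Server 2012 R2. The paths match this setup; the server and group names are my own placeholders:

  # Create a replication group with both FTP nodes as members
  New-DfsReplicationGroup -GroupName "FtpRoot" | Out-Null
  New-DfsReplicatedFolder -GroupName "FtpRoot" -FolderName "FtpRoot" | Out-Null
  Add-DfsrMember -GroupName "FtpRoot" -ComputerName "FTP01","FTP02" | Out-Null

  # One connection pair gives bidirectional (full mesh) replication between two nodes, 24/7 schedule by default
  Add-DfsrConnection -GroupName "FtpRoot" -SourceComputerName "FTP01" -DestinationComputerName "FTP02" | Out-Null

  # Point both members at the FTP root; one member must be primary for the initial sync
  Set-DfsrMembership -GroupName "FtpRoot" -FolderName "FtpRoot" -ComputerName "FTP01" -ContentPath "D:\FtpRoot" -PrimaryMember $true -Force
  Set-DfsrMembership -GroupName "FtpRoot" -FolderName "FtpRoot" -ComputerName "FTP02" -ContentPath "D:\FtpRoot" -Force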

This solution was built fast and easily using Windows 2008 R2 out of the box functionality: FTP(S) with user isolation and DFS-R. The servers run as Hyper-V guests in a Hyper-V cluster, providing high availability through Live Migration.