Essential to my Modern Datacenter Lab: Azure Site-to-Site VPN with a DELL SonicWALL Firewall


If you're a serious operator in the part of IT that is considered the tip of the spear, i.e. you're the one getting things done, you need a lab. I have had one (well, I upgraded it a couple of times) for a long time. When you're dealing with cloud as an IT Pro, mostly Microsoft Azure in my case, that need has not changed. It enables you to gain the knowledge and insights that you can only acquire by experimenting and hands-on work; there is no substitute. Sometimes people ask me how I learn. A lab and lots of hands-on experimenting is a major component of my self education and training. I put in a lot of time and, yes, some money.

Perhaps you have a lab at work, perhaps not, but you do need one. A lab is a highly valuable investment in education for both your employer and yourself. It takes a lot of time and effort, and it costs a bit of money. The benefits however are huge, and I encourage any employer with IT staff to sponsor this, as the ROI is huge for a relatively small TCO.

I love the fact that in a lab you have (and want) complete control over the entire stack, so you can experiment at will and learn about the solutions you build end to end. You do need to deal with it all, but that's all good: you learn even more, even when at times it's tough going. Note that a home lab, even with the associated costs, has the added benefit of still being available to you even if you move between employers or between clients.

You can set up a site-to-site VPN using Windows Server 2012 R2 RRAS (see Site-to-Site VPN in Azure Virtual Network using Windows Server 2012 Routing and Remote Access Service (RRAS)) and that works. But for long term lab work and real life implementations you'll be using other devices. In the SOHO lab I run everything virtualized and I need internet access for other use cases than the on-premises lab. I also like to minimize the hosts/VMs/appliances I need to have running to save on electricity costs. For enterprise grade solutions you leverage solutions from Cisco, Juniper, Check Point etc. There is no need for "enterprise grade" solutions in a SOHO or small branch office environment. Those are out of budget & overkill, so I needed something else. There are some options out there, but I'm using a DELL SonicWALL NSA 220. This is a quality product for one, and I could get my hands on one in a very budget friendly manner. UTMs & the like are not exactly cheap, even without all the subscriptions, and they don't normally cater to the home user. You can go higher or lower, but I would not go below a TZ-205 (Wireless), which is great value for money and more than up to the task of providing you with the capabilities you need in a home lab.

SonicWALL NSA 220 Wireless-N Appliance

I consider this the minimum level, as I want 1Gbps (no, I do not buy 100Mbps equipment in 2015) and I want wireless to make sure I don't need to have too many hardware devices in the lab. As said, the benefits over the RRAS solution are that it serves other purposes (UTM) and that it can remain running cheaply, so you can connect to the lab remotely to fire up your hosts and VMs, which you normally power down to save power.

Microsoft only supports dynamic routing with a limited number of vendors/devices, but that doesn't mean all others are off limits. You can use them, but you'll have to research the configurations that work instead of downloading the configuration manual or templates from the Microsoft web site. Those are still very useful to look at as example configurations, even if they're for another product than the one you use.

Getting it to work is a multistep process:

    1. Set up your Azure virtual network.
    2. Configure your S2S VPN on the SonicWall
    3. Test connectivity between an on-premises VM and one in the cloud
    4. Build out your hybrid or public cloud
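
If you'd rather script step 1 than click through the portal, the sketch below shows the idea using today's Az PowerShell module (the original setup predates it and was done in the classic portal). All resource names and address ranges are placeholders for your own lab values.

    # Step 1 sketched with the Az module: a virtual network with a gateway
    # subnet, plus a local network gateway representing the on-premises side.
    New-AzResourceGroup -Name 'LabRG' -Location 'West Europe'

    $gwSubnet  = New-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix '10.20.255.0/27'
    $labSubnet = New-AzVirtualNetworkSubnetConfig -Name 'LabSubnet' -AddressPrefix '10.20.1.0/24'
    New-AzVirtualNetwork -Name 'LabVNet' -ResourceGroupName 'LabRG' -Location 'West Europe' `
        -AddressPrefix '10.20.0.0/16' -Subnet $gwSubnet, $labSubnet

    # The local network gateway holds the SonicWALL's public IP and the
    # on-premises subnets you want reachable through the tunnel.
    New-AzLocalNetworkGateway -Name 'HomeLab' -ResourceGroupName 'LabRG' -Location 'West Europe' `
        -GatewayIpAddress '203.0.113.10' -AddressPrefix '192.168.2.0/24'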

Here's a reference to get you started: Tutorial: Create a Cross-Premises Virtual Network for Site-to-Site Connectivity. I will be sharing my setup for the SonicWALL in a later blog post so you can use it as a reference. For now, here's a schematic overview of my home lab setup to Azure (the IP addresses are fake). At home I use VDSL with a dynamic IP address, so every now and then I need to deal with it changing. I'd love to have a couple of static IP addresses to play with, but that's not within my budget. I wrote a little Azure scheduled runbook that takes care of updating the dynamic IP address in my Azure site-to-site VPN setup. It's also published on the TechNet Gallery.
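
For the curious, the core of such a runbook boils down to something like the hypothetical sketch below. This is not the published runbook: the dynamic DNS name and resource names are placeholders, and it assumes the current Az module rather than the service management cmdlets of the day.

    # Resolve the lab's current public IP via a dynamic DNS name and update
    # the local network gateway in Azure when it has changed.
    $currentIp = ([System.Net.Dns]::GetHostAddresses('mylab.dyndns.example')[0]).IPAddressToString
    $lng = Get-AzLocalNetworkGateway -Name 'HomeLab' -ResourceGroupName 'LabRG'
    if ($lng.GatewayIpAddress -ne $currentIp) {
        # Overwriting the local network gateway with -Force updates the
        # tunnel endpoint; the VPN re-establishes afterwards.
        New-AzLocalNetworkGateway -Name 'HomeLab' -ResourceGroupName 'LabRG' `
            -Location $lng.Location -GatewayIpAddress $currentIp `
            -AddressPrefix $lng.LocalNetworkAddressSpace.AddressPrefixes -Force
    }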

[Diagram: schematic overview of the home lab to Azure site-to-site VPN setup]

You can build this with Windows RRAS or any UTM, firewall, etc. device that is a bit more capable than a consumer grade (wireless) router. The nice thing is that I have multiple subnets on premises, and the 10 tunnels in a standard Azure site-to-site VPN accommodate that nicely. The subnets I don't want to see in a tunnel to Azure I just leave out of the configuration.

A tip to save money in your Azure lab for newbies: shut down everything you can when you're done, and automate it with PowerShell. I just make sure my hybrid infra is online & the VPN active enough to make sure we don't run into out-of-sync issues with AD etc.
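
A minimal sketch of that shutdown automation, assuming the Az module and that a domain controller VM ('LabDC1' is a placeholder name) stays online for AD:

    # Deallocate every running lab VM except the DC; deallocated VMs stop
    # incurring compute charges.
    Get-AzVM -ResourceGroupName 'LabRG' -Status |
        Where-Object { $_.Name -ne 'LabDC1' -and $_.PowerState -eq 'VM running' } |
        ForEach-Object { Stop-AzVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Force }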

In Defense of Switch Independent Teaming With Hyper-V


For many old timers (heck, that includes me) NIC teaming with LACP mode was the best of the best, at least when it comes to teaming options. Other modes often led to passive/active setups with less than optimal aggregation of received network traffic. Basically, and perhaps oversimplified, I could say the other options were only used if you had no other choice to get things to work. Which we did, a lot … I used Intel's different teaming modes for various reasons in the past (before we had MLAG, VLT, vPC, …). Trying to use LACP where possible was a good approach in the past in physical deployments and early virtualized environments, when 1Gbps networking dominated the datacenter realm and Windows did not have native support for LBFO.

But even LACP, even in those days, had some drawbacks. It's the most demanding form of teaming. For one, it required switch stacking. This demands the same brand and type of switches, and that means you have no redundancy during firmware upgrades. That's bad, as the only way to work around it is to move all workloads to another rack unit … if you even had the capability to do that! So even in days past we chose different modes of teaming out of need or because of the above limitations for high availability. But the superiority of NIC teaming with LACP still stands for many, and as modern switches support MLAG, VLT, etc. the drawback of stacking can be avoided. So does that mean LACP for NIC teaming is always the superior choice today?

Some argue it is, and now they have found support in the documentation about the Microsoft CPS system. Look, even if Microsoft chose to use LACP in their solutions, that's based on their particular design and the needs of that design. I do not concur that this is the best choice overall. It is however a valid, and probably the best, choice for their specific setup. I applaud the use of MLAG (when available to you at no or very low cost) to have all bases covered, but it does not mean that LACP is the best choice for the majority of use cases with Hyper-V deployments. Microsoft actually agrees with me on this in their Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management guide. They state that Switch Independent configuration with Dynamic distribution (or Hyper-V Port if on Hyper-V and not yet on W2K12R2) is the best possible default choice for teaming in both native and Hyper-V environments. I concur, even if perhaps not that strongly for native workloads (it depends). Exceptions to this:

  • Teaming is being performed in a VM (which should be rare),
  • Switch dependent teaming (e.g., LACP) is required by policy, or
  • Operation of a two-member Active/Standby team is required by policy.

In other words, in 2 out of 3 cases the reason is a policy, not a technically superior solution …

Note that there are differences between Address Hash, Hyper-V Port and the new Dynamic distribution modes, and the latter has made things better in W2K12R2 in regards to bandwidth, but you'll need to read the white papers. Use Dynamic as the default, it is the best. Also note that LACP/Switch Dependent doesn't mean you can send & receive to and from a VM over the aggregated bandwidth of all team members. Life is more complicated than that. So if that's your main reason for switch dependent, and you think you're done: beware.
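
Setting up that recommended default is a one-liner. A sketch, assuming W2K12R2 and two team members named 'NIC1' and 'NIC2' (placeholder names):

    # Switch independent teaming with Dynamic load balancing: the guide's
    # recommended default for native and Hyper-V workloads alike.
    New-NetLbfoTeam -Name 'VMTeam' -TeamMembers 'NIC1','NIC2' `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic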

Switch Independent is also way better for optimization of VMQ. You have more queues available (sum-of-queues) and the IO path is very predictable & optimized.
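
You can quickly check what your team members bring to the table queue-wise (adapter names below are placeholders):

    # In switch independent mode the queues of all members count
    # (sum-of-queues), so compare the receive queue counts per adapter.
    Get-NetAdapterVmq -Name 'NIC1','NIC2'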

If you don't control the switches, there's a lot more cross-team communication involved to set up teaming for your hosts. There's more complexity in these configurations, so more possibilities for errors or bugs. Operational ease is also a factor.

The biggest drawback could be that for receiving traffic you cannot get more than the bandwidth a single team member can deliver. That's true, but optimizing receiving traffic has its own demands and might not always work that great if the switch configuration isn't that smart & capable. Do I ever miss the potential ability to aggregate incoming traffic? In real life I do not (yet), but in some configurations it could do a great job of optimizing that when needed.

When using 10Gbps or higher you'll rarely be in a situation where receiving traffic exceeds what a single NIC can deliver, and if you want to handle that amount of traffic you really need to leverage DVMQ. And as said, switch independent teaming with Hyper-V Port or Dynamic mode gives you the most bang for the buck, as you have more queues available. This drawback is also mitigated a bit by the fact that modern NICs have a way larger number of queues available than they used to have. But if you have more than one VM that is eating close to 10Gbps in a non-lab environment, and you're planning to have more than 2 of those on a host, you need to start thinking about 40Gbps instead of aggregating a fistful of 10Gbps cables. Remember the golden rule: a single bigger pipe is always better than a bunch of small pipes.

When using 1Gbps you'll be at that point sooner, and as 1Gbps isn't a great fit for (Dynamic) VMQ anyway, I'd say: sure, give LACP a spin to try and get a bit more bandwidth, but will it really matter? In native workloads it might, but with a vSwitch? Modern CPUs eat 1Gbps NICs for breakfast, so I would not bother with VMQ there. But when you're tied to 1Gbps it's probably due to budget constraints, and you might not even have stackable, MLAG, VLT or otherwise capable switches. The arguments can be made, it depends (see Don't tell me "It depends"! But it does!). But in any case I'd start saving for 10Gbps.

Today, with the PC8100 series and the N4000 series (budget 10Gbps switches; yes, I know "budget" is relative, but in the 10Gbps world they offer outstanding value for money), I tend to set up MLAG with two of these per rack. This means we have all options and needs covered at no extra cost and without sacrificing redundancy under any condition. However, look at the needs of your VMs and the capability of your NICs before using LACP for teaming by default. The fact that switch independent works with any combination of budget switches to get redundancy doesn't mean it's only to be used in such scenarios. That's a perk for those without more advanced gear, not a consolation prize.

My best advice: do not over engineer it. Engineer for the best possible solution for the environment at hand. When choosing a default, it's not about the best possible redundancy and bandwidth under certain conditions. It's about the best possible redundancy and bandwidth under most conditions. It's there that switch independent comes into its own, today more than ever!

There is one other very good, but luckily also very rare, case where LACP/switch dependent will save you and switch independent won't: dead switch ports, where the port becomes dysfunctional. Switch independent protects against NIC, switch and cable failures, but here it doesn't help you, as it doesn't know about the problem (it detects link failures, not logical issues on a port).

For the majority of my Hyper-V deployments I do not use switch dependent / LACP. The situations where I did had to do with Windows NLB in combination with ICMP multicast.

Note: You can do VLT, MLAG, stacking and still leverage switch independent teaming, LACP or static switch dependent is NOT mandatory even when possible.

DELL SonicWALL Site-to-Site VPN Options With Azure Networking


The DELL SonicWALL product range supports both policy based and route based VPN configurations. Specifically for Azure they have a configuration guide out there that will help you configure either.

Technically, networking people prefer to use a route based configuration. It's more flexible to maintain in the long run. As life is not perfect and we do not control the universe, policy based is also used a lot. SonicWALL used to be on the supported list for both static and dynamically routed Azure VPN connections. According to this thread it was taken off because some people had reliability and performance issues. I hope this gets fixed soon in a firmware release. Having that support is good for DELL, as a lot of people watch that list to consider what they buy, and there are not too many vendors on it in the more budget friendly range as it is. The reference in that thread to DELL stating that Route-Based VPN using Tunnel Interface is not supported for third party devices is true, but a bit silly, as that's a blanket statement in the VPN industry, where there is an unwritten rule that you use route based when the devices are of the same brand and you control both endpoints. When that isn't the case, you go with a policy based VPN, even if that's less flexible.

My advice is that you should test what works for you, make your choice and accept the consequences. In the end it only determines who's going to have to fix the problem when it goes wrong. I'm also calling on DELL to sort this out fast & well.

A lot of people get confused when starting out with VPNs. Add Azure into the equation, where we also get confused whilst climbing the learning curve, and things get mixed up. So here's a small recap of the state of Azure VPN options:

  • There are two ways to create a site-to-site VPN between an Azure virtual network (and all the subnets it contains) and your on-premises network (and the subnets it contains).
    1. Static Routing: this is the one that will work with just about any device that supports policy based VPNs in any reasonable way, which includes a VPN with Windows RRAS.
    2. Dynamic Routing: this one is supported by a lot fewer vendors, but that doesn't mean it won't work. Do your due diligence. This also works with Windows RRAS.

Note: Microsoft has now added a 3rd option to its Azure VPN Gateway offerings, the High Performance VPN gateway. For all practical purposes it's dynamic routing, but a more scalable version. Note that this does NOT support static routing.
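
In PowerShell terms the split shows up when you create the virtual network gateway. A hedged sketch with today's Az module (the classic cmdlets of the day looked different; all names are placeholders):

    # 'PolicyBased' is "static routing" in Azure speak, 'RouteBased' is
    # "dynamic routing". Pick one; you cannot mix them on a gateway.
    $vnet   = Get-AzVirtualNetwork -Name 'LabVNet' -ResourceGroupName 'LabRG'
    $subnet = Get-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -VirtualNetwork $vnet
    $pip    = New-AzPublicIpAddress -Name 'LabGwIp' -ResourceGroupName 'LabRG' `
              -Location 'West Europe' -AllocationMethod Dynamic
    $ipCfg  = New-AzVirtualNetworkGatewayIpConfig -Name 'gwIpCfg' `
              -SubnetId $subnet.Id -PublicIpAddressId $pip.Id
    New-AzVirtualNetworkGateway -Name 'LabGw' -ResourceGroupName 'LabRG' -Location 'West Europe' `
        -IpConfigurations $ipCfg -GatewayType Vpn -VpnType PolicyBased -GatewaySku Basic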

The confusion is partially due to Microsoft Azure, network industry and vendor terminology differing from each other. So here's the translation table for DELL SonicWALL & Azure:

Dynamic Routing in Azure speak is a Route-Based VPN in SonicWALL terminology and is called Tunnel Interface in the "Policy Type" settings for a VPN.

[Screenshot: SonicWALL VPN policy with Policy Type set to Tunnel Interface]

Static Routing in Azure Speak is a Policy-Based VPN in SonicWALL terminology and is called Site-To-Site in the “Policy Type” settings for a VPN.

[Screenshot: SonicWALL VPN policy with Policy Type set to Site To Site]

  • You can only use one. So you need to make sure you don't mix the two on both sides, as that won't work for sure.
  • Only a Pre-Shared Key (PSK) is currently supported for authentication. There is no support for certificate based authentication at the time of writing.

Also note that you can have 10 tunnels in a standard Azure site-to-site VPN, which should give you enough wiggle room for some interesting scenarios. If not, scale up to the high performance Azure site-to-site VPN or move to ExpressRoute. In the screenshot below you can see I have 3 tunnels to Azure from my home lab.

[Screenshot: three site-to-site VPN tunnels from the home lab to Azure]
I hope this clears up any confusion around that subject!

GPS service issues resolved fast by Hyper-V & site resilience engineering


Diminished services on a GPS positioning network

The past couple of days there had been latencies negatively affecting a near real time GPS positioning service that allows users to correct their GPS measurements in real time.

Flemish Positioning Service (FLEPOS)

That service is really handy when you're a surveyor, and it saves money by avoiding extra GIS post processing work later. It becomes essential however when you are relying on your GPS coordinates to farm automatically, fly aerial photogrammetry patterns, create mobile mapping data, build dams or railways, steer your dredging ships and maneuver ever bigger ships through harbor locks.


It was clear this needed to be resolved. After checking for network issues, we pretty much knew that the recently spiking CPU load was the cause: partially due to the growth in users and more and more use cases, and partially due to a new software version that definitely requires a few more CPU cycles.

The GPS positioning service is running on multiple virtual machines, on separate LUNs, on separate hosts, and those hosts are in separate racks. All this is being replicated to a second data center. They have high to continuous availability with Microsoft Failover Clustering and leverage Kemp Loadmaster load balancing. Together with the operations team we moved the load away from each VM, shut it down, doubled the vCPU count and restarted the VM. Rinse and repeat until all VMs had been assigned more vCPUs.
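
Per VM, the Hyper-V side of that loop is as simple as the sketch below ('FLEPOS1' is a placeholder VM name; the load balancer must have drained the node first):

    # vCPU count can only be changed while the VM is off on these versions
    # of Hyper-V, hence the stop/start around it.
    Stop-VM -Name 'FLEPOS1'
    Set-VM -Name 'FLEPOS1' -ProcessorCount 8
    Start-VM -Name 'FLEPOS1'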

The result was a dramatic improvement in the response times; service response times went back to normal.

Breathing room with more vCPUs

They can move fast and efficiently

All this was done fast. They have the power to decide and act to resolve such issues on their own responsibility. The fact that they operate in a tight-knit team that spans bureaucracy and hierarchies, and make sure that the people who need to be involved can communicate fast and effectively (even if they are spread over different locations), makes this possible. They have a design for high availability and a vertically integrated approach to the solution stack that spans every resource (CPU, memory, storage, network and software), combined with a great app owner and rock solid operational excellence (Peopleware) to enable the site resilience side of the story. Fast & efficient.

I'm proud to have helped design and deliver this service, and I'll be ready and willing to help design vNext of this solution in the near future. We moved it from hardware to a virtualized solution based on Hyper-V in 2008 and have not regretted one minute of it. The operational capabilities it offers are too valuable for that, and banking on Hyper-V has proved to be a winner!

Would Hot Add CPU Capability have made this easier?

Yes, faster for sure. The process they have now isn't that difficult. Now, would I not like hot add vCPU capabilities in Hyper-V? Yes, absolutely. I do realize however that not every application might be able to handle this without restarting, making the exercise a bit of a moot point in those cases.

Why some people have not virtualized yet I do not know (try to double the CPUs on your hardware servers easily and fast without leaving the comfort of your home office). I do know however that you are missing out on a lot of capabilities & operational benefits.

Hyper-V Amigos Showcast Episode 8: Storage Replica in a Stretched Cluster


We finally got to make the next "Hyper-V Amigos Showcast"; due to very busy schedules we had to postpone this a couple of times. But we made it! In this episode (the 8th one) Carsten and I show one application of a great new feature in Windows Server vNext: Storage Replica. This allows us to replicate a volume between two storage systems without caring what that storage system is, as long as you have Windows volumes on it. Replication can be synchronous or asynchronous, and there are multiple scenarios in which to use this.

Here we focus on trying out replication between two clusters or in a stretched cluster scenario. I have already made a video demonstrating server to server replication. In this showcast we demonstrate the stretched cluster scenario (and troubleshoot our own lab).
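
To give you an idea, the server to server variant boils down to a single cmdlet. A sketch, assuming the Storage Replica cmdlets as they later shipped (preview syntax may differ; server names, replication group names and volumes are placeholders):

    # Replicate D: from SRV1 to SRV2, with E: as the log volume on each side.
    New-SRPartnership -SourceComputerName 'SRV1' -SourceRGName 'RG01' `
        -SourceVolumeName 'D:' -SourceLogVolumeName 'E:' `
        -DestinationComputerName 'SRV2' -DestinationRGName 'RG02' `
        -DestinationVolumeName 'D:' -DestinationLogVolumeName 'E:'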

[Video: Hyper-V Amigos Showcast Episode 8 – Storage Replica in a Stretched Cluster]

More info is available here:

Enjoy and see you next time!

Azure Done Well Means Hybrid Done Right


If you think that a hybrid cloud means you need to deploy SCVMM & WAP, you're wrong. It does mean that you need to make sure that you give yourself the best possible conditions to make your cloud a success and an asset in the biggest possible number of scenarios that might apply or come up.


Cool you say, I hear you, but what does that mean in real life? Well it means you should stop playing games and get serious. Which translates into the following.

Connectivity

A 200Mbps connection is the absolute minimum for the SMB market. You need at least that for the Office 365 suite, if you want happy customers that is. Scale based on the number of users and usage, but remember you'll pinch at least 100Mbps of that for a VPN to Azure.

Get a VPN already!

Or better still, take the gloves off and go for ExpressRoute. Extend your business network to your cloud and be done with all the hacks, workarounds, limitations, and tedious & creative yet finicky "solutions" to get things done. I guess that beats living with the limitations, but it will only get you that far.

Any country or business that isn't investing in fiber to the home & cheap, affordable data connectivity for businesses is actively destroying long term opportunity for some dubious short term gain.

So without further ado: life is too short to do hybrid cloud without it. It opens up great scenarios that will allow you to get all the comforts of on premises in your Azure data center, such as …

Extend AD & ADFS into Azure

Get that AD & ADFS into the cloud, people! What? Yes, do it. That's what that good solid VPN between Azure and on premises, or better still ExpressRoute, enables. Turn it into just another site of your business, but one with some fascinating capabilities. DirSync, or better, Azure Active Directory Sync will only get you that far, and mostly in a SaaS (PaaS) ecosystem. Once you've done that, the world is your oyster!


Conclusion

So don't be afraid. Just do it! People, I have my home lab and its AD connected to my Azure cloud via VPN! That's me, the guy that works for his money and pays his own bills. So what are you as a business waiting for?

But wait, Didier, isn't AD going away? Why would I not wait for the cloud to be 100% perfect for all I do? Well, just get started today and take it from there. You'll enjoy the journey if you do it smart and right!

"Your cloud, your terms". Well, that's true. But it's not a given; you'll need to put in some effort. You have to determine what your terms are and what your cloud should look like. If you don't, you'll end up in a bad state. If you have good IT staff, you should be OK. If they could handle your development environment & run your data center, chances are good they'll be able to handle "cloud". Really.

Consultants? Sure, but get really good ones or you'll get sold to. There's a lot of churning and selling going on. Don't get taken for a ride. I know a bunch of really good ones. How do I determine this? One rule … would I hire them?

Video Interview On Rolling Cluster Upgrades in Windows Server vNext


Carsten Rachfahl from Rachfahl IT-Solutions (quite possibly Germany's leading Hyper-V, Storage Spaces & private cloud consultancy) and I got together in Berlin last November at the Microsoft Technical Summit 2014. Between presenting (I delivered "What's new in Failover Clustering in Windows Server 2012 R2"), workshops and interviews, we found some time to do a video interview.

We discussed a very welcome new capability in Windows Server vNext: "rolling cluster updates", or "Cluster Operating System Rolling Upgrade" as Microsoft calls it in the Windows Server Technical Preview. I blogged about this rather soon after the release of the Technical Preview in First experiences with a rolling cluster upgrade of a lab Hyper-V Cluster (Technical Preview).

[Video interview with Didier Van Hoye about Rolling Cluster Upgrades]

We’ve been able to do rolling updates of Windows NLB for a long time and we’ve been asking for that same capability in Windows Failover Clustering for many years and now, it’s finally coming! And yes, as you will notice we like that a lot!

You need to realize that making the transition from one version to another as smooth, easy and risk free as possible is of great value to the customer, as it enables them to upgrade faster and get the benefits of their investment quicker. For Microsoft it means they can have more people move to modern environments faster, which helps with support and with delivering value in a secure and modern environment.

At the end we also joke around a bit about DevOps and how this is just a set of training wheels on the road to true site resilience engineering. All fun and all good. Enjoy!

Options For A Highly Available Load Balanced RD Gateway Server Farm on Hyper-V


When you need to make the RD Gateway service highly available you have some options. On the RD Gateway side you have the capability of configuring a farm with multiple RD Gateway servers.

[Screenshot: RD Gateway farm configuration]

When it comes to the actual load balancing of the connections, there are some changes with respect to load balancing compared to Windows Server 2008 R2 that you need to be aware of! With Windows Server 2008 R2 you could do:

  1. Load balancing appliances (KEMP Loadmaster for example, F5, A10, …) or Application Delivery Controllers, which can be hardware, OEM servers, virtual and even cloud based (see Load Balancing In An Ever More Demanding Virtualized & Cloudy World). KEMP has Hyper-V appliances, many others don't. These support layer 4, layer 7, geo load balancing etc. Each has its use cases with benefits and drawbacks, but you have many options for the many situations you might encounter.
  2. Software load balancing. By this they mean Windows NLB. It works, but it's rather limited in regards to intelligence for failure detection & failover. It's in no way an "Application Delivery Controller" as load balancers are positioned nowadays.
  3. DNS Round Robin load balancing. That sort of works but has the usual drawbacks for problem detection and failover. Don't get me wrong, for some use cases it's fine, but for many it isn't.

I prefer the first, but all 3 will do the basic job of load balancing the end-user connections. I have used 2 when it was good enough or the only option, but I have never liked 3, bar where it's all that's needed, because it just doesn't fit many of the use cases I dealt with. It's just too limited for many apps.

In regards to RD Gateway in Windows Server 2012 (R2), you can no longer use DNS Round Robin for load balancing with the new HTTP transport. The reason is that it uses two HTTP channels (one for input and one for output), and DNS round robin cannot guarantee that both these connections will be routed through the same RD Gateway server, which is a requirement for it to work. Basically, round robin DNS will only work for the legacy RPC-HTTP transport. RPC could reroute a channel to make sure everything flows over the same node, at the cost of performance & scalability. But that won't work with HTTP, which provides scalability & performance. Another thing to note is that while you can work without UDP, you don't want to. The UDP protocol is used to deliver graphics with a better user experience, even over low quality networks, and for richer experiences with RemoteFX. TCP (HTTP) can be used without it (at the cost of a lesser experience) and is also used to maintain the sessions and actions. Do note that you CANNOT use UDP alone, as these connections are established only after the main HTTP connection exists between the remote desktop client and the remote desktop server. See Don't Forget To Leverage The Benefits of RD Gateway On Hyper-V & RDP 8/8.1 for more information.

So you will need at least Windows Network Load Balancing (WNLB), because that supports IP affinity to make sure all channels stick to the same node. UDP & HTTP can be on different nodes, by the way. Also please note that when using network virtualization, WNLB isn't a good choice. It's time to move on.

So the (or at least my) preferred method is via a real "hardware" load balancer. These support a bunch of persistence options like IP affinity, cookie-based affinity, … just look at the screenshot below (KEMP Loadmaster).

[Screenshot: KEMP Loadmaster persistence options]

But they also support layer 7 functionality for better health checking and failover.  So what’s not to like?

So we need to:

  1. Build a RD Gateway Farm with at least two servers
  2. Load balance HTTP/HTTPS for the RD Gateway farm
  3. Load balance UDP for the RD Gateway farm.

We'll do this 100% virtualized on Hyper-V, and we'll also make the load balancer itself highly available. Remember, single points of failure are like bottlenecks: the moment you take one away you just hit the next one.

Kemp has a great deployment guide for RDS on how to do this, but I should add that you could leverage SUB Virtual Services (SUBVS) to deal with the other workloads, such as RD Web Access, if they're on the same server. They don't mention this in the white paper, but it's an option when using HTTP/HTTPS as the service type for both configurations. #1 & #2 are the SUB Virtual Services where I used this in a lab.

[Screenshot: KEMP Loadmaster SUB Virtual Services configuration]

But for RD Gateway you can also leverage the Remote Terminal service type, and in this case you won't leverage SUBVS, as the service type differs between RD Gateway (Remote Terminal) and RD Web Access (HTTP/HTTPS). This is actually what's used by the RDS template you can download from their support site.

[Screenshot: KEMP Loadmaster Remote Terminal service type]

Hope this helps some of you out there!

Quick Demo Video Of Site Failover With KEMP Loadmaster Global Balancing


Here's a quick video that demonstrates how you can achieve site failover via the KEMP Loadmaster Global Balancing feature. As long as you know what this can do for you, and realize that it's about site failover and high availability, not continuous availability without a second of service interruption, you can deliver nice results with this technology across city campuses or between cities.

In our scenario we normally connect to the primary data center (weighted round robin) and fail over to the DRC when the primary site fails for some reason.

It’s very busy at the moment but I hope to address this topic a bit more in detail in the future. All of this runs virtualized on Hyper-V and performs just fine.

Don’t Forget To Leverage The Benefits of RD Gateway On Hyper-V & RDP 8/8.1


So you upgraded your TS Gateway virtual machine on W2K8(R2) to RD Gateway on W2K12(R2) to make sure you get the latest and the greatest functionality and cut off any signs of technology debt way in advance. Perhaps you were inspired by my blog series on how to do this, and maybe you jumped through the x86 to x64 bit hoop whilst at it. Well done.

Now, when upgrading or migrating from W2K8(R2), a lot of people forget about some of the enhancements in W2K12(R2). This is especially true if you don't notice much by doing so. That's why I see people forget about UDP. Why? Well, things will keep working as they did before Windows Server 2012: RDS Gateway over HTTP or over RPC-HTTP (legacy clients). I have seen deployments where both the Windows and the perimeter firewall rules to allow UDP over 3391 were missing. Let alone that port 3391 was allowed in the RAP. But then you miss out on the benefits it offers (a better user experience over less than great network connections and with graphics), as well as on those of that ever more capable thingy called RemoteFX, if you use that.

For those of you who don't know yet: the HTTP and UDP protocols are both used preferably by RD Gateway and are more efficient than RPC over HTTP, which makes them better for scaling and for the experience under low bandwidth and bad connectivity conditions. When the HTTP transport channels are up (for incoming & outgoing traffic), two UDP side channels are set up that can be used to provide both reliable (RDP-UDP-R) and best-effort (RDP-UDP-L) delivery of data. UDP also leverages SSL via the RD Gateway, because it uses Datagram Transport Layer Security (DTLS). For more info see RD Gateway Capacity Planning in Windows Server 2012. Furthermore, it proves you have no reason not to virtualize this workload, and I concur!

So why not set it up!? Check your firewall rules on the RD Gateway server and set the rules accordingly. Do the same for your perimeter firewalls or any others in between your users and your RD Gateway.

[Screenshot: Windows firewall rules for the RD Gateway UDP transport]
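
On the Windows firewall side, that comes down to a rule like the sketch below (assuming the default port 3391; scope it to your own needs):

    # Allow the RD Gateway UDP side channels in on UDP 3391.
    New-NetFirewallRule -DisplayName 'RD Gateway UDP 3391' -Direction Inbound `
        -Protocol UDP -LocalPort 3391 -Action Allow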

Under the properties of your RD Gateway server you need to make sure UDP is enabled and listening on the needed IP address(es).

[Screenshot: RD Gateway transport settings with UDP enabled]

A client who connects over your RD Gateway server, Windows Server 2012(R2) that is, and checks the network connection properties (click the "wireless NIC" like icon in the connection bar) sees the following: UDP is enabled.

[Screenshot: RDP connection quality dialog showing UDP enabled]

If they don't see UDP as enabled and they aren't running Windows 8 or 8.1 (or W2K12R2), they can upgrade to RDP 8.1 on Windows 7 or Windows Server 2008 R2! When they connect to a Windows 7 SP1 or Windows 2008 R2 machine, make sure you read the blog post Get the best RDP 8.0 experience when connecting to Windows 7: What you need to know, as it contains some great information on what you need to do to enable RDP 8/8.1 when connecting to Windows 7 SP1 or Windows 2008 R2:

  1. “Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Remote Session Environment\Enable Remote Desktop Protocol 8.0” should be set to “Enabled”
  2. “Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Connections\Select RDP Transport Protocols” should be set to “Use both UDP and TCP” => Important: After the above 2 policy settings have been configured, restart your computer.
  3. Allow port traffic: If you’re connecting directly to the Windows 7 system, make sure that traffic is allowed on TCP and UDP for port 3389. If you’re connecting via Remote Desktop Gateway, make sure you use RD Gateway in Windows Server 2012 and allow TCP port 443 and UDP port 3391 traffic to the gateway
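
For a quick test on a standalone (non-domain) Windows 7 SP1 machine you can set the registry values behind those two policies directly. To the best of my knowledge these are the correct value names, but treat this sketch as an assumption and verify against the policy editor before relying on it:

    # Assumed equivalents of "Enable Remote Desktop Protocol 8.0" (1 = on)
    # and "Select RDP Transport Protocols" (0 = use both UDP and TCP).
    $key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
    New-Item -Path $key -Force | Out-Null
    Set-ItemProperty -Path $key -Name 'fServerEnableRDP8' -Value 1 -Type DWord
    Set-ItemProperty -Path $key -Name 'SelectTransport' -Value 0 -Type DWord
    Restart-Computer   # the settings only take effect after a restart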

Cool, you've done it and verified that it works. Under monitoring in the RD Gateway Manager you can see 3 connections per session: one is HTTP and the two others are UDP.

[Screenshot: RD Gateway Manager monitoring showing one HTTP and two UDP connections per session]

Life is good. But if you want to see the difference really well demonstrated, try to connect to a Windows 7 SP1 computer with RDP 8 & TCP/UDP disabled and play a YouTube video, then do the same with RDP 8 & TCP/UDP enabled; the difference is rather impressive. Likewise if you leverage RemoteFX in a VM. The difference in experience is very clear, just try it! While you're doing this, look at the UDP "Kilobytes Sent" stats (refresh the monitoring tab); you'll see UDP being put to work when playing a video in your RDP session.

[Screenshot: RD Gateway Manager monitoring UDP "Kilobytes Sent" statistics]