DELL PowerEdge R730 Improves Boot Times


The DELL generation 13 servers are blazingly fast and capable machines. That has been well documented by now and more and more people are experiencing it themselves. These are currently my preferred servers as they offer the best value on the market for hard-core, no-nonsense, high-performance virtualization with Hyper-V.

They also boot and reboot faster than the previous generation when using UEFI. We noticed this during deployment and testing, so we decided to informally check how much things have improved.

Using the DELL DRAC8 we timed the process from a Windows Server initiated restart …

image

… over the various boot phases …

image

… to the visual appearance of the logon screen

image

So let’s quickly compare this for a DELL PowerEdge R720 and a PowerEdge R730, both with the same amount of memory, cards, controllers, etc. Neither server had VMs running or any other workload at the time of the restart.

For the R720 this gave us:

image

and the results for a Windows initiated server restart on a DELL PowerEdge R730 with UEFI boot are:

image

This was reproducible. So we can see that UEFI boot times have decreased by about 30%. I like that. You might think this is not important, but it adds up during troubleshooting or when doing Cluster Aware Updating on a large 16+ node cluster.
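
If you want a number that isn’t read off a stopwatch, you can also pull the boot duration Windows records itself. A minimal sketch, assuming the Microsoft-Windows-Diagnostics-Performance/Operational log is enabled on your build and that the event data field is indeed called BootTime (worth double-checking on your own systems):

# Event ID 100 ("Windows has started up") carries the measured boot duration
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Diagnostics-Performance/Operational'
    Id      = 100
} -MaxEvents 5 | ForEach-Object {
    $xml = [xml]$_.ToXml()
    [pscustomobject]@{
        TimeCreated = $_.TimeCreated
        BootTimeMs  = ($xml.Event.EventData.Data | Where-Object { $_.Name -eq 'BootTime' }).'#text'
    }
}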

Things are beginning to look even better as vNext of Windows has a feature called “Soft Restart” which should help us cut down on boot times even more when possible. But that’s for another blog post.

SMB Direct With RoCE in a Mixed Switches Environment


I’ve been setting up a number of Hyper-V clusters with Mellanox ConnectX-3 Pro dual port 10Gbps Ethernet cards. These Mellanox cards provide a nice number of queues (128) for DVMQ and also give us RDMA/SMB Direct capabilities for CSV & live migration traffic.

Mixed Switches Environments

Now RoCE and DCB are a learning curve for all of us and not for the faint of heart. DCB configuration is non-trivial, certainly across multiple hops and different switches. Some say it’s to be avoided or can’t be done.

You can only get away with a single pair of (uniform) switches in smaller deployments. On top of that I’m seeing more and more different types of switches being used to optimize value, so it’s not just a lab exercise to do this. Combine this with the fact that DCB is an unavoidable technology in networking, unless it gets replaced with something better and easier, and you might as well try and learn. So I did.

Well, right now I’m successfully seeing RoCE traffic going across cluster nodes spread over different racks in different rows at excellent speeds. The core switches are DELL Force10 S4810s and the rack switches are PowerConnect 8132Fs. By borrowing an approach from spine/leaf designs this setup delivers bandwidth where they need it at a price point they can afford. They don’t need more expensive switches for the rack or the core, as these do support DCB and give the port count needed at the best price point. This isn’t supposed to be the pinnacle of non-blocking network design. Nope, but what’s available & affordable in your hands today is better than perfection tomorrow. On top of that, this is a functional learning experience for all involved.

We see some pause frames being sent once in a while and this doesn’t impact speed very much. It does guarantee lossless traffic, which is what we need for RoCE. When we live migrate 300GB worth of memory across the nodes in the different racks we get great results. It varies a bit depending on the load the switches & switch ports are under, but that’s to be expected.
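
For reference, this is roughly what the Windows side of such a setup looks like. Treat it as a minimal sketch: priority 3 and an adapter called “RDMA1” are just examples, not the values from this deployment, and the switch side (S4810/8132F) needs matching DCB/PFC settings.

# Host-side DCB for SMB Direct (sketch; adjust priority, bandwidth & NIC names to your design)
Install-WindowsFeature Data-Center-Bridging

# Tag SMB Direct traffic (port 445) with 802.1p priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control only for that priority, keep it off for the rest
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for the SMB traffic class via ETS
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Apply the QoS/DCB settings on the RDMA capable adapter
Enable-NetAdapterQos -Name "RDMA1"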

Now, tests have shown us that we can live migrate just as fast with non-RDMA 10Gbps, leveraging “only” SMB Multichannel, as we can with RDMA. So why even bother? The name of the game is low latency and preserving CPU cycles for SQL Server or storage traffic over SMB 3. Can’t we just buy more CPUs/cores? Great, easy & fast, right? But then SQL licensing comes into play and it becomes very expensive. Also, storage scenarios under heavy load are not where you want to drop packets.
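
If you want live migration to ride over SMB 3, and thus use SMB Direct whenever RDMA is available, the setting is a one-liner per host. A sketch, assuming Windows Server 2012 R2 Hyper-V hosts:

# Switch live migration to the SMB transport so it can leverage Multichannel and, if possible, RDMA
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

# Verify the setting
Get-VMHost | Select-Object VirtualMachineMigrationPerformanceOption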

Will this matter in your environment? Great question! It depends on your environment. Sometimes RDMA is needed/warranted, sometimes it isn’t. But the Mellanox cards are price competitive and why not test and learn right? That’s time well spent and prepares you for the future.

But what if it goes wrong? Ah well, if the nodes fail to connect over RDMA you still have Multichannel, and if the DCB stuff turns out not to be what you need or can handle, turn it off and you’ll be good.
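
Checking whether you are actually getting RDMA, or have quietly fallen back to plain Multichannel, is quick to do. A sketch:

# Which adapters have RDMA enabled
Get-NetAdapterRdma

# RDMA capable interfaces as seen by the SMB client and server
Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface

# Live SMB connections; this shows whether RDMA is actually in use
Get-SmbMultichannelConnection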

RoCE stuff to test: Routing

Some claim it can’t be done reliably. But hey, they said that about non-uniform switch environments too. So will it all fall apart and will we need to standardize on iWarp in the future? Maybe, but isn’t DCB the technology used for lossless, high-performance environments (FCoE but also iSCSI), so why would iWarp not need it? Sure, it works without it quite well. So does iSCSI, right, up to a point? I see these comments a lot more from virtualization admins that have a hard time doing DCB (I’m one, so I do sympathize) than from hard-core network engineers. As I have RoCE cards and they have become routable with the latest firmware and drivers, I’d love to try and see if I can make RoCE v2 or Routable RoCE work over different types of switches, but unless someone is going to sponsor the hardware I can’t even start doing that. Anyway, lossless is the name of the game, whether it’s iWarp or RoCE. Who knows what we’ll be doing in 5 years? 100Gbps iWarp & iSCSI both covered by DCB vNext while FC, FCoE, Infiniband & RoCE have fallen into oblivion? We’ll see.

Looking Back at the DELL CIO Executive Summit 2014


Yesterday I attended the DELL CIO Executive Summit 2014 in Brussels. Basically it was a home match for me (yes, that happens) and I consider it a compliment that I have been given the opportunity to be invited to a day of C-level discussions.

image

Apart from a great networking opportunity with our peers, we had direct access to many of DELL’s executives. I found it interesting to hear what some existing customers had to say about their experiences with DELL Services, especially on the security side of things, where they provide a level of expertise and assistance I did not yet realize they offered.

The format was small scale and encouraged interactive discussions. That succeeded quite well and made for good interaction between the attending CIOs and DELL executives. We were not being sold to or killed by PowerPoint. Instead we engaged in very open discussions on our challenges and opportunities while providing feedback. It reminded me of the great interaction-promoting format at the DELL Enterprise Forum 2014 in Frankfurt this year. You learn a lot from each other and from how others deal with the opportunities that arise.

To give you an idea about the amount of access we got, consider the following. Where else can you walk up to the CEO of a +/- $24 billion company and give him some feedback on what you like and don’t like about the company he founded? Even better, you get a direct, no-nonsense answer which explains why and where. Does he need to do this? My guess is not, but he does, and I appreciate that as an IT professional, Microsoft MVP and customer.

Before the CIO Executive Summit started I joined the Solutions Summit, to go talk shop with sponsors/partners like Intel and Microsoft, DELL employees & peers and lay my eyes on some generation 13 hardware for the 1st time in real life.

It was a long but very good day. As the question gets asked every now and then as to why I attend such summits and events, I can only say that it’s highly interesting to talk to your peers, vendors, engineers and executives. It prevents tunnel vision & acting in your village without knowledge of the world around you. Keeping your situational awareness in IT and business requires you to put in the effort and is highly advisable. It’s as important as a map, reconnaissance and intelligence to the military, without it you’re acting on a playing field you don’t even see let alone understand.

DELL CIO Executive Summit


I’ve been invited to and am attending the CIO Executive Summit with DELL’s Executive Leadership Team on Wednesday September 17, 2014 in Brussels. It’s an opportunity to meet and network with my peers and IT leaders. It also provides the opportunity to discuss our challenges with DELL executives and hear where they see DELL helping us with those.

It runs parallel with DELL Solutions Tour 2014 Brussels (see http://www.dellsolutionstour2014.com/ for events near you) where I’m sure many will be looking at the recently released generation 13 servers & new Intel CPU offerings.

image

I’ll be attending 2 “Strategic Deep Dive Sessions” that address some of the critical challenges facing IT C-level professionals. I’m doing the one on security. This is important, as only eternal vigilance, preparedness & situational awareness can help mitigate disaster. The technology is just a force multiplier.

The other track is on future-ready IT solutions. That means a lot of different things to many of us. The new capabilities and ever faster evolving IT place a financial and operational burden on everyone. I’m very interested to discuss how DELL will deal with this beyond the traditional answers. The need for fast, effective & cost-effective solutions that deliver great ROI & TCO is definitely there, but the move to OPEX versus CAPEX and the potential loss of ownership also introduces risk that can cost us dearly if not managed right. IT is still more than a financial model of service billing, even if it sometimes looks like that. It’s important to keep the mix in balance & do it smart.

So on Wednesday I’ll be focusing on strategy and not action or tools. Something that gets missed way too much, by way too many, way too often. Michael Dell will be there and if I get the opportunity I’ll be happy to give some feedback.

Configuring timestamps in logs on DELL Force10 switches


When you get your Force10 switches up and running and are about to configure them, you might notice that, when looking at the logs, the default timestamp is the time passed since the switch booted. During configuration, looking at the logs can be very handy to see what’s going on as a result of your changes. When you’re purposely testing, it’s not too hard to tell which events you need to look at. When you’re working on stuff or troubleshooting after the fact, things get tedious to match up. So one thing I like to do is set the timestamp to reflect the date and time.

This is done by setting timestamps for the logs to datetime in configuration mode. By default it uses uptime, which logs the events as the time passed since the switch started, in weeks, days and hours.

service timestamps [log | debug] [datetime [localtime] [msec] [show-timezone] | uptime]

I use: service timestamps log datetime localtime msec show-timezone

F10>en
Password:
F10#conf
F10(conf)#service timestamps log datetime localtime msec show-timezone
F10(conf)#exit

Don’t worry if you see a $ sign appear to the left or right of your line like this:

F10(conf)##$ timestamps log datetime localtime msec show-timezone

it’s just that the line is too long and your prompt is scrolling.

This gives me the detailed information I want to see. Opting to display the time zone helps me correlate the events to other events and times on different equipment that might not have the time zone set (you don’t always control this and perhaps it can’t be configured on some devices).

image

As you can see, the logging is now very detailed (purple). The logs on this switch were last cleared before I added these timestamps instead of the uptime to the logs. This is evident from the entry “last logging buffer cleared: 3w6d12h” (green).

Voilà, that’s how you get to see the date and time in your logs, which is a bit handier when you need to correlate them to other events.

SMB 3, ODX, Windows Server 2012 R2 & Windows 8.1 perform magic in file sharing for both corporate & branch offices


SMB 3 for Transparent Failover File Shares

SMB 3 gives us lots of goodies and one of them is Transparent Failover which allows us to make file shares continuously available on a cluster. I have talked about this before in Transparent Failover & Node Fault Tolerance With SMB 2.2 Tested (yes, that was with the developer preview bits after BUILD 2011, I was hooked fast and early) and here Continuously Available File Shares Don’t Support Short File Names – "The request is not supported" & “CA failure – Failed to set continuously available property on a new or existing file share as Resume Key filter is not started.”

image

This is an awesome capability to have. It also made me decide to deploy Windows 8, and now 8.1, as the default client OS. The fact that maintenance (it’s the Resume Key filter that makes this possible) can now happen during daytime and patches can be done via Cluster Aware Updating is such a win-win for everyone that it’s a no-brainer. Just do it. Even better, it’s continuous availability thanks to the Witness service!

When the node running the file share crashes, the clients experience a somewhat long delay in responsiveness, but after about 10 seconds they continue where they left off once the role has resumed on the other node. Awesome! Learn more about this here: Continuously Available File Server: Under the Hood and SMB Transparent Failover – making file shares continuously available.
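
If you want to see this at work on your own cluster, here’s a quick sketch (no share names assumed) to check which shares are continuously available and which clients are registered with the Witness service:

# Which shares have Transparent Failover (continuous availability) enabled
Get-SmbShare | Select-Object Name, ScopeName, ContinuouslyAvailable

# Clients currently registered with the Witness service for fast failover notification
Get-SmbWitnessClient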

Windows Clients also benefit from ODX

But there is more: it’s SMB 3 & ODX that bring us even more goodness, the offloading of reads & writes to the SAN, saving CPU cycles and bandwidth. Especially in the case of branch offices this rocks. SMB 3 clients that copy data between file shares on a Windows Server 2012 (R2) host with storage on an ODX capable SAN get the benefit that the transfer request is translated into ODX by the server, which receives a token that represents the data. This token is used by Windows to do the copying and is handed to the storage array, which internally does all the heavy lifting and tells the client the job is done. No more reading data from disk, translating it into TCP/IP, moving it across the wire to reassemble it on the other side and write it to disk.

image
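
If you want to verify that ODX hasn’t been switched off on a host, the documented knob is the FilterSupportedFeaturesMode value under HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem. A quick sketch:

# 0 means offloaded data transfers (ODX) are allowed, 1 means they are disabled
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' -Name 'FilterSupportedFeaturesMode'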

To make ODX happen we need a decent SAN that supports this well. A DELL Compellent shines here. Next to that, you can’t have any filter drivers on the volumes that don’t support offloaded read and write. This means that we need to make sure that features like data deduplication support this, but also that 3rd party vendors for anti-virus and backup don’t ruin the party.

image

In the screenshot above you can see that Windows data deduplication supports ODX. If you run antivirus on the host you have to make sure that the filter driver supports ODX. In our case McAfee Enterprise does, so we’re good. Do make sure to exclude the cluster related folders & subfolders from on-access scans and scheduled scans.
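
To check this on your own volumes, you can list the filter driver instances and their supported features. A sketch; the exact column name and the value indicating offload read/write support can differ a bit between builds, so treat it as a pointer rather than gospel:

# Lists minifilter instances per volume; the supported-features column shows whether a filter
# (dedup, antivirus, backup agent) supports offloaded read & write
fltmc instances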

Do not run DFS Namespace servers on the cluster nodes. The DfsDriver does not support ODX!

image

The solution is easy: run your DFS Namespace servers separately from your cluster hosts, somewhere else. That’s not a show stopper.

The user experience

What does it look like to a user? Totally normal, except for the speed at which the file copies happen.

Here’s me copying an ISO file from a file share on server A to a file share on server B from my Windows 8.1 workstation at the branch office in another city, 65 KM away from our data center and connected via a 200Mbps pipe (MPLS).

image

On average we get about 300 MB/s or 2.4 Gbps, which “over” a 200Mbps WAN is kind of magic. I assure you that they’re not complaining and get used to this quite (too) fast.

The IT Pro experience

Leveraging SMB 3 and ODX means we avoid people consuming tons of bandwidth over the WAN and make copying large data sets a lot faster. On top of that, the CPU cycles and bandwidth on the server are conserved for other needs as well. All this while we can fail over the cluster nodes without our business users being impacted. Continuous to high availability, speed, less bandwidth & fewer CPU cycles needed. What’s not to like?

Pretty cool huh! These improvements help out a lot and we’ve paid for them via software assurance so why not leverage them? Light up your IT infrastructure and make it shine.

What’s stopping you?

So what are your plans to leverage your software assurance benefits? What’s stopping you? When I asked that I got a couple of answers:

  • I don’t have money for new hardware. Well, my SAN is also pre-Windows 2012 (DELL Compellent SC40 controllers). I just chose based on my own research, not on what VARs like to sell to get maximal kickbacks. The servers I used are almost 4 years old but fully up-to-date DELL PowerEdge R710s, recuperated from their duty as Hyper-V hosts. These servers easily last us 6 years and over time we collected some spare servers for parts or replacement after the support expires. DELL doesn’t take away your access to firmware & drivers like some do and their servers aren’t artificially crippled in feature set.
  • Skills? Study, learn, test! I mean it, no excuse!
  • Bad support from ISVs and OEMs for recent Windows versions is holding you back? Buy other brands, vote with your money and do not accept their excuses. You pay them to deliver.

As IT professionals we must and can deliver. This is only possible as the result of sustained effort & planning. All the labs, testing and studying help out when I’m designing and deploying solutions. As I take the entire stack into account in designs and we do our due diligence, I know it will work. Being active in the community also helps me know early on which vendors & products have issues, so we can avoid the “marchitecture” solutions that don’t deliver when deployed. You can achieve this as well, you just have to make it happen. That’s not too expensive or time consuming, at least a lot less so than being stuck after you’ve spent your money.

Setting Up An Uplink (Trunk/General) With A Dell PowerConnect 2808 or 28XX


Introduction

I was deploying a bunch of PowerConnect 2808 switches that needed to provide connectivity to multiple VLANs (Training, Guest, …) in classrooms. I would have figured it out before I got there with my “assumption”-based quick configuration loaded on the switches if I had just refreshed my insight into how the PowerConnect family of switches works.

image

So before we go on, here are the basics on switch port (or LAG) modes in the PowerConnect family. Please realize that switch behavior (especially for trunk mode in this context) has changed over time with more recent switches/firmware, but the current state of affairs is as follows (depending on what model & firmware you have, behavior differs a bit). You can put your port or LAG in the following 3 (main) modes:

Access: The port belongs to a single untagged VLAN. When a port is in Access mode, the packet types which are accepted on the port cannot be designated. Ingress filtering cannot be enabled/disabled on an access port. So only untagged received traffic is allowed and all transmitted traffic is untagged. The setting of the port determines the VLAN of traffic. Tagged received traffic is dropped. Basically, this is what you set your ports for client devices to (printer, PC, laptop, NAS).

Trunk: In older versions this means that ALL transmitted traffic is tagged. That’s easy. Tagged received traffic is dropped if it doesn’t belong to one of the defined VLANs on the trunk. In more recent switches/firmware untagged received traffic is dropped, except for one VLAN that can be untagged and still be received. That’s nice for the default VLAN and makes for better compatibility with other switches.

General: You determine what the rules are. You can configure it to transmit tagged or untagged traffic per VLAN. Untagged received traffic is accepted and the PVID determines the VLAN it is tagged with. Tagged received traffic is dropped if it doesn’t belong to one of the defined VLANs.

Also see this DELL link PowerConnect Common Questions Between Access, General and Trunk mode

The PowerConnect 28XX Series

These  are good switches for their price point & use cases. Just make sure you buy them for the right use case. There is only one thing I find unforgiving in this day and age: the lack of SSH/HTTPS support for management.

Go ahead, fire up a 2808, take a look at the web interface and see what you can configure. In contrast with the PC54XX/55XX etc. series, you cannot set the port mode, it seems. So how can this switch accommodate trunk/general/access modes at all? Well, it’s implied in the configuration: ports are set to general mode by default and you cannot change that. The good news is that with the right settings a port in general mode behaves like a port in access or trunk mode. How? Well, we follow the rules above.

So we assume here that a port is in general mode (it can’t be changed). But we want trunk mode, so how do we get the same behavior? Let’s look at some examples in pseudo CLI (it’s a web GUI only device).

Example 1: Classic Trunk = only defined tagged traffic is accepted. All untagged traffic is dropped

switchport mode trunk
switchport trunk allowed vlan add 9, 20

So we can have the same behavior in general mode using:

switchport mode general
switchport general allowed vlan add 9, 20 tagged
switchport general pvid 4095   

The PVID of 4095 is the industry standard discard VLAN; it assigns this VLAN to all untagged traffic, which is then dropped. Ergo, this is the same as the trunk config above!

Example 2: Modern Trunk = only defined tagged traffic and one untagged VLAN is accepted

switchport mode trunk
switchport trunk allowed vlan add 9, 20
switchport trunk allowed vlan add 1 untagged

So we can have the same behavior in general mode using:

switchport mode general
switchport general allowed vlan add 9, 20 tagged
switchport general pvid 1  

This example is what we needed in the classroom, and it is basically what you set with the GUI. So far so good. But we ran into an issue with connectivity to the access ports in VLAN 9 and VLAN 20. Let’s look at that in the next example.

Example 3: Access port mode = only one untagged VLAN is accepted

switchport mode access
switchport access vlan 9

So we can have the same behavior in general mode using:

switchport mode general
switchport general allowed vlan add 9 untagged
switchport general pvid 9

If you’re accustomed to the higher end PC switches, you define the port in access mode and add the VLAN of your choice untagged. That’s it. Here the mode is general and can’t be changed, meaning we need to set the PVID to 9 so all untagged traffic is indeed tagged with VLAN 9 on the port.

Setting Up an Uplink Between a PowerConnect 5548 and a 2808

Here’s the normal deal with higher range series of PowerConnect switches: you normally use the port mode to define the behavior and in our case we could go with a trunk or general mode. We use trunk, leave the native VLAN for the one untagged VLAN and add 9 and 20 as tagged VLANs.

The “trunk” port or LAG is left on the default PVID

image

So an “access” port for VLAN 9 is achieved by setting the PVID to 9

image

And an “access” port for VLAN 20 is achieved by setting the PVID to 20

image

While the VLAN  membership settings are what you’d expect them to be like on the higher end PowerConnect models:

VLAN 1 (native)

image

VLAN 9 (Corp)

image

VLAN 20 (Guest)

image

If it’s the first time you’re configuring a PC2808, you might totally overlook the fact that you need to do some extra work to make traffic flow. So, to recap what you need to do: as described above, there is no selection of access/general/trunk mode on a PowerConnect 2808. The port or the LAG is “implicitly” set to general and the extra settings of the PVID and adding tagged/untagged VLANs will make it behave as general, trunk or access.

  • The trick is to set any other VLAN than the default 1 to tagged on the port or LAG you’ll use as uplink. So far things are quite “standard PowerConnect”.
  • You set the VLAN membership of your “access” ports to untagged for the VLAN you want them to belong to.
  • After that, on the “access” ports, you set the PVID to the VLAN you want the port to belong to. If you do not do this, the port still behaves as a VLAN 1 port. It will not get a DHCP address for that VLAN but for the one on VLAN 1, if there is one, or, if you use a static IP address in the subnet of a VLAN on that port, you won’t have connectivity as the port is not set to the right VLAN.

The reason we used the PowerConnect 2808 series here is that we needed silent switches (passive cooling) and multiple of them in the training rooms to avoid too many cables running around the place. That was the two-minutes-at-the-project-manager’s-desk quick fix to a changed requirement. The real solution of course would have been to get 24+ outlets into the room in the correct places and add 24+ ports to the normal switch count in the hardware analysis for the building. But after the fact you have to roll with the flow.