About workinghardinit

IT without the sales brochure gloss

Failed at dumping XP in a timely fashion? Reassert yourself by doing better with Windows Server 2003!


I could write a blog post that repeats the things I said about XP here for Windows Server 2003, with even more drama attached, but I won’t. There’s plenty about that on the internet and you can always read these blogs again:

I also refer you to an old tweet of mine that got picked up by someone who kind of agreed:

image

Replace “XP” with “Server 2003” and voila, instant insight into the situation. You are blocking yourself from moving ahead and it’s getting worse by the day. All IT systems & solutions rot over time. They become an ever bigger problem to manage and maintain, costing you time, effort, money and lost opportunities due to blocked progress. There comes a day when creative solutions won’t pop up anymore, like the one in this blog post Windows XP Clients Cannot Execute Logon Scripts against a Windows Server 2012 R2 Domain Controller – Workaround, and more recently this one, where people just waited too long to move AD over from Windows Server 2003 to something more recent: It turns out that weird things can happen when you mix Windows Server 2003 and Windows Server 2012 R2 domain controllers. All situations where not moving ahead out of fear of breaking stuff actually broke the stuff.

In the environments I manage I look at the technology stack and plan the technologies that will be upgraded in the coming 12 months in the context of what needs to happen to support & sustain initiatives. This has the advantage that the delta between versions & technologies can never become too big. It avoids risk because it doesn’t let the delta grow for 10 years and it blocks the introduction of “solutions” that only support old technology stacks. It makes sure you never fall behind too much, pay off existing technology debt in a timely fashion and open up opportunities & possibilities. That’s why our AD is running Windows Server 2012 R2 and our ADFS was moved to 3.0 already. Just because a lot of things have become commodities doesn’t mean you should hand ‘m over to the janitor in break/fix mode. Oh the simplicity by which some wander this earth …

OODA

Observe, Orient, Decide, Act. Right now in 2014 we’ve given management and every product/application owner their marching orders: move away from any Windows 2008 / R2 server that is still in production. Why? They demand a modern, capable infrastructure that can deliver what’s needed to grasp the opportunities that exist with current technology. In return they cannot allow apps to block this. It’s as easy and simple as that. And we’ll stick to the 80/20 rule to call it successful and up the effort next year for the remainder. Whether it’s an informal group of dedicated IT staff or a full blown ITIL process that delivers that doesn’t matter. It’s about the result, and if I still see Windows 7 or Windows 2008 R2 being rolled out as a standard I look deeper and often find a slew of Windows 2003 or even Windows 2000 servers, hopefully virtualized by now. But what does this mean? That you’re in a very reactive mode & in a bad place. Courage & plans are what’s needed. Combine this with the skills to deal with the fact that no plan ever works out perfectly. Or as Mike Tyson said “Everybody has a plan until they get punched in the mouth. … Then, like a rat, they stop in fear and freeze.”

Organizations that still run XP and Windows Server 2003 are paralyzed by fear & have frozen even before they got hit. Hiding behind whatever process or methodology they can (or the abuse of it) to avoid failure by doing the absolute minimum for the least possible cost. Somehow they define that as success and it became a mission statement. If you messed up with XP, there’s very little time left to redeem yourself and avoid the same shameful situation with Windows Server 2003. What are you waiting for? Observe, Orient, Decide, Act.

Configuring timestamps in logs on DELL Force10 switches


When you get your Force10 switches up and running and are about to configure them, you might notice that, when looking at the logs, the default timestamp is the time passed since the switch booted. During configuration, looking at the logs can be very handy to see what’s going on as a result of your changes. When you’re purposely testing, it’s not too hard to see what events you need to look at. When you’re working on stuff or troubleshooting after the fact, things get tedious to match up. So one thing I like to do is set the timestamp to reflect the date and time.

This is done by setting timestamps for the logs to datetime in configuration mode. By default it uses uptime, which logs the events as the time passed since the switch started, in weeks, days and hours.

service timestamps [log | debug] [datetime [localtime] [msec] [show-timezone] | uptime]

I use: service timestamps log datetime localtime msec show-timezone

F10>en
Password:
F10#conf
F10(conf)#service timestamps log datetime localtime msec show-timezone
F10(conf)#exit

Don’t worry if you see a $ sign appear left or right of your line like this:

F10(conf)##$ timestamps log datetime localtime msec show-timezone

it’s just that the line is too long and your prompt is scrolling Winking smile.

This gives me the detailed information I want to see. Opting to display the time zone helps me correlate the events to other events and times on different equipment that might not have the time zone set (you don’t always control this and perhaps it can’t be configured on some devices).
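If you want to double check the setting and keep it across reboots, something along these lines should do it. This is a sketch from memory: FTOS uses grep-style filtering on show commands, but verify the exact syntax against your firmware’s documentation. The first command should echo the service timestamps line you configured, the second saves the running configuration so the change survives a reboot:

```text
F10#show running-config | grep timestamps
F10#copy running-config startup-config
```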

image

As you can see, the logging is now very detailed (purple). The logs on this switch were last cleared before I added these timestamps instead of the uptime to the logs. This is evident from the entry for last logging buffer cleared: 3w6d12h (green).

Voila, that’s how we get to see the date and time in the logs, which is a bit handier if you need to correlate them to other events.

Defragmenting your CSV Windows 2012 R2 Style with Raxco Perfect Disk 13 SP2


When it comes to defragmenting CSVs, it seemed we took a step back when it comes to support from 3rd party vendors. While Windows provides a great toolset to defragment a CSV, that support seemed to have disappeared from 3rd party vendor software. Even from the really good Raxco PerfectDisk. They did have support for this with Windows 2008 R2 and I even mentioned that in a blog.

If you need information on how to defragment a CSV in Windows 2012 R2, look no further. There is an absolutely fantastic blog post on the subject, How to Run ChkDsk and Defrag on Cluster Shared Volumes in Windows Server 2012 R2, by Subhasish Bhattacharya, one of the program managers in the Clustering and High Availability product group. He’s a great guy to talk shop with, by the way, if you ever get the opportunity to do so. One bizarre thing is that this must be the only place where PowerShell (the Repair-ClusterSharedVolume cmdlet) is deprecated in favor of chkdsk.
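To give an idea of that built-in route, here’s a minimal sketch. The switches are standard defrag.exe switches and the volume path is just an example from my lab layout; do read the linked post for the full orchestration (redirected mode and so on) before running a real pass on production:

```shell
REM Analyze fragmentation of a CSV mount point first (read-only report)
defrag C:\ClusterStorage\Volume1 /A /U /V

REM After following the orchestration steps in the post, run the actual pass
defrag C:\ClusterStorage\Volume1 /U /V
```

Run this from an elevated prompt on the node that owns the CSV.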

3rd party wise, the release of Raxco PerfectDisk 13 SP2 brought back support for defragmenting CSVs.

image

I don’t know why it took them so long, but the support is here now. It looks like they struggled to get CSVFS (the way CSVs are done since Windows Server 2012) supported. Whilst at it, they threw in support for ReFS by the way. This is the first time I’ve ever seen this. Anyway, it’s here and that’s good, because I have a hard time accepting that any product (whatever it does) supports Hyper-V if it can’t handle CSVs, not if you want to be taken seriously anyway. No CSV support equals the do-not-buy list in my book.

Here’s a screenshot of PerfectDisk defragmenting away. One of the CSV LUNs in my lab is an SSD and the other an HDD.

image

Notice that in Global Settings you can tweak the optimization behavior when defragmenting various drive types, including CSVFS. But you can just leave the defaults on, unless you like manual labor or love PowerShell so much you can’t forgo any opportunity to use it Winking smile

image

PerfectDisk cannot detect what kind of disks you have behind the CSV LUN, so you might want to change the optimization method if you’re running an SSD instead of an HDD.

image

I’d love for Raxco to comment on this or point to some guidance.

What would also be beneficial to a lot of customers is guidance on defragmentation on the different auto-tiering storage arrays. That would make for a fine discussion I think.

Migrate A Windows 2003 RADIUS–IAS Server to Windows Server 2012 R2


Some days you walk into environments where legacy services have been left running for 10 years because:

  1. They do what they need to do
  2. No one dares touch it
  3. They have been forgotten, yet they provide a much used service

Recently I had the honor of migrating IAS that was still running on Windows Server 2003 R2 x86, which was still there for reason 1. Fair enough, but with W2K3 going away it’s high time to replace it. The good news was it had already been virtualized (P2V) and was running on Hyper-V.

Since Windows 2008 the RADIUS service is provided by the Network Policy Server (NPS) role. Note that this environment does not use SQL for logging.

Now, in W2K3 there is no export/import functionality for the IAS configuration. So are we stuck? Well no, a tool has been provided!

Install a brand new virtual machine with W2K12R2 and update it. Navigate to the C:\Windows\SysWOW64 folder and grab a copy of IasMigReader.exe.

image

Place IasMigReader.exe in the C:\Windows\System32 path on the source W2K3 IAS server. As that path is configured in the %path% environment variable, the tool will be available anywhere from the command prompt.

  • Open an elevated command prompt
  • Run IasMigReader.exe

image

  • Copy the resulting ias.txt file from the C:\Windows\System32\IAS folder. Please keep this file secure, as it contains passwords. TIP: As a side effect you can migrate your RADIUS setup even if no one remembers the shared secrets, and you now have them again Winking smile

image

Note: The good news is that in W2K12 (R2) the problem with IasMigReader.exe generating a bad parameter in ias.txt is fixed (The EAP method is configured incorrectly during the migration process from a 32-bit or 64-bit version of Windows Server 2003 to Windows Server 2008 R2). So no need to mess around in there.

  • Copy the ias.txt file to a folder on your target NPS server & run the following command from an elevated prompt:

netsh nps import <path>\ias.txt

image

  • Open the NPS MMC and check that all went well; normally you’ll have all your settings there.

image

When Network Policy Server (NPS) is a member of an Active Directory® Domain Services (AD DS) domain, NPS performs authentication by comparing user credentials that it receives from network access servers with the credentials that are stored for the user account in AD DS. In addition, NPS authorizes connection requests by using network policy and by checking user account dial-in properties in AD DS.

For NPS to have permission to access user account credentials and dial-in properties in AD DS, the server running NPS must be registered in AD DS.

Membership in Domain Admins, or equivalent, is the minimum required to complete this procedure.
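Registering can be done from the NPS console (right-click the server node and choose Register server in Active Directory) or from an elevated command prompt. If memory serves, the classic command below still works on W2K12R2; it adds the server’s computer account to the RAS and IAS Servers group in its own domain:

```shell
REM Register the local NPS server in Active Directory
netsh ras add registeredserver
```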

  • All that’s left to do now is pointing the WAPs (or switches & other RADIUS clients) to the new RADIUS servers. On decent WAPs this is easy, as either one of them acts as a controller or you have a dedicated controller device in place.
  • TIP: Most decent WAPs & switches will allow for 2 RADIUS servers to be configured. So if you want, you can repeat this to create a second NPS server, which provides redundancy & load balancing very easily. Only in larger environments do multiple NPS proxies pointing to a number of NPS servers make sense. Here’s a DELL PowerConnect W-AP105 (Aruba) example of this.

image
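As an illustration only, on many switches the RADIUS client side of this looks roughly like the classic IOS-style snippet below. The IP addresses and shared secret are made up, and DELL PowerConnect W / Aruba gear and other vendors each have their own syntax, so treat this as a sketch rather than a recipe:

```text
radius-server host 10.1.1.21 key S3cr3tSh4red
radius-server host 10.1.1.22 key S3cr3tSh4red
```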

Is there longevity in Private & Hybrid Clouds?


This blog is just thinking out loud. Don’t get upset Smile

Private & hybrid clouds demand economies of scale or high value business

Let’s play devil’s advocate for a moment and look with a very critical eye at private & hybrid clouds. Many people are marketing, selling and buying private & hybrid clouds today. Some of us are building them ourselves, with or without help. Some of us even have good reasons to do so, as it makes economic sense. But for many who do it, or consider doing it, that might not be the case. It depends.

Why are so many marching to the beat of those drums? It’s being marketed as great, it’s being sold as what you need and that’s what makes money for many people. But one can say the same of Porsches, but chances are you’re not buying those as company cars. Well it’s perhaps a bit like VDI. If you have a use case that’s economically sound, design and implement it well, it will serve your needs. But it’s not for everyone as it can be expensive, complex & restrictive.

You want your cloud to be this:

AZurenice

Not this:

cloudnasty

To get great results you’ll need to do more than throw your money at vendors. So what’s the real motivation for companies to do private/hybrid clouds? If the answer is “well, so many people are doing it, we can’t ignore it”: not doing something is not ignoring it, it’s a valid choice as well. And what others do isn’t relevant per definition. You need to know what you do, where and why, to make plans & choose technologies to achieve your goals. Think about what you do. When does a private/hybrid cloud make sense? How big do you need to be? What kind of delta should you have to make this worthwhile, i.e. how many VMs do you deploy per week? How many do you destroy each week? What economies of scale must you have to make it wise? What kind of business? What are the pain points you’re trying to solve? What are you trying to achieve? Private clouds today are not void of complexity and there are few abstraction layers at the quality/functionality level they need to be at.

My biggest concern here is that too many companies will build expensive, complex private & hybrid clouds without ever seeing the return on investment. Not just because of the cost and complexity, but also because they might not be very long lived for the use cases they have today. Many see these as transition models and they are great for that. The question is how good are you at transitioning? You don’t want to get stuck in that phase due to the cost of complexity. What if the transition lasts too long and you complete it only when the public cloud has evolved into services that wipe away the reasons your TCO/ROI was based on?

Note: as cloud means everything to everyone, you could call doing on premise & Office 365 + backup to the cloud hybrid as well. In that case hybrid is a better fit for many more organizations.

Things are moving fast

Cloud offerings are increasing at the speed of light and prices are dropping in free fall. While some say that’s a race to the bottom, it’s not. This is an all-out battle raging to grab as much market share as possible. When the dust settles, who’ll be left? Google, Amazon and Microsoft. They’re not loss leaders; they have a purpose, and only they know the financial picture behind their solutions.

image

From there they’ll defend a fixed and entrenched position. Where will that lead us? Stalemate and rising costs? Or a long term tug of war where mutually assured bankruptcy will make sure prices won’t rise too much … until some game changing event breaks it all open. For many people IAAS is still (too) expensive, and none of the cloud vendors seem to run a profit, all this at ever lower prices. Sounds like a price hike will be in order once the market shares have been grabbed. But have people really calculated the cost of on premise? Can one compete? Or is the benefit of on premise worth the cost? Oh, and I take on premise as being anything that even resembles racks in local or regional data centers running a cloud stack on it for you. Now I have to admit that in my region of the world most cloud hosters are not at a level of professionalism & scale like they are in the Nordics, for example.

SAAS, PAAS, IAAS

That’s my order of preference actually. I think SAAS & PAAS are the areas where cloud really shines. IAAS can be a great solution for many needs but I don’t see it as ready yet as a wholesale replacement of on premise. While many offerings in IAAS are not perfect yet and there are many blocking issues to be solved, there is a lot of value in the cloud when you do it right for your needs. If you have a very modern and optimized IT infrastructure, IAAS can feel like a step back right now, but that will change in the right direction over the next 2 to 3 years I think. And as during that time frame you start using SAAS & PAAS more and more, improved IAAS will be able to cover (all?) your remaining needs better. Again, you need to do things that deliver fast or you run high (financial) risks.

Intersecting fields of fire

In this race at light speed, which cloud vendor is best? If you want and need to have all bases covered I think it’s reasonably safe to say Microsoft holds the most complete portfolio of IAAS, PAAS, SAAS & cloud storage. They’re now throwing in MPLS networks (http://azure.microsoft.com/en-us/services/expressroute/) to tie it into hybrid scenarios, which should take last century’s VPN technology out of the picture. Some more standardization in network virtualization, flexibility and capabilities would be welcome as well. But in the end, will it matter? People might choose based on possible use cases or capabilities, but if you don’t need them that’s a moot point. They become commodities you buy from a few players; I just hope we like our cloud dealers a bit better than we do our energy and telecom providers. Nobody seems really happy with those. But as a buyer I like the idea of having options; as the saying goes, “I’d rather have it and not need it than need it and not have it”.

Now MPLS is coming, what else is missing? A storage gateway / proxy in IAAS

One of the biggest issues in airlifting the entire on premise infrastructure into the cloud is the legacy nature of the applications, in combination with the high cost of IAAS (VHD) storage and its limitations compared to what you can do with VHDX on premise. That’s probably an artificial licensing decision, but what can you do? What we need to alleviate this is a REST based cloud gateway to present storage to legacy apps in IAAS while storing the data in Azure blob storage. It’s a bit of a kludge, as we just love the fact we can get rid of pass-through disks, vISCSI and vFC thanks to (shared) VHDX. Why do I think we need a solution? Apps have a very long (too long?) lifetime and it would speed up cloud adoption big time. Just dropping the price of virtual disk storage would be the easiest path to go, but I don’t see any indication of that.

The lure of being in the cloud is big, but bandwidth & latency in combination with storage costs are keeping people from going there when it comes to so many “legacy” on premise applications. There is a fix: put everything in the cloud, where it is close together and where bandwidth and latency can become a non-issue. We need affordable storage and a way for legacy apps to handle object based storage. The fact that the new StorSimple offering has an Azure appliance doesn’t really help here, as it’s tied to on premise and it’s iSCSI to the guest in IAAS. Not that great, is it? For now it looks too much like onboarding to Azure for non-MSFT shops and people who are way behind the herd in modern technologies. At least for the environment I work in. Physical servers are there to host VMs, so no StorSimple. Other physical servers are point solutions (AD, Exchange or specialized software that needs more hardware access than virtualization can supply). Again, no StorSimple target.

I cloud, you cloud, we cloud

Building and maintaining a data center is losing its economic edge fast. At least for now. I’m not saying all data centers or even server rooms will disappear, but they’ll reduce significantly. The economics of public cloud are too attractive to ignore. Private and hybrid clouds cost money on top of the cost of running a data center. So why would you? Sure, the cost of cloud isn’t cheap, but there are other reasons to move:

  • Get rid of facility management of data centers and server rooms. It’s a big issue & cost.
  • Power/cooling needs. The big cloud players are rapidly becoming the only ones with a plan when it comes to developing an energy strategy. Way more innovative & action driven than most governments. They’ll have way better deals than you’ll ever get.
  • Infrastructure costs. Storage, networking, compute, backup, DR, licensing … the entire life cycle of these cost a lot of money and require talent.
  • Personnel costs. Let’s face it. Talented people might be a company’s most valuable resource in HRM speak, but in reality companies would love to get rid of as much of that talent as possible to maximize profits. The only reason they employ talent is because they have to.
  • The growth in compute & storage in the cloud is humongous. You’ll never keep up and compete at that level. It was said recently that Moore’s law has been replaced by “Bezos’s law” http://gigaom.com/2014/04/19/moores-law-gives-way-to-bezoss-law/

I’m going to make a bold statement. If you want/need to do cloud, you should really seriously consider spending your money in the public cloud and minimize your investment in private/hybrid clouds. Go as directly to the future as you can and try to keep your private/hybrid stack as simple and cheap as possible, as a transition to the public cloud. Leverage PowerShell, SMA and, for example, Azure Automation to manage what you leave on premise. I have my doubts about the longevity of private/hybrid clouds for many organizations, and as such those investments should be “optimized” => cheap & easy to replace. So unless you have a real big business case for keeping on premise and can make that economically feasible, it’s not your goal, it’s a transition tool. If you’re a huge enterprise, an agency involved in national security, a hosting company or Switzerland you can ignore this advice Winking smile. But I see no one rushing to buy RackSpace?

Security, Privacy, Concentrated Power?

What about security, privacy and vendor lock-in? You have to worry about that now as well, and you’re probably not that good at avoiding it on premise either. Switching from Oracle to SQL is not an easy feat. Cloud companies will have a lot of power due to the information they distill from big (meta) data. On top of that they’re set to be the biggest providers of compute, energy & if they buy some telecom companies, even of data communications. More and more power concentrated in ever fewer players. That’s not desirable, but it seems that’s how it will play out. The alternatives cost more and that, most of all, determines what happens. The economies are too good to ignore.

Government clouds to mitigate risk?

I now also see the call to build government clouds, often at various levels. Well, for decades now, bar some projects, a lot of their IT efforts have been slow, mediocre and expensive. 400$ to lift & place back some floor tiles. Having to buy a spool of 2 km fibre channel cable if you need 80 meters. 5000$ to answer a question with yes or no, a VM that costs 750$ per month … (1000$ if you want a backup of the VM). 14 days to restore a VM from backup … abuse & money grabbing are rampant. Are these people going to do private cloud and compete? Are they any better at securing their infrastructure than Amazon? Is on premise encryption any better than in the cloud? And even if it is, it’s only until someone pulls a “Snowden”. And who’ll build ‘m? Where are the highly skilled, expert civil servants, after decades of outsourcing left them at the mercy of 3rd parties? Are they going to buy talent back in an era of cost cutting? And if they could, can they use it, do they have the organizational prowess to do so? So they’ll be built by the same pundits as before? Outsourcing to India would at least have been “the same mess for less”, while now it’s the same mess for more.

Sheep, lemmings, wolves & a smart CIO

I see way too little strategy building on this subject and too many “comfort” decisions being made that cost a lot of money and effort while delivering not enough competitive advantage. The smart CIO can avoid this and really deliver on “Your Cloud, Your Terms”. The others, well, they’ll all play their role …

Just some food for thought. But I leave you with another musing. 100% cloud might be a great idea but it’s like leasing or renting. There are scenarios where ownership still makes sense, depending on the situation and business.

What You Need To Hear, Not What You Want To Hear


The usual disclaimer covers this blog. Dilbert® Life series posts are humorous takes on corporate culture from hell and dysfunctional organizations running wild. This can be quite shocking and sobering to those who take themselves too seriously. So these blog posts need to be read with a healthy dose of humor and be put into perspective. If you can’t do that, leave now. If it hits home too hard, you have other problems. It could be that you don’t like what you see in this mirror. Or perhaps …

You’re so vain, you probably think this blog is about you
You’re so vain, I’ll bet you think this blog is about you
Don’t you? Don’t you?

Many thanks to Carly Simon’s “You’re so vain” Smile

Shopaholic Organizations

There is a shocking addiction to trying to buy one’s way out of problems. If the service desk process sucks, then you buy a CRM package. If this doesn’t do what you hoped out of the box, have it customized. You don’t have 100% IT automation? You need to buy a CMDB! Need to track changes? Go ITIL & do ITLM/ITSM all over the board. Projects don’t respect their boundaries? Hire some PRINCE expertise. Can’t keep up with all the project & resource management? Buy an ERP and integrate it with the project management software you’ve been abusing. You have no clue what to do next? Hire management consultants! We have one for every flavor of management. Your employees suck? Hire consultants. Slow applications? Buy flash-only storage and 40Gbps switches. Your employees are disengaged? Get a coach, buy a team building experience and a 5$ pizza discount coupon as an “atta boy”. Maybe you could even gamify the company to success? And if you feel all alone and misunderstood you can join all the peer groups & professional organizations you can find to play that same broken record to each other over and over again whilst hoping you catch a break to a better gig.

Whatever the problem you’re facing, there is a product to buy and help to be hired. Like a true addict you keep using more of the same in the hope it will work. Nice twist on what Einstein called the definition of insanity. Yet why do so many people think it will help, all evidence to the contrary?

The obsessive and compulsive need to buy stuff to fix or even solve problems, needs, lack of skills, knowledge and insights is staggering. Sure, the world is full of people and companies that will gladly take your money. Why? Well, that’s their business model. The only aim is to separate you from your money. They’ll tell you they understand you, that they’ve helped hundreds of people and businesses like you. So they’ll sell you whatever it is they sell and they couldn’t care less if you’re still around next year. Until perhaps the moment, in 18 months, they know they can sucker you again. The only line of defense you have against that is your own good judgment. It’s not that all of their products or services have no value at all. The better vendors will even walk away from an engagement when it’s not mutually beneficial. But the core of the problem is your inability to deal with issues that cannot be solved by buying something. It’s very much like being a shopaholic.

It’s a business model for someone

The idea that there is an easy fix to solve the issues you’re facing and make sure you can shine as a successful leader, instead of being stuck in your current mess, is a very tempting one. There is always someone who understands this. Who’s ready to step up and deliver. Which would be great if it were not for a few simple rules:

  • A fool and his money are easily separated. And if not, as long as the money is good enough they’ll put in more effort.
  • Your problems are internal, they are caused by you and need to be fixed by you. Any addiction to whatever (products, services, consulting, coaching) is actually keeping you away from the solution.

image

  • You as a manager, perhaps even a leader, will have to step up. Be all you can be and if that is not enough step aside. Do the latter yourself before it’s done to you, it’s less messy that way.

Listen, when the money is gone, all that is left are your internal resources, if you’re lucky. Acting as if they don’t matter means they won’t be very engaged. All budgets are limited, but that doesn’t mean that you need to be a scrooge. It means you need to create and build a capable organization, even when budgets are plentiful, that can stand on its own feet. One that is able to analyze and decide independently what it needs to do and act on that. Spend your money there. Otherwise, as soon as you run out, you lose all your capabilities to act. It’s like a ship without power, on top of not even having a rudder. You’re adrift, floating between the sharks that bled you dry.

Also, if all your organization knows how to do is hire & buy everything from others, it can easily be replaced with a cheaper one that’s optimized for that model, needing 40% to 50% fewer employees & managers. Pure substitution play. Game over. Economics 101.

You need to get a clue and make it happen, you and your team, no one else. But it has to start with you. If you need coaches, consultants or products just to get started, you’re not going to make it.

Ouch, that hurt!

Deep down you know the painful truth. While it would indeed be great if you were able to hire a coach or consultant, or buy a service or product, that could take away your pains, it doesn’t work that way. You cannot purchase those magical bottles of pixie dust or unicorn tears that put the struggles and headaches behind you, allowing you to solely focus on enjoying a successful business in eternal bliss.

image

I could tell you that you’re in luck, as I have a nice stash of pixie dust bottles I can use in a pinch and for a price. But that’s not it. What’s needed is experience, knowledge, having to work and live with solutions, seeing the good, the bad and the ugly of both marketing and “marchitecture”, in combination with the grand and hopefully realistic visions of analysts & architects. The only thing this has in common with pixie dust is that it doesn’t come cheap or easy either, but it does work Winking smile

Too many times solutions are nothing but rehashed marketing & sales pitches that succeed due to a lack of skill on both sides. All kinds of schemes are used to justify them. They don’t achieve much at all. These are often self-serving “quick fixes” to what is a structural & often over-hyped, over-complicated problem, serving some people’s agendas.

So you spend your money and for a little while you experience the illusion that you’ve solved something. But like any addict, you, the shopaholic, will return hard and fast to reality. Poorer and sadly none the wiser. You coast from purchase to purchase, never breaking this destructive pattern. You like to fool yourself into believing that you’re investing instead of spending money, because you see so many successful companies buy the same products or services. It’s kind of painful and sad to watch. Some of you will blame the market, incompetent employees or dishonest vendors, lack of commitment, disobedience. While all these factors do exist and play their role, they’re not the real cause of your woes. The environment you operate in is no different for you than for your competitors. Sure, there might be a hobby business around, run by the son of a super-rich business tycoon, but that’s a minority. No, the playing field is the same, so could it, however painful the thought, be that you are not made of the right stuff?

What if despite all your best efforts and even some pixie dust you still have issues that are killing your performance? You can suck it up and BS your way out. Say that what you did is the best in the world and nothing more can be done. Hire consultants to audit whatever it is you want to audit (or whoever you want to put in their place if you’re really political), blame your predecessor, the lack of (upper) management vision or the current sunspot cycle. You can also really dive in and pinpoint where the issues are. But that’s hard, very hard. A lot harder than buying a vial of unicorn tears, which seems to be the missing ingredient in any unrealistic project, overly ambitious architecture or design. It’s horribly difficult to obtain because it is scarce beyond imagination.

image

I’ll make you a deal. While I possess some flasks, they are the most expensive substance ever to come by. So if you require the tears of a unicorn, you’re going to need truckloads of money in large denominations.

But there are no unicorn tears. YOU will need to fix your problems. Forget about buying products; that’s in essence automation and optimization, and if you apply that to a problem you only make it bigger and worse, faster. Forget about coaches and consultants; they’ll only enable you to move faster and in a more targeted fashion, if you know the goal, that is. They will not solve your problems. That’s your job.

Don’t try to improve things with tools and services until you really know what’s wrong. Look very deep, hard and honest at your company, your managerial results and your actions. If all you find is that you do things to save your own behind, cover your back and hopefully move ahead, you’re not fit to lead anything at all and you’re as much a strategist as my hamster. But in defense of my hamster: he lacks any ambition. As a leader / manager you should care a bit more. Action is needed, from you. Lip service is useless. Talk is cheap. Fear kills. Deflecting decisions and responsibility makes you lose all credibility. If you care, act like it. If you don’t care, no one else will for sure. If you can’t be bothered to do the hard work, no one will. You can’t lead from behind.

So what needs to be done?

Stop what you’re doing right now. Observe, orient, decide, and act (OODA) and see the progress that intelligent decisions bring; watch how much money invested differs in results from money spent. There is no substitute. You don’t need tools, coaches, taskforces, committees and services. Those are only for amplification, they are force multipliers, and that’s great as long as you don’t apply them to your problems. Hard as it may sound, it’s (free) advice that you won’t get from a sales person. You cannot avoid your responsibilities.

The eyes of the world are upon you

You brought this on yourself. You stepped up to the plate as a leader. So yes, your employees are watching and they don’t miss much of what affects them. I know employees can act very entitled and be a major pain in the proverbial behind, but this discussion isn’t about that. Do you want to know why they doubt you, don’t follow you, ignore or possibly even oppose you? Because you show no leadership and do not portray any sign of competence or insight. For the good of the company and themselves they do what they need to, with or without you. No one goes over the top anymore at the blow of the whistle. So don’t pull rank; instead, try to become credible.

Migrate an old file server to a transparent failover file server with continuous availability


This is not a step by step “How to” but we’ll address some things you need to do and the tips and tricks that might make things a bit smoother for you.

1) Disable Short file names & Strip existing old file names

Never mind that this is needed to be able to do continuous availability on a file share cluster. You should have done this a long time ago. For one, it enhances performance significantly. It also makes sure that no crappy apps that require short file names to function can be introduced into the environment. While I’m an advocate for mutual agreements, there are many cases where you need to protect users and the business against themselves. Being too much of a politician as a technologist can be very bad for the company, as it allows bad workarounds and technology debt to be introduced. Stand tall!

Read up on this here: Windows Server 2012 File Server Tip: Disable 8.3 Naming (and strip those short names too). Next to Jose’s great blog, read Fsutil 8dot3name on how to do this.

If you still have applications that depend on short file names you need to isolate and virtualize them immediately. I feel sorry for you that this situation exists in your environment and I hope you get the necessary means to deal with it swiftly and decisively by getting rid of these applications. Please see The Zombie ISV® to be reminded why.

Some tips:

  • Only use the /F switch if it’s a non-system disk and you can afford to do so, as you’re moving the data LUN to a new server anyway. Otherwise you might run into issues. See the below example. image
  • If you stumble on paths that are too long, intervene. Talk to the owners. We got people to reduce a “Human Resources Planning And Evaluations” sub folder & file name to HRMPlanEval. You get the gist, trim them down.
  • You’ll have great success on most files & folders, except those that are open. Schedule a maintenance window to make sure you can run without anyone connected to the shares (stop LanManServer during that maintenance window). image
  • Also verify no other processes are locking any files or folders (antivirus, backups, sync tools etc.)
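Putting those tips together, here is a minimal sketch of the fsutil sequence on a data volume. The drive letter and log paths are just examples, adjust them to your environment:

```powershell
# Stop creating new 8.3 short names on the data volume (1 = disabled)
fsutil 8dot3name set L: 1

# Scan first: report which files/folders carry short names and which
# registry entries reference them, without changing anything yet
fsutil 8dot3name scan /s /l C:\Temp\8dot3scan.log L:\

# Strip the existing short names. /f forces the strip even with open
# handles, so only use it on a non-system disk during a maintenance window
fsutil 8dot3name strip /s /f /l C:\Temp\8dot3strip.log L:\
```

Review the scan log before stripping; if it lists registry references to short names, fix those first or the applications involved will break.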

2) Convert MBR disks to GPT if you can

With ever growing amounts of data to store and protect this makes sense. I’m not saying you need to start doing 64TB disks today, but making sure you can grow beyond 2TB is smart. It doesn’t cost anything when you use GPT disks from the start. If you have older LUNs you might want to use the migration as an opportunity to convert MBR LUNs to GPT. That means copying the data and all NTFS permissions.

Please see  NTFS Permissions On A File Server From Hell Saved By SetACL.exe & SetACL Studio for some tools that might help you out when you run into NTFS/ACL permissions and for parsing logs during this operation.

Here’s a useful Robocopy command to start out with:

ROBOCOPY L:\ V:\ /MIR /SEC /Z /R:2 /W:1 /V /TS /FP /NP /XA:SH /MT:16 /XD "System Volume Information" *RECYCLE* /LOG:"D:\RoboCopyLogs\MBR2GPTLUNL2V.txt"

3) Dump the existing shares on the old file server into a text file for documentation and use on the new file server

Pre-Windows Server 2012 the new SMB Cmdlets don’t work, but no fear, we have some other tools to use. Using NET SHARE does work, and with it you can also show the hidden and system shares, but the layout is a bit of a mess. I prefer to use:

Get-WmiObject –class Win32_Share > C:\temp\OldFileServerShares

It’s fast, complete and the layout is very user friendly. Which is what I need for easy use with PowerShell on the W2K12R2 file server. Some of you might ask: what about the share security settings? 1) We’re going to cluster, so exporting these from the registry doesn’t work, and 2) you should have kept this plain vanilla and done security via the NTFS permissions on the folder structure only. But hey, I’m a nice guy, so here’s a link to a community PowerShell script if you need to find out the share permissions: http://gallery.technet.microsoft.com/scriptcenter/List-Share-Permissions-83f8c419 I do however encourage you to use this time to consider just using security on NTFS.
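If you want that documentation in a form that’s easy to reuse in step 4, a small sketch that exports just the real disk shares to CSV (the output path is an example):

```powershell
# Dump the plain disk shares (Type 0 skips ADMIN$, IPC$ and friends)
# with their paths to a CSV we can feed back into New-SmbShare later
Get-WmiObject -Class Win32_Share |
    Where-Object { $_.Type -eq 0 } |
    Select-Object Name, Path, Description |
    Export-Csv -Path C:\Temp\OldFileServerShares.csv -NoTypeInformation
```

A CSV is trivial to re-import with Import-Csv, which beats parsing a text dump.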

4) Create the clustered file shares

Amongst the many gems in Windows Server 2012 R2 are the new SMB PowerShell Cmdlets. They are a simple and great way to create clustered file shares. Read up on these SMB Share Cmdlets and especially New-SmbShare.

When we’ve unmapped the LUNs from the old file server and exposed them to the new file server cluster you’re ready to go. You can even reorganize the shares, consolidate to fewer but bigger LUNs, and, by just adapting the path to the share in the script, make sure the users are not confused and don’t need to learn new shares or adapt how & what they connect to. Here it goes:

New-SmbShare -Name "TEST2" -path "T:\Shares\TEST2" -fullaccess Everyone -EncryptData $True -FolderEnumerationMode AccessBased -ConcurrentUserLimit 0 -ScopeName TF-FS-MIG

First and foremost, this is where the good practice of not micro managing file share permissions will pay back big time. If you have done security via NTFS permissions with the AG(U)DLP principle on your folder structure, granting access should be a breeze, right?

Before you ask: no, you can’t do the old trick of importing the registry export of the shares and their security settings from the old file server when you’re going to cluster the file shares. That might sound bad, but with some preparation and the PowerShell I demonstrated above you’ll find it easy enough.
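If you dumped the old shares to a CSV in step 3, recreating them in bulk is a short loop. A hedged sketch; the scope name, CSV path and the Everyone/NTFS approach mirror the single-share example above and should be adapted to your environment:

```powershell
# Recreate each old share as a clustered SMB share on the new role.
# Assumes a CSV with Name and Path columns exported from Win32_Share.
Import-Csv C:\Temp\OldFileServerShares.csv | ForEach-Object {
    # Adapt $_.Path here if you consolidated to fewer, bigger LUNs
    $params = @{
        Name                  = $_.Name
        Path                  = $_.Path
        ScopeName             = 'TF-FS-MIG'
        FullAccess            = 'Everyone'   # real security lives in the NTFS ACLs
        FolderEnumerationMode = 'AccessBased'
        ContinuouslyAvailable = $true
    }
    New-SmbShare @params
}
```

Splatting keeps the parameter set in one place, so tweaking encryption or enumeration mode for all shares is a one-line change.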

5) Recuperate old file server name (Optional)

After you have decommissioned the old file server you could use a cluster alias to keep the old file server UNC path. This has the drawback that you will fall back to connecting to the SMB shares via NTLM, as aliases don’t support Kerberos authentication. But there is another trick. Once you have gotten rid of the old server object in AD you can rename the clustered file server role. If you can do this you’ll be able to keep Kerberos for authentication.

So after you’ve gotten rid of the old server in Active Directory, go to the file server role. Select properties and rename it to recuperate the old file server name.

image

Now look at the resources tab. Right click and select the properties tab of “Server Name”. Rename the DNS Name. That will update the server name and the DNS record. This will cause the role to go down temporarily.

image

Right click and select the properties tab of “File Server”. Rename the UNC path to reflect the older file server name.

image For good measure and to test that everything works: stop and restart the cluster role, connect to the shares and voila, life should be good. Users can access the transparent failover file server like they used to do with the old non-clustered file server and they don’t sacrifice Kerberos to be able to do so!
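The same rename can be scripted with the failover clustering cmdlets. A sketch only; the role name "TF-FS-MIG" and the reclaimed name "OLDFS" are hypothetical stand-ins for your own:

```powershell
# Rename the file server role and its network name resource, then
# bounce the role so the DNS record and UNC path update
$role = Get-ClusterGroup -Name 'TF-FS-MIG'
$netName = $role | Get-ClusterResource |
    Where-Object { $_.ResourceType -eq 'Network Name' }

# Point the DNS name at the recuperated old file server name;
# the role goes down briefly while this applies
$netName | Set-ClusterParameter -Name DnsName -Value 'OLDFS'

$role | Stop-ClusterGroup
$role | Start-ClusterGroup
```

Do this in a maintenance window; as noted above, the role is temporarily offline while the name changes propagate.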

image

Conclusion

I hope you enjoyed the tips and pointers on migrating an old file server to a Windows Server 2012 R2 file share cluster. Remember that these tips apply to the various permutations of P2V, V2V as well as P2P migrations.

SMB 3, ODX, Windows Server 2012 R2 & Windows 8.1 perform magic in file sharing for both corporate & branch offices


SMB 3 for Transparent Failover File Shares

SMB 3 gives us lots of goodies and one of them is Transparent Failover which allows us to make file shares continuously available on a cluster. I have talked about this before in Transparent Failover & Node Fault Tolerance With SMB 2.2 Tested (yes, that was with the developer preview bits after BUILD 2011, I was hooked fast and early) and here Continuously Available File Shares Don’t Support Short File Names – "The request is not supported" & “CA failure – Failed to set continuously available property on a new or existing file share as Resume Key filter is not started.”

image

This is an awesome capability to have. This also made me decide to deploy Windows 8 and now 8.1 as the default client OS. The fact that maintenance (it’s the Resume Key filter that makes this possible) can now happen during day time and patches can be done via Cluster Aware Updating is such a win-win for everyone it’s a no brainer. Just do it. Even better, it’s continuous availability thanks to the Witness service!

When the node running the file share crashes, the clients will experience a somewhat long delay in responsiveness, but after about 10 seconds they continue where they left off once the role has resumed on the other node. Awesome! Learn more about this here: Continuously Available File Server: Under the Hood and SMB Transparent Failover – making file shares continuously available.
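On Windows Server 2012 (R2) you can quickly verify which shares actually have transparent failover turned on. A small check, assuming you run it on a cluster node; the share and scope names below are the hypothetical ones from earlier:

```powershell
# Which shares are continuously available, and under which scope?
Get-SmbShare |
    Select-Object Name, ScopeName, ContinuouslyAvailable |
    Format-Table -AutoSize

# Flip CA on for an existing clustered share if it was created without it
Set-SmbShare -Name 'TEST2' -ScopeName 'TF-FS-MIG' -ContinuouslyAvailable $true
```

Remember the earlier post referenced above: CA will refuse to enable on a share whose volume still has 8.3 short names, another reason step 1 matters.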

Windows Clients also benefits from ODX

But there is more: it’s SMB 3 & ODX that bring us even more goodness, offloading reads & writes to the SAN and saving CPU cycles and bandwidth. Especially in the case of branch offices this rocks. SMB 3 clients who copy data between file shares on Windows Server 2012 (R2) with storage on an ODX-capable SAN get the benefit that the transfer request is translated to ODX by the server, which gets a token that represents the data. This token is used by Windows to do the copying and is delivered to the storage array, which internally does all the heavy lifting and tells the client the job is done. No more reading data from disk, translating it into TCP/IP, moving it across the wire to reassemble it on the other side and writing it to disk.

image

To make ODX happen we need a decent SAN that supports this well. A DELL Compellent shines here. Next to that, you can’t have any filter drivers on the volumes that don’t support offloaded read and write. This means that we need to make sure that features like data deduplication support this, but also that 3rd party vendors for anti-virus and backup don’t ruin the party.

image

In the screenshot above you can see that Windows data deduplication supports ODX. And if you run antivirus on the host you have to make sure that the filter driver supports ODX. In our case McAfee Enterprise does. So we’re good. Do make sure to exclude the cluster related folders & subfolders from on-access scans and scheduled scans.

Do not run DFS Namespace servers on the cluster nodes. The DfsDriver does not support ODX!

image

The solution is easy, run your DFS Namespaces servers separate from your cluster hosts, somewhere else. That’s not a show stopper.
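To see for yourself which filter drivers sit on a volume and what they declare for offloaded IO, a sketch along these lines can help. Minifilters advertise ODX support via a SupportedFeatures registry value (3 meaning offloaded read & write); the exact set of keys present will vary per system:

```powershell
# Which filter drivers are attached to this volume?
fltmc instances -v C:

# What does each installed filter declare for ODX?
# SupportedFeatures = 3 means offloaded read + write are supported;
# 0 or absent on an attached filter blocks ODX on that volume.
Get-ChildItem HKLM:\SYSTEM\CurrentControlSet\Services |
    Where-Object { $_.GetValue('SupportedFeatures') -ne $null } |
    Select-Object PSChildName,
        @{ n = 'SupportedFeatures'; e = { $_.GetValue('SupportedFeatures') } }
```

If a copy quietly falls back to a normal network transfer, an attached filter without ODX support (like the DfsDriver above) is the first suspect.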

The user experience

What does it look like to a user? Totally normal, except for the speed at which the file copies happen.

Here’s me copying an ISO file from a file share on server A to a file share on server B from my Windows 8.1 workstation at the branch office in another city, 65 KM away from our data center and connected via a 200Mbps pipe (MPLS).

image

On average we get about 300 MB/s or 2.4 Gbps, which “over” a 200Mbps WAN is a kind of magic. I assure you that they’re not complaining and get used to this quite (too) fast Winking smile.

The IT Pro experience

Leveraging SMB 3 and ODX means we avoid that people consume tons of bandwidth over the WAN and make copying large data sets a lot faster. On top of that the CPU cycles and bandwidth on the server are conserved for other needs as well. All this while we can failover the cluster nodes without our business users being impacted. Continuous to high availability, speed, less bandwidth & CPU cycles needed. What’s not to like?

Pretty cool huh! These improvements help out a lot and we’ve paid for them via software assurance so why not leverage them? Light up your IT infrastructure and make it shine.

What’s stopping you?

So what are your plans to leverage your software assurance benefits? What’s stopping you? When I asked that I got a couple of answers:

  • I don’t have money for new hardware. Well, my SAN is also pre-Windows 2012 (DELL Compellent SC40 controllers). I just chose based on my own research, not on what VARs like to sell to get maximal kickbacks Winking smile. The servers I used are almost 4 years old but fully up to date DELL PowerEdge R710’s, recuperated from their duty as Hyper-V hosts. These servers easily last us 6 years and over time we collected some spare servers for parts or replacement after the support expires. DELL doesn’t take away your access to firmware & drivers like some do, and their servers aren’t artificially crippled in feature set.
  • Skills? Study, learn, test! I mean it, no excuse!
  • Bad support from ISVs and OEMs for recent Windows versions is holding you back? Buy other brands, vote with your money and do not accept their excuses. You pay them to deliver.

As IT professionals we must and we can deliver. This is only possible as the result of sustained effort & planning. All the labs, testing and studying help out when I’m designing and deploying solutions. As I take the entire stack into account in designs and we do our due diligence, I know it will work. Being active in the community also helps me know early on which vendors & products have issues and means we can avoid the “marchitecture” solutions that don’t deliver when deployed. You can achieve this as well, you just have to make it happen. That’s not too expensive or time consuming, at least a lot less than being stuck after you’ve spent your money.

Happy System Administrator Appreciation Day!


Yes, today is SysAdmin Day! http://sysadminday.com/  Why? Well, read the information on the link. But I’ll put a quote here:

Let’s face it, System Administrators get no respect 364 days a year. This is the day that all fellow System Administrators across the globe, will be showered with expensive sports cars and large piles of cash in appreciation of their diligent work. But seriously, we are asking for a nice token gift and some public acknowledgement. It’s the least you could do.

Consider all the daunting tasks and long hours (weekends too.) Let’s be honest, sometimes we don’t know our System Administrators as well as they know us. Remember this is one day to recognize your System Administrator for their workplace contributions and to promote professional excellence. Thank them for all the things they do for you and your business.

The fact that your business is running on modern hardware with SSDs, lots of RAM & cores, Windows 8.1 leveraging ODX, UNMAP, vRSS, Windows Server 2012 R2, SQL Server Availability Groups, ADFS 3.0, a great client network, modern servers, storage and 10Gbps has a reason. All this without breaking the budget or having VARs ravage it. Someone is watching over all this and making it materialize. This does not happen by accident or without effort.

So when the cost-cutting axe comes around, think about what they’ve achieved for you without breaking the bank and what an excellent position you’re in. Consider what you have and how you got it. Those sysadmins are not just there for your Flash plugin issues, printer toner or because you can’t configure a consumer device that’s supposed to liberate you from having to rely on the helpdesk. That modern IT infrastructure, that “stuff” you might think is a hobby in between fixing your “Internet” and installing “free productivity” tools, is a valuable asset. So don’t be a jerk and turn to meaningless “attaboys”, but reward ‘m if they deliver.


I Can’t Afford 10GBps For Hyper-V And Other Lies


You’re wrong

There, I said it. Sure you can. Don’t think you need to be a big data center to make this happen. You just need to think and work outside the box a bit, and when you’re not a large enterprise, that’s a bit easier to do. Don’t do it like a big name brand, traditionalist partner would do it (strip & refit the entire structural cabling in the server room, high end gear with big margins everywhere). You’re going for maximum results & value, not sales margins and bonuses.

I would even say you can’t afford to stay on 1Gbps much longer or you’ll be dealing with the fallout of being stuck in the past. Really, some of us are already looking at >10Gbps connections to the servers. You need to move off 1Gbps or you’ll be micro managing your way around issues, sucking all the fun out of your work with ever diminishing results and rising costs for both you and the business.

Give your Windows Server 2012 R2 Hyper-V environment the bandwidth it needs to shine and make the company some money. If all you want to do is spend as little money as possible, I’m not quite sure what your goal is. Either you need it or you don’t. I’m convinced we need it. So we must get it. Do what it takes. Let me show you one way to get what you need.

Sounds great what do I do?

Take heart, be brave and of good courage! Combine it with skills, knowledge & experience to deliver a 10Gbps infrastructure as part of ongoing maintenance & projects. I just have to emphasize that some skills are indeed needed, pure guts alone won’t do it.

First of all you need to realize that you do not need to rip and replace your existing network infrastructure. That’s very hard to get approval for, takes too much time and rapidly becomes very expensive in both dollars and effort. Also, to be honest, quite often you don’t have that kind of pull. I for one certainly do not. And if I tried to do it that way, it would take way too many meetings, diplomacy, politics, ITIL, ITML & Change Approval Board actions to make it happen. This adds to the cost even more, both in time and money. So leave what you have in place; for this exercise we assume it’s working fine, but you can’t afford to wait for many hours while a host drains in a 6-node cluster, and you need to drain all of them to add memory. So we have a need (OK, you’ll need a better business case than this, but don’t make too big a deal of it or you’ll draw unwanted attention) and we’ve taken away the fear factor of forklift-replacing the existing network, which is a big risk & cost.

So how do I go about it?

Start out as part of regular upgrades, replacements or new deployments. The money is there for those projects. Make sure to add some networking budget and leverage other projects’ needs to support the networking requirements.

Get a starter budget for a POC of some sort; it will get you started to acquire some more essential missing bits.

Buy reasonably cheap switches of reasonable port count that do all you need. If they’re readily available in a framework contract, great. You can get them as part of the normal procedures. But if you want to knock another 6% to 8% off the cost, order them directly from the vendor. Cut out the middle man.

Buy some gear as part of your normal refresh cycle. Adapt that cycle’s lifetime a bit to suit your needs where possible. Funding for operational maintenance & replacement should already be in place, right?

Negotiate hard with your vendor. Listen, just like in the storage world, the network world has arrived at a point where they’re not going to be making tons of money just because they are essential. They have lots of competition and it’s only increasing. There are deals to be made, and if you choose the right hardware it’s gear that won’t lock you into proprietary cabling, SFP+ modules and such. Or not too much anyway Smile.

Design options and choices

Small but effective

If you’re really on a minimal budget, just introduce redundant (independent) stand alone 10Gbps switches for the East-West traffic that only runs between the nodes in the data center: CSV, Live Migration, backup. You don’t even need to hook it up to the network for data traffic; you only need to be able to remotely manage it, and that’s what they invented Out Of Band (OOB) ports for. See also an old post of mine Introducing 10Gbps With A Dedicated CSV & Live Migration Network (Part 2/4). In the smallest, cheapest scenario I use just 2 independent switches. In the other scenario you build a 2 node spine and the leaves. In my examples I use DELL network gear. But use whatever works best for your needs and your environment. Just don’t go the “nobody ever got fired for buying XXX” route, that’s fear, not courage! Use cheaper NetGear switches if that fits your needs. Your call, see my recent blog post on this: 10Gbps Cheap & Without Risk In Even The Smallest Environments.

Medium sized excellence

First of all a disclaimer: medium sized isn’t a standardized way of measuring businesses and their IT needs. There will be large differences depending on your neck of the woods Smile.

Build your 10Gbps infrastructure the way you want it and aim it to grow to where it might evolve. Keep it simple and shallow. Go wide where you need to. Use the Spine/Leaf design as a basis, even if what you’re building is smaller than what it’s normally used for. Borrow the concept. All 10Gbps traffic will be moving within that Spine/Leaf setup. Only client-server traffic will be going outside of it, and that’s a small part of all traffic. This is how you get VM mobility and great network speeds in the server room while avoiding the existing core becoming a bandwidth bottleneck.

You might even consider doing Infiniband where the cost/Gbps is very attractive and it will serve you well for a long time. But it can be a hard sell as it’s “another technology”.

Don’t panic, you don’t need to buy a bunch of Nexus 7000’s or Force10 Z9000’s to do this in your moderately sized server room. In a medium sized environment I try to follow the “Spine/Leaf” concept even if it’s not true ECMP/CLOS; it’s the principle. For the spine, choose the switches that fit your size, environment & growth. I’ve used the Force10 S4810 with great success and you can negotiate hard on the price. The reasons I went for the higher priced Force10 S4810 are:

  • It’s the spine so I need best performance in that layer so that’s where I spend my money.
  • I wanted VLT, stacking is a big no no here. With VLT I can do firmware upgrades without down time.
  • It scales out reasonably by leveraging eVLT if ever needed.

For the ToR switches I normally go with the PowerConnect 81XX F series or the N40XXF series, which is the current model. These provide great value for money and I can negotiate hard on price here while still getting 10Gbps with the features I need. I don’t need VLT as we do switch independent NIC teaming with Windows. That gives me the best scalability with DVMQ & vRSS and allows for firmware upgrades without any network downtime in the rack. I do sacrifice true redundant LACP within the rack, but for the few times I might really need that I could go cross racks & still maintain a rack as a failure domain as the ToRs are redundant. I avoid stacking, it’s a single point of failure during firmware upgrades and I don’t like that. Sure, I could leverage the rack as a domain of failure to work around that, but that’s not very practical for ordinary routine maintenance. The N40XXF also gives me the DCB capabilities I need for SMB Direct.
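The host side of that switch independent choice can be sketched in a few lines of PowerShell on Windows Server 2012 R2. The adapter names "NIC1" and "NIC2" are hypothetical; use your own 10Gbps ports:

```powershell
# Switch independent team with dynamic load balancing: no VLT/LACP
# dependency on the ToR switches, so they can be upgraded one at a time
New-NetLbfoTeam -Name 'Team10G' -TeamMembers 'NIC1','NIC2' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# RSS spreads native traffic over CPU cores; check VMQ for Hyper-V workloads
Enable-NetAdapterRss -Name 'NIC1','NIC2'
Get-NetAdapterVmq | Format-Table Name, Enabled, BaseProcessorNumber, MaxProcessors
```

With this in place, either ToR switch can reboot for a firmware upgrade while the team keeps the host connected through the other one.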

Hook it up to the normal core switch of the existing network for just the client/server (North/South) traffic. I make sure that any VLANs used for CSV and live migration can’t even reach that part of the network. Even data traffic (between virtual machines, physical servers) goes East-West within your Spine/Leaf and never goes out anyway, unless you did something really weird and bad.

As said, you can scale out VLT using eVLT, which creates a port channel between 2 VLT domains. That’s nice. So in a medium sized business you’re pretty safe in growth. If you grow beyond this, we’ll be talking about a way larger deployment anyway and true ECMP/CLOS, and that’s not the scale I’m dealing with here. For most medium sized businesses, or small ones with bigger needs, this will do the job. An ECMP/CLOS Spine/Leaf actually requires layer 3 in the design and, as you might have noticed, I kind of avoid that. Again, to get to a good solution today instead of a really good solution next year, which won’t happen because really good is risky and expensive. Words they don’t like to hear above your pay grade.

The picture below is just for illustration of the concept. Basically I normally have only one VLT domain and two 10Gbps switches per rack. This gives me racks as failure domains and it allows me to forgo a lot of extra structural cabling work to neatly provide connectivity from the switches to the server racks. image

You have a scalable, capable & affordable 10Gbps or better infrastructure that will run any workload in style. After testing you simply start new deployments in the Spine/Leaf and slowly move over existing workloads. If you do all this as part of upgrades it won’t cause any downtime due to the network being renewed, just by upgrading or replacing current workloads.

The layer 3 core in the picture above is the uplink to your existing network and you don’t touch that. Just let it run until there’s nothing left in there and you can clean it up or take it out. Easy transition. The core can be left in place or replaced when needed due to age or capabilities.

To keep things extra affordable

While today the issues with (structural) 10Gbps copper CAT6A and NICs/Switches seem solved, when I started doing 10Gbps, fibre cabling or Copper Twinax Direct Attach was the only way to go. 10GBaseT wasn’t an option yet, and I still love the flexibility of fibre: it consumes less space and weighs less than CAT6A. Fibre also fits easily in existing cable infrastructure. Less hassle. But CAT6A will work fine today, no worries.

If you decide to do fibre, buy OM3; you can get decent, affordable cabling online. Order it as consumable supplies.

Spend some time on the internet and find the SFP+ modules that work with your switches to save a significant amount of money. Yup, some vendor switches work with compatible non-OEM branded SFP+ modules. Order them as consumable supplies, but buy some first to TEST! Save money but do it smart, don’t be silly.

For patch cabling, 10Gbps Copper Twinax Direct Attach works great for short ranges and isn’t expensive, but the length is limited and the cables get thicker, sturdier and thus more unwieldy with length. It does have its place and I use them where appropriate.

Isn’t this dangerous?

Nope. Technology wise it is perfectly sound and nothing new. Project wise it delivers results: fast, effective and without breaking the bank. Functionally you now have all the bandwidth you need to stop worrying and micromanaging stuff to work around those pesky bandwidth issues, and you can focus on better ways of doing things. You’ve given yourself options & possibilities. Yay!

Perhaps the approach to achieve this isn’t very conventional? I disagree. Look, anyone who’s been running projects & delivering results knows the world isn’t that black and white. We’ve been doing 10Gbps this way for 4 years now with (repeated) great success, while others have to wait for the 1Gbps structural cabling to be replaced some day in the future … probably by 10Gbps copper in a 100Gbps world by the time it happens. You have to get the job done. Do you want results, improvements, progress and success, or just to avoid risk and cover your ass? Well then, choose & just make it happen. Remember, the business demands everything at the speed of light, delivered yesterday at no cost with 99.999% uptime. So this approach is what they want, albeit perhaps not what they say.