
Thinking out loud: Why do server vendors still struggle with driver and firmware management?

History:

Let me give you a little back story before digging into the meat of this post. My team and I make a very concerted effort to keep our servers' firmware and drivers updated. We've gone so far as to purchase software from Dell, implement a process for how firmware and drivers are to be updated, and ensure that it's routinely done every quarter. We do this because in general it's a best practice, but also because we've run into too many occasions where troubleshooting with a vendor stops (if it ever starts) very quickly if the drivers / firmware aren't recent. In essence, we're doing our best to be diligent and proactive about keeping our servers healthy, secure and updated.

Late last year we ran into two issues, both of which are related to drivers / firmware.

  1. A Broadcom NIC causing a purple screen of death (PSOD) in ESXi. This was a server that was freshly rebuilt, with all drivers (or so we thought) and firmware updated. It turns out the driver we were running was more than two years old, and the PSOD we were hitting was a resolved issue in a newer driver.
  2. An Intel x710 quad-port 10Gb NIC causing packets to black-hole for certain VMs on the same VLAN. Again, these were new hosts that were patched, firmware updated and, in theory, up to date. This issue is what really triggered us to start evaluating other server vendors and their solutions.

 

Of those two issues above, only one was solved with a simple driver update; the other we just gave up on and switched to a different NIC (x520).

The Issue:

If you can't see where this post is going already, let me lay it out: server vendors still can't properly manage their own drivers, firmware and vendor-specific software. I know what you're thinking: you have tools that the vendor provided, you're using them and you're fine. I hate to be the bearer of bad news, but I doubt it. We just finished a rigorous evaluation of Dell, HP and Cisco, and none of them have a complete solution. Don't get me wrong, some of them are getting there, but no one has the problem solved. If you're wondering what the specific problems are, see the bullets below.

  • Server vendors would like you to keep your firmware, drivers and tools up to date.
  • OS vendors (and sometimes server vendors) require that the driver and firmware be a certified pairing. It is NOT good enough to simply have the latest driver and the latest firmware. This of course may vary slightly depending on the server and OS vendor; VMware, as an example, absolutely requires a specific driver and firmware pairing.
    • This driver and firmware pairing is typically worked out between the server and OS vendor.
    • VMware has a strict HCL for this use case, and TMK, MS has an HCL as well, although they're a little more forgiving when it comes to the pairing. I can't speak to Linux, Solaris or other OSes.

 

Think about this: when was the last time you did the following?

  • Retrieved an inventory of all your hardware firmware revisions and driver revisions.
    • Do you even know how to do this? It's probably not as easy as you think.
  • Logged into your OS vendor's HCL and, one hardware item at a time, checked whether you are running the latest driver and firmware, and that the pairing is also certified.
    • With VMware, you can use the device vendor ID, device ID and sub-vendor ID to find the specific hardware in question on their HCL. Just remember they're hex values (see the sketch just below for one way to pull them).
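
To give a sense of what that inventory step actually looks like, here's a minimal PowerCLI sketch (the vCenter name is illustrative) that pulls each host's NIC driver and firmware versions via esxcli:

    # Minimal sketch: per-host NIC driver and firmware inventory via PowerCLI
    Connect-VIServer vcenter.example.com          # illustrative vCenter name

    foreach ($vmhost in Get-VMHost) {
        $esxcli = Get-EsxCli -VMHost $vmhost -V2
        foreach ($nic in $esxcli.network.nic.list.Invoke()) {
            $info = $esxcli.network.nic.get.Invoke(@{nicname = $nic.Name})
            [pscustomobject]@{
                Host            = $vmhost.Name
                Nic             = $nic.Name
                Driver          = $info.DriverInfo.Driver
                DriverVersion   = $info.DriverInfo.Version
                FirmwareVersion = $info.DriverInfo.FirmwareVersion
            }
        }
    }
    # The hex vendor / device / sub-vendor IDs needed for the HCL lookup can be
    # pulled the same way from $esxcli.hardware.pci.list.Invoke().

That only covers NICs, of course; HBAs, BIOS and the rest usually mean falling back to each vendor's own tooling, which is exactly the gap this post is about.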

I bet you're relying on one or more of the following.

  • VMware Update Manager and vendor-provided depots (if they exist).
  • Vendor-supplied firmware management solutions.
    • Some may have driver management for select OSes, but no one does it all.
  • Vendor custom ISOs / install discs.

I suspect that if you go and check the VMware HCL, you're out of compliance in one way or another, or something is woefully out of date.

Solution:

Let’s share a pipe for a second and dream about what it should look like.

 

Server Vendor:

  • It should be a central console.
  • It should handle downloading all firmware, drivers and vendor specific tools.
  • It should use a concept of baselines, a baseline being defined as follows (a rough sketch of what one might look like follows after this list).
    • OS and server model specific.
    • Based on a release date. The baseline should define an approved pairing of drivers, firmware and vendor tools for a given month, quarter or however often the server vendor feels the need to establish a new baseline.
    • The baseline should support the concept of cumulative updates / hotfixes.
  • It should support grouping servers, and applying baselines to the server groups.
  • It should support compliance checking against the baselines, not simply deploying the drivers and firmware and assuming everything is ok. This would let you know if an admin went rogue and manually updated or downgraded firmware, drivers or tools.
  • It should support rolling back drivers, firmware or tools if it is determined to be too far ahead.
  • Provide very verbose information as to why an update process failed.
  • Bonus points
    • Support a multi-site architecture. Meaning be able to have a cached copy of the repo and a local server to perform the actual update and auditing process.
    • Auto discover servers
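
To make the baseline idea a little more concrete, here's a rough, entirely hypothetical sketch of what one could look like if a vendor exposed it as a simple JSON document (every model, version and field name below is made up for illustration):

    {
      "baseline": "2017-Q1",
      "serverModel": "PowerEdge R630",
      "os": "ESXi 6.0 U2",
      "released": "2017-01-15",
      "supersedes": "2016-Q4",
      "components": [
        { "device": "Intel X710", "firmware": "5.05", "driver": "i40e 1.4.28" },
        { "device": "PERC H730", "firmware": "25.5.0.0018", "driver": "lsi_mr3 6.910.18.00" }
      ],
      "hotfixes": []
    }

Compliance checking then just becomes diffing what's actually on each host in a group against the baseline that group is assigned to.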

OS Vendor:

  • Should provide a comprehensive API
    • Look up hardware and driver pairings.
    • Being able to download the driver or firmware directly would be nice.

Conclusion:

Coming back to reality a bit, what can you do? Use the tools you have to the best of your ability, script what you can, and manually deal with the gaps. That said, I'm working on the auditing part of the problem, at least for VMware. I hope to have a blog post about it and a new GitHub commit in the coming month or so, so stay tuned.

Oh, one final thing you can do: start bugging your server vendor's sales team about the issue. If enough people raise it, it will get the attention it desperately needs.

Thinking out loud: What HP + Nimble means to me

Disclaimer:

These are opinions, not facts and these opinions are mine, not my employers.

Introduction:

Upon receiving the news that HP was to acquire Nimble, I can't say I was exactly thrilled. Nothing personal against HP, they make great servers, but I like Nimble the way it is right now. Nonetheless, I know the industry is moving in a direction where it's either get big or get out. There is a huge storm "cloud" looming, and if you're an on-premises solution, it's going to be a scary time in the coming years.

I was thinking about what some of the pros and cons of the HP acquisition would be, and this is what I've come up with.

Pros:

  1. HP is a big company and an established one at that. We'll focus on the pros of being big/established in this section.
    1. HP will likely have an easier time pitching Nimble into companies that would not have given them a second look. HP is established, so there's a perception that Nimble is established. This leads to better market penetration. Better market penetration means Nimble makes HP more money, and if Nimble makes HP more money, HP invests more into Nimble. Hopefully the circle of money keeps snowballing and we all win.
      1. HP is a worldwide company, and while Nimble has done a great job so far, HP is going to take them into more countries faster than they could on their own. If you've had difficulty getting Nimble equipment purchased "in country", I can see this getting easier long term.
    2. Obviously HP has more capital at their disposal than Nimble did. If invested correctly, I could see this accelerating Nimble’s innovation.
    3. HP has more purchasing power than Nimble does; this could lead to Nimble's margins being better, which in turn may lead to a more affordable product (or more profit for HP).
  2. Look, we all know why tech companies pick Supermicro, and it's not quality, it's affordability. HP makes some pretty kick-ass hardware, so if we were to see Nimble's hardware platform change from Supermicro to HP equipment, not only would my datacenter look a little sexier, I wouldn't cut my fingers trying to rack Nimble anymore.
  3. If you’re a current HP customer, I can see two nice integration points.
    1. Infosight for other HP solutions.
    2. Nimble integration into OneView.

Cons:

  1. As mentioned in the pros, HP is a large, established company. While this in itself can have some pros, it also has the potential for a number of cons.
    1. Big companies tend to move slowly, with bureaucracy and over-analysis being the suspect causes. Nimble had far fewer hoops to jump through before making a decision. Just remember, deadlines and accomplishments drift a day at a time. Days become weeks, weeks become months, and you get the picture.
    2. Every company is profit conscious, but some larger companies will kill any sliver of waste, even at the cost of productivity or customer satisfaction. I’m not saying it will happen with HP, only that it could.
    3. While HP will open a lot of new doors for Nimble, it has the potential to close a lot of existing ones too. There are a lot of companies that have had bad experiences with HP and this may be enough for them to drop Nimble.  That said, being realistic, it seems one way or another, you’re going to be purchasing storage from some big vendor, and it may not be the same as the one you purchase servers from.
    4. If HP tries to assimilate Nimble into their ways, I can see this being bad for Nimble customers. Nimble, for example, has a great support experience. If HP tries to force Nimble to adopt their triage and support structure, that would be a quick way to devalue Nimble. There are other things too, like getting stuck speaking with a generic HP sales rep and sales engineer instead of having direct access to a Nimble SE and a Nimble sales person, or other typical large-company sales and support processes.
  2. HP hasn't made a great name for themselves of late. We know they've split the company in half and sold off a lot of assets. It's hard to say whether it was too little too late, or the right move made just in time. Regardless, HP to me is a company walking a fine line between being a falling giant and one that's getting back on its feet. If HP goes down, Nimble goes with it, and that's not good for Nimble customers.
  3. HP isn't exactly synonymous with innovation, at least not anymore. I fear that HP has the potential to choke the life out of Nimble. In my opinion, 3PAR was a great storage solution. Part of me wonders, if HP couldn't make that work, what makes them think Nimble will be any different? Meaning, are they going to turn Nimble into the next EqualLogic?

Other thoughts:

I think deep down everyone knew Nimble wanted to get bought. Me personally, I was REALLY hoping Cisco was going to buy Nimble. In my opinion, Cisco + Nimble would go together like peanut butter and fluff. HP already has a storage company that's flailing; I don't want Nimble to follow suit. People like to remind me about Whiptail and how bad that was. I look at that as a rash move on Cisco's part (the solution was doomed to fail), but Nimble would be a pick that no one could blame Cisco for. Best of all, Cisco doesn't have any competing products (other than HyperFlex, but that's a different type of solution). This would have led to a much stronger and more united focus on pushing Nimble. From Nimble's view, it would have solidified them as being established (opening those closed doors), and for Cisco, it would have given them a proven storage startup that's on fire. Honestly, if I were Cisco's CEO, I would be doing everything I could to steal the deal from HP. If it were a matter of HP vs. Dell vs. Cisco, and Cisco was the one with Nimble, IMO Cisco would crush the other two like a ten-ton hammer.

Conclusion:

This is obviously all speculation at this point, just thinking out loud.  I hope all the pros of what I pointed out occur with the acquisition and none of the cons.  I wish both vendors the best of luck, and until proven otherwise, I’m still a diehard Nimble (HP) fan.

Thinking out loud: Hyper-converged storage's missing link

Introduction:

In general, I'm not a huge fan of hyper-converged infrastructure. To me, it's more "hype" than substance at the moment. It was born out of web-scale infrastructure like Google, Facebook, etc., and IMO that is still the area where it's better suited. The only enterprise layer where I see HCI being a good fit is VDI; other than that, almost every other enterprise workload would be better suited on new-school shared storage. I could probably go into a ton of reasons why I personally see shared storage still being the preferred architecture for enterprises, but instead I'll focus on one area that, if adopted, might change my view (slightly). You see, there is a balance between the best and good enough. Shared storage IMO is the best, but HCI could be good enough.

What’s missing?

What is the missing link (pun intended)? IMO, it's external / independent DAS. Can't see where this is going? Follow along on why I think external DAS would make hyper-converged storage good enough for almost anyone's environment.

Scaling deep: Right now the average server tops out at 24 2.5" drives, and fewer for 3.5" drives. In a lot of larger shops, that would mean running more hosts in order to meet your storage requirements, and that comes at the cost of paying for more CPU, memory and licensing than you should have to. Just imagine a typical 1RU R630 + a 2U 60-drive JBOD! That's a lot more storage than you can fit inside a single host, and it would only consume one more rack unit than a typical R730. Add to this, theoretically speaking, the number of drives you could add to a single host could go well beyond a single JBOD: a quad-port SAS HBA could have four 60-drive enclosures attached, and that's just a single HBA. (Some rough math on that follows below.)
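
To put rough numbers on that (a back-of-the-napkin sketch in PowerShell; the bay and port counts are simply the examples above):

    # Back-of-the-napkin drive-count math: internal bays vs. external DAS
    $internalBays = 24        # typical 2.5" server chassis
    $jbodBays     = 60        # example 60-bay JBOD
    $hbaPorts     = 4         # quad-port SAS HBA
    "Internal only:               $internalBays drives per host"
    "One host + one JBOD:         $($internalBays + $jbodBays) drives"
    "One quad-port HBA maxed out: $($hbaPorts * $jbodBays) drives (before counting internal bays)"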

Storage independence: Having the storage outside the server also makes that storage infinitely more flexible. This is true even when you're building vendor-homogeneous solutions. Take Dell for example: typically speaking, their enclosures are movable between different server generations. Currently, with the storage stuck in the chassis, it gets really messy (support-wise), and in many cases isn't doable, to move the storage from one chassis to another, especially if you're talking about going from an older-generation server to a newer one.

Adding to this, depending on your confidence level, white boxing also allows you to cut the server vendor out of the costliest part of the solution, which is the disks themselves. Go with an enclosure from someone like RAID Inc., DataON, Quanta QCT, Seagate, etc., add in a generic LSI (sorry Avago, oh sorry again, Broadcom) HBA, and now you have a solution that is likely good enough supportability-wise. JBODs tend to be pretty dumb and reliable, which just leaves the LSI card (a well-known, established vendor) and your SSDs / HDDs.

Why do you want to move the storage anyway? Simple: I'd bet a nice steak dinner that you'll want to upgrade or replace your compute long before you need to replace your storage. If you're simply replacing your compute (not adding a new node but swapping it), then moving a SAS card + DAS is far more efficient than re-buying the storage, or moving the internal storage into a new host (remember, warranty gets messy). Simply vacate the host like you would with internal storage, shut down, pull the HBA, swap the server, put the existing HBA back in, done.

If you're adding a new host, depending on your storage, you may have the option of buying another enclosure and spreading the disks you have evenly across all hosts again. So if, for example, you had 50 disks in each of 4 hosts (200 disks total) and you add a fifth host, one option could be to simply remove 10 disks from each current node and place them in the new node. Your only additional cost was the JBOD enclosure, and you continue to get value out of your current investment in disks (with flash, that would be the expensive part).

Mix and match 3.5" / 2.5" drives: Right now with internal storage, you are either running a 3.5" chassis, which doesn't hold a lot of drives but CAN support 2.5" drives with a sled, or you are running a 2.5" chassis, which guarantees no 3.5" drives. External DAS could mean one of two options:

  1. Use a denser 3.5” JBOD (say 60 disks) and use 2.5” sleds when you need to.
  2. Use one JBOD for 3.5” drives and a different one for 2.5” drives.

Again it comes down to flexibility.

Performance upgrades: Now this is a big "it depends". Hypothetically, if there were no SW-imposed bottlenecks (which there are), one of your likely bottlenecks (with all flash at least) is going to be either how many drives you have per SAS lane, or how many drives you have per SAS card. For example, if your SAS card is PCIe 3.0 internally but the PCIe bus is 4.0, there's a chance you could upgrade your server to a newer / better storage controller card. More so, even if you were stuck on PCIe 3.0 (as an example), there would be nothing stopping you from slicing your JBOD in half and using two HBAs to double your throughput. Before you even go there, yes, I do know the R730xd has an option for two RAID cards, glad you brought that up. Guess what: with external DAS, you're only limited by your budget, the number of PCIe slots you have and the constraints of your HCI vendor. I, for example, could have 4 SAS cards and 2 JBODs, each partially filled and sliced in half. You don't have that flexibility with internal storage.

In the case of white boxing your storage, this also means that, to the extent of the HCL, you can run what you want. So if you want to use all Intel DC3700s, you can. Heck, they're even starting to make JBOF (just a bunch of flash) enclosures for NVMe, which again would be REALLY fast.

Conclusion:

I say external DAS support is the missing link because it's what would allow HCI to offer scaling flexibility similar to what exists with SAN/NAS. I still think the HCI industry is at least 3 to 5 years out from matching the performance, scalability and features we've come to expect from enterprise storage, but external storage support would knock a big hole in one of SAN/NAS's biggest remaining advantages: scalability.

Thinking out loud: The cloud (IaaS) delusion

Introduction:

Just so we're all being honest here, I'm not going to sit here and lie about how I'm not biased and I'm looking at both sides 100% objectively. I mean, I'm going to try to, but I have a slant towards on prem, and a lot of that is based on my experience and research with IaaS solutions as they exist now. My view of course is subject to change as technology advances (as anyone's should), and I think with enough time IaaS will get to a point where it's a no-brainer, but I don't think that time is here yet for the masses. Additionally, I think it's worth noting that in general, like any technology, I'm a fan of what makes my life easier, what's better for my employer, and what's financially sound. In many cases cloud fits those requirements, and I currently run and have run cloud solutions (long before it was trendy). I'm not anti-cloud, I'm anti throwing money away, which is mostly what IaaS amounts to.

Where is this stemming from? After working with Azure for the past month, and reading about why I'm a cranky old SysAdmin for not wanting to move my datacenter to the cloud, I wanted to speak up on why, on the contrary, I think you're a fool if you do. Don't get me wrong, I think there are perfectly valid reasons to use IaaS; there are things that don't make sense to do in house. But running a primary (and at times a DR) datacenter in the cloud is just wasting money and limiting your company's capabilities. Let's dig into why…

Basic IaaS History:

Let’s start with a little history as I know it on how IaaS was initially used, and IMO, this is still the best fit for IaaS.

I need more power… Ok, I’m done, you can have it back.

There are companies out there (not mine) that do all kinds of crazy calculations, data crunching and other compute-intensive operations. They needed huge amounts of compute capacity for relatively short periods of time (or at least that was the ideal setup). Meaning, they were striving to get the work done as fast as possible, and for argument's sake, let's just say their process scaled linearly as they added compute nodes. There was only so much time, so much power, so much cooling, and so much budget to be able to house all these physical servers for solving what is in essence one big complex math equation. What they were left with was a balancing act of buying as much compute as they could manage without being excessively wasteful. After all, if they purchased so much compute that they could solve the problem in a minimal amount of time, then unless they kept those servers busy, once the problem was solved it was a waste of capital. About 10 years ago (taking a rough guess here), AWS released this awesome product capable of renting compute by the hour, offering what's basically unlimited amounts of CPU / GPU power. Now all of a sudden a company that would have had to operate a massive datacenter had a new option of renting mass amounts of compute by the hour. This company could fire up as many compute nodes as it could afford, and not only could it solve the problem quicker, it only had to pay for the time it used.

I want to scale my web platform on demand…. and then shrink it, and then scale it, and then shrink it.

It evolved further: if it's affordable for mass scale-up and scale-down for folks that fold genomes or trend the stock market, why not for running things like next-generation web-scale architectures? Sort of a similar principle, except that you run everything in the cloud. To make it affordable and scalable, they designed their web infrastructure so that it could scale out, and scale on demand. Again, we're not talking about a few massive database servers and a few massive web servers; we're talking about tons of smaller web infrastructure components, all broken out into smaller, independently scalable pieces. Again the cloud model worked brilliantly here, because it was built on the premise that you design small nodes, scale them out on demand as load increases, and destroy nodes as demand dwindles. You could never have this level of dynamic capacity affordably on prem.

I want a datacenter for my remote office, but I don’t need a full server, let alone multiples for redundancy.

At this stage IaaS is working great for the DNA crunchers and your favorite web-scale company, and all the while it's getting more and more development time, more functionality, and finally gaining the attention of more folks with different use cases. I'm talking about folks that are sick of waiting on their SysAdmins to deploy test servers, folks that needed a handful of servers in a remote location, folks that only needed a handful of small servers in general and didn't need a big expensive SAN or server. Again, it worked mostly well for these folks. They saved money by not needing to manage 20 small datacenters, or they were able to test their code on demand and on the platform they wanted, and things were good.

The delusion begins…

Fast forward to now, and everyone thinks that if the cloud worked for the genome folders, the web-scale companies and finally for small datacenter replacements, then it must also be great for my relatively static, large, legacy enterprise environment. At least that's what every cloud-peddling vendor and blogger would have you believe, and thus the cloud delusion was born.

Why do I call it the cloud delusion? Simple: your enterprise architecture is likely NOT getting the same degree of wins that these types of companies were/are getting out of IaaS.

Let's break down the wins that the cloud offered and offers you. In essence, if this is functionality that you need, then the cloud MAY make sense for you.

  1. Scale on demand: Do you find yourself frequently needing to scale servers by the hundreds every day, week or even month? Shucks, I'll even give you some leeway and ask if you're adding multiple hundreds of servers every year. In turn, are you finding that you are also destroying said servers in this quantity? We're trying to find out if you really need the dynamic scale-on-demand advantage that the cloud brings over your on prem solution.
  2. Programmatic infrastructure: Now I want to be very clear with this from the start: while on prem may not be as advanced as IaaS, infrastructure is mostly programmatic on prem too, so weigh this pro carefully. Do you find that you hate using a GUI to manage your infrastructure, or need something that can be highly repeatable and fully configurable via a few JSON files and a few scripts? I mean, really think about that. How many of you right now are just drowning because you haven't automated your infrastructure, and are currently head first into automating every single task you do? If so, the cloud may be a good fit, because practically everything can be done via a script and some config files (see the sketch after this list). If, however, you're still running through a GUI, or using a handful of simple scripts, and really have no intention of doing everything through a JSON file / script, it's likely that IaaS isn't offering you a big win here. Even if you are, you have to question whether your on prem solution offers similar capabilities, and if so, what win the cloud provider offers that your on prem does not.
  3. Supplementing infrastructure personnel: Do you find your infrastructure folks are holding you back? If only they didn't have to waste time on all that low-level stuff like managing hypervisors, SANs, switches, firewalls and other solutions, they'd have so much free time to do other things. I'm talking about things like patching firmware, racking / unracking equipment, installing hypervisors, provisioning switch ports. We're talking about all of this consuming a considerable portion of your infrastructure team's time. If they're not spending that much time on this stuff (and chances are very high that they're not), then this is not going to be a big win for you. Again, companies that would have teams busy with this stuff all the time probably have problem number 1 that I identified. I'd also like to add that even if this is an issue you have, there is still a limited amount of gain you'll get out of it. You're still going to need to provision storage, networking and compute, but now instead of in the HW, it will simply be transferred to a CLI / GUI. Mostly the same problem, just a different interface. Again, unless you plan to solve this problem ALONG with problem 2, it's not going to be a huge win.
  4. VMs on demand for all: Do you plan on giving all your folks (developers, DBAs, QA, etc.) access to your portal to deploy VMs? IaaS has an awesome on-demand capability that's easy to delegate to folks. If you're needing something like this, without having to worry about them killing your production workload, then IaaS might be great for you. Don't get me wrong, we can do this on prem too, but there's a bit more work and planning involved. Then again, letting anyone deploy as much as they want can be an equally expensive proposition. Also, let's not forget problem number 2: chances are pretty high your folks need some pre-setup tasks performed, and unless you've got that problem figured out, VMs on demand probably isn't going to work well anywhere, let alone in the cloud.
  5. At least 95% of your infrastructure is going to the cloud: While the number may seem arbitrary (and to some degree it is a guess), you need a critical mass of some sort for it to make financial sense to send your infrastructure to the cloud (if you're not fixing a point problem). What good is it to send 70% of your infrastructure to the cloud if you have to keep 30% on prem? You're still dealing with all the on prem issues, but now your economies of scale are reduced. If you can't move the lion's share of your infrastructure to the cloud, then what's the point in moving random parts of it? I'm not saying don't move certain workloads to the cloud. For example, if you have a mission-critical web site, but everything else is ok to have an outage for, then move that component to the cloud. However, if most of your infrastructure needs five 9's, and you can only move 70% of it, then you're still stuck supporting five 9's on prem, so again, what's the point?
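
For reference on the "programmatic infrastructure" point above, here's a minimal sketch of what that looks like on the Azure side with the AzureRM PowerShell module of that era; the resource group, location and file names are made up for illustration:

    # A minimal "infrastructure from a JSON file and a script" sketch (AzureRM module)
    Login-AzureRmAccount                                              # interactive sign-in

    New-AzureRmResourceGroup -Name "rg-app01" -Location "East US"     # illustrative names

    # Deploy whatever the ARM template describes (VMs, NICs, storage accounts, etc.)
    New-AzureRmResourceGroupDeployment -ResourceGroupName "rg-app01" `
        -TemplateFile ".\app01.template.json" `
        -TemplateParameterFile ".\app01.parameters.json"

The counterpoint, as noted above, is that most on prem stacks can be driven the same way (PowerCLI, vendor APIs, etc.), so the win is only as big as your appetite for doing everything through scripts and config files.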

Disclaimer:  Extreme amounts of snark are coming, be prepared.

Ok, ok, maybe you don't need any of these features, but you've got money to burn, you want them just in case you might use them at some point, everyone else is "going cloud" so why not you, or whatever other reason you might come up with for why the cloud is the best decision. What's the big deal? I mean, you're probably thinking you lose nothing but gain all kinds of great things. Well, that my friend is where you'd be wrong. Now, my talking points are coming from my short experience with Azure, so I can't say they apply to all clouds.

  1. No matter what, you still need on prem infrastructure. Maybe it's not a horde of servers, but you'll need stuff.
    1. Networking isn't going anywhere (should have been a network engineer). Maybe you won't have as many datacenter switches to contend with (and you shouldn't have a lot if your infrastructure is modern and not greater than a few thousand VMs), but you'll still need access switches for your staff. You're going to need VPNs and routers. Oh, and NOW you're going to need a MUCH bigger router and firewall (err… more expensive). All that data you were accessing locally now has to go across the WAN; if you're encrypting that data, that's going to take more horsepower, and that means bigger, badder WAN networking.
    2. You’re probably still going to have some form of servers on site.  In a windows shop that will be at least a few domain controllers, you’ll also have file server caching appliances, and possibly other WAN acceleration devices depending on what apps you’re running in the cloud.
    3. Well, you've got this super-critical networking and file caching HW in place, so you need to make sure it stays on. That potentially leads back to UPSs at a minimum, and maybe even a generator. Then again, being fair, if the power is out, perhaps it's out for your desktops too, so no one is working anyway. That's a call you need to make.
    4. Is your phone system moving to the cloud too?  No… guess you’re going to need to maintain servers and other proprietary equipment for that too.
    5. How about application "x"? Can you move it to the cloud, and will it even run in the cloud? It's based on Windows 2003, and Azure doesn't support Windows 2003. What are application "x"'s dependencies, and how will they affect the application if they're in the cloud? That might mean more servers staying on prem.
  2. They told you it would be cheaper, right? I mean, the cloud saves you on so much infrastructure, so much personnel power, and it provides all this unlimited flexibility and scalability that you don't actually need.
    1. Every VM you build now actually has a hard cost. Sorry, but there's no such thing as "over provisioning" in the cloud. Your cloud provider gets to milk that benefit out of you and make a nice profit. Yeah, I can run a hundred small VMs on a single host; in the cloud, I'd pay for each of those same VMs individually. But hey, it's cheaper in the cloud, or so the cloud providers have told me.
    2. Well, at least the storage is cheaper, except that to get decent performance in the cloud, you need to run on premium storage, and premium storage isn't cheap (and not really all that premium either). You don't get to enjoy the nice low latency, high IOPS, high throughput, adaptive caching (or all flash) that your on prem SAN provided. And if you want to try and match what you can get on prem, you'll need to over-provision your storage and do crazy in-guest disk striping techniques.
    3. What about your networking? I mean, what is one of the most expensive recurring networking costs to a business? The WAN links… well, they just got A LOT more expensive. So on top of now spending more capex on a router and firewall, you also need to pump more money into the WAN link so your users have a good experience. Then again, they'll never have the same sub-millisecond latency that they had when the app was local to them.
      1. No problem, you say, I'll just move my desktops to the cloud. And then you remember that the latency still exists; it's just been moved from between the client and the application to between the user and the client. Not really sure which is worse.
        1. Even if you're not deterred by this, now you're incurring the costs of running your desktops in the cloud. You know, for the folks you force five-year-old (or older) desktops on.
    4. How many IPs or how many NICs does your VM have? I hope it's one and one. You see, there are limitations (in Azure) of one IP per NIC, and in order to run multiple NICs per server, you need a larger VM. Ouch…
    5. I hope you weren't thinking you'd run exactly 8 vCPUs and 8GB of vRAM because that's all your server needs. Sorry, that's not the way the cloud works. You can have any size VM you want, as long as it's one of the sizes your cloud provider offers. So you may end up paying for a VM that has 8 vCPUs and 64GB of RAM because that's the closest fit. But wait, there's more… what if you don't need a ton of CPU or RAM, but you have a ton of data, say a file server? Sorry, again, the cloud provider only enables a certain number of disks per vCPU, so you now need to bump up your VM size to support the disk capacity you need.
    6. At least with cloud, everything will be easy. I mean, yeah, it might cost more, but oh… the simplicity of it all. Yep, because having a year-2005 limitation of 1TB disks just makes everything easy. Hope you're really good with dynamic disks, Windows Storage Spaces, or LVM (Linux), because you're going to need them. Also, I hope you have everything pre-thought-out if you plan to stripe disks in guest; MS has the most unforgiving disk striping capabilities if you don't.
    7. Snapshots, they at least have snapshots… right? Well, sort of, except it's totally convoluted, and probably not something you'd ever want to implement for fear of wrecking your VM (which is what you were trying to avoid with the snap, right?).
    8. Ok, ok, well how about dynamically resizing your VMs? They can at least do that, right? Yes, sort of, so long as you're sizing up within a specific VM class. Otherwise, TMK, you have to rebuild once you outgrow a given VM class. For example, the "D series" can be scaled until you reach the maximum for the "D". You can't easily convert it to a "G" series in a few clicks to continue growing it.
    9. Changes are quick and non-disruptive, right? LOL, sure, with any other hypervisor they might be, but this is the cloud (Azure), and from what I can see, it's iffy whether your VMs will avoid a shutdown, and even worse, if you do something that is supported hot, you may see longer-than-normal stuns.
    10. Ever need to troubleshoot something in the console? Me too; a shame, because Azure doesn't let you access the console.
    11. Well, at least they have a GUI for everything, right? Nope, I found I need to drop into PS more often than not. Want to resize that premium storage disk? That's gonna take a PowerShell cmdlet (see the sketch after this list). That's good though, right? I mean, you like wasting time finding the disk GUID and digging into a CLI just to resize one disk, which BTW is a powered-off operation. WIN!
    12. You like being in control of maintenance windows right?  Of course you do, but with cloud you don’t get a say.
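
For context on the disk-resize gripe above, this is roughly what it looked like with the AzureRM PowerShell module of that era. It's a hedged sketch (unmanaged OS disk, made-up resource group and VM names), but it illustrates the point: the VM has to be deallocated first.

    # Rough sketch: growing a VM's OS disk with the AzureRM module (illustrative names)
    $rg = "rg-app01"; $vmName = "vm-app01"

    Stop-AzureRmVM -ResourceGroupName $rg -Name $vmName -Force   # resize is an offline operation
    $vm = Get-AzureRmVM -ResourceGroupName $rg -Name $vmName
    $vm.StorageProfile.OsDisk.DiskSizeGB = 512                   # new size in GB
    Update-AzureRmVM -ResourceGroupName $rg -VM $vm
    Start-AzureRmVM -ResourceGroupName $rg -Name $vmName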

I could keep going on, but honestly, I think you get the point. There are caveats in spades when switching to the cloud as a primary (or even DR) datacenter. It's not a simple case of paying more for features you don't need: you lose flexibility and performance, and you pay more for it too.

Alright, but what about all those bad things they say about on prem, or things like TCO they're trying to woo you to the cloud with? Well, let's dig into it a bit.

  1. Despite what “they” tell you, they’re likely out of touch.  Most of the cloud folks you’re dealing with, have been chewing their own dog food so long, they don’t have a clue about what exists in the on prem world, let alone dealing with your infrastructure and all its nuances.  They might convince you they’re infrastructure experts, but only THEIR infrastructure, not yours and certainly not on prem in general.  Believe me, most of them have been in their bubble for half a decade at least, and we all know how fast things change in technology, they’re new school in cloud, but a dinosaur in on prem.  Don’t misunderstand me, I’m not saying they’re not smart, I’m saying I doubt they have the on prem knowledge you do, and if you’re smart, you’ll educate yourself in cloud so you’re prepared to evaluate if IaaS really is a good fit for you and your employer.
  2. Going cloud is NOT like virtualization. With virtualization you didn't change the app, you didn't lose control, and more importantly it actually saved you money and DID provide more flexibility, scalability and simplicity. Cloud does not guarantee any of those for a traditional infrastructure. Or rather, it may offer different benefits that are not as equally needed.
  3. They'll tell you the TCO for cloud is better, and they MAY be right if you're doing foolish things like the following.
    1. Leasing servers and swapping them every three years. A total waste of money. There are very few good reasons not to finance a server (capex) and re-purpose that server through a proper lifecycle. Five years should be the minimum life cycle for a modern server. You have DR and other things you can use older HW for.
    2. You're not maxing out the cores in your server to get the most out of your licensing costs, reduce network connectivity costs, and also reduce power, cooling and rack space. An average dual-socket 18-core server can run 150 average VMs without breaking a sweat.
    3. Your threshold for a maxed-out cluster is too low. There's nothing wrong with a 10:1 or even a 15:1 vCPU-to-pCPU ratio so long as your performance is ok (see the rough math after this list). Your mileage may vary, but be honest with yourself before buying more servers based on arbitrary numbers like these.
    4. You take advice from a greedy VAR. Do yourself a favor and just hire a smart person that knows infrastructure. They'll be cheaper than all the money you waste on a VAR, or on cloud. You should be pushing for someone that is borderline architect, if not an architect.
      1. FYI, I'm not saying all VARs are greedy, but more are than not. I can't tell you how many interviews I've had where I go "yeah, you got upsold".
    5. Stop with this BS of only buying “EMC” or “Cisco” or “Juniper” or whatever your arbitrary preferred vendor is. Choose the solution based on price, reliability, performance, scalability and simplicity, not by its name.  I picked Nimble when NetApp would have been an easy, but expensive choice.  Again, see point 4 about getting the right person on staff.
    6. Total datacenter costs (power, UPS, generator and cooling) are worth considering, but are often not as expensive as the providers would have you think. If this is the only cost-savings point they have you sold on, you should consider colocation first, which takes care of some of that, but also incurs some of the same costs/caveats that come with cloud (though not nearly as many). Again, I personally think this is FUD, and in a lot of cases IT departments, let alone businesses, don't even see the bill for all of this. Even things like DC space: if you're using newer equipment, the rack density you get through virtualization is astounding.
    7. You’re not shopping your solution, ever.  I know folks that just love to go out to lunch (takes one to know one), and their VAR’s and vendors are happy to oblige.  If your management team isn’t pushing back on price, and lets you run around throwing PO’s like monopoly money, there’s a good chance you’re paying more for something than you need to.
    8. You suck at your job, or you've hired the wrong person. Sounds a little harsh, but again, going back to point 4: if you have the right people on staff, you'll get the right solutions, they'll be implemented better, and they'll be implemented quicker. Cloud, by the way, only fixes certain aspects of this problem.
  4. They'll tell you you can't do it better than them, that they scale better, and that it would cost you millions to get to their level. They're right: they can build and run a datacenter hosting 100,000 VMs better than you or I. But you don't need that scale, and more importantly, they're not passing those economies of scale on to you. That's their profit margin. Remember, they're not doing this to save you money, they're doing this to make money. In your case, if your DC is small enough (but not too small), you can probably do it MUCH cheaper than what you'd pay for in a cloud, and it will likely run much better.
  5. They'll tell you you'll be getting rid of a SysAdmin or two thanks to cloud. Total BS… An average SysAdmin (contrary to marketing slides) does not spend a ton of time on the mundane tasks of racking HW, patching hypervisors (unless it's Microsoft :-)), etc. They spend most of their time managing the OS layer, doing deployments, etc., all of which BTW still needs to be done in the cloud.
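
To put rough numbers behind the consolidation points above (a back-of-the-napkin sketch; the per-VM sizing is just an assumed average):

    # Back-of-the-napkin consolidation math for a dual-socket 18-core host
    $cores        = 2 * 18     # physical cores per host
    $vcpuRatio    = 10         # conservative 10:1 vCPU-to-pCPU target
    $avgVcpuPerVm = 2          # assumed average VM size
    $vcpuBudget   = $cores * $vcpuRatio
    $vmsPerHost   = [math]::Floor($vcpuBudget / $avgVcpuPerVm)
    "vCPU budget per host: $vcpuBudget"
    "Rough VM count per host at $avgVcpuPerVm vCPU each: $vmsPerHost"

At a 10:1 ratio that works out to roughly 180 two-vCPU VMs per host, which is why the 150-VM figure above isn't a stretch.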

For now, that's all I've got. I wrote this because I was so tired of hearing folks spew pro-cloud dogma from their mouths without even having a simplistic understanding of what it takes to run infrastructure in the cloud or on prem. Maybe I am the cranky mainframe guy, and maybe I'm the one who is delusional and wrong. I'm not saying the cloud doesn't have its place, and I'm not even saying that IaaS won't be the home of my DC in ten years. What I am saying is that right now, at this point in time, I see moving to the cloud as a big, expensive mistake if your goal is simply to replace your on prem DC. If you're truly being strategic with what you're using IaaS for, and there are pain points that are difficult to solve on prem, then by all means go for it. Just don't tell me that IaaS is ready for the general masses, because IMO, it has a long way to go yet.