Thinking out loud: VMware, this is what I want from you

Warning:

This post is clocking in at 6k words.  If you are looking for a quick read, this isn't for you.

Disclaimer:

Typical stuff, these are my personal views, not views of my employers.  These are not facts, merely opinions and random thoughts I’m writing down.

Introduction:

I don't know about all of you, but for me, VMware has been an uninspiring company over the last couple of years.  VMworld was a time when I used to get excited.  It used to mean big new features were coming, and the platform would evolve in nice big steps.  However, over the last 5 – 7 years, VMware has gotten progressively more disappointing.  My disappointment, however, is not limited to the products alone; it extends to the company culture as well.

This post will not follow a review format like many of you are used to seeing, but instead, will be more of a pointed list of the areas I feel need improvement.

With that in mind, let it go on the record that, in my not so humble opinion, VMware is still the best damn virtualization solution.  I bring these points up not to say that the product / company sucks, but rather to outline that in many ways, VMware has lost its mojo, and IMO some of these areas would be good steps toward recovering it.

The products:

The death of ESXi:

You know, there are a lot of folks out there that want to say the hypervisor is a commodity.  Typically, those folks are either pitching or have switched to a non-VMware hypervisor.  To me, they're suffering from Stockholm syndrome.  Here's the deal: ESXi kicks so much ass as a hypervisor.  If you try to compare Hyper-V, KVM, Xen or anything else to VMware's full featured ESXi, there is no competition.  I don't give a crap about anything you will try to point out, you're wrong, plain and simple.  Any argument you make will get shot down in a pile of flames.  Even if you come at me with "product x is free," I'm still going to shoot you down.

With that out of the way, it's no wonder that everyone is chanting the hypervisor commodity myth.  I mean, let's be real here, what BIG innovation has been released to the general ESXi platform without some upcharge?  You can't count vSAN because that's a separate "product" (more on the quotes later).  vVOLs you say?  Yeah, that's a nice feature, only took how long?

So, what else?  How about the lack of trickle down and the elimination of Enterprise edition?  There was a time in VMware's history when features trickled down from Enterprise Plus > Enterprise > Standard.  It usually occurred each year, so by the time year three rolled around, that one feature in Enterprise Plus you were waiting for finally got gifted to Standard edition.  The last feature I recall this happening to was the MPIO provider support, and that was ONLY so they could support vVOLs on Standard edition (TMK).

Here is my view on this subject: VMware is making the myth of a commoditized hypervisor a self-fulfilling prophecy.  Not only is there a complete lack of innovation, but there's no trickle down occurring.

If you, as a customer, have gone from receiving regular (significant) improvements as part of your maintenance agreement to basically nothing year over year, why would you want to continue to invest in that product?  Believe me, the thought has crossed my mind more than once.

From what I understand, VMware's new business plan is to make "products" like vSAN that depend on ESXi, but that aren't included with the ESXi purchase.  Thus, a new revenue stream for VMware and renewed dependence on ESXi.  First glance says it's working, at least sort of, but is it really doing as well as it could?  It sounds like a great business model if you're just looking at whether you're in the black or the red, but what about the softer side of things?  What is the customer perception of moving innovations to an a la carte model?  For me, I wonder: if they took the approach below, would it have had the same revenue impact they were looking for, while at the same time also enabling a more positive customer perception?  I think so…

  1. First and foremost, VMware needs to make money. I know I just went through that whole diatribe above, but hear me out.  This whole “per socket” model is dead.  It’s just not a sustainable licensing model for anyone.  Microsoft started with SQL and has finally moved Windows to a per core model.  In my opinion, VMware needs to evolve its licensing model in two directions.
    1. Per VM: There are cases where you're running monster VMs, and while you're certainly taking advantage of VMware's features, you're not getting anywhere near the same value add as someone who's running 20, 30, 50, 100 VMs per host.  Allowing customers to allocate per VM licenses to a single host or an entire cluster would be a fair model for those that aren't using virtualization for the overcommit, but for the flexibility.
    2. Per Core: I know this is probably the one I'm going to get the most grief over, but let's be real, YOU KNOW it's fair.  Let's just pretend VMware wasn't the evil company that Microsoft is, and actually let you license as few as 2 cores at a time.  For all of you VARs that have to support small businesses, or for all of you smaller businesses out there, how much more likely would you have been to do a full blown ESXi implementation for your clients?  Let's just say VMware charged $165 per core for ESXi Standard edition and your client had a quad core server.  Would you think $659 would be a reasonable price?  I get that number simply by taking VMware's list price and dividing by 8 cores, which is exactly how Microsoft arrived at their trade-ins for SQL and Windows.  NOW, let's also say you're a larger company like mine and you're running Enterprise Plus.  The new 48 core server I'm looking at would normally cost $11,238 at list for Enterprise Plus.  However, if we take my new per core model, that server would now cost ($703 per core) $33,714.  That's approximately $22k that VMware is losing out on for just ONE server (a quick back-of-the-envelope version of this math follows this list).  I know what you're thinking: Eric, why in the world would you want to pay more?  I don't, but I also don't want a company that makes a kick ass product to stagnate, or worse, crumble.  I've invested in a platform, and I want that platform to evolve.  In order for VMware to evolve, it needs capital.
  2. Ok, now that we have the above out of the way, I want a hell of a lot more out of VMware for that kind of cash, so let’s dig into that.
    1. vSAN should have never been a separate product. Including vSAN in that per core or per VM cost, just like they do with Horizon, would add value to the platform.  Let's be real, not everyone is going to use every feature of VMware.  I'm personally not a fan of vSAN, but that doesn't mean I shouldn't be entitled to it.  This could easily be something that is split between Standard and Enterprise Plus editions.
      1. Yes, that also means the distributed switch would trickle down into Standard edition, which should have happened by now.
    2. Similar to vSAN, NSX should really be the new distributed switch. I’m not sure exactly how to split it across the editions, but I think some form of NSX should be included with Standard, and the whole darn thing for Enterprise Plus.
    3. At this stage, I think it’s about time for Standard edition to really become the edition of the 80%. Meaning, 80% of the companies would have their needs met by Standard edition, and Enterprise plus is truly reserved for those that need the big bells and whistles.  A few notable things I would like to trickle down to Standard Edition are as follows.
      1. DRS (Storage and Host)
      2. Distributed Switch (as pointed out in 2ai)
      3. SIOC and NIOC
      4. NVIDIA Grid
  3. As for Enterprise Plus and Enterprise Plus with Ops Manager, those two should merge and be sold at the same price as Enterprise Plus. I would also like to see some more of the automation aspects from the cloud suite brought into the Enterprise Plus edition as well.  I kind of view Enterprise Plus as the edition that focuses on all the automation goodies that smaller companies don't need.
  4. IMO, selling vCenter as a separate SKU is just silly. So as part of all of this, I would like to see vCenter simply included with your per core or per VM licenses.  At the end of the day, a host can only be connected to one vCenter at a time anyway.
  5. Include a Log Insight license with every ESXi host sold, strictly used for collecting and managing a host's VMware logs, including those of the VMs running on top of it. I don't mean logs inside the guest OS, rather things like the vmware.log as an example.
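To put the per core math from the list above into something runnable, here's a quick back-of-the-envelope sketch in PowerShell.  The list price and the "spread a socket license across 8 cores" rule come straight from this post; the assumption that the 48 core box is a 2-socket server is mine, so treat the output as illustrative only, not official pricing.

    # Back-of-the-envelope per-socket vs. per-core comparison (illustrative numbers only)
    $listTwoSocket = 11238                  # quoted Enterprise Plus list price for the 48 core server (assumed 2 sockets)
    $perCore       = $listTwoSocket / 16    # the post's method: divide each socket license across 8 cores (~$703/core)
    $perCoreTotal  = $perCore * 48          # the same box priced per core
    $delta         = $perCoreTotal - $listTwoSocket

    "Per socket: `$$listTwoSocket   Per core: `$$perCoreTotal   Difference: `$$delta"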

Evolving the features:

vCenter changes:

I know I was a little tough on VMware in the intro, and while I still stand behind my assertion about their lack of innovation, what they've done with the VCSA is pretty kick ass.  I would say it's long overdue, but at least it's finally here.  That said, there's still a ton of things VMware could be doing better with vCenter.

  1. If you have ever tried to set up a simplistic but secure profile for some self-service VM management, you know that it's a nightmare. 99% of that problem is attributable to VMware's very shitty ACL scheme.  The way permission entitlements work is confusing, conflicting, and ultimately leads to more access being granted just so you can get things to work.  It shouldn't be this difficult to set up a small resource pool, a dedicated datastore and a dedicated network, and yet it is.  I would love to see VMware duplicate the way Microsoft handles ACLs, because to be 100% honest, they've nailed it.
  2. In general, the above point wouldn't even be an issue if VMware would just create a multi-tenancy ability. I'm not talking about wanting a "private cloud".  This isn't a desire for more automation or the like, simply a built-in way to securely carve up logical resources and allocate them to others.  I would LOVE to have an easy way for my Dev, QA and DBAs to all have access to discrete buckets of resources.
  3. So, I generally hate web clients, and nothing reinforced that more than VMware. Don't get me wrong, web clients can be great, but the vSphere web client is not.  Here is what I would like to see, if you're going to cram a web client down my throat.
    1. Finish the HTML5 client before ripping the C# client away from us. The flash client is terrible.
    2. Whoever did the UI design for the C# client mostly got it right the first time. The web client should duplicate the aspects of the C# client that worked well, things like the right click menu, the color schemes and icons.  I have no problem with seeing a UI evolve over time, but us old heads like things where they were.  The web clients feel like developers just moved shit around for no reason.  The manage vs. monitor tab gets a big thumbs up from me, but it's after that where it starts to fall apart.  Simple things like the storage paths, which used to be a simple right click on the datastore, have moved to who knows where.  Take a lesson from Windows 8 and 10, because those UIs are a disaster.  Moving shit around for the sake of moving it around is just wrong.  Apple's OS X UI is the right way to progress change.
  4. The whole PSC + vCenter integration feels half assed if you ask me. I think a lot of admins have no clue why these roles should be separate or how to properly admin the PSCs, and if shit breaks, good luck.  It was like one day you only had vCenter, and the next thing you know, there's this SSO thing that who knows what about, and then the PSC pops out of nowhere.  It wasn't a gradual migration, rather this huge burst of changes to authentication, permissions and certificate management.  I would say there's a better understanding of the PSCs at this point, but it wasn't executed in a good way.  Ultimately though, I still think the PSCs need some TLC.  Here are a few things I'd like to see.
    1. You guys need to make vCenter and the like smart enough to not need a load balancer in front of the PSCs. When vCenter joins a PSC domain, it should become aware of all PSCs that exist, and have automated failover.
    2. There should be PowerCLI for managing the PSCs, and I mean EVERYTHING about them. Even the stuff you might only run for troubleshooting.
    3. There should be a really friendly UI that walks you through a few scenarios.
      1. Removing a PSC cleanly.
      2. Removing orphaned PSCs or other components (like vCenter).
      3. Putting a PSC into maintenance mode. (which means a maintenance mode should exist)
      4. Troubleshooting replication.
        1. Show the status
        2. Let us force a replication
      5. Rolling back / restoring items, like users or certs.
      6. Re-linking a vCenter that’s orphaned, or even transferring a vCenter persona to a new vCenter environment.
      7. How about some really good health monitors? As in like single API / PowerCLI command type of stuff.
      8. Generating an overall status report.
  5. Update Manager, while an awesome feature, hasn't seen much love over the years, and what I'd really like to see is as follows.
    1. Let me remove an individual update, and provide an option to delete the patch on disk, or simply remove the update from the DB.
    2. Scan the local repo for orphaned patches (think in the above scenario where someone deletes a patch from update manager, without removing it from the file system).
    3. Add the dynamic baseline ability to all classifications of updates, not just patches themselves. Right now, we can't create a dynamic extensions baseline.
    4. Give me PowerCLI admin abilities. I'd love to be able to use PowerCLI to do all the things I can do in the GUI.  Anything from uploading a patch, to creating baselines.
    5. Open the product up, so that vendors could integrate firmware remediation abilities.
    6. Have an ability to check the VMware HCL for updated VIBs, that are certified to work with the current firmware we’re running. This would make managing drivers in ESXi so much easier.
    7. Offer a query derived baseline. Meaning let us use things like a SQL query to determine what a baseline should be.
    8. Check if a VIB is applicable before installing it, or at least have an option for that check. Things like, "hey, you don't have this NIC, so you don't need this driver".  I've seen drivers that had nothing to do with the HW I had actually cause outages when installed.
  6. There are still so many things that can't be administered using PowerCLI, at least not without digging into extension data or using methods (see the short sketch after this list). Keep building the portfolio of cmdlets.  I want to be able to do everything in PowerCLI that I can in the GUI.  Starting with the admin stuff, but also on top of that, doing vCenter type tasks like repointing or other troubleshooting tasks.
  7. How about overhauled host profiles?
    1. Provide a Microsoft GPO like function. Basically, present me a template that shows "not configured" for everything and explains what the default setting is.  Then let me choose whatever values are supported and apply that vCenter wide, datacenter wide, folder / cluster wide or host specific.
      1. Similar feature for VM settings.
      2. Support the concept of inheritance, blocking and overrides.
    2. Let me create a host independent profile, and perhaps support the concept of sub-profiles for cases where we have different hosts. Basically, let me start with a blank canvas and enable what I want to control through the profile.
  8. Let us manage ESXi local users / groups and permissions from vCenter itself. In fact, having the ability to automatically create local users / groups via a GPO like policy would be great.
  9. I had an issue where a 3rd party plugin kept crashing my entire vSphere web client. Why in the world can a single plugin crash my soon to be only admin interface?  That's a very bad design.  Protect the admin interface; if you have to kill something, kill the plugins, and honestly, I'd much rather see you simply kill the troublesome plugin.  Adding to that, actually have some meaningful troubleshooting abilities for plugins.  Like "hey, I needed more memory, and there wasn't enough".
  10. vCenter should serve as a proxy for all ESXi access. Meaning if I want to upload an ISO, or connect to a VM's console, proxy those connections through vCenter.  This allows me to keep ESXi more secure, while still allowing developers and other folks to have basic access to our VMware environment.
  11. Despite its maturity, I think vMotion and DRS need some love too.
    1. Resource pools basically get ripped apart during maintenance mode evacuations or when moving VMs (if you're not careful). VMware should develop a wizard similar to what's done when you move storage.  That is, default to leaving a VM in its resource pool when we switch hosts, but ask if we'd like to switch it to a different resource pool.
    2. I would love to see a setting or settings where we can influence DRS decisions a bit more in a heavily loaded cluster. For example, I've personally had vCenter move VMs to hosts that didn't have enough physical memory to back the allocated memory, and guess what happened?  Ballooning like a kid's birthday party.  Allow us to have a tick box or something that prevents VMs from moving to hosts that don't have enough physical memory to back the allocated + overhead memory of the VMs.
    3. Would love to see fault zones added to compute. For example, maybe I want my anti-affinity rules to not only be host aware, but fault zone aware as well.
      1. Have a concept of dynamic fault zones based on host values / parameters. For example, the rack that a host happens to run in.
    4. Show me WHY you moved my VM’s around in the vMotion history.
  12. How about a mobile app for basic administration and troubleshooting? I shouldn't need a third party to make that happen.  And for the record, I know you have one; I want it to be good though.  I shouldn't need to add servers manually, just let me point at my vCenter(s) and bring everything in.
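To illustrate the "digging into extension data or using methods" point from item 6 above, here's a minimal sketch of the pattern.  The vCenter and host names are placeholders, and the properties shown are only examples of dot-walking the raw API objects when a friendly cmdlet doesn't cover what you need.

    # Minimal sketch of the ExtensionData fallback pattern (placeholder names)
    Connect-VIServer -Server 'vcenter.example.com'
    $vmhost = Get-VMHost -Name 'esx01.example.com'

    # ExtensionData exposes the underlying HostSystem managed object, so anything
    # without a dedicated cmdlet ends up being read straight off the API objects:
    $vmhost.ExtensionData.Summary.Runtime.BootTime     # last boot time of the host
    $vmhost.ExtensionData.Config.Product.Build         # ESXi build number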

SDRS, vVOLS, vSAN and storage in general:

If I had to pick a weak spot of VMware's, it would be storage.  It's not that it's bad, it's just that it seems slow to evolve.  I get it, it's super critical to your environment, but in the same tone, it's super critical to my environment, and that means I need them to keep up with demand.  Here are some examples.

  1. Add support for tape drives, and I mean GOOD support / GOOD performance. This way my tape server can finally be virtualized too without the need to do things like remote iSCSI, or SR-IOV.  I know what some of you might be thinking, tape is dead.  Wish it were true, but it's not.  What I really want to see VMware do is have some sort of library certification process, and then enable the ability to present a physical library as a virtual one to my VM.  Either that, or related to that, let me do things like raw device mappings of tape drives.  Give me something like a virtual SAS or fibre channel card that can do a raw mapping of a tape library.  Even cooler would be letting those libraries be part of a switch, and enabling vMotion too.
  2. I still continue to sweat bullets about the amount of open storage I have on a given host, or at least when purchasing new hosts. It's 2017, a period of time where data has been growing at incredible rates, and the default ESXi is still tuned for 32TB of open storage?  I know that sounds like a lot, but it really isn't.  To make matters worse, the tuning parameters to enable more open storage (VMDKs on VMFS) are buried in an advanced setting and not documented very well (a PowerCLI sketch for checking the relevant setting follows this list).  If the memory requirements are negligible, ESXi should be tuned for the max open storage it can support.  Beyond that, VMware should throw a warning if the amount of open storage exceeds the configured storage pointer cache.  Why bury something so critical and make an admin dig through log messages to know what's going on (after the fact mind you)?
    1. Related to the above, why is ESXi even limited to 128TB (pointer cache)? Don't get me wrong, it's a lot of storage, but it's not exactly a wow factor.  A PB of open storage would be a more reasonable maximum IMO.  If it's a matter of consuming more memory (and not performance), make that an admin choice.
  3. RDM’s via local RAID should be a generally supported ability. I know it CAN work in some cases, but it’s not a generally supported configuration.  There are times where an RDM makes sense, and local RAID could very much be one of those cases.  I should be able to carve up vDisks and present them to a VM directly.
  4. How about better USB disk support? It's more of a small business need, but a need nonetheless.  In fact, being even more generic, removable disks in general.
  5. Why in the world is removing a disk/LUN still such an involved task? There should literally be a right click, delete disk, and then the whole workflow kicks off in the background.  Needing to launch PowerCLI and do an unmount, then a detach, is just a PITA.  There shouldn't even need to be an order of operations.  I mean, in Windows I can just rip the disk out and no issues occur (presuming nothing's on the disk of course).  I don't mind VMware making some noise about a disk being removed, but then make it an easy process to say "yeah, that disk is dead, whack it from your memory".
  6. Pretty much everything on my vSAN / what's missing in HCI posts has gone unimplemented in vSAN. You can check those out here and here.  That said, they have added a few things like parity and compression / dedupe, but that's nothing in the grand scheme of things.
    1. What I really wished vSAN was / is, is a non-hyperconverged storage solution. As in, I wish I could install vSAN as a standalone solution on storage, and use it as a generic SAN for anything, without needing to share it with compute.  Hedvig storage has the right idea.  Don't know what I'm talking about?  Go check them out here.  Just imagine what vSAN could do with all that potential CPU power if it didn't have to hold itself back for the sake of the VMs.  And yes, THIS would be worthy of a separate product SKU.
  7. SDRS:
    1. I wish VMware would let you create fault zones with SDRS. This way when I create VM anti-affinity rules and specify different fault zones, I'd sleep better at night knowing my two domain controllers weren't running on the same SAN, BUT that they could move wherever they needed to.
    2. It would be really great to see SDRS gain the ability to balance VMs across ANY storage type, and have its use expanded to local storage as well. For example, I would love to see vVOLs have SDRS in front of them.  So my VMs could still float from SAN to SAN, even if they're vVOLs.  For the local storage bit, what if I have a few generic local non-SAN LUNs?  I could still see there being value in pooling that storage from an automation standpoint.
    3. I would love to see a DRS integration for non-shared storage DRS. I know it would be REALLY expensive to move VM’s around.  But in the case of things like web servers, where shared storage isn’t needed, and vSAN just adds complexity, I could see this being a huge win.  If nothing else, it would make putting a host into maintenance mode a lot easier.
    4. Let me have affinity rules in the Standard edition of VMware. This way I can at least be warned that I have two VMs commingling on the same host that shouldn't be.
  8. vFlash (or whatever it’s called)
    1. It would be nice to see VMware actually continue to innovate this. For example.
      1. Support for multiple flash drives per host and LARGE flash drives per host.
      2. Cache a data store instead of a single VM. This way the cache is used more efficiently.  Or make it part of a storage policy / profile.
      3. Do away with static capacity amounts per VMDK. In essence offer a dynamic cache ability based on the frequency of the data access patterns.
      4. I would also suggest write caching, but let’s get decent read caching first.
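For the open storage point in item 2 above, here's a minimal PowerCLI sketch of checking and raising the advanced setting I believe is involved.  I'm assuming the setting is VMFS3.MaxAddressableSpaceTB (the pointer block cache limit, 32TB by default and capped at 128TB on the builds I've touched), so verify against VMware's documentation for your version before changing anything.  The host name is a placeholder.

    # Check the pointer block cache limit (defaults to 32TB on the builds I've seen)
    $vmhost = Get-VMHost -Name 'esx01.example.com'
    Get-AdvancedSetting -Entity $vmhost -Name 'VMFS3.MaxAddressableSpaceTB'

    # Raise it toward the 128TB ceiling mentioned above (check the memory cost first)
    Get-AdvancedSetting -Entity $vmhost -Name 'VMFS3.MaxAddressableSpaceTB' |
        Set-AdvancedSetting -Value 128 -Confirm:$false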

ESXi itself:

The largest stagnation in the platform has been ESXi itself.  You can't count vSAN or NSX if you're going to sell them as separate products.  Here are some areas I would like to see improved.

  • I would love to see the installation wizard ask more questions early on, so that when they’re all answered, my host is closer to being provisioned. I understand that’s what the host deploy is for, but that’s likely overkill for a lot of customers.
    • ASK me for my network settings and verify they work.
    • ASK me if I want to join vCenter and if so, where I want the host located
    • ASK me if I want to provision this host straight to a distributed switch so I don’t need to go through the hassle of migrating to one later.
  • Let the free edition be joined to vCenter. This way we can at least move a VM (shut down) from one host to another, and also be able to keep the hosts updated.  I could see a great use case for this if developers want / need dedicated hosts, but we need to keep them patched.  I'm not asking you to do anything other than let us patch them, move VMs, and be able to monitor the basic health of the host.  Keep all the other limits in place.
  • Give us an option to NEVER overcommit memory. I’d rather see a VM fail to power on, not migrate or anything if it’s going to risk memory swapping / ballooning.
  • Make reservations an actual "reservation". If I say I want the whole VM's memory reserved, pre-reserve the whole memory space for that VM, regardless of whether the VM is using it.
  • Support for virtualizing other types of HW, like SSL offload cards and presenting them to VMs. I suspect this would also involve support from the card vendors of course, but it would still be a useful thing to see.  For example, SSL offloading in our virtual F5’s.
  • I want to see EVERYTHING that can be done in ESXCLI and other troubleshooting / config tools also be available in PowerCLI.
  • Have a pre-canned command I can run to report on all hardware, its drivers, firmware and modules (see the rough Get-EsxCli sketch after this list).
  • I think it would be kind of slick to run ESXi as a container. Perhaps I want to carve up a single physical ESXi host, into a couple of smaller ESXi hosts and use the same license.  Again, developers would be a potentially great use case for this.
  • I would like to see an ability to export and import an ESXi image to another physical server. A simple use case would be migrating an ESXi installation from one physical server to another.  Maybe even have a wizard for remapping resources such as the NICs and the log location.  I'm not talking about a host backup, more like a host migration wizard.
  • Actually, get ESXi joining to an Active Directory working reliably.
  • How about showing us active NFC connections, how much memory they’re consuming and the last time they were used. While we’re at it, how about supporting MORE NFC connections.
  • Create a new VMkernel traffic type for NFC and cold migration traffic, with a related friendly name.
  • Help us detect performance issues more easily with esxtop. Meaning, if there are particular metrics that have crossed well known thresholds, maybe raise an event or something in the logs.  Related, perhaps offer a GUI (or PowerCLI) option for creating / scheduling an ESXTOP trace and storing the results in a CSV.
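As a stopgap for the "pre-canned hardware report" bullet above, here's a rough sketch of what you can already pull through Get-EsxCli today.  Firmware versions are conspicuously absent, which is exactly the gap I'm complaining about.  The host name is a placeholder.

    # Rough inventory of what esxcli already exposes via PowerCLI (no firmware, sadly)
    $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name 'esx01.example.com') -V2

    $esxcli.hardware.platform.get.Invoke()          # model / vendor / serial number
    $esxcli.software.vib.list.Invoke() |
        Select-Object Name, Version, Vendor         # installed VIBs (drivers included)
    $esxcli.system.module.list.Invoke()             # loaded kernel modules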

Evolving the company:

Documentation:

Look, almost everyone hates being stuck with documenting things, or at least I do.  However, it’s something that everyone relies on, and when done well, it’s very useful.   I get that VMware is large and complex, so I have to imagine documentation is a tough job.  Still, I think they need to do better at it.  Here is what I see that’s not working well.

  • KB articles aren’t kept up to date as new ESXi versions are released. Is that limitation still applicable?  I don’t know, the documentation doesn’t tell me.
  • There is a lack of examples on changing a particular setting. For example, they may show a native ESXCLI method, while completely leaving out PowerCLI and the GUI.
  • There is a profound lack of good documentation on designing and tuning ESXi for more extreme situations. Things like dealing with very large VMs, designing for high IOPS or high throughput, large memory and vCPU VMs.  I don't know, maybe the thought is you should engage professional services (or buy a book), but that seems like overkill to me.
  • Tuning and optimizing for specific application workloads. For example, Microsoft Clustering on top of VMware.  Yeah, they have a doc, but no, it's not good.  Most of their testing is under best case scenarios: small VMs, minimal load, empty ESXi servers, etc.  It's time for VMware to start building documentation based on reality.  Using a lazy excuse like "everyone's environment is different" doesn't absolve them from even attempting more realistic simulations.  For example, I would love to see them test a 24 vCPU, 384GB of vRAM VM with other similarly sized VMs on the same host, under some decent load.  I think they'd find vMotion causes a lot of headaches at that scale.
  • Related to the above, I find their documentation a little untrustworthy when they say "x" is supported. Supported in what way?  Is vMotion not supposed to cause a failover, or do you simply mean the vMotion operation will complete?  Even then, there are SO many conflicting sub-notes it's just confusing to know which restrictions exist and which don't.  It's almost like the writer doesn't understand the application they're documenting.

Support:

If there is one thing that has taken a complete downward spiral, it's support.  It's like the VMware execs basically decided customers don't need good support and decided to outsource it to the cheapest entity out there.  Let me be perfectly clear, VMware support sucks, big time, and I'm talking about production support just to be clear.  Sure, I occasionally get in touch with someone that knows the product well, communicates clearly, and actually responds within a reasonable time, but that's a rarity.  Here are just a few examples of areas where they drop the ball.

  • Many times, they don’t contact you within your time zone. Meaning, if I work 9 – 5 and I’m EST, I might get a call at 5 or 6, or an email at 4am.
  • Instead of coordinating a time with you, they just randomly call and hope you're there, otherwise it's "hey, get back to me when you're ready", which is followed by another 24-hour delay (typically). Sometimes attempting to coordinate a time with them works, other times it doesn't.
  • I have seen plenty of times where they say they’ll get back to you the next day, and a week or more goes by.
  • Re-opening cases has led to me needing to work with a completely different tech. A tech that didn't bother reading the former case notes, or contacting the original owner to get the back story.  In essence, I might as well have opened a completely new case.
  • Communication is hit or miss. Sometimes, they communicate well, other times, there’s a huge breakdown.  It’s not so much understanding words, but an inability to understand tone, the severity of the situation, or other related factors.
  • Not being trained in products that have been out for months. I remember when I called about some issues with a PSC appliance 6 MONTHS after vSphere 6 was released, and the tech didn't have a clue how the PSCs worked.  I had to explain the basics to him; it was a miserable experience.
  • Not having a desire to actually figure out an issue, or really solve a problem. It's like they read from a book, and if the answer isn't there, they don't know how to think beyond that.

While we're still on the support topic, this whole notion of business critical and mission critical support is a little messed up.  I guess VMware basically wants us to fund the salary of an entire TAM or something like that, which is bluntly stupid.  It doesn't matter if I'm a company with one socket of Enterprise Plus, or a company with 100 sockets, we apparently all pay the same price.  I don't entirely have a problem with paying a little extra to get access to better support, but then it should be something that's an upgrade to my production support per socket, not a flat fee.  Again, it should be based around fair consumption.

Sales:

You know when I hear from my sales team?  When they want to sell me something.  They don't call to check in and see if I'm happy.  They're not calling to go over the latest features included with products I own to make sure I'm maximizing value; none of that happens.  All that kind of stuff is reactive at best.  It's ME reaching out to learn about something new, or ME reaching out to let them know support is really dropping the ball.  I spend a TON of money on VMware; I'd like to see some better customer service out of my reps.  I have vendors that reach out to me all the time, just to make sure things are going ok.  A little effort like that goes a long way in keeping a relationship healthy.

Website:

I want to pull my hair out with your website.  Finding things is so tough, because your marketing team is so obsessed with big stupid graphics, and trying to shove everything and anything down my throat.  You’re a company that sells lean and mean software, and your website should follow the same tone.  Everything is all over the place with your site.  Also, it’s 2017, having a proper mobile optimized site would be nice too.

Finally, you guys run blogs, but one thing I’ve noticed is you stop allowing new comments after “x” time.  Why do you do this?  I might need further clarification on a topic that was written, even if it’s years ago.

Cloud and innovation:

This one is a tough area, and I'm not sure what to say, other than I hope you're not the next Novell.  You guys had a pretty spectacular fail at cloud, and I could probably go into a lot of reasons why, and most of them wouldn't be related to Microsoft or AWS being too big to beat.  I suspect part of it was you guys got fat, lazy and way too cocksure.  It's ok, it happens to a lot of companies and professionals alike.  While it's hard for me to foresee someone wanting to consume a serverless platform from you guys, I wouldn't find it hard to believe that someone might want to consume a better IaaS platform than what's offered by Microsoft or AWS.  While they have great automation, their fundamental platform still leaves a lot to be desired.  That, to me, is an area you guys could still capture.  I could foresee a great use case for a virtual colocation + all the IaaS scalability and automation abilities.  I still have to shut down an Azure VM for what feels like every operation, need I say more?

Closing:

Look, I could probably keep going on, and one may wonder why stop now; I'm already at 6,000 plus words.  I will say kudos to you if you've actually read this far and didn't simply skip down.  However, the point of this post wasn't to tear down VMware, nor was it to go after writing my longest post ever.  I needed to vent a little bit, and wanted VMware to know why I'm frustrated with them and what they could do to fix that.  I suspect a lot of my viewpoints aren't shared by all, but in turn, I'm sure some are.  VMware was the first tech company that I was truly inspired by.  To me, they exemplified what a tech company should strive to be, and somewhere along the way, they lost it.  Here's to hoping VMware will be with us for the long haul, and that what's going on now is simply a bump in the road.

 

Powershell Scripting: Microsoft Exchange, Configure client-specific message size limits

Introduction:

If you don't know by now, I'm a huge PowerShell fan. It's my go-to scripting language for anything related to Microsoft (and non-Microsoft) automation and administration. So when it came time to automate post-Exchange-cumulative-update settings, I was a bit surprised to see that some of the code examples from Microsoft didn't contain any PowerShell at all. Surprised is probably the wrong word, how about annoyed? I mean, after all, this is not only the company that shoved this awesome scripting language down our throats, but also the very team that was the first to have a comprehensive set of admin abilities via PowerShell. So if that's the case, why in the world don't they have a single PS example for configuring client-specific message size limits?

Not to be discouraged, I said screw appcmd, I'm PS'ing this stuff, because it's 2017 and PS / DSC is what we should be using. Here's how I did it.

The settings:

If you're looking for where the settings I'm speaking of live, check out this link here. That's how you do it the "old school" way.

The new school way:

My example below is for EWS, you need to adjust this if you want to also include EAS.


     Write-Host "Attempting to set EWS settings"
    Write-Host "Starting with the backend ews custom bindings"
    $AllBackendEWSCustomBindingsWebConfigProperties = Get-WebConfigurationProperty -Filter "system.serviceModel/bindings/custombinding/*/httpsTransport" -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" -Name maxReceivedMessageSize -ErrorAction Stop | Where-Object {$_.ItemXPath -like "*EWS*https*/httpstransport"} 
    Foreach ($BackendEWSCustomBinding in $AllBackendEWSCustomBindingsWebConfigProperties)
        {
        Set-WebConfigurationProperty -Filter $BackendEWSCustomBinding.ItemXPath -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" -Name maxReceivedMessageSize -value 209715200 -ErrorAction Stop
        }
    Write-Host "Finished the backend ews custom bindings"
    
    Write-Host "Starting with the backend ews web http bindings"
    $AllBackendEWwebwebHttpBindingWebConfigProperties = Get-WebConfigurationProperty -Filter "system.serviceModel/bindings/webHttpBinding/*" -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" -Name maxReceivedMessageSize -ErrorAction Stop | Where-Object {$_.ItemXPath -like "*EWS*"} 
    Foreach ($BackendEWSHTTPmBinding in $AllBackendEWwebwebHttpBindingWebConfigProperties)
        {
        Set-WebConfigurationProperty -Filter $BackendEWSHTTPmBinding.ItemXPath -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" -Name maxReceivedMessageSize -value 209715200 -ErrorAction Stop
        }
    Write-Host "Finished the backend ews web http bindings"

    Write-Host "Starting with the back end ews request filtering"
    Set-WebConfigurationProperty -Filter "/system.webServer/security/requestFiltering/requestLimits" -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" -Name maxAllowedContentLength -value 209715200 -ErrorAction Stop
    Write-Host "Finished the back end ews request filtering"

    Write-Host "Starting with the front end ews request filtering"
    Set-WebConfigurationProperty -Filter "/system.webServer/security/requestFiltering/requestLimits" -PSPath "MACHINE/WEBROOT/APPHOST/Default Web Site/EWS" -Name maxAllowedContentLength -value 209715200 -ErrorAction Stop
    Write-Host "Finished the front end ews request filtering" 

Is it technically better than appcmd?  Yes, of course, what did you think I was going to say?  It’s PS, of course it’s better than CMD.

As for how it works, I mean it’s pretty obvious, I don’t think there’s any good reason to go into a break down.  I took what MS did with AppCMD and just changed it to PS, with a foreach loop in the beginning to have even a little less code 🙂

You should be able to take this and easily adapt it to other IIS based web.config settings.  The Get-WebConfigurationProperty call at the very beginning is a great way to explore any web.config via the IIS cmdlets.
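For example, a slight variation on the same call works well for poking around; the filter below just casts a wider net over the bindings so you can see which ItemXPaths exist before narrowing in (same Exchange back end EWS PSPath as the script above).

    # Explore which EWS bindings exist before narrowing the filter
    Get-WebConfigurationProperty -Filter "system.serviceModel/bindings/webHttpBinding/*" `
        -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" `
        -Name maxReceivedMessageSize -ErrorAction SilentlyContinue |
        Select-Object ItemXPath, Value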

Anyway, hope this helps someone.

***Update 07/29/2017:

So we did our Exchange 2013 CU15 upgrade, and everything went well with the script, except for one snag.  My former script had an incorrect filter that added an "https" binding to an "http" path.  EWS didn't like that very much (as we found out the hard way).  Anyway, it should be fixed now.  I updated the script.  Just so you know which line was affected, you can see the before and after below.  Basically, my original filter grabbed both the http and https transports.  I guess technically each web property has the potential for both.  My new filter goes after only https EWS configs + https transports.


#I changed this:

$AllBackendEWSCustomBindingsWebConfigProperties = Get-WebConfigurationProperty -Filter "system.serviceModel/bindings/custombinding/*/httpsTransport" -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" -Name maxReceivedMessageSize -ErrorAction Stop | Where-Object {$_.ItemXPath -like "*EWS*"}

#To this

$AllBackendEWSCustomBindingsWebConfigProperties = Get-WebConfigurationProperty -Filter "system.serviceModel/bindings/custombinding/*/httpsTransport" -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" -Name maxReceivedMessageSize -ErrorAction Stop | Where-Object {$_.ItemXPath -like "*EWS*https*/httpstransport"}

Powershell Scripting: Get-ECSESXHostVIBToPCIDevices

Introduction:

If you remember a little bit ago, I said I was trying to work around the lack of driver management with vendors.  This function is the start of a few tools you can use to potentially make your life a little easier.

VMware’s drivers are VIBS (but not all VIBS are drivers).  So the key to knowing if you have the correct drivers is to find which VIB matches which PCI device.  This function does that work for you.

How it works:

First, I hate to be the bearer of bad news, but if you're running ESXi 5.5 or below, this function isn't going to work for you.  It seems the names of the modules and VIBs don't line up via ESXCLI in 5.5, but they do in 6.0.  So if you're running 6.0 and above, you're in luck.

As for how it works, it's actually pretty simple (a rough PowerCLI sketch of the same flow follows the list).

  1. Get a list of all PCI devices
  2. Get a list of all modules (which aren’t the same as VIBS).
  3. Get a list of all VIBs.
  4. Loop through each PCI device
    1. See if we find a matching module
      1. Take the module and see if we find a VIB that matches it.
  5. Take the results of each loop, add it to an array
  6. Spit out the array once the function is done running
  7. Your results should be present.
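Here's the rough sketch mentioned above.  This is not the actual Get-ECSESXHostVIBToPCIDevices code, just the pattern, and it leans on two assumptions: that the esxcli PCI listing exposes ModuleName / DeviceName properties the way I remember them, and (as noted above) that module names and VIB names line up, which only holds on 6.0 and later.  The host name is a placeholder.

    # Rough sketch of the PCI device -> module -> VIB matching flow (ESXi 6.0+ only)
    $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name 'esx01.example.com') -V2
    $vibs   = $esxcli.software.vib.list.Invoke()

    foreach ($pci in $esxcli.hardware.pci.list.Invoke())
        {
        # Skip devices with no driver module loaded
        if (-not $pci.ModuleName -or $pci.ModuleName -eq 'None') {continue}
        # Assume the VIB carrying the driver shares the module's name (6.0+ behavior)
        $vib = $vibs | Where-Object {$_.Name -eq $pci.ModuleName}
        [pscustomobject]@{
            Device           = $pci.DeviceName
            Module           = $pci.ModuleName
            ModuleVibVersion = $vib.Version
            }
        }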

How to execute it:

Ok, to begin with, I’m not doing fancy pipelining or anything like that.  Simply enter the name of the ESXi host as it is in vCenter and things will work just fine.  There is support for verbose output if you want to see all the PCI devices, modules and vibs that are being looped through.

Get-ECSESXHostVIBToPCIDevices -VMHostName "ServerNameAsItIsInvCenter"

If you want to do something like loop through a bunch of hosts in a cluster, that’s awesome, you can write that code :).

How to use the output:

Ok great, so now you’ve got all this output, now what?  Well, this is where we’re back to the tedious part of managing drivers.

  1. Fire up the VMware HCL web site and go to the IO devices section
  2. Now, there are three main columns from my output that you need in order to find the potential list of drivers.  Yeah, even with an exact match, there may be anywhere from 0 devices listed (take that as meaning you're running the latest) to one or more hits.
    1. PCIDeviceSubVendorID
    2. PCIDeviceVendorID
    3. PCIDeviceDeviceID
  3. Those three columns are all you need.  Now, a few notes on this.
    1. If there are fewer than four characters, VMware will add leading zeros in their web drop down picker.  For example, if my output shows "e3f", on VMware's drop down picker you want to look for "0e3f" (see the one-liner after this list).
    2. If you get a lot of results, what I suggest doing next is seeing if the vendor matches your server vendor.  If you find a server vendor match and there is still more than one result, see if it's something like the difference between a dual port or single port card.  If you don't see your server vendor listed, see if the card vendor is listed.  For example, in UCS servers, instead of seeing Cisco for a RAID controller, you would likely find a match for "Avago" or "Broadcom".  Yeah, it totally gets confusing with HW vendors buying each other LOL.
  4. Once you find a match, the only thing left to do is look at the output of the "ModuleVibVersion" column in my script and see if you're running the latest driver available, or if it's at least recent.  Just keep in mind, if you update the driver, make sure the FW you're running is also certified for that driver.
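If you want to skip the mental padding from the leading-zero note above, a one-liner handles it (the variable name is just for illustration):

    # Pad a short ID to the four character form the HCL drop downs expect, e.g. 'e3f' -> '0e3f'
    $pciDeviceId = 'e3f'
    $pciDeviceId.PadLeft(4, '0')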

Where’s the code?

Right here

What’s next / missing?

Well, a few things:

  1. I haven’t found a good way yet to loop through each PCI device and see its FW version.  That’s a pretty critical bit of info as I’ve said before.
  2. Even if I COULD find the firmware version for you, you're still going to need to cross reference it against your server vendor.  Without an API, this is also going to be a tedious process.
  3. You need to manually check the HCL because in 2017, VMware still doesn’t have an API, let alone a restful one to do the query.  If we had that, the next logical step would be to take this output and query an API to find a possible match(es).  For now, you’ll need to do it manually.
    1. Ideally, the same API would let you download a driver if you wanted.
  4. VMware lacks an ability to add VIBs via PowerCLI or really manage baselines and whatnot.  So again, VMware is really dropping the ball here.  This time it's the "Update Manager" team.

Conclusion:

Hope this helps a bit, it’s far from perfect, but I’ve used it a few times, and found a few NIC drivers and RAID controllers that had older drivers.

Problem Solving: WSUS failing for Windows 10 with error 8024401c

Hi Folks,

After updating WSUS to support Windows 10's newer update format, we noticed that our Windows 10 clients weren't working. The error they were getting was 8024401c whenever we checked for updates (post WSUS upgrade).  Initially we thought it was related to the WSUS upgrade, but found out that most of our systems hadn't been updating for a while.  So we moved on to troubleshooting the client further.  We found that the GPO "Do not connect to any Windows Update Internet locations" was in play.  After doing some digging, we determined that it had been put in place to prevent our clients from downloading updates from MS directly, which was originally happening.  The weird thing to me was, why were our clients going to MS anyway?  We have WSUS; that's the point of WSUS.  Disabling the setting resulted in us getting updates, but now they were coming from MS directly and not WSUS.  Enabling it or setting it to "not configured" resulted in the lovely error.

Example snippet of log file below.

2017/06/13 10:08:31.2836183 676 10488 WebServices WS error: There was an error communicating with the endpoint at ‘http://%ServerName%/ClientWebService/client.asmx’.
2017/06/13 10:08:31.2836186 676 10488 WebServices WS error: There was an error receiving the HTTP reply.
2017/06/13 10:08:31.2836189 676 10488 WebServices WS error: The operation did not complete within the time allotted.
2017/06/13 10:08:31.2836280 676 10488 WebServices WS error: The operation timed out
2017/06/13 10:08:31.2836379 676 10488 WebServices Web service call failed with hr = 8024401c.

After a ton of Google Fu, I stumbled onto this article https://blogs.technet.microsoft.com/windowsserver/2017/01/09/why-wsus-and-sccm-managed-clients-are-reaching-out-to-microsoft-online/  Before you start reading, make sure you're relaxed and read through it carefully, because the answer is there, but you have to make sure you're not just skimming.

Here is the main section and highlighted points that you need to glean from that article.


Ensure that the registry HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate doesn’t reflect any of these values.

  • DeferFeatureUpdate
  • DeferFeatureUpdatePeriodInDays
  • DeferQualityUpdate
  • DeferQualityUpdatePeriodInDays
  • PauseFeatureUpdate
  • PauseQualityUpdate
  • DeferUpgrade
  • ExcludeWUDriversInQualityUpdate

What just happened here? Aren’t these update or upgrade deferral policies?

Not in a managed environment. These policies are meant for Windows Update for Business (WUfB). Learn more about Windows Update for Business.

Windows Update for Business aka WUfB enables information technology administrators to keep the Windows 10 devices in their organization always up to date with the latest security defenses and Windows features by directly connecting these systems to Windows Update service.

We also recommend that you do not use these new settings with WSUS/SCCM.

If you are already using an on-prem solution to manage Windows updates/upgrades, using the new WUfB settings will enable your clients to also reach out to Microsoft Update online to fetch update bypassing your WSUS/SCCM end-point.

To manage updates, you have two solutions:

  • Use WSUS (or SCCM) and manage how and when you want to deploy updates and upgrades to Windows 10 computers in your environment (in your intranet).
  • Use the new WUfB settings to manage how and when you want to deploy updates and upgrades to Windows 10 computers in your environment directly connecting to Windows Update.

So, the moment any one of these policies are configured, even if these are set to be “disabled”, a new behavior known as Dual Scan is invoked in the Windows Update agent.

When Dual Scan is engaged, the following change in client behavior occur:

  • Whenever Automatic Updates scans for updates against the WSUS or SCCM server, it also scans against Windows Update, or against Microsoft Update if the machine is configured to use Microsoft Update instead of Windows Update. It processes any updates it finds, subject to the deferral/pausing policies mentioned above.

Some Windows Update GPOs that can be configured to better manage the Windows Update agent. I recommend you test them in your environment


After reading that, I went back into our GPO and did some more digging, since all our WSUS client settings are defined in GPO.  It turns out we had the "Do not include drivers with…" setting enabled.  So ultimately it was this setting that led to the whole "Dual Scan" mode being enabled, which led to us downloading MS updates directly (needed to happen anyway), which led to us disabling that other setting, which led to WSUS not being used at all.  After setting both settings to not configured and doing a lot of GPUpdates / restarting of the Windows Update service, I eventually went from getting that error to everything being back to normal.  That is, my client connecting to WSUS and downloading updates the right way.
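If you want to check a client for the same trap, here's a small sketch that reports which of the WUfB values quoted above are present.  The key path and value names come straight from the article; the wrapper around them is mine.

    # Report any WUfB policy values that will flip a WSUS-managed client into Dual Scan
    $wuKey  = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
    $values = 'DeferFeatureUpdate','DeferFeatureUpdatePeriodInDays','DeferQualityUpdate',
              'DeferQualityUpdatePeriodInDays','PauseFeatureUpdate','PauseQualityUpdate',
              'DeferUpgrade','ExcludeWUDriversInQualityUpdate'
    if (Test-Path $wuKey)
        {
        $props = Get-ItemProperty -Path $wuKey
        foreach ($name in $values)
            {
            if ($null -ne $props.$name)
                {
                Write-Warning "$name is set to '$($props.$name)' - this can trigger Dual Scan"
                }
            }
        }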

The lessons learned, besides not just randomly enabling WSUS settings, are that Microsoft, in my not so humble opinion, needs to do a better job with the entire WSUS client control story.  This is just stupid behaviour, to be blunt.  What I would suggest MS do is as follows.

  1. For Pete's sake, have a damn setting that controls whether we want updates via WSUS, WUfB, or neither.  I mean, it seems like such an obvious thing.  Clearly the implied settings conflict.  If you have to write a damn article explaining all the gotchas, you failed at building a user friendly solution.
  2. Group settings that are Windows update for business specific in their own damn GPO folder and their own damn reg key.  This way there’s no question these are for WUfB only.  Similar to WSUS.
  3. If WSUS is enabled, ignore WUfB settings and vice versa.

Anyway, hope that helps any other poor souls out there.

Review: 5 years with CommVault

Introduction:

Backup and recovery is a rather dry topic, but it's an important one.  After all, what's more critical to your company than its data?  You can have the best products in the world, but if disaster strikes and you don't have a good solution in place, it can make your recovery painful or even impossible.  Still, many companies shirk investment in this segment.  The good solutions (like the one I'm about to discuss) cost a pretty penny, and that's capital that needs to be balanced against technology that makes or saves your company money.  Still, insurance (and that's what backup is) is something that's typically at the back of companies' minds.

Finding the right product in this segment can be a challenge, not only because every vendor tries to convince you that they’ve cracked the nut, but because it seems like all the good solutions are expensive.  Like many, our budget was initially constrained.  We had an old investment in CV (CommVault), but had not reinvested in it over the years, and needed a new solution.  We initially chose a more affordable Veeam + Windows Storage Spaces to handle our backup duties.  It was a terrible mistake, but you know, sometimes you have to fail to learn, and so we did.

After putting up with Veeam for a year, we threw in the towel and went back to CV with open arms.  Our timing was also great, as Veeam had put a serious hurt on CV's business and some of CV's licensing had changed to accommodate that.  We ultimately ended up with much better pricing than when we last looked at CV, and on top of that, we actually found their virtualization backup to be more affordable and in many ways more feature rich.  CV isn't perfect, as I'll outline below, but they're pretty much as close as you can get to perfection for a product that is the Swiss Army knife of backup.

CommVault Terms:

For those of you not super familiar with CV, you’ll find the following terms useful for understanding what I’m talking about.  There are a lot more components in CV, but these are the fundamental ones.

  • MA (Media Agent): Simply put, it’s a data mover.  It copies data to disk, tape, cloud, etc.
  • Agent: A client that is installed to backup an application or OS.
  • VSA (Virtual Server Agent): A client specifically designed for virtualization backup.
  • CC (CommCell): The central server that manages all the jobs, history, reporting, configuration, etc.  This is the brains of the whole operation.

Our Environment:

  • We have five MA’s.
    • Two virtual MA's that back up to a Quantum QXS SAN (DotHill). This was done because we were reusing an old pair of VM hosts and have a few other non-CV backup components running on these hosts.
      • The SAN has something like two pools of 80 disks. Not as fast as we’d like, but more than fast enough.  The QXS (DotHill) was our replacement for Storage Spaces.  Overall, better than Storage Spaces, but a lot of room for improvement.  The details of that are for another review.
    • Two physical MA's with DAS; each MA has 80 disks in a RAID 60, and yeah, it rips from a disk performance perspective 🙂 (multiple GBps).
    • One physical MA that’s attached to our tape library.
  • We have five VSA’s, I’ll go more into this, but we’re not using five because I want to.
  • We have one CC, although we’ll be rolling out a second for resiliency and failover soon.
  • We have a number of agents
    • Several MS Exchange
    • Several MS Active Directory
    • Several Linux
    • The rest are file server / OS image agents.
  • In total, we have about a PB of total backup capacity between our SAN and DAS, but not all of that is consumed by CV (most is though).
  • We only use compression right now, no dedupe.
  • We only use active fulls (real fulls) not synthetics

Pros:

  • Backup:
    • CV can back up practically anything, and also has a number of application specific agents as well. You can back up your entire enterprise with their solution.  I would contend that with CV, there are very few cases where you'd need point tools anymore.  Desktops, servers, virtualization, various applications and NAS devices are all systems that can be backed up by CV.  Honestly, it's hard to find a solution that is as comprehensive as theirs.  That being said, I can imagine you're wondering: if they do it all, can they do it well?  I would say mostly.  I have some deltas to go over a little farther down, but they do a lot, and a lot of it well.  It's one of the reasons the solution was (and still is) expensive.
    • I went from having to babysit backups with Veeam, to having a solution that I almost never have to think about anymore (other than swapping tapes). There were some initial pains at first as we learned CV's way of doing virtualization backup, but we quickly got to a stable state.
  • Deployment / Scalability:
    • CommCell has a great deployment model that works well in single office locations all the way to globally distributed implementations. They’re able to accomplish all of this with a single pane of glass, which a number of vendors can’t claim to do.
    • Besides the size of the deployment, you’re not forced into using Windows only for most components of CV. A lot of the roles outlined above run on Linux or Windows.
    • CV is software based, and best of all, it's an application that runs on an OS which you're already comfortable with (Linux / Windows). Because of this, the HW that you deploy the solution on is really only limited by minimum specs, budget and your imagination.  You can build a powerful and affordable solution on simple DAS, or you can go crazy and run on NVMe / all flash SANs.  It also works in the cloud because, again, it's just SW inside a generic OS.  I can't tell you how many backup solutions I looked at that had zero cloud deployment capabilities.
    • There are so many knobs to turn in this solution that it's pretty tough to run into a situation you can't tune for (there are a few though). Most of the out of box defaults are fine, but you'll get the best performance when you dig in and optimize.  Some find this overwhelming, and I'll chat more about that in the cons, but with CV's great support and reading their documentation, it's not as bad as it sounds.  Ultimately the tunability is an incredible strength of this solution.  I've been able to increase backup throughput from a few hundred MBps to a few GBps simply by changing the IO size that CV uses.
  • Support:
    • Overall, they have fantastic support. Like any vendor’s support, it can vary, and CV is no different.  Still, I can count on one hand the number of times support was painful, and even in those cases, we ultimately got the issue resolved.
    • For the most part, support knows the application they’re backing up pretty well. I had a VMware backup issue that started with Veeam and continued with CV.  CV, while not able to directly solve the problem, provided significantly more data for me to hand off to VMware, which ultimately led to us finding a known issue.   CV analyzed the VMware logs as best they could and found the relevant entries that they suspected were the issue.  Veeam was useless.
    • Getting CV issues fixed is something else that’s great about them. No vendor is perfect; that’s what hotfixes and service packs are for.  CV has an amazing escalation process.  I went from a bug to a hotfix that resolved the issue in under two weeks.
    • My experience with their support’s response time has been fantastic. I rarely go more than a few hours without hearing from them.  They’re also not afraid to simply call you and work on the problem in real time. I don’t mind email responses for simple questions, but when you’re running into a problem, sometimes you just want someone to call you and hash it out live.  I also like that most of the time you get the tech’s direct number if you need to call them.
  • Feature requests: A little hit or miss, but feature requests tend to get taken seriously with CV, especially if it’s something pretty simple.
  • Value: This one is a mixed bag.  Thanks to Veeam eating their lunch, virtualization backup with CV has never been a better value.  I could be wrong, but I actually think virtualization backup in CV rings in at a significantly lower price than Veeam.  I would say at least 50% of our backups are virtualization based.  It’s our default backup method unless there is a compelling reason to use agents.   This is ultimately what made CV an affordable backup solution for us.  We were able to leverage their virtualization backup for most of our stuff, and utilize agents for the few things that really needed to be backed up at a file or application level.  The virtualization backup entitles you to all their premium features, which is why I think it’s a huge value add.  That being said, I have some things to touch on in the cons with regard to the value.
  • Retention Management: Their retention management is a little tricky to get your head around, but it’s ultimately the right way to do retention.  Their retention is based on a policy, not on the number of recovery points.  You configure things like how many days of fulls you want and how many cycles you need.  I can take a bazillion one-off backups and not have to worry about my recovery history being prematurely purged.
  • Copy management: They manage copies of data like a champ.  Mix it with the above point, and you can have all kinds of copies with different retentions and different locations, and it all works rock solid.  You have control over what data gets copied.  So your source data might have all your VM’s and you only want a second copy of select VM’s; no problem for them.  Maybe you want dedupe on some, compression on others, some on tape, some on disk, some in the cloud; again, no issue at all.
  • Ahead of the curve: CV seems to be the most forward thinking when it comes to backup / recovery destinations and sources.  They had our Nimble SAN’s certified for backup LONG before our previous vendor.  They support all kinds of cloud destinations, and the ability to recover VM’s from physical to virtual, virtual to cloud, etc.  This goes back to the holistic approach that I brought up.  They do a very good job of wrapping everything up and creating a flexible ecosystem to work with.  You typically don’t need point solutions with them.
  • Storage Management: I love their disk pools and the way they store their backup data.  First and foremost, it’s tunable, so whether you want 512MB files or some other size, it’s an option.  They shard the data across disks, etc.  Frankly, the way they store data is a no brainer.  They also move jobs / data pretty easily from one disk to another, which is great.  This type of flexibility is not only helpful for things like making it easier to fit your data on disparate storage, but also in ensuring your backups can easily be copied to unreliable destinations.  Having to recopy a 512MB file is a lot better than having to recopy an 8TB file.  CV can take that 8TB file if you want, and break it up into chunks of whatever size (the default is 2GB).
  • Policies: Most things are defined using policies: schedules, retention, copies, etc.  Not everything, but most things.  This makes it easy to establish standards for how things should act, and it also makes it easier to change things.
  • CLI: They have a ton of capability in their CLI / API.  Almost anything can be executed or configured.  I’ve actually developed a number of external workflows that call their CLI, and it works well.
  • Tape Management:
    • They handle tapes like a librarian, minus the Dewey Decimal System. Seriously though, I haven’t worked with a solution that makes handling tapes as easy as they do.
    • If you happen to use Iron Mountain, they have integration for that too.
    • They’re pretty darn efficient with tape usage as well, which is mostly thanks to their “global copy” concept. We still have some white space issues, but it makes sense why they happen.
    • They are very good at controlling tape drive and parallel job management. This allows you to balance how many tape drives are used for what jobs.
  • Documentation: They document everything, and for good reason, there is a lot their product does. This includes things like advanced features and most of the special tuning knobs as well.  It’s not always perfect, but it’s typically very good.
  • Recovery:
    • File-level recovery from tape for VM backups, without having to recover the whole VMDK first; need I say more? That means if I need one file off an 8TB backup VMDK, I don’t have to restore 8TB first.
    • Most application-level backups offer some level of item-level recovery. It’s not always straightforward or quick, but it’s usually possible.
    • They’re smart with how they restore data. You can pick where you want the data recovered from (location and copy), and if it does need tapes, it tells you exactly what tapes you need.  No more throwing every single tape in and hoping that’s all you need.

Cons:

  • Backup:
    • Virtualization:
      • Their VMware backup in many ways isn’t as tunable as it should be. There are places where they don’t have stream limits where they really need them.  For example, they lack a stream limit on the vCenter server, the ESXi host, or even the VSA doing the backup.  It’s honestly a little strange, as CV seems to offer a never-ending number of stream controls for other areas of their product.  I bring this up as probably my number one issue with their VMware backup; it’s where we had the most problems initially.  I would still say this is a glaring hole in their virtualization backup.  I just looked at CV11 SP7 and nothing has changed with regard to this, which is disappointing to say the least.  This is one area that I think Veeam handles much better than them.
      • The performance of NBD (management network only) based backup is, to put it bluntly, terrible. The only way we could get really good performance out of their product was to switch to hot add.  Typically speaking, I hate hot add for VMware backup.  It takes forever to mount disks, and it makes the setup of VM backup more complicated than it needs to be.  Not to mention, if you do have an issue during the backup process (like vCenter dying), the cleanup afterwards is horrible.
      • They don’t pre-tune the VSA for hot add, things like disabling the automatic “initialize disk” behavior in Windows and whatnot.
      • Their inline compression throughput was also atrocious at first. We had to switch the algorithm used, which fixed the issue, but it required a non-GUI tweak and me asking if there was anything else they could do.  It was lucky timing that the new algorithm had just been released as experimental in the version we upgraded to.
      • Their default VM dispatching, to me, is less than ideal. Instead of balancing VM’s across the VSA’s in a least-loaded fashion, they pick the VSA closest to the VM or datastore.  I needed to go in and disable all of this.
    • Deployment / Scalability:
      • While I applaud their flexibility, the one area that I think still needs work is their dedupe. To me, they really need to focus on building a Data Domain class of solution that can scale to petabytes of logical data in a single media agent, and right now they can’t scale that big.  Instead, you end up with a bunch of mid-sized buckets, which is better than nothing, but still not as good as it should be.
      • Deployment for CV newbies is not straightforward. You’ll definitely need professional services to get most of the initial setup done, at least until you have time to familiarize yourself with it.  You’ll also need training so that you actually know how to care for and grow the solution.  I think CV could do a better job here, perhaps implementing a more express setup just to get things up, and maybe even a couple of intro / how-to videos to jump start the setup.  It’s complicated because of its power, but I don’t think it needs to be.  The knobs and tuning should be there to customize the solution to a person’s environment, but there should be an easy button that suits most folks out of the box.
    • Support: In general I love their support, but there are times where I’m pretty confident the folks doing the support don’t have at-scale experience with the product.  There are times when I’ve tried explaining a scaling issue we were having, and they couldn’t wrap their heads around it.  They also tend to get wrapped up in “this is the way it works” and not “this is the way it SHOULD work”, which again I think comes back to experience with the product at scale.  This would tend to happen more when I was trying to explain why I set something up in a particular way, and a way that didn’t match their norm.  For example, with VM backups they like to pile everything into subclients.  For a number of reasons I’m not going to go into in this blog post, that doesn’t work for us, and frankly it shouldn’t work for most folks.  I was able to punch holes in why their design philosophy was off, but they were stuck on “this is the way it is”.  The good news is you can typically escalate over short-sighted techs like this and get to someone who can think outside the box.
    • Value: This is a tough one.  On one hand, I want good support and a feature-rich product, but on the other hand, the cost of agent-based backup is, frankly, stupidly expensive.  When my backup product costs more per TB than my SAN, that’s an issue.  It’s one of the primary reasons we push towards VM-based backups; it’s honestly the only way we could afford their product.  Even with huge discounts, the cost per TB is insane with their solution.  In some cases, I would almost rather have a per-agent cost than a per-TB cost.  I could see how that could get out of control, but I think there are cases where each licensing model works better for a given company.  If I had thousands of servers, I could see where the per-TB model might make more sense.  This is one of the reasons we don’t back up SQL directly with CV; it just costs too much per TB.  It’s cheaper for us to use a (still too expensive) file-based agent to pick up SQL dump files.
    • Storage Management: Once data is stored on its medium, moving it off isn’t easy.  If you have a mountpoint that needs to be vacated, you need to either aux copy the data to a new storage copy, manually move the data to another mountpoint, or wait till it ages out.  They really should have an option in their storage pool to simply right click the mountpoint and say “vacate”.  This operation would then move all data / jobs to whatever mountpoints are left in the pool, similar to VMware’s SDRS.  I would actually like to see this ability at the MA level as well.
    • CLI: I’ll knock any vendor that doesn’t have a PowerShell module, and CV is one of those vendors.  Again, glad that you have APIs, but in an enterprise where Windows rules the house, PowerShell should be a standard CLI option.
    • Tape Management: As much as I think they do it better than anyone else, they could still improve the white space issue.  I almost think they need a tapering-off setting.  Perhaps even a preemptive analysis of the optimal number of tapes and tape drives before the start of each new aux copy, re-analyzed each time more data is detected that needs to be copied to tape.  This way it could balance copy performance with tape utilization.  Maybe even define a range of streams that can be used.
    • Documentation: As great as their documentation is, it needs someone to really organize it better, taking into account the differences between CV versions.  I realize it’s probably a monumental task, but it can be really hard to find the right document for the right version of what you’re looking for.  I’ve also found times where certain features are documented in older CV version docs, but not in newer ones (even though the features still exist).  I guess you could argue at least they have so much documentation that it’s just hard to find the right one, vs. not having any documentation at all.  When in doubt though, I contact support and they can generally point me in the right direction, or they’ll just answer the question.
    • Recovery:
      • Application-based item-level recovery really needs a lot of work. One thing I’ll give Veeam is that they seem to have a far more feature-rich and intuitive application item-level recovery solution than CV.
        • Restoring Exchange at an item level is slow and involved (lots of components to install). I honestly still haven’t gotten it working.
        • AD item level recovery is incredibly basic and honestly needs a ton of work.
        • Linux requires a separate appliance, which IMO it shouldn’t. If Linux admins can write tools to read NTFS, why can’t a backup vendor write a Windows tool that can natively mount and read EXT3/4, ZFS, XFS, UFS, etc.?
      • P2P, V2V / P2V leaves a lot to be desired. If you plan to use this method, make sure you have an ISO that already works.  Otherwise you’ll be scrambling to recover bare metal when you need to.

Conclusion:

Despite CommVault’s cons, I still think it’s the best solution out there.  It’s not perfect in every category, and that’s a typical problem with most do-it-all solutions, but it’s pretty damn good at most of them.  It’s an expensive solution, and it’s complicated, but if you can afford it and invest the time in learning it, I think you’ll fall in love with it, at least as much as one can with a backup tool.

Thinking out loud: Why do server vendors still struggle with driver and firmware management?

History:

Let me give you a little back story before digging into the meat of this post.  My team and I make a very concerted effort to keep our servers’ firmware and drivers updated.  We’ve gone so far as to purchase software from Dell, implement a process for how firmware / drivers are to be updated, and ensure that it’s routinely done every quarter.  We do this because in general it’s a best practice, but also because we’ve run into too many occasions where troubleshooting with a vendor stops (if it ever starts) very quickly if the drivers / firmware aren’t recent.  In essence, we’re doing our best to be diligent and proactive about keeping our servers healthy, secure and updated.

Late last year we ran into two issues, both of which are related to drivers / firmware.

  1. A Broadcom NIC causing a purple screen of death (PSOD) in ESXi. This was a server that had been freshly rebuilt with all drivers (or so we thought) and firmware updated.  It turns out the driver we were running was more than two years old, and the PSOD we were having was a resolved issue in a newer driver.
  2. An Intel x710 quad port 10Gb NIC causing packets to black hole for certain VM’s on the same VLAN. Again, these were new hosts that were patched, firmware updated and in theory up to date.  This issue is what really triggered us to start evaluating other server vendors and their solutions.

 

Of those two issues above, only one was solved with a simple driver update, and the other, we just gave up on and switched to a different NIC (x520).

The Issue:

If you can’t see where this post is going already, let me lay it out.  Server vendors still can’t properly manage their own drivers, firmware and vendor-specific software. I know what you’re thinking: you have tools that the vendor provided, you’re using them, and you’re fine.  I hate to be the bearer of bad news, but I doubt it.  We just finished a rigorous evaluation of Dell, HP and Cisco.  None of them have a complete solution.  Don’t get me wrong, some of them are getting there, but no one has the problem solved.  If you’re wondering what the specific problems are, see the bullets below.

  • Server vendors would like you to keep your firmware, drivers and tools up to date.
  • OS vendors (and sometimes server vendors) require that the driver and firmware have a certified pairing. It is NOT good enough to simply have the latest driver and the latest firmware.  This of course may vary slightly depending on the server and OS vendor.  VMware, as an example, absolutely requires a specific driver and firmware pairing.
    • This driver and firmware pairing is typically worked out between the server and OS vendor.
    • VMware has a strict HCL for this use case, and TMK, MS has an HCL, although they’re a little more forgiving when it comes to the pairing. I can’t speak about Linux, Solaris or other OS’s.

 

Think about this: when was the last time you did the following?

  • Retrieved an inventory of all your hardware firmware revisions and driver revisions.
    • Do you even know how to do this? It’s probably not as easy as you think (a PowerCLI sketch follows this list).
  • Logged into your OS vendor’s HCL and, one hardware item at a time, checked that you’re running the latest driver and firmware, and that the pairing is also certified.
    • With VMware, you can use the device vendor ID, device ID and sub-vendor ID to find the specific hardware in question on their HCL. Just remember they’re hex values.
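For reference, here’s a minimal PowerCLI sketch of that NIC inventory step, assuming vCenter connectivity and the VMware.PowerCLI module.  The vCenter name is a placeholder, and the DriverInfo property names are from memory, so verify them against your PowerCLI / ESXi build before trusting the output.

    # Minimal sketch: dump driver and firmware versions for every physical NIC on every host.
    Import-Module VMware.PowerCLI
    Connect-VIServer -Server 'vcenter.domain.com'   # placeholder vCenter name

    foreach ($vmhost in Get-VMHost) {
        $esxcli = Get-EsxCli -VMHost $vmhost -V2
        foreach ($nic in $esxcli.network.nic.list.Invoke()) {
            # Pull the driver and firmware details for each vmnic
            $info = $esxcli.network.nic.get.Invoke(@{nicname = $nic.Name})
            [pscustomobject]@{
                Host      = $vmhost.Name
                Nic       = $nic.Name
                Driver    = $info.DriverInfo.Driver
                DriverVer = $info.DriverInfo.Version
                Firmware  = $info.DriverInfo.FirmwareVersion
            }
        }
    }

From there it’s still a manual (or scripted) cross-check against the HCL, which is exactly the gap I’m complaining about.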

I bet you’re relying on one or more of the following.

  • VMware update manager, and vendor provided depots (if they exist).
  • Vendor supplied firmware management solutions.
    • Some may have driver management for select OS’s, but no one does it all.
  • Vendor custom ISOs / install discs.

I suspect that if you go check your VMware HCL, you’re out of compliance in one way or another, or something is woefully out of date.

Solution:

Let’s share a pipe for a second and dream about what it should look like.

 

Server Vendor:

  • It should be a central console.
  • It should handle downloading all firmware, drivers and vendor specific tools.
  • It should use a concept of baselines (see the sketch after this list), with a baseline defined as:
    • OS and server model specific.
    • Based on a release date. The baseline should define an approved pairing of drivers, firmware and vendor tools for a given month, quarter or however often the server vendor feels the need to establish a new baseline.
    • The baseline should support the concept of cumulative updates / hotfixes.
  • It should support grouping servers, and applying baselines to the server groups.
  • It should support compliance checking for the baselines, not simply deploying the drivers and firmware and assuming everything is ok.  This would let you know if an admin went rogue and manually updated or downgraded firmware, drivers or tools.
  • It should support rolling back drivers, firmware or tools if it is determined to be too far ahead.
  • It should provide very verbose information about why an update process failed.
  • Bonus points
    • Support a multi-site architecture, meaning a cached copy of the repo and a local server to perform the actual update and auditing process.
    • Auto-discover servers.
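To make the baseline idea concrete, here’s a rough sketch of what a baseline definition could look like if it were just a data structure.  This is purely hypothetical wish-listing on my part, nothing any vendor ships today, and the model, OS and version strings are placeholders.

    # Hypothetical baseline definition -- what I wish a vendor management console consumed.
    $baseline = @{
        Name        = '2017-Q2-ESXi60-R730'     # release-date based identifier
        ServerModel = 'PowerEdge R730'          # server model specific
        OS          = 'ESXi 6.0 U2'             # OS specific
        Components  = @(
            @{ Device = '10Gb NIC';        Firmware = '1.2.3'; Driver = '2.3.4' },  # placeholder versions
            @{ Device = 'RAID controller'; Firmware = '4.5.6'; Driver = '5.6.7' }
        )
        Hotfixes    = @()                       # cumulative updates / hotfixes layered on top
    }

Group servers, assign a baseline, and the tool’s job becomes reporting and remediating drift against it.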

OS Vendor:

  • It should provide a comprehensive API:
    • Look up hardware and driver pairings.
    • Enabling direct download of the driver or firmware would be nice.

Conclusion:

Coming back to reality a bit, what can you do?  Use the tools you have to the best of your ability, script what you can, and manually deal with the gaps.  That said, I’m working on the auditing part of the problem, at least with VMware. I hope to have a blog post about it and a new GitHub commit in the coming month or so, so stay tuned.

Oh, one final thing you can do: start bugging your server vendor’s sales team about the issue. If enough people raise the issue, it will get the attention it desperately needs.

Thinking out loud: What HP + Nimble means to me

Disclaimer:

These are opinions, not facts and these opinions are mine, not my employers.

Introduction:

Upon receiving the news that HP was to acquire Nimble, I can’t say I was exactly thrilled.  Nothing personal against HP, they make great servers, but I like Nimble the way it is right now.  Nonetheless, I know the industry is moving in a direction where it’s either get big or get out.  There is a huge storm “cloud” looming, and if you’re an on-premises solution, it’s going to be a scary time in the coming years.

I was thinking about what would be some of the pros and cons of the HP acquisition and this is what I’ve come up with.

Pros:

  1. HP is a big company and an established one at that. We’ll focus on the pros of being big and established in this section.
    1. HP will likely have an easier time pitching Nimble into companies that would not have given them a second look. HP is established, so there’s a perception that Nimble is established.  This leads to better market penetration.  Nimble getting better market penetration means Nimble makes HP more money, and if Nimble makes HP more money, HP invests more into Nimble.  Hopefully the circle of money keeps snowballing and we all win.
      1. HP is a worldwide company, and while Nimble has done a great job so far, HP is going to take them into more countries faster than they could on their own. If you’ve had difficulty getting Nimble equipment purchased “in country”, I can see this getting easier long term.
    2. Obviously HP has more capital at their disposal than Nimble did. If invested correctly, I could see this accelerating Nimble’s innovation.
    3. HP has more purchasing power than Nimble does, this could lead to Nimble’s margins being better, which in turn may lead to us having a more affordable product (or more profit for HP).
  2. Look, we all know why tech companies pick Supermicro, and it’s not quality, it’s affordability. HP makes some pretty kick-ass hardware, so if we were to see Nimble’s hardware platform change from Supermicro to HP equipment, not only would my datacenter look a little sexier, I wouldn’t cut my fingers trying to rack Nimble anymore.
  3. If you’re a current HP customer, I can see two nice integration points.
    1. Infosight for other HP solutions.
    2. Nimble integration into OneView.

Cons:

  1. As mentioned in the pros, HP is a large, established company. While this in itself can have some pros, it also has the potential for a number of cons.
    1. Big companies tend to move slowly, with bureaucracy and over-analysis being the usual suspects. Nimble had far fewer hoops to jump through before making a decision.  Just remember, deadlines and accomplishments drift a day at a time.  Days become weeks and weeks become months, and you get the picture.
    2. Every company is profit conscious, but some larger companies will kill any sliver of waste, even at the cost of productivity or customer satisfaction. I’m not saying it will happen with HP, only that it could.
    3. While HP will open a lot of new doors for Nimble, it has the potential to close a lot of existing ones too. There are a lot of companies that have had bad experiences with HP and this may be enough for them to drop Nimble.  That said, being realistic, it seems one way or another, you’re going to be purchasing storage from some big vendor, and it may not be the same as the one you purchase servers from.
    4. If HP tries to assimilate Nimble into their ways, I can see this being bad for Nimble customers. Nimble, for example, has a great support experience.  If HP tries to force Nimble to adopt their triage and support structure, that would be a quick way to devalue Nimble.  There are other things too, like getting stuck speaking with a generic HP sales rep and sales engineer instead of having direct access to a Nimble SE and a Nimble sales person, or other typical large-company sales and support processes.
  2. HP hasn’t made a great name for themselves of late. We know they’ve split the company in half and sold off a lot of assets.  It’s hard to say if it’s too little too late, or if it was the right move made just in time.  Regardless, HP to me is a company that’s walking a fine line between being a falling giant and getting back on its feet.  If HP goes down, Nimble goes with it, and that’s not good for Nimble customers.
  3. HP isn’t exactly synonymous with innovation, at least not anymore. I fear that HP has the potential to choke the life out of Nimble.  In my opinion, 3Par was a great storage solution.  Part of me wonders, if HP couldn’t make that work, what makes them think Nimble will be any different?  Meaning, are they going to turn Nimble into the next EqualLogic?

Other thoughts:

I think deep down everyone knew Nimble wanted to get bought.  Me personally, I was REALLY hoping Cisco was going to buy Nimble.  In my opinion, Cisco + Nimble would go together like peanut butter and fluff.  HP already has a storage company that’s flailing; I don’t want Nimble to follow suit.  People like to remind me about Whiptail and how bad that was.  I look at that as a rash move on Cisco’s part (the solution was doomed to fail), but Nimble would be a pick that no one could blame Cisco for.  Best of all, Cisco doesn’t have any competing products (other than HyperFlex, but that’s a different type of solution).  This would have led to a much stronger and more united focus on pushing Nimble.  From Nimble’s view, it would have solidified them as being established (opening the closed doors), and for Cisco, it would have given them a proven storage startup that’s on fire.  Honestly, if I were Cisco’s CEO, I would be doing everything I could to steal the deal from HP.  If it was a matter of HP vs. Dell vs. Cisco, and Cisco was the one with Nimble, IMO, Cisco would crush the other two like a ten-ton hammer.

Conclusion:

This is obviously all speculation at this point, just thinking out loud.  I hope all the pros of what I pointed out occur with the acquisition and none of the cons.  I wish both vendors the best of luck, and until proven otherwise, I’m still a diehard Nimble (HP) fan.

Naming Conventions: Server Names

Introduction:

One of the things I’m struggling with is how to balance the number of posts per naming convention.  It would be easy in some ways to use a single post per server type, but it would also be overly redundant in many ways.  I originally wrote a dedicated post on SQL server naming conventions and realized that the logic behind those names is ultimately the same logic used for other server names.  With that said, I’ve decided to create a consolidated post for server names.  I’ll rehash the overall structure used for the SQL naming convention and show you how it’s reusable for other servers.

Limitations:

To begin with, 15 characters is a length limitation I would always suggest maintaining.  The only exception is the following circumstance: if you’re building a server that isn’t Windows, and will NEVER need to join a Windows domain, then and only then can you make names longer than 15 characters.  Microsoft, in their perpetual need to maintain excessive backwards compatibility, still hasn’t dropped NetBIOS from their architecture.  Even if you build a Microsoft domain running 100% on DNS resolution, they still truncate names for backwards compatibility.  I wish they would provide a name resolution compatibility mode that would in essence switch the domain from NetBIOS-supported to DNS-only, but that’s a whole different blog post.

My naming conventions are designed to scale for smaller to mid-sized companies.  If you’re dealing with tens of thousands of servers, this naming convention won’t scale to your needs.  My names are designed to give you a hint of what the server does.  When you’re at that size, you need a whole new way of dealing with server names.

Other stuff:

I used to really get hung up on server numbering.  What I mean by that is, if I was running a smaller shop with, say, two domain controllers, let’s call them DC1 and DC2 for simplicity, I would want to keep those names every time I did a major upgrade.  It led to a lot of shuffling, and a lot of time spent on something that was ultimately cosmetic and not important.  Point being, if you are or were like me, learn to let it go.  Sometimes you will have a DC3 and a DC4 when there are no DC1 and DC2.

When you’re dealing with systems that need to connect to other systems by name, learn that CNAME records can be your best friend. I strongly suggest avoiding pointing things directly at a server name if what they’re pointing at is really a generic service.  For example, it’s very common to have many things pointing at your DC’s for LDAP lookups.  Rather than pointing at DC1 and DC2, create CNAME records like “ldap1.domain.com” and “ldap2.domain.com”.  Then when you change your DC’s, you only have two records to change.
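As a quick example, on a Windows DNS server each alias is a one-liner.  This assumes the DnsServer PowerShell module (part of the DNS role / RSAT), and the zone and host names are obviously placeholders:

    # Generic service aliases that point at whatever the current DC's happen to be.
    Add-DnsServerResourceRecordCName -ZoneName 'domain.com' -Name 'ldap1' -HostNameAlias 'dc1.domain.com'
    Add-DnsServerResourceRecordCName -ZoneName 'domain.com' -Name 'ldap2' -HostNameAlias 'dc2.domain.com'

    # When DC1 is retired, repoint the alias instead of touching every application config.
    Remove-DnsServerResourceRecord -ZoneName 'domain.com' -Name 'ldap1' -RRType CName -Force
    Add-DnsServerResourceRecordCName -ZoneName 'domain.com' -Name 'ldap1' -HostNameAlias 'dc3.domain.com'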

It’s expensive, but load balancers can also help with renaming / moving things.  They help because of their ability to create a “virtual IP” and redirect that traffic to any real IP as needed.  In the case of DNS, it would enable you to move your DNS functionality to a new server without having to change the IP you have configured across all your systems.

My naming convention basics:

There are a few basics built into my naming conventions.  These basics make it easier to organize servers, and ultimately to find a server and figure out what its purpose is.  Obviously with a 15-character limit, there are going to be a lot of abbreviations.  In my opinion, so long as you’re consistent, even vague abbreviations will eventually be memorable or make sense.

Hyphens:

I use hyphens to separate many of the naming convention’s components.  I know it’s potentially wasting very precious characters, but at our scale we can mostly afford to do it, in exchange for making it easier to programmatically find servers.  You don’t need to use hyphens; I do it because it’s easier for me to script with.  Plus, they make for a consistent separator.  Ultimately though, consistency, as I stated above, is what’s important.

Location variable:

I prefer to start all names with a location variable.  At two completely different companies, I inherited naming conventions that used the company’s acronym for their primary site, and DR for their disaster recovery site servers.  There are two problems with using something this descriptive.

  1. I’ve worked for a company that flipped the location of their DR and primary site. It made for a very confusing period of time where some servers might have said “DR” but were actually now in the primary headquarters, and vice versa.
  2. If your company has more than one office or more than one DR site, the naming convention kind of falls apart.

The main goal is for the server location to be generic but consistent.  Using something like AA1 is just as likely to suffer from problem 1 above, but it does solve problem 2.  If you have multiple locations, you just keep incrementing the number.  So AA2, AA3 ….AA9, and then increment the letter to AB1.  It leaves a TON of room for different / unique locations.  Heck, maybe you don’t even need the double letter.  Math isn’t my strength, but if my calculations are correct, even something like one letter + one number gives you 234 locations (26 letters x 9 numbers), and that’s assuming you never use the number “0”; if you do use it, that’s 260 locations.

Application:

I like to use something short (as in 3 letters or less) to tell me something about the application or purpose of the server.  For example, I might use “SQL” for a SQL database server, RMQ for a RabbitMQ server, or EXM for an Exchange mailbox server.  It can get a little tricky of course, after all you have MySQL and MSSQL, but maybe that doesn’t matter.  After all, a SQL DB is a SQL DB.  The reason I keep it three letters or shorter (on average) is to leave room for the clustering naming convention (coming up).  Of course, if you’re not limited to 15 characters, you can get a lot more verbose, but at least for those of us in Windows shops, that’s tough.

Environment:

This one is short and easy, I use a few letters to denote the environment of the server.

  • P = Production
  • S = Stage
  • U = UAT
  • D = Dev
  • T = Test
  • X = Sandbox

Clustered or Standalone:

I like to denote whether this is a clustered system or a standalone system.  The standalone part is pretty easy, I just use an “S”.  Sometimes I’ll trail it with a number, like S1 or S2, to denote that the application isn’t clustered but that the systems are related (you’ll see an example later).

For the cluster part, it gets a little more involved and varies a little bit based on the application.  We start out with a simple “C” to denote clustered, but then I like to use another trailing letter / number as well.  Let’s look at a few cluster examples.

  • CN1 = Clustered Node 1
  • CN2 = Clustered Node 2
  • CDI1 = Clustered Database Instance 1
  • CDG1 = Clustered Database Group 1

Application group number:

This is the final number that really ties everything together.  I use a simple “01”, “02” or whatever number, really, to tie all clustered nodes, or even related standalone systems, together.
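To show how mechanical the convention is, here’s a quick PowerShell sketch of a helper that assembles a name from those parts and warns if you blow the 15-character limit.  The parameter names are just my own shorthand, not anything official.

    function New-ServerName {
        param(
            [Parameter(Mandatory)][string]$Location,                                       # e.g. 'a1'
            [Parameter(Mandatory)][string]$App,                                            # e.g. 'sql', 'exm', 'dc'
            [Parameter(Mandatory)][ValidateSet('p','s','u','d','t','x')][string]$Environment,
            [Parameter(Mandatory)][string]$Role,                                           # e.g. 's1', 'cn1', 'cdg1'
            [Parameter(Mandatory)][int]$Group                                              # ties related systems together
        )
        # Assemble location-appEnvRole-group, e.g. a1-sqlucn1-01
        $name = ('{0}-{1}{2}{3}-{4:d2}' -f $Location, $App, $Environment, $Role, $Group).ToLower()
        if ($name.Length -gt 15) {
            Write-Warning "'$name' is $($name.Length) characters; NetBIOS truncates past 15."
        }
        return $name
    }

    # New-ServerName -Location a1 -App sql -Environment u -Role cn1 -Group 1   returns a1-sqlucn1-01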

Putting it all together:

Here are a few practical examples to give you an idea of how it all goes together.

Example 1:  A SQL environment to support the widgets application.  This SQL environment will have a full development lifecycle, and UAT will mirror Production exactly.  We’ll be using SQL AAG’s.

  1. Dev = a1-sqlds-01
  2. Stage = a1-sqlss-01
  3. UAT =
    1. Nodes
      1. A1-sqlucn1-01
      2. A1-sqlucn2-01
    2. Clustered Named Object (management)
      1. A1-sqluc-01
    3. SQL AAG listener names
      1. A1-sqlucdg1-01
      2. A1-sqlucdg2-01
  4. Prod =
    1. Nodes
      1. A1-sqlpcn1-01
      2. A1-sqlpcn2-01
    2. Clustered Named Object (management)
      1. A1-sqlpc-01
    3. SQL AAG listener names
      1. A1-sqlpcdg1-01
      2. A1-sqlpcdg2-01

Notice how the last number glues everything together.  Also notice how everything is built off a consistent naming standard.  If we deployed a second SQL environment for a new application, we would simply bump that final number to “02”.

Example 2: How about something simple like a domain controller environment for 3 sites?  Let’s just say there will be a production environment and a test environment.

  1. Test
    1. Site a1
      1. A1-dctcn1-01
      2. A1-dctcn2-01
    2. Site a2
      1. A2-dctcn1-01
      2. A2-dctcn2-01
    3. Site a3
      1. A3-dctcn1-01
      2. A3-dctcn2-01
  2. Production
    1. Site a1
      1. A1-dcpcn1-01
      2. A1-dcpcn2-01
    2. Site a2
      1. A2-dcpcn1-01
      2. A2-dcpcn2-01
    3. Site a3
      1. A3-dcpcn1-01
      2. A3-dcpcn2-01

Again, notice how I use a single final number to glue an entire “purpose” together.  If I built a second discrete domain, I would likely change the last number to “02” which would quickly tell me that the domain controller is part of a separate domain.  Also notice how you can easily tell which node a DC is, which site a DC is in, and what its environment is.

Example 3:  A non-clustered Exchange server environment that’s serving the same company.

  1. Mailbox servers:
    1. A1-exmps1-01
    2. A1-exmps2-01
  2. CAS Nodes (load balanced)
    1. A1-excpcn1-01
    2. A1-excpcn2-01
  3. CAS VIP
    1. A1-excpc-01_vip

Here you can see how the standalone mailbox servers are working for the same purpose, but ultimately they’re not clustered together.  With the CAS servers, you can see that they are in fact clustered and that we even created a VIP DNS name that helps you understand how everything is related.

Other examples:

At this stage I’m just going to bullet a list of various options.  I’ll use a single location and a single application group number, since we’ve already gone over those.

File Servers:

  1. Clusters
    1.  Nodes
      1. a1-fspcn1-01
      2. a1-fspcn2-01
    2. Cluster Resource
      1. a1-fspc-01
    3. Clustered SMB share
      1. a1-fspcsmb1-01
      2. a1-fspcsmb2-01
    4. Clustered NFS share
      1. a1-fspcnfs1-01
      2. a1-fspcnfs2-01
  2. Standalone
    1. a1-fsps-01

CommVault:

  1. CommCell
    1. a1-cvccps1-01
  2. MediaAgent
    1. a1-cvmaps1-01
    2. a1-cvmaps2-01
  3. Virtual Server Agent (for dedicated VM’s)
    1. a1-cvvsaps1-01
    2. a1-cvvsaps2-01

DHCP:

  1. Clusters ***Note: because DHCP consumes 4 characters, I leave the “c” off the name. 
    1. a1-dhcppn1-01
    2. a2-dhcppn2-01
  2. Standalone
    1. a1-dhcpps1-01

Review: 1.5 years with MVP Systems Job Automation Scheduler (JAMS)

Introduction:

I wrote a really quick review here about MVP Systems’ JAMS product about a year or so ago (maybe a little less).  At the time, I was in search of a solution that could help me glue together several disjointed systems in a workflow.  Specifically, we were trying to integrate Veeam and CommVault backups together.  Veeam was of course doing the VM backups, and CommVault was copying the Veeam files to tape.  We’ve since moved on from Veeam, but JAMS has continued to be a vital part of our infrastructure.

What is JAMS?

The simple answer is it’s a centralized task scheduler; the long answer is it’s not only that, but a whole lot more.  This is a solution that replaces cron, Windows Task Scheduler, SQL Agent jobs, or pretty much anything else that you would normally use to schedule and execute something.

What makes up a JAMS solution?

There are four main components.

  • The JAMS server: This is a clusterable component that schedules, queues and executes any jobs or workflows.
  • The JAMS client: This is the administration GUI.  Kind of self-explanatory, but this is where you would configure all of the settings for the various jobs, and server settings.
    • For Windows, this also includes a PowerShell module for CLI administration. I’m pretty sure they have a generic API, but I never bothered to look since PS was available.
  • The JAMS agent: This is a component that is installed on a system where you want to execute jobs.  All kinds of OS’s are supported.
  • Microsoft SQL Server: Check with MVP Systems whether other DB’s are supported, but we’re an MS shop, and SQL is on their list.  This is used to store the job history, job status, and pretty much the entire server configuration.  If this goes down, you have big issues to deal with.  And yes, a clustered SQL server IS supported.

All in all, the infrastructure is pretty simple to understand and for smaller use cases, these roles can all be installed on the same system.

History:

I didn’t start out with JAMS; in fact, they were nowhere in sight when the initial problem came up.  I figured this would be a relatively trivial PowerShell solution, and started down the path of trying to write a quick workflow.  Building the logic for the workflow was actually pretty easy, but what I kept running into was the good ol’ Kerberos double-hop condition.  Never heard of it?   Read about it here.  In order to centralize the solution, I basically tried to build my own poor man’s centralized task scheduler.  To keep it central, I was utilizing “invoke-command” to execute scripts on our Veeam server and our CommVault server.  With Veeam, our database was stored on a different server, so when my “invoke-command” executed against Veeam, my credentials were never passed along to the SQL server.  I was able to work around it by using CredSSP, but it wasn’t reliable.  Sometimes it would work, and sometimes I guess it would time out or something similar (I don’t really remember, to be honest).  Then there was the issue with CommVault.  See, they used old-fashioned EXE’s to start jobs (we were on v8) from the command line.  The commands I needed to run had to be executed in sequential order.  Anyone who has worked with PowerShell’s “start-process” via invoke-command knows that the “-wait” parameter is ignored.  I don’t recall the reason, but it was lame on MS’s part.  Ultimately, it was this that was the deal breaker, and so started the search for some sort of centralized task scheduler.
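For anyone who hasn’t hit the double hop before, this is roughly the shape of the problem and the CredSSP band-aid I tried.  The server names are placeholders, and as I said, CredSSP came with its own reliability (and security) caveats:

    # The double hop: my credentials reach VEEAM01, but the second hop to the remote SQL server fails.
    Invoke-Command -ComputerName 'VEEAM01' -ScriptBlock {
        # Anything in here that needs to authenticate to a third machine (the SQL DB) falls over.
    }

    # The CredSSP workaround: explicitly allow delegation, then pass fresh credentials.
    Enable-WSManCredSSP -Role Client -DelegateComputer 'VEEAM01' -Force   # run on the admin box
    Enable-WSManCredSSP -Role Server -Force                               # run on VEEAM01
    $cred = Get-Credential
    Invoke-Command -ComputerName 'VEEAM01' -Authentication CredSSP -Credential $cred -ScriptBlock {
        # The credential is now delegated, so the hop from VEEAM01 to the SQL server authenticates.
    }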
We ultimately landed on a cheapo but well-known solution called “VisualCron”.  I’ve got nothing against the solution, but after working with it for a few days, not only was it very hacked together, it wasn’t the most user-friendly solution either.  So the search continued, and we ultimately stumbled across JAMS.  It took a lot of creative searching to find them, but I’m glad we did.  After installing the trial, we knew it was the solution we were looking for, and the rest, as they say, is history.

The pros:

  • Easy solution: Pretty easy to install the solution and understand the components. Unlike some other solutions we’ve installed, JAMS takes care of installing any prerequisites and also has an easy-to-understand architecture.
  • You get tech support: Normally not something to write home about, but we leveraged their support quite a bit at first, and they were generally helpful.  As simple as JAMS is, it can do a lot of stuff, and that’s where support can be (and is) a huge help.  I remember one part of a solution where we were trying to pass a variable from one job to another.  Called up support, and sure enough, JAMS could do it and they showed us how.  How about bulk creating a bunch of jobs via PS?  Yep, support had an example of that too.
  • The GUI: This is one where I have pros and cons.   We’re in the pros section, so that’s what I’ll focus on here.
    • I’ve never worked with a GUI that was capable of bulk edits, but JAMS is and it rocks. Just imagine wanting to change the start time on 60 jobs.  You could write a script to do it, or you could highlight the 60 jobs in a folder, right click and basically change the value of one field (time) to another value.  Then BAM! It goes and changes the time for all highlighted jobs.  Pretty much any column you can add to the GUI has this functionality and it rocks.
    • Easy to see all jobs scheduled to run, running or failed in one view.
    • It keeps a detailed log of each job execution. If you write output to the host (think write-host in PowerShell or echo in batch), that output gets logged to a file and stored for historical purposes.  So as long as your script has verbose output, you’ll know exactly what happened in your job.
    • Sort of related to the above, it keeps a history of all executed jobs and their final status. It also tracks things like when it ran, how long it ran, how many resources it consumed, etc.
    • They have some pretty neat dashboards (once you figure them out). There are a few cool built-in ones (like the projected schedule) too.
    • Last but not least, it’s a pretty easy GUI to use. I won’t say it doesn’t have any learning curve, but I think the learning curve is really more related to the solution than the client itself.
  • Scripting engines: The agent can execute all kinds of scripts.
    • Powershell
    • Batch
    • Bash
    • SSH
    • T-SQL
  • Agent OS Support: The agent can be installed on different OS’s, so this isn’t a 100% windows only solution.
  • Workflows (setups): You can build “setups” (workflows) that tie jobs together. The jobs themselves can run on completely different systems.  In our case, we had a setup with a “job” that ran on a Veeam server and a different job that ran on the CV server.  The setup was configured to wait until the first job completed successfully before moving on.
  • Job Queueing: It supports queueing jobs. Probably not an issue for many folks, but we used it to limit the number of tape backups running in parallel.  What’s great is each “job” in JAMS can share a queue or have its own queue.  This allows a setup to execute the first job (as an example) and, if needed, the second job will queue.  We typically had 50 setups running in parallel, but only 4 tape jobs that were allowed to run in parallel.  JAMS would execute all 50 setups in parallel, but when it came time to run the tape portion of a setup, the tape jobs would go into a queue and trickle out as others completed (or failed).  This didn’t stop the first jobs (the backup itself) from completing, so it ultimately kept things moving at a great pace.
  • PowerShell: Being able to admin JAMS through PS is a huge win in my book.  You can create, modify and delete jobs, setups, queues, etc.  Everything in the GUI can be done in PS.  It’s sad that in 2017 I even have to list this as a pro of a solution. Nonetheless, it’s not as common as it should be, and it’s a win for JAMS.
  • Different Licensing: With a lot of solutions, there’s only one licensing strategy. I found that JAMS has several, to accommodate differing needs and purposes.
  • Sales team: The sales team I worked with was friendly, knowledgeable and not pushy in any kind of way.  Additionally, what I think is worth noting, is while it felt like we were shopping for a Ferrari, they understood we were on a Corvette budget, and worked with us to find a licensing model (and some pricing breaks) to let us drive home in a solution we really wanted.

The cons:

  • Price: I’m not saying it’s overpriced, all I’m saying is it’s not cheap.  I would love to use this solution for my whole environment, but it’s not cheap to do that.  I’m not saying they won’t work with you (they will), but to scale the solution, you will be digging deeper into your wallet.
  • The GUI: I think the GUI has some great design characteristics, but I also think it has some flaws too.
    • They recently updated the GUI look. I’m personally not a fan.  It’s a matter of opinion of course, but I find it harder to see what I need to see now.
    • I don’t like the way they separate jobs from setups. I wish they just used a different icon, or a value in a field, to separate them.  There are plenty of times I click on a folder and forget that I’m in the “setup view” when I’m looking for a job.
    • They don’t support right click for certain job management features. I intuitively want to right click in the jobs window and select “new job” or something related, but that’s not the way the GUI is designed.
    • When you bulk submit jobs, they ask you to confirm each one. That means if you selected 25 jobs, you’re clicking “submit” 25 times afterwards.
  • Their security design: I found that their security model didn’t work quite like one might think.  I remember working with a tech to do something simple like letting our DBA’s manage jobs (execute and read), and something as simple as that required what seemed like a million hoops to jump through.  Ultimately, IIRC (it’s been a while), we ended up needing to grant them more rights than I would have wanted in order to accomplish what seemed like a trivial task.  I gave up on it because I didn’t want to create a solution that was going to be too complex to manage.
  • Overlapping job detection: I remember when I first started with their solution, we ran into a few cases where jobs (or setups) were overlapping with themselves.  Meaning, Job A from Monday night was still running, and Tuesday night’s job started up and began running.  When I asked support about this, they handed me a script that would nuke the Tuesday job, but that ultimately didn’t solve my issue of needing Tuesday’s job to just wait.  I ended up writing a pre-check job that detects if another instance of the same job is running and, if so, goes into a loop where it checks every minute, waiting for the previous job to complete (roughly the pattern sketched after this list).  What sucks about this, and the script they gave me, is that every job I launch with a pre-check attached ends up burning an extra job against my job count.  To me, this just seems like something that should be built into the solution.
  • Maintenance Mode: They don’t seem to have a maintenance mode option.  What I mean by that is being able to put JAMS into a paused state.  I think you can stop a service on the Windows hosts, but honestly that’s a hack.  They should just have a maintenance mode option built right into the GUI.   I could see a few options: queue any new jobs that start; let existing jobs finish, but queue anything new; or don’t let any jobs start at all.  Bonus points if this could be done at a folder level.
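For what it’s worth, the pre-check job I mentioned above was conceptually about this simple.  This is a generic sketch using a lock file rather than the actual JAMS query I used, since the exact cmdlets aren’t the point; the path is hypothetical and the folder is assumed to already exist.

    # Generic "wait for the previous run to finish" pre-check. My real version asked JAMS whether
    # another instance of the same job was running; a lock file shows the same idea without JAMS.
    $lock = 'C:\JobLocks\nightly-tape-copy.lock'   # hypothetical path, one lock file per job

    while (Test-Path $lock) {
        Write-Output 'Previous run still active, waiting 60 seconds...'
        Start-Sleep -Seconds 60
    }

    New-Item -Path $lock -ItemType File -Force | Out-Null
    try {
        # ... the actual job body goes here ...
    }
    finally {
        Remove-Item -Path $lock -Force   # always release the lock, even if the job fails
    }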

Conclusion:

Ultimately, after living with JAMS for almost 1.5 years, I think they really rock as a solution.  I can’t say I have any experience with other enterprise job scheduling solutions, but my overall experience with JAMS has been a pleasure.  No solution is perfect, and theirs is no exception, but the great news is they have a solution that is ultimately awesome, with a few negatives, which is a far cry from other vendors’ solutions I’ve used.  My suggestion: if you’re looking for something to replace SQL Agent jobs, Task Scheduler, cron, or any other isolated scheduler, give them a look.  I think you’ll be pleased.