Thinking out loud: Hyper converged storage's missing link

Introduction:

In general, I'm not a huge fan of hyper converged infrastructure. To me, it's more "hype" than substance at the moment. It was born out of web scale infrastructure like Google, Facebook, etc., and IMO that is still the area where it's best suited. The only enterprise layer where I see HCI being a good fit is VDI; other than that, almost every other enterprise workload would be better served by new school shared storage. I could probably go into a ton of reasons why I personally see shared storage still being the preferred architecture for enterprises, but instead I'll focus on one area that, if adopted, might change my view (slightly). You see, there is a balance between the best and good enough. Shared storage IMO is the best, but HCI could be good enough.

What’s missing?

What is the missing link (pun intended)? IMO, it's external / independent DAS. Can't see where this is going? Follow along and I'll explain why I think external DAS would make hyper converged storage good enough for almost anyone's environment.

Scaling Deep: Right now the average server tops out at 24 2.5” drives, and fewer for 3.5” drives. In a lot of larger shops, that means running more hosts just to meet your storage requirements, and that comes at the cost of paying for more CPU, memory and licensing than you should have to. Just imagine a typical 1RU R630 + a 2U 60-drive JBOD! That's far more storage than you can fit inside a single host today, and it would only consume one more rack unit than a typical R730. Add to this that, theoretically speaking, the number of drives you could attach to a single host goes well beyond a single JBOD. A quad-port SAS HBA could have four 60-drive enclosures attached, and that's just one HBA.

Storage independence: Having the storage outside the server also makes that storage infinitely more flexible. This is true even when you're building vendor-homogeneous solutions. Take Dell for example: typically speaking, their enclosures can be moved between different server generations. With the storage stuck inside the chassis, as it is today, it gets really messy (support wise), and in many cases isn't doable, to move the storage from one chassis to another, especially if you're going from an older generation server to a newer one.

Adding to this, depending on your confidence, white boxing also allows you to cut the server vendor out of the costliest part of the solution, which is the disks themselves. Go with an enclosure from someone like RAID Inc., DataOn, Quanta QCT, Seagate, etc., add in a generic LSI (sorry Avago, oh sorry again, Broadcom) HBA, and now you have a solution that is likely good enough supportability wise. JBODs tend to be pretty dumb and reliable, which just leaves the LSI card (a well known, established vendor) and your SSDs / HDDs.

Why would you want to move the storage anyway? Simple: I'd bet a nice steak dinner that you'll want to upgrade or replace your compute long before you need to replace your storage. If you're simply replacing your compute (not adding a new node but swapping it), then moving a SAS card + DAS is far more efficient than re-buying the storage, or moving the internal storage into a new host (remember, warranty gets messy). Simply vacate the host like you would with internal storage, shut down, rip the HBA out, swap the server, put the existing HBA back in, done.

If you're adding a new host, depending on your storage, you may have the option of buying another enclosure and spreading the disks you already have evenly across all hosts again. So if, for example, you had 50 disks in each of 4 hosts (200 disks total) and you add a fifth host, one option would be to simply remove 10 disks from each current node and place them in the new node. Your only additional cost is the JBOD enclosure, and you keep your current investment in disks (with flash, that would be the expensive part).
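
Just to make the rebalancing math explicit, here's a minimal sketch; the numbers are the hypothetical ones from the example above, nothing more:

```python
def disks_to_move_per_host(disks_per_host: int, current_hosts: int, new_hosts: int) -> int:
    """How many disks each existing host gives up when new hosts are added,
    assuming the total disk count stays the same and gets spread evenly."""
    total_disks = disks_per_host * current_hosts
    target_per_host = total_disks // (current_hosts + new_hosts)
    return disks_per_host - target_per_host

# Example from above: 4 hosts with 50 disks each, adding a fifth host.
print(disks_to_move_per_host(50, 4, 1))  # -> 10 disks pulled from each existing node
```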

Mix and match 3.5 / 2.5 drives: Right now with internal storage, you are either running a 3.5” chassis, which doesn't hold a lot of drives but CAN support 2.5” drives with a sled, or you are running a 2.5” chassis, which guarantees no 3.5” drives. External DAS could mean one of two options:

  1. Use a denser 3.5” JBOD (say 60 disks) and use 2.5” sleds when you need to.
  2. Use one JBOD for 3.5” drives and a different one for 2.5” drives.

Again, it comes down to flexibility.

Performance upgrades: Now this is a big "it depends". Hypothetically, if there were no SW imposed bottlenecks (which there are), one of your likely bottlenecks (with all flash at least) is going to be either how many drives you have per SAS lane, or how many drives you have per SAS card. For example, if your SAS card is PCIe 3.0 internally but the server's PCIe bus is 4.0, there's a chance you could upgrade to a newer / better storage controller card. More so, even if you were stuck on PCIe 3 (as an example), there would be nothing stopping you from slicing your JBOD in half and using two HBAs to double your throughput. Before you even go there: yes, I do know the 730xd has an option for two RAID cards, glad you brought that up. Guess what, with external DAS you're only limited by your budget, the number of PCIe slots you have and the constraints of your HCI vendor. I, for example, could have 4 SAS cards and 2 JBODs, each partially filled and each sliced in half. You don't have that flexibility with internal storage.
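
To put rough numbers behind the "slice the JBOD in half" idea, here's a back-of-the-envelope sketch; the per-drive throughput, lane count and drive count are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: can a single HBA keep up with a 60-bay all-flash JBOD?
PCIE3_GBPS_PER_LANE = 0.985   # ~1 GB/s of usable bandwidth per PCIe 3.0 lane
HBA_PCIE_LANES = 8            # typical x8 SAS HBA (assumption)
SSD_GBPS = 0.5                # ~500 MB/s per SATA/SAS SSD (assumption)
DRIVES = 60

hba_ceiling = PCIE3_GBPS_PER_LANE * HBA_PCIE_LANES   # ~7.9 GB/s per HBA
drive_aggregate = SSD_GBPS * DRIVES                  # ~30 GB/s of raw drive throughput

print(f"one HBA tops out around {hba_ceiling:.1f} GB/s")
print(f"{DRIVES} SSDs could push roughly {drive_aggregate:.1f} GB/s")
print(f"HBAs needed to avoid the choke point: {drive_aggregate / hba_ceiling:.1f}")
```

In other words, the drives can outrun one card by a wide margin, which is exactly why being able to split a JBOD across two (or four) HBAs matters.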

In the case of white boxing your storage, this also means that, to the extent of the HCL, you can run what you want. So if you want to use all Intel DC S3700s, you can. Heck, they're even starting to make JBOF (just a bunch of flash) enclosures for NVMe, which again would be REALLY fast.

Conclusion:

I say external DAS support is the missing link because it is what would allow HCI to offer scaling flexibility similar to what already exists with SAN/NAS. I still think the HCI industry is at least 3 – 5 years out from matching the performance, scalability and features we've come to expect from enterprise storage, but external storage support would knock a big hole in one of the bigger facets of the SAN/NAS scalability win.

Wish list: Nimble Storage (2016)

Introduction:

These are just some random things that I would love to see implemented by Nimble Storage in their solution. I don't actually expect most of this stuff to happen in a year, but it will be interesting to see how many do. Some are more realistic than others, of course, but part of the fun in this is asking for some stuff that is perhaps a bit of a stretch.

Hardware:

  • Better than 10G networking: Given that SSDs are now becoming the storage of choice, it's more than likely the network that will bottleneck the SAN. I understand Nimble offers 4 10G links per controller, but that is less ideal than two 40G links. 40G is now available in all respectable servers and switches; I see no reason why Nimble shouldn't offer the same.
  • Denser JBODs: I think Nimble has decent capacity per rack unit, but they could do a lot better. Modern JBODs offer as many as 84 disks in 5U or 60 disks in 4U for 3.5 inch drives, and 60 disks in 2U for 2.5 inch drives. I would love to see Nimble offer better density. Just like they do now with SSD cache, I see no reason why the JBOD needs to be filled at time of purchase. Make 15 disk packs available that are installable by customers, then simply add an "activate" option for the new disk span.
  • Cheap and Deep: Nimble now has a great storage line for most tier 0 – 3 workloads, but I still feel they're missing one additional, important tier: backup. Yes, you could use their hybrid arrays for backup, but at the current cost structure, and based on the problem I raised above (rack units), I don't feel Nimble's CS series is a prudent use of resources for backup. Most backup data is already compressed, and random IO tends not to be the constraining factor. I'd love to see an array designed for high throughput sequential workloads, with lots of non-compressed usable capacity. Personally, I'd even be OK if they offered a detuned CS series with caching and compression disabled to ensure the array is used for its designed purpose. What I'm NOT asking for is yet another dedupe target.
  • More scalable architecture: Just my opinion, but I think the whole idea of keeping the controllers in the chassis, while great for smaller workloads, doesn't make as much sense for larger storage implementations. There is a reason EMC, NetApp and others have external controllers: it allows much greater expansion and, theoretically, better performance. I get the idea of having a series that can go from the smallest to the largest with a simple hot swappable controller, but with external controllers that would be doable as well, and it would more than likely work across completely different generations. This would also allow more powerful controllers (quad socket, as an example). If CPUs are truly the bottleneck, then why limit the number of cores and sockets you have access to with a chassis that restricts such things?
  • All NVMe array: I know the tech is still new, but it would be awesome to see an NVMe array in the near future.

Hardware / Software:

  • Storage Tiers: It's either all flash or hybrid, but why not offer both under the same controller? Simply rate the controller based on peak IOPS, and then let us pick what storage tier(s) we want to run underneath. Any spinning disk would go into a hybrid tier and any flash would go into a flash tier. I'm not even looking for automated tiering, just straight up manual tiers.

Software:

  • Monitoring: I would really like to have more of the metrics that are typical on other arrays available via SNMP: queue depth, average IO size, per-disk metrics, CPU, RAM, etc.
  • File Protocols: Now that you have FC + iSCSI, why not add SMB + NFS as well? I would love to see Tintri-like features in Nimble.
  • Replica destinations: I’ve been asking for more than one destination for 4 years, so here it goes again.  I would like to replicate a volume to more than one destination.  If I have a traditional failover cluster, I want to replicate it locally and to DR via a snapshot.  Right now, that’s not doable TMK.
  • Full Clones: Linked clones are great for cases where you think you're going to get rid of a volume after X period of time, but they really don't make sense long term. I would love an option to create full clones, or even promote existing linked clones to full clones. Maybe I want to get rid of the parent volume and keep the clone; I can't do that right now.
  • Full featured client agent: I would LOVE the ability for a client to do the following actions. This would allow my DBAs to do their own test / dev refreshes without me needing to delegate them access to the Nimble console.
    • Initiate a new snapshot (either at a volume group level or an individual volume)
    • Pick from a library of existing snapshots to mount
    • Clone a volume and mount it automatically
    • Delete a volume / snapshot / clone.
    • Swap a volume group (this would be a great workflow feature for UAT refreshes).
  • Virtual Appliance GA'ed: How about letting me have access to a crippled virtual appliance so I can test out things that might be a little risky to try in prod? Everyone in IT knows it's not a good practice to test in prod, and yet with storage we have no choice but to do exactly that.
  • Better Volume Management: If there is one model that everyone should try to duplicate when it comes to management, it's Microsoft's ACL propagation and inheritance structure. I'm tired of going into individual volumes to change things like ACLs. Even using volume groups, while it helps, doesn't eliminate the need. I would much rather group volumes in folders (or, even better, tags) and apply things like protection policies and ACLs globally, rather than contend with trying to script static entries. Additionally, being able to use tags is, IMO, far smarter than using folders. Folders are rigid, tags are flexible. Basically, a tag can do everything a folder can do, but the reverse is not true.
  • Update Picker: I like running the latest version of NOS, but for consistency's sake, I also like having my SANs on the same revision. There have been times where we were in the middle of a series of SAN upgrades and you released a new minor version. When I go to download the NOS version for the SAN I'm updating, it no longer matches the rev on the SANs I've already upgraded. So ultimately I end up having to call support, request the older rev, and download it manually.
  • Centralized Console: It would be great if you guys came out with something like a vCenter Server for Nimble: a central on-premises console that I could use to do everything across all my SANs. This would mitigate the need to use groups for management reasons, and instead would allow me to manage every SAN in one place. I could easily see this console being where things such as updates are pushed out, new volume ACLs are created, performance monitoring is done, etc.
  • Aggressive Caching in GUI: Offer an option right in the GUI to choose between the default (random only), sequential and random, or pin to cache.
  • Web UI improvements:
    • Switch to a nice and fast HTML5 console.
    • Show random / sequential and write / read in the timeline graph, and show it as a stacked graph instead of a line graph.
    • Don’t show cache misses for sequential IO.
    • Show CPU usage
    • Show the top 5 volumes for real time, the last minute, last 5 minutes, last 15 minutes, last hour and last day, for the following metrics:
      • Cache misses
      • Total IOPS
      • Total throughput
      • CPU usage
  • Use GUIDs, not friendly names: I would actually like to see Nimble switch to using a generic GUID for the volume ID, and then have a simple friendly name associated with it. There are times where I've wanted to change my naming convention, but doing so would require detaching the volume from the host it's presented to.
  • Per Volume / Per Host MPIO policy: Right now it seems the entire array needs to be enabled for intelligent MPIO. I would like to see an additional option to only enable it for certain initiator groups or certain volumes.
  • Snapshot browser: I would love a tool that would let us open a snapshot through the management interface and browse for files to recover, rather than having to mount the snapshot to an OS. Even better would be if I could open a VMware snapshot and then open a VMDK as well.

Problem Solving: CommVault tape usage

Introduction:

I hate dealing with tapes, pretty much every aspect of them.  The tracking of them is a PITA, having to physically manage them is a PITA, dealing with tape library issues is a PITA, dealing with tape encryption is a PITA, running out of tapes is a PITA, dealing with legal hold for tapes is a PITA, and I could keep going on with the many ways that tape just sucks.  What makes matters worse is when you have to deal with MORE tapes.

Now that you know tapes are one of my personal seven levels of hell in IT, you'll know why I put a bit of time into this solution. Anything I can do to reduce the number of tapes getting exported every day ultimately leads to some reduction on the tape PITA scale.

The issue:

To provide a better understanding of the issue at hand: for years I've been seeing way too many tapes being used by CV. We'd kick out tapes that had 5% or 10% consumption, and the number of tapes at that level of consumption varied based on what phase of our backup strategy we were in and what day of the week it was. It could be as few as 4 partially filled tapes, or as many as 10+ tapes that weren't filled all the way. If the consumed data should fit on 16 tapes and we're kicking out 26 tapes, that's a problem IMO. I'm sure many of you out there have contended with this in CV specifically, and I'd bet those of you using other vendors' products have run into it too. I'm going to first explain why the problem occurs, and then I'll go over how I've reduced most of the waste.

The Why?

In CV, we have storage policies; short of going into a full explanation of what they are for those not familiar with CV, just think of one as an island of backup data. That island of data doesn't co-mingle with other islands of data on disk, and tape is no exception. What that means is that when you back up data to a storage policy and want to copy it to tape, the data being copied will reserve the entire tape it's written to. In turn, each storage policy reserves its own unique tapes so that data does not co-mingle. This means for every storage policy you have, you're guaranteed at least one unique tape at a minimum.

Now, each storage policy can have a number of streams configured. To keep things simple, let's ignore multiplexing for now. When a storage policy has a stream limit of 1, only 1 tape drive will be used; when it has a stream limit of 4, 4 tape drives will be used. As you copy data to tape, you normally have more than one stream's worth of data; you probably have at least one per client in your environment (and likely many more than that). This is a good thing: more streams means we can run copy operations in parallel. In the 4-stream example, that means we can use 4 tape drives in parallel to copy data for that storage policy. What it also means is that, depending on circumstances, we could end up with 4 tapes not being filled all the way. Streams are optimized for performance, NOT for improving tape utilization.

Now imagine you have more than one storage policy, let's say 4, each being its own island and each with a stream limit of 2. That means you could end up with up to 8 tapes not being fully utilized. I'm also ignoring for now that in CV you can separate incrementals and fulls into different storage policies, which exacerbates the problem further (taking one island and making it two).
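
To make the worst case concrete, here's a minimal sketch of the arithmetic; the policy and stream counts are just the hypothetical ones from the examples above:

```python
def worst_case_partial_tapes(streams_per_policy):
    """Each storage policy is its own island, so every stream it runs to tape
    can leave behind its own partially filled tape."""
    return sum(streams_per_policy)

# 4 storage policies, each with a stream limit of 2 -> up to 8 partially filled tapes.
print(worst_case_partial_tapes([2, 2, 2, 2]))      # 8

# Split fulls and incrementals into separate policies and the islands double.
print(worst_case_partial_tapes([2, 2, 2, 2] * 2))  # 16
```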

In our case, we have 4 storage policies, and we had already gone through the process of merging our Fulls and Incs into a single storage policy to consolidate tapes. We have a total of 6 tape drives, which means if we just configured the storage policies to fight over the tape drives at 6 streams each, we could in theory end up with 24 partially filled tapes. We're smarter than that of course, so that wasn't our problem. Our problem was finding the right balance between how many streams a storage policy needed to copy all of its data within our window, and not setting it so high that we ended up wasting tape. Pre-solution, we almost always had 4 – 6 tapes that were wasted, as in 100GB consumed on a 2000GB tape. It was annoying and wasteful.

Solution, problems again, improved solution:

There are two main components to the solution.

  • Scripting storage policy stream modification via task scheduler (MVP JAMS in our case).
  • CommVault introducing Global Tape Policies in v11
    • This allows tapes to be shared, no longer residing on an island as mentioned above. So storage policies 1, 2, 3 and 4 can all share the same tape. Way more efficient.

In our case, when we saw the global tape policy, it was like a halo of light and angels singing going off in our heads. This was it: our problems were FINALLY solved. After going through the very tedious task of migrating to this solution, we found that we were still using 4 – 6 more tapes a day than we needed. The problem was not that data wasn't co-mingling; it was. No, the problem was that we had set the global tape policy to 6 streams, and every day it was using 6 tape drives for backups. At first we tried to solve the problem by limiting the aux copy streams via a scheduled task in CV (start the job with 1 stream only, as an example), but we had 4 storage policies, so that only reduced the tape usage to 4. The problem, again, was that each storage policy was scheduled and run in parallel. So while we restricted any one storage policy, ultimately we were still letting more tape drives be used than needed, and in turn more tapes than needed. We had set 6 streams because we wanted to make sure our FULL jobs had enough tape drives to complete over the weekend.

At this stage, I came to the conclusion that we needed a way to dynamically control the streams for the global tape policy, so that during weekdays it was restricted to 1 tape drive (all we needed), and on the weekend we could start out with 6 and slowly ramp back down to 1, hopefully filling our tapes more completely. With a bit of research and some discussions with CV, I found out that they have a CLI option for controlling storage policy streams (documented at https://documentation.commvault.com/commvault/v10/article?p=features/storage_policies/storage_policy_xml_edit.htm). Using my trusty scheduling tool, I set up a basic system where on Sunday @4PM we set the streams to 1, on Friday @4PM we raise them to 6, and on Saturday @7AM we drop them to 2. This basically solved our problem, and I'm happy to say that on weekdays tapes are filled as much as possible (1 – 2 tapes depending on which client ran a full), and on the weekend 2 – 4 tapes are still being used. I'm still tuning the whole thing for the fulls (it's a balance of utilization and performance), but it's better than it's ever been. It's also worth noting that we went back and modified our aux copy schedules to use all available streams, since we now choke-point everything at the global tape policy. This allows any storage policy to go as fast as possible (although potentially blocking other ones).
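
For anyone wanting to script something similar, here's a rough sketch of the kind of wrapper a scheduler (JAMS, Task Scheduler, cron, whatever) could call. The CommServe name, user, file paths and pre-built XML request files are all placeholders; the XML itself would be authored per the storage policy XML edit documentation linked above, so treat this as an illustration rather than a copy of our production job:

```python
#!/usr/bin/env python3
"""Hypothetical wrapper to flip the global tape policy's device streams
between weekday and weekend values via CommVault's qoperation CLI."""
import subprocess
import sys

# Pre-built XML request files, one per stream count, authored per the
# storage policy XML edit doc referenced above (paths are placeholders).
XML_BY_STREAMS = {
    1: r"C:\cv-scripts\set_streams_1.xml",   # weekday setting
    2: r"C:\cv-scripts\set_streams_2.xml",   # Saturday morning ramp-down
    6: r"C:\cv-scripts\set_streams_6.xml",   # Friday evening, ahead of the fulls
}

def set_global_tape_policy_streams(streams: int) -> None:
    xml_file = XML_BY_STREAMS[streams]
    # Authenticate to the CommServe (server name and account are placeholders).
    subprocess.run(["qlogin", "-cs", "commserve01", "-u", "backupadmin"], check=True)
    try:
        # Submit the pre-built XML request that sets the device stream count.
        subprocess.run(["qoperation", "execute", "-af", xml_file], check=True)
    finally:
        subprocess.run(["qlogout"], check=True)

if __name__ == "__main__":
    # e.g. Sunday @4PM -> "set_streams.py 1", Friday @4PM -> "set_streams.py 6"
    set_global_tape_policy_streams(int(sys.argv[1]))
```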

It's a hack, no doubt. IMO, CV should build this concept into their storage policies: basically a schedule window to dynamically control the stream count (queue depth). For now, this is working well.