
Problem Solving: WSUS failing for Windows 10 with error 8024401c

Hi Folks,

After updating WSUS to support Windows 10's newer update format, we noticed that our Windows 10 clients weren't working. The error they were getting was 8024401c whenever we checked for updates (post WSUS upgrade).  Initially we thought it was related to the WSUS upgrade, but we found out that most of our systems hadn't been updating for a while, so we moved on to troubleshooting the client further.  We found that the GPO "Do not connect to any Windows Update Internet locations" had been configured.  After doing some digging we determined that it was put in place to prevent our clients from downloading updates from MS directly, which was originally happening.  The weird thing to me was why our clients were going to MS at all.  We have WSUS; that is the point of WSUS.  Disabling the setting resulted in us getting updates, but now they were coming from MS directly and not WSUS.  Enabling it or setting it to "not configured" resulted in the lovely error.

An example snippet of the log file is below.

2017/06/13 10:08:31.2836183 676 10488 WebServices WS error: There was an error communicating with the endpoint at ‘http://%ServerName%/ClientWebService/client.asmx’.
2017/06/13 10:08:31.2836186 676 10488 WebServices WS error: There was an error receiving the HTTP reply.
2017/06/13 10:08:31.2836189 676 10488 WebServices WS error: The operation did not complete within the time allotted.
2017/06/13 10:08:31.2836280 676 10488 WebServices WS error: The operation timed out
2017/06/13 10:08:31.2836379 676 10488 WebServices Web service call failed with hr = 8024401c.

After a ton of Google Fu, I stumbled onto this article: https://blogs.technet.microsoft.com/windowsserver/2017/01/09/why-wsus-and-sccm-managed-clients-are-reaching-out-to-microsoft-online/  Before you start reading, make sure you're relaxed and read through it carefully, because the answer is there, but you have to make sure you're not just skimming.

Here is the main section and highlighted points that you need to glean from that article.


Ensure that the registry HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate doesn’t reflect any of these values.

  • DeferFeatureUpdate
  • DeferFeatureUpdatePeriodInDays
  • DeferQualityUpdate
  • DeferQualityUpdatePeriodInDays
  • PauseFeatureUpdate
  • PauseQualityUpdate
  • DeferUpgrade
  • ExcludeWUDriversInQualityUpdate

What just happened here? Aren’t these update or upgrade deferral policies?

Not in a managed environment. These policies are meant for Windows Update for Business (WUfB). Learn more about Windows Update for Business.

Windows Update for Business aka WUfB enables information technology administrators to keep the Windows 10 devices in their organization always up to date with the latest security defenses and Windows features by directly connecting these systems to Windows Update service.

We also recommend that you do not use these new settings with WSUS/SCCM.

If you are already using an on-prem solution to manage Windows updates/upgrades, using the new WUfB settings will enable your clients to also reach out to Microsoft Update online to fetch updates, bypassing your WSUS/SCCM end-point.

To manage updates, you have two solutions:

  • Use WSUS (or SCCM) and manage how and when you want to deploy updates and upgrades to Windows 10 computers in your environment (in your intranet).
  • Use the new WUfB settings to manage how and when you want to deploy updates and upgrades to Windows 10 computers in your environment directly connecting to Windows Update.

So, the moment any one of these policies is configured, even if it is set to "disabled", a new behavior known as Dual Scan is invoked in the Windows Update agent.

When Dual Scan is engaged, the following change in client behavior occurs:

  • Whenever Automatic Updates scans for updates against the WSUS or SCCM server, it also scans against Windows Update, or against Microsoft Update if the machine is configured to use Microsoft Update instead of Windows Update. It processes any updates it finds, subject to the deferral/pausing policies mentioned above.

There are also some Windows Update GPOs that can be configured to better manage the Windows Update agent; I recommend you test them in your environment.


After reading that, I went back into our GPOs and did some more digging, since all our WSUS client settings are defined in GPO.  It turns out we had the "Do not include drivers with…" setting enabled.  So ultimately it was this setting that led to the whole "Dual Scan" mode being enabled, which led to us downloading updates from MS directly (needed to happen anyway), which led to us disabling the other setting, which led to WSUS not being used at all.  After setting both settings to not configured and doing a lot of gpupdates / restarts of the Windows Update service, I eventually went from getting that error to everything being back to normal.  That is, my client connecting to WSUS and downloading updates the right way.
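
For what it's worth, here's a quick PowerShell sketch (my own, not from the article) that checks a client for the values listed above before you go chasing GPOs.  Nothing here changes anything; it only reads the policy key.

# Check a Windows 10 client for the WUfB values that can trigger Dual Scan.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
$dualScanTriggers = 'DeferFeatureUpdate','DeferFeatureUpdatePeriodInDays',
                    'DeferQualityUpdate','DeferQualityUpdatePeriodInDays',
                    'PauseFeatureUpdate','PauseQualityUpdate',
                    'DeferUpgrade','ExcludeWUDriversInQualityUpdate'

$values = Get-ItemProperty -Path $key -ErrorAction SilentlyContinue
foreach ($name in $dualScanTriggers) {
    if ($values -and $null -ne $values.$name) {
        Write-Warning "$name is set to $($values.$name) - this can trigger Dual Scan"
    }
}

# After the offending policies are cleaned up (via GPO), nudge the client:
# gpupdate /force; Restart-Service wuauserv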

The lesson learned, besides not just randomly enabling WSUS settings, is that Microsoft, in my not so humble opinion, needs to do a better job with the entire WSUS client control story.  To be blunt, this is just stupid behaviour.    What I would suggest MS do is as follows.

  1. For Pete's sake, have a damn setting that controls whether we want updates via WSUS, WUfB, or neither.  I mean, it seems like such an obvious thing.  Clearly the implied settings conflict.  If you have to write a damn article explaining all the gotchas, you failed at building a user-friendly solution.
  2. Group settings that are Windows Update for Business specific in their own damn GPO folder and their own damn reg key.  This way there's no question these are for WUfB only.  Similar to WSUS.
  3. If WSUS is enabled, ignore WUfB settings and vice versa.

Anyway, hope that helps any other poor souls out there.

Thinking out loud: Why do server vendors still struggle with driver and firmware management?

History:

Let me give you a little back story before digging into the meat of this post.  My team and I make a very concerted effort to keep our servers' firmware and drivers updated.  We've gone so far as to purchase software from Dell, implement a process for how firmware / drivers are to be updated, and ensure that it's routinely done every quarter.  We do this because in general it's a best practice, but also because we've run into too many occasions where troubleshooting with a vendor stops (if it ever starts) very quickly if the drivers / firmware aren't recent.  In essence, we're doing our best to be diligent and proactive about keeping our servers healthy, secure and updated.

Late last year we ran into two issues, both of which are related to drivers / firmware.

  1. A Broadcom NIC causing a purple screen of death (PSOD) in ESXi. This was a server that was freshly rebuilt, with all drivers (or so we thought) and firmware updated.  It turns out the driver we were running was more than two years old, and the PSOD we were hitting was a resolved issue in a newer driver.
  2. An Intel X710 quad port 10Gb NIC causing packets to black hole for certain VMs on the same VLAN. Again, these were new hosts that were patched, firmware updated and in theory up to date.  This issue is what really triggered us to start evaluating other server vendors and their solutions.

 

Of those two issues above, only one was solved with a simple driver update, and the other, we just gave up on and switched to a different NIC (x520).

The Issue:

If you can't see where this post is going already, let me lay it out.  Server vendors still can't properly manage their own drivers, firmware and vendor-specific software. I know what you're thinking: you have tools that the vendor provided, you're using them and you're fine.  I hate to be the bearer of bad news, but I doubt it.  We just finished a rigorous evaluation of Dell, HP and Cisco.  None of them have a complete solution.  Don't get me wrong, some of them are getting there, but no one has the problem solved.  If you're wondering what the specific problems are, see the bullets below.

  • Server vendors would like you to keep your firmware, drivers and tools up to date.
  • OS vendors (and sometimes server vendors) require that the driver and firmware have a certified pairing. It is NOT good enough to simply have the latest driver and the latest firmware.  This of course may vary slightly depending on the server and OS vendor.  VMware, as an example, absolutely requires a specific driver and firmware pairing.
    • This driver and firmware pairing is typically worked out between the server and OS vendor.
    • VMware has a strict HCL for this use case, and to my knowledge MS has an HCL as well, although they're a little more forgiving when it comes to the pairing. I can't speak for Linux, Solaris or other OSes.

 

Think about this, when was the last time you did the following?

  • Retrieved an inventory of all your hardware firmware revisions and driver revisions.
    • Do you even know how to do this? It's probably not as easy as you think.
  • Logged into your OS vendor's HCL and, one hardware item at a time, checked that you are running the latest driver and firmware, and that the pairing is also certified.
    • With VMware, you can use the device vendor ID, device ID and sub-vendor ID to find the specific hardware in question on their HCL (see the sketch after this list). Just remember they're hex values.

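As an aside, here's a rough PowerCLI sketch for pulling those IDs in hex.  It assumes PowerCLI is installed and you're already connected with Connect-VIServer; the property names come from Get-VMHostPciDevice, so double-check them against your PowerCLI version.

# List NIC PCI IDs per host in the hex form the VMware HCL search expects.
Get-VMHost | ForEach-Object {
    $esx = $_
    Get-VMHostPciDevice -VMHost $esx -DeviceClass NetworkController | ForEach-Object {
        [pscustomobject]@{
            Host        = $esx.Name
            Device      = $_.Name
            VendorId    = '{0:x4}' -f $_.VendorId       # HCL search wants hex
            DeviceId    = '{0:x4}' -f $_.DeviceId
            SubVendorId = '{0:x4}' -f $_.SubVendorId
            SubDeviceId = '{0:x4}' -f $_.SubDeviceId
        }
    }
} | Format-Table -AutoSize
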
I bet you're relying on one or more of the following.

  • VMware update manager, and vendor provided depots (if they exist).
  • Vendor supplied firmware management solutions.
    • Some may have driver management for select OSes, but no one does it all.
  • Vendor custom ISOs / install discs.

I suspect that if you go and check the VMware HCL, you're out of compliance in one way or another, or something is woefully out of date.

Solution:

Let’s share a pipe for a second and dream about what it should look like.

 

Server Vendor:

  • It should be a central console.
  • It should handle downloading all firmware, drivers and vendor specific tools.
  • It should use the concept of baselines, a baseline being defined as:
    • OS and server model specific.
    • Based on a release date. The baseline should define an approved pairing of drivers, firmware and vendor tools for a given month, quarter or however often the server vendor feels the need to establish a new baseline.
    • The baseline should support the concept of cumulative updates / hotfixes.
  • It should support grouping servers, and applying baselines to the server groups.
  • It should support compliance checking against the baselines, not simply deploying the drivers and firmware and assuming everything is OK.  This would let you know if an admin went rogue and manually updated or downgraded firmware / drivers or tools.
  • It should support rolling back drivers, firmware or tools if it is determined to be too far ahead.
  • Provide very verbose information as to why an update process failed.
  • Bonus points
    • Support a multi-site architecture. Meaning be able to have a cached copy of the repo and a local server to perform the actual update and auditing process.
    • Auto discover servers

OS Vendor:

  • Should provide a comprehensive API
    • Look up hardware and driver pairings.
    • Enabling the download of the driver or firmware directly would be nice.

Conclusion:

Coming back to reality a bit, what can you do?  Use the tools you have to the best of your ability, script what you can, and manually deal with the gaps.  That said, I'm working on the auditing part of the problem, at least with VMware. I hope to have a blog post about it and a new GitHub commit in the coming month or so, so stay tuned.

Oh, one final thing you can do, start bugging your server vendor sales team about the issue. If enough people raise the issue, it will get the attention it desperately needs.

Thinking out loud: What HP + Nimble means to me

Disclaimer:

These are opinions, not facts and these opinions are mine, not my employers.

Introduction:

Upon receiving the news that HP was to acquire Nimble, I can't say I was exactly thrilled.  Nothing personal against HP, they make great servers, but I like Nimble the way it is right now.  Nonetheless, I know the industry is moving in a direction where it's either get big or get out.  There is a huge storm "cloud" looming, and if you're an on-premises solution, it's going to be a scary time in the coming years.

I was thinking about what would be some of the pros and cons of the HP acquisition and this is what I’ve come up with.

Pros:

  1. HP is a big company and an established one at that. We’ll focus on the pros being big/established in this section.
    1. HP will likely have an easier time pitching Nimble into companies that would not have given them a second look. HP is established, so there's a perception that Nimble is established.  This leads to better market penetration.  Better market penetration means Nimble makes HP more money, and if Nimble makes HP more money, HP invests more into Nimble.  Hopefully the circle of money keeps snowballing and we all win.
      1. HP is a worldwide company, and while Nimble has done a great job so far, HP is going to take them into more countries faster than they could on their own. If you've had difficulty getting Nimble equipment purchased "in country", I can see this getting easier long term.
    2. Obviously HP has more capital at their disposal than Nimble did. If invested correctly, I could see this accelerating Nimble’s innovation.
    3. HP has more purchasing power than Nimble does, this could lead to Nimble’s margins being better, which in turn may lead to us having a more affordable product (or more profit for HP).
  2. Look, we all know why tech companies pick Supermicro, and it's not quality, it's affordability. HP makes some pretty kick-ass hardware, so if we were to see Nimble's hardware platform change from Supermicro to HP equipment, not only would my datacenter look a little sexier, I wouldn't cut my fingers trying to rack Nimble anymore.
  3. If you’re a current HP customer, I can see two nice integration points.
    1. Infosight for other HP solutions.
    2. Nimble integration into OneView.

Cons:

  1. As mentioned in the pros, HP is a large established company. While this in itself can have some pros, it also has the potential for a number of cons.
    1. Big companies tend to move slowly, bureaucracy and over-analysis being suspect causes. Nimble had far fewer hoops to jump through before making a decision.  Just remember, deadlines and accomplishments drift a day at a time.  Days become weeks and weeks become months, and you get the picture.
    2. Every company is profit conscious, but some larger companies will kill any sliver of waste, even at the cost of productivity or customer satisfaction. I’m not saying it will happen with HP, only that it could.
    3. While HP will open a lot of new doors for Nimble, it has the potential to close a lot of existing ones too. There are a lot of companies that have had bad experiences with HP and this may be enough for them to drop Nimble.  That said, being realistic, it seems one way or another, you’re going to be purchasing storage from some big vendor, and it may not be the same as the one you purchase servers from.
    4. If HP tries to assimilate Nimble into their ways, I can see this being bad for Nimble customers. Nimble, for example, has a great support experience.  If HP tries to force Nimble to adopt their triage and support structure, that would be a quick way to devalue Nimble.  There are other things too, like getting stuck speaking with a generic HP sales rep and sales engineer instead of having direct access to a Nimble SE and a Nimble sales person, or other typical large-company sales and support processes.
  2. HP hasn’t made a great name for themselves here of late. We know they’ve split the company in half and sold off a lot of assets.  It’s hard to say if it’s too little too late, or if it was the right move and just in time.  Regardless, HP to me is a company that’s walking a fine line of a falling giant, or one that’s getting back on its feet.  If HP goes down, Nimble goes with it, and that’s not good for Nimble customers.
  3. HP isn't exactly synonymous with innovation, at least not anymore. I fear that HP has the potential of choking the life out of Nimble.  In my opinion, 3PAR was a great storage solution.  Part of me wonders, if HP couldn't make that work, what makes them think Nimble will be any different?  Meaning, are they going to turn Nimble into the next EqualLogic?

Other thoughts:

I think deep down everyone knew Nimble wanted to get bought.  Me personally, I was REALLY hoping Cisco was going to buy Nimble.  In my opinion, Cisco + Nimble would go together like peanut butter and fluff.  HP already has a storage company that's flailing; I don't want Nimble to follow suit.  People like to remind me about Whiptail and how bad that was.  I look at that as a rash move on Cisco's part (the solution was doomed to fail), but Nimble would be a pick that no one could blame Cisco for.  Best of all, Cisco doesn't have any competing products (other than HyperFlex, but that's a different type of solution).  This would have led to a much stronger and more united focus on pushing Nimble.  From Nimble's view, it would have solidified them as being established (opening the closed doors), and for Cisco, it would have given them a proven storage startup that's on fire.  Honestly, if I were Cisco's CEO, I would be doing everything I could to steal the deal from HP.  If it were a matter of HP vs. Dell vs. Cisco, and Cisco was the one with Nimble, IMO Cisco would crush the other two like a ten-ton hammer.

Conclusion:

This is obviously all speculation at this point, just thinking out loud.  I hope all the pros of what I pointed out occur with the acquisition and none of the cons.  I wish both vendors the best of luck, and until proven otherwise, I’m still a diehard Nimble (HP) fan.

Naming Conventions: Server Names

Introduction:

One of the things I'm struggling with is how to balance the number of posts per naming convention.  It would be easy in some ways to use a single post per server type, but it would also be overly redundant in many ways.  I originally wrote a dedicated post on SQL Server naming conventions, and realized that the logic behind that name ultimately applies to other server names as well.  With that said, I've decided to create a consolidated post for server names.  I'll rehash the overall structure used for the SQL naming convention, and show you how it's reusable for other servers.

Limitations:

To begin with, 15 characters is a length limit I would always suggest maintaining.  The only exception is the following circumstance: if you're building a server that isn't Windows, and will NEVER need to join a Windows domain, then and only then can you make names longer than 15 characters.  Microsoft, in its perpetual need to maintain excessive backwards compatibility, still hasn't dropped NetBIOS from its architecture.  Even if you build a Microsoft domain running 100% on DNS resolution, names still get truncated for backwards compatibility.  I wish they would provide a name resolution compatibility mode that would in essence switch the domain from NetBIOS-supported to DNS-only, but that's a whole different blog post.

My naming conventions are designed to scale for smaller to mid-sized companies.  If you’re dealing with 10s of thousands of servers, this naming convention won’t scale to your needs.  My names are designed to give you a hint of what the server does.  When you’re at the 10s of thousands size, you need a whole new way of dealing with server names.

Other stuff:

I used to really get hung up on server numbering.  What I mean by that is, if I was running a smaller shop with, say, two domain controllers, let's call them DC1 and DC2 for simplicity, I would want to keep those names every time I did a major upgrade.  Ultimately it led to a lot of shuffling, and a lot of time spent on something that was ultimately cosmetic and not important.  Point being, if you are or were like me, learn to let it go.  Sometimes you will have a DC3 and a DC4 when there is no DC1 and no DC2.

When you're dealing with systems that need to connect to other systems by name, learn that CNAME records can be your best friend. I strongly suggest avoiding pointing things directly at a server name if whatever you're pointing at it is mostly a generic service.  For example, it's very common to have many things pointing at your DCs for LDAP lookups.  Rather than pointing at DC1 and DC2, create CNAME records for something like "ldap1.domain.com" and "ldap2.domain.com".  Then when you change your DCs, you only have two records to change.
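
If you're on Windows DNS, the aliases are a one-liner each with the DnsServer module.  The zone and host names below are just examples; treat this as a sketch, not a prescription.

# Create service aliases instead of pointing apps at DC names directly.
Add-DnsServerResourceRecordCName -ZoneName 'domain.com' -Name 'ldap1' -HostNameAlias 'dc1.domain.com'
Add-DnsServerResourceRecordCName -ZoneName 'domain.com' -Name 'ldap2' -HostNameAlias 'dc2.domain.com'

# Later, when DC1 is retired, repoint the alias instead of touching every app:
# Remove-DnsServerResourceRecord -ZoneName 'domain.com' -Name 'ldap1' -RRType CName -Force
# Add-DnsServerResourceRecordCName -ZoneName 'domain.com' -Name 'ldap1' -HostNameAlias 'dc3.domain.com'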

It’s expensive, but load balancers can also help with renaming / moving things.  They help because of their ability to create a “virtual IP” and redirect that traffic to any real IP as needed.  In the case of DNS, it would enable you to move your DNS functionality to a new server without having to change the IP you have configured across all your systems.

My naming convention basics:

There are a few basics planned into my naming conventions.  These basics make it easier to organize servers, and ultimately find / figure out what a server’s purpose is.  Obviously with a 15-character limit, there are going to be a lot of abbreviations.  In my opinion, so long as you’re consistent, even vague abbreviations will eventually be memorable or make sense.

Hyphens:

I use hyphens to separate many of the naming convention's components.  I know it's potentially wasting very precious characters, but at our scale we can mostly afford to do it, in exchange for making it easier to programmatically find servers.  You don't need to use hyphens; I do it because it's easier for me to script with.  Plus, they make for a consistent separator.  Ultimately though, consistency, as I stated above, is what's important.

Location variable:

I prefer to start all names with a location variable.  At two completely different companies, I inherited naming conventions that used the company's acronym for the primary site, and DR for the disaster recovery site servers.  There are two problems with using something this descriptive.

  1. I've worked for a company that flipped the location of their DR and primary site. It made for a very confusing period of time where some servers might have said "DR" but were actually now in the primary headquarters, and vice versa.
  2. If your company has more than one office or more than one DR site, the naming convention kind of falls apart.

The main goal for the server location should be generic, but consistent.  Using something like AA1 is just as likely to suffer from problem 1 above, but it does solve problem 2.  If you have multiple locations, you just keep incrementing the number: AA2, AA3 … AA9, and then increment the letter to AB1.  It leaves a TON of room for different / unique locations.  Heck, maybe you don't even need the double letter.  Math isn't my strength, but if my calculations are correct, even something like one letter + one number gives you 234 locations (26 letters × 9 digits), and that assumes you never use the number "0"; if you do, it would be 260 locations (26 × 10).

Application:

I like to use something short (as in 3 letters or less) to tell me something about the application or purpose of the server.  For example, I might use "SQL" for a SQL database server, RMQ for a RabbitMQ server, or EXM for an Exchange mailbox server.  It can get a little tricky of course, after all you have MySQL and MSSQL, but maybe that doesn't matter.  After all, a SQL DB is a SQL DB.  The reason I keep it three letters or shorter (on average) is to leave room for a cluster naming convention (coming up).  Of course if you're not limited to 15 characters, you can get a lot more verbose, but at least for those of us in Windows shops, that's tough.

Environment:

This one is short and easy: I use a single letter to denote the environment of the server.

  • P = Production
  • S = Stage
  • U = UAT
  • D = Dev
  • T = Test
  • X = Sandbox

Clustered or Standalone:

I like to denote whether this is a clustered system or a standalone system.  The standalone part is pretty easy, I just use an "S".  Sometimes I'll trail it with a number, like S1 or S2, to denote that the application isn't clustered but that the systems are related (you'll see an example later).

For the cluster part, it gets a little more involved and varies a little bit based on the application.  We start out with a simple “C” to denote clustered, but then I like to use another trailing letter / number as well.  Let’s look at a few cluster examples.

  • CN1 = Clustered Node 1
  • CN2 = Clustered Node 2
  • CDI1 = Clustered Database Instance 1
  • CDG1 = Clustered Database Group 1

Application group number:

This is the final number that really ties everything together.  I use a simple “01”, “02” or whatever number really to tie all clustered nodes or even standalone (related) systems together.

Putting it all together:

Here are a few practical examples to give you an idea of how it all goes together.

Example 1:  A SQL environment to support the widgets application.  This SQL environment will have a full development lifecycle environment and UAT will mirror Production exactly.  We’ll be using SQL AAG’s.

  1. Dev = a1-sqlds-01
  2. Stage = a1-sqlss-01
  3. UAT
    1. Nodes
      1. A1-sqlucn1-01
      2. A1-sqlucn2-01
    2. Clustered Named Object (management)
      1. A1-sqluc-01
    3. SQL AAG listener names
      1. A1-sqlucdg1-01
      2. A1-sqlucdg2-01
  4. Prod
    1. Nodes
      1. A1-sqlpcn1-01
      2. A1-sqlpcn2-01
    2. Clustered Named Object (management)
      1. A1-sqlpc-01
    3. SQL AAG listener names
      1. A1-sqlpcdg1-01
      2. A1-sqlpcdg2-01

Notice how the last number glues everything together.  Also notice how everything is built off a consistent naming standard.  If we deployed a second SQL environment like this for a different application, we would simply increment that final number.

Example 2: How about something simple like a domain controller environment for 3 sites?  Let’s just say there will be a production environment and a test environment.

  1. Test
    1. Site a1
      1. A1-dctcn1-01
      2. A1-dctcn2-01
    2. Site a2
      1. A2-dctcn1-01
      2. A2-dctcn2-01
    3. Site a3
      1. A3-dctcn1-01
      2. A3-dctcn2-01
  2. Production
    1. Site a1
      1. A1-dcpcn1-01
      2. A1-dcpcn2-01
    2. Site a2
      1. A2-dcpcn1-01
      2. A2-dcpcn2-01
    3. Site a3
      1. A3-dcpcn1-01
      2. A3-dcpcn2-01

Again, notice how I use a single final number to glue an entire “purpose” together.  If I built a second discrete domain, I would likely change the last number to “02” which would quickly tell me that the domain controller is part of a separate domain.  Also notice how you can easily tell which node a DC is, which site a DC is in, and what its environment is.

Example 3:  A non-clustered exchange server environment that’s serving the same company.

  1. Mailbox servers:
    1. A1-exmps1-01
    2. A1-exmps2-01
  2. CAS Nodes (load balanced)
    1. A1-excpcn1-01
    2. A1-excpcn2-01
  3. CAS VIP
    1. A1-excpc-01_vip

Here you can see how the standalone mailbox servers are working for the same purpose, but ultimately they’re not clustered together.  With the CAS servers, you can see that they are in fact clustered and that we even created a VIP DNS name that helps you understand how everything is related.

Other examples:

At this stage I'm just going to bullet a list of various options.  I'll use a single location and a single application number, since we've already gone over those.

File Servers:

  1. Clusters
    1.  Nodes
      1. a1-fspcn1-01
      2. a1-fspcn2-01
    2. Cluster Resource
      1. a1-fspc-01
    3. Clustered SMB share
      1. a1-fspcsmb1-01
      2. a1-fspcsmb2-01
    4. Clustered NFS share
      1. a1-fspcnfs1-01
      2. a1-fspcnfs2-01
  2. Standalone
    1. a1-fsps-01

CommVault:

  1. Comcell
    1. a1-cvccps1-01
  2. MediaAgent
    1. a1-cvmaps1-01
    2. a1-cvmaps2-01
  3. Virtual Server Agent (for dedicated VM’s)
    1. a1-cvvsaps1-01
    2. a1-cvvsaps2-01

DHCP:

  1. Clusters ***Note: because DHCP consumes 4 characters, I leave the “c” off the name. 
    1. a1-dhcppn1-01
    2. a2-dhcppn2-01
  2. Standalone
    1. a1-dhcpps1-01
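
One nice side effect of keeping the convention this consistent is that names can be pulled apart programmatically.  Here's a minimal PowerShell sketch (my own throwaway helper, not part of any tool) that splits a name like a1-sqlpcn1-01 into the pieces described above; the regex just encodes the location-application/environment/role-group layout.

function ConvertFrom-ServerName {
    param([Parameter(Mandatory)][string]$Name)

    # loc = site (A1, AA1...), app = application letters, env = P/S/U/D/T/X,
    # role = S/S1/S2 for standalone or C/CN1/CDG1/CSMB1... for clustered, grp = group number
    if ($Name -match '^(?<loc>[a-z]{1,2}\d)-(?<app>[a-z]+)(?<env>[psudtx])(?<role>s\d*|c[a-z0-9]*)-(?<grp>\d{2})$') {
        [pscustomobject]@{
            Name        = $Name
            Location    = $Matches.loc
            Application = $Matches.app
            Environment = $Matches.env
            Role        = $Matches.role
            Group       = $Matches.grp
        }
    }
    else {
        Write-Warning "$Name doesn't match the convention"
    }
}

# Example: ConvertFrom-ServerName 'a1-sqlpcn1-01' returns sql / p / cn1 / 01 for site a1.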

Review: 1.5 years with MVP Systems Job Automation Scheduler (JAMS)

Introduction:

I wrote a really quick review here of MVP Systems' JAMS product about a year or so ago (maybe a little less).  At the time, I was in search of a solution that could help me glue together several disjointed systems in a workflow.  Specifically, we were trying to integrate Veeam and CommVault backups together.  Veeam was of course doing the VM backups, and CommVault was copying the Veeam files to tape.  We've since moved on from Veeam, but JAMS has continued to be a vital part of our infrastructure.

What is JAMS?

The simple answer is that it's a centralized task scheduler; the long answer is that it's not only that, but a whole lot more.  This is a solution that replaces cron, Windows Task Scheduler, SQL Agent jobs, or pretty much anything else you would normally use to schedule and execute something.

What makes up a JAMS solution?

There are four main components.

  • The JAMS server: This is a clusterable component that schedules, queues and executes any jobs or workflows.
  • The JAMS client: This is the administration GUI.  Kind of self-explanatory, but this is where you would configure all of the settings for the various jobs, and the server settings.
    • For Windows, this also includes a PowerShell module for CLI administration. I'm pretty sure they have a generic API too, but I never bothered to look since PS was available.
  • The JAMS agent: This is the component installed on a system where you want to execute jobs.  All kinds of OSes are supported.
  • Microsoft SQL Server: Check with MVP Systems whether other DBs are supported, but we're an MS shop, and SQL is on their list.  This is used to store the job history, job status, and pretty much the entire server configuration.  If this goes down, you have big issues to deal with.  And yes, a clustered SQL server IS supported.

All in all, the infrastructure is pretty simple to understand and for smaller use cases, these roles can all be installed on the same system.

History:

I didn't start out with JAMS; in fact, they were nowhere in sight when the initial problem came to fruition.  I figured this would be a relatively trivial PowerShell solution, and started down the path of trying to write a quick workflow.  Building the logic for the workflow was actually pretty easy, but what I kept running into was the good ol' Kerberos double hop condition.  Never heard of it?   Read about it here.  In order to centralize the solution, I basically tried to build my own poor man's centralized task scheduler.  To keep it central, I was utilizing "invoke-command" to execute scripts on our Veeam server and our CommVault server.  With Veeam, our database was stored on a different server, so when my "invoke-command" executed against Veeam, my credentials were never passed along to the SQL server.  I was able to work around it by using CredSSP, but it wasn't reliable.  Sometimes it would work, and sometimes I guess it would time out or something similar (I don't really remember, to be honest).

Then there was the issue with CommVault.  See, they used old-fashioned EXEs to start jobs (we were on v8) from the command line.  The commands I needed to run had to be executed in sequential order.  Anyone who has worked with PowerShell's "start-process" via invoke-command knows that the "-wait" parameter is ignored.  I don't recall the reason, but it was lame on MS's part.  Ultimately, this was the deal breaker, and so started the search for some sort of centralized task scheduler.
We ultimately landed on a cheapo but well-known solution called "VisualCron".  I've got nothing against it, but after working with it for a few days, not only did it feel very hacked together, it wasn't the most user-friendly solution either.  So the search continued, and we ultimately stumbled across JAMS.  It took a lot of creative searching to find them, but I'm glad we did.  After installing the trial, we knew it was the solution we were looking for, and the rest, as they say, is history.

The pros:

  • Easy solution: Pretty easy to install the solution and understand the components. Unlike some other solutions we’ve installed, JAMS takes care of installing any pre-requisites and also has an easy to understand architecture.
  • You get tech support: Normally not something to write home about, but we leveraged their support quite a bit at first, and they were normally helpful.  As simple as JAMS is, it can do a lot of stuff, and that's where support can be (and is) a huge help.  I remember one part of a solution where we were trying to pass a variable from one job to another.  We called up support, and sure enough, JAMS could do it and they showed us how.  How about bulk creating a bunch of jobs via PS?  Yep, support had an example of that too.
  • The GUI: This is one where I have pros and cons.   We’re in the pros section, so that’s what I’ll focus on here.
    • I’ve never worked with a GUI that was capable of bulk edits, but JAMS is and it rocks. Just imagine wanting to change the start time on 60 jobs.  You could write a script to do it, or you could highlight the 60 jobs in a folder, right click and basically change the value of one field (time) to another value.  Then BAM! It goes and changes the time for all highlighted jobs.  Pretty much any column you can add to the GUI has this functionality and it rocks.
    • Easy to see all jobs scheduled to run, running or failed in one view.
    • It keeps a detailed log of each job execution. If you write output to the host (think Write-Host in PowerShell or ECHO in batch), that output gets logged to a file and stored for historical purposes.  So as long as your script has verbose output, you'll know exactly what happened in your job.
    • Sort of related to the above, it keeps a history of all executed jobs and their final status. It also tracks things like when it ran, how long it ran, how many resources it consumed, etc.
    • They have some pretty neat dashboards (once you figure them out). There are a few cool built in ones (like projected schedule) too.
    • Last but not least, it's a pretty easy GUI to use. I won't say it doesn't have any learning curve, but I think the learning curve is really more related to the solution than the client itself.
  • Scripting engines: The agent can execute all kinds of scripts.
    • Powershell
    • Batch
    • Bash
    • SSH
    • T-SQL
  • Agent OS Support: The agent can be installed on different OS’s, so this isn’t a 100% windows only solution.
  • Workflows (setups): You can build "setups" (workflows) that tie jobs together. The jobs themselves can run on completely different systems.  In our case, we had a setup which had a "job" that ran on a Veeam server and a different job that ran on the CV server.  The setup was configured to wait until the first job completed with a success before moving on.
  • Job queueing: It supports queueing jobs. Probably not an issue for many folks, but we used it to limit the number of tape backups running in parallel.  What's great is each "job" in JAMS can share a queue or have its own queue.  This allows a setup to execute the first job (as an example) and, if needed, the second job will queue.  We typically had 50 setups running in parallel, but only 4 tape jobs that were allowed to run in parallel.  JAMS would execute all 50 setups in parallel, but when it came time to run the tape portion of a setup, the tape jobs would go into a queue and trickle out as others completed (or failed).  This didn't stop the first jobs (the backup itself) from completing, so it ultimately kept things moving at a great pace.
  • PowerShell: Being able to administer JAMS through PS is a huge win in my book.  You can create, modify and delete jobs, setups, queues, etc.  Everything in the GUI can be done in PS.  It's sad that in 2017 I even have to list this as a pro of a solution. Nonetheless, it's not as common as it should be, and it's a win for JAMS.
  • Different Licensing: With a lot of solutions, there’s only one licensing strategy. I found that JAMS had several, and they do so to accommodate differing needs and purposes.
  • Sales team: The sales team I worked with was friendly, knowledgeable and not pushy in any kind of way.  Additionally, what I think is worth noting, is while it felt like we were shopping for a Ferrari, they understood we were on a Corvette budget, and worked with us to find a licensing model (and some pricing breaks) to let us drive home in a solution we really wanted.

The cons:

  • Price: I'm not saying it's overpriced, all I'm saying is it's not cheap.  I would love to use this solution for my whole environment, but it's not cheap to do that.  I'm not saying they won't work with you (they will), but to scale the solution, you will be digging deeper into your wallet.
  • The GUI: I think the GUI has some great design characteristics, but I also think it has some flaws too.
    • They recently updated the GUI look. I’m personally not a fan.  It’s a matter of opinion of course, but I find it harder to see what I need to see now.
    • I don't like the way they separate jobs from setups. I wish they just used a different icon, or a value in a field, to separate them.  There are plenty of times I click on a folder and forget that I'm in the "setup view" when I'm looking for a job.
    • They don’t support right click for certain job management features. I intuitively want to right click in the jobs window and select “new job” or something related, but that’s not the way the GUI is designed.
    • When you bulk submit jobs, it asks you to confirm each one individually. That means if you selected 25 jobs, you're clicking "submit" 25 times afterwards.
  • Their security design: I found that their security model didn't work quite like one might think.  I remember working with a tech to do something simple like letting our DBAs manage jobs (execute and read), and something as simple as that required what seemed like a million hoops to jump through.  Ultimately, IIRC (it's been a while), we ended up needing to grant them more rights than I would have wanted in order to accomplish what seemed like a trivial task.  I gave up on it because I didn't want to create a solution that was going to be too complex to manage.
  • Overlapping job detection: I remember when I first started with their solution, we ran into a few cases where jobs (or setups) were overlapping with themselves.  Meaning Job A from Monday night was still running when Tuesday night's job started up and began running.  When I asked support about this, they handed me a script that would nuke the Tuesday job, but that ultimately didn't solve my need for Tuesday's job to just wait.  I ended up writing a pre-check job that detects whether any instance of the same job is running and, if so, goes into a loop where it checks every minute, waiting for the previous job to complete.  What sucks about this, and about the script they gave me, is that every job I launch with a pre-check job ends up burning my job count.  To me, this just seems like something that should be built into the solution.
  • Maintenance mode: They don't seem to have a maintenance mode option.  What I mean by that is being able to put JAMS into a paused state.  I think you can stop a service on the Windows hosts, but honestly that's a hack.  They should just have a maintenance mode option built right into the GUI.   I could see having a few options, like: queue any new jobs that start; or let existing jobs finish, but queue anything new; or don't let any jobs start at all.  Bonus points if this could be done at a folder level.

Conclusion:

Ultimately, after living with JAMS for almost 1.5 years, I think they really rock as a solution.  I can't say I have any experience with other enterprise job scheduling solutions, but my overall experience with JAMS has been a pleasure.  No solution is perfect, and theirs is no exception, but the great news is they have a solution that is ultimately awesome, with a few negatives, which is a far cry from other vendors' solutions I've used.  My suggestion: if you're looking for something to replace SQL jobs, Task Scheduler, cron or any other isolated solution, give them a look, I think you'll be pleased.

Quicky Review: GPO/GPP vs. DSC

Introduction:

If you're not in a DevOps-based shop, or you're living under a rock, you may not know that Microsoft has been working on a solution that by all accounts sounds like it's poised to usurp GPO / GPP.  The solution I'm talking about is Desired State Configuration, or DSC. According to all the marketing hype, DSC is the next best thing for IT since virtualization.  If the vision comes to fruition, GPO and GPP will be a legacy solution: enterprise mobility management will be used for desktops and DSC will be used for servers.  Given that I currently manage 700 VMs and about an equal number of desktops, I figured why not take it for a test drive. So I stood up a simplistic environment, played around with it for a full week, and my conclusion is this.

I can totally see why DSC is awesome for non-domain-joined systems, but in today's iteration it's absolutely not a good replacement for domain-joined systems. Does that mean you should shun it since all your systems are domain joined?  That depends on the size of your environment and how much you care about consistency and automation.  Below are all my unorganized thoughts on the subject.

The points:

DSC can do everything GPO can do, but the reverse is not true. At first that sounds like DSC is clearly the winner, but I say no.  The reality is, GPO does what it was meant to do, and it does it well.  Reproducing what you've already done in GPO, while certainly doable, has the potential to make your life hell.  Here are a few fun facts about DSC.

  1. The DSC "agent" runs as Local System. This means it only has local computer rights, and nothing else.
  2. Every server that you want to manage with DSC needs its own unique config file built (see the sketch after this list). That means if you have 700 servers like me, and you want to manage them with DSC, each one is going to have a unique config file.  Don't get me wrong, you can create a standard config and duplicate it "x" times, but nonetheless, it's not like GPO where you just drop the computer in an OU and walk away.  That being said, and to be fair, there's no reason you couldn't automate the DSC config build process to do just that.
    1. DSC has no concept of "inheritance / merging" like you're used to with GPO. Each config must be built to encompass all of the things that GPO would normally handle in a very easy way.  DSC does have config merges in the sense that you can have a partial config for, say, your OS team, your SQL team and maybe some other team.  So they can "merge" configs and work on them independently (awesome).  However, if the DBA config and the OS config conflict, errors are thrown, and someone has to figure it out.  Maybe not a bad thing at all, but nonetheless, it's a different mindset, and there is certainly potential for conflicts to occur.
  3. A DSC configuration needs to store user credentials for a lot of different operations. It stores these credentials in a config file that is hosted both on a pull server (SMB share / HTTPS site) and on the local host.  What this means is you need a certificate to encrypt the config file, and of course for the agent to decrypt the config file.  You thought managing certificates was a pain for a few Exchange servers and some web sites?  Ha! Now every server and the build server need certs.  In the most ideal scenario, you're using some sort of PKI infrastructure.  This is just the start of the complexity.
    1. You of course need to deploy said certificate to the DSC system before the DSC config file can be applied. In case you can't figure it out by now, this is a bootstrap problem you have to solve on your own if you don't use GPO.  You could use the same certificate and bake it into an image.  That certainly makes your life easier, but it's also going to make your life that much harder when it comes to replacing those certs on 700 systems.  Not to mention, a paranoid security nut would argue how terrible that potentially is.
  4. The DSC agent of course needs to be configured before it knows what to do. You can "push" configurations, which does mitigate some of these issues, but the preferred method is "pull".  So that means you need to find a way (bootstrap again) to configure your DSC agent so that it knows where to pull its config from, and what certificate thumbprint to use.
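
To make points 2 and 4 a little more concrete, here's a minimal sketch of a per-node config and the LCM (agent) settings, using only the built-in PSDesiredStateConfiguration resources.  The node name, feature and registry value are purely illustrative.

Configuration BaselineConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'A1-WEBPS1-01' {
        WindowsFeature WebServer {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
        Registry DisableSmb1 {
            Key       = 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters'
            ValueName = 'SMB1'
            ValueData = '0'
            ValueType = 'Dword'
            Ensure    = 'Present'
        }
    }
}

# Every node you manage gets its own compiled .mof:
BaselineConfig -OutputPath 'C:\DSC\BaselineConfig'

# And the agent (LCM) on each node has to be told how to behave (push vs. pull, auto-correct, etc.):
[DSCLocalConfigurationManager()]
Configuration LcmSettings {
    Node 'A1-WEBPS1-01' {
        Settings {
            RefreshMode        = 'Push'                  # 'Pull' also needs a pull server URL and registration info
            ConfigurationMode  = 'ApplyAndAutoCorrect'
            RebootNodeIfNeeded = $false
        }
    }
}
LcmSettings -OutputPath 'C:\DSC\LcmSettings'
Set-DscLocalConfigurationManager -Path 'C:\DSC\LcmSettings'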

Based on the above points, you probably think DSC is a mess, and to some degree it is. However, a few other thoughts.

  1. It’s a new solution, so it still needs time to mature. GPO has been in existence since 2000, and DSC, I’m going to guess, since maybe 2012.  GPO is mature, and DSC is the new kid.
  2. Remember when I wrote that DSC can do everything that GPO can do, but not the reverse? Well, let's dig into that.  Let's just say you still manage Exchange on premises, or more likely, you manage some IIS / SQL systems.  DSC has the potential to make setting those up and administering them significantly easier.  DSC can manage not only the simple stuff that GPO does, but also things way beyond that.  For example, here are just a few:
    1. For exchange:
      1. DSC could INSTALL exchange for you
      2. Configure all your connectors, including not only creating them, but defining all the “allowed to relay” and what not.
      3. Configure all your web settings (think removing the default domain\username).
      4. Install and configure your exchange certificate in IIS
      5. Configure all your DAG relationships
      6. Setup your disks and folders
    2. For SQL
      1. DSC could INSTALL SQL for you.
      2. Configure your max / min server memory (see the sketch after this list).
      3. Configure your TempDB requirements.
      4. Set up all your SQL jobs and other default DBs.
    3. Pick another MS app, and there’s probably a series of DSC resources for it…
  3. DSC lets you know when things are compliant, and it can automatically attempt to remediate them. It can even handle things like auto reboots if you want it to.  GPO can't do this.  To the above point, what I like about DSC is that I'll know if someone went into my receive connector and added an unauthorized IP, and even better, DSC will whack it and set it back to what it should be.
  4. Part of me thinks that while DSC is cool, I wish Microsoft would just extend GPO to encompass the things that DSC does that GPO doesn't. I know it's because the goal is to start with non-domain-joined systems, but nonetheless, GPO works well and honestly, I think most people would rather use GPO over DSC if both were equally capable.
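
As an example of the SQL memory point above, here's a rough sketch of what pinning min / max server memory could look like.  It assumes the community SqlServerDsc resource module and its SqlMemory resource; treat the resource and parameter names as an assumption and check the module's documentation for your version.

Configuration SqlBaseline {
    Import-DscResource -ModuleName SqlServerDsc   # assumed: Install-Module SqlServerDsc

    Node 'A1-SQLPCN1-01' {
        # Pin max / min server memory so a rogue change gets flagged and corrected.
        # In practice you'd likely add PsDscRunAsCredential here, since the LCM runs as Local System (see point 1 earlier).
        SqlMemory 'InstanceMemory' {
            InstanceName = 'MSSQLSERVER'
            DynamicAlloc = $false
            MinMemory    = 4096     # MB, example values
            MaxMemory    = 24576    # MB
        }
    }
}

# Compile and apply (push mode shown for brevity):
SqlBaseline -OutputPath 'C:\DSC\SqlBaseline'
Start-DscConfiguration -Path 'C:\DSC\SqlBaseline' -Wait -Verbose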

Conclusion:

Should you use DSC for domain-joined systems?  I think so, or at least I think it would be a great way to learn DSC.  I currently look at DSC as a great addition to GPO, not a replacement.  My goal is going to be to use GPO to manage the DSC dependencies (like the certificates, as one example) and then use DSC for specific systems where I need consistency, like our Exchange, SQL and web servers.  At this stage, unless you have a huge non-domain-joined infrastructure, and you NEED to keep it that way, I wouldn't use DSC to replace GPO.

 

Thinking out loud: Hyper converged storage's missing link

Introduction:

In general, I'm not a huge fan of hyper converged infrastructure.  To me, it's more "hype" than substance at the moment.  It was born out of web-scale infrastructure like Google, Facebook, etc., and IMO that is still the area where it's better suited.    The only enterprise layer where I see HCI being a good fit is VDI; other than that, almost every other enterprise workload would be better suited on new-school shared storage.  I could probably go into a ton of reasons why I personally see shared storage still being the preferred architecture for enterprises, but instead I'll focus on one area that, if adopted, might change my view (slightly).  You see, there is a balance between the best and good enough.  Shared storage IMO is the best, but HCI could be good enough.

What’s missing?

What is the missing link (pun intended)?  IMO, it's external / independent DAS.  Can't see where this is going?  Follow along on why I think external DAS would make hyper converged storage good enough for almost anyone's environment.

Scaling Deep:  Right now the average server tops out at 24 2.5" drives, and fewer for 3.5" drives.  In a lot of larger shops, that would mean running more hosts in order to meet your storage requirements, and that comes at the cost of paying for more CPU, memory and licensing than you should have to.  Just imagine a typical 1RU R630 + a 2U 60-drive JBOD!  That's a lot more storage than you can fit under a single host today, and it would only consume one more rack unit than a typical R730.  Add to this, theoretically speaking, the number of drives you could add to a single host could go beyond a single JBOD.  A quad port SAS HBA could have four 60-drive enclosures attached, and that's a single HBA.

Storage independence:  Having the storage outside the server also makes that storage infinitely more flexible.  This is true even when you're building vendor-homogeneous solutions.  Take Dell for example.  Typically speaking, their enclosures are movable between different server generations.  Currently, with the storage stuck in the chassis, it gets really messy (support wise), and in many cases isn't doable, to move the storage from one chassis to another, especially if you're talking about going from an older generation server to a newer generation.

Adding to this, depending on your confidence, white boxing also allows you to cut a server vendor out of the costliest part of the solution, which is the disks themselves.  Go with an enclosure from someone like RAID Inc., DataOn, Quanta QCT, Seagate, etc., add in a generic LSI (sorry Avago, oh sorry again, Broadcom) HBA, and now you have a solution that is likely good enough supportability-wise.  JBODs tend to be pretty dumb and reliable, which just leaves the LSI card (a well known, established vendor) and your SSDs / HDDs.

Why would you want to move the storage anyway?  Simple: I'd bet a nice steak dinner that you'll want to upgrade or replace your compute long before you need to replace your storage. If you're simply replacing your compute (not adding a new node but swapping it), then moving a SAS card + DAS is far more efficient than re-buying the storage, or moving the internal storage into a new host (remember, warranty gets messy).  Simply vacate the host like you would with internal storage, shut down, rip the HBA out, swap the server, put the existing HBA back in, done.

If you're adding a new host, depending on your storage, you may have the option of buying another enclosure and spreading the disks you have evenly across all hosts again.  So if, for example, you had 50 disks in each of 4 hosts (200 disks total) and you add a fifth host, one option could be to simply remove 10 disks from each current node and place them in the new node. Your only additional cost is the JBOD enclosure, and you get to keep your current investment in disks (with flash, that's the expensive part).

Mix and match 3.5" / 2.5" drives:  Right now with internal storage, you are either running a 3.5" chassis, which doesn't hold a lot of drives but CAN support 2.5" drives with a sled, or you are running a 2.5" chassis, which guarantees no 3.5" drives.  External DAS could mean one of two options:

  1. Use a denser 3.5” JBOD (say 60 disks) and use 2.5” sleds when you need to.
  2. Use one JBOD for 3.5” drives and a different one for 2.5” drives.

Again it comes down to flexibility.

Performance upgrades:  Now this is a big "it depends".  Hypothetically, if there were no software-imposed bottlenecks (which there are), one of your likely bottlenecks (with all flash at least) is going to be either how many drives you have per SAS lane, or how many drives you have per SAS card.  For example, if your SAS card is PCIe 3.0 but the PCIe bus is 4.0, there's a chance you could upgrade your server to a newer / better storage controller card.   More so, even if you were stuck on PCIe 3 (as an example), there would be nothing stopping you from slicing your JBOD in half and using two HBAs to double your throughput.  Before you even go there, yes, I do know the R730xd has an option for two RAID cards, glad you brought that up.  Guess what, with external DAS, you're only limited by your budget, the number of PCIe slots you have and the constraints of your HCI vendor.  I, for example, could have 4 SAS cards and 2 JBODs, each partially filled and each sliced in half.  You don't have that flexibility with internal storage.

In the case of white boxing your storage, this also means that, to the extent of the HCL, you can run what you want.  So if you want to use all Intel DC 3700s, you can.  Heck, they're even starting to make JBOF (just a bunch of flash) enclosures for NVMe, which again would be REALLY fast.

Conclusion:

I say external DAS support is the missing link because it's what would allow HCI to offer scaling flexibility similar to what exists with SAN/NAS.  I still think the HCI industry is at least 3 – 5 years away from matching the performance, scalability and features we've come to expect from enterprise storage, but external storage support would knock a big hole in a large facet of the scalability win of SAN/NAS.

Problem Solving: CommVault tape usage

Introduction:

I hate dealing with tapes, pretty much every aspect of them.  The tracking of them is a PITA, having to physically manage them is a PITA, dealing with tape library issues is a PITA, dealing with tape encryption is a PITA, running out of tapes is a PITA, dealing with legal hold for tapes is a PITA, and I could keep going on with the many ways that tape just sucks.  What makes matters worse is when you have to deal with MORE tapes.

Now that you know tapes are one of my personal seven levels of hell in IT, you’ll know why I put a bit of time into this solution.  Anything I can do to reduce the number of tapes getting exported every day, ultimately leads to some reduction in the PITA scale of tapes.

The issue:

To provide a better understanding of the issue at hand: for years I've been seeing way too many tapes being used by CV.  We'd kick out tapes that had 5% or 10% consumption, and the number of tapes with that level of consumption varied based on what phase of our backup strategy we were in and what day of the week it was.  It could be as little as 4 partially filled tapes, or at times we had 10+ tapes that weren't filled all the way.  If the consumed data should fit on 16 tapes and we're kicking out 26 tapes, that's a problem IMO.  I'm sure many of you out there have contended with this in CV specifically, and I'd bet those of you using other vendors' products have run into this too.  I'm going to first explain why the problem occurs, and then I'll go over how I've reduced most of the waste.

The Why?

In CV, we have storage policies; short of going into an explanation of what they are for those not familiar with CV, just think of a storage policy as an island of backup data.  That island of data doesn't co-mingle with other islands of data on disk, and tape is no exception.  What that means is when you back up data to a storage policy and want to copy it to tape, the data getting copied to tape will reserve the entire tape being used.  In turn, each storage policy reserves its own unique tapes so that data does not co-mingle.  This means for every storage policy you have, you're guaranteed at least one unique tape at a minimum.  Now, each storage policy can have a number of streams configured.  To keep things simple, let's ignore multiplexing for now.  When a storage policy has a stream limit of 1, that means only 1 tape drive will be used; when it has a stream limit of 4, that means 4 tape drives will be used.  Now, as you copy data to tape, you normally have more than one stream's worth of data; you probably have at least one for each client in your environment (and likely much more than that).  This is a good thing: having more streams means we can run data copy operations in parallel.  In the case of the 4-stream example, that means we can use 4 tape drives in parallel to copy data for the example storage policy.  What this also means is, depending on circumstances, we could end up with 4 tapes not being filled all the way.  Streams are optimized for performance, NOT for improving tape utilization.  Now, imagine you have more than one storage policy, let's say 4 storage policies, each being its own island, and each with a stream limit of 2.  That means you could end up with up to 8 tapes not being fully utilized.  I'm also ignoring for now that in CV you can separate incrementals and fulls into different storage policies, which exacerbates the problem further (taking one island and making it two).

In our case, we have 4 storage policies, and we had already gone through a process of merging our fulls and incrementals into a single storage policy to consolidate tapes.  We have a total of 6 tape drives, which means if we just configured the storage policies to fight over the tape drives @6 streams each, we could in theory end up with 24 partially filled tapes.  We’re smarter than that of course, so that wasn’t our problem.  Our problem was finding the right balance between how many streams a storage policy needed to copy all its data within our window, and not setting it so high that we ended up wasting tape.  Pre-solution, we almost always had 4 – 6 tapes that were wasted, as in 100GB on a 2000GB tape.  It was annoying and wasteful.

Solution, problems again, improved solution:

There are two main components to the solution.

  • Scripting storage policy stream modification via task scheduler (MVP JAMS in our case).
  • CommVault introducing Global Tape Policies in v11
    • This allows tapes to be shared, no longer residing on an island as mentioned above. So storage policy 1, 2, 3 and 4 can all share the same tape.  Way more efficient.

In our case, when we saw the global tape policy, it was like a halo of light and angels singing going off in our heads.  This was it, our problems were FINALLY solved.  After going through the very tedious task of migrating to this solution, we found that we were still using 4 – 6 tapes a day more than we needed.  The problem was not that data wasn’t co-mingling; it was.  No, the problem was that we set the global tape policy to 6 streams, and every day it was using 6 tape drives for backups.  At first we tried to solve the problem by limiting the aux copy streams via a scheduled task in CV (start the job with 1 stream only, as an example), but we had 4 storage policies, so that only reduced the tape usage to 4.  The problem again was that each storage policy was scheduled and run in parallel.  So while we restricted any one storage policy, we were still letting more tape drives be used than needed, and in turn more tapes than needed.  We had set 6 streams because we wanted to make sure that our FULL jobs had enough tape drives to complete over the weekend.

At this stage, I came to the conclusion that we needed a way to dynamically control the streams for the global tape policy, so that during the weekdays it was restricted to 1 tape drive (all we needed), and on the weekend we could start out with 6 and slowly ramp back down to 1, hopefully filling our tapes more completely.  With a bit of research and some discussions with CV, I found out that they have a CLI option for controlling storage policy streams (documented here: https://documentation.commvault.com/commvault/v10/article?p=features/storage_policies/storage_policy_xml_edit.htm).  Using my trusty scheduling tool, I set up a basic system where on Sunday @4PM we set the streams to “1”, on Friday @4PM we raise them to “6”, and on Saturday @7AM we drop them to “2”.  This basically solved our problem, and I’m happy to say that on weekdays, tapes are filled as much as possible (1 – 2 tapes depending on which client ran a full), and on the weekend, 2 – 4 tapes are still being used.  I’m still tuning the whole thing for the fulls (it’s a balance of utilization and performance), but it’s better than it’s ever been.  It’s also worth noting that we went back and modified our aux copy schedules and told them to use all available streams, since we now choke-point everything at the global tape policy.  This allows any storage policy to go as fast as possible (although potentially blocking other ones).

It’s a hack, no doubt.  IMO, CV should build this concept into their storage policies: basically a schedule window to dynamically control the queue depth.  For now, this is working well.

SQL Query: Microsoft – WSUS – Computers Update Status

Sometimes the WSUS console just doesn’t give you the info you need, or it doesn’t provide it in a format you want.  This query is for one of those cases.  It can be used in multiple ways to show the update status of a computer, a set of computers, or the computers in a computer target group.  For me, I wanted to see the update status without non-applicable updates cluttering the results.  I also didn’t care about updates that I hadn’t approved, which was another reason I wrote this query.

First off, the query is located here on my GitHub page.  As time allows, I plan to update the readme in that section with more filters as I confirm how accurate they are and what value they may have.

All of the magic in this query is in the “where statement”. That will determine which updates you’re concerned about, which computers, which computer target groups, etc.

To begin with, even with lots of specifics in the “where statement”, this is a heavy query.  I would suggest starting with a report about your own PC or a specific PC before using this to run a full report.  It can easily take in excess of 30 minutes to an hour to run if you do NOT use any filters and you have a reasonably large WSUS environment.  It’s also worth noting that in my own messing around, I’ve easily run out of memory / tempdb space (over 25GB of tempdb).  It has the potential to beat the crap out of your SQL DB server, so proceed with caution.  My WSUS DB is on a fairly fast shared SQL server, so your mileage may vary.
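To give you a rough idea of the shape of the thing (the full query on my GitHub page has more columns and more joins), a stripped-down sketch looks something like the below.  I’m using the standard WSUS public views here (vComputerTarget, vUpdate, vUpdateInstallationInfo, vUpdateApproval) and I’ve simplified the join layout, so treat it as a sketch and grab the real query from GitHub.

-- Stripped-down sketch of the report query (the full version is on GitHub).
-- Join layout is simplified; approvals are per target group, so the full query
-- also walks the group membership views to avoid duplicate rows.
SELECT  ct.[Name]            AS ComputerName,
        u.[DefaultTitle]     AS UpdateTitle,
        uii.[State]          AS InstallState,
        ua.[Action]          AS ApprovalAction
FROM    [SUSDB].[PUBLIC_VIEWS].[vUpdateInstallationInfo] uii
JOIN    [SUSDB].[PUBLIC_VIEWS].[vComputerTarget] ct ON ct.ComputerTargetId = uii.ComputerTargetId
JOIN    [SUSDB].[PUBLIC_VIEWS].[vUpdate] u ON u.UpdateId = uii.UpdateId
JOIN    [SUSDB].[PUBLIC_VIEWS].[vUpdateApproval] ua ON ua.UpdateId = uii.UpdateId
-- The "magic" where statement goes at the end, for example:
WHERE   ua.[Action] = 'Install'
  AND   ct.[Name] LIKE 'computername%'
  AND   uii.[State] != 1

All of the filter examples below are just variations on that last WHERE block.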

Let’s go over a few ways to filter data. First, the computer name column is best served by using a wildcard (“%”, not “*”) at the end of the computer name, unless you’re going to use the FQDN of the computer.  In other words, use 'computername%' or 'computername.domain.com'.

Right now, I’m only showing updates that are approved to be installed. That is accomplished by the Where Action = ‘Install’ statement.

The “state” column is one that can quickly let you get down to the update status you care about. In the case of the example below, we’re showing the update status for a computer called “computername”, but not showing non-applicable updates.


Where Action = 'Install' and [SUSDB].[PUBLIC_VIEWS].[vComputerTarget].[Name] like 'computername%' and state != 1


If we only wanted to see which updates were not installed, all we’d need to do is the following. By adding “state != 4” we’re saying only show updates that are applicable and not currently installed.


Where Action = 'Install' and [SUSDB].[PUBLIC_VIEWS].[vComputerTarget].[Name] like 'computername%' and state != 1 and state != 4
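For reference, the numeric state values line up with the WSUS UpdateInstallationState enumeration (this matches what I’ve seen in our DB, but double-check against yours): 0 = Unknown, 1 = Not Applicable, 2 = Not Installed, 3 = Downloaded, 4 = Installed, 5 = Failed, 6 = Installed Pending Reboot.  If you want the report to be human-readable, you can translate the number inline with something like this in the SELECT list:

-- Translate the numeric state into readable text in the report output.
CASE [SUSDB].[PUBLIC_VIEWS].[vUpdateInstallationInfo].[State]
    WHEN 0 THEN 'Unknown'
    WHEN 1 THEN 'Not Applicable'
    WHEN 2 THEN 'Not Installed'
    WHEN 3 THEN 'Downloaded'
    WHEN 4 THEN 'Installed'
    WHEN 5 THEN 'Failed'
    WHEN 6 THEN 'Installed Pending Reboot'
END AS UpdateStatus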


If you want to see the complete update status of a computer, excluding only the non-applicable updates, this will do the trick.  That said, it’s a BIG query and takes a long time.  As in, go get some coffee, chat with your buds and maybe play a round of golf.  You might run out of memory too with this query, depending on your SQL server.  In case you didn’t notice, I took out the “where Action = 'Install'” part.  As in, show me any update that’s applicable, with any status and any approval setting.


Where [SUSDB].[PUBLIC_VIEWS].[vComputerTarget].[Name] like 'pc-2158%' and state != 1
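One more variation: if you’d rather report on everything in a computer target group instead of picking computers by name, you can filter on the group name instead.  This assumes the group views (vComputerTargetGroup / vComputerGroupMembership from the public views) are joined into the query, and 'Workstations' below is just a made-up group name, so swap in your own.

Where Action = 'Install' and [SUSDB].[PUBLIC_VIEWS].[vComputerTargetGroup].[Name] = 'Workstations' and state != 1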


Play around yourself and I think you’ll see it’s pretty amazing how many different reports you can create.  I would love to say the WSUS DB was easy to read / figure out, but IMO it’s probably one of the more challenging DBs I’ve had to figure out.  There are sometimes multiple joins needed in order to link together data that you’d think would have been in a flat table.  I suspect that, combined with missing indexes, is part of the reason the DB is so slow.  I wish MS would simplify this DB, but I’m sure there’s a reason it’s designed the way it is.