Tag Archives: powershell

Powershell Scripting: Get-ECSWSUSComputerUpdatesStatusReport


I hate the WSUS reports built into the console.  They’re slow, and when it comes to doing something useful with the data, it’s basically impossible.  That’s why I wrote this function.

I wanted the ability to gather data on a given WSUS computer (or computers) and work with it in Powershell.  This function lets me write scripts for bulk reports, automate my patching process (checking that all updates are done), and in general gives me the same data the standard WSUS report does, but at a MUCH faster rate.

You can find the function here.


First, you’ll need my Invoke-ECSSQLQuery function located here.  This is going to mean a few things before you get going.

  • You need to make sure the account you’re running these functions under has access to the WSUS database.
  • You need to make sure the database server is set up so that you can make remote connections to it.
  • If you need SQL auth instead of Windows auth, you’ll need to adjust Get-ECSWSUSComputer and Get-ECSWSUSComputersInTargetGroup so that the embedded calls to my Invoke-ECSSQLQuery use SQL auth instead of Windows.

Secondly, this function doesn’t work without the “object” result of Get-ECSWSUSComputer or Get-ECSWSUSComputersInTargetGroup.  That means you need to run one of those functions first to get the list of computer(s) that you want to run a report against.  Store the results in an array, like $AllWSUSComputers = …..

Syntax examples:

If you’re reading this in Feedly or some other RSS reader, it’s not going to look right; you’ll need to hit my site if it looks like a bunch of garble.

$AllWSUSComputers =  Get-ECSWSUSComputer -WSUSDataBaseServerName "Database Server Name" -WSUSDataBaseName "SUSDB or whatever you called it" -WSUSComputerName "ComputerName or Computer Name pattern" -SQLQueryTimeoutSeconds "Optional, enter time in seconds"

Foreach ($WSUSComputer in $AllWSUSComputers)
    {
    Get-ECSWSUSComputerUpdatesStatusReport -WSUSDataBaseServerName "Database Server Name" -WSUSDataBaseName "SUSDB or whatever you called it" -WSUSComputerObject $WSUSComputer -SQLQueryTimeoutSeconds "Optional, enter time in seconds"
    }

Let me restate: you’re pointing at a SQL server.  Sometimes that’s the same server as the WSUS server, and sometimes it’s an external DB.  If you’re using an instanced SQL server, then for the database server name you’d put “DatabaseServerName\InstanceName”.

If you actually want to capture the results of the report command, my suggestion is to create an ArrayList and add the results of each run to it, or dump it to a JSON / XML file.  If you’re only running it against one computer, there’s probably no need for a foreach loop.
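As a sketch of that suggestion (the variable and file names here are mine, not part of the function):

```powershell
# Collect the per-computer reports into an ArrayList, then dump the whole
# thing to JSON. Assumes $AllWSUSComputers came from Get-ECSWSUSComputer.
$AllReports = New-Object System.Collections.ArrayList

Foreach ($WSUSComputer in $AllWSUSComputers)
    {
    $Report = Get-ECSWSUSComputerUpdatesStatusReport -WSUSDataBaseServerName "Database Server Name" -WSUSDataBaseName "SUSDB" -WSUSComputerObject $WSUSComputer
    $AllReports.Add($Report) | Out-Null
    }

# Full detail to JSON, or just the summary columns to CSV.
$AllReports | ConvertTo-Json -Depth 5 | Out-File "C:\Temp\WSUSReport.json"
$AllReports | Select-Object Name,AllApprovedUpdatesInstalled,LastSyncTime | Export-Csv "C:\Temp\WSUSSummary.csv" -NoTypeInformation
```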


The output is the same no matter which function you run, with the one small exception being that I capture the computer target group name in the computer target group function.

Name : pc-2158.asinetwork.local
AllPossiableUpdatesInstalled : True
AllApprovedUpdatesInstalled : True
AllPossiableUpdatesNotInstalledCount : 0
AllApprovedUpdatesNotInstalledCount : 0
LastSyncResult : Succeeded
LastSyncTime : 09/30/2017 16:11:33
LastReportedStatusTime : 09/30/2017 16:20:16
LastReportedInventoryTime :

Again, this output is really designed to feed my next function, but you might find it useful to do things like confirm that all WSUS computers that should be registered are, or to simply check the last time they synced.
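For example, a quick staleness check against the computer objects (the seven-day threshold is arbitrary, and I’m assuming LastSyncTime comes back as a datetime; cast it first if it’s a string):

```powershell
# Show any WSUS computers that haven't synced in the last 7 days.
$AllWSUSComputers |
    Where-Object {$_.LastSyncTime -lt (Get-Date).AddDays(-7)} |
    Select-Object Name,LastSyncResult,LastSyncTime
```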

$WSUSComputer | Select-Object -ExpandProperty UpdateStatusDetailed | Where-Object {$_.Action -eq "Install" -and $_.FriendlyState -ne "Installed"} | Select-Object DefaultTitle

That little snippet will show you all approved updates that are not installed.  The FriendlyState is whether the update is installed or not; the Action is whether the update is approved for install.

If we slightly modify the above command, we can show all updates that are not installed, but applicable by doing the following.

$WSUSComputer | Select-Object -ExpandProperty UpdateStatusDetailed | Where-Object {$_.FriendlyState -ne "Installed"} | Select-Object DefaultTitle

***NOTE1: This report is only as good as the updates that you allow via WSUS. Meaning, if you don’t download SQL updates, SQL updates are not going to show up in this report.

***NOTE2: This report only shows non-declined updates. If you declined an update, it won’t show up here.


I hope you find this useful. I always found the default WSUS reporting to be underwhelming and slow. It’s not that it doesn’t work, but it’s really only good for singular computers. These functions can easily be used to get the status of a large swath of systems. Best of all, with it being a Powershell object, you can now also export it in any number of formats, my preference being JSON if I want a full report, or CSV if I just want the summary.

You can also find out how I did all my SQL calls by reviewing the embedded SQL Query in my function if you prefer the raw SQL code.

Powershell Scripting: Microsoft Exchange, Configure client-specific message size limits


If you don’t know by now, I’m a huge PowerShell fan. It’s my go-to scripting language for anything related to Microsoft (and non-Microsoft) automation and administration. So when it came time to automate settings after an Exchange cumulative update, I was a bit surprised to see that some of the code examples from Microsoft didn’t contain any PowerShell at all. Surprised is probably the wrong word; how about annoyed? After all, this is not only the company that shoved this awesome scripting language down our throats, but the Exchange team was also the first to have a comprehensive set of admin abilities via PowerShell. So if that’s the case, why in the world don’t they have a single PS example for configuring client-specific message size limits?

Not to be discouraged, I said screw appcmd, I’m PS’ing this stuff, because it’s 2017 and PS / DSC is what we should be using. Here’s how I did it.

The settings:

If you’re looking for where the settings are that I’m speaking of, check out this link here. That’s how you do it the “old school” way.

The new school way:

My example below is for EWS; you’ll need to adjust it if you also want to include EAS.

    Write-Host "Attempting to set EWS settings"
    Write-Host "Starting with the backend ews custom bindings"
    $AllBackendEWSCustomBindingsWebConfigProperties = Get-WebConfigurationProperty -Filter "system.serviceModel/bindings/custombinding/*/httpsTransport" -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" -Name maxReceivedMessageSize -ErrorAction Stop | Where-Object {$_.ItemXPath -like "*EWS*https*/httpstransport"}
    Foreach ($BackendEWSCustomBinding in $AllBackendEWSCustomBindingsWebConfigProperties)
        {
        Set-WebConfigurationProperty -Filter $BackendEWSCustomBinding.ItemXPath -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" -Name maxReceivedMessageSize -value 209715200 -ErrorAction Stop
        }
    Write-Host "Finished the backend ews custom bindings"

    Write-Host "Starting with the backend ews web http bindings"
    $AllBackendEWwebwebHttpBindingWebConfigProperties = Get-WebConfigurationProperty -Filter "system.serviceModel/bindings/webHttpBinding/*" -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" -Name maxReceivedMessageSize -ErrorAction Stop | Where-Object {$_.ItemXPath -like "*EWS*"}
    Foreach ($BackendEWSHTTPmBinding in $AllBackendEWwebwebHttpBindingWebConfigProperties)
        {
        Set-WebConfigurationProperty -Filter $BackendEWSHTTPmBinding.ItemXPath -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" -Name maxReceivedMessageSize -value 209715200 -ErrorAction Stop
        }
    Write-Host "Finished the backend ews web http bindings"

    Write-Host "Starting with the back end ews request filtering"
    Set-WebConfigurationProperty -Filter "/system.webServer/security/requestFiltering/requestLimits" -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" -Name maxAllowedContentLength -value 209715200 -ErrorAction Stop
    Write-Host "Finished the back end ews request filtering"

    Write-Host "Starting with the front end ews request filtering"
    Set-WebConfigurationProperty -Filter "/system.webServer/security/requestFiltering/requestLimits" -PSPath "MACHINE/WEBROOT/APPHOST/Default Web Site/EWS" -Name maxAllowedContentLength -value 209715200 -ErrorAction Stop
    Write-Host "Finished the front end ews request filtering"

Is it technically better than appcmd?  Yes, of course, what did you think I was going to say?  It’s PS, of course it’s better than CMD.

As for how it works, it’s pretty obvious; I don’t think there’s any good reason to go into a breakdown.  I took what MS did with appcmd and just changed it to PS, with a foreach loop in the beginning to have even a little less code 🙂

You should be able to take this and easily adapt it to other IIS-based web.config settings.  The Get-WebConfigurationProperty call at the very beginning is a great way to explore any web.config via the IIS cmdlets.
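As a hedged starting point for that kind of exploration (the filter and PSPath below are placeholders you’d swap for your own site and section):

```powershell
# Walk a section of a site's web.config via the IIS cmdlets. The ItemXPath on
# the results tells you the exact filter to feed Set-WebConfigurationProperty.
Import-Module WebAdministration
Get-WebConfigurationProperty -Filter "/system.webServer/security/requestFiltering/requestLimits" `
    -PSPath "MACHINE/WEBROOT/APPHOST/Default Web Site" -Name maxAllowedContentLength
```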

Anyway, hope this helps someone.

***Update 07/29/2017:

So we did our Exchange 2013 CU15 upgrade, and everything went well with the script except for one snag.  My former script had an incorrect filter that added an “https” binding to an “http” path.  EWS didn’t like that very much (as we found out the hard way).  Anyway, it should be fixed now; I updated the script.  Just so you know which line was affected, you can see the before and after below.  Basically, my original filter grabbed both the http and https transports; I guess technically each web property has the potential for both.  My new filter goes after only https EWS configs + https transports.

#I changed this:

$AllBackendEWSCustomBindingsWebConfigProperties = Get-WebConfigurationProperty -Filter "system.serviceModel/bindings/custombinding/*/httpsTransport" -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" -Name maxReceivedMessageSize -ErrorAction Stop | Where-Object {$_.ItemXPath -like "*EWS*"}

#To this

$AllBackendEWSCustomBindingsWebConfigProperties = Get-WebConfigurationProperty -Filter "system.serviceModel/bindings/custombinding/*/httpsTransport" -PSPath "MACHINE/WEBROOT/APPHOST/Exchange Back End/ews" -Name maxReceivedMessageSize -ErrorAction Stop | Where-Object {$_.ItemXPath -like "*EWS*https*/httpstransport"}

Powershell Scripting: Get-ECSESXHostVIBToPCIDevices


If you remember a little bit ago, I said I was trying to work around the lack of driver management with vendors.  This function is the start of a few tools you can use to potentially make your life a little easier.

VMware’s drivers are VIBs (but not all VIBs are drivers).  So the key to knowing whether you have the correct drivers is to find which VIB matches which PCI device.  This function does that work for you.

How it works:

First, I hate to be the bearer of bad news, but if you’re running ESXi 5.5 or below, this function isn’t going to work for you.  It seems the names of the modules and VIBs don’t line up via ESXCLI in 5.5, but they do in 6.0.  So if you’re running 6.0 and above, you’re in luck.

As for how it works, it’s actually pretty simple.

  1. Get a list of all PCI devices
  2. Get a list of all modules (which aren’t the same as VIBS).
  3. Get a list of all VIBs.
  4. Loop through each PCI device
    1. See if we find a matching module
      1. Take the module and see if we find a VIB that matches it.
  5. Take the results of each loop, add it to an array
  6. Spit out the array once the function is done running
  7. Your results should be present.

How to execute it:

Ok, to begin with, I’m not doing fancy pipelining or anything like that.  Simply enter the name of the ESXi host as it is in vCenter and things will work just fine.  There is support for verbose output if you want to see all the PCI devices, modules and VIBs being looped through.

Get-ECSESXHostVIBToPCIDevices -VMHostName "ServerNameAsItIsInvCenter"

If you want to do something like loop through a bunch of hosts in a cluster, that’s awesome, you can write that code :).
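If you do want to write that code, a minimal sketch might look like this (assumes PowerCLI is loaded, you’re connected to vCenter, and the cluster name is a placeholder):

```powershell
# Run the report against every host in a cluster and keep the results.
$AllHostResults = New-Object System.Collections.ArrayList
Foreach ($VMHost in (Get-Cluster -Name "YourClusterName" | Get-VMHost))
    {
    $AllHostResults.Add((Get-ECSESXHostVIBToPCIDevices -VMHostName $VMHost.Name)) | Out-Null
    }
$AllHostResults
```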

How to use the output:

Ok great, so now you’ve got all this output, now what?  Well, this is where we’re back to the tedious part of managing drivers.

  1. Fire up the VMware HCL web site and go to the IO devices section
  2. Now, there are three main columns from my output that you need to find the potential list of drivers.  Yeah, even with an exact match, there may be anywhere from zero devices listed (take that as you’re running the latest) to one or more hits.
    1. PCIDeviceSubVendorID
    2. PCIDeviceVendorID
    3. PCIDeviceDeviceID
  3. Those three columns are all you need.  Now a few notes with this.
    1. If there are fewer than four characters, VMware will add leading zeros in their web drop-down picker.  For example, if my output shows “e3f”, on VMware’s drop-down picker you want to look for “0e3f”.
    2. If you get a lot of results, what I suggest doing next is seeing if the vendor matches your server vendor.  If you find a server vendor match and there is still more than one result, see if it’s something like the difference between a dual-port and a single-port card.  If you don’t see your server vendor listed, see if the card vendor is listed.  For example, in UCS servers, instead of seeing Cisco for a RAID controller, you would likely find a match for “Avago” or “Broadcom”.  Yeah, it totally gets confusing with HW vendors buying each other LOL.
  4. Once you find a match, the only thing left to do is look at the output of column “ModuleVibVersion” in my script and see if you’re running the latest driver available, or if it is at least recent.  Just keep in mind, if you update the driver, make sure the FW you’re running is also certified for that driver.
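To make the HCL lookup a little easier, you can do the zero padding yourself; this is just a formatting sketch over the function’s output (stored in $Results here):

```powershell
# Left-pad each ID to four characters so it matches VMware's drop-down picker
# (e.g. "e3f" becomes "0e3f").
$Results | Select-Object @{Name="SubVendorID";Expression={([string]$_.PCIDeviceSubVendorID).PadLeft(4,"0")}},
    @{Name="VendorID";Expression={([string]$_.PCIDeviceVendorID).PadLeft(4,"0")}},
    @{Name="DeviceID";Expression={([string]$_.PCIDeviceDeviceID).PadLeft(4,"0")}},
    ModuleVibVersion
```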

Where’s the code?

Right here

What’s next / missing?

Well, a few things:

  1. I haven’t found a good way yet to loop through each PCI device and see its FW version.  That’s a pretty critical bit of info, as I’ve said before.
  2. Even if I COULD find the firmware version for you, you’re still going to need to cross-reference it against your server vendor.  Without an API, this is also going to be a tedious process.
  3. You need to manually check the HCL because in 2017, VMware still doesn’t have an API, let alone a RESTful one, to do the query.  If we had that, the next logical step would be to take this output and query an API to find possible matches.  For now, you’ll need to do it manually.
    1. Ideally, the same API would let you download a driver if you wanted.
  4. VMware lacks the ability to add VIBs via PowerCLI, or to really manage baselines and whatnot.  So again, VMware is really dropping the ball here.  This time it’s the “Update Manager” team.


Hope this helps a bit, it’s far from perfect, but I’ve used it a few times, and found a few NIC drivers and RAID controllers that had older drivers.

Quicky Review: GPO/GPP vs. DSC


If you’re not in a DevOps based shop, or living under a rock, you may not know that Microsoft has been working on a solution that by all accounts sounds like it’s poised to usurp GPO / GPP.  The solution I’m talking about is Desired State Configuration, or DSC. According to all the marketing hype, DSC is the next best thing for IT since virtualization.  If the vision comes to fruition, GPO and GPP will be legacy solutions.  Enterprise mobility management will be used for desktops, and DSC will be used for servers.  Given that I currently manage 700 VMs and about an equal number of desktops, I figured why not take it for a test drive. So I stood up a simplistic environment, played around with it for a full week, and my conclusion is this.

I can totally see why DSC is awesome for non-domain-joined systems, but in today’s iteration it’s absolutely not a good replacement for domain-joined systems. Does that mean you should shun it since all your systems are domain joined?  That depends on the size of your environment and how much you care about consistency and automation.  Below are all my unorganized thoughts on the subject.

The points:

DSC can do everything GPO can do, but the reverse is not true. At first that sounds like DSC is clearly the winner, but I say no.  The reality is, GPO does what it was meant to do, and it does it well.  Reproducing what you’ve already done in GPO, while certainly doable, has the potential to make your life hell.  Here are a few fun facts about DSC.

  1. The DSC “agent” runs as local system. This means it only has local computer rights, and nothing else.
  2. Every server that you want to manage with DSC needs its own unique config file built. That means if you have 700 servers like me, and you want to manage them with DSC, each one is going to have a unique config file.  Don’t get me wrong, you can create a standard config and duplicate it “x” times, but nonetheless, it’s not like GPO where you just drop the computer in an OU and walk away.  That being said, and to be fair, there’s no reason you couldn’t automate the DSC config build process to do just that.
    1. DSC has no concept of “inheritance / merging” like you’re used to with GPO. Each config must be built to encompass all of the things that GPO would normally handle in a very easy way.  DSC does have config merges in the sense that you can have a partial config for, say, your OS team, your SQL team and maybe some other team.  So they can “merge” configs and work on them independently (awesome).  However, if the DBA config and the OS config conflict, errors are thrown, and someone has to figure it out.  Maybe not a bad thing at all, but nonetheless it’s a different mindset, and there is certainly potential for conflicts to occur.
  3. A DSC configuration needs to store user credentials for a lot of different operations. It stores these credentials in a config file that is hosted both on a pull server (SMB share / HTTPS site) and on the local host.  What this means is you need a certificate to encrypt the config file, and of course for the agent to decrypt the config file.  You thought managing certificates was a pain for a few Exchange servers and some web sites?  Ha! Now every server and the build server need certs.  In the most ideal scenario, you’re using some sort of PKI infrastructure.  This is just the start of the complexity.
    1. You of course need to deploy said certificate to the DSC system before the DSC config file can be applied. In case you haven’t figured it out by now, this is a bootstrap problem you have to solve on your own if you don’t use GPO.  You could use the same certificate and bake it into an image.  That certainly makes your life easier, but it’s also going to make your life that much harder when it comes to replacing those certs on 700 systems.  Not to mention, a paranoid security nut would argue how terrible that potentially is.
  4. The DSC agent of course needs to be configured before it knows what to do. You can “push” configurations, which mitigates some of these issues, but the preferred method is “pull”.  So that means you need to find a way (bootstrap again) to configure your DSC agent so that it knows where to pull its config from and what certificate thumbprint to use.

Based on the above point, you probably think DSC is a mess, and to some degree it is. However, a few other thoughts.

  1. It’s a new solution, so it still needs time to mature. GPO has been in existence since 2000, and DSC, I’m going to guess, since maybe 2012.  GPO is mature, and DSC is the new kid.
  2. Remember when I wrote that DSC can do everything that GPO can do, but not the reverse? Well, let’s dig into that.  Let’s just say you still manage Exchange on premises, or more likely, you manage some IIS / SQL systems.  DSC has the potential to make setting those up and administering them significantly easier.  DSC can manage not only the simple stuff that GPO does, but also way beyond that.  For example, here are just a few things.
    1. For exchange:
      1. DSC could INSTALL exchange for you
      2. Configure all your connectors, including not only creating them, but defining all the “allowed to relay” and what not.
      3. Configure all your web settings (think removing the default domain\username).
      4. Install and configure your exchange certificate in IIS
      5. Configure all your DAG relationships
      6. Setup your disks and folders
    2. For SQL
      1. DSC could INSTALL sql for you.
      2. Configure your max and min server memory
      3. Configure your TempDB requirements
      4. Setup all your SQL jobs and other default DB’s
    3. Pick another MS app, and there’s probably a series of DSC resources for it…
  3. DSC lets you know whether things are compliant, and it can automatically attempt to remediate them. It can even handle things like auto reboots if you want it to.  GPO can’t do this.  To the above point, what I like about DSC is that I’ll know if someone went into my receive connector and added an unauthorized IP, and even better, DSC will whack it and set it back to what it should be.
  4. Part of me thinks that while DSC is cool, I wish Microsoft would just extend GPO to encompass the things that DSC does that GPO doesn’t. I know it’s because the goal is to start with non-domain-joined systems, but nonetheless, GPO works well and honestly, I think most people would rather use GPO over DSC if both were equally capable.
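To give you a feel for what a config actually looks like, here’s a minimal push-mode example; the node name and resources are generic placeholders, not anything from my environment:

```powershell
Configuration ExampleBaseline
    {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "Server01"
        {
        # Make sure IIS is installed...
        WindowsFeature WebServer
            {
            Name   = "Web-Server"
            Ensure = "Present"
            }
        # ...and that the service stays running.
        Service W3SVC
            {
            Name        = "W3SVC"
            State       = "Running"
            StartupType = "Automatic"
            }
        }
    }

# Compile to a per-node MOF, then push it and watch it converge.
ExampleBaseline -OutputPath "C:\DSC"
Start-DscConfiguration -Path "C:\DSC" -Wait -Verbose
```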


Should you use DSC for domain-joined systems?  I think so, or at least I think it would be a great way to learn DSC.  I currently look at DSC as a great addition to GPO, not a replacement.  My goal is going to be to use GPO to manage the DSC dependencies (like the certificates, as one example) and then use DSC for specific systems where I need consistency, like our Exchange, SQL and web servers.  At this stage, unless you have a huge non-domain-joined infrastructure, and you NEED to keep it that way, I wouldn’t use DSC to replace GPO.


Powershell Scripting: Invoke-ECSSQLQuery

Quick Powershell post for those of you that may on occasion want to retrieve data out of a SQL table via Powershell.  I didn’t personally do most of the heavy lifting in this; I simply took some work that various folks out there did and put it into a repeatable function.

Firstly, head over to here to my GitHub if you want to grab it.  I’ll be keeping it updated as change requests come in, or as I get new ideas, so make sure if you do use my function, that you check back in on occasion for new versions.

The two examples are below:

Syntax example for windows authentication:

Invoke-ECSSQLQuery -DatabaseServer "ServerNameOnly or ServerName\Instance" -DatabaseName "database" -SQLQuery "select column from table where column = '3'"

Syntax example for SQL authentication:

Invoke-ECSSQLQuery -DatabaseServer "ServerNameOnly or ServerName\Instance" -DatabaseName "database" -SQLUserID "SA" -SQLUserPassword "Password" -SQLQuery "select column from table where column = '3'"

There is also an optional “timeout” parameter that can be used for really long-running queries.  By default it’s 30 seconds; you can set it as high as you want, or specify “0” if you don’t want any timeout.

Powershell Scripting: Get-ECSVMwareVirtualDiskToWindowsLogicalDiskMapping

Building off my last function, Get-ECSPhysicalDiskToLogicalDiskMapping, which took a Windows physical disk and mapped it to a Windows logical disk, this function will take a VMware virtual disk and map it to a Windows logical disk.

This function has the following dependencies and assumptions:

  • It depends on my windows physical to logical function and all its dependencies.
  • It assumes your VM name matches your windows server name
  • It assumes you’ve pre-loaded the VMware powershell snap-in.

The basic way the function works is it starts by getting the Windows physical-to-logical mapping and storing that in an array.  This array houses two key pieces of information.

  • The physical disk serial number
  • The logical disk name (C:, D:, etc.)

Then we get a list of all of the VM’s disks, which of course have the same exact serial numbers, just formatted a little differently (which I convert in the function).

Finally, we map the Windows physical disk serial number to the VMware virtual disk serial number, add the VMware virtual disk name and the Windows logical disk name (we don’t care about the Windows physical disk; it was just used for the mapping) into the final array, and echo them out for your use.
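Conceptually, the matching loop looks something like this; the property paths and the exact serial-number normalization are from memory, so treat this as a sketch and check the real function on GitHub:

```powershell
# $WindowsDiskMappings holds the output of Get-ECSPhysicalDiskToLogicalDiskMapping.
Foreach ($VMDisk in (Get-HardDisk -VM "YourVMName"))
    {
    # VMware stores the disk UUID with spaces/dashes; Windows reports the same
    # value as a bare hex string, so strip and lowercase before comparing.
    $VMwareSerial = ($VMDisk.ExtensionData.Backing.Uuid -replace "[\s-]","").ToLower()
    $Match = $WindowsDiskMappings | Where-Object {([string]$_.PhysicalDiskDiskSerialNumber).ToLower() -eq $VMwareSerial}
    If ($Match)
        {
        [pscustomobject]@{VMwareVirtualDisk = $VMDisk.Name ; WindowsLogicalDisk = $Match.LogicalDiskLetter}
        }
    }
```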

See below for an example:

Get-ECSVMwareVirtualDiskToWindowsLogicalDiskMapping -ComputerName "YourComputerName"

VMwareVirtualDisk WindowsLogicalDisk
—————– ——————
Hard disk 1 C:
Hard disk 2 K:
Hard disk 3 F:
Hard disk 4 N:
Hard disk 5 J:
Hard disk 6 V:
Hard disk 7 W:
Hard disk 8 G:

… and there you have it, a very quick way to figure out which vmware virtual disk houses your windows drive.  You’ll find the most recent version of the function here.

Powershell Scripting: Get-ECSPhysicalDiskToLogicalDiskMapping

I figured it was about time to knock out something a little technical for a change, and I figured I’d start with this little function, which is part of a larger script that I’ll talk more about later.

There may be a time where you need to find the relationship between a physical disk drive and your logical drive.  In my case, I had a colleague ask me if there was an easy way to match a VMware disk to a Windows disk so he could extend the proper drive.  After digging into it a bit, I determined it was possible, but it was going to take a little work.  One of the prerequisites is to first find which drive letter belongs to which physical disk (from Windows’ view).

For this post, I’m going to go over the prerequisite function I built to figure out this portion.  In a later post(s) we’ll put the whole thing together.  Rather than burying this into a larger overall script (which is the way it started), I broke it out and modularized it so that you may be able to use it for other purposes.

First, to get the latest version of the function, head on over to my GitHub project located here.  I’m going to be using GitHub for all my scripts, so if you want to stay up to date with anything I’m writing, that would be a good site to bookmark.  I’m very new to Git, so bear with me as I learn.

To start, as you can tell by looking at the function, it’s pretty simple by and large.  It’s 100% WMI.  A lot of this functionality exists in Windows 2012+, but I wanted something that would work with 2003+.   Now that you know it’s based on WMI, there are two important notes to bear in mind:

  1. WMI does not require admin rights for local computers, but it does for remote ones.  Keep this in mind if you’re planning to use this function for remote calls.
  2. WMI also requires that you have the correct firewall ports open for remote access.  Again, I’m not going to dig into that.  I’d suspect if you’re doing monitoring, or any kind of bulk administration, you probably already have those ports open.

Microsoft basically has the mapping in place within WMI; the only problem is you need to connect a few different layers to get from physical to logical.  In reading my function, you’ll see that I’m doing a number of nested foreach loops, and that’s the way I’m connecting things together.  Basically it goes like this….

  1. First we need to find the physical disk to partition mappings doing the following:  WIN32_DiskDrive property DeviceID is connected to Win32_DiskDriveToDiskPartition property Antecedent.  ***NOTE: the DeviceID needs to be formatted so that it matches the Antecedent by adding extra backslashes “\”.
  2. Now we need to map the partition(s) on the physical disk to the logical disk(s) that exist doing the following: Win32_DiskDriveToDiskPartition property Dependent is connected to Win32_LogicalDiskToPartition property Antecedent.
  3. Now that we know what logical drives exist on the physical disk, we can grab all the info we want about the logical drive doing the following: Win32_LogicalDiskToPartition property Dependent maps to Win32_LogicalDisk property “__PATH”.
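Strung together, those three steps look roughly like this (trimmed way down from the real function, with error handling and the extra properties omitted):

```powershell
Foreach ($PhysicalDisk in (Get-WmiObject -Class Win32_DiskDrive))
    {
    # Step 1: double the backslashes in DeviceID so it matches the Antecedent string.
    $EscapedDeviceID = $PhysicalDisk.DeviceID.Replace("\","\\")
    Foreach ($DiskToPartition in (Get-WmiObject -Class Win32_DiskDriveToDiskPartition | Where-Object {$_.Antecedent -like "*$EscapedDeviceID*"}))
        {
        # Step 2: partition on that physical disk -> logical disk(s) on that partition.
        Foreach ($LogicalToPartition in (Get-WmiObject -Class Win32_LogicalDiskToPartition | Where-Object {$_.Antecedent -eq $DiskToPartition.Dependent}))
            {
            # Step 3: resolve the logical disk itself via its __PATH.
            $LogicalDisk = Get-WmiObject -Class Win32_LogicalDisk | Where-Object {$_.__PATH -eq $LogicalToPartition.Dependent}
            [pscustomobject]@{PhysicalDiskNumber = $PhysicalDisk.Index ; LogicalDiskLetter = $LogicalDisk.DeviceID}
            }
        }
    }
```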

That’s the basic method for connecting physical disk to logical disk.  You can see that I use an array to store results, and I’ve picked a number of properties from the physical disk and the logical disk that I needed.  You could easily add other properties that serve your needs.  And if you do… please contribute to the function.

As for how to use the function, it’s simple.

For the local computer, simply run the function with no parameters:

LogicalDiskSize : 1000198832128
PhysicalDiskController : 0
ComputerName : PC-2158
PhysicalDiskControllerPort : 1
LogicalDiskFreeSpace : 946930122752
PhysicalDiskNumber : 0
PhysicalDiskSize : 1000194048000
LogicalDiskLetter : E:
PhysicalDiskModel : Intel Raid 1 Volume
PhysicalDiskDiskSerialNumber : ARRAY

LogicalDiskSize : 255533772800
PhysicalDiskController : 0
ComputerName : PC-2158
PhysicalDiskControllerPort : 0
LogicalDiskFreeSpace : 185611300864
PhysicalDiskNumber : 1
PhysicalDiskSize : 256052966400
LogicalDiskLetter : C:
PhysicalDiskModel : Samsung SSD 840 PRO Series
PhysicalDiskDiskSerialNumber : S12RNEACC99205W

For a remote system, simply specify the “-ComputerName” parameter:

Get-ECSPhysicalDiskToLogicalDiskMapping -ComputerName "pc-2158"
LogicalDiskSize : 1000198832128
PhysicalDiskController : 0
ComputerName : PC-2158
PhysicalDiskControllerPort : 1
LogicalDiskFreeSpace : 946930122752
PhysicalDiskNumber : 0
PhysicalDiskSize : 1000194048000
LogicalDiskLetter : E:
PhysicalDiskModel : Intel Raid 1 Volume
PhysicalDiskDiskSerialNumber : ARRAY

LogicalDiskSize : 255533772800
PhysicalDiskController : 0
ComputerName : PC-2158
PhysicalDiskControllerPort : 0
LogicalDiskFreeSpace : 185611976704
PhysicalDiskNumber : 1
PhysicalDiskSize : 256052966400
LogicalDiskLetter : C:
PhysicalDiskModel : Samsung SSD 840 PRO Series
PhysicalDiskDiskSerialNumber : S12RNEACC99205W

Hope that helps you down the road.  Again, this is going to be part of a slightly larger script that will ultimately map a Windows Logical Disk to a VMware Virtual Disk to make finding which disk to expand easier.