Tuesday, March 12, 2013

Change Device COM Port via PowerShell

On a recent project, we had to make sure that a specific USB hardware device was always assigned COM1. Because of the way USB devices are assigned COM ports, it was never going to get COM1 the first time it was plugged in. I whipped up a quick script to check for the device and then change its COM port. It worked great, but as is the case with many first iterations of scripts, there was a but...

Even though a new COM port was getting assigned, we found that the previous COM port remained marked as in use in the ComDB. The ComDB is a binary registry value that keeps track of which COM ports are in use. A great description of how the value works can be found here: http://www.rosseeld.be/DRO/PIC/NextSerialPortNo.htm.

To solve the problem we had two options:
1) Find a way to get access to the msports.dll functions within PowerShell which likely meant digging into the Windows Driver SDK.
2) Parse the binary registry value and update the ComDB manually.

I chose door number two because it seemed much more straightforward. The task was actually a little trickier than I expected, because the ComDB reverses the COM port bits in each byte (e.g. COM1 is represented by Bit 8 and COM8 is represented by Bit 1). Also, to properly convert each byte value, the byte had to be represented by all 8 binary digits (leading zeros can't be truncated), so I used a loop to make sure the binary character array had the proper number of entries.
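To make that byte/bit math concrete, here is a small helper (the function name is mine, not part of the original script) that computes which byte of the ComDB, and which character position in an 8-character binary string, correspond to a given COM port number:

```powershell
# Hypothetical helper (not in the original script) showing the ComDB byte/bit math
function Get-ComDbPosition {
    param ([int]$PortNumber)
    $Byte = ($PortNumber - ($PortNumber % 8)) / 8
    $Bit  = 8 - ($PortNumber % 8)
    if ($Bit -eq 8) {          # COM8, COM16, ... land in the previous byte
        $Bit = 0
        $Byte = $Byte - 1
    }
    # $Bit is the index into an 8-character binary string, where index 0
    # is the most significant bit and index 7 the least significant
    @{ Byte = $Byte; Bit = $Bit }
}

Get-ComDbPosition 1   # COM1 -> byte 0, bit index 7 (least significant bit)
Get-ComDbPosition 8   # COM8 -> byte 0, bit index 0 (most significant bit)
```

Note how COM1 lands on the last character of the binary string and COM8 on the first, which is exactly the reversal described above.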

The function below is the end result...

$DeviceName = "My Hardware Name"
$ComPort = "COM4"

function Change-ComPort {

Param ($Name,$NewPort)

#Queries WMI for Device
$WMIQuery = 'Select * from Win32_PnPEntity where Description = "' + $Name + '"'
$Device = Get-WmiObject -Query $WMIQuery

#Execute only if device is present
if ($Device) {

#Get current device info
$DeviceKey = "HKLM:\SYSTEM\CurrentControlSet\Enum\" + $Device.DeviceID
$PortKey = "HKLM:\SYSTEM\CurrentControlSet\Enum\" + $Device.DeviceID + "\Device Parameters"
$Port = get-itemproperty -path $PortKey -Name PortName
$OldPort = [convert]::ToInt32(($Port.PortName).Replace("COM",""))

#Set new port and update Friendly Name
$FriendlyName = $Name + " (" + $NewPort + ")"
New-ItemProperty -Path $PortKey -Name "PortName" -PropertyType String -Value $NewPort -Force
New-ItemProperty -Path $DeviceKey -Name "FriendlyName" -PropertyType String -Value $FriendlyName -Force

#Release Previous Com Port from ComDB
$Byte = ($OldPort - ($OldPort % 8))/8
$Bit = 8 - ($OldPort % 8)
if ($Bit -eq 8) {
$Bit = 0
$Byte = $Byte - 1
}
$ComDB = get-itemproperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\COM Name Arbiter" -Name ComDB
$ComBinaryArray = ([convert]::ToString($ComDB.ComDB[$Byte],2)).ToCharArray()
#Pad with leading zeros so the array always has 8 entries
while ($ComBinaryArray.Length -ne 8) {
$ComBinaryArray = ,"0" + $ComBinaryArray
}
$ComBinaryArray[$Bit] = "0"
$ComBinary = [string]::Join("",$ComBinaryArray)
$ComDB.ComDB[$Byte] = [convert]::ToInt32($ComBinary,2)
Set-ItemProperty -path "HKLM:\SYSTEM\CurrentControlSet\Control\COM Name Arbiter" -Name ComDB -Value ([byte[]]$ComDB.ComDB)
}
}

Change-ComPort $DeviceName $ComPort

Tuesday, February 19, 2013

PowerShell Setting ACL Inheritance and Propagation

I recently worked on a project that required a lot of ad hoc moving of user home directories. To make the process easier for the support teams, I put together a PowerShell script (and a web front end for it, but more on that later) that would move a user's home directory to a new server and update the account's homeDirectory attribute.

As is often the case these days, these home directories are stored on NAS devices that typically auto-create home directories on the fly with the appropriate permissions. As a result, my script needed to explicitly grant the user Full Control to their new folder. I didn't find the native help on the set-acl cmdlet very helpful in regard to adding a user to an existing folder's ACL. Luckily, I had done something similar in VB.NET and had a good idea of what needed to be done. That, along with a little get-acl and get-member action (and a little trial and error), got me where I needed to be.

The process is actually pretty straightforward.

  1. Capture the current ACL with Get-ACL
    $DirACL = Get-ACL "\\FileServer\users$\JoeUser"
  2. Create a new FileSystemAccessRule for the user and add it to the ACL you just captured
    $AccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule "Domain\JoeUser","FullControl","ContainerInherit,ObjectInherit","None","Allow"
    $DirACL.AddAccessRule($AccessRule)
  3. Write the new ACL back with set-acl
    Set-ACL "\\FileServer\users$\JoeUser" $DirACL
Since it takes a couple of lines and is something that often needs to be repeated, it can easily be wrapped in a function like this:

Function Set-FullControl {
     param ([string]$User, [string]$FolderPath)

     $DirACL = Get-ACL $FolderPath
     $AccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule $User,"FullControl","ContainerInherit,ObjectInherit","None","Allow"
     $DirACL.AddAccessRule($AccessRule)
     Set-ACL $FolderPath $DirACL
}

Set-FullControl "DOMAIN\JoeUser" "\\FileServer\users$\JoeUser"

Monday, October 22, 2012

Flattening an XML Structure for MySQL Import

Recently a buddy asked me to help him convert BuddyPress XML data into a MySQL database. There are several software options that will import XML into MySQL or other databases, but they don't handle fields nested past the second level. So this project posed two challenges. First, the BuddyPress data is only available in single-day increments in XML files, so you have to download an XML file for every day for every table. That is a lot of files. Second, all of that XML data needed to be merged into a single file, with all of the fields flattened to a single level while maintaining their identity.

To solve problem #1, I wrote a PowerShell script that uses dates and an array of table names to do a series of loops. This allowed the automatic generation of a URL for each day for each table. Each URL was passed to the following function, which downloads the XML file. Since each URL was limited to 1000 records, it was possible for a table to have more than one XML file per day, so I added some logic to check the XML file for another page of records, and to call itself against that next page if it existed.
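As a sketch, the outer loop that generated those URLs might look like the following; the table names, date range, and URL pattern here are all made up for illustration (the real ones came from BuddyPress):

```powershell
# Sketch only: table names, date range, and URL pattern are assumptions
$tables = @("Blog", "BlogComment")
$start  = Get-Date "2012-06-01"
$end    = Get-Date "2012-06-03"

$urls = foreach ($table in $tables) {
    for ($d = $start; $d -le $end; $d = $d.AddDays(1)) {
        # In the real script, each generated URL was handed to the download function
        "http://example.com/export/{0}/{1}.xml" -f $table, $d.ToString("yyyy-MM-dd")
    }
}
$urls.Count   # 2 tables x 3 days = 6 URLs
```

Each generated URL (and a matching local file name) would then be passed to the download function below.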

$global:wc = new-object system.net.webclient

function download-xml {

param ($url, $file)

write-host Downloading $file
#Downloads the file, then checks it for a link to a next page of records
$global:wc.DownloadFile($url, $file)
[xml]$xml = get-content $file
if ($xml.ContentEnvelope.ContentHeader.NextPageUrl) {
$nextFile = $file + "_" + $xml.ContentEnvelope.ContentHeader.CurrentPage
download-xml $xml.ContentEnvelope.ContentHeader.NextPageUrl $nextFile
}
}

That script worked great, and netted me about 3900 XML files, each with up to 1000 records. To get them into MySQL, I needed each table's XML files combined into a single file. That part is pretty easy, but while combining them, I also needed to flatten all of the fields up to the second level. That meant turning this:

      <Title>Some Blog Title</Title>
      <UnencodedTitle>Some Blog Title</UnencodedTitle>
      <Body>&lt;p&gt;Lots of blog content :)&lt;/p&gt;</Body>
      <Tags />
      <TagsPrevious />

Into this:


<Title>Some Blog Title</Title>
<UnencodedTitle>Some Blog Title</UnencodedTitle>
<Body>&lt;p&gt;Lots of blog content :)&lt;/p&gt;</Body>

<CreatedOn>2012-06-19 10:15:50-07:00</CreatedOn>
<LastEditTimeStamp>2012-03-26 10:18:46-07:00</LastEditTimeStamp>
<LastUpdated>2012-06-19 10:15:50-07:00</LastUpdated>
<TimeStamp>2012-03-26 10:18:46-07:00</TimeStamp>
<Title>Keep Trying!</Title>
<UnencodedTitle>Keep Trying!</UnencodedTitle>

Since PowerShell handles XML natively, it seemed like the perfect tool. I didn't want to build a custom script for each XML file, so the script had to be agnostic to the XML fields. To accomplish that, the script imports an XML file, loops through each property, and checks the property type. If the property type is "XmlElement", the script knows the property has child items and starts looping through those properties. If the type isn't "XmlElement", it knows it has reached a field and writes the output to an XML file. Because each table potentially has thousands of rows, the file write is done with the .NET [System.IO.StreamWriter] class, which is by far the fastest way to write a file from PowerShell.

Placing the XML flattening process into a function allowed me to easily loop through all of the files for a specific table, and all of the records in each file. In order to maintain the integrity of the XML paths, I pass the parent path to the function. That way we don't end up with multiple "Key" fields at the root; instead, each field moved to the root is named with its full XML path (the function builds the name by joining each parent's name with an underscore).

Here is the flatten-field function that accomplishes that:

$stream = [System.IO.StreamWriter] "C:\BlogPost.xml"

function global:flatten-field {

param ($parent, $element, $fieldname)

if ($parent -eq "") {
$label = $fieldname
} else {
$label = $parent + "_" + $fieldname
}

if ((($element.GetType()).Name) -eq "XmlElement") {
#Element has children, so recurse into each of its properties
$element | get-member | where-object { $_.MemberType -eq "Property" } | % {
global:flatten-field $label $element.($_.Name) $_.Name
}
} else {
#Reformat date/time strings, then escape XML special characters
if (($element.length -eq 25) -and ($element -like "2012-*")) { $element = $element.replace("T"," ") }
if ($fieldname -eq "Byte") {
$row = "`t<" + $label + "1>" + $element[0] + "</" + $label + "1>" + "<" + $label + "2>" + $element[1] + "</" + $label + "2>"
} else {
$element = $element.replace("&","&amp;")
$element = $element.replace("'","&apos;")
$element = $element.replace('"',"&quot;")
$element = $element.replace("<","&lt;")
$element = $element.replace(">","&gt;")
$row = "`t<" + $label + ">" + $element + "</" + $label + ">"
}
$stream.WriteLine($row)
}
}
With the script looping through the hundreds of files, I was able to start it processing and go to bed. When I got up the next morning, I had an XML file for each table that I was able to import right into a MySQL database for my friend.

If you would like copies of the full scripts, feel free to email me.

Monday, October 1, 2012

PowerShell Application for Deleting ThinApp Cache

User self-service is a win-win. Users are empowered to solve their own problems, and admins are freed up to do bigger things (which sometimes includes fixing the core issue behind the problem you are self-servicing). That is exactly the scenario a cohort and I found ourselves in. We inherited a VMware View VDI environment where all of the applications were packaged with ThinApp. ThinApp is great in some scenarios, but it can also be problematic. The users of the VDI environment were constantly experiencing application problems that required them to exit the application and delete the ThinApp cache from their AppData directory. This was frustrating for them, and time consuming for us as we tried to help them resolve their issues while also trying to stabilize the environment. To free up our time and help the users achieve quicker resolution, I turned to PowerShell to create a GUI for users to delete their own ThinApp cache. The script uses PowerShell to see what items are in a user's ThinApp cache. It then generates a GUI that lets them select the application that is giving them problems. The script then kills the application and deletes the cache. We gave users an icon to launch the script.

This is what the GUI looks like when a user launches it:
Below is the script that creates this window and deletes the cache. It relies on the .NET System.Windows.Forms to create the application window.

#$ErrorActionPreference = "silentlycontinue"

# Gets environment variables
$username = get-content env:username
$appdata = get-content env:AppData
$ScriptPath = Split-Path -Path $MyInvocation.MyCommand.Path -Parent

$ThinAppCache = $appdata + "\Thinstall"
$ThinAppUnreg = $ThinAppCache + "\UnRegister"

# Finds all entries in the ThinApp Cache
$CacheItems = @()
if (test-path $ThinAppCache) {
    get-childitem $ThinAppCache | select-object Name | % {
        if ($_.Name -ne "Unregister") { $CacheItems += $_.Name }
    }
}

# Creates Form
[void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms") 
[void] [System.Reflection.Assembly]::LoadWithPartialName("System.Drawing") 

$objForm = New-Object System.Windows.Forms.Form 
$objForm.Text = "ThinApp Cache Deleter" 
$objForm.Size = New-Object System.Drawing.Size(320,100)
$objForm.MaximumSize = New-Object System.Drawing.Size(320,800)
$objForm.AutoSize = $true
$objForm.StartPosition = "CenterScreen"
$objForm.Icon = new-object System.Drawing.Icon("$ScriptPath\delete-thinappcache.ico")
$objForm.WindowState = [System.Windows.Forms.FormWindowState]"Normal";

#Sets Enter and Escape Keys to Close Form
$objForm.KeyPreview = $True
$objForm.Add_KeyDown({if ($_.KeyCode -eq "Enter") {$script:result=$objListBox.SelectedItem;$objForm.Close()}})
$objForm.Add_KeyDown({if ($_.KeyCode -eq "Escape") {$script:result="Cancel";$objForm.Close()}})

$objTableLayout = new-object "System.Windows.Forms.TableLayoutPanel" 
$objTableLayout.AutoSize = $true 
$objTableLayout.width = 320
$objTableLayout.AutoSizeMode = "GrowAndShrink"
$objTableLayout.Dock = "Fill" 
$objTableLayout.CellBorderStyle = "none"
$objTableLayout.RowCount = 3
$objTableLayout.ColumnCount = 2
[void]$objTableLayout.ColumnStyles.Add((new-object System.Windows.Forms.ColumnStyle([System.Windows.Forms.SizeType]::Percent, 50)))
[void]$objTableLayout.ColumnStyles.Add((new-object System.Windows.Forms.ColumnStyle([System.Windows.Forms.SizeType]::Percent, 50)))

$objLabel = New-Object System.Windows.Forms.Label
$objLabel.MaximumSize = New-Object System.Drawing.Size(320,600)
$objLabel.AutoSize = $true
$objLabel.Text = "Please select ThinApp Cache to delete."
$objLabel.Dock = "Fill"

$objListBox = New-Object System.Windows.Forms.ListBox 
$objListBox.Dock = "Fill"
$objListBox.Height = 80
$CacheItems | % { 
    $item = $_
    [void] $objListBox.Items.Add($item) 
}
$objListBox.SelectedIndex = 0

$DeleteButton = New-Object System.Windows.Forms.Button
$DeleteButton.Dock = "Fill"
$DeleteButton.Text = "Delete"
$DeleteButton.Add_Click({$script:result=$objListBox.SelectedItem;$objForm.Close()})

$CancelButton = New-Object System.Windows.Forms.Button
$CancelButton.Dock = "Fill"
$CancelButton.Text = "Cancel"
$CancelButton.Add_Click({$script:result="Cancel";$objForm.Close()})

#Adds the controls to the layout panel and the panel to the form
$objTableLayout.Controls.Add($objLabel,0,0)
$objTableLayout.SetColumnSpan($objLabel,2)
$objTableLayout.Controls.Add($objListBox,0,1)
$objTableLayout.SetColumnSpan($objListBox,2)
$objTableLayout.Controls.Add($DeleteButton,0,2)
$objTableLayout.Controls.Add($CancelButton,1,2)
$objForm.Controls.Add($objTableLayout)

$objForm.Topmost = $True

$objForm.Add_Shown({$objForm.Activate();$objForm.WindowState = [System.Windows.Forms.FormWindowState]"Normal"})

[void] $objForm.ShowDialog()


#Acts on form result
if ($result -ne "Cancel") {
    # Find and kill all app exe's using the vbs files in the app's UnRegister folder entry
    get-childitem $ThinAppUnreg | % { if ($_.Name -like "$result*") { $AppUnreg = $_.Name } }
    $AppUnregPath = $ThinAppUnreg + "\" + $AppUnreg

    get-childitem $AppUnregPath | where-object {$_.Name -like "*.exe.vbs"} | select-object Name | % {
        $ProcessName = $_.Name -replace ".exe.vbs"
        $ProcessName = $ProcessName.Substring(1)
        write-host Process: $ProcessName
        $process = get-process $ProcessName -ErrorAction SilentlyContinue
        if ($process) {
            stop-process -inputobject $process -force
            while ($process.HasExited -eq $false) {
                start-sleep -m 10
            }
        }
    }

    # Deletes the cache folder, retrying for up to a second
    $AppCache = $ThinAppCache + "\" + $result
    $i = 0
    while ((test-path $AppCache) -and ($i -ne 10)) {
        start-sleep -m 100
        remove-item $AppCache -recurse -force -ErrorAction SilentlyContinue
        $i++
    }
    if (test-path $AppCache) {
        $Msg = "Error deleting ThinApp Cache for " + $result + ". Please log out and back in and try to delete the cache again."
        [System.Windows.Forms.MessageBox]::Show($Msg , "Error" , 0, "Error")
    } else {
        $Msg = "ThinApp Cache deleted for " + $result + "!"
        [System.Windows.Forms.MessageBox]::Show($Msg , "Success" , 0, "Exclamation")
    }
}

Tuesday, June 26, 2012

Automating Database and Tables Size Checks

Working with SCCM, WSUS, and other SQL-driven applications, it seems like every once in a while you find that a database has grown out of control. Whenever this happens, we have to quickly determine what part of the database is growing and eating up all of that space. The sp_spaceused stored procedure is an easy and quick way to get database and database object sizes.

To get the database size, we can simply execute:
    exec sp_spaceused
This tells us how big the database is, and breaks it down into data, reserved space, index size, and unused space. As sweet as that is, at this point we already know that the db is big, but we need to figure out why. Again, we can use the sp_spaceused stored procedure to check specific tables that we know may be problematic. For instance, if your WSUS SUSDB database is out of control, you could check the size of the tbEventInstance table by firing off:
exec sp_spaceused 'tbEventInstance'

This is great if you have a pretty good idea about which table is causing problems, or if you're dealing with a database that only has a couple of tables. But in systems like SCCM or WSUS, there could be any number of tables causing the problem, and executing that command against each table would take far too long. We can, however, use the sp_msForEachTable stored procedure to execute it against all tables at once:
exec sp_msForEachTable 'exec sp_spaceused [?]'

This gets us the size of each table, which is what we want. The problem is that we get a separate result set for each table, which looks something like this:
This is a good start, because now we just have to scan the list, but we can't sort it, so it's still a manual process of looking through all of the output to find the problem. In a system like SCCM, with a massive number of tables, this could still take a very long time.

From here we can solve that problem by adding some SQL to create a temp table, dump the results of the command into that table, and then display the sorted contents of that table.

declare @tablesize table (
    [name] nvarchar(256),
    [rows] varchar(18),
    reserved varchar(18), 
    data varchar(18), 
    index_size varchar(18),
    unused varchar(18)
)

insert @tablesize exec sp_msForEachTable 'exec sp_spaceused [?]' 
select * from @tablesize order by data desc

Now we have exactly what we want, with the largest tables right at the top!

This is much improved, but if we wanted to take it a step further, we could actually use PowerShell to run this query on a regular schedule and alert us if a table grows over a certain threshold. PowerShell makes it easy to grab the results of our exec sp_msForEachTable 'exec sp_spaceused [?]' and put it into a data set.

$SQLQuery = "exec sp_msForEachTable 'exec sp_spaceused [?]'"
$Conn = New-Object System.Data.SqlClient.SqlConnection
$Conn.ConnectionString = "Server = mydbserver; Database = MyDB; Integrated Security = True"
$SqlCmd = New-Object System.Data.SqlClient.SqlCommand
$SqlCmd.CommandText = $SqlQuery
$SqlCmd.Connection = $Conn
$SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
$SqlAdapter.SelectCommand = $SqlCmd
$DataSet = New-Object System.Data.DataSet
[void]$SqlAdapter.Fill($DataSet)
$Conn.Close()

Now you can loop through $DataSet.Tables, check for a desired threshold, and execute actions if certain conditions are met. Then you will not only know that the database is growing rapidly before anyone notices, but you'll also have the offending table identified so that you can go right to work on solving the problem. That certainly beats wasting time trying to identify the problem after people are complaining about application performance!
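As a sketch of such a check (the threshold and sample rows below are made up, not real query results), note that sp_spaceused returns sizes as strings like "1024 KB", so they need converting before comparison:

```powershell
# Sketch only: threshold and sample data are assumptions, not real query results
$thresholdKB = 500000
$sampleRows = @(
    [pscustomobject]@{ name = "tbEventInstance"; data = "812344 KB" },
    [pscustomobject]@{ name = "tbUpdate";        data = "1024 KB" }
)

# Strip the " KB" suffix so the sizes can be compared numerically
$over = $sampleRows | Where-Object { [int]($_.data -replace " KB","") -gt $thresholdKB }
$over | ForEach-Object { Write-Warning ("Table {0} is using {1}" -f $_.name, $_.data) }
```

In the real script, the rows would come from $DataSet.Tables, and the action could be an email alert instead of a warning.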

Sunday, June 10, 2012

PowerShell Script to Back Up and Delete a List of GPOs (with Restore Script)

One of my first tasks in cleaning up the previously referenced group policy infrastructure was to delete all of the unused Group Policy Objects. There were a lot, so a script was definitely in order. Fortunately, PowerShell has a native cmdlet (via the Group Policy module), remove-gpo, that can delete Group Policy Objects. I could very easily whip up a script to loop through a file and delete policies, but when we're talking about scripts and production environments, safety always comes first. It would be irresponsible to simply delete all of the policies without a plan to quickly recover if something went wrong. Again, PowerShell makes it easy with the native cmdlets backup-gpo and restore-gpo.

Being safe and prepared called for two scripts: one to back up and delete the GPOs, and one to restore them if necessary. Since it is best to use a backup ID to restore a GPO, the logging in the first script gives us both the backup ID and the GUID. To make restoring as quick and easy as possible, the scripts are configured so that the log from the backup/delete script can simply be passed to the restore script, which will restore everything in the log file.
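For illustration, a line in the success log is just the GPO GUID and the backup ID separated by a semicolon, which the restore script splits back apart (both ID values below are made-up examples):

```powershell
# Hypothetical log line: <GPO GUID>;<Backup ID> (both values here are made up)
$line = "{31B2F340-016D-11D2-945F-00C04FB984F9};{8A7C3E21-1111-2222-3333-444455556666}"
$GPOGUID  = ($line -split ";")[0]
$BackupID = ($line -split ";")[1]
$GPOGUID    # identifies the deleted GPO
$BackupID   # passed to restore-gpo -BackupId
```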

One thing that is important to point out is that restoring a GPO does not restore its links. In my case, I was only deleting unlinked GPOs, so this was not an issue. However, if any GPOs were linked before being deleted, they will need to be re-linked in order to take effect.

Script #1: Delete-GPOList.ps1

# Usage: Delete-GPOList -file <path to file> -path <path for backups>
# Creates Two Log Files:
# Success Log: Delete-GPOList_<date>.log
# Error Log: Delete-GPOList_Errors_<date>.log

param (
[parameter(Mandatory=$true)] [string] $path,
[parameter(Mandatory=$true)] [string] $file
)

import-module grouppolicy

# Functions

#Backs up a GPO and then deletes it
function BackupDelete-GPO {
param($GPOGUID, $BackupPath)

#clears any errors
$error.clear()
#Backup policy and check for errors
write-host Backing Up:`t $GPOGUID
$GPObackup = backup-gpo -guid $GPOGUID -path $BackupPath
if ($error) { return "Backup Error" }
write-host `t Backup Completed! Backup ID: $GPOBackup.Id.ToString()

#Delete policy and check for errors
write-host Deleting:`t $GPOGUID
remove-gpo -guid $GPOGUID
if ($error) { return "Delete Error" }
write-host `t Deletion Successful
return $GPOBackup.Id.ToString()
}

# Test Backup and File Path
if (!(test-path $path)) { 
write-warning "Backup Path Invalid: $Path"
exit
}
if (!(test-path $file)) { 
write-warning "File Not Found: $File"
exit
}

# Creates Log File

$date = (get-date).ToString('yyyy-MM-dd')
$logfile = "Delete-GPOList_$date.log"
$errorlog = "Delete-GPOList_Errors_$date.log"

# Parses file and and backs up deletes GPO's
Get-Content $file | foreach {
$GPOGUID = $_
$Delete = BackupDelete-GPO $GPOGUID $path
if ($Delete -eq "Backup Error") {
"Backup Error: $GPOGUID" >> $errorlog
} elseif ($Delete -eq "Delete Error") {
"Delete Error: $GPOGUID" >> $errorlog
} else {
"$GPOGUID;$Delete" >> $logfile
}
}

Script #2: Restore-GPOList.ps1

# Usage: Restore-GPOList -file <GPO-Delete Log> -path <Backup Path>

param (
[parameter(Mandatory=$true)] [string] $path,
[parameter(Mandatory=$true)] [string] $file
)

import-module grouppolicy

Get-Content $File | foreach {
$GPOGUID = ($_ -split ";")[0]
$BackupID = ($_ -split ";")[1]

write-host Restoring: $GPOGUID
$error.clear()
restore-gpo -BackupId $BackupID -path $Path

if ($error) {
write-warning "Error Restoring:"
write-warning "`t GPO GUID:`t $GPOGUID"
write-warning "`t Backup ID:`t $BackupID"
} else {
write-host `tComplete!
}
}

Monday, June 4, 2012

Group Policy Status Excel Report PowerShell Script

I recently went through the process of streamlining and cleaning up a Group Policy implementation. There were hundreds of policies involved, so the only way to get through it all was automation. To that end, I wrote a series of PowerShell scripts to help with the process. This post is about the script I used to gather the information that I needed.

With so much information to review, I really needed it all in a place where I could see everything at once, but could also sort and filter as needed. Since Excel is easily scriptable via PowerShell, it made the perfect choice.

The script I wrote grabs all of the following information and then dumps it into a spreadsheet:
  • GPO Name
  • GUID
  • When Created
  • Last Modified
  • WMI Filters
  • Computer Configuration Enabled
  • Computer Directory Version
  • Computer Sysvol Version
  • Comp Extensions Used
  • User Configuration Enabled
  • User Directory Version
  • User Sysvol Version
  • User Extensions Used
  • Links
  • Enabled Links
  • No Override
  • Owner
  • Security
  • Total Size
  • Policy Size
  • ADM File Size
  • ADM Files
From that point, I could sort, filter and compare to get whatever information that I needed (e.g. to generate a long list of stuff that needed to be fixed).

To summarize the script, it grabs a list of all policies in the domain, loops through them getting their XML data, dumps that into an array, and then writes it all to Excel.

Because the script requires the Group Policy module, it will only work on Windows Server 2008 or Windows 7 systems that have the Remote Server Administration Tools installed.

The script also takes an argument that tells it the path to the policies folder on the sysvol, which it uses to get adm files and policy sizes. So you would call the script like: .\get-gpoinfo.ps1 \\mydomain.com\sysvol\mydomain.com\policies

import-module grouppolicy

$Policies = $Args[0] 

# Functions

#Get-GPOInfo gets an XML report of the GPO and uses it to return specific data in an array
function Get-GPOInfo {
param($GPOGUID)

#Gets the XML version of the GPO Report
$GPOReport = get-gporeport -guid $GPOGUID -reporttype XML
#Converts it to an XML variable for manipulation
$GPOXML = [xml]$GPOReport
#Create array to store info
$GPOInfo = @()

#Get's info from XML and adds to array
#General Information
$Name = $GPOXML.GPO.Name
$GPOInfo += , $Name
$GUID = $GPOXML.GPO.Identifier.Identifier.'#text'
$GPOInfo += , $GUID
[DateTime]$Created = $GPOXML.GPO.CreatedTime
$GPOInfo += , $Created.ToString("G")
[DateTime]$Modified = $GPOXML.GPO.ModifiedTime
$GPOInfo += , $Modified.ToString("G")
#WMI Filter
if ($GPOXML.GPO.FilterName) {
$WMIFilter = $GPOXML.GPO.FilterName
} else {
$WMIFilter = "<none>"
}
$GPOInfo += , $WMIFilter
#Computer Configuration
$ComputerEnabled = $GPOXML.GPO.Computer.Enabled
$GPOInfo += , $ComputerEnabled
$ComputerVerDir = $GPOXML.GPO.Computer.VersionDirectory
$GPOInfo += , $ComputerVerDir
$ComputerVerSys = $GPOXML.GPO.Computer.VersionSysvol
$GPOInfo += , $ComputerVerSys
if ($GPOXML.GPO.Computer.ExtensionData) { 
$ComputerExtensions = $GPOXML.GPO.Computer.ExtensionData | %{ $_.Name }
$ComputerExtensions = [string]::join("`n", $ComputerExtensions)
} else {
$ComputerExtensions = "<none>"
}
$GPOInfo += , $ComputerExtensions
#User Configuration
$UserEnabled = $GPOXML.GPO.User.Enabled
$GPOInfo += , $UserEnabled
$UserVerDir = $GPOXML.GPO.User.VersionDirectory
$GPOInfo += , $UserVerDir
$UserVerSys = $GPOXML.GPO.User.VersionSysvol
$GPOInfo += , $UserVerSys
if ($GPOXML.GPO.User.ExtensionData) {
$UserExtensions = $GPOXML.GPO.User.ExtensionData | %{ $_.Name }
$UserExtensions = [string]::join("`n", $UserExtensions)
} else {
$UserExtensions = "<none>"
}
$GPOInfo += , $UserExtensions
if ($GPOXML.GPO.LinksTo) {
$Links = $GPOXML.GPO.LinksTo | %{ $_.SOMPath }
$Links = [string]::join("`n", $Links)
$LinksEnabled = $GPOXML.GPO.LinksTo | %{ $_.Enabled }
$LinksEnabled = [string]::join("`n", $LinksEnabled)
$LinksNoOverride = $GPOXML.GPO.LinksTo | %{ $_.NoOverride }
$LinksNoOverride = [string]::join("`n", $LinksNoOverride)
} else {
$Links = "<none>"
$LinksEnabled = "<none>"
$LinksNoOverride = "<none>"
}
$GPOInfo += , $Links
$GPOInfo += , $LinksEnabled
$GPOInfo += , $LinksNoOverride
#Security Info
$Owner = $GPOXML.GPO.SecurityDescriptor.Owner.Name.'#text'
$GPOInfo += , $Owner
$SecurityInherits = $GPOXML.GPO.SecurityDescriptor.Permissions.InheritsFromParent
$SecurityInherits = [string]::join("`n", $SecurityInherits)
$GPOInfo += , $SecurityInherits
$SecurityGroups = $GPOXML.GPO.SecurityDescriptor.Permissions.TrusteePermissions | %{ $_.Trustee.Name.'#text' }
$SecurityGroups = [string]::join("`n", $SecurityGroups)
$GPOInfo += , $SecurityGroups
$SecurityType = $GPOXML.GPO.SecurityDescriptor.Permissions.TrusteePermissions | % { $_.Type.PermissionType }
$SecurityType = [string]::join("`n", $SecurityType)
$GPOInfo += , $SecurityType
$SecurityPerms = $GPOXML.GPO.SecurityDescriptor.Permissions.TrusteePermissions | % { $_.Standard.GPOGroupedAccessEnum }
$SecurityPerms = [string]::join("`n", $SecurityPerms)
$GPOInfo += , $SecurityPerms
#Policy File System Size
$GPOSize = Get-GPOSize $GUID $Policies
$GPOInfo += , $GPOSize.Total
$GPOInfo += , $GPOSize.Policy
$GPOInfo += , $GPOSize.ADM
$GPOInfo += , $GPOSize.ADMFiles
    return $GPOInfo
}

#Get-GPOSize returns the GPO file size. It requires a GUID and the sysvol policy path. It returns a Hash Table with three sizes (in Bytes) Total/ADM/Policy
function Get-GPOSize {
param($GPOGUID, $PoliciesPath)

#Creates $objFSO if not already created
if (!$objFSO) { $objFSO = New-Object -com Scripting.FileSystemObject }

$PolicyPath = $PoliciesPath + "\" + $GPOGUID
$ADMPath = $PolicyPath + "\Adm"

$TotalSize = $objFSO.GetFolder($PolicyPath).Size
    if (test-path $ADMPath) { 
$ADMSize = $objFSO.GetFolder($ADMPath).Size
$Files = $objFSO.GetFolder($ADMPath).Files | %{ $_.Name }
if ($Files) { $ADMFiles = [string]::join("`n", $Files) } else { $ADMFiles = "<none>" }
} else { 
$ADMSize = 0 
$ADMFiles = "<none>"
}
$PolicySize = $TotalSize - $ADMSize
$Size = @{"Total" = $TotalSize.ToString(); "ADM" = $ADMSize.ToString(); "Policy" = $PolicySize.ToString(); "ADMFiles" = $ADMFiles}
return $Size
}

# Get's list of GPO's

write-host Getting GPO Information...
$GPOs = get-gpo -all
write-host `tGPOs: $GPOs.Count

# Creates an array and populates it with GPO information arrays

$AllGPOs = @()

write-host Getting GPO XML Reports...

$GPOCount = 0
$GPOs | foreach-object {
$GPOCount++
write-host `t$GPOCount : $_.DisplayName / $_.ID
$ThisGPO = Get-GPOInfo $_.ID
$AllGPOs += ,$ThisGPO
}


# Exports all information to Excel (nicely formatted)

write-host Exporting information to Excel...

#Excel Constants
$White = 2
$DarkGrey = 56
$Center = -4108
$Top = -4160

$e = New-Object -comobject Excel.Application
$e.Visible = $True #Change to Hide Excel Window
$e.DisplayAlerts = $False
$wkb = $E.Workbooks.Add()
$wks = $wkb.Worksheets.Item(1)

#Builds Top Row
$wks.Cells.Item(1,1) = "GPO Name"
$wks.Cells.Item(1,2) = "GUID"
$wks.Cells.Item(1,3) = "Created"
$wks.Cells.Item(1,4) = "Last Modified"
$wks.Cells.Item(1,5) = "WMI Filter"
$wks.Cells.Item(1,6) = "Comp Config"
$wks.Cells.Item(1,7) = "Comp Dir Ver"
$wks.Cells.Item(1,8) = "Comp Sysvol Ver"
$wks.Cells.Item(1,9) = "Comp Extensions"
$wks.Cells.Item(1,10) = "User Config"
$wks.Cells.Item(1,11) = "User Dir Ver"
$wks.Cells.Item(1,12) = "User Sysvol Ver"
$wks.Cells.Item(1,13) = "User Extensions"
$wks.Cells.Item(1,14) = "Links"
$wks.Cells.Item(1,15) = "Enabled"
$wks.Cells.Item(1,16) = "No Override"
$wks.Cells.Item(1,17) = "Owner"
$wks.Cells.Item(1,18) = "Inherits"
$wks.Cells.Item(1,19) = "Groups"
$wks.Cells.Item(1,20) = "Perm Type"
$wks.Cells.Item(1,21) = "Permissions"
$wks.Cells.Item(1,22) = "Total Size"
$wks.Cells.Item(1,23) = "Policy Size"
$wks.Cells.Item(1,24) = "ADM Size"
$wks.Cells.Item(1,25) = "ADM Files"

#Formats Top Row
$wks.Range("A1:Y1").font.bold = "true"
$wks.Range("A1:Y1").font.ColorIndex = $White
$wks.Range("A1:Y1").interior.ColorIndex = $DarkGrey

#Fills in Data from Array
$row = 2
$AllGPOs | foreach {
$wks.Cells.Item($row,1) = $_[0]
$wks.Cells.Item($row,2) = $_[1]
$wks.Cells.Item($row,3) = $_[2]
$wks.Cells.Item($row,4) = $_[3]
$wks.Cells.Item($row,5) = $_[4]
$wks.Cells.Item($row,6) = $_[5]
$wks.Cells.Item($row,7) = $_[6]
$wks.Cells.Item($row,8) = $_[7]
$wks.Cells.Item($row,9) = $_[8]
$wks.Cells.Item($row,10) = $_[9]
$wks.Cells.Item($row,11) = $_[10]
$wks.Cells.Item($row,12) = $_[11]
$wks.Cells.Item($row,13) = $_[12]
$wks.Cells.Item($row,14) = $_[13]
$wks.Cells.Item($row,15) = $_[14]
$wks.Cells.Item($row,16) = $_[15]
$wks.Cells.Item($row,17) = $_[16]
$wks.Cells.Item($row,18) = $_[17]
$wks.Cells.Item($row,19) = $_[18]
$wks.Cells.Item($row,20) = $_[19]
$wks.Cells.Item($row,21) = $_[20]
$wks.Cells.Item($row,22) = $_[21]
$wks.Cells.Item($row,23) = $_[22]
$wks.Cells.Item($row,24) = $_[23]
$wks.Cells.Item($row,25) = $_[24]
$row++
}

#Adjust Formatting to make it easier to read
$wks.Range("I:I").Columns.ColumnWidth = 150
$wks.Range("M:M").Columns.ColumnWidth = 150
$wks.Range("N:N").Columns.ColumnWidth = 150
$wks.Range("S:S").Columns.ColumnWidth = 150
$wks.Range("Q:Q").Columns.ColumnWidth = 150
$wks.Range("U:U").Columns.ColumnWidth = 150
$wks.Range("Y:Y").Columns.ColumnWidth = 150
$wks.Range("A:U").Columns.VerticalAlignment = $Top
$wks.Range("F:H").Columns.HorizontalAlignment = $Center
$wks.Range("J:L").Columns.HorizontalAlignment = $Center
$wks.Range("R:R").Columns.HorizontalAlignment = $Center
$wks.Range("V:X").Columns.HorizontalAlignment = $Center

#Save the file
$Path = Get-Location
$SaveFile = $Path.path + "\GPO_Report.xlsx"
$wkb.SaveAs($SaveFile)
$e.Quit()