Sunday, July 13, 2014

Kansa: Passing arguments to collector modules

In my previous post on Kansa's automated analysis, I mentioned there was another improvement I'd made to the framework that I would cover in a future post. I thought at the time that Kansa was at a point where I could go into some detail about the new feature, but as it turns out, it wasn't quite ready.
 
Previously, some of Kansa's collector modules needed to be edited or customized prior to being run. Disk\Get-File.ps1, for example, could acquire a specific file from target machines, but users had to edit the collector to specify the file they wanted. Obviously that was less than ideal, so I did some work to allow users to specify those kinds of things via command line arguments. In my limited testing, this worked... but my testing was limited.

This week I had a pull request submitted by @z4ns4tsu for a collector module called Get-FilesByHash.ps1 that allows investigators to take a cryptographic hash (MD5, SHA1, etc.) of a known suspect file, then search for files with that same hash across many machines in the environment. The module was the first to take multiple arguments: the search path, the hash and the hash type. This is where Kansa had an issue: it couldn't pass multiple arguments to collectors. After a couple nights of work, now it can.

I also added a few arguments to Get-FilesByHash.ps1, including a file extension regex so the script doesn't hash every single file looking for matches; instead, it only hashes files whose extensions match the provided regex. The default regex is \.(dll|exe|ps1|sys)$, which greatly reduces the number of files that will be hashed. I also added two more arguments that limit the files to be hashed based on minimum and maximum file size in bytes.
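
Under the hood the idea is simple: filter by extension and size first, then hash only what survives. Here's a minimal sketch of that logic; the parameter names and defaults are assumptions, and Get-FileHash requires Powershell v4, so treat this as an illustration rather than the module's actual code.

# Sketch of Get-FilesByHash.ps1's filter-then-hash approach; parameter
# names and defaults are assumptions, not the module's actual code.
param(
    [string] $FileHash,
    [string] $HashType = "MD5",
    [string] $BasePath = "C:\",
    [string] $ExtRegex = "\.(dll|exe|ps1|sys)$",
    [int] $MinBytes = 0,
    [int] $MaxBytes = 10MB
)

Get-ChildItem -Path $BasePath -Recurse -ErrorAction SilentlyContinue |
    Where-Object { -not $_.PSIsContainer -and
        $_.Extension -match $ExtRegex -and
        $_.Length -ge $MinBytes -and $_.Length -le $MaxBytes } |
    ForEach-Object {
        # Get-FileHash is Powershell v4+; older hosts need a crypto fallback
        if ((Get-FileHash -Path $_.FullName -Algorithm $HashType).Hash -eq $FileHash) {
            $_.FullName
        }
    }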

Here's a command line example showing how this module can be used:
 
.\Kansa.ps1 -ModulePath ".\Modules\Disk\Get-FilesByHash.ps1 BF93A2F9901E9B3DFCA8A7982F4A9868,MD5,C:\Windows\System32,\.exe$" -target localhost -Verbose
 

Above, Get-FilesByHash.ps1 will search for any files with the MD5 hash BF93A2F9901E9B3DFCA8A7982F4A9868, in or below the C:\Windows\System32 path, ending with an extension of .exe. Notice that the arguments to Get-FilesByHash.ps1 are not named parameters; named parameters are not supported for remoting, so arguments must be positional. Also note that they are comma separated and that the module and its arguments are quoted as a whole.
 
As with other modules, you can use the .\Modules\Modules.conf file to pass arguments to Get-FilesByHash.ps1 (or any other module that takes arguments). Here's the entry for the module above, taken from the conf file:
 
Disk\Get-FilesByHash.ps1 BF93A2F9901E9B3DFCA8A7982F4A9868,MD5,C:\Windows\System32
 
Note the absence of quotes in the configuration file; I've also omitted the file extension regex argument.
 
Adding the ability to pass parameters to modules meant I could remove several collectors from Kansa that had been written to acquire specific event logs, each one collecting a single log file. Now Kansa has one collector written to generically collect any Windows event log, and the specific log is simply passed as an argument. Here's the relevant section of the .\Modules\Modules.conf file:
 
Log\Get-LogWinEvent.ps1 Security
Log\Get-LogWinEvent.ps1 Microsoft-Windows-Application-Experience/Program-Inventory
Log\Get-LogWinEvent.ps1 Microsoft-Windows-Application-Experience/Program-Telemetry
Log\Get-LogWinEvent.ps1 Microsoft-Windows-AppLocker/EXE and DLL
Log\Get-LogWinEvent.ps1 Microsoft-Windows-AppLocker/MSI and Script
Log\Get-LogWinEvent.ps1 Microsoft-Windows-AppLocker/Packaged app-Deployment
Log\Get-LogWinEvent.ps1 Microsoft-Windows-Shell-Core/Operational
Log\Get-LogWinEvent.ps1 Microsoft-Windows-TerminalServices-LocalSessionManager/Operational
Log\Get-LogWinEvent.ps1 Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational
 
Above we have a single collector, Log\Get-LogWinEvent.ps1, that replaced nine collectors because it accepts an argument specifying which log to collect.
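
The pattern that makes this possible is tiny: a positional parameter and a single cmdlet call. Here's a minimal sketch of the idea, not the actual Get-LogWinEvent.ps1 source:

# Sketch of a generic event log collector: the log name is the single
# positional argument passed via Modules.conf or -ModulePath.
param(
    [Parameter(Mandatory=$True,Position=0)]
    [string] $LogName
)

Get-WinEvent -LogName $LogName -ErrorAction SilentlyContinue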
 
As you can see, being able to pass command line arguments to collectors is a big benefit. Just be mindful that the arguments are positional, not named. As a result, if you want to override only the last argument and accept the defaults for the rest, you still have to specify every argument, supplying the default value for each one you don't want to change.

You can find more information about Kansa and the latest release at https://github.com/davehull/Kansa/releases.

Friday, July 4, 2014

Kansa: Automating Analysis

Kansa, the PowerShell based incident response framework, was written from the start to automate acquisition of data from thousands of hosts. But a mountain of collected data is not worth bits without analysis, so analysis has been part of the framework from almost the beginning, as may be seen in this commit from 2014-04-18.

Data collection has been configurable via the Modules.conf text file since the beginning, and the project has been packaged with a default Modules.conf file with the order of volatility applied. Users could edit the file, commenting and uncommenting lines to disable and enable modules, customizing data collection.

After Kansa completed its collection, users could cd into the newly created output directory and then into the specific module directory and run the analysis script of their choosing to derive some intelligence from the collected data. For example, a user might run Kansa's Get-Autorunsc.ps1 collector to gather Autoruns data from a hundred hosts that should have identical or very similar configurations.

Following the data collection, they could cd into the new output directory's Autorunsc subdirectory, then run

Get-ASEPImagePathLaunchStringMD5Stack.ps1

which would return a listing of Autoruns entries aggregated by path to executable, command line arguments and MD5 hash of the executable or script, sorted in ascending order of frequency, so any outliers would be at the top of the list. Those entries may warrant further investigation.

This was all well and good, but with more than 30 analysis scripts, analysis of the collected data was becoming cumbersome. It was begging for automation. So, I added it.

There is now an Analysis.conf file in the Analysis folder that works in much the same way as the Modules\Modules.conf configuration file. Every analysis script has an entry in the configuration file; edit the file to comment out the scripts you don't want to run or uncomment the ones you do. Then when you run Kansa, simply add the -Analysis flag. After all the data is collected, Kansa will run each analysis script for you and save the output to AnalysisReports, a new folder under the time stamped output folder.
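
A collection-plus-analysis run might look something like this; the hostlist file here is from my environment, so substitute your own targets:

# Collect from the hosts in .\hostlist, then run every analysis script
# enabled in Analysis\Analysis.conf against the collected data
.\Kansa.ps1 -TargetList .\hostlist -Analysis -Verbose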

Below is a sample listing:

[Screenshot: the output directory listing and the contents of its AnalysisReports subdirectory]
In the top directory listing of the output directory, you can see the normal output file structure: one folder per module. This was obviously a very limited data collection, with only the Autorunsc, File, Netstat and PrefetchListing modules being used. Error.Log contains information about errors encountered during the run. What's new here is the AnalysisReports directory.

The bottom directory listing shows the contents of the AnalysisReports path. Each of these is a TSV file containing summary data of the collected data, with file names reflecting the analysis script that produced each data set. And the beauty of this is that it's fully automated once you've configured Analysis\Analysis.conf and run Kansa with the -Analysis flag.

I've made some other improvements to Kansa in the last couple weeks, but I'll save those for the next post. For now, I wanted to share the automated analysis piece. I'm pretty psyched about it because it's a big time-saver, and it puts Kansa in a position where it can easily produce alerts based on whatever qualities or quantities an analysis script is written to trigger on.

Tuesday, June 17, 2014

Kansa: Get-LogUserAssist.ps1

Tonight I pushed the latest collector to Kansa, Get-LogUserAssist.ps1. This is probably the most complicated collector I've written for Kansa. It has several moving parts and there were some obstacles to overcome.

As with most Kansa modules, you can run it stand-alone on your localhost, or through Kansa to collect data from thousands of hosts via Windows Remote Management. To run it against your local system, download it from the link above and unblock it, either through Explorer, by browsing to it, right-clicking it and checking the unblock checkbox under Properties somewhere (I don't GUI enough), or by opening a Powershell prompt at the download location and doing:

ls Get-LogUserAssist.ps1 | unblock-file

The above assumes you have PS v3; Unblock-File was introduced in Powershell 3.0. You should upgrade to PS v3 if you haven't already, as it has more whizbang.

Another option is to use Sysinternals Streams.exe -d Get-LogUserAssist.ps1, but I digress.

Here's an example of me running this locally on my laptop:
[Screenshot: Get-LogUserAssist.ps1 output when run locally]
When run locally, the script returns Powershell objects. The output directive for this script tells Kansa to save the output as tab separated values, making for easy import into a database or quick analysis with Logparser. An analysis script for this output is on the Kansa issues list as an enhancement.

If you run this locally and want to massage the output to TSV, at your Powershell prompt, you could do:

PS> Get-LogUserAssist.ps1 | ConvertTo-CSV -Delimiter "`t" -NoTypeInformation

I don't care for quoted TSV, so I'd go a step further adding:

| % { $_ -replace "`"" }

to the above. And why not write it out to a file that you can load into Excel or a database, or query with Logparser? To do that, simply add the below to the above:

| Set-Content LocalhostUserAssist.tsv
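
Putting those pieces together, the whole thing is a one-liner:

.\Get-LogUserAssist.ps1 | ConvertTo-Csv -Delimiter "`t" -NoTypeInformation | % { $_ -replace "`"" } | Set-Content LocalhostUserAssist.tsv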

But you don't have to do TSV. You could drop the -Delimiter argument above and default to CSV, or use Export-CliXml instead of ConvertTo-CSV and you've got XML output, for those of you who want a more challenging and slower analysis experience. Zing!

One thing I'm not clear on and may have to research is why so many of my "counts" are coming up as zero. Did Windows 8 stop incrementing run counts?

This collector starts by enumerating all of the local profiles on the target, then looks in each profile path for an ntuser.dat file. If it finds one, it will try to load that hive. If the hive loads, the script looks for the UserAssist key and parses it, if found; if UserAssist is not found, it moves on to the next user. If the script was unable to load the hive, it assumes the user is currently logged on and the file is locked. At that point, it looks in HKEY_USERS for all the loaded hives by SID, resolves those SIDs to usernames and compares them to the username associated with the locked profile. When it finds a match, it looks for UserAssist in the matching HKEY_USERS key by SID. One thing that occurs to me now, based on something I heard @forensic_matt say at this year's SANS DFIR Summit: if the user's account has been renamed, this match will likely fail. Something to add to the Kansa issues list. Save for that edge case, this script will pull UserAssist key data for all user accounts on a running system.
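
In sketch form, the first half of that flow looks something like the following. This is illustrative only; the real module also parses UserAssist and performs the HKEY_USERS fallback that this skeleton elides.

# Enumerate local profiles and try to load each ntuser.dat hive;
# a locked hive usually means the user is logged on.
$profiles = Get-WmiObject Win32_UserProfile | Where-Object { -not $_.Special }

foreach ($p in $profiles) {
    $hive = Join-Path $p.LocalPath "ntuser.dat"
    if (Test-Path $hive) {
        reg.exe load HKU\KansaTemp $hive 2>&1 | Out-Null
        if ($LASTEXITCODE -eq 0) {
            # ...look for and parse UserAssist under HKU\KansaTemp here...
            reg.exe unload HKU\KansaTemp 2>&1 | Out-Null
        } else {
            # Hive is locked: fall back to the matching SID under HKEY_USERS
        }
    }
}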

And since it's a Kansa module, you can run it across thousands of hosts easily.

I hope someone finds it useful.

[Update] You may be wondering, "Why is this module under Modules\Log? The Registry is not a log file."

As Harlan Carvey has rightly pointed out, the Registry sometimes is a log file and in the case of UserAssist, it most certainly is. Hence, I placed it under Modules\Log. You're free to move it elsewhere on your own set up.

[Update 20140623] Confirmed: for renamed accounts, the module was not able to resolve a loaded SID to an account name, but I've fixed this bug. The script now returns the user account name and the user profile path, so spotting renamed accounts is simple. Here's an example where the local Administrator account has been renamed to Gomer.

Saturday, May 17, 2014

Kansa: Powershell profiles potentially hazardous

On the very day I published my previous post, Kansa: Collecting WMI Event Consumer backdoors, Mark Russinovich announced the release of a new version of Autoruns that collects WMI related ASEPs. I had a chance to play around with it on a machine with a WMI Event Consumer, Event Filter and Filter-to-Consumer Binding configured and indeed, Autoruns now picks up the Event Consumers. I still recommend using Kansa's Get-WMIEvtFilter.ps1 and Get-WMIFltConBind.ps1 collector modules to grab the other two essential pieces that make Event Consumer backdoors possible. The Event Filter is the piece that will tell you what triggers the Event Consumer.

In this post I want to cover another "auto start extension point," or ASEP, and it happens to be another that is not yet covered by Autoruns. It also happens to be specific to Powershell. The Windows Powershell profile is a script that runs, if present, each time a user or SYSTEM opens a Powershell shell. It's akin to .bash_profile or similar shell profiles on *nix systems.

Adversaries can modify an existing Powershell profile for either a user or the default system profile, planting code enabling them to maintain persistence or perform any task that Powershell is capable of given the context of the script (non-administrator users obviously being less capable than administrators or SYSTEM).

Kansa's Get-PSProfiles.ps1 collector will enumerate local accounts on remote systems and check each of them for Powershell profiles. Where Powershell profiles exist, Get-PSProfiles will collect them all in a zip file (it will also check for and collect the default Powershell profile). The zip file will then be sent back to the host where Kansa was run.

Powershell profiles can be located in a few different locations. For user profiles, they are in:

$env:userprofile\Documents\WindowsPowershell\Microsoft.Powershell_profile.ps1

And the default system profile is in:

$env:windir\System32\WindowsPowershell\v1.0\Microsoft.Powershell_profile.ps1

User Powershell profiles on XP systems are in a slightly different path and Kansa will not acquire them.
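
For a quick manual spot-check of a single machine, the paths above make the logic nearly a one-liner. A sketch, which assumes user profiles live under $env:SystemDrive\Users (which, as noted above, won't hold for XP):

# Check the default system profile and each user profile directory
# for a Powershell profile script
$paths = @("$env:windir\System32\WindowsPowershell\v1.0\Microsoft.Powershell_profile.ps1")
$paths += Get-ChildItem "$env:SystemDrive\Users" -Directory | ForEach-Object {
    Join-Path $_.FullName "Documents\WindowsPowershell\Microsoft.Powershell_profile.ps1"
}
$paths | Where-Object { Test-Path $_ }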

Unfortunately, there's no quick way of analyzing the collected profile scripts for malicious capabilities, at least not that I'm aware of. Analysts will have to spend time reviewing profiles for suspect code. This is a good time to mention that any ASEP script, not just Powershell profiles, could be modified by adversaries to perform nefarious actions.

This is another painful reminder of the asymmetry of information security. Adversaries have many places to hide malicious bits and may only need one (or none, if they have a big enough key ring of credentials). Incident responders, depending on the nature of the incident, may have to review every known ASEP.

Enjoy the code review and happy hunting!

Tuesday, May 13, 2014

Kansa: Collecting WMI Event Consumer backdoors

In my previous post, Kansa: Service related collectors and analysis, I discussed the Windows Service related collectors and analysis capabilities in Kansa and noted that some of the collected data is not currently collected by Sysinternals' Autoruns.

Today I'll cover another persistence mechanism that Kansa collects which is not currently collected by Autoruns: WMI Event Consumers. That link tells us "Event consumers are applications or scripts that request notification of events, and then perform tasks when specific events occur."
[Update: 2014-05-13] Mark Russinovich released a new version of Autoruns today that reports WMI information. I have not tested it yet. It will be interesting to see if it only reports data from Event Consumers and not the Event Filter, which tells what the trigger is.

For an event consumer to work, three elements are required:
  • An Event Consumer -- this is the piece that performs some action
  • An Event Filter -- an event query watching for defined activity -- this triggers the consumer
  • A Filter-to-Consumer Binding -- this links the filter to the consumer
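
All three elements live in the root\subscription WMI namespace, so you can enumerate them by hand with stock cmdlets. A quick sketch; Kansa's collectors wrap queries like these:

# Enumerate the three components of WMI eventing persistence
Get-WmiObject -Namespace root\subscription -Class __EventConsumer
Get-WmiObject -Namespace root\subscription -Class __EventFilter
Get-WmiObject -Namespace root\subscription -Class __FilterToConsumerBinding
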
In my experience, WMI Event Consumers are not commonly used. So in many situations collecting the data and simply reviewing file sizes can tell you if something is worth investigating further. For example, I recently collected event consumer data from a few thousand hosts. Running the following Powershell command was enough to find which host contained a backdoor running from an event consumer:

ls *wmievtconsmr.xml | sort length -Descending | more

The output of that command follows; see if you can determine which host had the backdoor installed:

[Screenshot: file listing sorted by length, descending]

If you guessed DFWBOSSWEE01, congratulations, you may have the skills necessary to find WMI Event Consumer backdoors.

So what's in this file? Since it was collected with Kansa's Get-WMIEvtConsumer collector, which specifies that its output should be written to an XML file, we can either open the XML file in a suitable editor or use the Powershell cmdlet Import-Clixml to read the file into a variable and examine the contents via the following commands:

$data = Import-Clixml .\DFWBOSSWEE01_wmievtconsmr.xml
$data | more

This command returns output like the following:

[Screenshot: __EventConsumer properties, including CommandLineTemplate]

The most interesting bits above are those in the "CommandLineTemplate" property, which I've redacted a bit, but you can see there's a call to Powershell.exe and a long base 64 encoded string, which in this case was a Powershell encoded command, in essence a script. We can decode that script via

[Convert]::FromBase64String()
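
Since Powershell encoded commands are base 64 over UTF-16LE (Unicode) text, the full decode looks like this; $b64 stands in for the redacted string above:

# Decode a Powershell -EncodedCommand payload
$b64 = "<redacted base 64 string from CommandLineTemplate>"
[Text.Encoding]::Unicode.GetString([Convert]::FromBase64String($b64))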

Doing so would reveal that when this WMI Event Consumer is triggered, it connects to a remote site and downloads another script and runs it.

So how often is it triggered? What triggers it? To answer those questions, you'll have to review the data Kansa collected via Get-WMIEvtFilter.ps1. A consumer by itself is harmless, but if there's an Event Filter and a Filter-to-Consumer binding, then you've got all the ingredients needed for a WMI Event Consumer based backdoor.

Saturday, May 3, 2014

Kansa: Service related collectors and analysis

In my previous post on Kansa's Autoruns collectors and analysis scripts, I mentioned that the Get-Autorunsc.ps1 collector relies on Sysinternals' Autorunsc.exe to collect data on all of the Autostart Extension Points (ASEPs) that it has catalogued. Autorunsc and its GUI sibling, Autoruns, are great tools, but they are not comprehensive; there are other ASEPs that they don't catch, so Kansa includes a few additional modules that aim to collect additional ASEPs and additional data about ASEPs.

Get-SvcAll.ps1
Runs Get-WMIObject win32_service to collect details about all services. Output is saved as XML. Some of this same data is collected by Get-Autorunsc.ps1 above; however, this module pulls additional properties for each service, some of them specific to the type of service. If a service is running, you'll get its process id and the context it runs under (Local System, Local Service, etc.). There's even an InstallDate property, which is awesome; however, in my experience it's never populated, which sucks.

For analysis of the data collected by Get-SvcAll.ps1, there are two very basic frequency analysis or stacking scripts as of this writing. They are Get-SvcAllStack.ps1 and Get-SvcStartNameStack.ps1. The former does its frequency analysis based on Service "Captions" and Pathnames. The Captions are the short friendly names you see when you look at the Services running on your system while the Pathnames include the path to the binary and any arguments. Here's an example from two systems where the Application Identity service has two different sets of command line arguments:

[Screenshot: the Application Identity service with two different sets of command line arguments on two systems]
Stacking by these properties across many hosts shows investigators services that may have the same Caption, but different binaries and arguments. This same kind of analysis is available in the Autoruns stacking scripts with the added benefit of stacking by file hash (e.g. MD5).

Get-SvcStartNameStack.ps1 stacks by Caption and StartName, the latter of which turns out to be the name of the account the service runs under.
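
The stacking itself boils down to a Group-Object over the deserialized XML. Here's the shape of it as a sketch; the output file naming convention is an assumption:

# Stack service Caption/PathName pairs across all collected hosts;
# rare combinations sort to the top
Get-ChildItem *svcall.xml | ForEach-Object { Import-Clixml $_.FullName } |
    Group-Object Caption, PathName |
    Sort-Object Count |
    Select-Object Count, Name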

Another Service analysis script, but not a stacker, is Get-SvcAllRunningAuto.ps1, which pulls the list of Services that were in a running state or set to start automatically when the Get-SvcAll.ps1 collector ran on the targets.

ASEPs not collected by Autorunsc:

As I mentioned above, Sysinternals' Autoruns and Autorunsc executables collect all the ASEPs they know to collect, but that is not the universe of ASEPs.

Windows Services can be configured to recover from failures. In my experience, restarting the service is the most common recovery option, but one option that adversaries can use is the "Run a Program" option as shown below:
[Screenshot: the Application Identity service's Recovery tab with "Run a Program" configured]
 
In the screen shot above, the Application Identity service is configured with a failure recovery response that will run a program called ServiceRecovery.exe from C:\ProgramData\Microsoft\ with the command line argument -L 443. This is a persistence mechanism that Autorunsc won't capture.
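
If you want to inspect a single service's failure actions by hand, sc.exe will show them; the service name here is just an example:

# Query the failure-recovery actions configured for the Application
# Identity service (short name AppIDSvc)
sc.exe qfailure AppIDSvc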

Kansa's Get-SvcFail.ps1 collector will collect service failure recovery information from all services. Kansa includes a few analysis scripts that will stack the service failure recovery data, but the most useful one is Get-SvcFailCmdLine.ps1, which returns the frequency count of the program and command line parameters from all the collected service failure information. The image below shows this data from a few thousand systems:
[Screenshot: frequency counts of service failure recovery programs and command lines from a few thousand systems]
In the example there are 129,769 Service Failure entries; 75,088 of them have the same program and command line arguments configured as a recovery option. It seems unlikely this is malicious.

In another smaller data set, the following data was returned:
[Screenshot: service failure recovery frequency counts from a smaller data set]
I include this screen shot because I've run into the customscript.cmd entry in multiple data sets, and in all the cases I've investigated, I've not yet found a service that referenced customscript.cmd anywhere in the Services GUI. But you will see services reference it in the data of their Registry key values, like the following:

[Screenshot: Registry key values referencing customscript.cmd]

I've also searched file systems on hosts where I've seen this, but I've not found a file on disk called customscript.cmd. I wanted to mention it here in case you run across it. If you do see a reference to customscript.cmd that includes a path, you may have an adversary attempting to blend in with a common value.

The last Service related collector in Kansa, as of this writing, is Get-SvcTrigs.ps1, which collects another set of ASEPs that Autoruns does not collect, yet: Service Triggers. Service Triggers are new with Windows 7 and later versions of Windows. They give Windows Services more startup flexibility than the old Manual and Automatic startup modes: services can now respond to the presence of specific hardware, group policy changes, networking events, etc. More information about Service Triggers can be found at the following links:
Kansa includes a basic stacker for Service Triggers. Interpreting the data to determine what's normal and what's suspicious can be daunting and tedious; searching on GUIDs can be of some help. Below is a frequency listing of Service Triggers from a relatively small sample of two systems.
[Screenshot: frequency listing of Service Triggers from two systems]
I have Service Trigger data from a few thousand machines, but I'm not at liberty to share it here; trust me when I say finding outliers is easier with a larger data set. But keep in mind, just because something is an outlier doesn't mean it's bad, and the inverse is also true: just because something is common doesn't mean it's good.
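
For a single known service, sc.exe will dump its trigger configuration directly, which is handy for checking a suspect entry by hand; the service name is an example:

# Show the trigger configuration for the Windows Time service
sc.exe qtriggerinfo W32Time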

There is one more ASEP that I know of that Autoruns won't catch but that Kansa collects; I'll save that for another post.

Tuesday, April 29, 2014

Kansa: Autoruns data and analysis

I want your input.

With the "Trailer Park" release of Kansa marking a milestone for the core framework, I'm turning my focus to analysis scripts for data collected by the current set of modules. As of this writing there are 18 modules, with some overlap between them. I'm seeking more ideas for analysis scripts to package with Kansa and am hopeful that you will submit comments with novel, implementable ideas.

Existing modules can be divided into three categories:

  1. Auto Start Extension Points or ASEP data (persistence mechanisms)
  2. Network data (netstat, dns cache)
  3. Process data
In the interest of keeping posts to reasonable lengths, I'll limit the scope of each post to a small number of modules or collectors.

ASEP collectors and analysis scripts

Get-Autorunsc.ps1

Runs Sysinternals Autorunsc.exe with arguments to collect all ASEPs (that Autoruns knows about) across all user profiles. The output includes ASEP hashes, code signing information (Publisher) and command line arguments (LaunchString).

Current analysis scripts for Get-Autorunsc.ps1 data:
Get-ASEPImagePathLaunchStringMD5Stack.ps1
Returns a frequency count of ASEPs aggregating on ImagePath, LaunchString and MD5 hash.

Get-ASEPImagePathLaunchStringMD5UnsignedStack.ps1
Same as previous stacker, but filters out signed ASEPs.

Get-ASEPImagePathLaunchStringPublisherStack.ps1
Returns frequency count of ASEPs aggregated on ImagePath, LaunchString and code signer.

Get-ASEPImagePathLaunchStringStack.ps1
Returns frequency count of ASEPs aggregated on ImagePath and LaunchString.

Get-ASEPImagePathLaunchStringUnsignedStack.ps1
Same as previous, but filters out signed code.
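
These stackers lean on Logparser for the aggregation. The heart of it is a query along these lines; this is a sketch of the idea, not the scripts' exact SQL, run from the Output\Autorunsc\ directory:

logparser -i:TSV "SELECT COUNT(*) AS CT, ImagePath, LaunchString FROM *autoruns.tsv GROUP BY ImagePath, LaunchString ORDER BY CT ASC"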

A picture is worth a few words; here's a sample of output from the previous analysis script for data from a couple of systems:

[Screenshot: unsigned ASEPs from two hosts, aggregated by ImagePath and LaunchString]
The image above shows the unsigned ASEPs on the two hosts, aggregated by ImagePath and LaunchString. You may want to know which host a given ASEP came from; however, including the host in the output above would break the aggregation. If you want to trace the 7-zip ASEP back to the host it was found on, copy the ImagePath or LaunchString value to the clipboard and, from within the Output\Autorunsc\ path where the script was run, use the Powershell cmdlet:

Select-String -SimpleMatch -Pattern "c:\program files (x86)\7-zip\7-zip.dll" *autoruns.tsv

The result will show the files, and the lines in those files, that match that pattern. Each filename contains the hostname where the data came from, and the hostname is also in the file in the PSComputerName field.

Get-Autorunsc.ps1 returns the following fields:
Time: Last modification time from the Registry or file system for the EntryLocation
EntryLocation: Registry or file system location for the Entry
Entry: The entry itself
Enabled: Enabled or disabled status
Category: Autorun category
Description: A description of the Autorun
Publisher: The publisher from the code signing certificate, if present
ImagePath: File system path to the Autorun
Version: PE version info
LaunchString: Command line arguments or class id from the Registry
MD5: MD5 hash of the ImagePath file
SHA1: SHA1 hash of the ImagePath file
PESHA1: SHA1 Authenticode hash of the ImagePath file
PESHA256: SHA256 Authenticode hash of the ImagePath file
SHA256: SHA256 hash of the ImagePath file
PSComputerName: The host where the entry came from
RunspaceId: The runspaceid for the Powershell job that collected the data
PSShowComputerName: A Boolean flag about whether or not the PSComputerName is included

These last three fields are artifacts of Powershell remoting.

Given the available data and the currently available analysis scripts, what other analysis capabilities make sense for the Get-Autorunsc.ps1 output?

One idea I have is to take the idea explored in a previous post of mine, "Finding Evil: Automating Autoruns Analysis," and build a script that takes an external dependency on a database of file hashes categorized as good, bad and unknown. The script would match hashes in the Get-Autorunsc.ps1 output, discarding the good, alerting on the bad and submitting unknowns to VirusTotal to see what, if anything, is known about them. If VT says they are bad, insert them into the database and alert. If VT says they are good, insert them into the database and ignore them in future runs. If VT has no information on them, mark them for follow up and send them to Cuckoo sandbox or similar for analysis.
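
To sketch the VirusTotal piece: the v2 public API will return a report for a hash with a single request. Invoke-RestMethod requires Powershell v3, the API key is a placeholder, and rate limiting and error handling are omitted:

# Ask VirusTotal's v2 API what it knows about a file hash
$apikey = "<your VT API key>"
$md5 = "BF93A2F9901E9B3DFCA8A7982F4A9868"
$r = Invoke-RestMethod -Method Post -Uri "https://www.virustotal.com/vtapi/v2/file/report" -Body @{ apikey = $apikey; resource = $md5 }
if ($r.response_code -eq 1) { "{0}/{1} engines flagged {2}" -f $r.positives, $r.total, $md5 }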

What ideas do you have? What would be helpful to you during IR?

Thanks for taking the time to read and comment with your thoughts and ideas.

If you found this information useful, please check out the SANS DFIR Summit where I'll be speaking about Kansa, IR and analysis in June.

Friday, April 25, 2014

Kansa: Get-Started

Last week I posted an introduction to Kansa, the modular, Powershell live response tool I've been working on in preparation for my presentation at the SANS DFIR Summit.

Please do me a favor and click the DFIR Summit link. :)

My previous post was a high level overview. This one will dive in. If you'd like to try it out, you'll need a system or systems that are configured for Windows Powershell remoting and you'll need an account with permissions to run remote jobs on those hosts.

Getting a copy of Kansa is easy; this link will pull down a zipped copy of the master repository. Simply download it and extract it to a location of your choice. On my machine I've extracted the files to C:\tools\Kansa. As of this writing the main directory of the archive consists of the following files and folders:

[Screenshot: Kansa's top-level directory listing]

Kansa.ps1 is the script used for kicking off data collection.

The Analysis folder contains Powershell scripts for conducting basic analysis of the collected data. It's really just a starting point as of this writing and more analysis scripts will be added as I have time to create and commit them. Many of the Analysis scripts require Logparser, but much of the data collected by Kansa could be imported directly into your database of choice for analysis.

The lib folder contains some common code elements, but as of yet, none of it is in use elsewhere in Kansa.

The Modules folder contains the plugins that Kansa will invoke on remote hosts. As of this writing, the modules folder consists of the following:

[Screenshot: Modules folder contents]

I like to think of the modules as collectors. They follow the Powershell Verb-Noun naming convention, and I'm guessing you can tell what most of them do based on their names. Most of them are very simple scripts; however, some may appear a bit complicated because they reformat the data they collect to make it ready for analysis. For example, the Get-Netstat.ps1 module calls Netstat.exe -naob on each remote host. Netstat's output when called with these flags is relatively unfriendly for analysis. Here's a sample:

[Screenshot: raw netstat -naob output]

Kansa's Get-Netstat.ps1 module reformats this data and the output becomes... well, first we have a bit of a diversion:

[Screenshot: Powershell security warning when running the downloaded Get-Netstat.ps1]

First, notice I'm running the Get-Netstat.ps1 module directly on the localhost. As of this writing, all of Kansa's modules may be run standalone; they don't have to be run within the framework. But more relevant to the screenshot above: when I try to run it, I get prompted by a "Security warning" because the script was downloaded from the internet. You should probably look through any script you download before running it in your environment; Powershell is trying to be helpful here. There are multiple ways around this. You can enter "R" or "r" here and the script will run once. You can view the properties of the files within Explorer and "unblock" them, or you can use the Powershell Unblock-File cmdlet to unblock them all:

[Screenshot: unblocking all of Kansa's files with Unblock-File]
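
From the top of the Kansa directory, something like this clears the downloaded-file flag on everything; a sketch, and note that -File requires Powershell v3:

# Recursively unblock every file under the current directory
Get-ChildItem -Recurse -File | Unblock-File
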
Now with the files unblocked, let's run Get-Netstat.ps1 again and see how the output is made more friendly for analysis:

[Screenshot: Get-Netstat.ps1 returning Powershell objects]

What we have here are Powershell objects, and Powershell objects can be easily manipulated into a variety of formats: XML, CSV, TSV, binary, etc. Most of the Kansa modules return Powershell objects, and each module can include an "# OUTPUT" directive on its first line that tells Kansa how to treat the output. For example, Get-Netstat.ps1's OUTPUT directive is "# OUTPUT tsv". If that looks like a Powershell comment, well, it is, but Kansa.ps1 looks for it on line one of each module and, if it finds it, honors the directive; in this case it converts the Powershell objects above into tab separated values and writes the data out to a file on disk. If a module doesn't include an OUTPUT directive on line one, Kansa defaults to treating the output as text.
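
In other words, a minimal module skeleton looks like this; the body is an illustration, not one of Kansa's modules:

# OUTPUT tsv
# Line one above is the directive. The rest of the module just emits
# Powershell objects for Kansa to serialize as tab separated values.
Get-Process | Select-Object Name, Id, Path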

The end result for Get-Netstat.ps1, when invoked by Kansa on a remote system, is that you'll have tab separated values for your Netstat -naob output, like this:

[Screenshot: Netstat -naob output as tab separated values]
That tab separated data can easily be imported into the database of your choice; you can run queries directly against it using Logparser, load it into Excel, etc.

Looking back at the Modules folder contents above, you'll notice a bin directory. There are a trio of modules that call binaries. At one point, I had functionality in Kansa to push binaries from this folder to remote hosts, and getting this back in is on the project's ToDo list. I just haven't found the perfect way to do it yet, so I've backed it out. For now, the three modules that require binaries expect those binaries to be in the $env:SystemRoot directory of each remote host; this is generally C:\Windows, which is also the ADMIN$ share. In practice today, I simply use Copy-Item to push binaries to remote hosts' ADMIN$ shares. A future version of Kansa will distribute these binaries at run time.

[Update: As of 2014-04-27, I've implemented a -Pushbin command line flag that will cause Kansa to try and copy required binaries to targets. See Get-Help -Full Kansa.ps1 for details. The Get-Autorunsc.ps1, Get-Handle.ps1 and Get-ProcDump.ps1 modules can be referenced as examples.]

Two other files in the Modules directory don't look like the others. default-template.ps1 is a template for building new modules; mostly it gives some guidance. Reading it and some of the existing collectors should give you enough information to create your own.

Lastly, there's a modules.conf file in the Modules folder. This file controls which modules will be invoked on the remote systems and the order in which they will be invoked; this allows the user to execute collection in the order of volatility and to limit what is collected for more targeted acquisition. All modules are currently listed in the modules.conf file. To prevent a module from being executed, simply comment it out with the pound-sign/hashtag/octothorpe/#. If the modules.conf file is missing, all modules will be run in the default directory listing order.

Alright, so how do we run this thing? For most use cases, I recommend you create a text file containing the list of hosts you want to run modules on. For demonstration purposes, here is my hostlist file:

[Screenshot: sample hostlist file]

The script has been run on thousands of hosts at a time, so if you want to try it on a larger list, it should work fine, provided you've satisfied two prerequisites: 1) your targets are configured for Windows Remoting, see the link above; 2) the account you're using has Admin access to the remote hosts.

If you don't use the -TargetList argument (see below), Kansa will query Active Directory for the list of computers in the domain and target them all. You can limit the number of targets, with or without -TargetList, using the -TargetCount argument.

Here's a sample run using the default modules.conf and my hostlist:

[Screenshot: a sample Kansa.ps1 run with -Verbose output]

That's it. You'll see the same view if you run it over thousands of hosts, though it may take longer. Powershell remoting has a default setting that limits it to running tasks on 32 hosts at a time, but it is configurable. Currently Kansa uses the default setting; in a future version, I may make this a command line option.

Where does the output go? You can specify where you want the output written, but in this case I took the default, which is the Output folder within the Kansa folder. Let's see what's there:

[Screenshot: Output folder contents, one folder per module]

Each collector's output is written to its own folder. Drilling down one more level, let's look in the Handle folder:

[Screenshot: Handle folder contents, output from one host]

But wait, you say: we ran this with two hosts in the hostlist file we provided to the -TargetList argument. Where's the data for the other host? Well, I ran this from an Admin command prompt and my local Admin account doesn't have permission to run jobs on Selfridge, so those jobs failed. Currently the user receives no warning or error when a job can't run on a remote host; fixing this is an item on the ToDo list. If I drop out of the admin command prompt and run it as a different user that has admin access to both my localhost and the remote host, the folder above would look like this:

[Screenshot: Handle folder contents, output from both hosts]

Now, before you go running this in your environment and complaining that you don't have handle data from your hosts, recall that the Get-Handle.ps1 module is one that has a binary dependency. If you want to run it, you'll first need to copy handle.exe to the ADMIN$ share of each target. This is easily accomplished with something like:

foreach($h in (gc hostlist)) { copy-item handle.exe \\$h\admin`$ }

from a Powershell prompt with the appropriate privileges to copy data to the remote hosts. I haven't tested that command, you may have to do some trickery to properly escape whack-whacks.

Incidentally, handle.exe, like netstat.exe with -naob, is another utility with analysis-unfriendly output. Luckily, Get-Handle.ps1 will return Powershell objects and direct Kansa to reformat the output as TSV, ready for import into your db of choice or for querying with Logparser.

Kansa's script includes built-in help. To view it, simply use the Powershell Get-Help -Full Kansa.ps1 command and you'll get something like the following:

[Screenshot: Get-Help -Full Kansa.ps1 output]

That should be enough to get you started with Kansa. Please take it for a spin and give me some feedback. Let me know what breaks, what sucks and what works. And if you have ideas for collectors, send them my way, or if you want to contribute something, that would be welcome.

I've got a few more posts coming, so stay tuned and thanks! Oh, and if you found this interesting, useful, please take a moment to check out the SANS DFIR Summit where I'll be discussing Kansa and how it can be used to hunt at scale, or as I like to think of it, seine for evil.

Saturday, April 19, 2014

Kansa: A modular live response tool for Windows enterprises

Folks who follow me on Twitter, @davehull, have seen chatter from me about a side-project I've been working on, Kansa, in preparation for my presentation at the SANS DFIR Summit in Austin in June. While the Github page for the project contains a Readme.md that gives a little information about what the project is and does, I thought a series of blog posts was in order.

A look at the Readme.md today says Kansa is a modular rewrite of another script in my Github repo called Mal-Seine. Mal-Seine was a Powershell script I hacked together for evidence collection during incident response.

Mal-Seine worked, but it had issues. First, the 800 pound gorilla in the room: Andrew Case raises an excellent point about relying on Powershell for live response work:

https://twitter.com/attrc/status/444163636664082432

He's right. Users of live response tools relying on the Windows API must remain cognizant that adversaries may use malware to subvert the Windows API to prevent discovery. In plain English: when attackers compromise a computer, they can install malicious software that can lie about itself and its activities. A comprehensive incident response strategy must include other tools, some that get off the box entirely, a la network security monitoring, and some that subvert the Windows API themselves. Clicking the image above will take you to the Twitter thread on this subject.

My response to Case's absolutely correct claim is two-fold.
  1. As I've already mentioned, any investigator using tools on live systems that rely on the Windows API must keep in mind that their tools may be lied to and therefore may provide incomplete or inaccurate information.
  2. As I replied to Case on Twitter, "not every threat actor is hooking" [the Windows API]. "If you can't find it with a first pass, go deep," meaning a tool like Mal-Seine can run relatively quickly across hosts and may not require you to push agent software to every node. If you don't find what you're looking for, you may need to push agents to each node and dig deeper.
To which the grugq smartly replied:

 
Based on this conversation, I sought data about the percentage of malware known to subvert the Windows API. The lack of response from the big players in the anti-malware community was disappointing. One anti-malware group engaged in the conversation; they couldn't provide numbers, but said that API hooking is a capability found in a small number of malware families and that, based on the data I was collecting via Mal-Seine, it was unlikely that many families could hide themselves completely.
 
That said, one is too many, and in the cat-and-mouse game that is information security, it's only a matter of time before every piece of malware has these capabilities. We absolutely need more tools in the defensive arsenal that are as advanced as the most advanced malware. Mal-Seine and its successor, Kansa, are not those advanced tools.
 
Potential Kansa users, I implore you to keep in mind this significant caveat. It's right there in the Readme.md.
 
Having said that, do I think it can still be a useful tool? Yes. If you're in a Windows 7 or later enterprise environment and your systems are configured for Powershell remoting, it can be a powerful way to collect data from hundreds, thousands, or tens of thousands of systems in a relatively short amount of time.
 
Aside from this API subversion issue, which persists in Kansa, the problem with Mal-Seine was that it wasn't written to take advantage of Powershell's remoting capabilities, so it didn't scale well. More importantly, because it called binaries from a share and wrote its data to a share, it required CredSSP. What's the issue with CredSSP? From Microsoft's TechNet:
 
http://technet.microsoft.com/en-us/library/bb931352.aspx
 
The issue is highlighted. Because the script was calling binaries from a share, writing data to a share and being run remotely, it required that the user's credentials be fully delegated to each system where it ran, so those remote systems could authenticate to the bin and data shares as that user. This unconstrained delegation meant the user's credentials were exposed for harvesting by adversaries on every node where the script was run. That's bad. During IR we want to mitigate risk, not increase it. CredSSP was increasing risk.
 
Another shortcoming of Mal-Seine was that it was monolithic. The logic for all the evidence to be collected from remote systems was contained in one file. If a user wanted to collect only a subset of the evidence or to add new data for collection, they would have to modify the script.
 
When I set out to rewrite Mal-Seine, I had three goals in mind:
  1. It needed to obviate CredSSP.
  2. It needed to take full advantage of Powershell's remoting capability to make it more scalable.
  3. It needed to be modular.
I'm happy to say that with the help of folks like @jaredcatkinson,  @scripthappens, @JosephBialek and no small amount of hand-holding by @Lee_Holmes, items one and two from the list were brought to fruition.
 
For goal three, I turned to the grand-daddy of all modular forensics tools for inspiration: @keydet89's RegRipper. As a result, Kansa is modular, with the main script providing the following core functionality:
  • If the user doesn't supply their own list of remote systems, targets, Kansa will query Domain Controllers and build the list automatically
  • Error handling and transcription with errors written to a central file and transcription optional
  • Powershell remote job management -- invoking each module on target systems, in parallel, currently using Powershell's default of 32 systems at a time
  • Output management -- as the data comes in from each target, Kansa writes the output to a user specified folder, one file per target per module
  • A modules.conf file where users can specify which modules they want to run and in what order, lending support to the principle of collecting data in the order of volatility
There is more work to be done on the project and I'm actively maintaining a ToDo list on the Github site.
 
In addition to this core functionality, there are the modules, or collectors as I like to think of them. Today there are 18 collectors. For the most part, the collectors are stand-alone Powershell scripts. Two current exceptions are Get-Autorunsc.ps1 and Get-Handle.ps1, which require the Sysinternals binaries Autorunsc.exe and Handle.exe, respectively, to be in the $env:SystemRoot path of each target, which corresponds to the Windows ADMIN$ share. So if you want to use those two collectors, first push those binaries to the ADMIN$ shares of your targets. If your environment supports Windows Remoting, you can accomplish this with a foreach loop and Copy-Item across thousands of hosts in relatively short order.
 
If you want to play around with Kansa, download the project, skim the code (the code is always the most accurate documentation, if not the most readable :b), and ensure your target(s) support Windows Remoting -- covered elsewhere, Bing it. I recommend building a target list by putting the names of a couple of test systems in a text file; below, mine is called "hostlist". The -TargetCount argument is optional; my hostlist file contains dozens of systems, but I only want to run against a couple.
 
Here's a sample command line:
 
.\kansa.ps1 -ModulePath .\Modules -OutputPath .\Output\ -TargetList .\hostlist -TargetCount 2 -Verbose
 
In a future post, I'll cover more details about Kansa and its modules. The script enables data collection, and data collection is easy compared to analysis. So I've added an Analysis folder to the project and have provided some sample scripts therein. Most of these will require Logparser. My goal is to automate analysis as much as possible; when dealing with data from many systems, automation is essential.
 
Thanks for reading, for trying out Kansa and for any feedback or contributions!

Saturday, February 15, 2014

Resolving some trigger GUIDs

My last post here on triggers as a Windows persistence mechanism, see http://trustedsignal.blogspot.com/2014/02/triggers-as-windows-persistence.html, gave an example of a Windows Scheduled Task that would run a script when a specific event id appeared in the Microsoft-Windows-Security-Auditing log (i.e. the Security event log).

I added a collector for Windows Service triggers to Mal-Seine, a script for collecting host artifacts during "breach hunts" (https://github.com/davehull/Mal-Seine). Breach hunts are undertaken by security teams who proactively look for evidence of adversaries in their systems and networks, rather than merely waiting for monitoring systems to fire alerts. Breach hunt activities may lead to new detections for those monitoring systems, but I digress.

When I wrote the new collector for Service triggers, I found the collected data pretty opaque. Here's an example:

[Screenshot: raw Service Trigger collector output]

If you're wanting to analyze this data across many systems, say to identify outliers, this presentation leaves something to be desired. Mal-Seine includes a post-collection script to convert this output to separated values (https://github.com/davehull/Mal-Seine/blob/master/Convert-SvcTrigToSV.ps1), suitable for stack ranking or loading into Excel or other tools for further analysis, yielding something like the following:

[Screenshot: Service Trigger data converted to separated values]

Even in a separated values format, I still find the values in the "Condition" field leave something to be desired. Fortunately, some of the GUIDs can be replaced with human-readable values. Each of the entries ending with "[ETW PROVIDER UUID]" corresponds to a Windows Event Log Provider, so we can at least get the provider name, and the script above will perform this replacement for us if run with the -NameProviders flag, giving us:

[Screenshot: Service Trigger data with provider GUIDs resolved to names]

Replacing the GUIDs makes the data a little more approachable. Searching online for the remaining GUIDs will reveal some information for the ones that are followed by information in brackets, but I've not found much on the ones that are not.
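
Resolving a provider GUID to its name needs nothing exotic; on Vista and later, something like this works (the GUID shown is the well-known Security-Auditing provider):

# Resolve an ETW provider GUID to its registered name; the GUID below
# is the Microsoft-Windows-Security-Auditing provider
$guid = [guid]"54849625-5478-4994-a5ba-3e3b0328c30d"
Get-WinEvent -ListProvider * -ErrorAction SilentlyContinue |
    Where-Object { $_.Id -eq $guid } |
    Select-Object Name, Id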

How would I use this data in a breach hunt? I would begin by stack-ranking the data from many similar systems and looking for outliers that I would mark for follow up investigation.

Wednesday, February 12, 2014

Triggers as a Windows persistence mechanism -- an example

@keydet89 posed the following question on Twitter:
 
 
The SANS ISC post discussing triggers as a persistence mechanism is at the following URL:
 
 
@z4ns4tsu responded that he'd seen it and gave some information about the scenario.
 
I replied that I'd encountered it as well and that it also works for Scheduled Tasks, which is actually where I've seen it used. So technically, I guess I should have answered that I hadn't seen it, because I've yet to encounter it on Services, but the mechanism is largely the same for Scheduled Tasks.
 
@keydet89 asked if I could provide more details.
 
Twitter is not the ideal medium, so here's an example:

PS C:\> Get-ScheduledTask | ? { $_.TaskName -match "lochemoot" } | fl *

State                 : Ready
Actions               : {MSFT_TaskExecAction}
Author                : Ridley\Scott
Date                  : 2019-11-07T07:07:07.031337
Description           :
Documentation         :
Principal             : MSFT_TaskPrincipal2
SecurityDescriptor    :
Settings              : MSFT_TaskSettings3
Source                :
TaskName              : lochemoot
TaskPath              : \Microsoft\
Triggers              : {MSFT_TaskEventTrigger}
URI                   :
Version               :
PSComputerName        :
CimClass              : Root/Microsoft/Windows/TaskScheduler:MSFT_ScheduledTask
CimInstanceProperties : {Actions, Author, Date, Description...}
CimSystemProperties   : Microsoft.Management.Infrastructure.CimSystemProperties


PS C:\> Get-ScheduledTask -TaskName lochemoot | % { $_.Triggers }

Enabled            : True
EndBoundary        :
ExecutionTimeLimit :
Id                 :
Repetition         : MSFT_TaskRepetitionPattern
StartBoundary      :
Delay              :
Subscription       : <QueryList><Query Id="0" Path="Security"><Select Path="Security">*[System[EventID=4732]]</Select></Query></QueryList>
ValueQueries       :
PSComputerName     :

What does this do? This Scheduled Task is set to run a script based on the appearance of Microsoft-Windows-Security-Auditing Event Id 4732.
 
What does that event correspond to? 

PS C:\> (Get-WinEvent -ListProvider Microsoft-Windows-Security-Auditing).Events | ? { $_.Id -eq 4732 }

Id          : 4732
Version     : 0
LogLink     : System.Diagnostics.Eventing.Reader.EventLogLink
Level       : System.Diagnostics.Eventing.Reader.EventLevel
Opcode      : System.Diagnostics.Eventing.Reader.EventOpcode
Task        : System.Diagnostics.Eventing.Reader.EventTask
Keywords    : {}
Template    : <template xmlns="http://schemas.microsoft.com/win/2004/08/events">
                <data name="MemberName" inType="win:UnicodeString" outType="xs:string"/>
                <data name="MemberSid" inType="win:SID" outType="xs:string"/>
                <data name="TargetUserName" inType="win:UnicodeString" outType="xs:string"/>
                <data name="TargetDomainName" inType="win:UnicodeString" outType="xs:string"/>
                <data name="TargetSid" inType="win:SID" outType="xs:string"/>
                <data name="SubjectUserSid" inType="win:SID" outType="xs:string"/>
                <data name="SubjectUserName" inType="win:UnicodeString" outType="xs:string"/>
                <data name="SubjectDomainName" inType="win:UnicodeString" outType="xs:string"/>
                <data name="SubjectLogonId" inType="win:HexInt64" outType="win:HexInt64"/>
                <data name="PrivilegeList" inType="win:UnicodeString" outType="xs:string"/>
              </template>

Description : A member was added to a security-enabled local group.

              Subject:
                  Security ID:        %6
                  Account Name:        %7
                  Account Domain:        %8
                  Logon ID:        %9

              Member:
                  Security ID:        %2
                  Account Name:        %1

              Group:
                  Security ID:        %5
                  Group Name:        %3
                  Group Domain:        %4

              Additional Information:
                  Privileges:        %10

So we have a script that runs any time a member is added to a security-enabled local group.
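
For completeness, here's one way such a task could be registered in a lab, using schtasks.exe's ONEVENT schedule type; the task name and script path are illustrative:

# Create a task that fires whenever Security event 4732 is logged.
# Run from an elevated prompt; the names and paths are illustrative.
schtasks /Create /TN "lochemoot" /TR "powershell.exe -File C:\temp\run.ps1" /SC ONEVENT /EC Security /MO "*[System[EventID=4732]]" /RU SYSTEM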
