Tuesday, April 29, 2014

Kansa: Autoruns data and analysis

I want your input.

With the "Trailer Park" release of Kansa marking a milestone for the core framework, I'm turning my focus to analysis scripts for data collected by the current set of modules. As of this writing there are 18 modules, with some overlap between them. I'm seeking more ideas for analysis scripts to package with Kansa and am hopeful that you will submit comments with novel, implementable ideas.

Existing modules can be divided into three categories:

  1. Auto Start Extension Points or ASEP data (persistence mechanisms)
  2. Network data (netstat, dns cache)
  3. Process data
In the interest of keeping posts to reasonable lengths, I'll limit the scope of each post to a small number of modules or collectors.

ASEP collectors and analysis scripts

Get-Autorunsc.ps1

Runs Sysinternals Autorunsc.exe with arguments to collect all ASEPs (that Autoruns knows about) across all user profiles, including ASEP hashes, code signing information (Publisher) and command line arguments (LaunchString).
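As a rough sketch of what that boils down to (the exact flags the module passes are an assumption on my part based on Autorunsc of this vintage; check the module source for the authoritative set):

# Hedged sketch: -a all ASEPs, -c CSV output, -f file hashes, -v verify
# signatures, '*' all user profiles; /accepteula suppresses the EULA prompt.
& "$env:SystemRoot\Autorunsc.exe" /accepteula -a -c -f -v '*' | ConvertFrom-Csv

Piping the CSV text through ConvertFrom-Csv turns each entry into a Powershell object, which is what makes the per-field output described below possible.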

Current analysis scripts for Get-Autorunsc.ps1 data:
Get-ASEPImagePathLaunchStringMD5Stack.ps1
Returns a frequency count of ASEPs aggregated on ImagePath, LaunchString and MD5 hash.

Get-ASEPImagePathLaunchStringMD5UnsignedStack.ps1
Same as previous stacker, but filters out signed ASEPs.

Get-ASEPImagePathLaunchStringPublisherStack.ps1
Returns frequency count of ASEPs aggregated on ImagePath, LaunchString and code signer.

Get-ASEPImagePathLaunchStringStack.ps1
Returns frequency count of ASEPs aggregated on ImagePath and LaunchString.

Get-ASEPImagePathLaunchStringUnsignedStack.ps1
Same as previous, but filters out signed code.
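Under the hood, these stackers boil down to Logparser aggregation queries over the collected tsv files. Here's a hedged approximation, not the literal query from any of the scripts:

LogParser.exe -i:TSV -o:CSV "SELECT COUNT(*) AS ct, ImagePath, LaunchString FROM *autoruns.tsv GROUP BY ImagePath, LaunchString ORDER BY ct ASC"

Sorting ascending floats the rare entries to the top; an ASEP that appears on one host out of thousands is exactly the kind of outlier worth eyeballing.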

A picture is worth a few words; here's sample output from the previous analysis script, run against data from a couple of systems:

[Screenshot: unsigned ASEPs from two hosts, stacked by ImagePath and LaunchString]
The image above shows the unsigned ASEPs on the two hosts, aggregated by ImagePath and LaunchString. You may want to know which host a given ASEP came from; however, including the host in the output above would break the aggregation. If you want to trace the 7-zip ASEP back to the host it was found on, copy the ImagePath or LaunchString value and, from the Output\Autorunsc\ path where the script was run, use the Powershell cmdlet:

Select-String -SimpleMatch -Pattern "c:\program files (x86)\7-zip\7-zip.dll" *autoruns.tsv

The result will show the files, and the lines in those files, that match the pattern. Each filename contains the hostname the data came from, and the hostname is also in the file itself in the PSComputerName field.

Get-Autorunsc.ps1 returns the following fields:
Time: Last modification time from the Registry or file system for the EntryLocation
EntryLocation: Registry or file system location for the Entry
Entry: The entry itself
Enabled: Enabled or disabled status
Category: Autorun category
Description: A description of the Autorun
Publisher: The publisher from the code signing certificate, if present
ImagePath: File system path to the Autorun
Version: PE version info
LaunchString: Command line arguments or class id from the Registry
MD5: MD5 hash of the ImagePath file
SHA1: SHA1 hash of the ImagePath file
PESHA1: SHA1 Authenticode hash of the ImagePath file
PESHA256: SHA256 Authenticode hash of the ImagePath file
SHA256: SHA256 hash of the ImagePath file
PSComputerName: The host where the entry came from
RunspaceId: The runspaceid for the Powershell job that collected the data
PSShowComputerName: A Boolean flag indicating whether PSComputerName is included in the output

These last three fields are artifacts of Powershell remoting.

Given the available data and the currently available analysis scripts, what other analysis capabilities make sense for the Get-Autorunsc.ps1 output?

One idea is to build on the idea explored in a previous post of mine, "Finding Evil: Automating Autoruns Analysis." This would be a script that takes an external dependency on a database of file hashes categorized as good, bad and unknown. The script would match hashes in the Get-Autorunsc.ps1 output, discarding the good, alerting on the bad and submitting unknowns to VirusTotal to see what, if anything, is known about them. If VT says they are bad, insert them into the database and alert. If VT says they are good, insert them into the database and ignore them in future runs. If VT has no information on them, mark them for follow-up and send them to Cuckoo sandbox or similar for analysis.
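Here's a minimal sketch of that triage loop. The VirusTotal public API v2 file/report endpoint is real as of this writing; the $Known hashtable, the apikey placeholder and the tsv filename are stand-ins:

$VTApiKey = 'YOUR-VT-API-KEY'   # hypothetical; bring your own key
$Known    = @{}                 # hypothetical hash database: 'good' or 'bad' keyed by MD5

Import-Csv -Delimiter "`t" .\Output\Autorunsc\somehost-autoruns.tsv | ForEach-Object {
    if (-not $_.MD5) { return }
    switch ($Known[$_.MD5]) {
        'good' { return }                                         # discard known good
        'bad'  { Write-Warning "Known bad ASEP: $($_.ImagePath)"; return }
    }
    # Unknown hash: ask VirusTotal (public API v2 allows 4 requests per minute)
    $r = Invoke-RestMethod -Method Post -Uri 'https://www.virustotal.com/vtapi/v2/file/report' `
        -Body @{ apikey = $VTApiKey; resource = $_.MD5 }
    if ($r.response_code -eq 1 -and $r.positives -gt 0) {
        $Known[$_.MD5] = 'bad'
        Write-Warning "VT flags $($_.ImagePath): $($r.positives)/$($r.total) engines"
    } elseif ($r.response_code -eq 1) {
        $Known[$_.MD5] = 'good'
    } else {
        "$($_.MD5) unknown to VT; queue for sandbox follow-up"
    }
    Start-Sleep -Seconds 16                                       # stay under the rate limit
}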

What ideas do you have? What would be helpful to you during IR?

Thanks for taking the time to read and comment with your thoughts and ideas.

If you found this information useful, please check out the SANS DFIR Summit where I'll be speaking about Kansa, IR and analysis in June.

Friday, April 25, 2014

Kansa: Get-Started

Last week I posted an introduction to Kansa, the modular, Powershell live response tool I've been working on in preparation for my presentation at the SANS DFIR Summit.

Please do me a favor and click the DFIR Summit link. :)

My previous post was a high-level overview. This one will dive in. If you'd like to try it out, you'll need a system or systems configured for Windows Powershell remoting, and you'll need an account with permissions to run remote jobs on those hosts.
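If a test system isn't already set up, the quickest route on a standalone box is something like the following, run from an elevated Powershell prompt (domain environments typically enable remoting via Group Policy instead; the target name below is a stand-in):

Enable-PSRemoting -Force               # configures the WinRM service and firewall exceptions
Test-WSMan -ComputerName testbox01     # confirms the target's WinRM listener is answering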

Getting a copy of Kansa is easy: this link will pull down a zipped copy of the master repository. Simply download it and extract it to a location of your choice. On my machine I've extracted the files to C:\tools\Kansa. As of this writing, the main directory of the archive consists of the following files and folders:


Kansa.ps1 is the script used for kicking off data collection.

The Analysis folder contains Powershell scripts for conducting basic analysis of the collected data. It's really just a starting point as of this writing, and more analysis scripts will be added as I have time to create and commit them. Many of the Analysis scripts require Logparser, but much of the data collected by Kansa could be imported directly into your database of choice for analysis.

The lib folder contains some common code elements, but as yet none of it is in use elsewhere in Kansa.

The Modules folder contains the plugins that Kansa will invoke on remote hosts. As of this writing, the modules folder consists of the following:

I like to think of the modules as collectors. They follow the Powershell Verb-Noun naming convention, and I'm guessing you can tell what most of them do based on their names. Most of them are very simple scripts; however, some may appear a bit complicated because they reformat the data they collect to make it ready for analysis. For example, the Get-Netstat.ps1 module calls Netstat.exe -naob on each remote host. Netstat's output when called with these flags is relatively unfriendly for analysis. Here's a sample:


Kansa's Get-Netstat.ps1 module reformats this data, and the output becomes... well, first we have a bit of a diversion:


First, notice I'm running the Get-Netstat.ps1 module directly on the localhost. As of this writing, all of Kansa's modules may be run standalone; they don't have to be run within the framework. But more relevant to the screenshot above, when I try to run it, I get prompted with a "Security warning" because the script was downloaded from the internet. You should probably look through any script you download before running it in your environment; Powershell is trying to be helpful here. There are multiple ways around this: you can enter "R" or "r" here and the script will run once, you can view the properties of the files within Explorer and "unblock" them, or you can use the Powershell Unblock-File cmdlet to unblock them all:
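Something along these lines does it (assuming the C:\tools\Kansa path from above; Get-ChildItem's -File switch requires Powershell 3.0):

# Remove the "downloaded from the internet" Zone.Identifier marker from every file
Get-ChildItem -Recurse -File C:\tools\Kansa | Unblock-File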


Now with the files unblocked, let's run Get-Netstat.ps1 again and see how the output is made more friendly for analysis:


What we have here are Powershell objects, and Powershell objects can be easily manipulated into a variety of formats: XML, CSV, TSV, binary, etc. Most of the Kansa modules return Powershell objects, and each module can include an "# OUTPUT" directive on its first line that tells Kansa how to treat the output. For example, Get-Netstat.ps1's OUTPUT directive is "# OUTPUT tsv". If that looks like a Powershell comment, well, it is, but Kansa.ps1 looks for it on line one of each module and, if it finds it, honors the directive; in this case it converts the Powershell objects above into tab separated values and writes the data out to a file on disk. If a module doesn't include an OUTPUT directive on line one, Kansa defaults to treating the output as text.
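To make the convention concrete, here's a hypothetical minimal module, not one that ships with Kansa:

# OUTPUT tsv
# The directive above is all Kansa.ps1 needs; everything this module writes
# to the pipeline is serialized as tab separated values, one file per target.
Get-Process | Select-Object Name, Id, Path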

The end result for Get-Netstat.ps1, when invoked by Kansa on a remote system, is that you'll have tab separated values for your Netstat -naob output, like this:

[Screenshot: Get-Netstat.ps1 output written as tab separated values]
That tab separated data can easily be imported into the database of your choice; you can run queries directly against it using Logparser, load it into Excel, etc.
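For example, a quick Logparser pass over all hosts' netstat output might look like this; the ForeignAddress column name is my assumption, so check the headers in your tsv files:

LogParser.exe -i:TSV -o:CSV "SELECT ForeignAddress, COUNT(*) AS ct FROM *netstat.tsv GROUP BY ForeignAddress ORDER BY ct DESC"

Sorting descending puts the noisiest endpoints first; flip to ASC to surface the rare ones.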

Looking back at the Modules folder contents above, you'll notice a bin directory. Three of the modules call binaries. At one point, I had functionality in Kansa to push binaries from this folder to remote hosts, and I have it on the ToDo list for the project to get this back in. I just haven't found the perfect way to do it yet, so I've pulled it out. For now, the three modules that require binaries expect those binaries to be in the $env:SystemRoot directory of each remote host; this is generally C:\Windows, which is also the ADMIN$ share. In practice today, I simply use Copy-Item to push binaries to remote hosts' ADMIN$ shares. A future version of Kansa will distribute these binaries at run time.

[Update: As of 2014-04-27, I've implemented a -Pushbin command line flag that will cause Kansa to try and copy required binaries to targets. See Get-Help -Full Kansa.ps1 for details. The Get-Autorunsc.ps1, Get-Handle.ps1 and Get-ProcDump.ps1 modules can be referenced as examples.]

Two other files in the Modules directory don't look like the others. The first, default-template.ps1, is a template for building new modules. Mostly it gives some guidance; reading it and some of the existing collectors should give you enough information to create your own.

Lastly, there's a modules.conf file in the Modules folder. This file controls which modules will be invoked on the remote systems and the order in which they will be invoked; this allows the user to execute collection in the order of volatility and to limit what is collected for more targeted acquisition. All modules are currently listed in the modules.conf file. To prevent a module from being executed, simply comment it out with the pound-sign/hashtag/octothorpe/#. If the modules.conf file is missing, all modules will be run in the default directory listing order.
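A short, hypothetical excerpt (the module names are real, the selection and ordering are illustrative):

# modules.conf: one module per line, run top to bottom
Get-Netstat.ps1
Get-Handle.ps1
Get-Autorunsc.ps1
# Get-ProcDump.ps1   <- commented out, will not run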

Alright, so how do we run this thing? For most use cases, I recommend you create a text file containing the list of hosts you want to run modules on. For demonstration purposes, here is my hostlist file:
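The file is just one hostname per line; a hypothetical example (Selfridge is one of my hosts, mentioned below; the first name is a stand-in):

Gilmour
Selfridge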


The script has been run on thousands of hosts at a time, so if you want to try it on a larger list, it should work fine, provided you've satisfied two prerequisites: 1) your targets are configured for Windows Remoting, see the link above; 2) the account you're using has Admin access to the remote hosts.

If you don't use the -TargetList argument (see below), Kansa will query Active Directory for the list of computers in the domain and target them all. You can limit the number of targets, with or without -TargetList, using the -TargetCount argument.

Here's a sample run using the default modules.conf and my hostlist:
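For reference, the command behind that run is along these lines, taking the defaults for the module and output paths:

.\Kansa.ps1 -TargetList .\hostlist -Verbose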


That's it. You'll see the same view if you run it over thousands of hosts, though it may take longer. Powershell remoting has a default setting that limits it to running tasks on 32 hosts at a time, but it is configurable. Currently Kansa uses the default setting; in a future version, I may make this a command line option.
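That default is Powershell remoting's throttle limit, which Invoke-Command exposes as -ThrottleLimit. A hypothetical illustration of the knob, not code from Kansa itself:

# Fan a module out to targets, raising the concurrency ceiling from 32 to 64
Invoke-Command -ComputerName $targets -FilePath .\Modules\Get-Netstat.ps1 -ThrottleLimit 64 -AsJob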

Where does the output go? You can specify where you want the output written, but in this case I took the default, which is the Output folder within the Kansa folder. Let's see what's there:


Each collector's output is written to its own folder. Drilling down one more level, let's look in the Handle folder:


But wait, you say: we ran this with two hosts in the hostlist file we provided to the -TargetList argument. Where's the data for the other host? Well, I ran this from an Admin command prompt, and my local Admin account doesn't have permission to run jobs on Selfridge, so those jobs failed. Currently the user receives no warning or error when a job can't run on a remote host; fixing this is an item on the ToDo list. If I drop out of the admin command prompt and run it as a different user that has admin access to both my localhost and the remote host, the folder above would look like this:

Now, before you go running this in your environment and complain that you don't have handle data from your hosts, recall that the Get-Handle.ps1 module is one that has a binary dependency. If you want to run it, you'll first need to copy-item handle.exe to the ADMIN$ share of each target. This is easily accomplished with something like:

foreach($h in (gc hostlist)) { copy-item handle.exe "\\$h\admin`$" }

from a Powershell prompt with the appropriate privileges to copy data to the remote hosts. Note that $host is a reserved automatic variable in Powershell, so use a different loop variable, and the trailing dollar sign in admin$ needs a backtick escape inside double quotes.

Incidentally, handle.exe, like netstat.exe with -naob, is another utility with analysis-unfriendly output. Luckily, Get-Handle.ps1 returns Powershell objects and directs Kansa to reformat the output as tsv, ready for import into your db of choice or for querying with Logparser.

Kansa was written with comment-based help in the script. To view it, simply use the Powershell command Get-Help -Full .\Kansa.ps1 and you'll get something like the following:



That should be enough to get you started with Kansa. Please take it for a spin and give me some feedback. Let me know what breaks, what sucks and what works. And if you have ideas for collectors, send them my way, or if you want to contribute something, that would be welcome.

I've got a few more posts coming, so stay tuned and thanks! Oh, and if you found this interesting or useful, please take a moment to check out the SANS DFIR Summit where I'll be discussing Kansa and how it can be used to hunt at scale, or as I like to think of it, to seine for evil.

Saturday, April 19, 2014

Kansa: A modular live response tool for Windows enterprises

Folks who follow me on Twitter, @davehull, have seen chatter from me about a side-project I've been working on, Kansa, in preparation for my presentation at the SANS DFIR Summit in Austin in June. While the Github page for the project contains a Readme.md that gives a little information about what the project is and does, I thought a series of blog posts was in order.

A look at the Readme.md today says Kansa is a modular rewrite of another script in my Github repo called Mal-Seine. Mal-Seine was a Powershell script I hacked together for evidence collection during incident response.

Mal-Seine worked, but had issues. First, the 800 pound gorilla in the room. Andrew Case raises an excellent point about relying on Powershell for live response work:

https://twitter.com/attrc/status/444163636664082432

He's right. Users of live response tools that rely on the Windows API must remain cognizant that adversaries may use malware to subvert the Windows API to prevent discovery. In plain English: when attackers compromise a computer, they can install malicious software that can lie about itself and its activities. A comprehensive incident response strategy must include other tools, some that get off the box entirely, a la network security monitoring, and some that subvert the Windows API themselves. The link above will take you to the Twitter thread on this subject.

My response to Case's absolutely correct claim is two-fold.
  1. As I've already mentioned, any investigator using tools on live systems that rely on the Windows API must keep in mind that their tools may be lied to and may therefore provide incomplete or inaccurate information.
  2. As I replied to Case on Twitter, "not every threat actor is hooking" [the Windows API]. "If you can't find it with a first pass, go deep," meaning a tool like Mal-Seine can run relatively quickly across hosts and may not require you to push agent software to every node. If you don't find what you're looking for, you may need to push agents to each node and dig deeper.
To which the grugq smartly replied:

 
Based on this conversation, I sought data about the percentage of malware known to subvert the Windows API. The lack of response from the big players in the anti-malware community was disappointing. One anti-malware group engaged in the conversation; they couldn't provide numbers, but said that API hooking is a capability generally found in a small number of malware families, and that based on the data I was collecting via Mal-Seine, it was unlikely there would be very many families that could hide themselves completely.
 
That said, one is too many, and in the cat-and-mouse game that is information security, it's only a matter of time before every piece of malware has these capabilities. We absolutely need more tools in the defensive arsenal that are as advanced as the most advanced malware. Mal-Seine and its successor, Kansa, are not these advanced tools.
 
Potential Kansa users, I implore you to keep in mind this significant caveat. It's right there in the Readme.md.
 
Having said that, do I think it can still be a useful tool? Yes. If you're in a Windows 7 or later enterprise environment and your systems are configured for Powershell remoting, it can be a powerful way to collect data from hundreds, thousands, tens of thousands of systems in a relatively short amount of time.
 
Aside from this API subversion issue, which persists in Kansa, the problem with Mal-Seine was that it wasn't written to take advantage of Powershell's remoting capabilities, so it didn't scale well. More importantly, because it called binaries from a share and wrote its data to a share, it required CredSSP. What's the issue with CredSSP? From Microsoft's TechNet:
 
http://technet.microsoft.com/en-us/library/bb931352.aspx
 
Because the script was calling binaries from a share, writing data to a share and being run remotely, it required the user's credentials to be fully delegated to each system where it was running, so those remote systems could authenticate to the bin and data shares as that user. This unconstrained delegation meant that the user's credentials were exposed for harvesting by adversaries on every node where the script was run. That's bad. During IR we want to mitigate risk, not increase it. CredSSP was increasing risk.
 
Another shortcoming of Mal-Seine was that it was monolithic. The logic for all the evidence to be collected from remote systems was contained in one file. If a user wanted to collect only a subset of the evidence, or wanted to add new data for collection, they had to modify the script.
 
When I set out to rewrite Mal-Seine, I had three goals in mind:
  1. It needed to obviate CredSSP.
  2. It needed to take full advantage of Powershell's remoting capability to make it more scalable.
  3. It needed to be modular.
I'm happy to say that with the help of folks like @jaredcatkinson, @scripthappens, @JosephBialek and no small amount of hand-holding by @Lee_Holmes, items one and two from the list were brought to fruition.
 
For goal three, I turned to the grand-daddy of all modular forensics tools for inspiration: @keydet89's RegRipper. As a result, Kansa is modular, with the main script providing the following core functionality:
  • If the user doesn't supply their own list of remote systems (targets), Kansa will query Domain Controllers and build the list automatically
  • Error handling and transcription with errors written to a central file and transcription optional
  • Powershell remote job management -- invoking each module on target systems, in parallel, currently using Powershell's default of 32 systems at a time
  • Output management -- as the data comes in from each target, Kansa writes the output to a user specified folder, one file per target per module
  • A modules.conf file where users can specify which modules they want to run and in what order, lending support to the principle of collecting data in the order of volatility
There is more work to be done on the project and I'm actively maintaining a ToDo list on the Github site.
 
In addition to this core functionality, there are the modules, or collectors as I like to think of them. Today there are 18 collectors. For the most part, the collectors are stand-alone Powershell scripts. Two current exceptions are Get-Autorunsc.ps1 and Get-Handle.ps1, which require the Sysinternals binaries Autorunsc.exe and Handle.exe, respectively, to be in the $env:SystemRoot path of each target, which corresponds to the Windows ADMIN$ share. So if you want to use those two collectors, first push those binaries to the ADMIN$ shares of your targets. If your environment supports Windows Remoting, you can accomplish this with a foreach loop and Copy-Item, reaching thousands of hosts in relatively short order.
 
If you want to play around with Kansa, download the project, skim the code (the code is always the most accurate documentation, if not the most readable :b), and ensure your target(s) support(s) Windows Remoting, covered elsewhere; Bing it. I recommend building a target list by putting the names of a couple of test systems in a text file; below, mine is called "hostlist". The -TargetCount argument is optional; my hostlist file contains dozens of systems, but I only want to run it on a couple.
 
Here's a sample command line:
 
.\kansa.ps1 -ModulePath .\Modules -OutputPath .\Output\ -TargetList .\hostlist -TargetCount 2 -Verbose
 
In a future post, I'll cover more details about Kansa and its modules. The script enables data collection, and data collection is easy compared to analysis. So I've added an Analysis folder to the project and have provided some sample scripts therein. Most of these require Logparser. My goal is to automate analysis as much as possible. When dealing with data from many systems, automation is essential.
 
Thanks for reading, for trying out Kansa and for any feedback or contributions!
