Friday, July 4, 2014

Kansa: Automating Analysis

Kansa, the PowerShell-based incident response framework, was written from the start to automate acquisition of data from thousands of hosts. But a mountain of collected data is not worth bits without analysis, so analysis has been part of the framework almost from the beginning, as can be seen in this commit from 2014-04-18.

Data collection has been configurable via the Modules.conf text file since the beginning, and the project ships with a default Modules.conf that lists modules in order of volatility. Users can edit the file, commenting and uncommenting lines to disable and enable modules, customizing data collection.
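As a rough illustration, a Modules.conf might look something like the following. The module paths here are examples modeled on Kansa's collectors, not necessarily the shipped defaults; check the packaged Modules.conf for the real entries:

```
# Modules.conf -- one collector per line, listed in order of volatility.
# Comment a line to disable that module; uncomment to enable it.
Modules\Net\Get-Netstat.ps1
Modules\ASEP\Get-Autorunsc.ps1
Modules\Disk\Get-PrefetchListing.ps1
# Modules\Disk\Get-File.ps1
```

Kansa reads this file at run time, so the same install can be retargeted for different collections just by editing it.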

After Kansa completed its collection, users could cd into the newly created output directory, then into the specific module's directory, and run the analysis script of their choosing to derive some intelligence from the collected data. For example, a user might run Kansa's Get-Autorunsc.ps1 collector to gather Autoruns data from a hundred hosts that should have identical or very similar configurations.

Following the data collection, they could cd into the new output directory's Autorunsc subdirectory, then run

Get-ASEPImagePathLaunchStringMD5Stack.ps1

which would return a listing of Autoruns entries aggregated by executable path, command-line arguments, and MD5 hash of the executable or script, sorted in ascending order by frequency. Any outliers land at the top of the list, and those entries may warrant further investigation.
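The stacking idea itself is simple. Here's a minimal sketch in PowerShell of least-frequency-of-occurrence stacking over some hypothetical aggregated Autoruns output — this is illustrative, not Kansa's actual implementation, and the column names are assumptions:

```powershell
# Assume Autorunsc.csv is an aggregate of the per-host Autoruns output,
# with ImagePath, LaunchString and MD5 columns (names are hypothetical).
$asep = Import-Csv .\Autorunsc.csv

# Group by the fields of interest, then sort ascending by count so the
# rarest combinations -- the outliers -- rise to the top of the listing.
$asep |
    Group-Object ImagePath, LaunchString, MD5 |
    Sort-Object Count |
    Select-Object Count, Name
```

In an environment where a hundred hosts should be nearly identical, an entry with a count of one or two is exactly the kind of thing an analyst wants to look at first.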

This was all well and good, but with more than 30 analysis scripts, analysis of the collected data was becoming cumbersome. It was begging for automation. So, I added it.

There is now an Analysis.conf file in the Analysis folder that works much like the Modules\Modules.conf configuration file. Every analysis script has an entry in the file; edit it, commenting out the scripts you don't want to run and uncommenting the ones you do. Then run Kansa with the -Analysis flag: after all the data is collected, Kansa will run each enabled analysis script for you and save the output to AnalysisReports, a new folder under the time-stamped output folder.
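Putting it together, the end-to-end workflow might look like this. The Analysis.conf entries and the kansa.ps1 parameters shown are from memory and may differ slightly by version, so treat them as a sketch:

```powershell
# Analysis\Analysis.conf -- one analysis script per line, e.g.:
#   ASEP\Get-ASEPImagePathLaunchStringMD5Stack.ps1
# Comment out any script you don't want run automatically.

# Then launch Kansa with the -Analysis switch; after collection finishes,
# each enabled analysis script runs and its output lands in
# <timestamped output folder>\AnalysisReports:
.\kansa.ps1 -ModulePath .\Modules -TargetList .\hosts.txt -Analysis
```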

Below is a sample listing:

In the top directory listing of the output directory, you can see the normal output file structure: one folder per module. This was obviously a very limited data collection, with only the Autorunsc, File, Netstat and PrefetchListing modules enabled. Error.Log contains information about errors encountered during the run. What's new here is the AnalysisReports directory.

The bottom directory listing shows the contents of the AnalysisReports path. Each of these is a TSV file containing summary data for the collection, with file names reflecting the analysis script that produced each data set. And the beauty of this is that it's fully automated: configure Analysis\Analysis.conf once, then run Kansa with the -Analysis flag.

I've made some other improvements to Kansa in the last couple of weeks, but I'll save those for the next post. For now, I wanted to share the automated analysis piece. I'm pretty psyched about it because it's a big time-saver, and it puts Kansa in a position to easily produce alerts whenever an analysis script detects the quality or quantity it was written to trigger on.

