Friday, November 7, 2008
Incident Response
Canadian expat, published info sec author and Bermudan beachcomber Andrew Hay recently posted a question over at Michael Santarcangelo's Security Catalyst Community Forums asking about an incident response framework for a bank with several hundred employees.
I gave my answer largely based on Ed Skoudis' outstanding SANS Security 504: Hacker Techniques & Incident Handling course.
I won't rehash that answer here, but I will provide some additional insights. I mentioned in my post that I used to work in info sec on a large, unprotected higher ed network. Enterprise uniformity was non-existent. The academic freedom that makes universities vibrant and interesting makes life hell for info sec personnel. If managing developers is herding cats, running an info sec program in higher ed is like being the poop-scooping clown at the back of the cat herder's parade. Hm, need a better metaphor.
Given the nature of our environment and the constraints placed on info sec, incident response was a regular activity. During my time in academia, I responded to hundreds of incidents (thank you, Blaster, Sasser, Zotob and countless postcard.gif.exes).
If you're creating incident response policies and procedures, be careful not to make them overly specific. Incidents vary widely, and you don't want to be constrained by a short-sighted plan that failed to anticipate the incident in front of you. If your plan states you won't pull the plug on a system without the approval of the system's owner, and an incident occurs where plug pulling is needed but the owner is unreachable, you're damned if you do and damned if you don't. Your policies and procedures need the right amount of flexibility.
If you're an info sec manager, you'll want to run interference for your incident response team. Send your IR folks out in pairs or larger teams. While one person works the incident at the keyboard, another can talk to the system's owner or the manager of the affected department. The handler at the keyboard is going to need to concentrate and that can be difficult with people walking all over you.
On a technical note, if you're going to image the system(s) in question, by all means make an image of RAM for later analysis. There's more and more evidence of malware that remains memory resident only, and if you don't grab the contents of RAM, you may not find all the evidence you need from a hard drive image alone.
There's much more to be said about IR. It's a big and constantly changing field requiring practitioners to stay current.
Saturday, October 25, 2008
Computer Forensics, Investigation and Response
I'm excited. For 10 weeks this summer I was privileged to teach SANS Security 508: Computer Forensics, Investigation and Response via the Mentor program. It is one of my favorite SANS courses for its depth and the extensive hands-on exercises. Unlike other forensics courses that teach specific tools without getting into what's going on behind the scenes, this course pulls back the curtain with an in depth look at different file systems and how they store and organize data on disk.
Once we've covered the foundational materials we introduce a comprehensive methodology that covers all the important aspects of conducting a successful investigation. There's even a day focused on legal issues.
If there's one problem with the course, it's the sheer volume of information to be digested. One nice thing about covering it over 10 weeks, as opposed to in six days, is that you get more time to take it all in, try things out, absorb the content and experiment with the tools and concepts.
I'm excited because I get to do it again starting in January. Full details are available at http://www.sans.org/mentor/details.php?nid=14464. If you live in the Kansas City area and are interested, please check it out. If you know someone else who may benefit from this, please spread the word.
Saturday, October 11, 2008
As seen on Twitter
# mdowd @alexsotirov Because I'm not interested in the common good! about 4 hours ago
# alexsotirov Why is it that all these source code audits of voting machines are done by university professors instead of Mark Dowd? about 4 hours ago
Tuesday, October 7, 2008
SANS' Web App Pen Testing In Depth Day Four
I played around with some interesting tools that I hadn't used before, BeEF for one, and Kevin talked about some tools and ideas that are being developed by him and his colleagues at InGuardians. What a great bunch of minds at InGuardians. I aspire to be like the folks in that company and to work with a similar group of people.
The class wrapped up with an overview of the materials and the process. I'm excited for Kevin going forward. He's put together a good course and it's only going to get better when the six day version comes out.
I stand by my earlier statements about Kevin. He's a great teacher and, judging from the two nights I had the good fortune to have dinner with him and hang out for a bit, he's a quality human being. I've had some brilliant instructors over the years who knew it, and the result was that they were not very approachable. Kevin is a fantastic instructor and two days in a row invited anyone from the class to join him for dinner.
If you're looking to get started in web app pen testing, or you've been doing it for a little while and aren't sure about your methods, I strongly recommend this course. I had some experience with web app pen testing prior to taking the course. The result for me was that the first couple of days were mostly review with a few new nuggets here and there, but day three and especially day four really broke some new ground for me. The entire course also validated my own methods and as an aspiring instructor, it was great to watch Kevin teach the class. He's a natural and I am looking forward to seeing him again at a future con. I hope I can make it to Shmoo in February and maybe catch him there if he attends.
Sunday, October 5, 2008
Day Three of SANS' 542: Web App Pen Testing In Depth
By the way, it's really late and I should be asleep, so I'm keeping this post short. Or I intend to.
We looked at information leakage; this is one of the things I find most often in my own testing. Developers allow their applications to throw errors back to the user, and those errors leak information about the implementation of the application, such as the OS, backend DB or other components of the system. Yes, this is bad. You should return generic error messages, preferably something with the right look and feel for your application. Many developers I work with on a regular basis simply redirect the user back to the start of the application when the app throws an exception. This is not as bad as leaking information, but it sucks for usability. Don't do it that way.
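The pattern is simple enough to sketch. This is a minimal, hypothetical illustration (the wrapper, the GENERIC_ERROR string and the DATABASE stand-in are all mine, not from any real app): log the details server-side, hand the user nothing but a generic message.

```python
import logging

logger = logging.getLogger("app")

# Hypothetical generic message; in practice, style it to match your app.
GENERIC_ERROR = "Something went wrong. Please try again later."

def safe_handler(handler):
    """Wrap a request handler so internal errors never reach the user."""
    def wrapped(request):
        try:
            return handler(request)
        except Exception:
            # Full detail (stack trace, DB driver errors) goes to the server log only.
            logger.exception("unhandled error handling %r", request)
            # The user sees a generic message with no OS/DB/stack details.
            return {"status": 500, "body": GENERIC_ERROR}
    return wrapped

DATABASE = {"1": "record one"}  # stand-in for a real backend

@safe_handler
def lookup(request):
    # Without the wrapper, a missing id raises KeyError and many frameworks
    # would echo the stack trace back to the client.
    return {"status": 200, "body": DATABASE[request["id"]]}
```

The same idea applies whatever the framework: the exception detail and the user-facing response travel on separate paths.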
We looked at username harvesting. This is something I find quite a bit in my work, and it's a difficult problem to overcome, as you'll realize if you spend much time thinking about ways to mitigate it. If you have an app where people can register for a new account, it's hard to prevent username harvesting for obvious reasons. Password resets, security questions and the like are another area where username harvesting is pretty common, but those are generally more preventable. Account registration and creation is the biggie.
I brought this up with Kevin and he had an excellent suggestion. Don't prevent it. Detect it and block the attack. I'll be writing this up as a recommendation in the future.
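For the reset case, the two ideas combine neatly: answer identically whether or not the account exists, and watch for one source trying many usernames. A rough sketch, assuming an in-memory tracker and an arbitrary threshold of my own choosing (real code would track per-IP state somewhere durable):

```python
from collections import defaultdict

# Same message whether or not the account exists, so the reply never
# confirms a username. Threshold is a made-up number for illustration.
RESET_MESSAGE = "If that account exists, a password reset link has been sent."
HARVEST_THRESHOLD = 20

attempts = defaultdict(set)  # source IP -> distinct usernames tried
sent = []                    # stand-in for actually emailing reset links

def request_reset(source_ip, username, known_users):
    attempts[source_ip].add(username)
    if len(attempts[source_ip]) > HARVEST_THRESHOLD:
        # Kevin's suggestion: don't just prevent -- detect and block.
        return "blocked"
    if username in known_users:
        sent.append(username)
    # Identical response for valid and invalid usernames.
    return RESET_MESSAGE
```

The valid and invalid paths must also take comparable time, or an attacker can harvest usernames from the timing difference alone.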
We looked at fuzzing applications using the Burp Suite and talked about Absinthe. I wish there had been an exercise for Absinthe. I have it installed on my pen testing box, but haven't used it yet.
Greasemonkey was introduced. I love Greasemonkey, though I've never used it for pen testing. I find it really useful for adding functionality to web interfaces. It rocks.
The last part of the day was a review of some of the newer developments on the web, namely the Web Services Description Language (WSDL), Universal Description, Discovery and Integration (UDDI), the Simple Object Access Protocol (SOAP), AJAX and JSON. Frankly, I wish we could have spent an entire day on these areas alone.
You've all heard of Web 2.0. AJAX and JSON are two of the core components that drive Web 2.0, but many larger enterprises are only now beginning to roll them out, so many web app pen testers don't have much experience with them, yours truly included. I could use more depth on these technologies and it's on my list to learn as much as I can in the coming weeks.
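The good news for testers is that there's no magic here. A JSON response from an AJAX endpoint is just structured text the client-side code parses, which means an intercepting tester can read and rewrite it like any other parameter. The field names and values below are invented for illustration:

```python
import json

# What a hypothetical AJAX endpoint might return to the browser.
raw = '{"user": "admin", "roles": ["ops"], "debug": false}'

# The browser's JavaScript does the equivalent of this parse; so can a tester
# sitting behind an intercepting proxy.
data = json.loads(raw)
data["roles"].append("audit")  # tamper with the structure before replaying
resent = json.dumps(data)
```

The server-side lesson is the mirror image: never trust that a JSON body came from your own client-side code unmodified.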
All in all, another good day. Again, I wish the course had more hands-on exercises and as I've mentioned previously, I know it's coming in the six day version of the course. In fact, I had dinner with Kevin Johnson, the author of the course and a couple other students and Kevin talked about how day six of the course is going to be a full on web app pen test exercise from start to finish. If it includes all the whiz-bang Web 2.0 aspects, that will be really beneficial.
Now if you'll excuse me, I should have been asleep a couple hours ago. Good night now!
Saturday, October 4, 2008
Day Two: SANS Network Security 2008
Scouting your client is an essential part of the process and the course presents methods and tools for accomplishing this. Google is your friend. Johnny Long's Google Hacking For Penetration Testers will be a trusted companion. Some less well-known, but highly useful methods were covered. If you've studied pen testing, you may have knowledge of these. If it seems I'm being vague, that's because I am. I respect the work that Kevin has put into developing the course and I'm not going to give it all away.
I do wish the course had more hands-on with some of the tools that are presented. I know from talking to Kevin that the course is going to be expanded. Perhaps the expanded version will include more exercises with some of these tools.
After gathering useful info about the company, we drill down and gather details about the technologies that are being used. The usual tools for gathering information about systems are covered, but there were a few that I had not used, including RSnake's Fierce.
There was a discussion of some issues relating to web application server architecture and some of the caveats that can throw off a pen tester and ways to work around those obstacles.
We continued to drill down, from the servers to the apps on those servers and ways to gather useful information about those applications. Most of the tools discussed were ones I'd had experience with before. One exception was OWASP's DirBuster. I'd seen it before, but never bothered to try it out. Now I have and will incorporate it into my testing.
All in all, another good day in class and I'm really looking forward to tomorrow when we enter the discovery phase where we will uncover weaknesses in the app.
Friday, October 3, 2008
SANS Network Security 2008
I am currently in Las Vegas at Network Security 2008 attending Web Application Penetration Testing In-Depth developed and taught by Kevin Johnson of InGuardians, developer of BASE, Samurai and many other Open Source projects.
I wasn't sure I should take the course. I've been doing web app pen tests for a while. By no means am I an expert and I don't claim to know all there is to know, but I wasn't sure I would get enough from the course to make it worth my while. I'd say on a scale of one to five, five being an expert, I'm probably almost a four. You should know that one of my many flaws is that I consistently underestimate my abilities.
Day one didn't teach me many new things about web application pen testing, but there were a few nuggets. Still, based on day one, I am confident that over the next three days I will pick up many great insights that will make me more effective.
Johnson has put a tremendous effort into the course materials and he may be the best instructor I've ever had. He has a very friendly and knowledgeable approach. He's clearly a subject matter expert, but he has the right amount of self-effacing humor.
Based on what I've seen thus far, being in this course is going to have two great benefits. I will learn to be a better web app pen tester and will learn how to improve my teaching skills.
Wednesday, October 1, 2008
Feldman's product or idea maturity model
I was listening to Gary McGraw's Silver Bullet Security Podcast Show 002 where Dan Geer was the victim.
If you haven't listened to the Silver Bullet Podcast, it is a series of interviews with information security luminaries. I find most of the guests to be fascinating and Geer was obviously no exception. He had many great things to say.
One of the things Geer mentioned during the show was that he used to attend talks by Stu Feldman, a computer scientist forged in the bowels of Bell Labs (what an amazing place that must have been to work). Feldman is the creator of the make utility and is currently a VP of Engineering at Google.
Apparently Feldman had this concept for evaluating the maturity or quality of an idea, concept or product. After the show, I searched for it online and couldn't find it, so I thought I'd share it here in hopes that maybe Google will index it and others could find it later and know who to attribute it to.
Feldman's model has five levels, so they can be counted on one hand. Here it is:
1. You have a good idea.
2. You can make it work.
3. You convince a gullible friend to try it.
4. People stop asking why you're doing it.
5. People start asking others why they aren't doing it.
That's it: Feldman's method for evaluating the maturity of a product or idea. With apologies to Mr. Feldman for publishing this without his permission.
Tuesday, September 9, 2008
Strategic thinking and doing
While the organization has an architect, he's too busy with short-term projects to focus on longer term strategy. His attention is on an encompassing portal project, while infrastructure issues such as developing standards relating to languages, tools and processes go unattended.
No two development teams work the same way. There's no central repository for code that can be shared across the enterprise. Even within the same team, developers don't have access to all of the same tools.
Few of the external facing web pages adhere to accepted web standards and even the branding is inconsistent. These are not security issues, but they reflect an overall pattern that permeates the organization's development efforts.
Part of me wonders if adapting the Fixing Broken Windows crime fighting approach might help us clean up all aspects of our development process the same way it helped clean up the streets of New York.
Tuesday, September 2, 2008
Freedom and security
Friday, August 29, 2008
SANS Network Security 2008
I'm psyched about going to NS2008. The last time I was in Vegas was for Black Hat 2006.
If I've met you online via Twitter or through the PaulDotCom.com IRC channel and you're going to be in Vegas, let's meet up and grab a beer. And if you're Kevin Johnson or Ed Skoudis, I've already promised you a round.
Friday, August 22, 2008
Security at your table?
How else do we explain the fact that security is often not invited to the table? Or if they are allowed a spot at the table, it is all too often so that they can bus the dirty dishes and clean up everyone else's mess.
If your organization is serious about security, give infosec a seat at the table at the start of the project planning process. Much has been written about the cost of adding security or fixing bugs late in the software development process. This is not new information.
If your company wants to develop more secure software, bring qualified security personnel to the table during the requirements gathering phase. Ask them to contribute to the project from start to finish. They should have input every step of the way. In addition to reviewing the customer's functional requirements, they should provide input on the system's security requirements.
After requirements have been gathered, include security in the planning phase of your project. Don't just ask security to review your plans, ask them to contribute to the planning process. Invite them to the planning meetings.
Ask security to help you with testing along the way. If you have a static code analysis tool use it early and often. You may save yourself days or weeks of refactoring if you discover insecure coding techniques early in the development process. Whether you have a static code analysis tool or not, you need to include security in your code review process.
Finishing your code and handing it off to security for a comprehensive review at the end without their involvement along the way is better than nothing, but it is less than ideal. Often as a project nears completion, delivery schedules are being made. Too often these delivery schedules are made without input from the security team. Developers and their managers underestimate the amount of time that will be required to review code. Do not make this mistake in your organization. Do your code reviews along the way and include security in that process as you go.
Once your development nears completion, begin planning your application penetration test. Obviously security needs to be included in the planning for the penetration test. I recently worked an application pen test where I was tasked with testing some changes to an existing application. Testing the changed functionality required three different types of accounts, yet when I was brought in to look at it, I hadn't been given a single account in the system. Due to the nature of the application, getting accounts created and properly setup took several days. All the while the clock was ticking on a scheduled delivery date. Fortunately for this organization, the test was completed successfully a day before the scheduled delivery date.
One more thing you should know: a successful penetration test will prove that your application has security problems. A failed penetration test does not prove that the application is secure. Your best chance of building secure systems is to invite security to the table early and keep them engaged throughout the process.
Don't worry, when your system is compromised and someone makes a mess of things, you can still call security to have them clean up the mess.
Friday, August 15, 2008
Touch on Windows via PowerShell
My first thought was that maybe wmic could accomplish the task. Turns out wmic can only read timestamps, not set them.
More digging revealed that Microsoft's PowerShell could be used to modify file timestamps.
Below is the nitty and the gritty.
From within PowerShell:
$(Get-Item file).CreationTime = $(Get-Date "08/31/2012")
$(Get-Item file).LastAccessTime = $(Get-Date "08/31/2012")
$(Get-Item file).LastWriteTime = $(Get-Date "08/31/2012")
There are also UTC timestamp attributes (CreationTimeUtc, etc.). I haven't touched (no pun intended) those.
Here's a sample run from my PowerShell prompt (PS>):
PS> date
Thursday, August 14, 2008 9:38:47 AM
PS> echo > test.txt
PS> dir
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 8/14/2008 9:38 AM 0 test.txt
PS>$(get-item test.txt).lastwritetime=$(get-date "08/31/2012")
PS>dir
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 8/31/2012 12:00 AM 0 test.txt
You can use these commands to change timestamps such that their
CreationTime is later than their other timestamps.
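For the curious, here's a rough cross-platform analogue in Python. Note that os.utime only covers access and modification times; it can't set the creation time the way the PowerShell properties above can. The filename and date are arbitrary:

```python
import datetime
import os

# Create an empty file, then backdate (or in this case, future-date) its
# access and modification times, like the PowerShell example above.
path = "touch_demo.txt"
open(path, "w").close()

future = datetime.datetime(2012, 8, 31, 0, 0).timestamp()
os.utime(path, (future, future))  # (atime, mtime)
```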
Fun stuff.
Wednesday, August 13, 2008
Blended attacks for pen testers
If you haven't read CVE-2008-3280, it discusses findings by Ben Laurie and his team at Google in cooperation with Dr. Richard Clayton. In short, the advisory discusses awesomely powerful blended attacks that leverage Kaminsky's DNS findings, the entropy issues Debian suffered earlier this year (CVE-2008-0166) and the lack of CRL checking by browsers.
Will this be the year of the blended attack? Recall CVE-2008-2540, the blended attack that relied on Safari's saving downloaded files to the desktop and the way Windows desktop deals with executables.
Along these lines, I am looking forward to the insights Ed Skoudis and Kevin Johnson will share at SANS Network Security 2008. Skoudis and Johnson are teaming up to deliver the keynote titled "The Ultimate Pen Test: Combining Network and Web App Techniques for World Domination."
In my own experience conducting web app pen tests, I've found command injection flaws that allowed me to execute arbitrary system commands as the Apache user. Granted, running commands with Apache's privilege level isn't as good as being root (unless the box is misconfigured), but the Apache user can cat /etc/passwd, see who the frequent users are via the last command, or, depending on egress filtering, may be able to run traceroute from the web server to help map the network from the inside out, or download a pen tester's agent to facilitate deeper penetration. Ahem.
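None of the actual engagement appears here, but the flaw class is easy to sketch. A minimal, hypothetical Python illustration of why shelling out with concatenated user input is dangerous (echo stands in for traceroute so the sketch runs anywhere):

```python
import subprocess

def run_lookup_vulnerable(host):
    # Vulnerable pattern: user input concatenated into a shell command line.
    # A value containing ';' smuggles in a second command.
    return subprocess.run("echo tracing " + host, shell=True,
                          capture_output=True, text=True).stdout

def run_lookup_safe(host):
    # Safe pattern: an argument vector and no shell, so ';' is just text.
    return subprocess.run(["echo", "tracing", host],
                          capture_output=True, text=True).stdout

# What an attacker submits as the "host" parameter:
payload = "example.com; echo INJECTED"
```

In the vulnerable version the second command executes; in the safe version the whole payload is passed as one inert argument.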
Or consider a web application that contains a Cross Site Request Forgery (CSRF or XSRF, depending on who you ask) vulnerability. If such a flaw exists in the web based management interface for a network security device, we have a pen testing situation that will benefit from the skills of both the web app pen tester and the traditional pen tester. Sharpen your spears for a little targeted phishing. Use Google to find postings by the firewall administrator for the organization. What are the odds that admin will be logged into the firewall web GUI for hours at a time each day? Craft a good email message with a tempting link for that admin, get him to click it while logged into the vulnerable web app, and you're on your way.
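For completeness, the standard defense against this attack is a per-session token embedded in every form: the forged request rides the victim's cookies, but it can't read the victim's session, so it can't supply the token. A minimal sketch (function names are mine, not from any framework):

```python
import hmac
import secrets

def issue_csrf_token(session):
    # Generate an unguessable token, store it server-side in the session,
    # and embed it in every state-changing form the app renders.
    session["csrf_token"] = secrets.token_hex(16)
    return session["csrf_token"]

def is_valid_request(session, submitted_token):
    # A forged cross-site request carries the victim's cookies but cannot
    # know this value, so it fails here. Constant-time compare avoids
    # leaking the token byte by byte.
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted_token or "")
```

A device management GUI that checks only the session cookie, and not a token like this, is exactly the kind of target the keynote scenario describes.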
If someone like me with limited pen testing experience can think up simple ways like this to use a web app pen test as a force multiplier for a network pen test, imagine what Skoudis and Johnson, both experts in the field, will have to say on the subject. Their keynote in Vegas will be one of the best infosec talks of the year.