Tuesday, November 2, 2010
I've got a new blog post up at the SANS Digital Forensics Blog titled Digital Forensics: Detecting time stamp manipulation. The post is my effort to demonstrate that time stamp manipulation on systems running NTFS can be spotted (for now) if examiners take the time to fully investigate all of the available evidence (i.e., compare $STANDARD_INFO and $FILE_NAME time stamps).
This is another salvo in my quest to get Brian Carrier, a true giant in this field, to add the capability to fls to pull $FILE_NAME time stamps into the body file format, so we can build time lines using mactime that include both $STANDARD_INFO and $FILE_NAME time stamps.
Fortunately, Mark McKinnon has written a tool called mft_parser that will do this. As soon as that tool is available for wider release, I'll post a link.
Oh, and I said "for now" because I'm confident that the right rootkit will be able to manipulate $FILE_NAME time stamps as well as $STANDARD_INFO time stamps. In such cases, we'll have to rely on time stamps in other artifacts.
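In the meantime, the core comparison is easy to script against the output of any MFT parser. Below is a minimal Python sketch of two commonly cited heuristics: $STANDARD_INFO times that predate their $FILE_NAME counterparts, and $STANDARD_INFO times with zeroed sub-second precision, which some timestomping tools leave behind. The record layout and field names (si_mtime, fn_mtime, etc.) are my own assumptions for illustration, not the output format of any particular tool.

```python
from datetime import datetime

def timestomp_flags(record):
    """Return a list of timestomping indicators for one MFT record.

    `record` is assumed to be a dict of datetime values keyed by
    hypothetical names like 'si_mtime' and 'fn_mtime'; adapt the
    keys to whatever your MFT parser actually emits.
    """
    flags = []
    for attr in ("mtime", "atime", "ctime", "crtime"):
        si = record.get("si_" + attr)
        fn = record.get("fn_" + attr)
        if si is None or fn is None:
            continue
        # $STANDARD_INFO times can be altered from user land with
        # documented APIs; $FILE_NAME times generally cannot. An $SI
        # time earlier than its $FN counterpart deserves a closer look.
        if si < fn:
            flags.append("si_%s predates fn_%s" % (attr, attr))
        # Some timestomping tools set times with one-second
        # granularity, zeroing the sub-second portion of the $SI time.
        if si.microsecond == 0 and fn.microsecond != 0:
            flags.append("si_%s has zeroed sub-second precision" % attr)
    return flags

record = {
    "si_mtime": datetime(2010, 1, 1, 12, 0, 0),
    "fn_mtime": datetime(2010, 10, 30, 9, 15, 22, 431200),
}
print(timestomp_flags(record))  # both heuristics fire for this record
```

Neither heuristic is proof by itself (installers legitimately set $SI times into the past, for example), but records that trip both are good candidates for deeper review.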
Friday, October 22, 2010
Windows Persistence
I wrote up a post on the SANS Digital Forensics Blog titled Digital Forensics: Persistence Registry keys, where I gave a couple of links to text files containing Registry keys for a Windows XP SP3 system. I'd run Autoruns against the system to gather a list of Registry keys that could (possibly) be used as persistence vectors for malware.
I have collected similar lists for Windows Vista and Windows 7. The files are available at trustedsignal.com/IR.
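If you want to spot-check a live system yourself, here's a minimal Python sketch (Windows only, standard library; the module is winreg in Python 3, _winreg in the Python 2 of that era) that enumerates the values under two of the classic Run keys. The key list is a tiny illustrative subset; Autoruns checks far more locations, which is the point of the lists linked above.

```python
import winreg

# A small sample of classic persistence locations; Autoruns checks
# many, many more than these.
RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE,
     r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER,
     r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

for hive, path in RUN_KEYS:
    try:
        key = winreg.OpenKey(hive, path)
    except OSError:
        continue  # key not present in this hive
    with key:
        i = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, i)
            except OSError:  # no more values under this key
                break
            print("%s\\%s = %s" % (path, name, value))
            i += 1
```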
Friday, September 17, 2010
I see what you did there
I had the pleasure of presenting at Security BSides in KC this morning. Shout outs to hevnsnt and bsideskc for putting on the event.
Unfortunately, my schedule didn't allow me to see all of the talks, but what I did see was valuable. Even so, the "face time" with my peers in the field (I met hal_pomeranz and kriggins in the flesh) was probably more fun than presenting or watching any of the talks.
My talk was called I see what you did there, and it was about time lines in forensic investigations and incident response. Some of the material in the talk comes out of SANS 508: Computer Forensic Investigations and Incident Response, a course I've had the pleasure of teaching a few times. Thank you to Rob Lee for his contributions to the field over at the SANS Digital Forensics Blog. Obviously, the six-day course covers this topic in much more detail than I could in one hour.
BSides is awesome. Everyone should submit talks; it makes you better. Even if you can't talk about an original tool or concept, many people don't know what you know, and when you prepare to share it with them, you become more knowledgeable than you were when you started.
Saturday, July 10, 2010
Give 'em enough rope
I mentioned on Twitter that, having worked in relatively unrestricted environments (higher ed), in highly restricted environments (banking), and in between, in my experience the environments with more draconian policies have better security.
No sooner had I hit "Send" than I realized I should have also said "controls," because policies by themselves are pretty lousy security controls.
Since Twitter is less than ideal for elaborating on, well, much of anything, let me explain what I mean, for those who may not agree.
Most of my work in information security has been in the incident response and forensics space, with a few years in application security -- I'm a recovering developer.
During my time in higher ed, most users ran as admin and could install whatever they wanted, whenever they wanted. They could browse to any web site of their choosing. This is the "give 'em enough rope" approach to information security. The problem is that this doesn't just lead to the users hanging themselves. Shops operating in this manner are giving their employees enough rope to hang the entire organization.
During my tenure in the financial sector, few people ran as admin, application whitelisting was in effect for most people, and web content filtering kept most people from browsing to known-malicious or "inappropriate" domains.
The results of these two disparate approaches were striking. Higher ed was an incident responder/forensic investigator's dream job, as there was never a shortage of interesting work. By contrast, the bank didn't have any full-time incident response and forensics folks. During my two years at the bank, we had fewer than a handful of issues, and they were all drive-by downloads from rogue advertisements on mainstream web sites.
I believe most organizations could greatly improve their security and reduce costs by taking away internet access for those employees that don't need it and greatly restricting internet access for those who do need it. It's unpopular, it's draconian, but it works.
Don't let your users run as admin. I can't believe we're still seeing this as much as we are. If you have some users who need admin access, give them separate accounts to use when they need that level of access.
Whitelisting. It sucks. It's a horrible pain for the users and for those who have to maintain it. Before I worked in an environment that used it, I dismissed it completely. But as much as it sucks and is painful to implement and maintain, it will reduce the number of security incidents you have to deal with. Note: if you take away your users' admin rights, you may not need whitelisting.
I've said almost nothing about application security, but this is another area where more restriction leads to greater security. Limit your developers' access to production environments. Don't let them adopt new technologies/frameworks/libraries without first taking the time to review the security of those technologies. Don't let devs move forward on projects until threat models have been developed and the threats have been addressed. Don't let code go to production without some type of review, don't push applications to prod without security testing those apps, and so on.
Yes, this is expensive and time consuming, but in my opinion it's a pay now or pay more later scenario. Spending thousands up front may save you from spending hundreds of thousands after a breach.
Will all of this save every organization 100% of the time? No, but it will significantly reduce the number of incidents. Will it be popular with employees? No, but watching the Double Rainbow Song is probably something they should do on their own time and on their own computer.
Security will never be perfect, but a big part of the reason it's as broken as it is today is that we haven't made the unpopular decisions that need to be made.
Tuesday, June 29, 2010
Wifi Security Slides
I had the privilege of being invited to speak about wireless security at the U.S. Army Combined Arms Center, Office of the Chief Information Officer & G6's Information Security Symposium at Fort Leavenworth in Leavenworth, KS on June 29, 2010. Yes, that is a "challenge coin" that says "Presented by the commanding general..." in the photo, a nice addition to the collection.
Judging by the comments I received after the talk, it went well. I didn't drop any "1337" hax0r tricks or any zero day; in fact, during the few weeks I spent preparing for the presentation, it seemed to me that there isn't much new coming out for 802.11, or, more likely, I don't travel in the right circles.
The talk drew entirely on the research of others, and I tried to give credit wherever it was due. Thank you, Josh Wright, for letting me stand on your shoulders. I did tell people to visit your site and take your course.
I have made the slides available in PDF and PPT format. There are a few canned video demos in the PPT version that are obviously not in the PDF version and the PPT version contains copious notes, not found in the PDF.
Thanks again to Austin Pearson, Major Fraley, et al for the opportunity.
Wednesday, April 14, 2010
Career change management
Twenty-three months ago, after an equally long hiatus, I returned to the career field I am drawn to more than any other. I was happy to be back in an information security role, even if I did give up a "director" level position for more of an in-the-weeds role. I do like weeds.
For the last two years, I spent about 99% of my working life on web application security. Week one at my new job had me in training, learning how to use Fortify's static code analysis tool. I'd done manual code review before in both developer and security roles. In a previous position, I'd evaluated the open source static analysis tools and found them to be less effective than performing data flow analysis manually. Granted, data flow analysis may not catch all security flaws, but it provides good coverage against attacks from user input.
After two years working with Fortify, I'm happy to be done with it. Code review is difficult for humans to do well. Humans can barely write good software. Creating an expert system that does code review effectively is a "hard problem."
To be fair, the company I worked for was using Fortify out of the box with no custom rules and a decision was made early on to review all findings regardless of confidence level and severity rating. Decisions have inertia. I was told by someone within Fortify that our usage was "really aggressive" and learned that many similar enterprises were only reviewing issues with high confidence and high severity. Perhaps our aggressive program skewed my perception of the tool's ability to find vulnerabilities.
I will say this in Fortify's favor, though: requiring developers to use the tool and to audit the issues in their code (audit does not always mean fix) does educate those developers who take the time to read and understand the issues. So even if the tool is suboptimal for finding real security issues, it will make at least some of your developers think and learn about security. In the end, having developers who understand security issues and who can write safe code is probably more valuable than having a tool that can reliably find flaws.
I've glossed over it thus far, but for those reading between the lines, yes, I've moved on from the day job that I've had for the last two years. A few of you knew I worked for a regional bank holding company doing application security work (mostly web related). It was the most demanding position I've ever held; note that demanding does not equate to challenging, though at times the position was both. All in all, my experience over the last two years was very positive. I worked with a team of smart folks and pushed myself to learn some critical skills.
I'm working in a new role now that is more aligned with my interests in incident response and forensics, though I will continue to work in the web application penetration testing space as often as I can. But if you ask me to do code review, I'll likely be doing it the old-fashioned way: tracing data flows, line by line.
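For those who haven't done manual data flow analysis, the exercise is: pick a source of user input, follow every assignment and function call it flows through, and see whether it reaches a dangerous sink unsanitized. Here's a toy, deliberately vulnerable Python example of the kind of source-to-sink trace I mean (the schema and function names are made up for illustration):

```python
import sqlite3

def lookup(conn, username):
    # SOURCE: `username` arrives from the user, so it is tainted.
    # PROPAGATION: the tainted value is spliced into the query string.
    query = "SELECT * FROM users WHERE name = '%s'" % username
    # SINK: the tainted string is executed -- classic SQL injection.
    return conn.execute(query).fetchall()

def lookup_safe(conn, username):
    # The fix a reviewer would suggest: a parameterized query keeps
    # user data out of the code path entirely.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```

A static analysis tool is trying to automate exactly that trace at scale; the hard part, and where the false positives come from, is deciding which propagation steps actually sanitize the data.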
With my professional and personal interests more in sync than they have been for several years, I hope to be able to post some new research here or at the SANS Digital Forensics Blog, which I've been helping Rob Lee manage since its inception.
Wednesday, January 13, 2010
Musings on recent high profile hacks
So Google got hacked. You can read about it all over the place. The details are few, but from the sounds of the articles I've read, Google has been hit by what Mandiant likes to call the Advanced Persistent Threat, or APT. In a nutshell, the APT is likely nation-state-backed hackers. Note that we have no idea which nation-state.
Google says they lost intellectual property but claims that no customer data was compromised. OK. I have worked more than one incident response case over the last few years where I felt we could honestly say that, and that was after days and days of reviewing logs and running leads to ground. Maybe Google is being forthright about it. Maybe they're saying it for CYA. Either way, I don't think it's all that interesting.
What does interest me are the non-obvious ways that attacking Google can be leveraged into devastating attacks. Own Google, own the net.
Adobe has also been hacked. I think it would be a sad irony if they were hacked via PDF malware sent to an executive in the company. I also think that's highly likely.
And the US military has admitted that the unmanned drones they've been using in theaters of operation around the world are having their transmissions sniffed with $26 software readily available on the net. The government has known for roughly five years that the transmissions were unencrypted and could be intercepted.
People have been crying about this quite a bit, about how shameful it is, and so on. What I haven't heard anyone else talk about is how the US government could use this vulnerability to its advantage. First, they could reverse engineer the $26 software, look for remotely exploitable vulnerabilities, and use those to attack the parties intercepting the traffic.
A more obvious attack would be to feed bogus images through the drone to those sniffing the traffic, thus launching a misinformation campaign.
There are many facets to compromise.