Twenty-three months ago, after an equally long hiatus, I returned to the career field I am drawn to more than any other. I was happy to be back in an information security role, even if it meant giving up a "director" level position for a more in-the-weeds role. I do like weeds.
For the last two years, I spent about 99% of my working life on web application security. Week one at that job had me in training, learning how to use Fortify's static code analysis tool. I'd done manual code review before, in both developer and security roles. In a previous position, I'd evaluated the open source static analysis tools and found them less effective than performing data flow analysis manually. Granted, data flow analysis may not catch every class of security flaw, but it provides good coverage against attacks that arrive via user input.
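To make "data flow analysis" concrete, here's a minimal sketch in Python (an illustration of mine, not code from any application I actually reviewed). The reviewer's job is to trace the tainted value from source to sink and confirm nothing neutralizes it along the way:

import sqlite3

def find_user(conn, username):
    # Source: 'username' arrives straight from the request, untrusted.
    # Sink: string concatenation splices the tainted value into the SQL
    # grammar itself -- classic SQL injection. A payload like
    # "' OR '1'='1" returns every row.
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The fix a reviewer would suggest: a parameterized query, which
    # keeps the tainted value out of the query's grammar entirely.
    return conn.execute("SELECT * FROM users WHERE name = ?",
                        (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
print(find_user_safe(conn, "alice"))  # [('alice',)]

A static analyzer is trying to automate exactly that source-to-sink trace; doing it by hand is slower, but you get to reason about sanitization logic that a tool's rules may not model.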
After two years working with Fortify, I'm happy to be done with it. Code review is difficult for humans to do well. Humans can barely write good software. Creating an expert system that does code review effectively is a "hard problem."
To be fair, the company I worked for ran Fortify out of the box, with no custom rules, and a decision was made early on to review all findings regardless of confidence level or severity rating. Decisions have inertia. I was told by someone within Fortify that our usage was "really aggressive" and learned that many similar enterprises were reviewing only issues with high confidence and high severity. Perhaps our aggressive program skewed my perception of the tool's ability to find vulnerabilities.
I will say this in Fortify's favor, though: requiring developers to use the tool and to audit the issues in their code (audit does not always mean fix) does educate the developers who take the time to read and understand the findings. So even if the tool is suboptimal at finding real security issues, it will make at least some of your developers think and learn about security, and in the end, having developers who understand security issues and can write safe code is probably more valuable than having a tool that can reliably find flaws.
I've glossed over it thus far, but for those reading between the lines: yes, I've moved on from the day job I've held for the last two years. A few of you knew I worked for a regional bank holding company doing application security work (mostly web related). It was the most demanding position I've ever held; note that demanding does not equate to challenging, though at times the position was both. All in all, my experience over the last two years was very positive. I worked with a team of smart folks and pushed myself to learn some critical skills.
I'm working in a new role now, one more aligned with my interests in incident response and forensics, though I will continue to work in the web application penetration testing space as often as I can. But if you ask me to do code review, I'll likely be doing it the old-fashioned way: tracing data flows, line by line.
With my professional and personal interests more in sync than they have been for several years, I hope to be able to post some new research here or at the SANS Digital Forensics Blog, which I've been helping Rob Lee manage since its inception.