For incident response teams these technologies have a more important purpose than denying deviants access to the inappropriate. Given that many malware variants use HTTP/S (or the ports for those protocols) for command and control (C2) and data exfiltration -- it's a great way to blend in with the noise of allowed web traffic -- these web content filtering tools can be put to good use both preventing and detecting malicious traffic.
Many of the content filtering tools automatically block known-malicious web sites out of the box. In my experience, putting together your own lists of malicious sites can give you additional protection. You can build your own lists from publicly available sources (e.g. http://www.malwaredomains.com/, http://www.malwareurl.com/, http://bit.ly/xSO8rx, etc.), from intel gathered by analyzing malware found in your own environment, and, if you run in certain circles, from government classified lists you may have access to.
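Merging several such feeds into one normalized blocklist is straightforward. Here's a minimal sketch; the feed contents are illustrative stand-ins (most public feeds are one domain per line, with `#` for comments), and in practice you'd read them from downloaded files:

```python
# Sketch: merge several domain-feed files into one normalized blocklist.
# The inline lists below stand in for downloaded feed files.

def normalize(domain):
    """Lowercase, strip whitespace and any trailing dot."""
    return domain.strip().lower().rstrip(".")

def load_feed(lines):
    """Yield normalized domains, skipping blank lines and comments."""
    for line in lines:
        line = line.split("#", 1)[0].strip()
        if line:
            yield normalize(line)

def build_blocklist(feeds):
    """Deduplicated union of all feeds."""
    blocklist = set()
    for lines in feeds:
        blocklist.update(load_feed(lines))
    return blocklist

# Hypothetical feed data:
feed_a = ["evil.example.com", "# a comment", "Bad.example.NET."]
feed_b = ["bad.example.net", "c2.example.org"]
print(sorted(build_blocklist([feed_a, feed_b])))
# ['bad.example.net', 'c2.example.org', 'evil.example.com']
```

Normalizing case and trailing dots matters here: the same domain often appears in slightly different forms across feeds and log sources, and a plain string match will miss those variants.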
Over the last few years, commercial full-packet capture solutions have begun making inroads into the enterprises that can afford them. Some environments store this data and later play it back through their IDSes after updating signatures, to see whether those devices now catch anything they missed the first time around.
This same principle applies with web content filtering data. Rather than playing back the data, simply normalize and store the domains and IP addresses that devices in your network have communicated with over the last n months and periodically query your malicious domains data set to see which of those domains and IPs may have been overlooked because they were not known to be malicious at the time devices in your network were communicating with them.
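The retrospective check itself amounts to a set intersection between the stored history and the current blocklist. A minimal sketch, assuming an in-memory dict stands in for whatever store holds the last n months of proxy/filter logs (domain names and dates below are invented):

```python
from datetime import date

# Sketch: retrospective matching of historical web-traffic destinations
# against an updated malicious-domain list.

# domain observed in logs -> dates a host communicated with it (illustrative)
history = {
    "news.example.com": [date(2012, 1, 5)],
    "cdn.example.net": [date(2012, 1, 9), date(2012, 2, 2)],
    "dropzone.example.org": [date(2012, 2, 14)],
}

# Updated blocklist: dropzone.example.org was not known-bad in February.
blocklist = {"dropzone.example.org", "c2.example.biz"}

def retro_hits(history, blocklist):
    """Domains we talked to that are now known malicious, with dates seen."""
    return {d: history[d] for d in history.keys() & blocklist}

for domain, dates in sorted(retro_hits(history, blocklist).items()):
    print(domain, "->", ", ".join(d.isoformat() for d in dates))
# dropzone.example.org -> 2012-02-14
```

Keeping the dates (and ideally the source hosts) alongside each domain is what turns a retro hit into something actionable: it tells you which machines to pull for forensics and how long the exposure window was.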
Will you get some false positives from this? Undoubtedly, but you may also gain insight into problems you didn't know you had -- maybe a footnote to tuck into next quarter's 10-Q.