Wednesday, January 13, 2010

Musings on recent high-profile hacks

So Google got hacked. You can read about it all over the place. The details are few, but from the sound of the articles I've read, Google has been hit by what Mandiant likes to call the Advanced Persistent Threat, or APT. In a nutshell, APT means hackers who are likely nation-state backed. Note that we don't have any idea which nation-state.

Google says it lost intellectual property but claims that no customer data was compromised. Ok. I have actually worked more than one incident response case over the last few years where I felt we could honestly say that, and that was only after days and days of reviewing logs and running leads to ground. Maybe Google is being forthright about that. Maybe they are saying it for CYA. I don't think it's all that interesting.

What does interest me are the non-obvious ways that attacking Google can be leveraged into devastating attacks. Own Google, own the net.

Adobe has also been hacked. I think it would be a sad irony if they were hacked via PDF malware sent to an executive in the company. I also think that's highly likely.

And the US military has admitted that the unmanned drones it has been using in theaters of operation around the world are having their transmissions sniffed by $26 software readily available on the net. The government has known for something like five years that those transmissions were unencrypted and could be intercepted.
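
To make concrete what leaving that link unencrypted means, here is a minimal Python sketch. It is purely illustrative and not the actual drone downlink protocol: the frame bytes, the key provisioning, and the choice of AES-GCM from the cryptography package are all assumptions on my part. It contrasts what a passive sniffer gets from a plaintext broadcast with what it gets once frames are encrypted.

    # Minimal illustration only -- not the real drone downlink protocol.
    # Assumes the third-party "cryptography" package is installed.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    frame = b"...hypothetical sensor/video frame bytes..."

    # Unencrypted broadcast: a passive listener tuned to the signal
    # receives the frame exactly as sent. This is the $26-software case.
    what_the_sniffer_gets = frame

    # Encrypted broadcast: drone and ground station share a key, and the
    # listener captures only an opaque (nonce, ciphertext) pair.
    key = AESGCM.generate_key(bit_length=128)   # provisioned out of band
    aead = AESGCM(key)
    nonce = os.urandom(12)                      # must be unique per frame
    ciphertext = aead.encrypt(nonce, frame, None)

    # Only a key holder can recover (and authenticate) the frame.
    assert aead.decrypt(nonce, ciphertext, None) == frame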

People have been crying about this quite a bit, about how shameful it is, etc. What I haven't heard anyone else talk about is how the US government could possibly use this vulnerability to its advantage. First, it could reverse-engineer the $26 software, see if it has any remotely exploitable vulnerabilities, and use those to attack whoever is intercepting the traffic.

A more obvious attack would be to feed bogus images through the drone to those sniffing the traffic, thus launching a misinformation campaign.

There are many facets to compromise.
