One of the most exasperating things about attempting to defend an organisation against attacks by hackers is that the conflict is so asymmetric. Enterprises and public bodies operate within the law – well, most of them – and are constrained by ethical and regulatory considerations. Attackers do not and are not.
There must be legions of corporate infosecurity professionals who would dearly love to retaliate against hackers using the same techniques the attackers themselves deploy – malware, network intrusion, social engineering and the rest. What’s stopping them? The law, certainly. The ethical considerations can be harder to grasp and a cynic might argue that the main restraint here is the potential publicity disaster of a botched, corporate-sponsored hacking attempt.
But I’d argue that there is a more fundamental factor – one that is largely responsible for the asymmetric nature of the infosecurity battlefield. And it’s cultural.
Corporate IT and security departments operate within a highly structured, bureaucratic and institutional culture. This shapes the defences they build, the products they buy, the ways in which they respond – in fact, the very way they think about security. Security vendors typically share the same mindset, which influences the solutions they offer.
At the recent ISSE conference in Brussels, I was struck by all the talk of frameworks, guidelines and standards. And this is all useful stuff, even if it does tend to address yesterday’s problems, given the way threats evolve so fast. But while implementing a framework might help you feel as though you’ve achieved something, the chances are that a hacker – who’s never heard of the various standards you’re so proud of complying with, and would be bored rigid by them – is exfiltrating your data via techniques never considered by the framework’s authors.
There’s an awful lot of box-ticking going on in the corporate security world. Got a firewall? Tick. Got an IDS? Tick. Done your annual pen-test? Tick. (No matter that the terms on which you engaged the pen-testers actually forbade them to venture anywhere near production systems, or engage in the social engineering of staff, and demanded completion in a week. In fact, you didn’t really have a pen test at all, did you? You’d have been better off downloading the free version of Nessus and doing a quick vulnerability scan for all the ‘testing’ that went on.)
Organisations like to think in terms of formal processes. If they can give these fancy names, and if they can reference EU standards, and if the reports can be written by people whose impressive accreditations include the word ‘security’ – well, that’s got to be good, right?
The problem is, the attackers do not feel bound to abide by standards or processes. They do not have a well-defined career track. For them, it’s whatever works.
So should we change the way we think about security and the ways in which we respond? There are enormous problems with adopting a hacker mentality. There are those pesky laws and ethics, of course. But there’s also the problem that real hackers don’t fit comfortably in institutional settings. There are attempts to address this: for example, Abertay Dundee University runs an ethical hacking degree course that inculcates a hacker mindset in its students while stressing the ethical and legal aspects and developing security skills in a way that fits with the needs of organisations. This is an approach that has since been copied by other UK universities.
Governments are not above using hackers’ tools and techniques. Georgia’s Computer Emergency Response Team (CERT-Georgia) recently planted malware on systems that were under attack, and claim to have hooked a Russia-based hacker. They even say they have captured video of the alleged miscreant.
Authorities in Germany also, rather controversially, use malware as part of criminal investigations. I tried to sneak into a presentation about this at a recent NATO conference, but got chucked out. Apparently, they don’t like to talk about it too much.
And then there’s Stuxnet and its related malware, developed and deployed by state intelligence agencies. While state-sponsored hacking is usually associated with the likes of China and Russia, that may only be because they are the most flagrant or the most often caught. Western states have organisations and teams in place to carry out these kinds of operations: only time will tell to what extent they’re already doing it.
Perhaps we are witnessing a wearing down of our resistance to using offensive security techniques to combat cybercrime and cyber-espionage. The bad guys have had it their own way for too long. But this is dangerous territory. It’s not as though we don’t know how to do this stuff: we have plenty of people wearing white hats who know the technologies and techniques inside out. The question is whether we can effectively and safely adopt the guerrilla tactics of the hacker for our lawful defence. That’s far from certain.