ContraRisk Security Podcast 0015: Prism and the cost of surveillance

» Listen or download now on the podcasts page »

Siraj Ahmed Shaikh

In all the debate raging around the NSA’s phone and Internet interception programme, PRISM, little seems to have been said about how it works – and particularly, how well it works.

If you’re a conspiracy theorist or natural paranoid, it’s easy to imagine that PRISM flawlessly and effortlessly plucks suspicious messages from the ether. But it’s just an IT system – and when did you last encounter an IT system of any appreciable size that didn’t suffer from flaws and inefficiencies?

The NSA has access to technology, skills and budgets that corporates can only envy – but its people are not supermen. We can, perhaps, draw some parallels between PRISM and the kinds of security monitoring systems that organisations deploy, such as intrusion detection systems (IDS). But at this scale, how well do those comparisons hold up?

We spoke to Siraj Ahmed Shaikh, a reader in cyber-security at Coventry University with a particular interest in large-scale monitoring systems.

Does the kind of network monitoring we’re accustomed to scale up to this massive level of interception? There are some similarities in the technologies and algorithms used to look for patterns of interest. Unfortunately, one common feature is false positives. And these are expensive, requiring further investigation, additional monitoring and analysts’ time.

When you scale up the systems and the volumes of traffic being analysed, you also scale up the false positives – potentially to the point where they become unacceptably costly. And if you’re spending so much of your resources on dealing with false positives, is there a danger of missing the very thing you’re looking for?
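
To make that scaling problem concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption, not a figure from PRISM or any real system; the point is simply how a tiny false-positive rate, applied to billions of messages, swamps a handful of genuine hits.

```python
# Back-of-the-envelope base-rate arithmetic for large-scale interception.
# All numbers below are illustrative assumptions, not real PRISM figures.

messages_per_day = 10_000_000_000   # assumed volume of intercepted messages
genuine_threats = 100               # assumed truly suspicious messages in that volume
detection_rate = 0.99               # assumed chance a genuine threat triggers an alert
false_positive_rate = 0.0001        # assumed 0.01% of innocent messages alert anyway

true_alerts = genuine_threats * detection_rate
false_alerts = (messages_per_day - genuine_threats) * false_positive_rate

# Precision: of all the alerts raised, what fraction are genuine?
precision = true_alerts / (true_alerts + false_alerts)

print(f"True alerts per day:  {true_alerts:,.0f}")
print(f"False alerts per day: {false_alerts:,.0f}")
print(f"Chance an alert is genuine: {precision:.6%}")
# With these assumptions: ~99 true alerts versus ~1,000,000 false ones,
# so fewer than 1 alert in 10,000 is genuine.
```

Even improving the assumed false-positive rate tenfold still leaves false alerts outnumbering true ones by roughly a thousand to one. That is the base-rate problem lurking behind any monitoring system operated at this scale.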

While, at the IDS level, you might be prepared to invest in expert analysts to deal with the false positives, and ensure that your systems are working as you want them to, at this massive scale there must be a great reliance on automation. But how do you validate and audit that automation? How can you ensure it’s working as advertised, especially where it may be difficult to assess whether an alert is truly genuine or false? How do you hold an algorithm accountable? And how can you be sure you’re getting value for money?
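
One way to approach the validation question, at least in principle, is to audit the automated pipeline against a sample of alerts that human analysts have adjudicated by hand. The sketch below is a hypothetical illustration of that idea using standard precision and recall metrics; the alert data and labels are invented for the example.

```python
# Hypothetical audit of an automated alerting pipeline against a sample
# adjudicated by human analysts (1 = genuine, 0 = false).
# The data here is invented purely for illustration.

adjudicated = [
    # (alert_was_raised, analyst_says_genuine)
    (True, 1), (True, 0), (True, 0), (False, 0),
    (True, 1), (False, 1), (True, 0), (False, 0),
]

tp = sum(1 for raised, genuine in adjudicated if raised and genuine)
fp = sum(1 for raised, genuine in adjudicated if raised and not genuine)
fn = sum(1 for raised, genuine in adjudicated if not raised and genuine)

precision = tp / (tp + fp)   # of the alerts raised, how many were genuine?
recall = tp / (tp + fn)      # of the genuine cases, how many were caught?

print(f"Precision: {precision:.2f}  Recall: {recall:.2f}")
# Low precision means analysts drown in false positives;
# low recall means the system misses the very thing it is looking for.
```

The hard part, as the interview suggests, is that at interception scale even assembling a trustworthy labelled sample is difficult: ground truth about whether an alert was truly genuine may simply not exist.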

There are difficulties in dealing with digital evidence, too: attribution is tricky, so determining the accuracy and integrity of that evidence is also a problem.

We also discussed whether the revelations about PRISM threaten to damage the trust the public has in technology, with possible repercussions for the economy.

Of course, we often hear that if you’ve got nothing to hide, you’ve got nothing to fear. But everyone has something to fear from a false positive.

» Listen or download now on the podcasts page »
