The sad killing of British soldier Lee Rigby has been in the headlines again after the release of a report on how the authorities handled the case. Publicity was boosted because the committee behind the report thinks Facebook is responsible for the killing. In their view, the social media giant has a clear obligation to identify and report people who plan attacks like this. Just like phone companies report everyone who talks about terrorism, and the postal service sends a copy of every fishy letter to Scotland Yard. I’m sure you get the sarcasm.
What happened is that the British agencies MI5, MI6 and GCHQ had identified the killers, Michael Adebolajo and Michael Adebowale, as persons of interest before the attack. They did, however, fail to investigate properly and apparently made no attempt to obtain the suspects’ communications from Facebook. There would have been several ways to do that: a direct request from the police to Facebook, or the secret intelligence connections between GCHQ and the NSA. Meanwhile, Facebook’s internal controls had flagged the killers’ communications and automatically closed their accounts. Facebook, however, never reported this to the British agencies, which gave the Brits a convenient scapegoat to focus on instead of the fact that they never asked for that data.
OK, so the Brits blame Facebook. Let’s take a closer look at some numbers and at what they are really demanding. Facebook has about 1.6 billion users in total: 1.3 billion monthly active and about 860 million daily active. These users share around 5 billion items and send over 10 billion messages every day. That adds up to a stream of roughly 10 million items per minute, or about 173,000 per second. Quite a haystack to look for terrorists in!
Facebook has some 8,300 employees. If every single one of them, Mark Zuckerberg included, spent a full eight-hour working day monitoring messages and shared items, each would have to process over 60 items per second to keep up. Needless to say, any kind of monitoring must be automated at volumes like this.
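For the skeptical reader, the back-of-the-envelope arithmetic behind those figures is easy to check. This is a minimal sketch using the volume numbers quoted above; the eight-hour working day is an assumption of mine.

```python
# Rough haystack math based on the figures quoted in the text.
ITEMS_SHARED_PER_DAY = 5_000_000_000    # shared items per day
MESSAGES_PER_DAY = 10_000_000_000       # messages sent per day
EMPLOYEES = 8_300                       # total Facebook headcount
WORKDAY_SECONDS = 8 * 3600              # assumption: 8-hour working day

total_per_day = ITEMS_SHARED_PER_DAY + MESSAGES_PER_DAY

per_second = total_per_day / (24 * 3600)   # overall stream rate
per_minute = total_per_day / (24 * 60)
per_employee_per_second = total_per_day / EMPLOYEES / WORKDAY_SECONDS

print(f"{per_second:,.0f} items per second overall")
print(f"{per_minute:,.0f} items per minute overall")
print(f"{per_employee_per_second:.1f} items per second per employee")
```

Running it confirms roughly 173,000 items per second in total, and over 60 items per second for each employee if the whole stream were split across the workforce.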
Facebook does monitor its content automatically. Certain keywords and phrases trigger actions, which can lead to the closure of accounts. This is understandable, as no company wants to be a safe haven for criminals, and many kinds of harmful activity are prohibited in the user agreement. But Facebook is walking a thin line here. Its primary task is not to act as a law enforcement agency but to provide a social media service. It must also be well aware that reporting innocent people to the authorities is highly irresponsible. Commonly accepted principles of justice no longer apply when dealing with potential security threats, and there is no transparency. There are numerous cases where Western authorities have detained and even tortured innocent people, apparently based on very vague indications. Maher Arar’s case is a well-known example.
So the bar for reporting someone must be high. It is easy for an Internet service to throw out a suspected user. Users are, after all, not paying anything, and Facebook has no obligation to keep them. This ensures compliance with the user terms: no criminal activity allowed. But the threshold for reporting someone is naturally a lot higher, especially when the volume forces Facebook to make automated decisions. This is not a sign of carelessness on Facebook’s part; it is because people are by default entitled to communication privacy. It is also a direct consequence of the fact that terrorism suspicions are handled outside the normal justice system in many Western countries. You carry a heavy responsibility if you feed innocent people’s data into a system like that.
Let’s face it: a large number of criminal conversations are going on right now, on Facebook and on other social services. Many terrorists are also on the phone right now, and some are picking up deliveries of items related to planned attacks. Nobody expects the phone company to routinely listen in to identify potential terrorists, and nobody expects the post to check parcels at random. Facebook may not report every flagged conversation, but at least it is doing something to avoid being a safe haven for terrorists. Yet it is the only one of these services the Brits call a safe haven. Not very logical.
The simple reason for this apparent inconsistency is, naturally, the need for a scapegoat. The British agencies failed to investigate, so they need someone else to blame.
But there is a more dangerous aspect hidden here as well. Snowden made us aware of the privacy threats on the Internet. Widespread mass surveillance has so far been largely secret, and even illegal. Pandora’s box is open now, and authorities all over the world are racing to obtain legal rights to mass surveillance before the broad public understands what it would really mean. Putting pressure on Facebook fits that agenda perfectly.
To be fair, one can naturally also ask if Facebook could have done more. A calm and balanced debate about that would be welcome and beneficial. The flagged messages are probably quite a haystack too. To what extent does Facebook review those messages manually, and could that process be improved to catch more potential killers while still avoiding reports on innocent users?
To illustrate that this isn’t as simple as many think: people are asking why Facebook didn’t react to content containing the phrase “let’s kill a soldier”. Well, this blog post contains it too. Am I a killer because of that? Should this post be flagged and handed to MI5?
F-Secure invites our fellows to share their expertise and insights. For more posts by Fennel, click…
March 22, 2018