The three kinds of privacy threats

We talk a lot about privacy on the net nowadays. Some claim that privacy is dead and you just have to cope with it. Others are slightly less pessimistic. But all agree that our new cyber-society will redefine, and reduce, what we once knew as personal privacy.

The privacy threat is not monolithic. There are actually many different kinds of privacy threats, and they are sometimes mixed up. So let’s set this straight and look at the three major classes of privacy.

Peer privacy

This is about controlling what data you share with your family, spouse, friends, colleagues and so on. The tools for doing this are passwords on your web accounts, computers and mobile devices, as well as the privacy settings in Facebook and other social media.

This is the fundamental level of privacy that most of us are already aware of. When this kind of privacy is discussed, it is usually about Facebook privacy settings and how to protect your online accounts against hackers. Yes, protection against hacking is actually a sort of privacy issue too.

Provider privacy

Who knows the most about your life? You, your spouse or Facebook? Chances are that the service providers you use have the most comprehensive profile on you, at least if we only count data that is stored in an organized and searchable way. This profile may be far broader than what you have shared yourself. Google knows what you search for, and your surfing habits are tracked and blended into the profile. The big data companies also try to capture as much as possible of your non-digital life. Credit card data, for example, is low-hanging fruit that tells a lot about us.

But what exactly are they doing with that data? It’s said that if you aren’t paying for the product, then you ARE the product. The multitude of free services on the net is made possible by business models that monetize these huge databases. Marketing on the service provider’s own pages is the first step. Then they sell data to other marketing companies or run embedded marketing. And it gets scary when they start selling data to other kinds of companies too, like someone who is considering employing you or who needs to figure out whether you’re a high-risk insurance customer.

The main problem with provider privacy is that there are no simple tools to guard you. The service provider can use the data in its own systems freely, no matter what kind of password you use to keep outsiders out. The only way to master this is to control what data they get on you, and your own behavior is what matters here. But it is hard to live a normal cyber-life and fight the big-data companies at the same time. I have posted some advice about Facebook and plan to come back to other aspects of the issue in later posts.

Authority privacy

The security and privacy of the Internet is to a large extent enforced by legislation and trust, not by technical means like encryption. But don’t expect the law to protect you if you commit a crime. Authorities can break your privacy if there is a justified need for it. This can be a good compromise that guards both our privacy and our security, as long as the authorities are trustworthy.

But what happens if they aren’t? Transparency and control are, after all, things that make the authorities’ work harder, so they tend to dislike them. And a big threat, terrorism for example, can easily be misused to expand their powers far beyond what’s reasonable. Authority privacy really becomes an issue when the working mode changes from requesting data on selected targets to siphoning up a broad stream of data and storing it for future use. There have been plenty of revelations recently showing that this is exactly what has happened in the US.

This can cause many kinds of problems. It is, first of all, apparent that the data collected by the US is being misused. The European Union and the United Nations are probably not very dangerous terrorist organizations, but they still rank high on the target list. Data collected by authorities is also supposed to be well guarded and used for our own good only. But keep in mind that a single person, Edward Snowden, could walk out with gigabytes of top-secret data. He did the right thing and spoke out when his own ethics couldn’t take it anymore, and that’s why we know about him. But how many secret Snowdens were there before him? More selfish people who traded data for a luxury life in some other country without ever going public. Maybe your data? Are you sure that China, Russia or Iran don’t have some of the data that the US authorities have collected about you?

And let’s finally play a little game to remind ourselves how volatile the world is. Imagine that today’s Internet and computer technology had been available in 1920. The Weimar Republic, as Germany was then known, was blooming in the golden twenties. But Europe was not too steady. The authorities had World War I in fresh memory and wanted to protect their citizens against external threats. They set up a petabyte-scale datacenter and stored all e-mails, Facebook updates, cloud files and so on. This was widely accepted, as some criminal cases had been solved using the data and the police were proud to present them in the media. The twenties passed and the thirties brought depression and new rulers. The datacenter proved to be very useful once again, as it made it possible to track everybody who had been in contact with Jews and communists. It also brought a benefit in the war to come, because many significant services were located in Germany and foreign companies and officials had been careless enough to use them. The world map might look different today if this imaginary scenario had really happened.

No, something like that could never happen today, you might be thinking. Well, I can’t predict the future, but I bet a lot of people were saying the same in the twenties. So never take the current situation for granted. The world will change, often for the better but sometimes for the worse.

So a lack of authority privacy is not something that will hurt you right away in your daily life. Your spouse or friends will not learn embarrassing details about you this way, and it will not drown you in spam. But the long-term effect of the stored data is hard to predict, and there are plenty of plausible harmful scenarios. This really means that proper privacy legislation and trustworthy authorities are of paramount importance for the Internet. A basic set of personal data is of course needed by the authorities to run society’s daily business. But data beyond that should only be collected based on a justified suspicion, and not be kept any longer than needed. There needs to be transparency and control of this handling, to ensure it follows regulations and to keep up people’s trust in the authorities.

So what can you do while waiting for the world to get its act together on authority privacy? Not much, I’m afraid. You could stop using computers, but that’s not convenient. Starting to use encryption extensively is another path, but that’s almost as inconvenient. Technology is not the optimal solution, because this isn’t a technical problem. It’s a political problem, and political problems are supposed to be solved in the voting booth. It also helps to support organizations like the EFF. That said, if you want to keep at least your most sensitive files away from prying eyes, a small encryption example is sketched below.
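As a purely illustrative example, and not a recommendation of any particular tool, here is a minimal sketch of encrypting a file locally before it ever reaches a cloud provider. It assumes the third-party Python "cryptography" package; the file names are placeholders, and in real life the hard part is storing and backing up the key safely.

    # Minimal sketch: encrypt a file locally so only ciphertext leaves your machine.
    # Assumes: pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # keep this key safe; losing it means losing the data
    cipher = Fernet(key)

    with open("diary.txt", "rb") as f:
        plaintext = f.read()

    with open("diary.txt.enc", "wb") as f:
        f.write(cipher.encrypt(plaintext))   # this is what you could upload or share

    # Later, with the same key:
    # plaintext = Fernet(key).decrypt(open("diary.txt.enc", "rb").read())

Whatever tool you pick, remember that encryption only moves the problem: the data is exactly as safe as the key that protects it.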

Safe surfing,
Micke
