The three kinds of privacy threats

We talk a lot about privacy on the net nowadays. Some claim that privacy is dead, and you just have to cope with it. Some are slightly less pessimistic. But all agree that our new cyber-society will redefine and reduce what we once knew as personal privacy.

The privacy threat is not monolithic. There are actually many different kinds of privacy threats, and they are sometimes mixed up. So let's set this straight and look at the three major classes of privacy.

Peer privacy

This is about controlling what data you share with your family, spouse, friends, colleagues, etc. The tools for this are passwords on web accounts, computers and mobile devices, as well as your privacy settings in Facebook and other social media.

This is the fundamental level of privacy that most of us are aware of already. When this kind of privacy is discussed, it is usually about Facebook privacy settings and how to protect your online accounts against hackers. Yes, protection against hacking is actually a sort of privacy issue too.

Provider privacy

Who knows most about your life? You, your spouse or Facebook? Chances are that the service providers you use have the most comprehensive profile on you, at least if we only count data that is stored in an organized and searchable way. This profile may be a lot wider than what you have shared yourself. Google knows what you Google for, and your surfing habits are tracked and blended into the profile. The big data companies also try to include as much as possible of your non-digital life. Credit card data, for example, is low-hanging fruit that tells a lot about us.

But what exactly are they doing with that data? It's said that if you aren't paying for the product, then you ARE the product. The multitude of free services on the net is made possible by business models that utilize these huge databases. Marketing on the service provider's own pages is the first step. Then they sell data to other marketing companies or run embedded marketing. And it gets scary when they start to sell data to other kinds of companies too. Like someone who is considering employing you, or who needs to figure out if you're a high-risk insurance customer.

The main problem with provider privacy is that there aren't any simple tools to guard you. The service provider can use the data in their systems freely, no matter what kind of password you use to keep outsiders out. The only way to master this is to control what data they get on you, and your own behavior is what matters here. But it is hard to live a normal cyber-life and fight the big-data companies at the same time. I have posted some advice about Facebook and plan to come back to other aspects of the issue in later posts.

Authority privacy

The security and privacy of the Internet is to a large extent enforced by legislation and trust, not by technical methods like encryption. But don't expect the law to protect you if you commit a crime. Authorities can break your privacy if there is a justified need for it. This can be a good compromise that guards both our privacy and our security, as long as the authorities are trustworthy.

But what happens if they aren't? Transparency and control are, after all, things that make the work harder for authorities, so they don't like them. And a big threat, like terrorism for example, can easily be misused to expand their powers far beyond what's reasonable. Authority privacy really becomes an issue when the working mode changes from requesting data on selected targets to siphoning up a broad stream of data and storing it for future use. There have been plenty of revelations recently showing that this is exactly what has happened in the US.

There can be many problems because of this. It is, first of all, apparent that data collected by the US is misused. The European Union and the United Nations are probably not very dangerous terrorist organizations, but still they rank high on the target list. Data collected by authorities is also supposed to be guarded well and used for our own good only. But keep in mind that a single person, Edward Snowden, could walk out with gigabytes of top secret data. He did the right thing and spoke out when his own ethics couldn't take it anymore, and that's why we know about him. But how many secret Snowdens have there been before him? More selfish people who exchanged data for a luxurious life in some other country without going public. Maybe your data? Are you sure China, Russia or Iran don't have some of the data that the US authorities have collected about you?

And let's finally play a little game to remind ourselves of how volatile the world is. Imagine that today's Internet and computer technology had been available in 1920. The Weimar Republic, also known as Germany, was blooming in the golden twenties. But Europe was not too steady. The authorities had World War I fresh in memory and wanted to protect the citizens against external threats. They set up a petabyte datacenter and stored all e-mails, Facebook updates, cloud files, etc. This was widely accepted, as some criminal cases had been solved using the data, and the police were proud to present the cases in the media. The twenties passed and the thirties brought depression and new rulers. The datacenter proved to be very useful once again, as it was possible to track everybody who had been in contact with Jews and communists. It also brought a benefit in the war to come, because many significant services were located in Germany and foreign companies and officials had been careless enough to use them. The world map might look different today if this imaginary scenario really had happened.

No, something like that could never happen today, you might be thinking. Well, I can't predict the future, but I bet a lot of people were saying the same in the twenties. So never take the current situation for granted. The world will change, often for the better but sometimes for the worse.

So lack of authority privacy is not something that will hurt you right away in your daily life. Your spouse or friends will not learn embarrassing details about you this way, and it will not drown you in spam. But the long-term effect of the stored data is hard to predict, and there are plenty of plausible harmful scenarios. This really means that proper privacy legislation and trustworthy authorities are of paramount importance for the Internet. A primary set of personal data is of course needed by the authorities to run society's daily business. But data exceeding that should only be collected based on a justified suspicion, and not be kept any longer than needed. There needs to be transparency and control of this handling to ensure it follows regulations, and to keep up people's trust in the authorities.

So what can you do while waiting for the world to get its act together on authority privacy? Not much, I'm afraid. You could stop using a computer, but that's not convenient. Starting to use encryption extensively is another path, but that's almost as inconvenient. Technology is not the optimal solution because this isn't a technical problem. It's a political problem. Political problems are supposed to be solved in the voting booth. It also helps to support organizations like the EFF.

Safe surfing,
Micke

More posts from this topic


Why Cameron hates WhatsApp so much

It's a well-known fact that the UK's Prime Minister David Cameron doesn't care much about people's privacy. Recently he has been driving the so-called Snooper's Charter, which would give authorities expanded surveillance powers and which got additional fuel from the Paris attacks.

It is said that terrorists want to tear down Western society and its lifestyle. And Cameron definitely puts himself in the same camp with statements like this:

"In our country, do we want to allow a means of communication between people which we cannot read? No, we must not." – David Cameron

Note that he didn't say terrorists, he said people. Kudos for the honesty. It's a fact that terrorists blend in with the rest of the population, and any attempt to weaken their security affects all of us. And it should be a no-brainer that a nation where the government can listen in on everybody is bad, at least if you have read Orwell's Nineteen Eighty-Four.

But why does WhatsApp occur over and over as an example of something that gives the snoops grey hair? It's a mainstream instant messenger app that wasn't built for security. There are also similar apps that focus on security and privacy, like Telegram, Signal and Wickr. Why isn't Cameron raging about them? The answer is both simple and very significant, but it may not be obvious at first.

The Internet was insecure by default and you had to use tools to fix that. The pre-Snowden era was the golden age for agencies tapping into the Internet backbone. Everything was open and unencrypted, except the really interesting stuff. Encryption itself became a signal that someone was of interest, and the authorities could use other means to find out what that person was up to.

More and more encryption is being built in by default now that we, thanks to Snowden, know the real state of things. A secured connection between client and server is becoming the norm for communication services. And many services are deploying end-to-end encryption. That means that messages are secured and opened by the communicating devices, not by the servers. Stuff stored on the servers is thus also safe from snoops (a minimal sketch of the idea follows below).

So yes, people with Cameron's mindset have a real problem here. Correctly implemented end-to-end encryption can be next to impossible to break. But there's still one important thing that tapping the wire can reveal: what communication tool you are using. WhatsApp is a mainstream messenger with security. Telegram, Signal and Wickr are security messengers used by only a small group of people with special needs. Traffic from both WhatsApp and Signal, for example, is encrypted. But the fact that you are using Signal is the important point. You stick out, just like encryption users before.

WhatsApp is the prime target of Cameron's wrath mainly because it is showing us how security will be implemented in the future. We are quickly moving towards a net where security is built in. Everyone will get decent security by default, and minding your security will not make you a suspect anymore. And that's great! We all need protection in a world with escalating cyber criminality.

WhatsApp is by no means a perfect security solution. The implementation of end-to-end encryption started in late 2014 and is still far from complete. The handling of metadata about users and communication is not very secure. And there are tricks the wire-snoops can use to map people's networks of contacts. So check it out thoroughly before you start using it for really hot stuff.
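To make the end-to-end principle concrete, here is a minimal Python sketch using the PyNaCl library. It illustrates the concept only; it is not how WhatsApp or Signal actually implement it (their real protocols add key verification, forward secrecy and much more), and the names and message are made up for the example.

```python
# A minimal end-to-end encryption sketch using PyNaCl (libsodium bindings).
# Illustration of the principle only -- not any real messenger's protocol.
from nacl.public import PrivateKey, Box

# Each device generates its own key pair; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Only public keys are exchanged (in a real app, relayed via the service).
alice_box = Box(alice_private, bob_private.public_key)
bob_box = Box(bob_private, alice_private.public_key)

# Alice encrypts on her device. A server relaying this message only ever
# sees ciphertext -- it holds no key that could open it.
ciphertext = alice_box.encrypt(b"Meet at the station at nine.")

# Bob decrypts on his device.
plaintext = bob_box.decrypt(ciphertext)
print(plaintext.decode())
```

Note that even with a setup like this, whoever taps the wire can still see which app you are using and who is talking to whom, which is exactly the metadata point made above.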
Still, WhatsApp seems to be on the path to becoming something unique: one of the first communication solutions that is easy to use, popular and secure by default. Apple's iMessage is another example, so easy that many are using it without knowing it, thinking they are sending SMS messages. But iMessage's security is unfortunately not flawless either.

Safe surfing,
Micke

PS. Yes, weakening security IS a bad idea. An excellent example is the TSA luggage locks, which have a master key that *used to be* secret.

Image by Sam Azgor

November 26, 2015

POLL – Is it OK for security products to collect data from your device?

We have a dilemma, and maybe you want to help us. I have written a lot about privacy and the trust relationship between users and software vendors. Users must trust the vendor not to misuse the data that the software handles, but they have very poor abilities to base that trust on any facts. The vendor's reputation is usually the most tangible thing available.

Vendors can be split into two camps based on their business model. The providers of "free" services, like Facebook and Google, must collect comprehensive data about their users to be able to run targeted marketing. The other camp, where we at F-Secure are, sells products that you pay money for. This camp does not have the need to profile users, so the privacy threats should be smaller.

But is that the whole picture? No, not really. Vendors of paid products do not need to profile users for marketing. But there is still a lot of data on customers' devices that may be relevant. The devices' technical configuration is of course relevant when prioritizing maintenance. And knowing what features actually are used helps plan future releases. And we in the security field have additional interests: the prevalence of both clean and malicious files is important, as well as patterns related to malicious attacks. Just to name a few things.

One of our primary goals is to guard your privacy. But we could, on the other hand, benefit from data on your device. Or to be precise, you could benefit from letting us use that data, as it contributes to better protection overall. So that's our dilemma. How do we utilize this data in a way that won't put your privacy in jeopardy? And how do we maintain trust? How do we convince you that the data we collect really is used to improve your protection?

Our policy for this is outlined here, and the anti-malware product's data transfer is documented in detail in this document. In short, we only upload data necessary to produce the service, we focus on technical data and won't take personal data, we use hashing of the data when feasible, and we anonymize data so we can't tell whom it came from (a simplified sketch of what hashing before upload can look like follows at the end of this post).

The trend is clearly towards lighter devices that rely more on cloud services. Our answer to that is Security Cloud. It enables devices to off-load tasks to the cloud and benefit from data collected from the whole community. But to keep up with the threats we must develop Security Cloud constantly. And that also means that we will need more info about what happens on your device.

That's why I would like to check what your opinion on data upload is. How do you feel about Security Cloud using data from your device to improve the overall security for all users? Do you trust us when we say that we apply strict rules to the data upload to guard your privacy?

Safe surfing,
Micke

Image by balticservers.com
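As a concrete illustration of the "hashing when feasible" idea, here is a simplified Python sketch. It is not F-Secure's actual Security Cloud implementation, just the general principle: the service can learn whether a file is already known to be clean or malicious without receiving the file itself, its name, or anything tied to a specific user. The file path is made up for the example.

```python
# Simplified illustration of hashing before upload -- not F-Secure's actual
# implementation. Only a fixed-length fingerprint of the file contents would
# be sent for a reputation lookup, never the file itself.
import hashlib

def file_fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a file's contents, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical local file; in practice this would be a scanned object.
print(file_fingerprint("example.bin"))
```

The design choice is the same one discussed above: upload the minimum needed to produce the service, and prefer derived, anonymized values over raw personal data.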

November 24, 2015

A temporary profile picture but permanent app permissions

We are all sad about what happened in Paris last Friday. It's said that the terrorist attacks have changed the world. That is no doubt true, and one aspect of that is how social media becomes more important in situations like this.

Facebook has deployed two functions that help people deal with this kind of crisis. The Safety Check feature collects info about people in the area of a disaster, and whether they are safe or not. This feature was initially created for natural disasters. Facebook received criticism for using it in Paris but not for the Beirut bombings a day earlier. It turned out that their explanation is quite good: Beirut made them consider whether the feature should be used for terror attacks as well, and they were ready to change the policy when Paris happened.

The other feature lets you use a temporary profile picture with some appropriate overlay, the tricolor in this case. This is a nice and easy way to show sympathy. And it became popular very quickly, at least among my friends. The downside, however, is that it became so popular that those without a tricolor started to stick out. Some people started asking them why they weren't supporting the victims in Paris. The whole thing has lost part of its meaning when it goes that far. We can't know anymore who genuinely supports France and who changed the picture because of the social pressure.

I changed my picture too. And it was interesting to see how the feature was implemented. The Facebook app for iOS 9 launched a wizard that let me make a picture with the tricolor overlay, either by snapping a new selfie or by using one of my previous profile pictures. I guess the latter is what most people want to do. But Facebook's wizard requires permission to use the camera and refuses to start until the user has given that permission, even if you just want to modify an existing picture.

Even more spooky: the wizard also asked for permission to use the microphone when I first ran it. That is, needless to say, totally unnecessary when creating a profile picture. And Facebook has been accused of misusing audio data. It's doubtful if they really do, but the only sure thing is that they don't if you deny Facebook microphone access. That was probably a temporary glitch, though; I was not able to reproduce the mic request when resetting everything and running the wizard again.

Your new profile picture may be temporary, but any rights you grant the Facebook app are permanent. I'm not saying that this is a sinister plot to get more data about you, it may be just sloppy programming. But it is anyway an excellent reminder of how important app permissions are. We should learn to become more critical when granting, or denying, rights like this. This is the case for any app, but especially Facebook, as its whole business model is based on scooping up data about us users.

Time for an app permission check. On your iOS device, go to Settings and then Privacy. Here you can see the categories of info that an app can request. Go through them and think critically about whether a certain app really needs its permissions to provide value to you. Check Facebook's camera and microphone permissions if you have used the temporary profile picture feature. And one last thing: make it a habit to check the privacy settings now and then.

[Image: This is how far you get unless you agree to grant Facebook camera access.]

[Image: The Settings, Privacy page. Under each category you find the apps that have requested access, and you can select whether the request is granted or denied.]

Safe surfing,
Micke

PS. The temporary profile picture function is, by the way, simpler in Facebook's web interface. You just see your current profile picture with the overlay, and you can pan and zoom before saving. I like that approach much more.

Photo by Markus Nikander and iPhone screen captures

November 16, 2015