
Is e-mail OK for secret stuff?

Image by EFF


Short answer: No. Slightly longer answer: Maybe, but not without additional protection.

E-mail is one of the oldest and most widely used services on the Internet. It was developed in an era when we were comfortably unaware of viruses, worms, spam, e-crime and the NSA, and that is clearly visible in its architecture and blatant lack of security features. Without going deep into technical details, one can conclude that the security of plain e-mail is next to non-existent. The mail standards by themselves do not provide any kind of encryption or verification of the communicating parties’ identities. All of this can be added with additional protection arrangements. But are you doing it, and do you know how?

Here are some points to keep in mind.

  • Hackers or intelligence agencies may tap into the traffic between you and the mail server. This is very serious, as it could even reveal your user ID and password, enabling others to log in to the server and read your stored mail. The threat can be mitigated by ensuring that the network traffic is encrypted. Most mail client programs offer an option to use SSL or TLS encryption for sent and received mail; see the documentation for your mail program or service provider, and see the connection sketch after this list for what this looks like in practice. If you use webmail in your browser, you should make sure the connection is encrypted. See this article for more details. If it turns out that you can’t use encryption with your current service provider, start looking for another one promptly.
  • Your mails are stored at the mail server. Three main factors affect how secure they are there: your own password and how well you keep it secret, the service provider’s security policies, and the legislation in the country where the service provider operates. Most ordinary service providers offer decent protection against hackers and other low-resource parties, but less protection against authorities in their home country.
  • Learn how to recognize phishing attacks, as that is one of the most common ways mail accounts get compromised.
  • There are some mail service providers that focus purely on secrecy and use some kind of encryption to keep messages secret. Hushmail (Canada) and Mega’s (New Zealand) planned service are good examples. Lavabit and Silent Mail used to provide this kind of service too, but they have been shut down under pressure from officials. This recent development shows that services run in the US can’t be considered safe. US authorities can walk in at any time and request your data or force the provider to implement backdoors, no matter what security measures it has in place. And it’s foolish to believe that this is used only against terrorists. It’s enough that a friend of a friend of a friend is targeted for some reason, or that there is some business interest that competes with American interests.
  • The safest way to deal with most of these threats is to use end-to-end encryption. For this you need additional software such as Pretty Good Privacy, a.k.a. PGP. It’s a bit of a hassle, as both parties need to have compatible encryption programs and exchange encryption keys. But once that is done, you have protection for both stored messages and messages in transit. PGP also provides strong authentication of the message sender in addition to secrecy. This is the way to go if you deal with hot stuff frequently; see the encryption sketch after this list.
  • An easier way to transfer secret stuff is to attach encrypted files. You can for example use WinZip or 7-Zip to create encrypted packages (see the archive sketch after this list). Select the AES encryption algorithm (if you have a choice) and make sure you use a hard-to-guess password that is long enough and contains uppercase and lowercase letters, numbers and special characters. Needless to say, do not send the password to the other party by mail. Agreeing on the password is often the weakest link, so pay attention to it. Even phone and SMS may be unsafe if an intelligence agency is interested in you.
  • Remember that traffic metadata may reveal a lot even if you have encrypted the content. That is, information about whom you have communicated with and when. The only real protection against this is to use anonymous mail accounts that can’t be linked to you. This article touches on the topic.
  • Remember that there are always at least two parties in a communication, and no chain is stronger than its weakest link. It doesn’t matter how well you secure your mail if you send a message to someone with sloppy security.
  • Mails are typically stored in plaintext on your own computer if you use a mail client program. Webmail may also leave messages in the browser cache. This means that you need to take care of the computer’s security if you deal with sensitive information. Laptops and mobile devices are especially easy to lose or steal, which can lead to data leaks. Data can also leak through malware that has infected your computer.
  • If you work for a company and use the mail services it provides, then the company should have implemented suitable protection. Most large companies run their own internal mail services and route traffic between sites over encrypted connections. You do not have to arrange anything yourself in this case, but it may be a good idea to check. Just ask the IT guy at the coffee table if the NSA can read your mails and see how he reacts.
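
To make the transport-encryption point above concrete, here is a minimal Python sketch that submits a message over a TLS-protected SMTP connection. The server name, port, addresses and password are placeholders, not real values; check your provider’s documentation for the correct settings.

    import smtplib
    import ssl
    from email.message import EmailMessage

    # Placeholder settings -- replace with your provider's actual values.
    SMTP_SERVER = "mail.example.com"
    SMTP_PORT = 587  # common port for STARTTLS mail submission

    msg = EmailMessage()
    msg["From"] = "you@example.com"
    msg["To"] = "friend@example.com"
    msg["Subject"] = "Hello"
    msg.set_content("This message travels to the server over an encrypted connection.")

    context = ssl.create_default_context()  # verifies the server certificate

    with smtplib.SMTP(SMTP_SERVER, SMTP_PORT) as server:
        server.starttls(context=context)  # upgrade the plain connection to TLS
        server.login("you@example.com", "your-password")
        server.send_message(msg)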
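
For the end-to-end encryption point, here is a rough sketch using the third-party python-gnupg wrapper around GnuPG (my choice for illustration; a mail plugin or the gpg command line works just as well). It assumes GnuPG is installed and that the recipient’s public key is already in your keyring; the address is a placeholder.

    import gnupg  # third-party "python-gnupg" package, a wrapper around GnuPG

    gpg = gnupg.GPG()  # uses your default GnuPG home directory and keyring

    message = "The actual secret goes here."

    # Encrypt for the recipient; only the holder of the matching private key can read it.
    encrypted = gpg.encrypt(message, ["friend@example.com"])

    if encrypted.ok:
        print(str(encrypted))  # ASCII-armored ciphertext, safe to paste into a mail body
    else:
        print("Encryption failed:", encrypted.status)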
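
And for the encrypted-attachment tip, the same thing can be scripted. This sketch uses the third-party pyzipper package (again an assumption for illustration; WinZip or 7-Zip on the desktop achieve the same result) to create an AES-encrypted zip archive that can be attached to a mail. The file name and password are placeholders.

    import pyzipper  # third-party package that adds AES support on top of zipfile

    # Placeholder passphrase -- pick your own long mix of letters, numbers and symbols,
    # and agree on it over a safe channel, never in the same mail as the attachment.
    password = b"Replace-me-with-a-long-passphrase-42!"

    with pyzipper.AESZipFile("secret.zip", "w",
                             compression=pyzipper.ZIP_LZMA,
                             encryption=pyzipper.WZ_AES) as zf:
        zf.setpassword(password)
        zf.write("report.pdf")  # hypothetical file to protect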

Finally: sit down and think about what kind of mail secrecy you need. Imagine that all messages you have sent and received were made public. What harm would that cause? Would it be embarrassing to you or your friends? Would it hurt your career or employer? Would it mean legal problems for you or your associates? (No, you do not need to be a criminal for this to happen. Signing an NDA may be enough.) Would it damage the security of your country? Would it risk your life or the lives of others? And, harder to estimate, could any of this stuff cause you harm if it’s stored for ten or twenty years and then released into a world that is quite different from today’s?

At this point you can go back to the list above and decide if you need to do something to improve your mail security.

Safe surfing,
Micke

More posts from this topic


Why Bring Your Own Device (BYOD)?

Do you ever use your personal phone to make work-related calls? Or send work-related e-mails? Maybe you even use it to work on Google Docs, or access company files remotely? Doing these things basically means you’re implementing a BYOD policy at your work, whether they know it or not. BYOD – that’s bring your own device – isn’t really a new trend, but it is one that’s becoming more widespread.

Statistics from TrackVia suggest that younger generations are embracing BYOD on a massive scale, with nearly 70% of surveyed Millennials admitting that they use their own devices and software, regardless of their employer’s policies on the matter. This is essentially pressuring employers to accept the trend, as the alternative could mean imposing security restrictions that limit how people go about their work. Consequently, Gartner predicts that 38% of businesses will stop providing employees with devices by 2016.

It kind of seems like workers are enforcing the trend, not businesses. But it’s happening because it’s so much easier to work with phones, tablets, and computers that you understand and enjoy. Work becomes easier, productivity goes up, life becomes more satisfying, etc. This might sound like an exaggeration, and maybe it is a little bit. BYOD won’t solve all of life’s problems, but it really takes advantage of the flexibility modern technology offers. That’s what mobility should be about, and that’s what businesses are missing out on when they anchor people to a specific device. BYOD promotes a more “organic” aspect of technology, in that it’s something people have already invested in and want to use, not something that’s being forced upon them.

But of course, there are complications. Recent research confirms that many of these same devices have already had security issues. It’s great to enjoy the benefits of using your own phone or tablet for sending company e-mails, but what happens when things go wrong? You might be turning heads at work by getting work done faster and more efficiently, but don’t expect this to continue if you happen to download some malicious software that infiltrates your company’s networks.

You’re not alone if you want to use your own phone, tablet, or computer for work. And you’re not even alone if you do this without telling your boss. But there’s really no reason not to protect yourself first. You can use security software to reduce the risk of data breaches or malicious infections harming your employer. And there’s even a business-oriented version of F-Secure's popular Freedome VPN called Freedome for Business that can give you additional forms of protection, and can help your company manage an entire fleet of BYOD and company-owned devices.

It’s worth bringing these concerns to your employer if you find yourself using your own devices at the office. After all, statistics prove that you’re not alone in your concerns, and your employer will most likely have to address the issue sooner rather than later if they want the company to use technology wisely.

Apr 17, 2015

POLL – How should we deal with harmful license terms?

We blogged last week, once again, about the fact that people fail to read the license terms they approve when installing software. That post was inspired by a Chrome extension that monetized by collecting and selling data about users’ surfing behavior. People found out about this, got mad and called it spyware, even if the data collection was documented in the privacy policy and they technically had approved it. But this case is not really the point, it’s just an example of a very common business model on the Internet. The real point is what we should think about this business model.

We have been used to free software and services on the net, and there are two major reasons for that. Initially the net was a playground for nerds, and almost all services and programs were developed on a hobby or academic basis. The nerds were happy to give them away and everyone else was happy to get them for free. But businesses ran into a problem when they tried to enter the net: there was no reliable payment method. This created the need for compensation models without money. The net of today is to a significant degree powered by these moneyless business models. Products using them are often called free, which is incorrect, as there usually is some kind of compensation involved. Nowadays we have money-based payment models too, but both our desire to get stuff for free and the moneyless models are still going strong.

So what do these moneyless models really mean? Exposing the user to advertising is the best-known example. This is a pretty open and honest model; advertising can’t be hidden, as the whole point is to make you see it. But it gets complicated when we start talking about targeted advertising. Then someone needs to know who you are and what you like in order to show you relevant ads. This is where it becomes a privacy issue. Ordinary users have no way to verify what data is collected about them and how it is used. Heck, often they don’t even know under what legislation it is stored and whether the vendor respects privacy laws at all.

Is this legal? Basically yes. Anyone is free to make agreements that involve submitting private data. But these scenarios can still be problematic in several ways. They may be in conflict with national consumer protection and privacy laws, but the most common complaint is that they aren’t fair. It’s practically impossible for ordinary users to read and understand many pages of legalese for every installed app. And some vendors exploit this by hiding the shady parts of the agreement deep in the mumbo jumbo. This creates a situation where the agreement may give significant rights to the vendor which the user is totally unaware of.

App permissions are a nice development that attempts to tackle this problem. Modern operating systems for mobile devices require that apps are granted access to the resources they need. This enables the system to know more about what the app is up to and inform the user. But these rights are just becoming a slightly more advanced version of the license terms. People accept them without thinking about what they mean. This may be legal, but is it right?

Personally I think the situation isn’t sustainable and something needs to be done. But what? There are several ways to see this problem. What do you think is the best option?

[polldaddy poll=8801974]

The good news, however, is that you can avoid this problem. You can choose to steer clear of “free” offerings and prefer software and services you pay money for. Their business model is simple and transparent: you get stuff and the vendor gets money. These vendors do not need to hide scary clauses deep in the agreement document and can instead publish privacy principles like this.

Safe surfing,
Micke

Photo by Orin Zebest at Flickr

Apr 15, 2015

Sad figures about how many read the license terms

Do you remember our stunt in London where we offered free WiFi in exchange for your firstborn child? No, we have not collected any kids yet. But it sure was a nice demonstration of how careless we have become with the user terms of software and services. It has been said that “Yes, I have read the license agreement” is the world’s biggest lie. Spot on! This was proven once again by a recent case where a Chrome extension was dragged into the spotlight, accused of spying on users.

Let’s first check the background. The “Webpage Screenshot” extension, which has been pulled from the Chrome Web Store, enabled users to conveniently take screenshots of web page content. It was a very popular extension with over 1.2 million users and tons of good reviews. But the problem is that the vendor seemed to get revenue by uploading user behavior, mainly visited web links, and monetizing that data. The data upload was not very visible in the description, but the extension’s privacy policy did mention it. So the extension seemed to be acting according to what had been documented in the policy.

Some people were upset and felt that they had been spied on. They installed the extension and had no clue that a screenshot utility would upload behavior data. And I can certainly understand why. But on the other hand, they did approve the user terms and conditions when installing. So they have technically given their approval to the data collection.

Did the Webpage Screenshot users know what they signed up for? Let’s find out. It had 1,224,811 users when I collected this data. The question is how many of them had read the terms. You can pause here and think about it if you want to guess. The right answer follows below.

[Image caption: Trying to access Webpage Screenshot gave an error in the Chrome Web Store on April 7th, 2015.]

The privacy policy was provided as a shortened URL, which makes it possible to check its statistics. The link had been opened 146 times during the whole lifetime of the extension, slightly less than a year. Yes, only 146 times for over 1.2 million users! This means that only 0.012% clicked the link! And the number of users who read all the way down to the data collection paragraph is even smaller. At least 99.988% installed without reading the terms. So these figures support the claim that “I have read the terms” is the biggest lie. But they also show that “nobody reads the terms” is slightly incorrect.

Safe surfing,
Micke

PS. Does F-Secure block this kind of program? Typically not. They are usually not technically harmful, the user has installed them deliberately, and we can’t really know what the user expects them to do. Or not to do. So this is not really a malware problem, it’s a fundamental problem in the business models of the Internet.

Images: Screenshots from the Webpage Screenshot homepage and Chrome Web Store

Apr 8, 2015