How We Give Away Our Privacy (And How to Take It Back)

Deciding what information should be public isn’t just important for your reputation and mental health. Keeping your account numbers and identifying information secret can help prevent financial fraud, protecting both you and your property. And in a country in the midst of turmoil, like Syria, your privacy can be a matter of life and death.

But most of us are willing to trade a little of our privacy for a service we like, or a little company.

Thorin Klosowski recently published a piece on Lifehacker called “Living in Public: What Happens When You Throw Privacy Out the Window”. In it, he describes how he, a very private person, decided to live his life in public.

For three weeks, Thorin shared his location through location-based social networks wherever he went. He made all of his activity on his favorite apps public. He allowed all of his Internet activity to be tracked by anyone who wanted to track it.

After three weeks, he asked a stranger to look at all of his activity and tell him what she thought. What she said, and what Google thought about him (see what Google thinks about you here), turned out to be pretty accurate.

The reason social networks are addictive, I’d argue, is that they are pretty good representations of who we are in real life. The problem is that as we share, we can create evidence online that looks bad out of context, like those party pictures. The old notion of a private self that your boss never sees is transforming drastically every day. Some of that is beyond your control. But there is a lot you can do.

The first thing to do is to think about the tools that may give away your privacy.

Here are a few:

  • Social networks—Facebook, Twitter, LinkedIn. Do we need to mention Google+?
  • Location-sharing services like Foursquare, or posting pictures that include location data.
  • Browsing the Internet without turning off tracking tools.
  • Allowing services like Google to track your history.
  • Apps that encourage social sharing.

How can you limit the privacy you give away?

  • Master the privacy settings on every social network you use.
    You always need to keep in mind whom you’re sharing with. And it’s always best to share under the premise that anyone in the world could come across your post. Facebook’s settings may be labyrinthine, but most settings boil down to Twitter’s two basic choices: public or locked down. You should also enable two-step verification wherever it’s available, as it is for Google.
  • Avoid using public computers or open Wi-Fi networks when you don’t have a VPN running.
  • Use strong passwords your friends can’t guess.
  • Use tools that stop your web activity from being tracked. Klosowski has a good list of them in his post under the heading “Letting Websites Track and Collect All the Data They Want”.
  • Avoid apps that encourage social sharing, and turn off location data in your images (see the sketch after this list).
  • Keep ALL of your devices patched and protected with the latest system and security software. Our free Health Check makes that easy for your PC.
  • Always think before you click publish, post or check-in.
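
Location data deserves a concrete illustration. Photos taken on a phone usually carry EXIF metadata, which can include the GPS coordinates of where the shot was taken, so sharing the picture can share your location too. Below is a minimal sketch of stripping that metadata before posting; it assumes Python with the third-party Pillow imaging library installed, and the file names are just placeholders.

```python
# Rough sketch (not an F-Secure tool): re-save a photo with pixel data only,
# dropping EXIF metadata such as GPS coordinates before the image is shared.
# Assumes the Pillow library (pip install Pillow); file names are placeholders.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Copy only the pixels into a fresh image, leaving all EXIF tags behind."""
    with Image.open(src_path) as original:
        clean = Image.new(original.mode, original.size)
        clean.putdata(list(original.getdata()))  # pixels only, no metadata
        clean.save(dst_path)

strip_metadata("party-photo.jpg", "party-photo-clean.jpg")
```

The simpler fix for photos you haven’t taken yet is to switch off location tagging in your phone’s camera settings.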

For every free service we use, there is a cost. On the Internet that cost is usually privacy.

You can’t always expect people to respect your privacy. But you can always respect your own.

What tools am I missing that give away or protect your privacy?

Cheers,
Jason

(CC image by Lance Nielsen.)

More posts from this topic


How far are you ready to go to see a juicy video? [POLL]

Many of you have seen them. And some of you have no doubt been victims too. Malware spreading through social media sites like Facebook is definitely something you should look out for.

You know those posts. You raise your eyebrows when old Aunt Sophie suddenly shares a pornographic video with all her friends. You had no idea she was into that kind of stuff! Well, she isn’t (necessarily). She has just been infected with a special kind of malware called a social bot.

So what’s going on here? You might feel tempted to check what “Aunt Sophie” really shared with you. But unfortunately your computer isn’t set up properly to watch the video. It lacks some kind of video thingy that needs to be installed. Luckily that’s easy to fix: you just click the provided link and approve the installation. And you are ready to dive into Aunt Sophie’s stuff.

Yes, you probably already figured out where this is going. The social bots are excellent examples of how technology and social tricks can work together. The actual malware is naturally the “video thingy” that people are tricked into installing. To be more precise, it’s usually an extension to your browser, often masquerading as a video codec, that is, a module that understands and can show a certain video format. Once installed, these extensions run in your browser with access to your social media accounts. And your friends start to receive juicy videos from you.

There are several significant social engineering tricks involved here. First, you are presented with content that people want to see. Juicy things like porn or exposed celebrities always work well, but it may actually be anything, from breaking news to cute animals. The content also feels safer and more trustworthy because it seems to come from one of your friends. The final trick is to masquerade the malware as a necessary system component. Well, when you want to see the video, then nothing stops you from viewing it. Right?

It’s easy to tell people to never accept this kind of additional software. But in reality it’s harder than that. Our technological environment is very heterogeneous, and there’s content that devices can’t display out of the box, so we need to install some extensions. Not to mention the numerous video formats out there. Hand on heart, how many of you can list the video formats your computer currently supports? And which significant formats aren’t supported?

A more practical piece of advice is to only approve extensions when viewing content from a reliable source. And we have learned that Facebook isn’t one. On the other hand, you might open a video on a newspaper or magazine site that you frequently visit, and this triggers a request to install a module. That is usually safe because you initiated the video viewing from a service that shouldn’t have malicious intent.

But what if you already are “Aunt Sophie” and people are calling about your strange posts? Good first aid is going to our Online Scanner. That’s a quick way to check your system for malware. A more sustainable solution is our F-Secure SAFE.

OK, finally the poll. How do you react when suddenly told that you need to download and install software to view a video? Be honest: how did you deal with this before reading this blog?

Safe surfing,
Micke

Image: Facebook.com screenshot

April 22, 2016
BY Micke

What are your kids doing for Safer Internet Day?

Today is Safer Internet Day – a day to talk about what kind of place the Internet is becoming for kids, and what people can do to make it a safe place for kids and teens to enjoy.

We talk a lot about various online threats on this blog. After all, we’re a cyber security company, and it’s our job to secure devices and networks to keep people protected from more than just malware. But protecting kids and protecting adults are two different ballparks. Kids have different needs, and as F-Secure Researcher Mikael Albrecht has pointed out, this isn’t always recognized by software developers or device manufacturers.

So how does this actually impact kids? Well, it means parents can’t count on the devices and services kids use to be completely age appropriate. Or completely safe.

Social media is a perfect example. Micke has written in the past that social media is basically designed for adults, making any sort of child protection features more of an afterthought than a focus. Things like age restrictions are easy for kids to work around. So it’s not difficult for kids to hop on Facebook or Twitter and start social networking, just like their parents or older siblings. But these services aren’t designed with kids in mind.

So where does that leave parents? Parental controls are great tools that parents can use to monitor, and to a certain extent limit, what kids can do online. But they’re not perfect, particularly considering the popularity of mobile devices amongst kids. Regulating content on desktop browsers and in mobile apps are two different things, and while there are a lot of benefits to using mobile apps instead of web browsers, it does make using special software to regulate content much more difficult.

The answer to challenges like these is the less technical approach: talking to kids. There are some great tips for parents on F-Secure’s Digital Parenting web page, with talking points, guidelines, and potential risks that parents should learn more about.

That might seem like a bit of a challenge to parents. F-Secure’s Chief Research Officer Mikko Hypponen has pointed out that today’s kids have never experienced a world without the Internet. It’s as common as electricity for them. But the nice thing about this approach is that parents can do it just by spending time with kids and learning about the things they like to do online.

So if you don’t know what your kids are up to this Safer Internet Day, why not enjoy the day with your kids (or a niece or nephew, or even a kid you might be babysitting) by talking over what they like to do online, and how they can enjoy doing it safely.

February 9, 2016
BY 

We need more than just age limits to protect our children in social media

The European Union is preparing a new data protection package. It is making headlines because there are plans to raise the age limit for digital consent from 13 to 16 years. This has sometimes been described as the age limit for joining social media. To be precise, member states could choose their own age limit within this range. Younger kids would need parental consent to create an account on social media and similar networks.

We can probably agree that minors’ use of the Internet can be problematic. But is an age limit really the right way to go?

It’s easy to think of potential problems when children and teenagers start using social media. The platforms are powerful communication tools, for good and bad. Cyberbullying. Grooming. Inappropriate content. Unwanted marketing. Getting addicted. Stealing time and attention from homework or other hobbies. And perhaps most important: social media often becomes a sphere of freedom, a world totally insulated from the parents and their silly rules. In social media you can choose your contacts. There’s no function that enables parents to check what the kids are doing, unless the kids accept their parents as friends. And the parents are often on totally different services. Facebook is quickly becoming the boring place where mom and granny hang out. Youngsters tend to be on Instagram, WhatsApp, Snapchat, Periscope or whatnot instead.

But is restricting their access to social media the right thing to do? What do we achieve by requiring parental consent before they sign up? It would mean that parents, in theory, have a chance to prevent their children from being on social media. And that’s good, right? Well, that logic is flawed in several ways.

First, it’s easy to lie about your age. Social media services in general have very poor authentication mechanisms for people signing up. They are not verifying your true identity, and can’t verify your age either. Kids learn very quickly that signing up just requires some simple math: subtract 16, or whatever the limit is, from the current year when asked for your year of birth.

The other problem is that parental consent requirements don’t give parents a real choice. Electronic communication is becoming a cornerstone of how we interact with other people. It can’t be stressed enough how important it is for our children to learn the rules and skills of this new world. Preventing kids from participating in the community where all their friends are could isolate them, and potentially cause more harm than the dark side of social media.

What we need isn’t age limits and parental consent. It’s better control of the content our children are dealing with, and tools for parents to follow what they are doing. Social media is currently designed for adults, and everyone has tools to protect their privacy. But those same tools become a problem when children join, as they also prevent parents from keeping an eye on their offspring.

Parental consent becomes meaningful when the social media platforms start to recognize parent-child relationships. New accounts for children under a specified age could be required to be linked to an adult’s account. The adult would have some level of visibility into what the child is doing, but maybe not full visibility. Metadata, like whom the child is communicating with, would be a good start. Remember that children deserve a certain level of privacy too. Parents could of course still neglect their responsibilities, but they would at least have a tool if they want to keep an eye on how their kids are doing online.

And then we still have the problem with the lack of age verification. All this is naturally in vain if the kids can sign up as adults. On top of that, children’s social media preferences are very volatile. They do not stay loyal to one service all the time. Having proper parent-child relationships in one service is not enough; it needs to be the norm on all services.

So we are still very far from a social media world that really takes parents’ and children’s needs into account. Just demanding parental consent when kids are signing up does not really do much good. It’s of course nice to see the EU take some baby steps towards a safer net for our children. But this is unfortunately an area where baby steps aren’t enough. We need a couple of giant leaps as soon as possible.

Safe surfing,
Micke

Image by skyseeker

December 17, 2015
BY Micke