There’s one thing I can say for sure about Facebook’s new Timeline: It’s better. I’m just not sure whom it’s better for.
It’s probably better for app makers and brand pages that benefit from the credibility they get from prominent mentions in your Timeline. And it’s probably also better for people who love to use Facebook to tell the story of their lives. But is it better for you? You’ll have to decide.
The idea behind Timeline is: “Tell your life story with a new kind of profile.” Knowing that Facebook’s goal is for you to share your story with the widest possible audience, you should take a few steps to make sure you are only sharing the chapters of your life you really want to.
1. Get your friends’ settings right and audit your friends.
Whenever there are big changes on Facebook, outrage follows. Then it fades and Facebook grows. You can expect a similar cycle as Timeline rolls out. The Timeline is designed to tell your story through the content you’ve posted on Facebook. Some will find that unsettling.
The fact that Facebook built a setting that automatically makes all of your past posts “Friends Only,” along with the slow rollout of Timeline, indicates that Facebook is anticipating some backlash. Facebook has made the basic friend settings simple, and you can now easily change the settings on any old post.
If you’re a “Friends Only” user like me, I recommend that you take advantage of the “reset button” and set all of your old posts to “Friends Only.” To do this, go to the arrow in your upper right corner > Privacy Settings > Under “Limit the Audience for Past Posts” click “Manage Past Post Visibility.” If you use this setting, you can’t undo it. You can edit each post’s settings individually, but you can’t change them all back at once. You can always make any post available only to you by selecting the “Custom” setting.
2. Check how you are tagged.
Anyone can now tag anyone on Facebook. And if a friend tags you in something, it could end up on your profile. You can always remove a tag, but unless you have your settings right, a joke picture could pop up right at the moment a potential employer happens to click on your Timeline.
Go to the arrow in your upper right corner > Privacy Settings > Under “How Tags Work” click “Edit Settings.”
Here are my recommendations for tagging:
I have Timeline Review and Tag Review on for maximum Timeline control. Timeline Review lets me approve anything tagged with my name before it shows up on my profile. Tag Review lets me approve tags on my content. I also have Maximum Timeline Visibility set to “Custom” > “Only Me” for an extra layer of protection. I don’t let Facebook recognize me in photos, nor do I let friends check me into Places.
This is about as locked down as you can get. But I’ve found erring on the side of privacy has never been a problem for me on Facebook.
3. Edit your apps.
An app can write directly to your “wall”/timeline if you’ve given it permission to do so. The fact is, you probably don’t remember whether you have. And now apps play a more prominent role in your profile. So you should go through your approved apps and delete any that you a) aren’t using or b) would never want to see show up in your profile.
Go to the arrow in your upper right corner > Privacy Settings > Under “Apps & Websites” click “Edit Settings” > Under “Apps You Use” click “Edit Settings” > Click the light blue “x” next to any app you want to get rid of. From now on, whenever you use an app, actually read the permissions the app wants. If the app can write to your profile, your activity will become visible in your timeline.
Extra Tip: Turn off Instant Personalization.
Go to the arrow in your upper right corner > Privacy Settings > Under “Apps & Websites” click “Edit Settings” > Under “Instant Personalization” click “Edit Settings” > Uncheck the box that says “Enable instant personalization on partner websites.”
Facebook has been automatically sharing your public Facebook data with third-party partners through apps for over a year now. And now that apps will be posting to your timeline, activity on sites you never meant to make public may end up showing up there. This is being very cautious, but it could help you avoid some unintended consequences.
The fact is, we can’t be fully aware of the implications of Timeline until it’s widely implemented. When will that be?
On Quora, a Facebook employee speculated that it would be before the end of October. (If you’re dying to get the profile, here’s one way people have been able to get it.) The one thing you have to understand about the Facebook Timeline is that it can make your life feel much more public. More than LinkedIn, Twitter or almost any other site, Facebook has the content to tell the story of our lives over the past few years.
Going forward, I believe Facebook hopes that you will embrace it as the channel for your lifecast and mindcast in a public way. And if you do, Facebook will hit the billion-user mark before the end of 2011.
Many of you have seen them. And some of you have no doubt been victims too. Malware spreading through social media sites like Facebook is definitely something you should look out for.

You know those posts. You raise your eyebrows when old Aunt Sophie suddenly shares a pornographic video with all her friends. You had no idea she was into that kind of stuff! Well, she isn’t (necessarily). She just got infected with a special kind of malware called a social bot.

So what’s going on here? You might feel tempted to check what “Aunt Sophie” really shared with you. But unfortunately your computer isn’t set up properly to watch the video. It lacks some kind of video thingy that needs to be installed. Luckily it’s easy to fix: you just click the provided link and approve the installation. And you are ready to dive into Aunt Sophie’s stuff. Yes, you probably already figured out where this is going.

Social bots are excellent examples of how technology and social tricks can work together. The actual malware is naturally the “video thingy” people are tricked into installing. To be more precise, it’s usually an extension to your browser, often masquerading as a video codec, that is, a module that understands and can show a certain video format. Once installed, these extensions run in your browser with access to your social media accounts. And your friends start to receive juicy videos from you.

There are several significant social engineering tricks involved here. First, you are presented with content that people want to see. Juicy things like porn or exposed celebrities always work well, but it may actually be anything from breaking news to cute animals. The content also feels safer and more trustworthy because it seems to come from one of your friends. The final trick is to disguise the malware as a necessary system component. Well, when you want to see the video, nothing stops you from viewing it. Right?
It’s so easy to tell people to never accept this kind of additional software. But in reality it’s harder than that. Our technological environment is very heterogeneous, and there’s content that devices can’t display out of the box. So we need to install some extensions. Not to mention the numerous video formats out there. Hand on heart, how many of you can list the video formats your computer currently supports? And which significant formats aren’t supported?

A more practical piece of advice is to only approve extensions when viewing content from a reliable source. And we have learned that Facebook isn’t one. On the other hand, you might open a video on the site of a newspaper or magazine that you frequently visit, and this triggers a request to install a module. This is usually safe, because you initiated the video viewing from a service that shouldn’t have malicious intent.

But what if you already are “Aunt Sophie” and people are calling about your strange posts? Good first aid is going to our Online Scanner. That’s a quick way to check your system for malware. A more sustainable solution is our F-Secure SAFE.

Ok, finally the poll. How do you react when suddenly told that you need to download and install software to view a video? Be honest, how did you deal with this before reading this blog?

[polldaddy poll=9394383]

Safe surfing,
Micke

Image: Facebook.com screenshot
Today is Safer Internet Day – a day to talk about what kind of place the Internet is becoming for kids, and what people can do to make it a safe place for kids and teens to enjoy.

We talk a lot about various online threats on this blog. After all, we’re a cyber security company, and it’s our job to secure devices and networks to keep people protected from more than just malware. But protecting kids and protecting adults are different ballparks. Kids have different needs, and as F-Secure Researcher Mikael Albrecht has pointed out, this isn’t always recognized by software developers or device manufacturers.

So how does this actually impact kids? Well, it means parents can’t count on the devices and services kids use to be completely age appropriate. Or completely safe.

Social media is a perfect example. Micke has written in the past that social media is basically designed for adults, making any sort of child protection feature more of an afterthought than a focus. Things like age restrictions are easy for kids to work around. So it’s not difficult for kids to hop on Facebook or Twitter and start social networking, just like their parents or older siblings. But these services weren’t designed with kids in mind.

So where does that leave parents? Parental controls are great tools that parents can use to monitor, and to a certain extent limit, what kids can do online. But they’re not perfect, particularly considering the popularity of mobile devices amongst kids. Regulating content in desktop browsers and in mobile apps are two different things, and while there are a lot of benefits to using mobile apps instead of web browsers, it does make using special software to regulate content much more difficult.

The answer to challenges like these is the less technical approach – talking to kids. There are some great tips for parents on F-Secure’s Digital Parenting web page, with talking points, guidelines, and potential risks that parents should learn more about.
That might seem like a bit of a challenge to parents. F-Secure’s Chief Research Officer Mikko Hypponen has pointed out that today’s kids have never experienced a world without the Internet. It’s as common as electricity for them. But the nice thing about this approach is that parents can do this just by spending time with kids and learning about the things they like to do online. So if you don’t know what your kids are up to this Safer Internet Day, why not enjoy the day with your kids (or niece/nephew, or even a kid you might be babysitting) by talking over what they like to do online, and how they can enjoy doing it safely.
The European Union is preparing a new data protection package. It is making headlines because there are plans to raise the age limit for digital consent from 13 to 16 years. This has sometimes been described as the age limit for joining social media. To be precise, member states could choose their age limit within this range. Younger kids would need parental consent to create an account on social media and similar networks.

We can probably agree that minors’ use of the internet can be problematic. But is an age limit really the right way to go?

It’s easy to think of potential problems when children and teenagers start using social media. The platforms are powerful communication tools, for good and bad. Cyberbullying. Grooming. Inappropriate content. Unwanted marketing. Getting addicted. Stealing time and attention from homework or other hobbies. And perhaps most important: social media often becomes a sphere of freedom, a world totally insulated from the parents and their silly rules. On social media you can choose your contacts. There’s no function that enables parents to check what the kids are doing, unless the kids accept their parents as friends. And the parents are often on totally different services. Facebook is quickly becoming the boring place where mom and granny hang out. Youngsters tend to be on Instagram, WhatsApp, Snapchat, Periscope or whatnot instead.

But is restricting their access to social media the right thing to do? What do we achieve by requiring parental consent before they sign up? This would mean that parents, in theory, have a chance to prevent their children from being on social media. And that’s good, right?

Well, this logic is flawed in several ways. First, it’s easy to lie about your age. Social media services in general have very poor authentication mechanisms for people signing up. They don’t verify your true identity, and can’t verify your age either. Kids learn very quickly that signing up just requires some simple math.
Subtract 16, or whatever, from the current year when asked for your year of birth.

The other problem is that parental consent requirements don’t give parents a real choice. Electronic communication is becoming a cornerstone of how we interact with other people. It can’t be stressed enough how important it is for our children to learn the rules and skills of this new world. Preventing kids from participating in the community where all their friends are could isolate them, and potentially cause more harm than the dark side of social media.

What we need isn’t age limits and parental consent. It’s better control of the content our children are dealing with, and tools for parents to follow what they are doing. Social media is currently designed for adults, and everyone has tools to protect their privacy. But the same tools become a problem when children join, as they also prevent parents from keeping an eye on their offspring.

Parental consent becomes significant when the social media platforms start to recognize parent-child relationships. New accounts for children under a specified age could be mandatorily linked to an adult’s account. The adult would have some level of visibility into what the child is doing, but maybe not full visibility. Metadata, like whom the child is communicating with, would be a good start. Remember that children deserve a certain level of privacy too. Parents could of course still neglect their responsibilities, but they would at least have a tool if they want to keep an eye on how their kids are doing online.

And then we still have the problem with the lack of age verification. All this is naturally in vain if the kids can sign up as adults. On top of that, children’s social media preferences are very volatile. They do not stay loyally on one service all the time. Having proper parent-child relationships in one service is not enough; it needs to be the norm on all services.
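To make it concrete just how little a self-reported birth year proves, here is a minimal sketch of the "simple math" in Python. The function name, the age value, and the whole scenario are illustrative assumptions for this post, not anything any real sign-up form actually uses:

```python
from datetime import date

def claimed_birth_year(desired_age: int) -> int:
    """Return the birth year to enter on a sign-up form in order to
    appear to be `desired_age` years old today. This one subtraction
    is all it takes to get past a self-reported age gate."""
    return date.today().year - desired_age

# A 13-year-old facing a 16-year age limit just claims to be 16:
year_to_enter = claimed_birth_year(16)
```

The point of the sketch is the argument in the text: as long as the service only asks for a year of birth and never verifies it, the "check" is one line of arithmetic any child can do.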
So we are still very far from a social media world that really takes parents’ and children’s needs into account. Just demanding parental consent when kids sign up does not really do much good. It’s of course nice to see the EU take some baby steps towards a safer net for our children. But this is unfortunately an area where baby steps aren’t enough. We need a couple of giant leaps as soon as possible.

Safe surfing,
Micke

Image by skyseeker