Who Owns the Content You Post on Facebook?

The answer to this question appears to be simple. You own your content. This is true on Facebook and most social media sites.

Here’s how Facebook describes the ownership of your content in its terms and conditions:
You own all of the content and information you post on Facebook, and you can control how it is shared through your privacy and application settings.

So, by posting content or information, you grant Facebook a license to use your content in accordance with your privacy and application settings. That means unless you’ve set a piece of content’s visibility to “Custom” > “Only Me”, someone can see it.

If someone can see your content on Facebook, they can likely copy it or capture it in some way. You’ll still own the content, but you may have lost control of it. In fact, even if you delete your pictures on Facebook, they still may not be technically deleted.

Your name and image are even set up by default to show up in Facebook’s advertising shown to your friends. (To adjust this, go to “Account Settings”> “Facebook Ads” > “Edit social ad setting”.)

This is why a good rule of thumb is: Don’t post anything on Facebook you would not like to see go public, even if your settings are completely locked down.

(And if you’re in the US, you should take extra care not to post anything negative about your employer using your employer’s computers or mobile devices.)

The issue of who controls our content is likely to become more controversial as the cloud makes it possible to store, sync and share content from any PC or device any time. People want access and the ability to share. But they want to do so without giving up control of their irreplaceable images, videos and documents.

This is why F-Secure has created Content Anywhere.

Content Anywhere is a safe personal cloud service, provided through operators and ISPs, that makes it easy to access your content wherever you are. With Content Anywhere, Internet providers can move beyond being a ‘data pipeline’ to become the ‘king of the content cloud.’

We’re looking forward to sharing more about our move into safe cloud technology, which we believe is the future of sharing and enjoying your content in a smart and secure way.

Cheers,

Jason

CC image credit: jurvetson

More posts from this topic


How far are you ready to go to see a juicy video? [POLL]

Many of you have seen them. And some of you have no doubt been victims too. Malware spreading through social media sites like Facebook is definitely something you should look out for.

You know those posts. You raise your eyebrows when old Aunt Sophie suddenly shares a pornographic video with all her friends. You had no idea she was into that kind of stuff! Well, she isn’t (necessarily). She’s just been infected with a special kind of malware called a social bot.

So what’s going on here? You might feel tempted to check what “Aunt Sophie” really shared with you. But unfortunately your computer isn’t set up properly to watch the video. It lacks some kind of video thingy that needs to be installed. Luckily it is easy to fix: you just click the provided link and approve the installation. And you are ready to dive into Aunt Sophie’s stuff. Yes, you probably already figured out where this is going.

The social bots are excellent examples of how technology and social tricks can work together. The actual malware is naturally the “video thingy” that people are tricked into installing. To be more precise, it’s usually an extension to your browser, often masquerading as a video codec, that is, a module that understands and can display a certain video format. Once installed, these extensions run in your browser with access to your social media accounts. And your friends start to receive juicy videos from you.

There are several significant social engineering tricks involved here. First you are presented with content that people want to see. Juicy things like porn or exposed celebrities always work well, but it may actually be anything, from breaking news to cute animals. The content also feels safer and more trustworthy because it seems to come from one of your friends. The final trick is to masquerade the malware as a necessary system component. Well, when you want to see the video, then nothing stops you from viewing it. Right?
It’s easy to tell people to never accept this kind of additional software. But in reality it’s harder than that. Our technological environment is very heterogeneous, and there’s content that devices can’t display out of the box. So we sometimes need to install extensions. Not to mention the numerous video formats out there. Hand on heart: how many of you can list the video formats your computer currently supports? And which significant formats aren’t supported?

A more practical piece of advice is to only approve extensions when viewing content from a reliable source. And we have learned that Facebook isn’t one. On the other hand, you might open a video on the site of a newspaper or magazine that you frequently visit, and this triggers a request to install a module. This is usually safe, because you initiated the video viewing from a service that shouldn’t have malicious intent.

But what if you already are “Aunt Sophie” and people are calling about your strange posts? Good first aid is our Online Scanner. That’s a quick way to check your system for malware. A more sustainable solution is our F-Secure SAFE.

OK, finally the poll. How do you react when suddenly told that you need to download and install software to view a video? Be honest: how did you deal with this before reading this blog?

Safe surfing,
Micke

Image: Facebook.com screenshot

April 22, 2016

What are your kids doing for Safer Internet Day?

Today is Safer Internet Day – a day to talk about what kind of place the Internet is becoming for kids, and what people can do to make it a safe place for kids and teens to enjoy.

We talk a lot about various online threats on this blog. After all, we’re a cyber security company, and it’s our job to secure devices and networks to keep people protected from more than just malware. But protecting kids and protecting adults are different ballparks. Kids have different needs, and as F-Secure Researcher Mikael Albrecht has pointed out, this isn’t always recognized by software developers or device manufacturers.

So how does this actually impact kids? Well, it means parents can’t count on the devices and services kids use to be completely age appropriate. Or completely safe.

Social media is a perfect example. Micke has written in the past that social media is basically designed for adults, making any sort of child protection feature more of an afterthought than a focus. Things like age restrictions are easy for kids to work around. So it’s not difficult for kids to hop on Facebook or Twitter and start social networking, just like their parents or older siblings. But these services aren't designed for kids.

So where does that leave parents? Parental controls are great tools that parents can use to monitor, and to a certain extent limit, what kids can do online. But they’re not perfect, particularly considering the popularity of mobile devices amongst kids. Regulating content in desktop browsers and in mobile apps are two different things, and while there are a lot of benefits to using mobile apps instead of web browsers, doing so makes using special software to regulate content much more difficult.

The answer to challenges like these is the less technical approach – talking to kids. There are some great tips for parents on F-Secure’s Digital Parenting web page, with talking points, guidelines, and potential risks that parents should learn more about.
That might seem like a bit of a challenge to parents. F-Secure’s Chief Research Officer Mikko Hypponen has pointed out that today’s kids have never experienced a world without the Internet. It’s as common as electricity for them. But the nice thing about this approach is that parents can do this just by spending time with kids and learning about the things they like to do online. So if you don’t know what your kids are up to this Safer Internet Day, why not enjoy the day with your kids (or niece/nephew, or even a kid you might be babysitting) by talking over what they like to do online, and how they can enjoy doing it safely.

February 9, 2016

We need more than just age limits to protect our children in social media

The European Union is preparing a new data protection package. It is making headlines because there are plans to raise the age limit for digital consent from 13 to 16 years. This has sometimes been described as the age limit for joining social media. To be precise, member states could choose their own age limit within this range. Younger kids would need parental consent to create an account on social media and similar networks.

We can probably agree that minors’ use of the internet can be problematic. But is an age limit really the right way to go?

It’s easy to think of potential problems when children and teenagers start using social media. The platforms are powerful communication tools, for good and bad. Cyberbullying. Grooming. Inappropriate content. Unwanted marketing. Getting addicted. Stealing time and attention from homework or other hobbies. And perhaps most importantly: social media often becomes a sphere of freedom, a world totally insulated from the parents and their silly rules. In social media you can choose your contacts. There’s no function that enables parents to check what the kids are doing, unless the kids accept their parents as friends. And the parents are often on totally different services. Facebook is quickly becoming the boring place where mom and granny hang out. Youngsters tend to be on Instagram, WhatsApp, Snapchat, Periscope or whatnot instead.

But is restricting their access to social media the right thing to do? What do we achieve by requiring parental consent before they sign up? This would mean that parents, in theory, have a chance to prevent their children from being on social media. And that’s good, right?

Well, this logic is flawed in several ways. First, it’s easy to lie about your age. Social media services in general have very poor authentication mechanisms for people signing up. They are not verifying your true identity, and can’t verify your age either. Kids learn very quickly that signing up just requires some simple math.
Subtract 16, or whatever the limit is, from the current year when asked for your year of birth.

The other problem is that parental consent requirements don’t give parents a real choice. Electronic communication is becoming a cornerstone of how we interact with other people. It can’t be stressed enough how important it is for our children to learn the rules and skills of this new world. Preventing kids from participating in the community where all their friends are could isolate them, and potentially cause more harm than the dark side of social media.

What we need isn’t age limits and parental consent. It’s better control of the content our children are dealing with, and tools for parents to follow what they are doing. Social media is currently designed for adults, and everyone has tools to protect their privacy. But the same tools become a problem when children join, as they also prevent parents from keeping an eye on their offspring.

Parental consent becomes significant when the social media platforms start to recognize parent-child relationships. New accounts for children under a specified age could be mandatorily linked to an adult’s account. The adult would have some level of visibility into what the child is doing, but maybe not full visibility. Metadata, like whom the child is communicating with, would be a good start. Remember that children deserve a certain level of privacy too. Parents could of course still neglect their responsibilities, but they would at least have a tool if they want to keep an eye on how their kids are doing online.

And then we still have the problem of the lack of age verification. All this is naturally in vain if the kids can sign up as adults. On top of that, children’s social media preferences are very volatile. They do not stay loyally on one service all the time. Having proper parent-child relationships in one service is not enough; it needs to be the norm on all services.
So we are still very far from a social media world that really takes parents’ and children’s needs into account. Just demanding parental consent when kids are signing up does not really do much good. It’s of course nice to see the EU take some baby steps towards a safer net for our children. But this is unfortunately an area where baby steps aren’t enough. We need a couple of giant leaps as soon as possible.

Safe surfing,
Micke

Image by skyseeker

December 17, 2015