Are you sharing your telephone number on Facebook?
You might be and not even realize it.
A few months ago I signed up for Facebook’s Login Approvals, which required my mobile number. Instantly my number was added to my profile and shared at my default privacy setting.
If my general privacy setting were “Public”, my number could be one of the 2.5 million phone numbers that Brandon Copley recently harvested from Facebook using the site’s new Graph Search.
The Texas-based app developer admits that users can hide their numbers with privacy settings, but he still believes the default is a violation of users’ trust.
“Facebook is denying its users the right to privacy by allowing our phone numbers to be publicly searchable as the default setting,” Copley told TechCrunch. “This means that anyone with my number knows my Facebook contact information. I may have not told my future employer about my Facebook account, but if I called them on my cell phone they can now know how to find me on Facebook.”
To make sure your phone number isn’t public, go to your profile and click on “Update Info”. Click “Edit” next to your “Contact Information”, then click on the audience icon and select the level of sharing you want. I chose “Only Me”.

This isn’t the only privacy surprise you should expect as Facebook’s Graph Search begins rolling out to the site’s one billion users.
The simplest way to make sure you’re only sharing what you want to share is to use our new Safe Profile Beta app, which scans your profile and lets you know how much you’re sharing and how to lock down your profile. But keep reading for more information about the search and how to prepare yourself.
Graph Search will definitely change the way people look at Facebook. You can sign up for the waiting list here: http://www.facebook.com/about/graphsearch
Your friends and their friends will be able to search your information in ways you may not expect. And this tool will likely become the “Google” of social—meaning people will go to it first to discover people based on interests and location, which could get a bit “creepy.”
Some suggest this tool will make it easier for criminals to find information for phishing attacks or repressive governments to crack down on dissidents. You can see some examples of how married people who “like” prostitutes and government employees who “like” racism here: http://actualfacebookgraphsearches.tumblr.com/
However, the good news is that the search is restricted by your privacy settings. And most of your friends use Facebook pretty sanely, right?
“90% of users get the basics right and the other 10% are hopeless,” F-Secure Security Advisor Sean Sullivan told me. “When the 90% meets the 10%, de-friend the boneheads. Because soon they will reflect on you.”
Since you will not be able to opt out of Graph Search, you might want to take a few more steps to make sure you don’t end up on the bad end of a disturbing search made by a friend, family member or potential employer.
Here’s what to do now:
(If you’re one of the 90% of Facebook users who get how to use the site, you can skip to step three for tips that relate specifically to Graph Search.)
1. First of all, never post anything you wouldn’t want to end up in your mom’s newsfeed.
This will save you from most embarrassment. That means no pictures, videos or status updates you wouldn’t want to see on the cover of your hometown newspaper. If you do this, you’ll avoid most, but not all, of the trouble that could result from being on Facebook or in its search.
2. Check your privacy settings and unfriend anyone who doesn’t seem to use the site responsibly
You can get fancy and restrict certain things to certain people, but Facebook’s basic privacy settings are “Public” or “Friends.” We recommend “Friends”, unless you want your profile to end up in the search results of anyone in the world.
Find the lock near the upper right hand corner, click on it and select “See more settings” at the bottom of the menu that pops up.
For every option under “Who can see my stuff?” and “Who can look me up?”, pick “Friends”.
3. Scrub your history
You can (and should) limit all of your old posts to just your friends. Once you do this, you cannot undo it in bulk, but you can go back and adjust each post individually.
Click at the top right of any Facebook page and select “Privacy Settings”. Find “Limit the audience for posts I’ve shared with friends of friends or Public?”, click “Limit Past Posts”, and confirm by clicking “Limit Old Posts”.
4. Check your likes!
This is where Graph Search gets “creepy.” Let’s say you liked a band three years ago, or your competitor at work, or a boy band as a joke. Graph Search doesn’t get the joke. What you’ve liked on Facebook is now much more important. So just as you unfriended anyone who worries you, go through your likes and unlike any page you don’t want to be associated with. Unfortunately you need to do this page by page.
Go to your profile, click on “Likes.”
They’re organized chronologically, so go back in time and unlike away.
5. Turn on “tag review” and take control of your wall.
The most annoying thing about Facebook is that people can tag you in photos you don’t want to be associated with. You can turn on “tag review” and prevent the photos from showing up to your friends, but the tag will still be on the photo unless you “report/remove tag.”
Here’s how to turn on “tag review” so photos you don’t approve don’t show up on your profile.
Click on the wheel in the right-hand corner, click on your privacy settings and then click on Timeline and Tagging on the left menu.
Most people want to allow friends to post on their wall, but if protecting your image is your priority, you may want to make posting available only to yourself. Either way, it’s a good idea to select “Friends” for “Who can see what others post on your timeline?” This will prevent strangers, or even potential mates or employers, from happening to catch your page right as a friend posts some hilariously sick image on your timeline.
We recommend you turn on “Review posts friends tag you in before they appear on your timeline?” This won’t stop your friends from tagging you in something embarrassing, but it will stop it from showing up on your wall if they do.
We definitely recommend you enable “Review tags people add to your own posts before the tags appear on Facebook?” This so-called tag review will keep you out of ridiculous tagged pictures or posts that show up in search results. Instead of just popping up on your wall, the posts will show up in your activity log, where you can approve a tag or ask for it to be removed. To get to your “Activity Log” to approve your tags, go to your profile by clicking on your name in the top navigation, then click on “Activity Log”.
Here’s a Facebook video on how to “report/remove” photos or videos you don’t want to be tagged in.
6. If you want to prevent your friends and family from being associated with you, hide them.
On your profile/timeline page, click “Friends”. On the next screen you’ll see an edit button.
Select “Only Me”.
To hide your family, click “About” below your name, work, school and hometown on your timeline. Under “Relationships and Family” select “Edit” and select “Only Me.”
7. If this is too much work, consider moving somewhere you’ll have lots of privacy—Google+.
[Photo by Milica Sekulic]
It’s a well-known fact that the UK’s Prime Minister David Cameron doesn’t care much about people’s privacy. He has recently been driving the so-called Snooper’s Charter, which would give authorities expanded surveillance powers and got additional fuel from the Paris attacks. It is said that terrorists want to tear down Western society and its lifestyle. And Cameron definitely puts himself in the same camp with statements like this: “In our country, do we want to allow a means of communication between people which we cannot read? No, we must not.”

Note that he didn’t say terrorists, he said people. Kudos for the honesty. It’s a fact that terrorists blend in with the rest of the population, and any attempt to weaken their security affects all of us. And it should be a no-brainer that a nation where the government can listen in on everybody is a bad thing, at least if you have read Orwell’s Nineteen Eighty-Four.

But why does WhatsApp come up over and over as an example of something that gives the snoops grey hair? It’s a mainstream instant messenger app that wasn’t built for security. There are also similar apps that focus on security and privacy, like Telegram, Signal and Wickr. Why isn’t Cameron raging about them? The answer is both simple and very significant, but it may not be obvious at first.

The Internet was insecure by default, and you had to use tools to fix that. The pre-Snowden era was the golden age for agencies tapping into the Internet backbone. Everything was open and unencrypted, except the really interesting stuff. Encryption itself became a signal that someone was of interest, and the authorities could use other means to find out what that person was up to.

Now that we, thanks to Snowden, know the real state of things, more and more encryption is being built in by default. A secured connection between client and server is becoming the norm for communication services, and many services are deploying end-to-end encryption. That means messages are secured and opened by the communicating devices, not by the servers. Stuff stored on the servers is thus also safe from snoops.

So yes, people with Cameron’s mindset have a real problem here. Correctly implemented end-to-end encryption can be next to impossible to break. But there’s still one important thing that tapping the wire can reveal: what communication tool you are using. And this is the important point. WhatsApp is a mainstream messenger with security. Telegram, Signal and Wickr are security messengers used by only a small group of people with special needs. Traffic from both WhatsApp and Signal, for example, is encrypted. But the fact that you are using Signal makes you stick out, just like encryption users did before.

WhatsApp is the prime target of Cameron’s wrath mainly because it is showing us how security will be implemented in the future. We are quickly moving towards a net where security is built in. Everyone will get decent security by default, and minding your security will no longer make you a suspect. And that’s great! We all need protection in a world with escalating cybercrime.

WhatsApp is by no means a perfect security solution. The implementation of end-to-end encryption started in late 2014 and is still far from complete. The handling of metadata about users and communications is not very secure, and there are tricks the wire-snoops can use to map people’s networks of contacts. So check it out thoroughly before you start using it for really hot stuff. But WhatsApp seems to be on the path to becoming something unique: among the first communication solutions that are easy to use, popular and secure by default. Apple’s iMessage is another example, so easy that many are using it without knowing it, when they think they are sending SMS messages. But iMessage’s security is unfortunately not flawless either.

Safe surfing,
Micke

PS. Yes, weakening security IS a bad idea. An excellent example is the TSA luggage locks, which have a master key that *used to be* secret.

Image by Sam Azgor
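For the technically curious: the core idea behind end-to-end encryption is that the two endpoints derive a key the relaying server never sees. A deliberately simplified sketch of that idea is a toy Diffie-Hellman key exchange in Python. Note the assumptions: the toy-sized prime and the XOR stream cipher are for illustration only and are nothing like the Signal protocol that WhatsApp actually uses, and real systems add authentication and forward secrecy on top.

```python
import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime; toy-sized, NOT safe for real use
G = 5

def keypair():
    """Generate a private exponent and the public value g^priv mod p."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def shared_key(priv, other_pub):
    # Both ends derive the same secret: (g^a)^b == (g^b)^a (mod p)
    secret = pow(other_pub, priv, P)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_crypt(key, data):
    # Toy stream cipher: XOR with a SHA-256-derived keystream.
    # XOR is its own inverse, so this both encrypts and decrypts.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

# Alice and Bob each generate a key pair; only the PUBLIC halves
# ever cross the server.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

ciphertext = xor_crypt(shared_key(a_priv, b_pub), b"meet at noon")
# The server relays only a_pub, b_pub and the ciphertext, none of
# which reveal the plaintext.
plaintext = xor_crypt(shared_key(b_priv, a_pub), ciphertext)
```

The point of the sketch is the last comment: everything the server (or a wiretap) sees is useless without one of the private keys, which never leave the devices.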
We have a dilemma, and maybe you want to help us.

I have written a lot about privacy and the trust relationship between users and software vendors. Users must trust the vendor not to misuse the data that the software handles, but they have very poor abilities to base that trust on facts. The vendor’s reputation is usually the most tangible thing available.

Vendors can be split into two camps based on their business model. The providers of “free” services, like Facebook and Google, must collect comprehensive data about their users to be able to run targeted marketing. The other camp, where we at F-Secure are, sells products that you pay money for. This camp does not need to profile users, so the privacy threats should be smaller.

But is that the whole picture? No, not really. Vendors of paid products do not need to profile users for marketing, but there is still a lot of data on customers’ devices that may be relevant. The devices’ technical configuration is of course relevant when prioritizing maintenance, and knowing which features actually are used helps plan future releases. And we in the security field have additional interests: the prevalence of both clean and malicious files is important, as well as patterns related to malicious attacks, just to name a few things.

One of our primary goals is to guard your privacy. But we could, on the other hand, benefit from data on your device. Or to be precise, you could benefit from letting us use that data, as it contributes to better protection overall. So that’s our dilemma. How do we utilize this data in a way that won’t put your privacy in jeopardy? How do we maintain trust? How do we convince you that the data we collect really is used to improve your protection? Our policy for this is outlined here, and the anti-malware product’s data transfer is documented in detail in this document.

In short: we only upload data necessary to produce the service, we focus on technical data and won’t take personal data, we hash the data when feasible, and we anonymize data so we can’t tell whom it came from.

The trend is clearly towards lighter devices that rely more on cloud services. Our answer to that is Security Cloud. It enables devices to off-load tasks to the cloud and benefit from data collected from the whole community. But to keep up with the threats we must develop Security Cloud constantly, and that also means we will need more info about what happens on your device.

That’s why I would like to check what your opinion about data upload is. How do you feel about Security Cloud using data from your device to improve the overall security for all users? Do you trust us when we say that we apply strict rules to the data upload to guard your privacy?

Safe surfing,
Micke

Image by balticservers.com
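To make the hashing-and-anonymizing idea above concrete, here is a hypothetical sketch, not F-Secure’s actual telemetry code. It builds an upload record from a file’s contents rather than its path (which may contain a user name), so a cloud service can track how common a file is without learning anything about the person it came from. The function name and record fields are invented for illustration.

```python
import hashlib
import os

def anonymized_record(path):
    """Build an upload record that identifies the FILE, not the user."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large files don't have to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            sha256.update(chunk)
    return {
        "sha256": sha256.hexdigest(),   # identical for every copy of the file, anywhere
        "size": os.path.getsize(path),  # technical data, nothing personal
        # deliberately omitted: path, file name, user name, machine ID
    }
```

Two users who upload records for the same malware sample produce identical hashes, which is what lets a security cloud count prevalence, while nothing in the record ties it back to either user.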
We are all sad about what happened in Paris last Friday. It’s said that the terrorist attacks have changed the world. That is no doubt true, and one aspect of it is how social media becomes more important in situations like this.

Facebook has deployed two functions that help people deal with this kind of crisis. The Safety Check feature collects info about people in the area of a disaster, and whether they are safe or not. This feature was initially created for natural disasters. Facebook received criticism for using it in Paris but not for the Beirut bombings a day earlier. It turned out that their explanation is quite good: Beirut made them consider whether the feature should be used for terror attacks as well, and they were ready to change the policy when Paris happened.

The other feature lets you use a temporary profile picture with some appropriate overlay, the tricolor in this case. This is a nice and easy way to show sympathy, and it became popular very quickly, at least among my friends. The downside, however, is that it became so popular that those without a tricolor started sticking out. Some people started asking them why they weren’t supporting the victims in Paris. The whole thing has lost part of its meaning when it goes that far. We can’t know anymore who genuinely supports France and who changed the picture because of the social pressure.

I changed my picture too. And it was interesting to see how the feature was implemented. The Facebook app for iOS 9 launched a wizard that let me make a picture with the tricolor overlay, either by snapping a new selfie or by using one of my previous profile pictures. I guess the latter is what most people want to do. But Facebook’s wizard requires permission to use the camera and refuses to start until the user has granted it, even if you just want to modify an existing picture. Even more spooky: the wizard also asked for permission to use the microphone when I first ran it. That is, needless to say, totally unnecessary when creating a profile picture. And Facebook has been accused of misusing audio data. It’s doubtful if they really do, but the only sure thing is that they don’t if you deny Facebook microphone access. That was probably a temporary glitch, though; I was not able to reproduce the mic request after resetting everything and running the wizard again.

Your new profile picture may be temporary, but any rights you grant the Facebook app are permanent. I’m not saying that this is a sinister plot to get more data about you; it may be just sloppy programming. But it is anyway an excellent reminder of how important app permissions are. We should learn to be more critical when granting, or denying, rights like this. This is the case for any app, but especially Facebook, as its whole business model is based on scooping up data about us users.

Time for an app permission check. On your iOS device, go to Settings and then Privacy. Here you can see the categories of info that an app can request. Go through them and think critically about whether a certain app really needs its permissions to provide value to you. Check Facebook’s camera and microphone permissions if you have used the temporary profile picture feature. And one last thing: make it a habit to check the privacy settings now and then.

[Screen capture: This is how far you get unless you agree to grant Facebook camera access.]

[Screen capture: The Settings > Privacy page. Under each category you find the apps that have requested access, and can select whether the request is granted or denied.]

Safe surfing,
Micke

PS. The temporary profile picture function is BTW simpler in Facebook’s web interface. You just see your current profile picture with the overlay, and you can pan and zoom before saving. I like that approach much more.

Photo by Markus Nikander and iPhone screen captures