What motivates most of the world’s most advanced mobile malware authors? One word: money.
Mobile Threats Motivated by Profit, 2004-2011
“The most credible threat is coming from hackers who want to profit monetarily with their attacks. And right now we’re seeing more profit-motivated mobile malware than ever before,” F-Secure’s Chief Research Officer Mikko Hypponen said in the Mobile Threat Report Q4 2011.
Since 2009, more than half of mobile malware has been profit-motivated. Do you remember what was happening in the mobile world around 2009? The Android mobile platform emerged and has since replaced Symbian as the mobile OS most often targeted by mobile malware.
From the Mobile Threat Report: “Android malware continues to expand rapidly in the fourth quarter of 2011, with malware originating from Russia forming a significant presence in the scene.”
Mobile Threats by Platform, 2004-2011
You’ll notice that while the iOS platform that powers Apple devices has expanded exponentially, it has not experienced a corresponding boom in new malware targeting it. F-Secure Labs credits the security approvals required for placement in Apple’s App Store with keeping malicious apps to a minimum. Some mobile malware does affect jailbroken iPhones, but the Labs does not expect an iOS malware boom.
What does a boom in malware look like?
Tuesday, February 9th is Safer Internet Day this year. An excellent time to sit down and reflect on what kind of Internet we offer our kids, and what kind of electronic environment they will inherit from us.

I have to be blunt here. Our children love their smartphones and the net. They have access to a lot of stuff that interests them, and it’s their new cool way to stay in contact with each other. But the net is not designed for them, and ever younger children are getting connected smartphones. Technology does not support parents properly, and they are often left with very poor visibility into what their kids are doing online. This manifests itself as a wide range of problems, from addiction to cyberbullying and grooming. The situation is not healthy!

There are several factors that contribute to this huge problem:

- The future’s main connectivity devices, the handhelds, are not suitable for kids. Rudimentary features that help protect children are starting to appear, but the development is too slow.
- Social media turns a blind eye to children’s and parents’ needs. Most services offer only a single user experience for both children and adults, and do not recognize parent-child relationships.
- Legislation and controlling authorities are national, while the Internet is global. We will not achieve much without a globally harmonized framework that both device manufacturers and service providers adhere to.

Let’s take a closer look at these three issues.

Mobile devices based on iOS and Android have made significant security advances compared to our old-school desktop computers. The sandboxed app model, where applications have only limited permissions in the system, is good at keeping malware at bay. The downside, however, is that you can’t make traditional anti-malware products for these environments. Those products used to carry an overall responsibility for what happens in the system and monitor activity at many levels.
The new model helps fight malware, but there’s a wide range of other threats and unsuitable content that can’t be fought efficiently anymore. We at F-Secure have a lot of technology and knowledge that can keep devices safe. It’s frustrating that we can’t deploy that technology efficiently on the devices our kids love to use. We can make things like a safe browser that filters out unwanted content, but we can’t filter what the kids are accessing through other apps. And forcing the kids to use our safe browser exclusively requires tricky configuration. Device manufacturers should recognize the need for parental control on mobile devices. They should provide functionality that enables us to enforce a managed and safe experience for the kids across all apps.

Privacy is an issue of paramount importance in social media. Most platforms have implemented good tools enabling users to manage their privacy. This is great, but it has a downside, just like the app model in mobile operating systems. Kids can sign up to social media and enjoy the same privacy protection as adults, even against their parents. What we need is a special kind of child account that must be tied to one or more adult accounts. The adults would have some level of visibility into what the kid is doing, but full visibility is probably not the right way to implement this. Remember that children also have a certain right to privacy. A good start would be to show whom the kid is communicating with and how often, but without showing the message contents. That would already enable parents to spot cyberbullying and grooming patterns at an early stage.

But what if the kids sign up as adults with a false year of birth? There’s currently no reliable way to stop that without implementing strong identity checks for new users, and that is unfeasible in practice. Device control could be the answer.
If parents can lock the social media accounts used on the device, they could at the same time ensure that the kid really is using a child account connected to the parents.

The ideas presented here are all significant changes. The device manufacturers and social media companies may have limited motivation to drive them, as they aren’t linked to their business models. It is therefore very important that there is an external, centralized driving force: the authorities. And that this force is globally harmonized. This is where it becomes really challenging. Many of the problems we face on the Internet today are somehow related to the lack of global harmonization. This area is no exception.

The tools we are left with today are pretty much talking to the kids, setting clear rules and threatening to take away the smartphone. Some of the problems can no doubt be solved this way. But there is still the risk that destructive online scenarios develop for too long before the parents notice. So the status quo is really not an acceptable state.

I also really hope that parents don’t get scared and solve the problem by not buying the kids a smartphone at all. That is even worse than the apparent dangers posed by an uncontrolled net. The ability to use smart devices and social media will be a fundamental skill in the future society, and kids deserve to start practicing it early. Mobile devices are also becoming tools that tie the group together; a kid without a smartphone is soon an outsider. So the no-smartphone strategy is not really an alternative anymore.

Yes, this is an epic issue. It’s clear that we can’t solve it overnight, but we must start working towards these goals ASAP. Mobile devices and the Internet will be a cornerstone of tomorrow’s society. Our children’s society. We owe them a net that is better suited for the little ones. We will not achieve this during our kids’ childhood, but we must start working now to make it a reality for our grandchildren.
Micke
How do we balance privacy and crime fighting? That’s one of the big questions now that we are entering the digitally connected era. Our western democracies have a set of well-established and widely accepted rules that control what the authorities can and can’t do. One aspect of this has been in the headlines lately: your right to “plead the Fifth”, as the Americans say. Laws are different in every country, but most have something similar to the USA’s Fifth Amendment. The beef is that “No person … shall be compelled in any criminal case to be a witness against himself”. Or, as often expressed in popular culture: “You have the right to remain silent.” In fancier words, protection against self-incrimination.

What this means in practice is that no one can force you to reveal information if the authorities suspect you of a crime. You have the right to defend yourself, and refusal to disclose information is a legal defense tactic. But the police can search your home and vehicles for items, if they have the proper warrant, and there’s nothing you can do to stop that. In short, the Fifth Amendment protects what you know but not what you have. Sounds fair.

But the problem is that there was no information technology when these fundamental principles were formed back in 1789. The makers of the Fifth Amendment, and of similar laws in other countries, could not foresee that “what you know” would expand far beyond our own brains. Our mobile gadgets, social media and cloud services can in the worst case store a very comprehensive picture of how we think, whom we have communicated with, where we have been and what we have done. All this is stored in devices, and thus available to the police even if we exercise our right to remain silent.

Where were you last Thursday at 10 PM? Do you know Mr. John Doe? What’s the nature of your relationship with Ms. Jane Doe? Have you purchased any chemicals lately? Do you own a gun? Have you traveled to Boston during the last month?
Have you ever communicated with email@example.com? These are all questions that an investigator could ask you, and all may still be answered by data in your devices and clouds even if you exercise your right to remain silent. So has the Fifth Amendment lost its meaning? Would the original makers of the amendment accept this situation, or would they make an amendment to the amendment?

The situation is pretty clear for social media and cloud storage. This data is stored in some service provider’s data center. The police can obtain a warrant and then get your data without any help from you.(* Same thing with computers they take from your home. The common interpretation is that this isn’t covered by the Fifth Amendment.

But what if you store encrypted files on the servers? Or you use a device that encrypts its local storage (modern Androids and iPhones belong to this category)? The police will in these cases need the password. This is something you know, which makes it protected. This is a problem for the police, and countries have varying legislation to address it. The UK takes an aggressive approach and makes it a crime to refuse to reveal passwords. Memorized passwords are, however, protected in the US, as was demonstrated in a recent case.

Biometric authentication is yet another twist. Imagine that you use your fingerprint to unlock your mobile device. Yes, it’s convenient. But it may at the same time reduce your Fifth Amendment protection significantly. Your fingerprint is what you are, not what you know. There are cases in the US where judges have ruled that forcing a suspect to unlock a device with a fingerprint isn’t in conflict with the constitution. But we haven’t heard the Supreme Court’s ruling on this issue yet.

So the Fifth Amendment, and equivalent laws in other countries, is usually interpreted so that it only protects information stored in your brain. But this definition is quickly becoming outdated and very limited.
This is a significant ethical question. Should we let the Fifth Amendment deteriorate and give crime fighting higher priority? Or should we accept that our personal memory expands beyond what we have in our heads? Our personal gadgets no doubt contain a lot of the kind of information that the makers of the Fifth Amendment wanted to protect. If I have the right to withhold a piece of information stored in my head, why should I not have the right to withhold the same information stored elsewhere? Is there really a fundamental difference that justifies treating these two storage types differently?

These are big questions where different interests conflict, and there are no perfect solutions. So I pass the question to you. What do you think?

Safe surfing,
Micke

Image by OhLizz

(* It is this simple if the police, the suspect and the service provider are all in the same country. It can get very complicated in other cases, but let’s not go there now, as that would be beside the point of this post.)
We have written a lot about how companies treat you as an asset: a source of data that can be monetized in a variety of ways. Spotify recently changed its terms and ensured that this topic stays in the headlines. They want to collect information stored on your mobile device, such as contacts, photos and media files. No thanks! My Spotify app plays music just fine even though it doesn’t have access to pictures and contacts. And their new terms backfired big time!

Spotify’s response: Sorry!

Spotify is not the only one. A lot of companies are dependent on user data, from Facebook and Google to utility developers. So this is really a significant privacy challenge.

But we have privacy legislation. It’s supposed to protect us and set limits on what data Spotify et al. can scoop up and how they can use it. Right? Well, that line of defense has unfortunately fallen already. Yes, there is legislation. But it’s your data and you can decide what to do with it. You are free to sign away your rights to it, and that is utilized by many companies. I bet you have signed a lot of user agreements without reading the fine-print legalese. That’s where you disable most of the protection the law could have offered.

But there is fortunately a second line of defense, and it’s much stronger. Your Spotify app can only upload data it has access to. Mobile operating systems like iOS and Android were designed in an era when we were already aware of the privacy threats. They have several security benefits over desktop systems, but app permissions are definitely one of the most important. In short, apps you install can’t access everything on the device by default. They must ask for permissions, and you can decide what data and functions a certain app shall have access to. This is your last, best hope to keep your private data private. So you’d better learn the importance of app permissions! They are fortunately very easy to use. And you have already used them.
After installing an app, you almost always get a prompt telling you that it wants permission to do something. The most important advice is to stop and think at this point! Don’t let these app permissions be just another boring thing you click through. Your last line of privacy defense falls if you do that.

Common sense is enough to use app permissions. Just think about what the app is supposed to do. In Spotify I search for music or start a playlist. Neither action depends on where I am, so the Spotify app has no real need to access my location. An app that helps me call the emergency number is a totally different cup of tea. It can upload my exact location to the operator, and that is as a matter of fact the main reason for implementing it as an app. So it is natural that this app has a legitimate reason to access my location. And neither of those apps needs to paw through my contacts, so any request to access contacts should be denied. This is the kind of thinking you need to learn.

The iPhone is currently better on app permissions than Android. Android apps declare what they want, and you can review the list before installing the app. That sounds great, but it is not so good in practice. The main problem is that it’s take it or leave it: your only option is to reject the whole list if you dislike one thing the app wants to do, which usually means that the app isn’t installed at all. App developers can sneak in a lot of extra permissions because rejecting the list isn’t a true option in most cases. Android app permissions have actually become just like user license agreements: only a few pay any attention to them.

The iPhone is smarter. Apps install without any questions about permissions, but the system asks the user when the app tries to access restricted content. The app can’t pressure the user into granting unnecessary permissions by threatening not to install at all. And the user has granular control over permissions; it’s not take it or leave it.
Each type of sensitive content or service is handled separately. This is clearly a better approach. Actually, it is so much better that the next Android, Marshmallow, will copy this system.

Moral of the story: app permissions are your friend. And you definitely need allies to help protect your privacy.

Safe surfing,
Micke

My Facebook permissions. Location is a no-no. And I don’t want to shoot pictures from the app. But access to the photos is needed to post shots.

Pictures: twitter.com and iPhone screenshots
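For the technically curious, the take-it-or-leave-it Android model described above can be sketched concretely. Before Android 6.0 (Marshmallow), every permission an app might ever use had to be declared up front in its manifest, and the user saw the whole list as a single accept-or-cancel prompt at install time. A minimal, hypothetical manifest excerpt (the package name and app are invented for illustration):

```xml
<!-- Hypothetical AndroidManifest.xml excerpt for a music-streaming app. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.musicplayer">

    <!-- A reasonable request: streaming music obviously needs the network. -->
    <uses-permission android:name="android.permission.INTERNET" />

    <!-- "Sneaked-in" extras the app has no obvious need for. Before
         Android 6.0, the user's only choice was to accept all of these
         at install time or not install the app at all. -->
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
</manifest>
```

From Marshmallow onwards, the sensitive permissions in such a list, like contacts and location, are instead granted or denied one by one at runtime, much as on iOS, while harmless ones like network access are still granted automatically at install.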