Google I/O 2012 - Security and Privacy in Android Apps


Uploaded by GoogleDevelopers on 02.07.2012

Transcript:
>>Jon Larimer: Hi, everyone. My name is Jon Larimer, and I am a security engineer on the Android team, based out of sunny Atlanta.
>>Kenny Root: My name is Kenny Root. I'm a software engineer on the Android team.
>>Jon Larimer: We're here today to talk to you about security and privacy in the Android apps that you guys are developing and publishing.
And we're going to dig right in. Let's talk about why you need to take security into account
when you're writing apps and why it's important to protect your apps and your users' data.
I'm sure you've all seen the headlines. There's a major security breach here, a major privacy
leak there, some popular mobile app gets caught collecting too much data. The users are furious.
The blogosphere is freaking out. The media demands answers. And even the politicians
are getting involved to speak their mind. So this is pretty serious stuff.
When stories like this hit the mainstream news outlets and people talk about it in bars
and in taxi cabs, it shows that people really are paying attention to the privacy practices
and security issues in mobile apps today. And it would really ruin your day if you woke
up one morning and saw the headline that was about your app that your company produced
that you actually spent time developing. You don't really want the wake-up call from your
boss with PR and legal on the line, asking what's going on and what you're planning on
doing to fix it. And the problem isn't really that developers
are collecting personal user data. Sometimes a fundamental feature of your app requires
it. The problem is the lack of transparency. And
the problem is not notifying users of the data you're collecting and what it's being used for. And even if you aren't purposely collecting
personal data without telling users, your app can still allow an accidental personal
data leak if you aren't taking security into consideration.
If your app stores or transmits any personal data insecurely, malware authors or hackers
could end up using this data without you or your users knowing it. And if your app is
collecting too much data, or even if it just has access to too much data through the permissions
that it requests, people start to worry. And they have the right to worry, and they really
should be worrying. So how many of you guys have seen negative
reviews on your apps because of the amount of permissions that your app needs?
A few of you. Probably a lot of you. Some of you aren't raising your hands.
[ Laughter ] >>Jon Larimer: I see a lot of these reviews
all the time. So my point is that people are getting more
and more distrustful of apps that ask for access to their personal data without any
clear reason for what you're planning on using it for.
So our talk today is about how you can write apps with privacy and security in mind. And
we'll talk about ways to protect your app's data from malicious apps that are on the device
and how to prevent your app from being a springboard for privilege escalation attacks or information
leaks. And we'll tell you ways that you can minimize
the number of permissions that your app requests, so you can lower the impact of a potential
attack. And we'll talk about the tools and the documentation that we have to help you
write secure apps. So people today, they use their phones for
everything now. They surf the Web. They read their email. They take pictures, they get
driving directions, they interact with all of their friends through their social networks.
And people can customize the way they do these things by installing different apps. But for
these apps to work, they need to have access to the data that basically defines your life,
your list of friends, your location on the planet, where you're going and where you've
been, and who you're talking to. So mobile devices are very powerful now, but
they're also a treasure trove of very private personal data on the phone's owner and even
all of the phone owner's friends. So Android doesn't just hand out this data to anyone that wants it. As you undoubtedly already know, being Android developers, the Android permission system protects the data and the capabilities on your phone. So when
a user installs an app that wants to access this data or some capability of the phone,
the user gets to see the data and the capabilities that you're wanting to access. For example,
if your app wants to send SMS messages, the user can see that your app wants to do this
through the permission system. And if somebody wants to download a game and they see that
it's requesting access to read all of your contacts and send SMS messages, a lot of times,
they'll think twice about downloading it. So when you write an app that accesses the
data on a device or even produces data on a device, for example, a social networking
app where a user enters their own content, you need to take very good care of this data
and you need to be a good data custodian. There are bad guys out there that would love
to abuse this data for purposes that the user never really intended it for.
And, first of all, people generally don't like giving out their personal information
to strangers, they really hate it. You can go ask someone on the street for their name
and email address and phone number and see if they'll give it to you. And they probably
won't. You can ask them for a list of all of their friends and their friends' phone numbers, but they're not going to give you that data. But it seems like a lot of app developers are
expecting people to just hand over all this information for no reason.
And besides the general discomfort involved in giving out all this personal data, people
need to worry about the -- the folks that write malicious software, the malware authors
who write apps that try to mine mobile devices for data.
What do the bad guys want to do if they get a piece of malware on a user's phone?
Well, the user's phone number and email address could be harvested for sending unsolicited
advertisements, robo calls, and spam emails. Their contacts could be collected, too, so
all of their friends start getting spam emails. And the more unscrupulous or Black Hat marketers
and spammers will pay big bucks for collections of detailed personal information on lots of
people. But that's really not the only way to make
money from having access to a mobile device. There are criminals out there that want to
take advantage of your user's phones to send premium SMS messages that directly charge
your phone bill. And then there are the real serious criminals out there that would love
to transfer money directly out of your bank account, for example, intercepting the two-factor
authentication messages that your bank sends over SMS.
So these are just a few of the reasons that the data and capabilities on Android devices
really need to be protected. But so what does this have to do with the
apps that you write and didn't I just get done saying that Android protects all this
will data? Well, even well intentioned apps that are
insecure could end up leaking data and other device access to malicious apps.
And when a user installs your app and they approve access to the permissions that you
request, they're trusting you with that data and those device capabilities. They have faith
that you aren't going to leak this data to bad apps and that their email address won't
start getting spam because of your app and they're trusting that all of their contacts
won't be sent unsolicited SMS messages because of your app.
So the goal of this talk is really to make sure that you don't let those users down.
So you really need to be aware that if your app requests a permission, a security vulnerability
in the app can end up granting equivalent access to another app that doesn't request
that permission. So even if you aren't willfully violating your users' privacy, a mistake in
your code could end up leading to some other bad actor doing that kind of thing.
One example I have of this kind of security vulnerability is, for example, storing the
latest GPS location in a world-readable file. And I've seen this a few times, and maybe
your app does this. Hopefully, it doesn't. If it does, you should go fix it immediately
after this session. So if you store the GPS location in a file that any app can read, that means another app can access the phone's location without requesting either of the location permissions, ACCESS_COARSE_LOCATION or ACCESS_FINE_LOCATION. You're basically removing the effectiveness
of the location permission that Android normally requires to read the location data.
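On Android, the fix is to pass MODE_PRIVATE to openFileOutput() instead of a world-readable mode. Here's the same idea sketched in plain Java with POSIX file permissions, so it runs anywhere (the file name and contents are made up):

```java
// Sketch only: restrict a data file to its owning user, the way
// MODE_PRIVATE restricts an Android app's files to that app's user ID.
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

public class PrivateFile {
    public static Path writePrivately(Path path, String data) throws Exception {
        Files.writeString(path, data);
        // rw------- : owner read/write only, no world-readable bits
        Files.setPosixFilePermissions(path,
                PosixFilePermissions.fromString("rw-------"));
        return path;
    }
}
```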
Another example would be exporting a content provider that stores sensitive personal information.
So let's say that you are developing a note taking app where the user can take notes and
save their notes, and you might implement a content provider so the different components
of your app can all access these notes through a uniform API. And if this content provider
is exported and doesn't require any permissions, then that means that any app on the phone
could end up reading the user's private notes. And the user could be keeping, like, really
private, sensitive stuff in here and they might not have any idea that you're making
it available to any other app that's on the phone.
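For a hypothetical note-taking app like the one above, keeping the provider out of reach of other apps is a single manifest attribute (names made up for illustration):

```xml
<!-- Content providers are exported by default on older platform levels,
     so state the intent explicitly. -->
<provider android:name=".NotesProvider"
          android:authorities="com.example.notes"
          android:exported="false" />
```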
And you could think -- you could think that you're being really smart and encrypting the
backing store for the file or encrypting the database and thinking that the malware authors
or the hackers won't be able to access it. But if you're providing an open API where
any app can read the data, then your encryption is basically worthless, your users are still
at risk, and your app is really a security headline waiting to happen.
And so another one that we see a lot is logging personal information in the logcat logs. We've
seen people logging email addresses or even passwords to the logcat logs. And it's usually
just leftover lines of code from debugging that people forget to remove. But it's a pretty
serious problem. And we have made some improvements to the
logging system in Jelly Bean to prevent this kind of thing from being abused. So apps in
Jelly Bean can no longer read the logcat logs for other apps, which is good. But it's still
a very bad idea to log any personal data to logcat logs, regardless of that.
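Beyond not logging personal data in the first place, a defensive habit is to scrub anything that might contain it before it reaches a log line. A minimal sketch (this helper and its regex are illustrative, not an Android API):

```java
// Hypothetical helper: redact email addresses from a message before logging,
// so leftover debug log lines can't leak them.
import java.util.regex.Pattern;

public class SafeLog {
    private static final Pattern EMAIL =
            Pattern.compile("[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}");

    public static String redact(String message) {
        return EMAIL.matcher(message).replaceAll("<redacted>");
    }
}
```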
Besides worrying about other malicious apps interacting with your apps, you need to watch what you do over the network. Sending unencrypted personal data over a wireless network is really bad. You should be using encryption whenever possible.
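In practice that mostly means building your endpoints on https:// so the platform's TLS stack protects the traffic. A trivial sketch of refusing cleartext endpoints (the host and path are made up):

```java
// Only ever hand out https URLs; treat a cleartext endpoint as a bug.
import java.net.URL;

public class Endpoints {
    public static URL secure(String host, String path) throws Exception {
        URL url = new URL("https", host, path);
        if (!"https".equals(url.getProtocol())) {
            throw new IllegalStateException("refusing cleartext endpoint");
        }
        return url;
    }
}
```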
If you're transmitting data over an unencrypted link, you might as well be broadcasting it
to the world. Imagine your user is walking around with a billboard floating over their
head showing all of their personal information and the contents of their documents or even
their contacts' personal information. Like I said, this is bad. You need to encrypt anything
that goes over a wireless network. And then there is also the issue of lost or
stolen phones. And it's really up to the user to protect access to their physical device.
But making your app more secure can still help here. Because you don't want to accidentally
provide a back door into someone's phone if they lose control over it.
So that's what our talk is about today. We're going to get into more detail about
these possible security holes and how to prevent them. If you pay attention to what we're saying,
hopefully, you won't have to worry about dealing with the consequences of a major security
or privacy breach with your app. So the first piece of advice I want to give
you is that you really need to let your users know what you're doing with their data. If
your app is transmitting or collecting any user data, meaning any data that you get from
the phone by requesting permission or any data that the user manually enters into the
app, make sure that you have a privacy policy. We allow uploading privacy policies to the Play Store now, so that users can read them before downloading the app. So you don't have to implement your own sort of UI for showing a privacy policy.
This helps you be more up-front and transparent. And most users really appreciate this kind
of thing. And the privacy policy should spell out exactly
what data you collect. And I really mean exactly. So if you're collecting email addresses and
names, say names and email addresses. If you're collecting phone numbers, say phone numbers.
Don't just provide a blanket policy that says that you are going to collect whatever data
you want and do whatever you want with it, because that's not useful and you might as
well not even have a privacy policy at that point.
Something else that you want to do, it's a pretty good idea to give users a choice. If
you want to collect all of their contacts for some reason, ask them and let them know
why you want this information. Let them know what benefit that you're providing to them
in exchange for this data. And you should really give the users an option to select
which data they want to give you. All or nothing isn't a great option.
So let them select a subset of the contacts that they want to share with your app.
But, really, the most important thing that you need to do is just be transparent about
what data your app collects and what you're doing with it.
So something else that's really important to keeping your app secure is keeping your
developer account itself secure. The Play Store Developer Console lets you
upload apps and publish your apps and update your apps. So you really need to protect your
account. If someone else gets access to it, they can publish apps as you, which if they
publish some really bad apps, it would make you look pretty bad. They could also get access
to your financial data. They'll see how much money you make from selling your apps and
all of your in-app content. So one of the best ways to prevent unauthorized
access is by using two-factor authentication. You can download the Google Authenticator app on your phone and require a one-time password when you log in to your Google account.
And if you don't have two-factor authentication enabled for your developer account, that's
another one of those things where after this session, you should go enable it, because
it's really one of the most effective things you can do to protect your account.
So another tip is, if you want to give another employee or a partner access to your developer
account, don't give them your password. We've seen this happen, and it really kind of scares
me when people do this kind of thing. So we added the ability to grant other accounts access to your main developer account, so you don't have to give out your password.
You can grant access by email address, and you get to choose whether or not this person
can see the financial data. So you can create a single
authoritative account for your company, and then other accounts can be granted access.
And this way, a single rogue employee can't take your account hostage. If you're just
granting access to other people, you can revoke that access without changing the master account
password. So if someone leaves the company on bad terms, it kind of limits
the damage that they can do. >>Kenny Root: Next we're going to talk about
the app signing key. You might ask, what is an app signing key? Well, as Android developers, you've already been using one. You might not have noticed, because in a normal development work flow in Eclipse, using the ADT, or Android Developer Tools, your app is signed automatically before it's installed on the device. And that's because any app installed on an Android device must be signed with a private key before it's allowed to be installed.
However, if you're distributing your app on Google Play or sending it via email to someone else, you'll have to generate a release key. This key is unique to your app, and any update you want to provide for that app must be signed with
the same key. Another cool property of this key is, if you
sign multiple apps with the same key, you can use things such as shared user IDs between the apps, so they can access each other's data, or you can use a permission level of signature, which
we'll talk about later. Because the signing key is part of the app's identity and can be used to update your app, you really need to make sure that you protect this signing key.
this signing key. So a big company might have something like
a hardware security module or HSM where they have little a ceremony where they sign the
app's keys. I don't know, they might have put on special hats.
As a home-brewed developer, you might just have your key on a USB stick that you stick
in your fireproof safe. But the most important part is you need to keep a backup of this
key. But whatever you do, don't keep that backup in your source repository or package
it in your app. We've seen people do this before, and you're basically giving your private
key away to the world. Oh, and by the way, since you have to sign
with the same key to have updates, if you lose that release key, you're not going to
be able to update your app. You're going to have to tell users to uninstall and install
a new app with a different key. And you can see by the Google Search results that it happens
quite a lot. And we've even received support calls about this. I'm sorry, but we can't factor your public key to find the private key again. You know, I don't want to get a call from you guys about losing your key, because I don't want to have to tell you, sorry, I can't fix it for you.
Now that we know how to keep your accounts
and your key secure, let's talk more about Android security itself and the features it
offers developers. So the security architecture for Android has
several layers. The first layer is that each application runs in its own process space and under its own unique user ID. There are some exceptions, such as apps signed by the same certificate, which can run with a shared user ID. But, generally, you can imagine that each application is separated from the others by kind of a little firewall: its process space. The Linux kernel provides this separation between processes and different user IDs, which forms the basis of the Android sandbox.
And this sandbox gives developers the flexibility they need to make wonderful and innovative
apps without jumping through hoops to go through different problems they might have.
And the sandbox also ensures that apps -- or interactions between each app component are
protected by the appropriate permission check. For instance, if your app was accessing the
file system, the Linux kernel provides file system security. So what the Linux kernel would do is see what your app's user ID is, see who the file's owner is, and check the permissions. And if that checks out, it allows you to read from or write to the file.
Or you can imagine as a developer you might want to enable Wi-Fi peer-to-peer mode. But you can't talk directly to the Wi-Fi driver in the kernel, so you make a call to the Wi-Fi Manager API. On the backend, this actually uses the Binder IPC, or interprocess communication, mechanism to talk to the system server.
And the system server would then check your app's permissions, meaning what it was granted when it was installed. And if the permission was granted, it would talk to the Wi-Fi kernel driver on your app's behalf and enable Wi-Fi peer-to-peer mode.
But it's not limited to calls to the system server. You can provide components to other
applications for them to interact with, and you can do something like maybe you have a
foreign language word list for an input method editor. Or you could have a match-making service
for a game to find other opponents. You could provide a simple interface for a complex communication
protocol, like Secure Shell, or you could have a calendar provider that goes up to the
cloud to grab your next appointment. You know, it's really only limited by what you can imagine.
Android is unique in that there's freedom within your app's process to do what you need
to do. This provides a lot of awesome possibilities. But it's also different from a desktop VM.
And I have a kind of a story. I was trying to write a tool for the Android SDK. I thought,
you know, I'd just write it in the Java language because it can run on Linux and Windows and
Mac OS. But it turns out I ran into a problem. I was trying to use the password-based key
derivation function number 2. It's kind of a mouthful. But it's PBKDF2. It turns out,
in Java, it's only available on some operating systems. I thought, oh, boy, I'll just use my own implementation and provide that through the Java Cryptography Architecture. But it turns out that in some VMs you can't install your own JCA provider, and in some VMs you can. So it's kind of a headache. But on Android, you won't have that kind of headache.
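For reference, here's roughly what that looks like with the JCA APIs that Android's runtime ships with: PBKDF2 is available through SecretKeyFactory (the iteration count and key size below are illustrative, not recommendations):

```java
// Derive a 256-bit key from a password with PBKDF2, via the standard
// Java Cryptography Architecture APIs.
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class KeyDerivation {
    public static byte[] deriveKey(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 256);
        SecretKeyFactory factory =
                SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        return factory.generateSecret(spec).getEncoded();
    }
}
```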
You can do what you need to do, and you'll be much happier. But it also means you have
to be aware of some things. You can use reflection in your own application
to do what you need to do and it allows you to do cool things like building dynamic code.
But it also means that protections for methods, fields, and classes, like protected and private,
aren't absolute. And you can do some really gnarly things in your own code using reflection.
I wouldn't really encourage that, though. I'd just stay away from it.
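A quick demonstration of that point, runnable as plain Java (the Vault class is just a stand-in):

```java
// Within your own process, reflection can read a private field anyway:
// access modifiers are not a security boundary inside the sandbox.
import java.lang.reflect.Field;

public class ReflectionDemo {
    static class Vault {
        private String secret = "s3cret";
    }

    public static String peek(Vault vault) throws Exception {
        Field field = Vault.class.getDeclaredField("secret");
        field.setAccessible(true); // bypasses the language-level access check
        return (String) field.get(vault);
    }
}
```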
Or you can use JNI, the Java Native Interface, which has allowed developers to do some really, really cool things, like port in the latest game engine or use a third-party library to speed up development.
But it also means that in native code, you can do anything in your own process space. There's not really a restriction. So you can also scribble on the Dalvik managed heap, which can really cause some bizarre behavior. And if you're processing things in native code, like images or anything, you can inadvertently create a security exploit by allowing a buffer overflow. In the worst case, that's an exploitable security hole; in the best case, it might just be a segmentation fault or some really bizarre behavior. >>Jon Larimer: So next we'll talk about
it takes to actually write a secure app. We'll talk about the potential problems you might face and solutions for those problems. We'll tell you the best practices that you can employ
to take advantage of the security features that the Android platform framework and operating
system offers.
So here's a typical application. It has a few activities. There's a service running in the background. The service has a settings file that it reads and writes to, and there's a database with a content provider that provides access to it. The app is Internet-enabled and it talks to the cloud.
So when you look at this diagram, can you tell where the attack surface is? Which of these components could have a security vulnerability or some kind of leak that would expose data or capabilities from Android? And the answer is all of them. Every single component here could be exposing data if the developer -- if you guys aren't taking the necessary precautions. The activities could be leaking personal data to the log file, the service could be accepting remote calls from other apps that don't have permission, and the settings file could be world readable or world writeable, allowing access to the service. The content provider could be granting access to the database, and the database file itself could have insecure file system permissions. And even the data being transmitted over the network could be in clear text, or the web service itself could be compromised.
Now, this isn't really as scary as it looks. Android actually makes it pretty easy to prevent most of these kinds of attacks, except for the cloud one there; we can't really protect your web server. It's often easier to write a secure app on Android than it is to write an insecure app. You just need to know what to do, you need to know what's safe and what's not safe, and once you really understand the risks and the security model of Android, it starts to become like second nature. And we do have some tools to help keep you on track.
So now Kenny will tell you about the different types of application components and how to prevent other apps from accessing them if you don't want them to.
>>Kenny Root: So each component provided by your app is declared in the Android manifest.
And this is how the system server, specifically the activity manager and package manager inside the system server, knows the entry points to your apps. And these entry
points can be services, activities, broadcast receivers or content providers. And each one
of these components can have intent filters associated with them which tells the system
server which intent data you want these activities or services matched against.
An intent filter signals to the Android system that you want these components exposed to other apps. And without an intent filter, most components are not available to other applications, with the exception of the content provider, which is exported by default. So it's a good idea to get in the habit of explicitly marking components as exported or not. This is not strictly necessary, but it will help you prevent mistakes in the future. For instance, say you didn't intend a component to be exported, and maybe seven months later you come back, or a coworker comes in and adds an intent filter for something they need. They might inadvertently export that component, and it could cause a security vulnerability later. If you do wish to make your -- whoops, wrong slide.
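As a sketch of what that explicit marking looks like in AndroidManifest.xml (component and package names here are hypothetical):

```xml
<!-- Internal-only service: no intent filter, and exported is spelled out. -->
<service android:name=".SyncService"
         android:exported="false" />

<!-- This activity has an intent filter, so it would be exported anyway;
     saying so explicitly documents the decision. -->
<activity android:name=".ShareActivity"
          android:exported="true">
    <intent-filter>
        <action android:name="android.intent.action.SEND" />
        <category android:name="android.intent.category.DEFAULT" />
        <data android:mimeType="text/plain" />
    </intent-filter>
</activity>
```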
If you do wish to make your components available, but only for limited use, there are permissions you can declare in the AndroidManifest.xml. These may be listed to the user upon installation, depending on their protection level. A protection level of normal is for permissions that don't really expose any user data, but that you might still want the user to be aware of when scrolling through the list of permissions. Dangerous is for things that could expose user data and that you do want the user to be aware of. And finally, there's a protection level of signature, which allows you to provide components only to other applications signed by the same app-signing key we discussed before. So here's an example of how you define a permission
in an Android manifest. It shows that the protection level is restricted to signature, meaning the same app-signing key that you sign this app with, and that it's protecting our example service. And if you didn't want the example service available to other applications at all, it's a good idea to leave that signature permission in there and also mark it as exported false. This is kind of a belt-and-suspenders approach: it ensures that even if there's any problem with the exporting behavior on previous platform versions, the component is still only available to apps with the same signing key. So the Android manifest provides really coarse permission checking,
basically all or nothing for some things. But if you need to check individual paths
of your code, there are more granular checks you can do in code. For instance, if you register a broadcast receiver in your code, there's a version of the API that specifies what permission the caller must have before they're allowed to invoke that component.
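The reason you check the caller's permission, and not just your own, can be shown with a toy model (these classes are made up for illustration; they are not the Android APIs):

```java
// Toy model: a registry plays the system server, and a deputy component
// must check the *calling* app's permission before acting on its behalf.
import java.util.Map;
import java.util.Set;

class PermissionRegistry {
    private final Map<String, Set<String>> granted;
    PermissionRegistry(Map<String, Set<String>> granted) { this.granted = granted; }
    boolean check(String app, String permission) {
        return granted.getOrDefault(app, Set.of()).contains(permission);
    }
}

class WifiDeputy {
    static final String SELF = "com.example.wificontrol";
    private final PermissionRegistry registry;
    WifiDeputy(PermissionRegistry registry) { this.registry = registry; }

    // Unsafe: acts on its own permission, so any caller succeeds.
    boolean enableP2pUnsafe(String caller) {
        return registry.check(SELF, "CHANGE_WIFI_STATE");
    }

    // Safe: the caller must hold the permission too.
    boolean enableP2pSafe(String caller) {
        return registry.check(caller, "CHANGE_WIFI_STATE")
                && registry.check(SELF, "CHANGE_WIFI_STATE");
    }
}
```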
If you're receiving a Binder call, say through your AIDL interface, you can use the enforceCallingPermission or checkCallingPermission methods to check whether the caller has the appropriate permissions. And this allows you to avoid the confused deputy problem. So what's a confused
deputy problem? So we have our Wi-Fi Manager here, which is inside the system server, which I call the strict sheriff. And say you want to make a Wi-Fi control app that enables Wi-Fi peer-to-peer mode, like we were discussing before. If your app asks the strict sheriff whether it's allowed to do that, it would be granted. If you installed an attacker app that requested no such permission and it asked the strict sheriff, it would be denied. But what happens if your app exposes a component that is part of the Wi-Fi control, and the attacker decides that he can ask your app? Well, the attacker app asks your app, your app asks the strict sheriff, and the sheriff grants it, because you asked for the permission during installation. Your app could then return some information back to the attacker app, which doesn't hold the correct permission, and your app has become the confused deputy: it's been deputized by the strict sheriff, and it's leaked information to the other app without the correct permission. >>Jon Larimer: So now I would like to talk
about protecting your app's data from people with physical access to the device. And I don't mean the users that installed your app; they generally have every right to access all the data that your app stores on the device on their behalf. I'm talking about people who maybe found or stole the phone from someone.
The android:debuggable attribute in the AndroidManifest.xml file turns on debugging for your app, and you want to make sure that this isn't enabled in release builds of your app. It's usually automatically disabled when you create a release APK, but if you've been messing around with the build process, it's possible to release an APK with it enabled, and we have seen apps that were released with the debuggable setting enabled. If it is enabled, someone with debug access to the device (if ADB and USB debugging are enabled) can actually run code with the same permissions as your app and read all of your app's data. Here you can see how easy it is: it takes a single shell command to run another shell as your app's user ID, giving access to the private secrets file that's full of secrets.
Something else that's worth mentioning is the android:allowBackup attribute, which isn't on this slide, but it specifies that your app allows private data to be backed up by the Android backup system. And in my opinion this is usually a pretty good thing. Users like being able to back up their data. But you need to keep in mind that someone with ADB access to the device can run adb backup and dump the backup files, all the app's private data, to get access to it. Like I said, it's usually a good thing in most cases, but if you have some extremely sensitive information that you wouldn't want someone who stole a phone to have access to, then you might not want to allow backups.
Now let's talk about storing data, and in
particular, storing data in files on the file system. So when you store personal data, or data that's protected by a permission, you need to make sure that you aren't leaking this data to other apps. The example I gave at the beginning of this talk was storing the latest GPS location in a world-readable file, but really, if there's any information that you retrieve from Android that requires a permission to access, or any data that the user enters into your app, you don't want it to be world-readable.
You also need to watch out when you're using external storage. In Android, the external storage, or the SD card, isn't protected by the same set of granular app permissions that private files get. It was designed to be used as shared storage, where any app can access it. In Jelly Bean we added the ability to protect external storage with a permission, but any app can request this permission. So this means that if your app is handling any personal or private data, you really shouldn't store it on external storage or the SD card without asking the user first and making sure that they're aware that any other app can
access this data. Another important thing I'd like to say is that you can't trust files
that any other app can write to. It's usually a bad idea to create a world writeable file anyway, but other apps can still get access to write to files that your app stores in external storage, right, the external storage permission allows this. And I have a few really big don'ts here. If you're using any code libraries or external libraries,
don't place the code anywhere that it can be written to by other apps. Another one is
don't write paths to code libraries in world writeable files because then an app can just
open that file, change the path to point at a library that they control, and then your app would end up loading that binary library into its own process space. It could be malicious, executing in the context of your app with all of your app's permissions, running as your app's user ID. And this has actually happened before in some
very popular apps. If you're going to be reading world writeable data, you shouldn't do it
with native code. Dalvik offers protection against memory corruption vulnerabilities
that are really hard to prevent in native code. It takes a lot of effort. And by memory
corruption vulnerabilities I mean the things like buffer overflows or integer overflows
that lead to arbitrary code execution. Here's an example of some good and some bad code
for handling private data files. And you can see here that it's pretty easy to make sure
that other apps can't read or write to your files. And it's actually fewer characters to type MODE_PRIVATE than MODE_WORLD_READABLE, so it actually takes a little bit more effort to write the app insecurely. And the default value is actually zero, so you could just use zero instead of Context.MODE_PRIVATE. So you need to put a little bit of extra effort into making your app insecure; that's one of the examples where it's actually easier to make a secure app, slightly, very slightly easier. So something else that you might consider to protect your files from being accessed is encryption, but Kenny would like to say a few words about that.
>>Kenny Root: So as Jon mentioned, permissions don't work on the SD card because
the file system used on the SD card for maximum compatibility with consumer electronic devices
like your camera, where you put the SD card in, doesn't support the concept of file owners.
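To make that concrete, here's a minimal sketch of the two cases (the file names are placeholders, and the Context comes from whatever component is doing the writing): internal storage gets per-app file ownership, while a file on external storage is readable by any app that can see the card.

```java
import java.io.FileOutputStream;
import java.io.IOException;
import android.content.Context;
import android.os.Environment;

public class StorageSketch {
    // Internal storage: the file is owned by your app's Linux user ID,
    // so other non-root apps cannot open it.
    static void writePrivately(Context context, byte[] data) throws IOException {
        FileOutputStream out = context.openFileOutput("secrets.txt", Context.MODE_PRIVATE);
        try {
            out.write(data);
        } finally {
            out.close();
        }
    }

    // External storage: the FAT filesystem has no concept of file owners,
    // so any app that can see the SD card can read (and write) this file.
    static void writeToSdCard(byte[] data) throws IOException {
        java.io.File shared = new java.io.File(
                Environment.getExternalStorageDirectory(), "shared.txt");
        FileOutputStream out = new FileOutputStream(shared);
        try {
            out.write(data);
        } finally {
            out.close();
        }
    }
}
```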
So you might think to yourself, I know, I'll just encrypt the file I put on the SD card,
then no bad guy will be able to read them. Well, the problem is there are some attacks, and one of them is the chosen-ciphertext attack. So what might happen is you encrypt your loginok=0 in this example, but if the bad guy knows that it's always going to be loginok=something, or he can guess kind of the pattern of the file, he can XOR with the data he thinks it is and then XOR with the data he wants it to be, and then when you later decrypt that file, it will become what he wants it to be. That's not a good idea. So a lot of this problem comes from trying to compose
your own cryptography primitives without really understanding the consequences and the pitfalls
you might run into. So it's a good idea just to use something that's been peer-reviewed
already, and one such library is Keyczar. Keyczar was written by the Google security
team, and it's made available under the Apache 2.0 license and it can handle encryption and
decryption, signing and verification of files without much guesswork as to how to actually
use it. For instance, on the slide there's a Keyczar tool you run to create a key pair,
the public and private part. Take that public part and put it in your raw resources and
later on in your application, after you've packaged up that public key, if you need to send data to your server later, you can encrypt your private data, you can store it away for a while if you want to, but later on you can send it up to your server and on the server use Keyczar to decrypt it. And it's really only two lines of code. And so it's a lot easier than instantiating a lot of the Java cryptography objects, and every API should be that simple, in my opinion. I'm working on it, but -- so just to show
how tricky crypto can be, in the 1970s Ron Rivest, Adi Shamir and Len Adleman took about 42 tries
to get RSA right. Those are the R, S, and A in RSA. And that means they got it wrong
41 times beforehand. And these guys are experts. They've been researching cryptography a long
time. So it just shows how easy it is to get things wrong. And more recently, NIST has
opened up the competition for the secure hash algorithm, Version 3 or SHA-3. 64 teams entered that, and immediately, cryptanalysts were attacking the algorithms. Some were broken immediately, some were deemed too slow, and some were just not secure enough for the competition. Now, in the third round, there are five candidates left, so that means 59 teams got some things wrong. And I just don't want to see anyone in this room end up in the same area. So just
use something like Keyczar for your cryptography. And just to paraphrase one of my favorite games, it's dangerous to go alone. Take this peer-reviewed cryptography
library with you. Now I'd like to talk about protecting the network traffic in your
app. Most apps will connect to the Internet for something. They might be transmitting
personal data, and there's nothing particularly wrong about this as long as you include that
privacy policy that Jon talked about. But there's a problem if you're transmitting it
unencrypted. Now, you can imagine a public Wi-Fi network. If someone's on a public Wi-Fi
network, anyone in the room can basically sniff the traffic on that public Wi-Fi network.
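One cheap habit that guards against exactly this kind of sniffing is refusing plain-http URLs before you ever open a connection. This is just a sketch, and the helper name and the example URL below are made up:

```java
import java.net.MalformedURLException;
import java.net.URL;

public final class RequireHttps {
    // Returns the parsed URL only if it uses https; throws otherwise.
    static URL requireHttps(String spec) {
        final URL url;
        try {
            url = new URL(spec);
        } catch (MalformedURLException e) {
            throw new IllegalArgumentException("bad URL: " + spec, e);
        }
        if (!"https".equalsIgnoreCase(url.getProtocol())) {
            throw new IllegalArgumentException("refusing non-TLS URL: " + spec);
        }
        return url;
    }
}
```

On Android you would then call openConnection() on the returned URL and cast the result to HttpsURLConnection.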
And even if you encrypt the public Wi-Fi network, there might be a problem in the network between
your server and your client. There could be an attacker in there that could mount what
is called a "man in the middle" attack. They can even inject data into your application
like they could intercept the connection and appear as if you're connecting to the server
but they're injecting malicious data into your application. So you can't even really
trust what's coming into your application even if you know that the local network's
encrypted. So what's the worst that can happen if someone hijacks a network stream? Well,
I think the best that could happen is that they see pictures of my neighbors' cats instead
of the images they're expecting to see. So I like my neighbors' cats, but you can imagine
something else happening like maybe they're injecting malicious data, maybe they're injecting
JavaScript into your WebView. It could be anything, really. If you've ever been to a
hacker conference like DEF CON or Black Hat in Las Vegas and tried to use a public Wi-Fi
you might have seen something -- it might be a little less tame than this but you'll
know what I mean. But in the worst case, your app could be completely compromised if it's
exposing some JavaScript interface in the WebView or whatever you're downloading. Luckily
it's pretty easy to enable encryption if you're using http in Android. Just make sure you use https instead of http. Just one little letter you put in there makes a whole
lot of difference. Of course you need to make sure your server is supporting https. But
that's really outside the scope of this talk and there's plenty of things on the Web you
can go and search on how to enable https or SSL on your server. Something else to keep
in mind is that you really shouldn't download code into your program. Android allows us
-- you can download a DEX file and actually load it into your applications process space.
But it's a really bad idea security-wise. So I really wouldn't recommend it. And remember
the cats from the last slide, imagine if, instead of the DEX file you intended, it's
some attacker's DEX file that completely compromises your application, and they'd be able to get access to all the data your app has generated or the permissions it has been granted by the user.
If you know what you're doing, you can use cryptographic signing to ensure that nothing
like this happens. As we saw on the previous slide, it's hard to get cryptographic signing
right. It's better to use Android's own updating mechanism and to provide the freshest data
to your applications. There's a good one in Google Play, by the way. By default, the https
stack in Android will validate SSL certificates against all the CAs included in the Android
platform. This is fine and it provides an adequate level of security by placing your trust in the trustworthiness of all the CAs, which, I don't know if you guys have read the news, but there were a few break-ins at CAs. And when a bad guy breaks into a trusted
CA and generates a rogue certificate for a site, if that rogue certificate is used, the
user won't be able to tell because it actually validates correctly. But breaking into a CA
and generating rogue certificates isn't something that a hobbyist hacker or a script kiddie
is able to do, these guys are really serious. And they're usually not after money, they're
after information. So if your app is used by people in particularly sensitive areas
and you want to protect your app a little bit more, you might want to consider implementing
certificate pinning. So with certificate pinning, if you know some CA or intermediate certificate in a chain that will never change, you can place that certificate or a group of certificates in a pin set and trust only that group instead of everything in the default list. And what's even better is if you control the software
which runs on the client, which you obviously do, and the server as well, you can generate
your own CA. So then you can save a little bit of money if you're strapped for cash.
But, of course, if you generate your own CA, there's some protection you have to put in
place. Like I mentioned before, a big corporation might have an HSM for signing capabilities. Typically, they have a policy like they only take it out at certain times of the year.
But you should really treat it the same as your application signing key. Make multiple
copies, keep a backup and keep a backup and keep a backup.
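As a rough sketch of the pinning idea (the names here are mine, not from the talk), you build a TrustManagerFactory around only the KeyStore you ship, instead of the platform's full CA list. Note the stand-in: a real app would load ONLY its own bundled certificates into the pin store; to keep this runnable we load the JDK's default cacerts file.

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public final class PinnedTls {
    // Stand-in for loading your pin set. A real app would load only its own
    // bundled certificate(s), e.g. from a raw resource; here we read the
    // JDK's default cacerts file just so the sketch is self-contained.
    static KeyStore loadPinnedStore() {
        try {
            KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
            InputStream in = new FileInputStream(
                    System.getProperty("java.home") + "/lib/security/cacerts");
            try {
                ks.load(in, null);
            } finally {
                in.close();
            }
            return ks;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Build an SSLContext whose trust decisions come only from the given
    // store, instead of every CA the platform ships with.
    static SSLContext contextPinnedTo(KeyStore pinnedCerts) {
        try {
            TrustManagerFactory tmf = TrustManagerFactory.getInstance(
                    TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(pinnedCerts);
            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, tmf.getTrustManagers(), null);
            return ctx;
        } catch (Exception e) {
            throw new RuntimeException("TLS setup failed", e);
        }
    }
}
```

On Android you would hand ctx.getSocketFactory() to your connection via HttpsURLConnection.setSSLSocketFactory().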
But if you want to find out more about how to actually pin your certificates, you can check the Android documentation for HttpsURLConnection. There's some example code
in there. So now I would like to give you some advice for using WebView. And by default,
all JavaScript is disabled in WebView, and this is good. When you enable JavaScript,
you open up your application to attacks like cross-site scripting, cross-site request forgery
and people could write heap spray exploits for WebKit bugs. There's all sorts of bad things that can happen if JavaScript in WebView is enabled. We don't have time to get deep
into those specific attacks or how to block those attacks today. It's beyond the scope
of this talk and we just don't have time. Keep in mind, you should keep JavaScript disabled
in your WebView unless there is some fundamental reason your app requires it. Because if an
attacker can find a way to inject arbitrary JavaScript into your application either through
man-in-the-middle attack as Kenny was talking about, or even compromising your web server,
they could end up running code in the context of your app. This is especially dangerous
if your WebView exports a JavaScript interface. And the addJavascriptInterface method is really dangerous. What this call does is expose your app's functionality through JavaScript.
For example, you could write a method that reads data from a file in your app's private
storage and post it back to your server. And then you can make this method available to
JavaScript that's running in the WebView. If an attacker has any way to control which
JavaScript is running in the WebView, and your code -- say your code lets the JavaScript read any file instead of a specific file, like the file name is an argument to the method, then you could have a major information leak on your hands. So if you expose
any protected data or any phone capabilities through JavaScript, like I said a web site
compromise or man-in-the-middle attack could end up executing arbitrary code. There are some legitimate uses for addJavascriptInterface; that's why we put it in there.
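If you do need it, keep the exposed surface tiny. A hedged sketch -- the class, method, object name, and URL here are all made up, and the WebView calls are shown as comments since they belong in an Activity:

```java
// A deliberately tiny bridge object: one read-only value, no file access,
// no reflection, nothing protected by a permission.
public class ScoreBridge {
    private final int lastScore;

    public ScoreBridge(int lastScore) {
        this.lastScore = lastScore;
    }

    public int getLastScore() {
        return lastScore;
    }
}

// In your Activity, and only for pages fetched over https:
//   WebView webView = (WebView) findViewById(R.id.webview);
//   webView.getSettings().setJavaScriptEnabled(true);
//   webView.addJavascriptInterface(new ScoreBridge(score), "game");
//   webView.loadUrl("https://example.com/scoreboard");
// Page JavaScript can then call window.game.getLastScore() -- and nothing else.
// Keep in mind that addJavascriptInterface also exposes inherited public
// methods like getClass(), so keep bridge objects minimal and treat any
// page JavaScript as untrusted.
```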
It is useful sometimes. If you do use it, make sure you are grabbing the JavaScript
over https. Make sure it is an encrypted and authenticated connection. Don't expose anything
that's protected by a permission. Make sure that the -- any information you do expose
is extremely limited. So for example, don't provide a generic interface for reading and
writing files to JavaScript and don't expose direct access to content providers, and definitely
don't expose access to, like, reflection. So you really want to keep the potential attack
surface as small as possible. Something else that we like to talk about
is minimizing the permissions that your app requests. And you already know that requesting
too many unnecessary permissions can get you a lot of bad reviews from users. People complain
on the forums. That's not the only problem with requesting too many permissions in your
apps. Recently a group of university researchers found that one third of the apps that they tested requested more permissions than they needed. This means that, based on the code that was in the apps, they requested permissions that no code actually used.
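One way to audit this in your own app is to dump what the manifest actually requests and compare it against what your code uses. A sketch (the log tag is arbitrary), for use inside an Activity or other Context:

```java
// Log every permission this app's manifest requests.
try {
    PackageInfo info = getPackageManager().getPackageInfo(
            getPackageName(), PackageManager.GET_PERMISSIONS);
    if (info.requestedPermissions != null) {
        for (String permission : info.requestedPermissions) {
            Log.d("PermissionAudit", permission);
        }
    }
} catch (PackageManager.NameNotFoundException e) {
    // Cannot happen for our own package name.
}
```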
So there's extra permissions floating around that weren't really necessary. And this is
a problem because if there's a security vulnerability in the app, if an app can run arbitrary code
in the context of the app, using one of the vulnerabilities that we already talked about,
they can take advantage of any of the permissions that you request. So your app might not be
using them, but a malicious app could. So I'd like to show you a few examples of ways
that your app can perform common tasks without requesting permissions. And these are things
that most people think actually require permission and I've seen some of this very similar code
in apps and they request a permission that they don't need. So when I say that you can
do these things without requesting permission, I don't mean that you can, like, do these
things like access data secretly behind the user's back without permission. The user is
fully aware of what's going on. They are prompted to select the piece of information to share
with your app. So in the first example, you'll see how you don't need to request a permission
yourself if you call an activity that already has the permission that your app needs. And
in the second example, you will see how content providers can grant temporary permissions
for your app to access some data. So here's one example. If you want to let the user take
a picture with a camera and then read that image data in your app, you don't need the
camera permission for your app. You can just launch the camera system activity to let the
user take a picture and then return control to your own app. So to make this happen, you
fire off an ACTION_IMAGE_CAPTURE intent and specify whatever output file name you want. You can use startActivityForResult to make sure that your app gets called back with the data whenever the activity is complete. So in this case, after the user takes the picture, your onActivityResult callback in your class gets the file path and you can
read the file. So from the user's perspective and from a security and privacy perspective,
this is a lot safer than granting the camera permission to the app and writing a bunch of code to interface with the camera API. The user gets to pick the photo that is sent to the app. The app can't read camera data without actually opening the camera and the user seeing it there. There's no reason to request the camera permission unless you are
actually writing a camera app yourself. Another example is letting the user send an
SMS message. Instead of requesting the SEND_SMS permission and going through all of the effort of talking to the telephony manager to send an SMS message, you can create an intent and start an activity to launch the system SMS app with the message and number that you choose. This way the user gets to see the destination phone number and message.
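A sketch of that pattern, for use inside an Activity. The number and message body are placeholders, and the "sms_body" extra is a convention honored by the stock Messaging app rather than a formal API guarantee:

```java
// Hand the message off to the user's SMS app.
// No SEND_SMS permission is requested; the user sees and confirms everything.
Intent sendIntent = new Intent(Intent.ACTION_SENDTO, Uri.parse("smsto:5551234567"));
sendIntent.putExtra("sms_body", "Try this app I found!");
startActivity(sendIntent);
```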
They will have an option of either sending it or declining to send it if they aren't
comfortable with it. Or they can change the message. Obviously, this won't work for every app that has SMS features. But if SMS is just a minor part of your app,
for example if you want to let the user share a link to your app to one of their contacts
through SMS, letting them send the message this way makes a lot of sense. You don't have
to worry about freaking the user out by requiring the SEND_SMS permission if that's the only feature
you will be using it for. Something else that's pretty neat is that
content providers can grant temporary permissions to apps. An app can use ACTION_PICK or ACTION_GET_CONTENT to have the system launch an activity that lets the user pick the information
they want to share with your apps. Here's an example. Let's say you want to get some
contact information from the user. So instead of requiring the READ_CONTACTS permission and spending a bunch of time writing a new widget to let users scroll through the contacts and pick the ones they want, you can use ACTION_GET_CONTENT with the MIME type of Phone.CONTENT_ITEM_TYPE, and the Android system will pop up a contact chooser that lets the user pick which contact they want. When the user makes their choice, the onActivityResult callback for your class is called. Then you can use a ContentResolver to read the data that you want. This works because the contacts content provider grants a temporary URI-based permission, which we
really didn't get into, to your app for this one piece of information. Your app can read
this contact but it can't read any others and they don't have access to read the full
list and the user picks the contact themself. So I think users will feel much better about
this privacy and security-wise than just granting blanket access to your app to read the entire
contact list. Another thing I want to talk about minimizing
permissions is identifying app installations. And this is a pretty common problem. There
are a lot of reasons why you'd want to identify unique installations, whether it be for licensing
or tracking game scores or advertising or even just getting an accurate count of the
active users of your app. People seem to be inclined to use getDeviceId for this, or the SIM serial number, or some hardware-based ID. The hardware IDs aren't great values for tracking installs. The device ID isn't really reliable on some devices, and on some custom ROMs it's just set to a fixed value. There was a very popular phone that shipped
that's still widely used today that had a fixed device ID on every single phone. There
was also the privacy implication that the user can't change this value without buying
a new phone. If someone decides to do a factory reset on their device and sell the phone to
someone else, and then the buyer also purchases your app, they can see some weird stuff if
your app is associating that phone with another user. You don't want the new owner of the
phone to see all of the old user's personal data if you are storing it keyed on the device
ID. So you could use the Android ID to track installs. It doesn't require permission and
it does get changed on a device wipe. But this really still isn't ideal. My favorite method, security and privacy-wise, is just to generate a UUID, a universally unique identifier,
when the app starts up for the first time and then store it in the shared preferences
file. Then you can use the Android backup system so the shared preference file for your
app is synced up to the Cloud. That way the user can actually wipe the ID if they want.
They can go to the system settings and clear the app's data. If they feel like they need
to do a factory reset of the phone, they can do a backup, factory reset their phone, resync the device back to the Cloud, and your ID will still be there, and your app will still see the same data. This is really the best of both worlds. The value is persistent across wipes if the user wants it to be, or they can erase the value without doing a full factory reset. So it ends up working a lot like a web browser cookie.
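That scheme is only a few lines. Here's a sketch of the core logic, with a plain Map standing in for SharedPreferences so it's self-contained (the key name is made up); on a device you'd back it with getSharedPreferences() so the Android backup system can sync it:

```java
import java.util.Map;
import java.util.UUID;

public final class InstallId {
    static final String KEY = "install_id"; // hypothetical preference key

    // Returns the stored ID, generating and persisting one on first launch.
    static String getOrCreate(Map<String, String> prefs) {
        String id = prefs.get(KEY);
        if (id == null) {
            id = UUID.randomUUID().toString(); // fresh per install (or after a wipe)
            prefs.put(KEY, id);
        }
        return id;
    }
}
```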
So now let's talk a bit about device administrator access. The Android device administration API lets anyone develop and publish apps that can control the administrative security features of the device. Things like requiring a lock screen with a complex password or requiring
encryption, disabling the camera or doing something fun like wiping the device. So if
your app does request device administrator privileges, it works a little differently
than the normal permission system. Users can actually revoke device administration privileges.
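Since users can revoke admin access at any time, your code should check before using a policy. A hedged sketch (the receiver class name is made up, and any policy you use must also be declared in the admin's XML policy file):

```java
// Inside an Activity or Service of an app that declares a device admin receiver.
DevicePolicyManager dpm =
        (DevicePolicyManager) getSystemService(Context.DEVICE_POLICY_SERVICE);
ComponentName admin = new ComponentName(this, MyAdminReceiver.class);

// Only use a policy if the user has actually granted (and not revoked) admin.
if (dpm.isAdminActive(admin)) {
    // Granular: this app may enforce password length...
    dpm.setPasswordMinimumLength(admin, 8);
    // ...without ever holding the far more dangerous wipe-data policy.
}
```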
And the privileges are also granted on a more granular basis. An app can ask for the ability
to enforce the password length or password complexity but still not be able to wipe the
device. And one issue we have seen in the past is that people who install apps that require device administration access can't uninstall those apps unless device administration is disabled. In Android versions up to Gingerbread, there wasn't really a clear error here. It would just say uninstall failed. The user would have no idea why they couldn't uninstall these apps, and they'd think it's malware. But in Ice Cream Sandwich and up, the user has the option
of going directly to the settings to disable the app. So if you're using device administrator
access in a normal app and not like an actual enterprise mobile device management solution,
for example if you're developing a lock screen or a replacement for the default launcher, make sure that your users are aware of the power that they are giving your app. Make sure that they know that they have to disable device administrator privileges to uninstall your app. And I have been saying the whole time that you need to be really careful with
security but you should be really careful if your app has access to any of the more
dangerous device administrator privileges because accidentally leaking someone's private
notes or their email address is one thing. But letting some other app wipe the device
without permission is a whole other problem and you really don't want to have to deal
with that. So I don't know if you guys have seen it,
but we recently started including a tool called Android Lint with the Android SDK. It's been there since around version 16 of the SDK tools. And what it does is
it scans your Android project source code, the Java and the XML files, for potential
bugs. Besides normal bugs like bugs that can crash your program or performance issues,
Lint actually checks for several potential security issues that we talked about today,
too. And Lint is integrated with Eclipse, so if it finds an issue, it will generate
a warning and point the problem out to you with the yellow squiggly line that you probably can't see on the projector, under MODE_WORLD_WRITEABLE. When you do see this warning, you
can hover the mouse over it and click a menu option to get more detailed information. Here
you can see we are urging you to carefully review the code that's creating a world writable
file. It tells you not to write any private data to the file. It lets you know that if the file is modified by malicious code, it can compromise your entire application. Besides
looking for world readable and world writable files, Lint also checks for exported application
components that can be accessed without permission. It's not always a vulnerability to export
a service or broadcast receiver or a content provider without permission. Sometimes that's
exactly what you're intending to do, but Lint is there to make sure that you know that you
are doing it. So when you see these Lint security warnings, whatever you do, don't ignore them.
You can suppress the warnings with an annotation; it's just a menu option and it adds the code automatically. But you should really only do that once you understand the security issue that is causing the warning and are fully aware of the implications of what you are doing. You could be making a huge mistake by ignoring a security warning from Lint.
And we're also adding new security checks. So make sure you are always using the latest
build of the SDK tools. The SDK tools are released independently of the Android operating
system, so it's updated a lot more frequently. It's also developed mostly in open source.
So you can check out the source for Lint. If you have ideas for more checks, you can
feel free to add them. We will put them in the source code. So definitely check that
out if you haven't seen it. And we do have a lot of documentation on the
Android developer site and Android open source project site. There's some other good references
out there, too. The Android security overview site describes the security features in Android
and how they are implemented. You should definitely check that out to get a deeper understanding
of Android's security architecture and some of the underlying features of the operating
system that provide the basis for the Android permission model and the various security
features the operating system provides. The designing for security page tells you how
to write apps with security in mind. This is probably the most important document that
you need to read to be able to design and develop your app securely. And on that page
you will find a lot of the same information that we talked about today. How to avoid world readable and world writable files. How to protect your application components, and how to encrypt network communications. So definitely take a look at that if you haven't seen it. It's
a relatively new document. We just put it up a few months ago or late last year. I'm
not sure how many people have seen it. Then there's the documentation on the security
and permissions in Android. If you do decide to export some of your application components
and require permission, definitely read the permissions documentation very thoroughly.
It covers all aspects of the permission system and there's a lot of good stuff that you need
to know. And something else you might want to check
out is a book by Jeff Six called Application Security for the Android Platform. It's short, a little over 100 pages, maybe 115 pages. It's still very thorough. We would like to think that the short length of this book is a testament to how easy it is to write secure apps in Android.
>>Kenny Root: Thanks everyone. If you have
any questions about the Android platform security features or general discussion, there is a
group called Android Security Discuss on Google Groups. If you want to report a security vulnerability
in the Android platform or any Google apps, you can send email to security@android.com.
And we will probably be doing a Google hangout -- Google+ hangout with the Android developer
office hours in August sometime. So if you can't think of your question right now, we
will be hanging out with Reto and the other guys on the Google+ hangout. We only have
about three minutes left. We might be able to take a few questions. But if you line up
at the microphones in the aisle, we will answer your questions.
[ Applause ] >>> Can you tell us what exactly the device
encryption setting does and also how does rooting affect security?
>>Kenny Root: So device encryption, you mean the disk encryption? Whole device encryption?
>>> Yes, in the settings. >>Kenny Root: What it does, it basically turns
on -- at a low level, it uses the Linux kernel's dm-crypt to encrypt the entire /data partition. It basically encrypts it in place. What was the other part -- how does rooting
-- >>> Rooting.
>>Kenny Root: How does rooting affect security? >>> My concern is if somebody is rooting the
phone, does an app have access to your data even if you have that encryption option turned
on? >>Kenny Root: It depends on what it does exactly.
Basically when your device is running, it has to access the data. So if you -- if there's
an application that runs, it's going to be able to access the data because it is unencrypted
to the operating system. If it was totally encrypted for everything, the device would
be a brick, basically. If you have root permissions, you can read some other applications' data.
>>> So you guys have this ability to request a different context in the SDK. Can you explain
how that works with your security model and whether or not another application can request
your resource bundle. >>Kenny Root: Depends on if you have what's
called forward locking. So basically you can request resources from other applications. This is what allows third-party launchers to work. So if you wanted to make a launcher that loads up an icon or other resources, you have to have those available for other third-party launchers to be able to use. That's why it's there. You can also do things like create add-on processes where you just install an application that doesn't really have any code,
but it can have resources that you can load. So you can enable the forward locking but
I think they took it out of the developer console for Market.
>>> Just to be clear, does that mean other applications can access my resources?
>>Kenny Root: Depends on where the resources are. But, yeah.
>>> Is there more guidance? The documentation online leaves something to be desired.
>>Kenny Root: Yeah, I don't know exactly where it would be. But I'll look it up and get back
to you. >>> Okay, thanks.
>>> Are you guys familiar with SQLCipher for Android and if so, what are your impressions?
>>Kenny Root: I'm not familiar with it. >>Jon Larimer: I'm not familiar.
>>> It is a drop-in replacement for SQLite, identical API, so you just use different imports
for SQLite database. It provides full encryption of the database.
>>Kenny Root: I haven't heard of it. >>Jon Larimer: I don't know it.
>>> It's good. >>Jon Larimer: Somebody says it's good.
>>> It adds a bit to the package size, but that's the cost of having it be secure.
>>Kenny Root: So it adds to the package size, at the cost of users being mad that it takes up too much room.
>>> Posting on the Motorola forum, something
about log -- >>Kenny Root: I think we have time for one
more question. We have 20 seconds. >>> It's my understanding that a factory data reset doesn't actually delete the user's data or do any sort of encryption or anything like that. I've heard reports of people doing a factory data reset with the impression that everything would be completely gone, and then someone buying their phone who knew what they were
doing. >>Kenny Root: On some phones, if you have an SD card, it won't wipe the SD card, and a lot of people take it out when they sell the phone. On phones that have USB storage, no removable SD card,
there's an option that says delete everything, including photos.
>>> Even if you -- even the data that it's supposedly deleting, it's actually just marking.
>>Kenny Root: If you are really paranoid, you can always enable whole-disk encryption. Basically, that throws away the keys. >>> I have my phone encrypted. My main question
is really -- since it is not really getting rid of that data, are there any steps that we're supposed to be taking to make sure that, if someone finds themselves in that situation,
the data in our apps is not accessible. >>Kenny Root: If you enable whole-disk encryption, when you do a factory reset, it is getting rid of the data.
>>Jon Larimer: We are out of time for questions. But we will hang out up here on stage and
you guys can come up and talk to us. [ Applause ]