How to secure Android apps: Dos and don’ts webinar

Presented on July 14, 2016

Yesterday we hosted an inquisitive bunch for our “How to secure Android apps: Dos and don’ts” webinar with NowSecure Mobile Security Researcher Jake Van Dyke. The audience submitted a lot of great questions. If you missed it, watch a recording of the webinar or take a look at the slides here. Below you’ll find answers to the questions asked during the webinar, including some we didn’t have time to address live.

Questions and answers

What security-specific static code analysis tools would you recommend?

I’m not aware of many strictly security-focused static analysis tools. Whenever we’re doing assessments I’ll use JEB, a Java decompiler. It’s pricey but does a good job. We’ve also used Apktool, which gets you back to your Dalvik code, though it won’t get you all the way back to your Java. There’s also Radare, which is a pretty good disassembler if you’re dealing with native code, but it’s pretty heavy on the command line or terminal. If you’re more of a point-and-click GUI type of person, IDA Pro is really nice. But it’s also pricey and not something the average Joe is going to have access to. That’s a list of most of the ones I use for static analysis. (NowSecure also offers mobile app security testing solutions that perform both static and dynamic analysis, and our automated capability can provide results in minutes. Our testing solutions also integrate throughout your development process so that you can find and fix vulnerabilities when it’s less expensive and before they harm your organization or users. Read more about our mobile app security testing solution on our website.)

We already do a root check in our app (checking for su, BusyBox, and rooting apps). What else could we check for to indicate root?

If you’re going that route you’re probably going to end up in a cat-and-mouse game. You could check for the presence of specific files or packages. If you follow SuperSU, it recently switched to what it calls “systemless” root, where it no longer writes any files to the system partition; all its files moved to the RAM disk. For a short while, no apps that were detecting root knew the new paths to check for, so they returned a false negative. Then those apps started getting updated to look for the files in the new locations. If you’re doing that, you’re going to be investing a lot of time: root apps get updated with new paths, you start looking for those new paths, rinse and repeat. There’s something Google has come out with called SafetyNet. It does the root checking for you, and I believe it also checks for Xposed. Your app uses its APIs and basically asks, “Is this device safe?” and gets back a response. Google also keeps updating it, so you won’t have to update your app; you’ll automatically get the new checks they come out with. They did a pretty good job explaining some of SafetyNet’s benefits. For example, they’ve got known hashes of all the files that are supposed to be on many different phones, and they can compare those with the phone they’re running on and look for changes. They have already put in a lot of work, and it would be kind of a waste to not take advantage of it.
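To make the file-based approach concrete, here is a minimal sketch of the kind of path check described above. The paths listed are common illustrative examples only; rooting tools move these files around over time, which is exactly the cat-and-mouse problem mentioned above.

```java
import java.io.File;

public class RootChecks {
    // Common locations where an su binary or superuser package may appear on
    // rooted devices. This list is illustrative, not exhaustive or current.
    private static final String[] SU_PATHS = {
        "/system/bin/su",
        "/system/xbin/su",
        "/sbin/su",
        "/su/bin/su",
        "/system/app/Superuser.apk"
    };

    /** Returns true if any well-known su binary or superuser package file exists. */
    public static boolean hasKnownRootFiles() {
        for (String path : SU_PATHS) {
            if (new File(path).exists()) {
                return true;
            }
        }
        return false;
    }
}
```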

What would you recommend as a mitigation for app integrity testing?

This question seems to be about anti-tamper techniques. Your app should perform integrity testing on itself. A common response to detecting tampering is the app immediately killing itself (that’s the one I use personally). Or, if you really care, you can have the app send some sort of status report to your server that informs you that somebody somewhere has been modifying the app. But that should also stop execution: you don’t want the app to try to log in to the user’s bank account if you know the app has been modified. Some developers are more subtle. We see this with some games: if the app detects that a player has hacked or pirated the game, it makes a particular level impossible to complete.
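As a minimal sketch of the “report, then stop execution” response described above: the reporting call here is a hypothetical hook for whatever backend API you use, and the kill logic is the simplest possible version.

```java
import android.os.Process;

public class TamperResponse {
    /**
     * Called when an integrity check fails. Optionally notify your backend,
     * then stop execution so the modified app cannot keep running.
     */
    public static void onTamperDetected(boolean reportFirst) {
        if (reportFirst) {
            reportTamperToServer();
        }
        // Kill the app immediately -- the simplest response mentioned above.
        Process.killProcess(Process.myPid());
        System.exit(0);
    }

    // Placeholder for your own reporting mechanism (implementation specific).
    private static void reportTamperToServer() {
        // e.g., POST a status report to your server before exiting.
    }
}
```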

How do I figure out whether the libraries and frameworks I use in my app, or am thinking about using, are secure? Are there resources available for that sort of thing?

Every library you look at is going to be a little bit different. Some libraries are open source and you can inspect the source code. You can visit the library’s GitHub page, or wherever it’s hosted, and look at open and closed issues to see whether anything is security related. You might get a good feeling for whether the developers are responsive and security savvy, or whether they simply threw the code up, don’t maintain it, and ignore issues. You can also try to contact the developers; the good ones will have a dedicated security email address so that if you happen to find an issue it’s easy to report it to them. You should also look at the library’s version history. If you look at release dates and see that the code hasn’t been updated since the 1900s, it might not be something that you want to put in your app. By far the best method, if it’s open source, is to read the source code. If it looks like a terrible snake’s nest that’s uncommented and really ugly, then they might not have tried too hard to keep it secure. These are things you should do before you even consider bringing that code into your project.

Some developers hard-code their keys and make the code available on GitHub. Isn’t hardcoding keys in the code a vulnerability, since those keys could be used maliciously by any cracker?

I guess it really depends on what kind of keys you’re using. You definitely don’t want to store your Amazon AWS credentials in code and then make that open source. We’ve already seen people scan all the projects on GitHub, pull off a whole bunch of AWS credentials, and abuse them. But even beyond those, you should avoid putting AES keys, passwords, or HTTP auth tokens directly in your code. Doing that up front might be easier for a programmer, but it’s not the most secure thing to do. A better option in some cases is to have the user type in a password and then derive your key from that. In that scenario, the key isn’t stored in the source code or on the file system, and it’s only available after the user types their own password or PIN.
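Here’s a minimal sketch of deriving an AES key from a user-entered password with PBKDF2 instead of hardcoding one; the iteration count and salt size are illustrative values you would tune for your own app.

```java
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class KeyDerivation {
    /**
     * Derive a 256-bit AES key from a user-supplied password with PBKDF2,
     * so no key material lives in the source code or on disk.
     */
    public static SecretKeySpec deriveKey(char[] password, byte[] salt) throws Exception {
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 256);
        byte[] keyBytes = factory.generateSecret(spec).getEncoded();
        spec.clearPassword(); // wipe the password from the spec when done
        return new SecretKeySpec(keyBytes, "AES");
    }

    /** Generate a random salt to store alongside the ciphertext (the salt is not secret). */
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }
}
```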

Can you elaborate on anti-tampering techniques?

For Android, one of the common things to do is use the package manager. You can query your own package, retrieve the digital signature, and then compare that to what you think the signature should be. Hopefully you’ve got the keystore you sign your app with tucked away on your development machine where no one else has access to it. So whenever somebody modifies your app they’re going to have to sign it with their own key, and if you check the signature at runtime you’ll be able to tell right away that something’s up. You can also hash your package, say with SHA-1 or SHA-256, and query a web server to verify it, hopefully doing that securely so you don’t get man-in-the-middled. One downside with this approach is that your app relies on the Internet, meaning that if the user doesn’t have service then that method isn’t going to work. As you’re doing these things, you want to tuck them away somewhere in your code and hide them a little bit. Because if you make it painfully obvious, or you only check one time, then whoever’s modifying your app can remove your anti-tamper checks as well. If you pepper the checks throughout your app and make it more work for the attacker, then hopefully they won’t catch them all.
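Below is a minimal sketch of the package-manager signature check described above. EXPECTED_SIGNATURE_SHA256 is a placeholder you would fill in with the hash of your own release certificate, and GET_SIGNATURES is used here as it was the standard flag at the time of this webinar.

```java
import android.content.Context;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.content.pm.Signature;
import java.security.MessageDigest;

public class SignatureCheck {
    // Hex-encoded SHA-256 of your release signing certificate (placeholder).
    private static final String EXPECTED_SIGNATURE_SHA256 = "replace-with-your-cert-hash";

    /** Returns true if the running package is signed with the expected certificate. */
    public static boolean signatureMatches(Context context) {
        try {
            PackageInfo info = context.getPackageManager().getPackageInfo(
                    context.getPackageName(), PackageManager.GET_SIGNATURES);
            for (Signature sig : info.signatures) {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                byte[] digest = md.digest(sig.toByteArray());
                if (toHex(digest).equals(EXPECTED_SIGNATURE_SHA256)) {
                    return true;
                }
            }
        } catch (Exception e) {
            // Treat any failure as a mismatch rather than silently passing.
        }
        return false;
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```

As noted above, calling a check like this from a single obvious place makes it easy to strip out; scattering the check across the app raises the cost for an attacker.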

How does NowSecure decide what best practices to include?

The best practices all come from our research and our services teams. We see and assess a lot of apps, whether for our customers or just general apps we come across in the app stores and are interested in for whatever reason. As a result, we see a lot of these mistakes being made in the wild. We bounce those around internally and think about them from the developer’s perspective to figure out how developers can avoid them. Then we discuss them, agree on what to include in our best practices, and add it.

I am developing an open source intelligence app for a law enforcement agency. What are the security features I should keep in mind so that someone doesn’t exploit it and use it for malicious purposes?

Without more specific details about the app and its functionality, it’s difficult to provide specific advice. In general though, you should follow the NowSecure Secure Mobile Development Best Practices, and thoroughly test the app before you ship it to customers.

When should a firm focus on application security while they scale up from a startup to an enterprise?

They should focus on security from the beginning. The earlier you can identify security issues and fix them, the less expensive it will be to do so. If at some point you experience explosive growth, failing to address what seemed like a minor security issue in the beginning can quickly become a catastrophe at scale.