Many mobile app developers with the best of intentions have rushed COVID-19 apps to Google Play and the App Store to assist with contact tracing, symptom diagnosis and outbreak maps. But in the rush to get apps that can help fight the pandemic out to the public, some security and privacy vulnerabilities went undetected prior to release.
For example, Threatpost reports that the Colombian government released a mobile app called CoronApp-Colombia to track potential symptoms. However, the app uses HTTP instead of HTTPS for network communications, which potentially compromises user data. And an Iranian app distributed through the CafeBazaar store requests permission to access a user’s location, camera, Internet data and system information, and to write to external storage.
While organizations developing COVID-19 apps may have mature app development practices, the sensitive nature of healthcare information creates unique security and privacy challenges. I urge mobile app developers and security analysts to heed the following advice to avoid the security, privacy and compliance issues I’ve seen regularly over the years. (For additional advice, see our COVID-19 Mobile AppDev Security & Privacy Checklist post).
Use HTTPS for Network Communications
As seen in a recent version of the CoronApp-Colombia app, developers may miss basic encryption of data in transit. These apps collect and share highly confidential data, so all requests must be performed over HTTPS rather than HTTP. Also use certificate validation to ensure the authenticity of the server. Going forward, you may want to implement certificate pinning, but the work entailed could delay a first release.
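One cheap defense-in-depth measure is to reject insecure URLs in code before any request is sent. A minimal sketch in Python (the helper name `assert_https` is hypothetical, not from any particular library):

```python
from urllib.parse import urlparse

def assert_https(url: str) -> str:
    """Reject any request URL that does not use HTTPS.

    Calling this guard before every network request catches cleartext
    HTTP endpoints in development rather than in the field.
    """
    if urlparse(url).scheme.lower() != "https":
        raise ValueError(f"insecure URL (must be HTTPS): {url}")
    return url

# Note: with common HTTP clients such as the `requests` library,
# certificate validation against the system trust store is on by
# default (verify=True); never disable it with verify=False.
```

The same guard-at-the-call-site idea applies on mobile, where the platform controls described below can enforce it globally.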
Also consider exploring network communications security controls such as Apple’s NSAppTransportSecurity and Android’s NetworkSecurityConfig, both of which provide a mechanism to enforce HTTPS use across the app.
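On Android, for instance, a minimal `res/xml/network_security_config.xml` that blocks all cleartext traffic app-wide looks like this (the file is referenced from the manifest via the `android:networkSecurityConfig` attribute):

```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <!-- Refuse plain-HTTP connections everywhere in the app -->
    <base-config cleartextTrafficPermitted="false" />
</network-security-config>
```

Apple’s equivalent is the NSAppTransportSecurity dictionary in Info.plist, where HTTPS is enforced by default unless exceptions are added.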
These apps need a way to track users without relying on standard direct authentication, such as an email address and password. As stated before, a successful COVID-19 tracking app must provide its features while allowing users to remain anonymous. There are several ways to set up an anonymous authentication scheme, such as using an API like Firebase Anonymous Authentication. If the app’s authentication relies on a device token, avoid using a non-resettable device value. If such a value is ever leaked, it could allow an attacker to link user data to a real person, a major privacy violation.
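The essence of a resettable anonymous identity is a random value generated on first launch and stored only in app-private storage. A server-side-style sketch in Python of that pattern (the function name and file path are illustrative, not from any SDK):

```python
import json
import os
import uuid

def get_install_id(path: str = "install_id.json") -> str:
    """Return a per-install identifier for anonymous authentication.

    The ID is a random UUID generated on first launch and persisted in
    app-private storage. Unlike hardware identifiers (IMEI, MAC address),
    it carries no link to the physical device and is fully resettable:
    deleting the file, as happens on app reinstall, issues a fresh identity.
    """
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["install_id"]
    install_id = str(uuid.uuid4())
    with open(path, "w") as f:
        json.dump({"install_id": install_id}, f)
    return install_id
```

Because the value is random, leaking it reveals nothing about the person behind the device.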
If the app has to validate against some identifier on the client side, look at `identifierForVendor`, part of Apple’s UIKit, and Android’s GUID or instance ID. Both of these values are unique to the device on which the app is installed.
Protect Sensitive API Requests
Users will rely on COVID-19 tracking apps to provide meaningful, API-driven notifications that assist with situational awareness. The challenge will be differentiating between real users and fake ones. Depending on the level of controls built into the API, an attacker could set up an enrollment bot and use it to pollute tracking data, creating false alarms that erode users’ trust in the alert system.
An API like Google’s SafetyNet reCAPTCHA can aid bot detection in your app. In general, technologies like reCAPTCHA force app enrollees to demonstrate that they are human. Adding this step makes automated attacks more difficult.
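On the backend, the token the app obtains from reCAPTCHA is POSTed to Google’s verification endpoint (`https://www.google.com/recaptcha/api/siteverify`), and enrollment proceeds only if verification succeeds. A sketch in Python of interpreting that endpoint’s JSON response (the function name is illustrative; network transport is omitted):

```python
import json

def is_human(siteverify_body: str) -> bool:
    """Interpret the JSON body returned by the reCAPTCHA siteverify
    endpoint. Only a well-formed response with "success": true should
    allow enrollment to complete; anything else is treated as a bot.
    """
    try:
        data = json.loads(siteverify_body)
    except ValueError:
        return False  # malformed response: fail closed
    return data.get("success") is True
```

Failing closed on malformed responses matters here: an attacker probing the enrollment API should never be granted the benefit of the doubt.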
Another control to put in place is an authorization scheme. Any API endpoint that lacks authorization checks is potentially accessible to anyone who discovers it. For apps that handle user locations and health status, this protection is paramount.
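The principle is that no handler touching sensitive data runs before the caller’s credentials are checked. A minimal sketch in Python using a decorator (the token store, `Unauthorized` exception and endpoint name are all hypothetical; a real backend would verify signed tokens such as JWTs rather than look up raw strings):

```python
from functools import wraps

VALID_TOKENS = {"token-abc"}  # placeholder; verify signed tokens in production

class Unauthorized(Exception):
    """Raised when a request carries no valid credential."""

def require_auth(handler):
    """Reject any request whose bearer token is missing or unknown
    before the wrapped endpoint handler runs."""
    @wraps(handler)
    def wrapper(request: dict):
        token = request.get("headers", {}).get("Authorization", "")
        if token.removeprefix("Bearer ") not in VALID_TOKENS:
            raise Unauthorized("missing or invalid token")
        return handler(request)
    return wrapper

@require_auth
def get_exposure_status(request: dict) -> dict:
    # Sensitive health data is only reachable past the auth check above.
    return {"status": "no known exposure"}
```

Centralizing the check in one decorator means a new endpoint cannot accidentally ship without it simply by forgetting an inline `if`.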
Finally, implement rate limiting and detection to help prevent brute-force attacks on the backend, while also warning administrators if these attacks are happening.
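A common way to implement this is a sliding-window counter per client. A self-contained sketch in Python (class and parameter names are illustrative; a production deployment would typically back this with a shared store such as Redis so limits hold across backend instances):

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per client."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self._hits = defaultdict(deque)  # client_id -> request timestamps

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        hits = self._hits[client_id]
        while hits and now - hits[0] >= self.window:
            hits.popleft()  # discard timestamps that fell out of the window
        if len(hits) >= self.limit:
            return False  # over the limit; log and alert administrators here
        hits.append(now)
        return True
```

The rejection branch is also the natural place to hook in the administrator alerting mentioned above, since a burst of rejections is exactly the brute-force signal worth surfacing.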
Manage Third-Party Dependencies
Avoid Extraneous Functionality
While it may seem obvious that a COVID-19 tracking app should avoid social networking features, camera usage and other extraneous functionality, it’s still worth mentioning. Extraneous features can lead to privacy and security issues down the road, so limiting the app’s capabilities reduces its attack surface. Take in-app browsers, for example: adopting in-app browsing instead of sending users to the native browser adds a whole additional layer of security requirements that is easier to avoid.
While this post covered several recommendations, there are other issues to consider such as management of data retention and data sharing policies. I’m not an expert in that realm and hope others will take up that gauntlet to offer advice. In addition, these apps will be heavily used and must be stress tested prior to release. If COVID-19 apps don’t function as expected, they’ll never be widely adopted.
Developers can find several useful resources for determining app security and privacy requirements. The OWASP Mobile Application Security Verification Standard should be at the top of the list for establishing good baselines.
NowSecure also recommends incorporating automated mobile application security testing into your development pipeline so you can remediate security and privacy issues as they emerge rather than discover them just prior to release. In addition, manual penetration tests and architectural reviews can examine the app with an eye toward its specific design goals. Once again, if you’re developing a COVID-19 app for public use, we’re eager to help secure your app to further the mission of fighting the pandemic.
Thank you to fellow NowSecurians Dawn Isabel, Adam Schafer, Cory Thomas and David Weinstein for contributing to this blog.