Session Description
This session uncovers the top 5 mobile app risks based on 525,600 assessments of iOS and Android apps. These risks include security, privacy, app store blockers, and AI governance issues that impact businesses and customers. Attendees will gain actionable steps to improve mobile app risk management and learn to apply OWASP’s MASVS for stronger security practices.
There are over 7 billion smartphones in use worldwide, and roughly 25 billion mobile apps were downloaded in 2023, showcasing the massive scale of mobile app usage.
Mobile app vulnerabilities are difficult to detect due to platform restrictions, sandboxing, and lack of root access for defenders.
Reconnaissance risks arise from publicly accessible app binaries and debug symbols, enabling attackers to reverse engineer and exploit apps.
Encryption flaws, such as outdated algorithms and hard-coded keys, are prevalent and expose sensitive data to attackers.
Third-party SDKs make up 50-60% of app code, often containing known vulnerabilities and privacy risks, sometimes from untrusted sources.
Security misconfigurations like hard-coded credentials and poorly implemented custom URL schemes are common and exploitable.
Privacy risks are ongoing problems due to excessive data collection, insecure transmission, and failure to adhere to platform privacy requirements.
Hey everybody, my name is Andrew Hoog, I'm the co-founder of NowSecure, and I'm going to record my 2025 talk about the top mobile app risks since 2022. Okay, standard disclaimers aside, let's dive right in. I think it's important to set a baseline. Everybody knows that mobile apps and devices are everywhere, but there are over 7 billion smartphones in use globally today, and an enormous number of mobile app downloads: about 25 billion more mobile apps were downloaded in 2023, over a quarter of a trillion apps worldwide. So there's an enormous amount of scale around mobile applications. The average user has somewhere between 60 and 90 apps installed on their device, so there are a lot of apps that people are using. And it's important to note that apps are not just for individual use. Of course we use them personally, but we also use them in our businesses and, increasingly, in our civic lives. So I want to talk a little bit about why mobile app flaws are so difficult to detect, and that's really driven by three primary reasons. First of all, when mobile started, platform restrictions were put in place, built on all of the experience and knowledge gained over the years trying to secure desktops, fixing the sins of the past. So there are some really great platform protections which can help defend against attacks, but at the same time they impose real restrictions on defenders. Probably the biggest is the inability to get root or administrator access on the device operating system. If you think about EDR tools from CrowdStrike, Palo Alto, and others, they typically operate on endpoints, say Windows, with administrator access. And while there are risks there, they can mess something up, as we saw last year with the global outage that was caused.
It's also really important to be able to operate at that level, with that sort of permission, so you can see exactly what's going on on the device. In addition, app sandboxing restricts what apps can see: if you were to try to write a security app, it would be sandboxed and you wouldn't be able to see much of what's going on in the system or in other applications. Even the platforms themselves provide only some basic security APIs, not a lot. That's a really big challenge, because if an attacker ever gets through any of these defenses, they have that full access while defenders are effectively operating blind. The second thing I like to point out is that the attack surface on mobile is actually quite large. It has everything that a standard web app would have, because mobile apps instantiate web browsers (in-app browsers) for things like authentication or showing you a website. So everything you would normally worry about in a website is also contained in a mobile app. Plus all of the sensors that are used to improve the user experience but also to uniquely identify people; that adds a lot of attack surface. And we find that about 50 to 60% of a mobile app is actually third-party SDKs, so there's a significant amount of code in these mobile apps that needs to be tested. Lastly, there are the frameworks and the languages, how new all of this is, and the cadence of updates. We're looking at yearly updates from Android and iOS that can change not only the fundamental frameworks for how developers interact with the platform and how defenders need to monitor it, but also add new restrictions, plus new features in the platforms themselves and in the languages like Swift, Kotlin, etc. So why does all of this matter? Well, we know that mobile apps collect an enormous amount of sensitive data, because they can.
And really the app economy is built off of "give us your data and we'll deliver better ads, and the advertisers will pay for it." So there's an enormous amount of data collection going on. As I mentioned earlier, as defenders we lack basic visibility into what's going on on the device. The kind of visibility we have on Windows and other platforms, we simply don't have on the devices our employees are running. And as researchers, we have very little access even on a research device; we have to go out and individually jailbreak these devices. It's a cat-and-mouse game that we're constantly playing, and we spend enormous amounts of energy just trying to get the visibility to do the research to understand what sort of security and privacy issues exist in these mobile apps. Once an attacker successfully exploits a vulnerability, they can easily live off the land, hide on the device, and have very little chance of ever being detected. These issues put not just our personal privacy but national security and economic security at risk. National security, why? Because the government uses mobile apps, and we use mobile apps to coordinate and interact with the government. And from an economic security standpoint, the sheer amount of data collection, intellectual property theft, and credential harvesting that allows infiltration into enterprise networks and lateral movement puts everything at risk. So we have this asymmetric disadvantage as defenders: we can see very, very little, we simply don't have access, and once the attackers are inside the walls, there's very little we can do. So really the goal of today's talk is to get folks to think about the mobile apps themselves. One of the things we've noticed is that a lot of people aren't worried about mobile app security and privacy, and we've spent a lot of time trying to think about why that is.
I think one of the biggest reasons is that folks think Apple and Google are deeply testing mobile apps for security and privacy. As I hope you'll agree by the end of this presentation, that's not really happening. So if you leave with one question that you're always asking yourself, it's this: do you think that mobile app has been vetted? All right, let's talk about the top mobile app risks we've seen since 2022. Now, I want to point out that these mostly aren't new risks. The risks we're seeing fall nicely into the risk categories that enterprises are always thinking about, and the great news is that we can use existing processes, existing groups, and existing technology to understand and prioritize them. For example, we see fraud happening in a lot of different places, but on mobile there are unique ways attackers can do reconnaissance, attack the APIs to understand what's going on, and commit click and ad fraud. We already touched on privacy; again, it's everything you would normally see. And I'll especially call out emerging technologies for a minute, because I think AI is going to change a lot, as we all know, and mobile is really going to be at the forefront of that. About 30% of all apps now incorporate some sort of AI or ML. I think that would really surprise a lot of folks, recognizing that the data you're placing into these apps could be used to train other people's models, is being consumed by those models, and is often sent to third-party servers. This tends to be some of the most sensitive data you have: you're prompting with your questions and providing source documents.
So there's really going to be a lot of emerging-technology risk showing up, because mobile apps are so bleeding edge, pushing the envelope on innovation. The format of the talk is to take you through the top issues that I see in our data set since 2022. I use the 525,600 number as a placeholder; this really incorporates a little over 800,000 assessments. The format for each of these: I'm going to describe the general issue, in this case reconnaissance. We at NowSecure believe deeply in the MASVS project, so we'll always tie these back to the MASVS categories, in this case resilience. Then I'm going to provide an example of one of these findings, in this case "debug symbols not stripped." Then I'll pick out four or five of the other findings in that category and give you a sense of how frequently we're seeing them. And lastly, I'll show you real-world examples of this occurring, so that it's not just theoretical: you can see how this is happening and impacting people today. So, as I already mentioned, reconnaissance is number five on the list. What does that mean? Well, what folks don't necessarily think about is that when you ship a mobile app, all of the code you put into that app lives in the public stores. That means attackers, defenders, anyone can download it, reverse engineer it, and see everything that's in your application. That's different from the way most people secure web apps. For web apps, yes, some of that attack surface is publicly available, but much of your code and business logic sits behind a firewall, sits behind a WAF; it's closely guarded. With mobile, you have to put all of that code into your app, put your brand name on it, and then put it out in public.
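To make that concrete, here is a minimal Python sketch of the kind of first-pass triage anyone can run against a downloaded app binary: pull the printable strings out and flag anything that looks like a leftover source path or class name. The patterns and the sample bytes are illustrative assumptions, not a real finding; serious analysis would use tools like radare2 or nm.

```python
import re

# Indicators that debug info or symbols were left in a shipped binary.
# These patterns are illustrative; real triage would use radare2, nm, etc.
LEAK_PATTERNS = [
    re.compile(rb"/Users/[^\x00]+\.(?:swift|mm|m|c)"),  # developer source paths
    re.compile(rb"[A-Za-z_]\w*ViewController"),         # UI class names
    re.compile(rb"_?init[A-Z]\w+"),                     # method-like symbols
]

def extract_strings(data: bytes, min_len: int = 6):
    """Pull printable ASCII runs out of raw binary data."""
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)

def find_symbol_leaks(data: bytes):
    """Return strings that suggest debug symbols were not stripped."""
    hits = []
    for s in extract_strings(data):
        if any(p.search(s) for p in LEAK_PATTERNS):
            hits.append(s.decode("ascii", "replace"))
    return hits

# Simulated slice of an unstripped binary (the paths and names are made up):
blob = b"\x00\x01/Users/dev/MyApp/LoginViewController.swift\x00PaymentViewController\x00\xff"
for leak in find_symbol_leaks(blob):
    print(leak)
```

Even this toy version recovers a developer's home-directory path and two class names, which is exactly the kind of head start an attacker gets for free when symbols ship in the binary.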
So, how can an attacker use this? Well, one example: we quite frequently see debug symbols not being stripped from the binaries shipped to the App Store and Play Store. When you're building an application as a developer and you've taken it from source code into a binary, you're running it and there's some sort of issue, you need to be able to trace the issue you found while the binary was running back to your actual source code. So while compiling, you create these things called debug symbols. You can embed them into the application, and when some sort of crash or bug occurs, there's a lookup that provides you with that information. Well, as an attacker, we're going to use those debug symbols to make our job easier. One of the big things we want to do is reverse engineer applications to understand how they work. We can do that manually; it's just a matter of time. We can use AI. radare2 is one of the open source projects we support; we hired pancake about a decade ago and he continues to push the envelope with radare2, and there are some new AI tools that help with reverse engineering. But you don't have to work that hard if you can simply grab the debug symbols: the class names, the global variables, the functions, the methods, all of these can be quickly decoded from the debug symbols. And if I look at some of the other findings out there, "debug symbols not stripped" appears in almost three-quarters of the apps we look at, a significant number of apps. API discovery is trivial: the APIs have to be called from the application, and by looking inside that code we can determine the APIs being called almost 90% of the time. And similarly for the SDKs and frameworks. Again, these aren't security issues per se; you have to have those in your applications.
You want to build a software bill of materials, but an attacker can just as easily get their hands on that data, use it for reconnaissance, and see whether you have any vulnerabilities in the third-party SDKs you're using. So let's go through a couple of real-world examples. This one is taken from a CISA alert about an application that helps people manage the alarms they have at their place of business. This particular application had what I believe is the top API attack in the OWASP API Security Top 10, called a BOLA (broken object level authorization) attack. Basically, IDs are passed through to an API. Let's say you have your username, but your ID in the system is one, mine is two, somebody else's is three. If you can see how the API is called, you can effectively increment those identifiers. Now, the application should prevent you from doing that; the backend shouldn't allow it. In this particular instance, a user could change those API calls and basically enumerate through other people's data. So if somebody wanted to, say, physically break into your environment, they could come in, understand everything that's active, interact with the alarm system, and disable it. That's an example where you can use reconnaissance on the application to find vulnerabilities in how the APIs are implemented and then leverage that to launch an attack. Another one I want to call out, and this is good for defenders, is a piece of malware called SparkCat. The researchers over at Securelist who were working on this wrote a really great article on the analysis and how they did it. I pulled out a snippet here that I found really interesting: they note that the iOS frameworks retain the debugging symbols.
So again, as in the example I gave above, you can often see the names of the developers and users working on the project. This is great for the TTPs and IOCs you might be tracking, feeding them to your SOC or threat hunt group. Here we can see the project author's name, the Rust library they were creating, different things about the C2 module they were building, and the original name of the project. Just some really great information obtained by analyzing the debug symbols left in the application. Okay, number four on the list: encryption flaws. This comes back to MASVS-CRYPTO-1. The example I'll call out, and this is one we see very frequently, is the app encoding sensitive information, not just any information but things we have determined to be sensitive, PII or PHI, using outdated or insecure cryptography. The great news is that by combining static and dynamic analysis, we're not just saying "somebody's using SHA-1"; we're able to see specifically that you're using SHA-1 on the password, meaning an attacker could use a rainbow table. You shouldn't be hashing your passwords that way; you need to be using other algorithms to protect that data. It also makes it very easy to spot outdated or insecure crypto, which we'll talk about in a minute. And if you think about it from a defender's standpoint, this is an amazing way to start looking at post-quantum cryptography: going through the applications, understanding what algorithms are in use and whether they're resilient against a quantum attack.
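To see why SHA-1 on a password is such a problem, here is a minimal sketch of the precomputed-lookup attack behind rainbow tables. The five-entry password list is a toy stand-in for the billions of entries a real table holds; the point is that an unsalted fast hash is just a reversible lookup key. Passwords should instead go through a slow, salted KDF such as bcrypt, scrypt, or Argon2.

```python
import hashlib

# Tiny stand-in for a rainbow table: in practice attackers precompute
# hashes for enormous lists of common and leaked passwords.
COMMON_PASSWORDS = ["password", "123456", "letmein", "qwerty", "dragon"]

def sha1_hex(s: str) -> str:
    """Fast, unsalted hash of the kind apps should never use for passwords."""
    return hashlib.sha1(s.encode()).hexdigest()

# Precomputed lookup: hash -> plaintext.
lookup = {sha1_hex(p): p for p in COMMON_PASSWORDS}

def crack(leaked_hash: str):
    """Reverse an unsalted SHA-1 password hash via table lookup."""
    return lookup.get(leaked_hash)

leaked = sha1_hex("letmein")   # what a flawed app stored or transmitted
print(crack(leaked))           # recovered instantly: letmein
```

The lookup takes constant time per hash, which is exactly why catching "SHA-1 on the password" during analysis, rather than just "SHA-1 somewhere," matters.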
There are new cryptography standards from NIST that you can employ, and identifying where the old algorithms are in use is a great step defenders can take, flipping this flaw or weakness around and turning it to your advantage. We find this particular finding in almost two-thirds of the apps we assess. We find SSL configuration issues where apps would allow insecure connections in about a third of apps. There's a lot of initialization vector reuse, hard-coded keys, and basically static values used for crypto, almost 20% of the time, which rather defeats the purpose. The example I'll call out here is some work we did back in February on the DeepSeek iOS app. The link is provided below; you can go read the full technical report. When DeepSeek first hit the market over the weekend, it became the number one app on iOS, and that Monday when the markets opened, they took about a trillion-dollar hit as investors worried about this technology and how it might impact the overall stock market. We found a number of risks: sensitive data being transmitted over the internet without any encryption; weak and hard-coded encryption keys, which speaks to this particular flaw; data being stored insecurely; and a lot of data collection and fingerprinting. Obviously, all of that data goes back to China, where folks have to be concerned about the People's Republic of China's laws and what sort of access the Chinese Communist Party could have to it. Specifically, the DeepSeek iOS app was using the outdated Triple DES (3DES) algorithm, which has been broken for quite some time, and they were reusing an initialization vector, which is how you initialize the cryptography. And it was actually null.
So the IV was just blank; it wasn't being used at all. And the actual encryption keys themselves were hard-coded into the app. So as a reverse engineering effort, it wasn't too difficult to figure out what the IV was, grab those encryption keys, and go decrypt that data. Okay, number three on the list: third-party SDKs. We touched on this a little earlier; it falls under MASVS-CODE-3. Again, 50 to 60% of a mobile app is third-party SDKs, and not all of them are open source. Crashlytics, Mixpanel, a bunch of these are proprietary commercial software. A developer includes them in the application without access to that source code, so even if you're doing static source code analysis, you're not checking that code. Then you compile it and push it out to the market. What we found is that a lot of those SDKs have security and privacy issues, maybe known CVEs, maybe something new we spotted during static or dynamic analysis. We find this issue a lot: at least 15% of apps have third-party SDKs with known vulnerabilities in them. An example would be libpng, which has known security vulnerabilities; we see that in over 13% of apps. So there are lots of issues out there with these SDKs and frameworks being used, folks not testing them, and thereby introducing security and privacy issues into their applications. An example I'll run you through real quick is one called Pushwoosh. I actually wrote this up on my personal blog back in 2022; I've got a technical write-up out there and a video that walks you through what the issue is and how to detect it. Basically, Pushwoosh was an advertising and analytics backend used, they claimed, on over 2.3 billion devices, and the developers said they were a US company.
So, in fact, lots of developers started using it; a lot of government agencies used it and built it into their applications. Then some issues were found in the SDK, and people did a bit of OSINT, looking a little more closely at the company, and quickly realized that this was not a US company. They were actually a Russian company based out of Siberia. It's a really interesting read; I linked to it in my article. They ended up setting up a fake board of directors, pulling LinkedIn photos from random people and creating fictitious personas to bolster the idea that they were a US-based company, which they weren't. So it's a really great read; go check out the Reuters article to learn more. But this is an example where a lot of applications were incorporating this SDK without testing it. They didn't see the security issues, didn't see how much data collection was going on, didn't look at the developer profile, and didn't understand the risk that this was actually a Russian company collecting very sensitive data from US government apps, and obviously lots of other apps as well. Another one I'll point out was some great work done by Snyk. Snyk was looking at SDKs, the Mintegral SDK in particular, a Chinese-based analytics SDK, and they dubbed the vulnerability SourMint. They found a number of different issues: extensive data collection, a backdoor in the iOS SDK with remote code execution capabilities, and a lot of other problems in the implementations across iOS and Android. So, another real-world example where SDKs are collecting significant amounts of data and introducing click fraud, data collection, and vulnerabilities into people's applications.
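One defensive takeaway from the above: once you have a software bill of materials for an app, checking each component against a vulnerability database is mechanical. Below is a hedged Python sketch that builds query payloads for the public OSV.dev API (POST to `https://api.osv.dev/v1/query`); the SBOM entries and versions are made up for illustration, and the actual network call is deliberately left out.

```python
import json

# Hypothetical SBOM entries recovered from a mobile app binary; the
# names and versions here are illustrative, not real findings.
sbom = [
    {"name": "libpng", "version": "1.6.36"},
    {"name": "okhttp", "version": "3.12.0"},
]

def osv_query(entry: dict) -> dict:
    """Build a query payload for the OSV.dev vulnerability API
    (POST https://api.osv.dev/v1/query)."""
    return {
        "package": {"name": entry["name"]},
        "version": entry["version"],
    }

for entry in sbom:
    # In a real pipeline you would POST this JSON and triage any
    # vulnerability IDs that come back against your app's exposure.
    print(json.dumps(osv_query(entry)))
```

The same loop an attacker runs for reconnaissance works for defenders: enumerate the SDKs, look up known vulnerabilities, and patch before someone else finds them.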
Okay, number two on the list, and this is a really broad category: security misconfigurations. We see this a lot; it falls under MASVS-PLATFORM-1. The one I'm going to call out is hard-coded cryptographic keys found in use. Obviously, if you hard-code your crypto keys, it becomes trivial to extract them and then use them to decrypt the data you were trying to protect. We find this particular finding in almost 20% of applications, a really high percentage. Another one I'd like to point out is custom URL schemes. When you're at a web page and you click a link, http or https, that's going to go to Safari and show you that website. Well, developers can create custom URL schemes: you could register something like nowsecure:// so that if somebody clicks a link or calls a URI with that scheme, instead of opening Safari, it opens the application that registered the scheme, say our application. But if not implemented properly, these are susceptible to hijacking, and we find that almost 80% of the apps that have custom URL schemes are susceptible to hijacking; they're not using the secure way to implement these within Android and iOS. And the last one I'll call out, a new finding that we're starting to see trend, is looking not just for hard-coded API keys for, say, Firebase or other backend systems, but in particular for AI systems. We added six new AI findings back in November of last year, and we're finding that more and more apps, as I mentioned, 30% of apps now use AI, and a small percentage, about 0.12% right now, are actually hard-coding those API keys.
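Hard-coded keys like these are straightforward to scan for once an app package is unpacked. Below is a minimal sketch of that kind of secret scan; the regex patterns (an OpenAI-style `sk-` prefix, a Google-style `AIza` prefix, and a generic `api_key = "..."` assignment) and the sample resource string are illustrative assumptions, and production scanners ship far larger rule sets.

```python
import re

# Illustrative secret patterns; real scanners maintain hundreds of rules.
SECRET_PATTERNS = {
    "openai_style_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "google_api_key":   re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b"),
    "generic_assignment": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_for_secrets(text: str):
    """Return (pattern_name, match) pairs for secret-looking strings."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for m in pattern.findall(text):
            hits.append((name, m))
    return hits

# Simulated decompiled resource file from an app bundle (key is fake):
resource = 'ai_config = {"api_key": "sk-abc123abc123abc123abc123"}'
for name, match in scan_for_secrets(resource):
    print(name, match)  # flags the OpenAI-style key
```

Running the same scan over your own app bundle before submission is a cheap way to catch this misconfiguration before an attacker, or an app store reviewer, does.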
We expect that number to grow over time as more applications use AI and as developers make the kind of security misconfiguration mistake we've seen with other things like Firebase datastores. So that's one we're paying close attention to; it represents an emerging risk. Some examples of this: the cardio health iOS app, about which a CVE was published, had sensitive usernames and passwords in a plist file. They weren't even using API keys; they basically secured their backend with usernames and passwords, and anybody could pull that hard-coded plist file and then log into the production database using the development accounts or the engineering backdoor application. It allowed significant control over this health application and access to sensitive backend PHI data. Another one, to illustrate what I mentioned earlier, is an example of an application, which we've redacted since this is not public, that leaked open API keys. We check well-known SaaS APIs, look for those API keys inside the application, and produce that finding for folks. Okay, the last one I'm going to talk about today, and the top risk that I see, is privacy risk: MASVS privacy. There's a significant effort to expand the standard's focus on privacy. Security issues are bad, but the thing about security issues is that you can fix them, and you want to do continuous testing to prevent regressions; they're kind of a one-time thing, and if you fix them, they go away. The great thing about Apple's iOS and Google's Android is that there's a lot about these platforms that makes it easy for developers to implement things in a secure way. You don't have to go out and reinvent the wheel to fix that security issue.
Use the techniques built into the platform and put the issue behind you. Privacy, I always say, is an always-on problem: data collection is occurring, oftentimes over-collection, so this is an issue that gets compounded over time, and it's been going on for quite some time. The one I like to point out is the unique device identifier or build fingerprint that gets stored or transmitted insecurely. In this case, for iOS in particular, the device identifier by default contains the owner's first and last name. So the device identifier would be "Andrew Hoog's iPad," for example. Now, in testing, maybe there's a generic name, maybe people aren't really looking for this; it gets shrugged off as "just a device identifier, who cares?" But a lot of third-party SDKs collect this, and therefore you're sending your customers' PII to third parties. You need to make sure you've disclosed that, make sure you're doing it securely, and you probably ought to ask yourself: do you really need to be sending that to third parties? Is that something you need or want to collect? Because you could run right into GDPR and CCPA privacy issues, or, if you've got any kids under 13 using your environment, COPPA issues as well. We find this issue frequently in mobile apps. We also find that sensitive data gets cached by the mobile app when developers use those in-app browsers; there's a lot of caching of sensitive data that gets tracked. I also put failure to validate certificate authorities, essentially a man-in-the-middle risk, in this category. That's both a security and a privacy issue: by not properly checking certificates, you allow somebody in the middle to get inside those encrypted tunnels and collect all that data.
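If you do have to send a device name or similar identifier off-device, one common mitigation is to pseudonymize it first. Here is a minimal sketch under that assumption; the salting scheme, function name, and truncation length are all illustrative, and since device names are low-entropy and guessable, the honest answer is often to not collect the field at all.

```python
import hashlib

def pseudonymize_device_name(device_name: str, app_salt: bytes) -> str:
    """Replace a PII-bearing device name (e.g. "Andrew Hoog's iPad")
    with a stable, non-reversible identifier before it leaves the app.
    The per-app salt keeps the value from being joined across vendors."""
    digest = hashlib.sha256(app_salt + device_name.encode("utf-8"))
    return digest.hexdigest()[:16]

salt = b"per-install-random-salt"  # illustrative; generate and store per install
raw = "Andrew Hoog's iPad"         # default iOS name contains the owner's name
print(pseudonymize_device_name(raw, salt))
```

The analytics payload still gets a stable ID for deduplication, but the customer's name never leaves the device, which is the disclosure and GDPR/CCPA/COPPA exposure the talk is warning about.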
And for folks who are developers, I want to point out missing purpose strings. These apply to both iOS and Android; they're named slightly differently, but both organizations realize how important privacy is and have been putting restrictions on developers, saying that if you're going to use sensitive APIs or access sensitive data, you have to provide purpose strings. You have to say why you're collecting this. Is it tied to the user? And if you don't, Apple or Google can block you from the store or remove your app from the store. We call these app store blockers, and they really represent business risk: all of a sudden, your application was humming along, you were doing great, taking in orders, and then the update you want to push doesn't get allowed, or the app gets removed. So make sure you are properly doing the attestations inside the application for what sensitive data you're asking for, whether it's tied to the user, and why. We find that about three-quarters of apps aren't actually doing this, and that's a risk that's going to become more and more impactful for developers and companies as Apple and Google clamp down further and prevent folks from updating their apps without these declarations in place. A couple of quick examples of that. This one, again, we've redacted, but it's a good way to see what needs to be tracked. Here you can see, and it's probably a little difficult to read, that there's an API that lets the developer track file timestamps, available disk space, or the active keyboards that could be used. These are all things that could allow somebody to uniquely identify your device. When was the last time this file that barely gets accessed was ever accessed? That's going to be unique to your device.
So Apple and Google have recognized that collecting this data can drive privacy risk, fingerprinting, and unmasking and identifying individuals. Apps that use these APIs have to declare why, or they again run the risk of not being allowed to push updates to the store, or of getting removed. One other example I want to show, and this was something I learned about last year, is that mobile apps are driven by advertising, and there's a spec called OpenRTB (open real-time bidding) that specifies how ads can be put up for auction and how buyers can bid on them in real time. These auctions have to occur within hundreds of milliseconds, because the system has to very quickly make the decision, see who wins the bid, and then hand the ad to the user. What was interesting when I took a look at that specification, which I've linked to down here, is that HTTPS is actually not required, and I found that shocking. The volume of data being transmitted here, and the sensitivity of it, is massive, and effectively it's being sent without encryption, which means anybody who subscribes to that bidding network can basically consume all of that data and put it into a database. Even if they don't bid on the ad, even if they don't win it, they're able to collect all that data, since it's not encrypted. That really drives a lot of risk. I think a lot of the data brokering and massive data collection that has occurred was done by tapping into OpenRTB and collecting that data as ads were being auctioned, because in order to decide on the right ad, a bunch of information about you and your device gets sent out to the OpenRTB network. You can see this in investigations of data brokers and the like. There was an investigation by Atlas Data Privacy Corp. into a company called Babel Street and its Locate X platform; Brian Krebs did an amazing write-up in October of last year.
You can go read about all the data that's collected and how Locate X works. I highlighted a few things that were really interesting. One was that they could locate 80% of Android devices but only 25% of iOS devices. I wondered why. Well, Apple implemented something called App Tracking Transparency, and users can basically say, "Hey, I don't want to opt in to allowing apps to track me." Therefore those apps have less data to share, and they can't put it out to OpenRTB or other places. What's interesting, looking at the numbers since Apple shipped that in April 2021, is that it's had a massive impact on reducing how much data about iOS users is available. That's why I think you went from being able to locate 80% of Android devices, because so much data is being sent, to only 25% of iOS devices. You can actually see this in some of the companies that aggregate and purchase this data and then sell it. There was a breach of a company called Gravy Analytics back in January of 2025; 17 terabytes of their data was breached from their AWS cloud, and you get a peek into what's being collected: precise geolocation, where people are, where they live. It was thousands of different apps collecting this data; some of the ones named in the headlines were Tinder, Grindr, Candy Crush, and MyFitnessPal. It was a really interesting insight into how much data is being collected, how it's being aggregated, and then how those data brokers are selling it to anybody who wants to purchase it. So it's the old adage: if the product's free, you're the product, and we can see that happening here in real time. So why is all this happening?
Well, I think driving all of this is the incorrect assumption I hinted at in the beginning, held even by security professionals, people that really deeply understand the space: that the public mobile apps, the apps in Apple's App Store and in the Google Play Store, have been sufficiently vetted for security and privacy issues. That's not true. Apple and Google vet apps for compliance with their store rules and for malware, but they're obviously not covering the kinds of risks we've touched on here, because we see them show up at such a high percentage in so many of the apps. Then, companies tend to worry a bit, not enough to be honest, about the apps that they build, right? Those power their business, they've got the brand name on them, and the company is liable. But they overlook the hundreds and thousands of apps that they use within their enterprise to operate. Those apps hold an enormous amount of sensitive data about your customers, your employees, and your intellectual property. So some companies invest in securing the apps they build, but they're overlooking the thousands of apps they use every day to power their business, apps that have all of these security and privacy issues. And I think the last reason is that a lot of organizations haven't thought about mobile as being sufficiently different. They already have some web testing techniques in place, static source code analysis, some tools, and they figure they'll just use those and it'll be fine. Some of those vendors say that they do mobile analysis, and I think that's probably a massive misstatement. There's a lack of support for Kotlin and Swift because these are newer languages; I mean, there's some support, but not great support.
That tooling misses all of those third-party SDKs, and the dynamic analysis that would show you what's actually collected and where it's sent doesn't happen. So there are significant gaps if you take a technology that was built for web and try to test your mobile apps with it. So let's talk about applying this. The good news is that most of these risks can be very quickly addressed. As I mentioned earlier, both Apple and Google provide platform-level protections that developers can implement. It's not like you have to go out there and roll your own and figure this all out. It's just: don't use it this way, use it that way; this is the proper way; it's already built in; it's easier. So the ability to address these risks is a significant win and, I think, good news. A lot of these things you can also detect with open-source tools, so you don't necessarily have to buy a commercial tool. Now, there's a lot of noise in the open-source tools; the signal-to-noise ratio is actually quite low. But you can go out there and say, all right, I'm going to ignore most of that noise and just look for these specific types of risks that we've identified, and I'll show you how to do that. There's nothing significantly new here; it's really a lot of the same best practices: minimization of data collected, don't roll your own crypto, use built-in platform protections. It's not like we have to retrain everybody. We just need to take the lessons we've learned in the past, leverage the tools and protections we have available, implement those for mobile apps, and then test them to make sure those issues don't exist and there hasn't been a regression. So let me walk you through these in the next couple of slides.
One of the first things I wanted to do is say: if somebody has an MDM and they're tracking all their applications, could I make it easier for them to download that list of applications and help them identify which ones are very high risk because of company IP, PII, PHI, or flagship status, versus ones that are more like brochureware and not a big deal? So I wrote this tool called export-intune-apps. It could be extended to support other MDMs; I just chose Microsoft's Intune. It's a command-line tool that will connect to your Microsoft Intune instance and download a list of all of your app inventory, which is, again, cybersecurity 101: you need an inventory of all your assets. Then it'll go out and enrich that data with metadata from the App Store and Play Store. This is really important, because just from the platform and package name we can ask: how many people have downloaded the app, how popular is it, how long has it been around, and what does it do? It enriches that and makes it accessible to you as CSV, JSON, and a SQLite database, so it becomes very easy to do this analysis against your infrastructure and the apps that you use. So this is just a little screenshot of how it would work, and I'm going to do a live demo for you real quick. Here's the link; it's on the deck, and you can go out to GitHub and download it. It has been open sourced. I walk you through the basic steps you need in Azure AD to get your secret, your client ID, and your tenant ID, and to make sure you have sufficient permissions. Then you'll clone the repo, make sure you have Node.js and Git installed, run the install, do a simple configuration, and then run the CLI. So let's go ahead and do that.
So I've already cloned the repo, and since there are new issues we should probably fix the dependencies; you can just run `npm audit fix --force`. Then I have an example .env file; let me open that here at the top. You want to create a .env file, and at a minimum you need the client secret, which you can get from Azure AD via the steps I walk you through in the readme, and paste it into your .env file. This is just an example. Optionally, you can also put in your tenant ID and your client ID so you don't have to provide them on the command line every single time. But you may decide you like the fact that they're command-line arguments, because then you can easily inject them, say, if you're scripting this somewhere else, using it as a CLI, or working across multiple environments. Once you have that, you can just go run the application. Now, I put in this `--no-deprecation` flag; it just tells Node not to warn me about libraries that are being deprecated over time. Then run the actual application and tell it to fetch the metadata. I'm going to change the output name to "demo"; just give it a file name that you care about. And I flipped on the debug flag; I just think it's interesting from a demo perspective for you to see what's going on. So it already downloaded a list from Intune, and now it's updating the metadata for each one of these applications, outputting that to the screen, and then it's going to save it to the file. This is going to give you all that information about, hey, how many people rated this application a one? Well, 37 people did, actually more than rated it a five. And you can now start to identify different things about the application that help you do something we call mobile app risk management.
You identify what these different issues are, the different apps, the different metadata, and then you put those into categories: high, medium, and low. That can help you decide how you want to do risk mitigation. Here you can see we've output the file. We have seven apps in our demo environment, and I'm just going to show you the JSON real quick; again, it's also available as CSV and as a SQLite database, whatever is most helpful for you. You can see we've got the Intune app ID, the platform key, the package ID, information about the title, the URL, the description, the max installs (about 7.2 million people), the classification, the genre (finance), etc. I won't take you through all of these, but it's really information that could help you look at this application and decide: is this something that may represent a higher amount of risk in my organization? Okay, so: go extract applications from your MDM, collect that metadata, and then go through a process where you categorize them into these different categories. Now, one of the projects that we really like and we sponsor is called the Mobile App Security Verification Standard (MASVS) from OWASP. You can go out there and review the standard, and there's a particular part of it that will help you identify different categories of applications and the level of testing you should consider doing on them. So you can come out here; all of this is open source.
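To make that categorization step concrete, here's a minimal sketch of tiering an exported inventory. The schema is hypothetical (column names like `title`, `genre`, and `max_installs` are assumptions modeled on the demo output, not the tool's actual schema), and the tiering rules are illustrative, not a recommendation.

```python
import sqlite3

# Illustrative tiering rules; tune these to your own threat model.
def impact_tier(genre: str, max_installs: int) -> str:
    """Assign a business-impact tier from app metadata (hypothetical heuristic)."""
    sensitive = {"finance", "health", "medical", "communication"}
    if genre.lower() in sensitive:
        return "high"            # handles money, PHI, or messaging data
    if max_installs >= 1_000_000:
        return "medium"          # popular app: large blast radius
    return "low"                 # likely brochureware

# Build a tiny in-memory inventory mimicking the exported SQLite database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE apps (title TEXT, genre TEXT, max_installs INTEGER)")
db.executemany("INSERT INTO apps VALUES (?, ?, ?)", [
    ("AcmeBank", "Finance", 7_200_000),
    ("FunPuzzles", "Games", 5_000_000),
    ("StoreLocator", "Travel", 40_000),
])

tiers = {t: impact_tier(g, n) for t, g, n in db.execute("SELECT * FROM apps")}
print(tiers)  # {'AcmeBank': 'high', 'FunPuzzles': 'medium', 'StoreLocator': 'low'}
```

In practice you'd point the connection at the SQLite file the CLI exported and feed the resulting tiers into your MASVS testing-level decisions.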
You can download the entire standard and then look at the actual testing profiles, which help you say: how do I want to test my application based on what level I've put it in? Is it an L1? Does it have sensitive business information, so I need to go to a higher level, L1 plus R, etc.? There's a lot of great stuff you can review out there. And the last thing, as I mentioned earlier, is that you can use open-source tools. I'm going to demo MobSF. Ajin Abraham wrote this; I've got the link out to his website, and you can read more about it. It's an open-source, mostly static analysis tool; it tries to do some dynamic analysis using Android virtualization, and maybe there are some techniques on iOS, but largely I think people are using it for static analysis. I take you through the steps here on how to do it, and I'm going to do this live real quick. Basically, I'm going to create a local directory, which I already have, then pull in the latest version, which might take a minute, so we may have to edit the video, and then run it. So let's come back to our command line. Normally you would make your MobSF directory, and you can see I have stuff here. Then we want to do a `docker pull` of the latest version of the security framework. This just makes it really easy to bundle and download the latest versions; Ajin is constantly updating it, which is amazing. So this is going to download, unzip, and then allow you to run it locally on your computer, as long as you have Docker, or Lima, or another containerization framework in place. You can also run these on servers, so you could host something at your organization that would let you share a single instance. So we're going to let this finish up.
And here's the command; I've got it in the slides. We're going to remove the container when we're done, call it mobsf, put it out on port 8000, and mount that shared directory, running the latest version. This is going to look at the configuration, download any new updates that came in, and then create the web service running on port 8000. We'll let this finish starting up, and you can see here it's now listening on port 8000. So let's fire up a new browser and point it at port 8000. Okay, and now we've got MobSF up and running. Here you can see you can look at previous scans that you've done, or upload a new binary. You can see I did Spotify a while ago when I was prepping these demos, and you can look at the static report. Right now it's pulling all of this information in from the Docker instance you're running, fetching the analysis from the database, so it's kind of cool: you can see everything that's going on, and then it makes it all available for you to look at. Now, the other cool thing about MobSF is that you can do all of this through APIs as well, so you can automate the results, download reports, and view that data. All right, we'll let that finish up. I just have a couple of screenshots showing how that works and how you can upload new binaries. And then when you go into the static analysis, this would be, for example, a list of all the different permissions inside that application. Okay, if I dial out now to six months from now: you've identified all the apps, and hopefully you've automated the inventorying by using the CLI, pulling it down, and updating it.
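As a sketch of the API automation mentioned a moment ago: MobSF's REST API authenticates with an `Authorization` header carrying the server's API key. The helper below only builds the HTTP request; the host and key are placeholders, and the endpoint path is an assumption you should check against your MobSF version's API documentation.

```python
import urllib.request
from typing import Optional

MOBSF_HOST = "http://localhost:8000"   # local Docker instance from the demo
API_KEY = "REPLACE_WITH_YOUR_API_KEY"  # shown in the MobSF web UI / server logs

def mobsf_request(path: str, data: Optional[bytes] = None) -> urllib.request.Request:
    """Build a request against the MobSF REST API with the auth header set."""
    req = urllib.request.Request(MOBSF_HOST + path, data=data)
    req.add_header("Authorization", API_KEY)
    return req

# Example: list recent scans (endpoint path assumed from MobSF's API docs).
req = mobsf_request("/api/v1/scans")
print(req.full_url)                     # http://localhost:8000/api/v1/scans
print(req.get_header("Authorization"))  # REPLACE_WITH_YOUR_API_KEY

# To actually execute against a running instance:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

From here you could wrap upload, scan, and report endpoints the same way and drive them from a scheduled job.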
Then it's time to look through the different categories, look at the testing levels, and use MobSF or other tools to test those applications and start to figure out what sort of risks are there, what sort of privacy issues, whether something violates your organization's policies and needs to be addressed. At the six-month time frame, you've learned a lot and you've automated the inventory side, but you haven't automated the scanning side, and that becomes really, really important. You can't possibly keep up with the volume of app updates. So, however you choose to do it, commercial vendors or MobSF, create a process that continuously scans new versions of the apps via an API, including the apps you're building, and then really think about integrating that into your SOC, your threat hunters, any sort of dashboard, and your management tools: EDR, XDR, some sort of MDM endpoint. That way you can say, "Hey, we found these apps, we just found this issue, it violates our policy, let's immediately block it on our end-user devices." Within that six-month time frame, you'd be in an amazing position to dynamically test and find these risks, and automatically block them or push them to the threat team for additional analysis. Okay, summing everything up: these risks are impacting you right now, today. A third of the apps on your phone, call it 30 to 35 apps, are using AI. But discovering these risks is actually pretty difficult. As we mentioned earlier, there are very few companies that do this because it's such a hard space: the platform architecture, the inability of defenders to have visibility into what's going on, the fact that the operating systems change frequently, and the lack of a good way to get root access on these devices all introduce significant friction for folks trying to test. What does that lead to?
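The continuous-scanning loop described above can be sketched in a few lines: compare the current inventory against the versions you last scanned and queue anything new or changed. Everything here, the data shapes included, is hypothetical glue code, not any vendor's API.

```python
# Hypothetical glue: decide which apps need a (re)scan this cycle.
def needs_scan(inventory: dict, last_scanned: dict) -> list:
    """Return package IDs whose version is new or changed since the last scan."""
    return [pkg for pkg, ver in inventory.items() if last_scanned.get(pkg) != ver]

# Current MDM inventory: package id -> version (made-up data).
inventory = {
    "com.acme.bank": "4.2.0",      # updated since last cycle
    "com.fun.puzzles": "1.9.1",    # unchanged
    "com.store.locator": "2.0",    # never scanned before
}
# Versions we scanned in the previous cycle.
last_scanned = {"com.acme.bank": "4.1.0", "com.fun.puzzles": "1.9.1"}

to_scan = needs_scan(inventory, last_scanned)
print(to_scan)  # ['com.acme.bank', 'com.store.locator']
```

In a real pipeline, each package ID in `to_scan` would be submitted to your scanner's API and the results routed to the SOC dashboard or ticketing system.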
It means there are fewer vulnerability researchers and fewer tools to detect these things, and therefore less visibility and awareness of what's going on. But these apps, this quarter of a trillion downloads a year, actually power our lives: our personal lives, our business lives, our civic lives, our economy, our national security. And while Apple and Google do a great job of running an app store, testing for compliance with their standards, and trying to ferret out and block malware, they absolutely do not do the kind of rigorous security and privacy testing that would be required to make sure you've protected your enterprise and enforced the standards you have at your company. So if you build a continuous testing framework, so that every time a new mobile app is released, whether it's the apps you're using and you're scanning the stores, or the apps you're building and you've integrated it into your DevSecOps, you continuously test them for these security and privacy issues, you're going to see a significant reduction of risk and impact to your organization. Impact that's occurring today; it's happening, and you probably don't even realize it. So I'd ask you to go back and reflect on that opening question, and any time you're using or installing an app, really ask yourself: do you think that app's been vetted? Probably not, and it probably needs to be. So I want to thank you so much for your time today. I'm happy to connect with you further. You can catch me on Bluesky, and on Mastodon at infosec.exchange; I have links in the presentation here, and I'm ahoog42 on LinkedIn as well. If you have any questions, just reach out; I'd be happy to talk further. Thanks so much for watching. What does a healthy DevOps regimen look like?
How should my security team and my development team work together to limit business risk and ensure the safety and security of business-critical applications? The answer: by implementing an efficient workflow that allows all teams to work together continuously without impacting the productivity of others. Let's walk through this example. Developers complete code review on a new feature and automatically kick off a scan via the CI/CD pipeline. The NowSecure Platform performs static and dynamic binary analysis in minutes. In this example, the automation produces 42 findings. These findings are then filtered through NowSecure's policy engine, which the security team customizes to ensure that all high and critical findings immediately and automatically generate tickets in the developer ticketing system. Assuming five findings were high or critical, the remaining 37 findings are then manually reviewed by the security team to triage and assign to the appropriate queue for remediation. This workflow ensures that high-severity tickets are created as soon as the findings are discovered, and that less severe findings are triaged appropriately by security teams. Using the integrated workflow with NowSecure's policy engine is the fastest way to prioritize and remediate issues in your mobile application suite. Hello and welcome to your MARM Minute. I'm Alan Snyder, and today we're going to talk about the first step in the MARM program: how you classify apps and put them into business impact tiers. A business impact tier is super simple. It's basically saying: how important is this app to my business, and what is the impact to my business if there is a cybersecurity incident? Be that a data breach, a vulnerability, a privacy issue, an operational disruption, or any of the other things that can cause harm to the business. So let's dive right in and take a look at some of the characteristics that we recommend.
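The policy-engine flow just described can be sketched as a simple severity filter over the 42 findings, of which 5 are high or critical. The `severity` field and the split function are hypothetical stand-ins, not NowSecure's actual API.

```python
# Hypothetical sketch of the triage flow: auto-ticket high/critical, queue the rest.
AUTO_TICKET = {"high", "critical"}

def triage(findings: list) -> tuple:
    """Split findings into auto-ticketed and manual-review queues by severity."""
    ticketed = [f for f in findings if f["severity"] in AUTO_TICKET]
    manual = [f for f in findings if f["severity"] not in AUTO_TICKET]
    return ticketed, manual

# 42 made-up findings: 5 high/critical, 37 lower severity.
findings = [
    {"id": i, "severity": "critical" if i < 2 else "high" if i < 5 else "medium"}
    for i in range(42)
]

ticketed, manual = triage(findings)
print(len(ticketed), len(manual))  # 5 37
```

The `ticketed` list would feed the developer ticketing integration directly, while `manual` lands in the security team's review queue.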
Now, what's important to understand about what we're going through here is that each company is going to come up with what's appropriate for them. We've created a best practice document to help you define these and serve as a template, but based on your threat model and your operations organization it's probably going to vary a little. You need to look at things like sensitive information: does the app have PII, health information, financial transactions? Does it have your brand on it? Is it a primary path to the business? Does it collect geolocation data? Maybe it has access to contacts, microphone, and camera, so it's collecting information that you have an obligation to protect. How many connections and endpoints does it have? In essence, where could the data it's collecting be distributed to, with or maybe without your knowledge? All of these things factor into how much of an impact, and therefore how much of a risk, the app is to your business. So, this has been the MARM Minute, super quick, but hopefully it helps you put together a better program. Hi, I'm Michael Krueger, here to talk to you about traditional pentesting versus pentesting as a service. Standalone pentesting is the traditional application of pentesting for mobile apps: experts conduct rigorous security testing against the application, provide a final report, and ultimately offer remediation consultation. However, there is a problem with that. In this example, there may be an application that has three major releases throughout the year as well as a number of minor releases, and we're conducting an annual pentest. The annual pentest in March, as you can see, may catch one of four bugs introduced throughout the year; the other three bugs may introduce vulnerabilities that aren't caught until the following annual pentest if they're not uncovered during other rigorous testing. How do we fix this sort of situation?
Well, that's where we bring in pentesting as a service (PTaaS). PTaaS modernizes the pentesting approach by using SDLC integrations to reduce developer friction: binaries are uploaded directly into a software-as-a-service platform, you can dynamically request pentests on demand from experts, go through the typical expert-led pentesting cycle, export reports on demand, and also feed that report information back into a more continuous monitoring program. You can also conduct retesting. As we ingest into the continuous monitoring and continuous testing program, we start a continuous cycle: uploading new releases for assessment, automated assessments occurring throughout the year, constant reporting and notification out to vulnerability management and bug tracking systems, and of course remediation consultation, all happening continuously. So, as you can see here, we've now taken that annual pentest, introduced another, more prescriptive pentest during the year, and also introduced this concept of a continuous monitoring and continuous testing program, so that the bugs introduced throughout the year can be addressed in a much-reduced time frame. Finally, I'll leave you with some benefits of PTaaS. Continuous testing ensures that new releases don't go untested for long periods of time. It reduces developer friction by enabling binaries and results to be provided directly via CI/CD. Ideally, it's an application of progressive testing for your mobile app risk management program. It reduces cost by allowing flexibility in the frequency and types of continuous testing. And finally, it allows trend analysis and a view of all of your assessments, regardless of type, in a single platform, giving you an overall view of your entire mobile portfolio. Hello and welcome to the MARM Minute. I'm Alan Snyder, and today we're going to talk about the second step in the MARM program.
It is basically asset inventory: understanding the mobile apps in your environment that need to be secured and protected. They come in a couple of different groups. The first is relatively straightforward to understand but a little harder to identify: all the mobile apps that you, or a vendor on your behalf, develop. These typically carry your brand, are usually in a public app store, and may be in your internal enterprise app store. The second category is apps that are approved for use. These are apps you didn't develop; a third-party vendor developed them under their brand, but you're putting your intellectual property, PII, or other information that needs to be protected into them. Think of things like Slack, Teams, or some other messaging platform: you didn't build it, but you're using it, and it's super sensitive. Those are typically going to be in your MDM as approved-for-use apps that get pushed out to new employees. The third category is BYOD, and this really depends on your security posture and how important it is to protect. But those are your big categories to go and find. So one group you're going to find via your MDM, the other via your development teams and vendors. This has been your MARM Minute. Welcome to the MARM Minute. I'm Alan Snyder, and we're going to talk about step three of the MARM program. This is where you bring together step one, where you defined your impact tiers and the app attributes that make an app a high, medium, or low impact to your business, and step two, where you did the asset inventory, understanding all of the mobile apps, whether ones you built or ones somebody else built that you use and put sensitive or critical information into, and you start to categorize and put things together.
This is super important to the program, because how do you know what level of testing to apply unless you understand what category the app should be in? Now, this requires getting information about the app. You need to understand whether the app has PII, or critical information such as IP or financial transactions. You need to understand whether the app can track geolocation, how many endpoints it talks to, how many downloads it has. So you're going to need a lot of information. I highly recommend you use NowSecure; we can tell you pretty much all of those items. We can't tell you brand impact, but we can certainly tell you all of the other attributes of the app and how we would categorize it as high, medium, or low impact to your business. Once you have that, it's also important to keep in mind that apps change over time. Sometimes they lose functionality and get downgraded; sometimes they gain functionality and information and get upgraded. So this is a continuous process that needs to be applied. This has been your MARM Minute. Welcome to the MARM Minute. I'm Alan Snyder, and we're going to talk about step four of the MARM program. This is the part where you really, now that you've got your apps.