Session Description
AI is rapidly reshaping mobile apps, creating new security, privacy, and compliance risks, as highlighted by recent concerns over DeepSeek.
OWASP MASTG and MASVS Project Leader Carlos Holguera explores threats such as model theft, regulatory pitfalls, sensitive data leaks, and hidden AI functionality, which traditional security frameworks often overlook. Attendees will learn how to detect and mitigate AI-related risks, meet transparency requirements, and ensure compliance using MASVS guidelines.
Key Takeaways
AI is everywhere in mobile apps, from cloud-based LLMs to on-device machine learning.
Transparency is crucial for managing AI risks in mobile applications.
Emerging AI regulations, such as the EU Artificial Intelligence Act, carry significant penalties for non-compliance.
Security risks include data leakage, insecure API keys, and unauthorized model access.
Three personas (healthcare AI, MDM, and game development) illustrate how AI risks affect different businesses.
The MARM program guides businesses through asset inventory, impact assessment, and risk-based testing.
Continuous automated testing and developer-security collaboration optimize mobile app security.
Hi everyone, this is Carlos. I'll leave my contact information here, so if you want to reach out on any of these social media platforms, feel free to do so.

So let's get into AI. I hope no one can say they haven't heard of it. It's a really hot topic, a really hyped topic. Is it just hype? I don't think so. It's a genuinely relevant and important topic today. The market is growing and will keep growing, so it's really important to understand the risks related to it and how they will affect your business, your mobile apps, and so on. AI is everywhere: you see it in the news, you see it in the app stores. This is just a screenshot of the graphics and design category, and 10 of the top 12 apps are AI apps. So really, I'm not here to convince you that AI is important; it's already everywhere.

You can see here many of the recent DeepSeek news stories about the risks, the data leakage, the apps being blocked in certain countries, and so on. NowSecure has also looked into their apps, of course, and we found a couple of issues: things like unencrypted connections, weak encryption keys, hardcoded keys, insecure data storage, sensitive data collection, and fingerprinting. So please get it off your phones until everything is safe. There is a lot going on with these apps, and it's really important to understand not only the security side but also the privacy side, regulations, and so on.

Overall, the most important thing is to look at everything as a whole, and the best way to do that is to use a standard. The standard for mobile apps is of course the OWASP MASVS, the Mobile Application Security Verification Standard, which covers not only things like AI privacy risks but also, as we will see later, resilience risks and of course the classic security risks such as storage, crypto, authentication, networking, and so on. If you take one thing with you today, let it be this: the most important part is transparency. You need to know everything about the AI in your mobile apps, and that's what we're going to discuss today. In the end, you should be able to prove it: to gather the right evidence for audits, for compliance with regulations, and so on. So, overall: transparency, plus coverage of all the security areas.

Let's quickly go over what AI is. I'm not going to explain everything about AI, but there are two main groups: cloud-based AI and on-device AI. Cloud-based LLMs are the models everyone already knows, like ChatGPT and DeepSeek: very large models that you use online and that run in the cloud. Then you also have machine learning in the cloud: very powerful ML models that also run in the cloud, used for things like image recognition and personalization. And then we have the counterparts on the device, in this case the mobile device. We can have LLMs on the device, which is not that typical yet, but as models get smaller and devices get more powerful, this will become more common.
Through integrations like Hugging Face and Core ML, you can find apps that let you download LLMs and load them locally. It's not going to be very fast, but it works. And then you have on-device ML, which can be really fast. It's already in a lot of apps, typically used for things like image recognition, translation, OCR (reading text out of an image), augmented reality, and so on. So this is the basis for what we'll cover later: cloud-based and on-device LLMs and machine learning models.

Now I'm going to introduce three people. First, Regina. She works with health AI apps, and her focus is regulatory compliance. She needs to demonstrate transparency — again, this is what I said before. She has audits and contractual obligations, so she needs a lot of detailed information, especially in the area of health apps. She worries about risks from hidden functionality: she doesn't know whether her apps include third-party SDKs that may have AI integrated. And in this highly regulated area of health, she fears legal and reputational damage to the company.

Then we have Richard. He deals with MDM and app vetting, so he has to review a lot of apps to decide whether to allow or block them in the business. He deals with risk management and needs maximum visibility into everything, including libraries and especially AI, due to the regulations and corporate policies he has to consider. He also has to know which countries the apps connect to and whether they use AI. His concern is ensuring that the company has a strong security and privacy posture.

And then we have Peter. He makes games — very cool games with specialized, very advanced models that his company developed and that are unique to it. His focus is IP protection: intellectual property. He's afraid that a competitor will extract those models, repackage them, and publish them or use them for their own benefit. That could mean the company loses revenue and competitive advantage.

All of this is important because these are very different cases, but in the end they are related, and we're going to see that through the different risks. Remember the icons for each of them, because we're going to apply them to the risks that come next. AI risk is business risk.

The first risk we'll look at today is non-compliance with data privacy laws. This isn't new; we had it before AI, but AI magnifies it. The laws are exactly the same ones we already had. As you can see, I copied two of the news stories from the beginning: we're already seeing this happen. There are a lot of leaks, there are databases with chat histories, and people enter all kinds of personal details into these AI chats and other apps, so everything can potentially be leaked. This is important, and in this case it affects all three personas we just saw. The next one, though, is new.
Now we have AI regulations, and this area is growing over time. I'm based in Europe, so I've put some examples here from Europe: the EU Artificial Intelligence Act. As you can see, the penalties can be really high — in this case up to 35 million euros or 7% of your company's annual turnover. So you'd better be careful when you have an audit so that you don't fail. This can also affect all three personas we saw before. Very important.

Another one: violation of contractual obligations. Again, we have some examples here from the European Commission, the UK government, and other places. Someone might demand that you enter into such contracts, or regulations might require it, and in those contracts you need to specify and be transparent about the AI you use in a product. So it's very important to have all this transparency. This mainly affects the health-apps persona and the MDM persona; it could affect the gaming persona too, of course, but I'd say it's most relevant to the first two.

Then there's liability for model outcomes. This can affect the others as well, but it's especially important for Regina, the health AI apps persona, because you never know with AI: sometimes models hallucinate and might give bad health advice, and that can have really bad consequences, as you can imagine. By the way, I've left a lot of resources here: when you see these images, there are links behind them, so when you get the presentation you'll be able to click through to all the documents and regulations.

There's more. These next risks all apply to the third persona, Peter, the gaming guy. First, model theft and intellectual property loss — very obvious from what we saw before. He's worried that someone will steal their models; they invested a lot of time and money to create them, so that would be very bad. Then, integrity of model outcomes: you don't want anyone to cheat or to manipulate the model's results. And then, unauthorized use of proprietary models, which is related to model theft: someone could use those models for other purposes or access them without you noticing. For example, with a cloud-based model, someone could get hold of an API key — say, because they found it in one of your mobile apps — and use your service even though they were never authorized.

So how can we detect all this? I said we need transparency, so we need to know a lot of things. How can we search and analyze mobile apps, and what can we extract to give us this kind of transparency? There are a lot of things you can search for. AI endpoints and jurisdiction: you can search for where these cloud-based servers are located and keep a list of all the potential AI endpoints that apps connect to; even better, if you observe actual connections to them, you can confirm it. Local files: any files related to, for instance, machine learning models or LLMs that were downloaded, and so on.
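To make that inventory idea concrete, here is a minimal sketch of a static scan over an extracted app package (for example, an unzipped APK). This is not tooling from the talk: the endpoint domains, model-file extensions, and the key pattern are illustrative assumptions, not complete lists.

```python
import json
import re
import sys
from pathlib import Path

# Illustrative lists only -- a real inventory would be far larger and curated.
MODEL_EXTENSIONS = {".tflite", ".onnx", ".mlmodel", ".pt", ".gguf"}
AI_ENDPOINTS = [
    "api.openai.com",                     # OpenAI
    "generativelanguage.googleapis.com",  # Google Gemini
    "api.deepseek.com",                   # DeepSeek
]
# Hypothetical pattern: OpenAI-style secret keys commonly begin with "sk-".
KEY_PATTERN = re.compile(rb"sk-[A-Za-z0-9_-]{20,}")

def scan_app(app_dir: str) -> dict:
    """Walk an extracted app package and collect AI-related evidence."""
    findings = {"model_files": [], "endpoints": set(), "possible_keys": []}
    for path in Path(app_dir).rglob("*"):
        if not path.is_file():
            continue
        # On-device model artifacts are often identifiable by extension alone.
        if path.suffix.lower() in MODEL_EXTENSIONS:
            findings["model_files"].append(str(path))
        data = path.read_bytes()
        # Endpoint strings embedded in code, configs, or resources.
        for domain in AI_ENDPOINTS:
            if domain.encode() in data:
                findings["endpoints"].add(domain)
        # Candidate hardcoded API keys (report the file, never the key itself).
        if KEY_PATTERN.search(data):
            findings["possible_keys"].append(str(path))
    return findings

if __name__ == "__main__":
    result = scan_app(sys.argv[1])
    result["endpoints"] = sorted(result["endpoints"])
    print(json.dumps(result, indent=2))
```

In practice you would also map each confirmed endpoint to its hosting jurisdiction, since, as discussed above, where the data goes can matter as much as the fact that it goes out at all.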
Especially for machine learning, there are a lot of files you can identify: weights from the models, other kinds of configuration files, and of course the models themselves. Model names matter for machine learning too, but they're especially important for LLMs: you need to know which models are in use. There is even an AI BOM — like an SBOM, but for AI models — where you can declare all your models and then use that in your audits and so on. Then we have AI libraries and SDKs. You can find things like OpenCV and other libraries used in mobile apps, so it's important to have a full list of them. I already mentioned AI API keys: we do see apps leaking API keys, so someone can call the corresponding service and use it for free. We even found — and I think we're going to see this next — an OpenAI key. With OpenAI you have to pay, right? So if you find one of these keys, you can use the service without paying. And then there's sensitive data sent to AI endpoints. This one is interesting because even if the connections are encrypted, the sensitive data is still going out. You wouldn't have a CVSS score, so to speak, because everything is properly secured, yet you still need to detect these flows to be aware of what is leaving your app, where it's going, and to which country; that can be very relevant.

Here is an example from our product. As I just said, we find apps that leak API keys. In this case the key is redacted, of course, but we found it in an app. If I remember correctly, it wasn't even hardcoded: the app downloads it at runtime and stores it, and we're able to detect it that way. Apart from that, the app had other very bad and questionable practices. In this other example, the app has pretty good security, but we can see it uses quite a few services: OpenAI, Google, DeepSeek, and Moonshot AI. It's very useful to have all this information for the transparency I mentioned.

Now, time for a demo. I saw this the other day and didn't include it in the first news slide because I wanted to show it this way, with a demo. It's an interesting case where a combination of security issues, plus the fact that the app includes some kind of AI — in this case machine learning doing OCR — becomes a problem. The apps here can read the images on your phone, yet they look like a chat app or a food-ordering app: apps you wouldn't think are dangerous. People use them, and at some point the app requests access to your pictures, maybe to set up your profile picture. If it's a chat app, it will typically ask for access to your images so you can send them; that's legitimate. But maybe you wouldn't know that the app itself — or an SDK the app uses, perhaps without the developer knowing — includes ML, and that ML does OCR, so it can read text out of your pictures. And on top of that there's another issue, which we saw with DeepSeek and see in many other apps as well: they tend to allow cleartext traffic.
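Before the demo, here is a hedged sketch of how that cleartext setting, together with the photo-access permission the demo will highlight, could be flagged statically in a decoded AndroidManifest.xml (decoded, e.g., with apktool; ElementTree cannot parse the binary manifest inside an APK directly). The attribute and permission names are standard Android fields; the "two or more flags" heuristic is an assumption for illustration.

```python
import xml.etree.ElementTree as ET

# Android manifest attributes live under this XML namespace.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def manifest_red_flags(manifest_path: str) -> list[str]:
    """Collect risky settings from a decoded AndroidManifest.xml."""
    root = ET.parse(manifest_path).getroot()
    flags = []

    # Red flag 1: unencrypted HTTP traffic explicitly allowed.
    app = root.find("application")
    if app is not None and app.get(f"{ANDROID_NS}usesCleartextTraffic") == "true":
        flags.append("cleartext traffic allowed")

    # Red flag 2: broad read access to the user's photo library.
    perms = {p.get(f"{ANDROID_NS}name") for p in root.iter("uses-permission")}
    if "android.permission.READ_MEDIA_IMAGES" in perms:
        flags.append("full read access to media images")

    return flags

flags = manifest_red_flags("AndroidManifest.xml")
if len(flags) >= 2:
    # Each flag alone may be legitimate; combined (plus an OCR-capable ML
    # library in the app) they enable the exfiltration shown in the demo.
    print("RED FLAG COMBINATION:", "; ".join(flags))
```

Neither setting is a verdict on its own; the point, as the demo shows, is how the combination changes the risk picture.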
Cleartext traffic is a problem because an attacker can exfiltrate all that information out of the app very easily. So let's go to the demo, which I have here on this side, showing the phone. Basically, you'll see that I download recovery codes — the typical case: you set up a service with a password, and if you ever need to restore access you download those codes, and then you leave them in your gallery. That can happen. Now we use this app; it's just a chat app, and I'm going to chat with Harry. The app asks for access to photos — sure, I want to send pictures to Harry. We keep chatting, and everything looks normal, so nothing to worry about, I guess. But hold on: we have an attacker server running, which we've activated, and now we can see what's really going on. We keep chatting with Harry, and what happens? Leakage. Here we can see pictures from my phone that the app — or an SDK in the app, you don't know — has read, and exactly the same recovery codes I just downloaded are there. Not only that: above them you can see even more things I had stored before. Now everything is leaked. Very, very bad.

So how can we detect this? Here's a screenshot of something very important. The first red flag is that the app has ML OCR. That alone shouldn't be a problem — it might be a legitimate use — but in this case it's a red flag, so let's keep collecting red flags. Next: the app allows cleartext traffic; that's how it can exfiltrate all that data without being noticed. And it requests the permission to read media images. There is the option to select individual pictures, but not everyone does that; some people just allow everything. That's another problem. This combination of things is what enables such an attack.

Recommendations: track AI dependencies — again, transparency; you need to know everything about the SDKs and libraries using AI, as you just saw. Model protection — this is for Peter; he'll want to obfuscate and encrypt his models to protect them from theft. Privacy measures — this is typical MASVS-PRIVACY: minimize data transmission. If you don't need all that data, just don't send it, don't use it, reduce it. And disclose data collection: that's required by the Google Play and Apple App Store guidelines, and in general, for privacy audits, it's very good to have all that information properly declared. With all these measures, we're solving the problems for Regina, Richard, and Peter. I'm running a bit out of time, so I'll stop here. Thanks a lot — if you have any questions, please ask, and I hope you enjoyed the talk. Thank you.

Welcome to the MARM Minute. I'm Alan Snyder, and we're going to talk about step four of the MARM program. This is the part where, now that you've got your apps categorized and classified in terms of business impact, you need to give some thought to the appropriate level, frequency, and depth of testing to protect your business from risk at that impact tier. What we do in the MARM program is give you our recommendations.
Again, I would caution everyone: this is a best-practice recommendation, a template to get you started. Your own threat model, your business risk, and your challenges will define the appropriate level, frequency, and depth of testing for you. But we thought this would be helpful, particularly given that mobile apps have different characteristics from web apps. Don't assume they're the same, and don't treat them the same: the way they leak data, the way they function, the number of endpoints, the number of third-party components — all highly different from what you'll find in web apps. Therefore, your testing regimen and your assumptions also need to be adjusted for mobile. What we recommend at NowSecure is a regular annual pentest, supplemented with mobile app security testing, plus continuous automated testing. And don't forget training, because you always need to make sure your teams — both the development teams and the security teams — are trained. This is what we recommend for high-impact apps. You may make your own choices, and that's what's good about the program: we're here to help you define a program and then implement it as best suits your business. This has been your MARM Minute.

What does a healthy DevOps regimen look like? How should my security team and my development team work together to limit business risk and ensure the safety and security of business-critical applications? The answer: by implementing an efficient workflow that allows all teams to work together continuously without impacting each other's productivity. Let's walk through this example. Developers complete code review on a new feature and automatically kick off a scan via the CI/CD pipeline. The NowSecure Platform performs static and dynamic binary analysis in minutes. In this example, the automation produces 42 findings. These findings are then filtered through NowSecure's policy engine, which the security team customizes so that all high and critical findings immediately and automatically generate tickets in the developer ticketing system. Assuming five findings were high or critical, the remaining 37 findings are manually reviewed by the security team, triaged, and assigned to the appropriate queue for remediation. This workflow ensures that high-severity tickets are filed as soon as issues are discovered, and that less severe findings are triaged appropriately by security teams. Using this integrated workflow with NowSecure's policy engine is the fastest way to prioritize and remediate issues across your mobile application suite.

Welcome to the MARM Minute. I'm Alan Snyder, and we're going to talk about step three of the MARM program. This is where you bring together step one, where you defined your impact tiers and the app attributes that make an app high, medium, or low impact to your business, and step two, where you did the asset inventory: understanding all of your mobile apps, whether ones you built or ones somebody else built that you use and put sensitive or critical information into, and you start to categorize them and put things together. This is super important to the program, because how do you know what level of testing to apply unless you understand what category the app belongs in? Now, this requires gathering information about the app.
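As an aside before continuing with step three: the scan-and-triage workflow described a moment ago reduces to a simple routing rule. Here is a minimal sketch of that rule; the Finding shape, severity labels, and create_ticket helper are hypothetical stand-ins for illustration, not NowSecure's actual policy-engine API.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: str  # e.g., "critical", "high", "medium", "low"

def create_ticket(finding: Finding) -> None:
    """Hypothetical stand-in for a ticketing integration (e.g., Jira)."""
    print(f"[auto-ticket] {finding.severity.upper()}: {finding.title}")

def triage(findings: list[Finding]) -> list[Finding]:
    """Auto-ticket high/critical findings; return the rest for manual review."""
    manual_review = []
    for f in findings:
        if f.severity in ("critical", "high"):
            create_ticket(f)         # goes straight to the developer queue
        else:
            manual_review.append(f)  # security team triages these by hand
    return manual_review

# Under the example above, 5 of 42 findings would be ticketed automatically
# and the remaining 37 returned here for the security team to review.
scan_results = [Finding("Hardcoded API key", "high"),
                Finding("Verbose logging", "low")]
backlog = triage(scan_results)
```

With that workflow in mind, back to the information needed to categorize each app.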
You need to understand whether the app has PII, and whether it holds critical information such as IP or financial transactions. You need to understand whether the app can track geolocation, how many endpoints it has, and how many downloads. So you're going to need a lot of information. I highly recommend you use NowSecure: we can tell you pretty much all of those items. We can't tell you brand impact, but we can certainly tell you all of the other attributes of the app and how we would categorize it as high, medium, or low impact to your business. Once you have that, it's also important to keep in mind that apps change over time: sometimes they lose functionality and get downgraded, and sometimes they gain functionality and information and get upgraded. So this is a continuous process that needs to be applied. This has been your MARM Minute.

Hello and welcome to the MARM Minute. I'm Alan Snyder, and today we're going to talk about the second step in the MARM program. It is basically asset inventory: understanding the mobile apps in your environment that need to be secured and protected. They come in a couple of different groups. The first is relatively straightforward to understand but a little harder to identify: all the mobile apps that you, or a vendor on your behalf, develop. These typically carry your brand and usually sit in a public app store, maybe in your internal enterprise app store. The second category is apps that are approved for use: apps you didn't develop, that a third-party vendor developed under their own brand, but into which you put your intellectual property, PII, or other information that needs to be protected. So think of things like Slack or Teams, or some other messaging platform that you didn't develop.
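As a companion to the attribute list above, here is a hedged sketch of how those attributes could map to an impact tier. The attributes mirror the ones named in step three (PII, IP, financial transactions, geolocation, endpoint count, downloads); the weights and cutoffs are invented for illustration and are not part of the MARM program.

```python
from dataclasses import dataclass

@dataclass
class AppProfile:
    handles_pii: bool
    contains_ip: bool            # intellectual property, e.g., proprietary models
    financial_transactions: bool
    tracks_geolocation: bool
    endpoint_count: int
    downloads: int

def impact_tier(app: AppProfile) -> str:
    """Map app attributes to a business-impact tier (illustrative thresholds)."""
    score = 0
    score += 3 if app.handles_pii else 0
    score += 3 if app.financial_transactions else 0
    score += 2 if app.contains_ip else 0
    score += 1 if app.tracks_geolocation else 0
    score += 1 if app.endpoint_count > 20 else 0
    score += 1 if app.downloads > 1_000_000 else 0
    if score >= 5:
        return "high"    # e.g., annual pentest plus continuous automated testing
    if score >= 2:
        return "medium"
    return "low"

chat_app = AppProfile(handles_pii=True, contains_ip=False,
                      financial_transactions=False, tracks_geolocation=True,
                      endpoint_count=35, downloads=5_000_000)
print(impact_tier(chat_app))  # -> "high" under these illustrative weights
```

Because apps gain and lose functionality over time, as noted above, this classification should be re-run continuously rather than treated as a one-time label.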
...vulnerabilities that are directly linked to privacy violations. Whether it's a business logic flaw in an application or something that can be abused to get information a user shouldn't have access to, it is a privacy violation if it's done maliciously. So they're trying to figure out how to get ahead of that, and they're asking us those questions ahead of time, to do a little more due diligence and make sure we know which information should be private and which shouldn't. That way, whenever something comes out, its impact is properly articulated, which matters much more to them. So I think it's finally starting to come around. The question we always get from folks is: how do you implement that at scale in a cost-effective way? That's obviously where the platform side comes in for us. The linkage connecting the security team and the privacy group is really through the finding context: if we see certain types of vulnerabilities, this is how it layers in. And it's us helping the CISO or the AppSec director sell to their general counsel: here is the impact and value coming out of the testing that ultimately helps your job and that you should be aware of. It pulls those folks into the conversation, because that's ultimately what the CISO is trying to do: pull more people into that conversation.

All right, well, it looks like we're out of time for today. Paul, thank you very much for joining us. I always like to leave our listeners with a little call to action before we depart. Any last anecdote, anything you'd like to leave our listeners with today?

Yeah, I think there's just a lot happening. Obviously the stakes have never been higher in terms of what the threat landscape looks like, but there is light at the end of the tunnel. I think having these conversations, and defining how to do things in a cost-effective way and at scale, is ultimately the name of the game. We're all here to help support that conversation.

Absolutely. Thanks, Paul. I'll leave you with one additional nugget: if you found this valuable, let us know. Both of us — on the SynX side and on the NowSecure side — would love to talk to you together about how we can help elevate your risk management program: web testing, appliance testing, OTT app testing, and mobile app testing. Come talk to us and let us fill you in on how we can help with AI and all of the other risks we talked about today. So with that, thank you everyone for joining. Paul, any final words? And thank you, everybody.