Learn where AI is hiding in your mobile apps
AI Discovery & Governance
Just because you can’t see it doesn’t mean it’s not there.
Discover shadow AI hidden in your app supply chain.
25% of apps feed data, IP, and Personal Information (PI) to AI and machine learning endpoints like ChatGPT, Claude, Gemini and Perplexity.
Get definitive answers to key AI questions
Whether first-party apps your organization builds or third-party apps your employees download and use, discover where and how AI is being used inside your mobile app ecosystem.
- Where is AI hiding in your mobile apps?
- Which apps have AI?
- What data do apps collect? Is it private or sensitive data?
- Where does the data go?
AI Local Files
AI local files are data files containing pre-trained AI models, weights, configurations, or other essential information needed to deploy AI systems within an app. Identifying the AI files included in the app package or downloaded at runtime reveals the frameworks used (e.g., PyTorch or ONNX), offering insights into the app’s AI infrastructure. These files can also indicate whether the app is susceptible to model-related vulnerabilities or if it is using proprietary models without proper authorization. Ensuring these models are up-to-date, securely stored, and compliant with licensing agreements is crucial to prevent legal issues, security breaches, and loss of competitive advantage due to intellectual property theft. Outdated or unauthorized models can expose the app to attacks like prompt injection, infringe on intellectual property rights, and pose risks to functionality, user privacy, and the business—including financial losses and reputational damage.
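Detection of bundled model files can be illustrated with a minimal sketch: since APK and IPA packages are zip archives, scanning entry names for well-known model-file extensions reveals which AI frameworks an app likely ships. The extension-to-framework mapping below is illustrative only; a production scanner would also inspect file signatures and runtime downloads.

```python
import io
import zipfile

# Illustrative mapping of model-file extensions to the frameworks that
# typically produce them; real detection would also use file signatures.
MODEL_EXTENSIONS = {
    ".tflite": "TensorFlow Lite",
    ".onnx": "ONNX Runtime",
    ".pt": "PyTorch",
    ".ptl": "PyTorch Mobile",
    ".mlmodel": "Core ML",
}

def find_ai_model_files(package_bytes: bytes) -> dict:
    """Return {path: framework} for entries in a zip-based app package
    (APK/IPA) whose extension matches a known AI model format."""
    found = {}
    with zipfile.ZipFile(io.BytesIO(package_bytes)) as zf:
        for name in zf.namelist():
            for ext, framework in MODEL_EXTENSIONS.items():
                if name.lower().endswith(ext):
                    found[name] = framework
    return found
```

Flagged paths can then feed a review step that checks model provenance, licensing, and whether the file is stored encrypted.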
AI Endpoint URLs
AI endpoint URLs represent connections from the app to AI services hosted externally from the mobile device. The presence of these URLs indicates that the app may be transmitting or receiving data related to machine learning (ML), natural language processing (NLP), image recognition, predictive analytics, or other AI tasks. This data exchange can involve sensitive user information being sent to third-party servers, raising potential privacy concerns and regulatory compliance issues. If an app communicates with well-established domains like openai.com or aws.amazon.com, it is more likely relying on reputable platforms with strong security measures. However, communication with lesser-known or unsecured domains may expose user data to unauthorized access or misuse.
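A simple version of this analysis can be sketched as follows: extract URLs from an app's string tables and match their hostnames against a watchlist of AI service domains. The watchlist here is a small illustrative sample; a real tool would maintain a much larger, curated list.

```python
import re
from urllib.parse import urlparse

# Illustrative watchlist of AI service hostnames; not exhaustive.
AI_DOMAINS = {
    "api.openai.com": "OpenAI",
    "api.anthropic.com": "Anthropic",
    "generativelanguage.googleapis.com": "Google Gemini",
    "api.perplexity.ai": "Perplexity",
}

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def find_ai_endpoints(extracted_strings: str) -> list:
    """Return (url, provider) pairs for URLs in an app's extracted
    strings whose host matches a known AI service domain."""
    hits = []
    for url in URL_PATTERN.findall(extracted_strings):
        host = urlparse(url).hostname or ""
        for domain, provider in AI_DOMAINS.items():
            if host == domain or host.endswith("." + domain):
                hits.append((url, provider))
    return hits
```

Matching on the parsed hostname rather than raw substrings avoids false positives from URLs that merely mention a provider name in a path or query string.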
AI Libraries
AI libraries are third-party SDKs and software components that provide AI functionality within an app or connect it to external AI services. Identifying these libraries, whether added deliberately by your developers or pulled in transitively through other components, reveals which AI capabilities an app ships with and which vendors it depends on. Unvetted or outdated AI libraries can introduce security vulnerabilities, licensing conflicts, and undisclosed data flows, making a complete inventory of them a prerequisite for effective AI governance.
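One practical way to surface AI libraries is to scan dependency manifests for known SDK identifiers. The sketch below checks a Gradle-style manifest against a small illustrative list of dependency coordinates; a real scanner would cover many ecosystems (CocoaPods, Swift Package Manager, npm) and also inspect compiled package names.

```python
# Illustrative dependency identifiers for common mobile AI SDKs;
# a production tool would maintain a far larger catalog.
AI_LIBRARY_MARKERS = {
    "org.tensorflow:tensorflow-lite": "TensorFlow Lite",
    "com.google.mlkit": "ML Kit",
    "org.pytorch:pytorch_android": "PyTorch Mobile",
}

def detect_ai_libraries(manifest_text: str) -> set:
    """Scan a dependency manifest (e.g., build.gradle) for entries
    matching known AI SDK coordinates."""
    return {
        name
        for marker, name in AI_LIBRARY_MARKERS.items()
        if marker in manifest_text
    }
```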
Use of Cloud-Based Artificial Intelligence
Cloud-based AI, including LLMs and ML models, enables apps to perform advanced tasks by offloading processing to external infrastructure. However, this reliance poses risks to data privacy, particularly when sensitive user information is transmitted, potentially leading to unauthorized use, interception, or compliance challenges with evolving artificial intelligence and data privacy legislation.
Use of On-Device Artificial Intelligence
On-device AI, including LLMs and ML models, provides benefits like local data processing, which can enhance privacy. However, these models may be vulnerable to reverse engineering, model-targeted attacks, and misuse, leading to significant risks for intellectual property, compliance, and liability.
AI API Keys
AI service API keys hardcoded in apps can be easily extracted by attackers, leading to financial risks, data breaches, and potential DDoS attacks. Unauthorized use of extracted API keys can also violate terms of service, resulting in account suspension.
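Hardcoded key detection typically relies on pattern heuristics over an app's extracted strings. The sketch below uses two illustrative regular expressions; real scanners combine many more patterns with entropy checks and will still produce false positives and negatives.

```python
import re

# Heuristic patterns for common AI/cloud key formats; illustrative only.
KEY_PATTERNS = {
    "OpenAI-style key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "Google API key": re.compile(r"\bAIza[0-9A-Za-z_-]{35}\b"),
}

def scan_for_api_keys(text: str) -> list:
    """Return (label, matched_string) pairs for substrings that look
    like hardcoded AI service credentials."""
    findings = []
    for label, pattern in KEY_PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((label, match))
    return findings
```

Any finding should be treated as a candidate for manual verification, then rotated and moved to a backend-mediated or vault-based credential flow.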
Use Cases
- Identify the use of AI in the apps you build to ensure compliance with regulatory and customer requirements
- Protect your organization’s mobile app ecosystem from apps with unapproved AI in them
- Get visibility into the third-party libraries, SDKs and components that provide AI functionality or connectivity
- Spot hardcoded secrets like AI API keys to avoid financial risks, data breaches, terms of service violations and potential DDoS attacks
Identify Mobile App Risk
Organizations, employees and customers have increased privacy concerns, including potential data breaches, unauthorized data collection and misuse of confidential information. Businesses also face legal and contractual risk from unauthorized AI usage. With NowSecure, you can establish guardrails and governance to track AI usage in your apps, including local files, third-party libraries, SDKs and connected AI endpoints.
Stay Compliant
Enterprise governance and local regulations around AI are still a work in progress, but it is clear that disclosing the use of AI in an app is table stakes. Consequences range from fines, penalties and loss of app store placement to the complete closure of business lines that violate these rules. Development and security teams need to ensure that they can attest to their mobile app’s use of AI for compliance and procurement requirements.
Uncover Shadow AI Hidden in the Supply Chain
Shadow AI refers to AI integrations that come from third-party apps or app components that your development teams have little control over. Between 50% and 70% of apps are composed of third-party components like SDKs. These third-party libraries and files may contain or connect to AI models without the development or security team’s awareness. Businesses can face legal action, even if unauthorized AI use stems from a third-party component.
Guard Against Vulnerable AI-Generated Code
Generative AI helps teams build mobile apps faster, but it also increases the occurrence of security vulnerabilities, with over 40% more bugs in AI-generated code. Teams must not only statically test their mobile apps for issues, but also test them at runtime and under different network conditions to identify the issues AI-generated code may have introduced. NowSecure Platform automates both, helping organizations know what AI is being used in their apps and discover what security, privacy and compliance issues generative AI may have introduced.
Protect Your IP
Using AI models without proper licensing can result in intellectual property (IP) violations, leading to lawsuits or penalties. Evidence that proprietary or licensed AI models and files are being used correctly helps developers protect their intellectual property, ensure compliance with licensing agreements, and prevent IP theft.