Key Security Considerations for AI Coding Assistants in Mobile DevSecOps

Posted by Amy Schurr, Content Marketing Director

Amy Schurr is content marketing director for NowSecure. A former B2B journalist, she has spent her career covering technology and how it enables organizations.

The rise of generative artificial intelligence (AI) tools has the potential to revolutionize software development. Many organizations have already embraced AI-powered coding assistants such as GitHub Copilot and ChatGPT to improve the developer experience and speed time to market. A recent McKinsey study found devs can complete coding tasks up to twice as fast with generative AI.

AI coding assistants automate repetitive tasks to help devs code faster. But while innovative AI tools supercharge productivity, they have one notable pitfall: they can increase security risk. AI coding assistants aren’t foolproof, so organizations should adopt this disruptive technology carefully. Following secure coding best practices and conducting automated mobile application security testing to find and fix security bugs prior to release remains critically important.

AI code generators offer autocomplete suggestions and speed development by automating repetitive tasks and writing boilerplate code. However, the tools aren’t 100% accurate and can make mistakes. They can introduce security or privacy vulnerabilities, and threat actors can exploit the tools to create and spread malware or launch supply-chain attacks.

Indeed, a recent Stanford University study found software engineers who used code-generating AI tools wrote significantly less secure code than those without access. Worse, participants who relied on the AI coding assistant were lulled into a false sense of security and were more likely to believe that they wrote secure code. For these reasons, developers and security teams must exercise caution when using generative AI tools and factor in a few key security considerations. Never assume that AI coding tools write accurate and secure code, and always test any code, whether written by humans or AI.

What Are AI Coding Assistants?

AI coding assistants tap AI and machine learning to help developers write, review and improve their code. They perform such tasks as completing lines of code, writing code from scratch based on natural language prompts or translating code from one programming language to another. Organizations have experimented with applying generative AI to these aspects of software development:

  • Code completion
  • Error detection
  • Code refactoring
  • Documentation
  • Code reviews
  • Code search
  • Language support
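To make the first of these concrete, here is a minimal, hypothetical Kotlin example of code completion: the developer writes the data class, the function signature and a comment, and an assistant proposes the body. The names and the suggested completion are illustrative, not output from any specific tool.

```kotlin
// Hypothetical completion scenario: the developer writes the data class,
// the signature and the comment; an assistant fills in the body.
data class User(val id: Int, val name: String, val email: String)

fun findUserByEmail(users: List<User>, email: String): User? {
    // Suggested body: case-insensitive match on the email field.
    // Like any generated code, it still needs review and tests.
    return users.firstOrNull { it.email.equals(email, ignoreCase = true) }
}

fun main() {
    val users = listOf(
        User(1, "Ada", "ada@example.com"),
        User(2, "Grace", "grace@example.com")
    )
    println(findUserByEmail(users, "ADA@example.com")) // matches User 1
}
```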

Some assistants embed directly into an Integrated Development Environment (IDE) so developers can receive code suggestions and snippets within their preferred editor rather than copying and pasting, further boosting productivity. For enterprise mobile app development, organizations should seek an AI coding assistant that’s trainable on their own codebase, offers contextual suggestions and can be customized to the way a dev team works.

Popular generative AI tools used for software development include ChatGPT, an AI chatbot created by OpenAI, and GitHub Copilot, which is powered by the OpenAI Codex large language model and has been trained on billions of lines of code. Others include Amazon CodeWhisperer, Baidu Comate, Codeium and Tabnine. In addition, GitHub Copilot Chat lets developers ask and receive answers to coding-related questions directly within a supported IDE.

Overall, pairing generative AI tools with a programmer generally creates a happier, more productive dev experience. The tools streamline software development and help companies ship mobile apps faster. But regardless of the AI coding assistant chosen, dev teams must factor in a few considerations to use them carefully and securely. Being aware of the limitations enables mobile app security and DevSecOps teams to gain efficiencies while properly managing risk.

“Most developers I’ve spoken with are excited to incorporate AI into their set of tools,” says NowSecure Co-Founder Andrew Hoog, who recently built a simple news reader with GitHub Copilot and estimates it would have taken 10 times longer without that help. “But all code, whether written directly by developers or with AI assistance, can include security and privacy issues. I believe that automated security testing incorporated directly into the developer experience is perhaps even more important today.”

AI Coding Assistants Create Risks to Data Privacy & IP

Devs and security professionals need to understand how AI coding assistants work and what powers them. The tools’ algorithms rely on large volumes of data to learn, find patterns and make decisions, and they require access to existing codebases, which could expose sensitive data. Samsung banned AI workplace chatbots after discovering its source code had been accidentally leaked to ChatGPT. Apple, Amazon, Northrop Grumman and Verizon also reportedly restrict employee use of ChatGPT to prevent the release of confidential information.

As a first step to adoption, organizations should have visibility into what data is being collected and how it’s being used, stored and transmitted in order to prevent exposure to unauthorized third parties. At a minimum, they must take the standard precautions to safeguard sensitive data and prevent unauthorized access. Additional protective measures may also be prudent.
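As a sketch of what one such precaution might look like, the Kotlin program below scans a source tree for hard-coded secrets before any code is shared with an external AI tool. The regex patterns, file extensions and “src” directory are illustrative assumptions; dedicated secret-scanning tools cover far more cases.

```kotlin
import java.io.File

// Minimal sketch of a pre-share secret scan. The regexes are illustrative
// examples only, not an exhaustive ruleset.
val secretPatterns = mapOf(
    "AWS access key" to Regex("AKIA[0-9A-Z]{16}"),
    "generic API key" to Regex("(?i)api[_-]?key\\s*[:=]\\s*[\"'][A-Za-z0-9]{16,}[\"']"),
    "private key header" to Regex("-----BEGIN (RSA |EC )?PRIVATE KEY-----")
)

fun scanForSecrets(root: File): List<String> =
    root.walkTopDown()
        .filter { it.isFile && it.extension in setOf("kt", "java", "swift", "properties") }
        .flatMap { file ->
            file.readLines().mapIndexedNotNull { index, line ->
                secretPatterns.entries
                    .firstOrNull { (_, regex) -> regex.containsMatchIn(line) }
                    ?.let { (label, _) -> "${file.path}:${index + 1}: possible $label" }
            }
        }
        .toList()

fun main() {
    val findings = scanForSecrets(File("src")) // assumed source directory
    findings.forEach(::println)
    if (findings.isNotEmpty()) {
        System.err.println("Resolve these findings before sharing code externally.")
    }
}
```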

Developers must also be careful about the open-source code they use. The tools may suggest external third-party libraries that create supply-chain security risk through dependencies. In addition, AI coding assistants may not have access to the latest frameworks or libraries and so may suggest outdated code. Companies need to understand or stipulate which libraries the tools use and whether those libraries can be trusted, and they should be mindful of potential legal issues stemming from plagiarism.
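One way to keep that risk in check is to constrain dependency sources and versions in the build itself. The Gradle Kotlin DSL fragment below is a minimal sketch: the okhttp coordinate is just a placeholder example, while failOnDynamicVersions() and failOnChangingVersions() are standard Gradle resolution-strategy controls.

```kotlin
// build.gradle.kts -- sketch of constraining where dependencies come from
// and pinning exact versions, so an AI-suggested library can't silently
// resolve to an unvetted or floating version.
repositories {
    // Restrict resolution to repositories the team has vetted.
    mavenCentral()
    google()
}

dependencies {
    // Pin exact versions rather than dynamic ranges like "4.+".
    implementation("com.squareup.okhttp3:okhttp:4.12.0")
}

configurations.all {
    resolutionStrategy {
        // Fail the build if a dynamic or changing version sneaks in.
        failOnDynamicVersions()
        failOnChangingVersions()
    }
}
```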

Automated security testing incorporated directly into the developer experience is perhaps even more important today. – NowSecure Co-Founder Andrew Hoog

AI Coding Tools Can Create Security Bugs

Organizations should be aware that AI coding assistants have flaws just like humans do. The tools can mislead inexperienced developers and make them overconfident. As mentioned above, a Stanford study of Codex found participants who used it were more likely to write incorrect and insecure code compared to the control group. Worse, they were more likely to say their insecure answers were secure.

Those findings track with an earlier study of GitHub Copilot conducted in 2021 by researchers affiliated with New York University. The NYU researchers produced 89 different scenarios in which Copilot had to finish incomplete code. Of the 1,692 programs it generated, approximately 40% had security vulnerabilities or flaws that could be exploited by an attacker. The study’s authors recommend pairing AI coding assistants with appropriate security-aware tooling during both training and generation to minimize the risk of introducing security bugs.
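Weak cryptography is a typical instance of the flaw classes those studies measured. The Kotlin sketch below contrasts the kind of insecure pattern an assistant might complete (AES in ECB mode) with a safer AES-GCM construction; both snippets are illustrative and not drawn from either study’s corpus.

```kotlin
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec

// Insecure pattern an assistant might suggest -- ECB mode leaks
// plaintext structure and provides no integrity protection:
//   val cipher = Cipher.getInstance("AES/ECB/PKCS5Padding")  // do not use

// Safer pattern: AES-GCM with a fresh random 12-byte IV per message.
fun encrypt(key: SecretKey, plaintext: ByteArray): Pair<ByteArray, ByteArray> {
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
    return iv to cipher.doFinal(plaintext) // store the IV with the ciphertext
}

fun main() {
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()
    val (iv, ciphertext) = encrypt(key, "sensitive data".toByteArray())
    println("iv=${iv.size} bytes, ciphertext=${ciphertext.size} bytes")
}
```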

AI Coding Assistants Lack Contextual Awareness

Because AI coding assistants rely on existing data and codebases to find patterns, they don’t truly understand the broader context of a project, its business requirements or its design principles.

AI coding assistants struggle to generate code for unusual use cases that fall outside what they have learned from, as well as complex debugging scenarios that require deeper understanding and problem solving. Another challenge is that the code may work but not fit the overall goals of the project, its architecture or its long-term maintainability.

To overcome these obstacles, use generative AI tools for inspiration but carefully check the work. Over time, developers can learn to craft better prompts to aid contextual understanding.

Use AI Coding Assistants with Caution

The security considerations outlined above underscore that organizations should continue automated mobile application security testing as usual and add extra monitoring of AI-generated code. Solutions such as NowSecure Platform integrate directly into the dev pipeline for fast, efficient testing and embedded development resources. For example, developers using GitHub can leverage GitHub Copilot to write mobile app code and NowSecure Platform to automatically test that code, all from inside GitHub.

Another best practice is to follow the security standards that are critical to success. Developers should rely on the OWASP Application Security Verification Standard (ASVS) for web apps and the OWASP Mobile Application Security Verification Standard (MASVS) for mobile apps. Leverage NowSecure Platform to test mobile apps against OWASP MASVS requirements and achieve compliance. In addition, the new OWASP Large Language Model Top 10 project helps organizations understand and guard against the risks of LLM applications.
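To make that concrete, the MASVS storage category calls for protecting sensitive data at rest. The Kotlin sketch below uses the AndroidX Security Crypto library to keep a session token encrypted rather than in plaintext SharedPreferences. The API shown (androidx.security:security-crypto) is real, but treat this as a sketch and check current AndroidX documentation before adopting it.

```kotlin
import android.content.Context
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey

// Sketch: store a session token encrypted at rest, in line with MASVS
// storage requirements, instead of writing it to plaintext preferences.
// Requires the androidx.security:security-crypto dependency.
fun storeToken(context: Context, token: String) {
    val masterKey = MasterKey.Builder(context)
        .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
        .build()

    val prefs = EncryptedSharedPreferences.create(
        context,
        "secure_prefs",
        masterKey,
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )

    prefs.edit().putString("session_token", token).apply()
}
```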

Developers should use their own judgment and understanding of building apps when incorporating suggestions from generative AI tools into their code. Follow standards like OWASP for secure coding and test all code to ensure it meets the security bar. Using AI coding assistants can undoubtedly be effective in helping developers release mobile apps quickly and efficiently as long as they are well informed, exercise critical thinking and use the tools wisely. 

To see an AI coding assistant in action, watch the NowSecure Tech Talk “How to Build a Simple Mobile App with GitHub Copilot.” Hoog and Solutions Engineer Kevin Lewis share their observations about what the AI tools can do well and where they come up short. They also recommend how organizations can use GitHub Copilot today to gain efficiencies in mobile app development.