
AI-Native Development and the Future of AppSec

Amir Kavousian
#appsec #ai

What is AI-Native Development?

The history of programming languages reveals a steady move toward higher abstraction, bridging machine code and human thought. Domain-Specific Languages (DSLs) address specific problem areas, while Visual Programming and Low-Code/No-Code platforms enable application development with minimal coding, making software creation more accessible. AI code generators, powered by Large Language Models (LLMs), mark the next step. By translating unstructured human input into executable code, they promise a more intuitive interface between human intent and software development.

The first wave of AI code assistants, like GitHub Copilot, focused on code completion, generating context-aware code snippets to complement developer input. Since its 2022 general-availability debut, Copilot has seen rapid adoption, growing its user base by over 60% each quarter to exceed 1.3 million users within two years. Competitors such as Codeium, Tabnine, Anysphere (maker of Cursor), and Continue.dev have since developed tools that allow custom model selection or training on company-specific data.

More recently, a new category of platforms has emerged, aiming to automate broader programming tasks. These AI-Native Development tools, including Augment, Magic.dev, Cognition Labs, and Poolside, go beyond code completion by generating entire functional components, pushing software development toward full automation.

How Application Security Will Be Impacted by AI-Native Development

As we move toward a predominantly AI-driven coding environment, the AppSec toolchain needs to adapt to the new developer workflows. I believe the following trends will emerge as a result of the proliferation of AI-Native Development.

Developers will spend more time on system design and less on implementation

As AI abstracts implementation details, developers can shift their focus from the technical nuances of coding to solving core business problems. This evolution emphasizes skills such as problem definition and system design over traditional coding expertise. At the same time, human oversight will still play a critical role, with developers taking on responsibilities in code review and quality assurance to maintain robust and secure systems.

AppSec tools will need to adapt to this new workflow. While a significant number of AppSec tools focus on helping developers write more secure code (the “Shift Left” movement), far fewer help developers design secure architectures and enforce security requirements during development and in production (“Shift Right”).

Security reviews will become more critical

While AI-Native development platforms excel in tasks like documentation generation, code maintenance, and specification-driven development, they often lack comprehensive insights into security risks, business contexts, and architectural decisions.

Code scanners are great at finding security bugs such as misconfiguring an authentication method or using an insecure encryption algorithm. However, they are much less effective in finding a different class of vulnerabilities that are caused by logical flaws in the application. Finding logical vulnerabilities (such as missing authentication) requires an understanding of the business context, security controls, and design principles that are outside the codebase.
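To make the contrast concrete, consider the minimal, hypothetical sketch below (Flask is used purely for illustration; the endpoint and helper are invented). A pattern-matching scanner will reliably flag the weak hash, but it has no way to know the second route was designed to be admin-only:

```python
import hashlib

from flask import Flask

app = Flask(__name__)

def remove_user_from_db(user_id: str) -> None:
    ...  # placeholder for the real persistence call

def hash_password(password: str) -> str:
    # A scanner catches this mechanically: MD5 is a known-weak algorithm,
    # detectable with a simple pattern match on the call site.
    return hashlib.md5(password.encode()).hexdigest()

@app.route("/admin/users/<user_id>", methods=["DELETE"])
def delete_user(user_id: str):
    # A scanner cannot know this endpoint was designed to be admin-only.
    # The missing authentication check is a logical flaw; catching it
    # requires business and design context that lives outside the code.
    remove_user_from_db(user_id)
    return {"status": "deleted"}
```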

To address these gaps, integrating system design context and conducting thorough security design reviews are essential for building stable and secure systems.

By combining AI-driven development tools with system design and security reviews, AppSec tools empower development teams to ship secure software faster.

Figure: Security reviews are the most common bottleneck in the SDLC; the gap in security coverage and the delay in software releases widen as team size grows.

Secure-by-Design mandates will drive the need for new security tools

The Secure-by-Design mandate represents a fundamental shift in the approach to cybersecurity, moving from reactive to proactive security measures. Secure-by-Design principles emphasize embedding security early in the planning and development phases of a product.

Since AI-Native Development relies heavily on application specs, it is important to ensure that the application’s architecture is secure. In the new AI-Native workflow, design is still a human-provided input; it is therefore error-prone and requires validation. The next generation of AppSec tools needs to assist developers with architectural design reviews and act as an assistant during the design phase.
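One way to picture that assistance (the schema and the required-control policy below are hypothetical, not an existing tool’s format) is a design linter that checks a machine-readable architecture description for missing controls before any code is generated:

```python
# Hypothetical machine-readable design: each component declares the
# security controls it implements.
design = {
    "components": [
        {"name": "public-api", "exposed": True, "controls": ["authn", "rate-limit"]},
        {"name": "payments-service", "exposed": True, "controls": ["rate-limit"]},
        {"name": "batch-worker", "exposed": False, "controls": []},
    ]
}

# Illustrative policy: every internet-exposed component needs these.
REQUIRED_FOR_EXPOSED = {"authn", "rate-limit"}

def review_design(design: dict) -> list[str]:
    findings = []
    for component in design["components"]:
        if component["exposed"]:
            for control in sorted(REQUIRED_FOR_EXPOSED - set(component["controls"])):
                findings.append(f"{component['name']}: missing required control '{control}'")
    return findings

if __name__ == "__main__":
    for finding in review_design(design):
        print(finding)  # payments-service: missing required control 'authn'
```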

To address these needs, a new generation of security tools will have to emerge.

AI-Driven Development use cases will consolidate

Contrary to early claims that AI-Native Dev tools can solve arbitrary tasks, there is no evidence that they can support a “prompt-to-application” workflow, as many hoped they would. Instead, I expect AI-Native Dev tools to specialize in a few distinct, targeted tasks.

The implication is that developers still need to design the architecture themselves and then feed it into the AI-Native Dev tools to generate the code, one component at a time.

Code generation and scanning will be commoditized

AI-Native Development is poised to commoditize code generation. Developer adoption of AI-Native development is all but guaranteed, and security tooling has to adapt. Since code scanning happens inside the development pipeline, Static Application Security Testing (SAST), Software Composition Analysis (SCA), secrets detection, and Infrastructure as Code (IaC) scanning tools will have to find a way to integrate with these AI-driven workflows.
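As a sketch of what that integration could look like, a merge gate might run each scanner over AI-generated changes and block on findings. The specific tools and flags here are assumptions; substitute whatever SAST/SCA/secrets/IaC scanners the team already runs:

```python
import subprocess
import sys

# Hypothetical pipeline gate: run each scanner over the working tree and
# fail the build if any of them reports findings.
SCANNERS = [
    ["semgrep", "scan", "--config", "auto", "--error"],  # SAST (example)
    ["checkov", "-d", "."],                              # IaC scanning (example)
]

def run_gate() -> int:
    for command in SCANNERS:
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"gate failed: {' '.join(command)}", file=sys.stderr)
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(run_gate())
```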

On the other hand, the crowded competitive landscape of code scanners and the growing popularity of open-source solutions will make end users re-evaluate their budget allocations and increase their investment in high-return areas such as secure-by-design and SDLC integration.

Design Primitives and Secure Building Blocks will be standardized and integrated into SDLC

The concept of “paved paths” originates from Netflix’s “paved road” methodology, which emphasizes providing developers with standardized, opinionated defaults (design primitives, secure building blocks) to streamline workflows and enhance productivity. This approach has been adopted by companies like Spotify and Google to improve developer efficiency and scalability. By treating infrastructure, deployment, and security as products, these organizations have created standardized, user-friendly processes that developers are inclined to follow, leading to more consistent and efficient development practices.
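In code, a paved-path primitive is simply an opinionated default that makes the secure choice the easy one. A minimal hypothetical example (the function name and policy are illustrative):

```python
import ssl
import urllib.request

def paved_http_get(url: str, timeout: float = 5.0) -> bytes:
    """Hypothetical paved-path primitive: the blessed way to make an
    outbound HTTP call. HTTPS, certificate verification, and a timeout
    are defaults that neither developers nor AI code generators need
    to remember."""
    if not url.startswith("https://"):
        raise ValueError("paved path requires HTTPS")
    context = ssl.create_default_context()  # verifies certificates by default
    with urllib.request.urlopen(url, timeout=timeout, context=context) as response:
        return response.read()
```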

In the new world of AI-Native development, these design primitives can be baked directly into the AI toolchain. Since design primitives are customer-specific, reflecting each company’s risk profile and regulatory requirements, I foresee two plausible scenarios for integrating secure-by-design into AI-Native development.

  1. Customer-tuned models such as Poolside can directly code the design primitives into their AI model.
  2. Generic models such as Copilot would have to rely on user input for each task to follow design primitives. This reliance on the accuracy and completeness of user input degrades the user experience and increases the chance of errors and omissions.

Regardless of how the design primitives are communicated to the AI tool, a human-in-the-loop is still needed to verify their implementation. Furthermore, design primitives need to be associated with security controls to support compliance.
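That association could be as simple as a shared registry that both a verifier and a compliance report read from. A hypothetical sketch (the primitive names and control IDs are illustrative):

```python
# Hypothetical registry linking each design primitive to the security
# controls it satisfies. A verifier confirms the primitive is used;
# a compliance report cites the associated controls as evidence.
PRIMITIVE_CONTROLS = {
    "paved_http_get": ["TLS-IN-TRANSIT", "CERT-VALIDATION"],
    "secure_bucket": ["ENCRYPTION-AT-REST", "NO-PUBLIC-ACCESS"],
}

def controls_for(primitives_used: list[str]) -> set[str]:
    covered: set[str] = set()
    for primitive in primitives_used:
        covered.update(PRIMITIVE_CONTROLS.get(primitive, []))
    return covered

print(controls_for(["paved_http_get"]))
# {'TLS-IN-TRANSIT', 'CERT-VALIDATION'} (set order may vary)
```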

In short: verifying the implementation of design primitives and associating them with security controls are currently not supported by any AppSec tool, and I expect new AppSec tools to innovate in these areas.

Compliance automation will be integrated into SDLC

Compliance is a key driver for implementing security measures in the SDLC. AI-Native development environments can embed compliance management directly into the development workflow.
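For illustration, a pipeline step might emit a machine-readable evidence record whenever a security check passes, so an audit can trace a control back to a specific build. The record format and control ID below are hypothetical:

```python
import json
from datetime import datetime, timezone

def evidence_record(check: str, control_id: str, commit: str) -> str:
    # Hypothetical evidence record: links one passed pipeline check to
    # the compliance control it supports, keyed to a specific commit.
    return json.dumps({
        "check": check,
        "control": control_id,
        "commit": commit,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "result": "pass",
    })

print(evidence_record("secrets-scan", "SOC2-CC6.1", "a1b2c3d"))
```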

Verification becomes a critical component of the development workflow

AI-Native Development tools increase speed and efficiency, but they also bring new risks. Without robust verification, teams have no way to ensure that AI-generated outputs are correct, secure, and aligned with business and security requirements.

In the context of AI-Native Development, a verifier is any tool, process, or mechanism that ensures the outputs of automated systems—whether code, designs, or configurations—are correct, secure, and aligned with business and security requirements.

Verifiers can take several forms, ranging from automated checks in the pipeline to human-in-the-loop review; the sketch below illustrates a simple automated form.
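A minimal sketch of such an automated verifier (the interface and the two checks are hypothetical): each verifier is a named predicate over a generated artifact, and a suite of them gates the output:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verifier:
    # Hypothetical interface: a named check over a generated artifact
    # (code, config, or design text).
    name: str
    check: Callable[[str], bool]

def verify(artifact: str, verifiers: list[Verifier]) -> list[str]:
    """Return the names of the verifiers that failed."""
    return [v.name for v in verifiers if not v.check(artifact)]

verifiers = [
    Verifier("no-hardcoded-secret", lambda code: "AWS_SECRET" not in code),
    Verifier("uses-paved-http", lambda code: "urllib.request.urlopen(" not in code
             or "paved_http_get" in code),
]

generated = 'resp = urllib.request.urlopen("http://example.com")'
print(verify(generated, verifiers))  # ['uses-paved-http']
```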

Summary

AI-Native Development helps teams write code quickly, but application security and compliance needs still have to be addressed. While AI coding assistants can potentially commoditize code generation and integrate code scanning, they do not fully address application security concerns. Most importantly, the application design, which remains human-provided, will become an even more crucial juncture for establishing security. In addition, implementation verification is still required to make sure AI platforms stay within the guardrails that security teams have specified for them, and to connect the security results of generic models to company-specific standards.

With AI models’ performance stabilizing, it is now easier to create a compelling vision for the role of AppSec tools in the modern workflows facilitated by AI-Native development environments. In my view, these environments will embed design primitives and secure building blocks directly into code generation and bring compliance automation into the SDLC.

To adapt to the new workflows of predominantly AI-driven development, the next generation of AppSec tools needs to support secure architecture design, security design reviews, and the enforcement of design primitives throughout development and in production.

AI-Native Development accelerates code creation, but it also introduces risks that require robust verification. Verifiers ensure the correctness and security of AI-generated outputs by checking them against the application design, the organization’s security controls, and its compliance requirements.

Acknowledgements

I want to sincerely thank Rami McCarthy and Ian Livingstone for their feedback on drafts of this article.
