
Secure-by-Design Perspectives

AI Security Invariants


What is a Security Invariant?

A security invariant is a property of a system that prevents a class of security issues from occurring. Security invariants are statements that will always hold true for your business and applications.

In this post, we focus on properties of AI systems that can be enforced by the system itself and that are drafted so they always hold true.

While a traditional cloud security invariant might state "Only the networking team may create and manage a VPC", AI invariants can exist at different levels of the stack.
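
To make the traditional example concrete, here is a minimal sketch of a Service Control Policy that restricts VPC management to a networking role. The role name (NetworkAdmin) and the use of Python to render the policy are assumptions for illustration, not a prescription.

    # Minimal sketch: only a (hypothetical) NetworkAdmin role may create or delete VPCs.
    import json

    restrict_vpc_management_scp = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyVpcManagementExceptNetworking",
                "Effect": "Deny",
                "Action": ["ec2:CreateVpc", "ec2:DeleteVpc"],
                "Resource": "*",
                "Condition": {
                    "ArnNotLike": {"aws:PrincipalArn": "arn:aws:iam::*:role/NetworkAdmin"}
                }
            }
        ]
    }

    print(json.dumps(restrict_vpc_management_scp, indent=2))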

Why AI invariants are different from traditional invariants

Traditional governance assumes deterministic execution and stable control points. AI introduces three challenges:

  1. Non-determinism and probabilistic outputs:
    Identical inputs can produce different outputs, and “safe behavior” isn’t always a binary classification.

  2. Instruction and tool manipulation:
    LLM applications often accept untrusted natural language inputs and are susceptible to prompt injection, instruction override, and tool misuse.

  3. The fragmented and nascent nature of the AI ecosystem:
    Unlike the cloud market, which has coalesced around three to five major providers, the AI ecosystem is composed of hundreds of small startups, few of which have implemented the security controls needed to enforce invariants.

AI Invariants in plain language

Below is a starter set of AI security invariants. Some are fully enforceable today on major inference platforms; others are only partially enforceable, or require compensating controls outside the inference service.

Traditional Cloud Provider Invariants

Traditional cloud provider invariants use existing cloud control-plane APIs to enforce business objectives. They're an extension of classic security invariants, applied here to AI services.

  • Cloud providers must not use our data to improve their services
    In AWS, this can be implemented via the AI services opt-out policy in AWS Organizations.

  • All inference must occur inside the European Union.
    A traditional regional blocking invariant can meet this objective.

  • No one in my organization is permitted to use DeepSeek (the model)
    As an element of third-party risk management, model usage can be controlled with IAM policies; a combined policy sketch follows this list.
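
Below is a minimal sketch of what these three invariants could look like on AWS using boto3 and AWS Organizations. The policy names, the DeepSeek model identifier pattern, and the exact shape of the opt-out policy are illustrative assumptions and should be checked against current AWS documentation; Bedrock cross-region inference profiles may also require additional handling for the EU-only statement.

    # Sketch: enforcing the three invariants above with AWS Organizations policies.
    # Placeholders: policy names and the DeepSeek model identifier pattern.
    import json
    import boto3

    org = boto3.client("organizations")

    # 1. "Cloud providers must not use our data to improve their services"
    #    -> AI services opt-out policy (Organizations policy type AISERVICES_OPT_OUT_POLICY).
    ai_opt_out_policy = {
        "services": {
            "default": {
                "opt_out_policy": {"@@assign": "optOut"}
            }
        }
    }

    # 2. "All inference must occur inside the European Union" and
    # 3. "No one may use DeepSeek"
    #    -> a Service Control Policy denying Bedrock calls outside eu-* regions
    #       and denying invocation of DeepSeek foundation models.
    scp = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyBedrockOutsideEU",
                "Effect": "Deny",
                "Action": "bedrock:*",
                "Resource": "*",
                "Condition": {"StringNotLike": {"aws:RequestedRegion": "eu-*"}}
            },
            {
                "Sid": "DenyDeepSeekModels",
                "Effect": "Deny",
                "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
                "Resource": "arn:aws:bedrock:*::foundation-model/deepseek*"
            }
        ]
    }

    org.create_policy(
        Name="ai-services-opt-out",
        Description="Opt out of AWS AI service data usage",
        Type="AISERVICES_OPT_OUT_POLICY",
        Content=json.dumps(ai_opt_out_policy),
    )
    org.create_policy(
        Name="ai-inference-restrictions",
        Description="EU-only Bedrock inference and no DeepSeek models",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )
    # Both policies still need to be attached to the intended OU or account with attach_policy().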

IAM/Application/Prodsec Invariants

Some invariants have to be implemented by the development teams building the AI tools themselves. These invariants require a deep focus on authentication and authorization to enforce data governance and to ensure that the AI cannot act in a harmful manner.

  • AI access (via RAG, MCP, or similar methods) to company data must leverage the user’s authorization level.
    Most MCP servers now support OAuth, allowing the user's credentials to flow through the tool call to the underlying API, which ensures the AI never has access to data the user cannot see (a sketch of this pattern follows this list).

  • AI tools for use by the public must only have access to public data. Tools with access to any other forms of data must require authentication, and the data access must use the authenticated user’s authorization level.
    This is a security control that must be implemented by the application development team.

  • A company AI system should not injure a human being or, through inaction, allow a human being to come to harm
    An AI should never have access to systems that have life-safety impacts. Even if SkyNet decides to kill all the humans, if it lacks access to the launch systems, it cannot perform the action.
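
As an illustration of the first item in this list, here is a minimal sketch of a tool handler that forwards the calling user's own OAuth token to the downstream API instead of a privileged service credential. The function name, endpoint, and response shape are hypothetical.

    # Hypothetical tool handler: data access always uses the *user's* token,
    # never a privileged service credential, so the AI can only see what the
    # user could see anyway.
    import requests

    INTERNAL_API = "https://internal.example.com/api/documents/search"  # placeholder

    def search_company_docs(query: str, user_access_token: str) -> list[dict]:
        """Search internal documents on behalf of the authenticated user.

        The user's OAuth access token (obtained by the MCP server or RAG
        pipeline during its own auth flow) is passed straight through, so the
        downstream API enforces the user's authorization level.
        """
        response = requests.get(
            INTERNAL_API,
            params={"q": query},
            headers={"Authorization": f"Bearer {user_access_token}"},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()["results"]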

These "application" level invariants cannot be centrally enforced like cloud invariants. They must be implemented in each AI system that's built or deployed by the organization.

Model Enforced Invariants

Model enforced invariants are part of the prompts and guardrails of the LLM itself. They're intended to ensure that the model behaves in the way the organization wants. Some examples (a guardrail configuration sketch follows the list):

  • Under no circumstances should a company AI system produce non-consensual sexual material or CSAM.

  • A company AI system must not be used for high-risk activities, as defined in the EU AI Act, unless approved by legal and compliance.

  • A company AI system must not disparage competitors, say bad things about our company, or insult our CEO.

  • A company AI system should not injure a human being or, through inaction, allow a human being to come to harm.

  • A company AI system must not reveal system prompts, hidden policies, credentials, or tool outputs, even if asked.
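
As a rough sketch of how some of these invariants can be encoded, the snippet below creates an Amazon Bedrock guardrail with boto3. The topic definition, filter strengths, and messaging are assumptions for illustration; Part II will look at these controls in more depth.

    # Sketch: encoding a subset of the invariants above as a Bedrock guardrail.
    # Topic names, definitions, and filter strengths are illustrative only.
    import boto3

    bedrock = boto3.client("bedrock")

    bedrock.create_guardrail(
        name="company-baseline-guardrail",
        description="Baseline model-enforced invariants",
        # Content invariants such as "must not produce sexual material".
        contentPolicyConfig={
            "filtersConfig": [
                {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
            ]
        },
        # "Must not disparage competitors, say bad things about our company, or insult our CEO."
        topicPolicyConfig={
            "topicsConfig": [
                {
                    "name": "CompetitorDisparagement",
                    "definition": "Negative or insulting statements about competitors, "
                                  "our company, or our executives.",
                    "type": "DENY",
                }
            ]
        },
        blockedInputMessaging="This request is not permitted by company policy.",
        blockedOutputsMessaging="This response was blocked by company policy.",
    )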

Due to the non-deterministic nature of LLMs, the requirement that an invariant "will always hold true" also becomes probabilistic. A Service Control Policy can be formally verified with automated reasoning technology, and the presence of guardrails can be enforced in a similar manner. However, the final objective of controlling the model's behavior is never 100% guaranteed.
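
For example, the presence of a guardrail can be required at the IAM policy layer. The sketch below assumes the bedrock:GuardrailIdentifier condition key and a placeholder guardrail ARN; even with this in place, the guardrail itself only reduces, rather than eliminates, the chance of unwanted model behavior.

    # Sketch: deny model invocation unless the approved guardrail is attached.
    # The guardrail ARN (account 111122223333, name EXAMPLE) is a placeholder.
    require_guardrail_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Deny calls made with no guardrail at all.
                "Sid": "DenyInvokeWithoutGuardrail",
                "Effect": "Deny",
                "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
                "Resource": "*",
                "Condition": {"Null": {"bedrock:GuardrailIdentifier": "true"}}
            },
            {
                # Deny calls made with a guardrail other than the approved one.
                "Sid": "DenyInvokeWithWrongGuardrail",
                "Effect": "Deny",
                "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
                "Resource": "*",
                "Condition": {
                    "StringNotLike": {
                        "bedrock:GuardrailIdentifier": "arn:aws:bedrock:*:111122223333:guardrail/EXAMPLE*"
                    }
                }
            }
        ]
    }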

Conclusion

Defining your security invariants is the first step toward moving Generative AI out of the experimental sandbox and into production with confidence. By defining these non-negotiable properties, your organization can maintain a consistent security posture even when dealing with the unpredictable, non-deterministic nature of large language models.

Coming in Part II

Now that we've defined what we want to protect, we need to look at how to implement it in a fragmented cloud landscape. In our next post, we will move from theory to implementation by exploring how to enforce these invariants across the major cloud providers.

We’ll dive deep into specific tools, including:

  • AWS Bedrock Guardrails: How to set global content filters and PII masking.

  • Azure AI Content Safety: Leveraging enterprise-grade detection for jailbreaks and protected material.

  • GCP Vertex AI Safety Settings: Managing threshold-based controls and jailbreak protections.

We’ll explore which cloud features offer rock-solid protection and where platform limitations might still leave your organization exposed. Stay tuned to see how to map your high-level policy intent to real-world cloud configurations.

About Chris Farris

Chris Farris has spent 30 years in IT, focusing on cloud security for the past decade — building security programs, developing risk-based standards, and implementing solutions at cloud scale. He's an organizer of the fwd:cloudsec conference, has presented at AWS re:Inforce, re:Invent, and BSides events, and is an inaugural AWS Security Hero. He offers cloud security advisory services through Securosis. He writes at chrisfarris.com

About Ariel Septon

Ariel Septon is a Cloud Security Researcher and Engineer at Native. With a background in backend and platform engineering, she focuses on the intersection of cloud infrastructure and security. Ariel is an active contributor to open-source infrastructure projects, including the CNCF Crossplane ecosystem, and has spoken at industry conferences such as KubeCon and fwd:cloudsec. Her work centers on multi-cloud patterns and how thoughtful architecture and policy design can prevent security drift before it begins.

The Future of Cloud Security is Native

© 2026 RockSteady Cloud Ltd. D/B/A Native. All rights reserved.
