Secure-by-Design Perspectives
The Question Every Engineering Leader Is Asking About AI Guardrails

Sales Engineering Lead

Almost every organization I've talked to in the last several weeks has surfaced the same question, in some variation:
"We're going hard on AI-assisted development internally. What is Native doing to make sure we can keep moving fast without putting the business at risk?"
The question often arrives with a link. Sometimes it's a story about a coding agent that took down a production database. Sometimes it's a peer's post-mortem. Sometimes it's a quiet incident inside their own org that didn't make the news.
What unifies these conversations isn't fear. The engineering leaders I'm talking to are not flinching away from AI. They're already deep into AI-assisted development, and they intend to go further. What they want to know is what good looks like operationally. How do you give developers, and the agents acting on their behalf, the speed advantage that makes them valuable, while making sure the worst-case action of any single actor is bounded?
That's the architectural constraint. Not "slow agents down." Not "lock everything." Build the boundaries and baselines that let agents operate at full speed, with the worst-case action bounded by the system itself.
This post walks through what one of the organizations we're working with, a fast-growing SaaS company running across AWS, Azure, Google Cloud, and OCI, asked us to help them build. The shorthand they used was simple: "We want to be structurally unable to make our worst mistakes."
Their starting point
The team hadn't suffered a public incident. They were ahead of that. What they recognized was the pattern in their own environment:
Production databases and backup vaults could be deleted in a single API call. Nothing in the cloud configuration prevented it.
IAM roles created by AI-assisted tooling were inheriting broad permissions by default. There was no ceiling on what those roles could ever do.
Their CSPM was flagging risky configurations, but the alerts arrived after the action. Useful for cleanup. Not useful for prevention.
The cloud's own preventative controls were available but unadopted. Earlier attempts to deploy blocking policies had taken down a CI/CD pipeline the team didn't know existed. After that, no one wanted to touch them again.
This is the pattern I see across most engineering orgs right now. The cloud providers have already shipped the controls that solve this problem. The reason teams don't use them isn't disbelief in prevention. It's that nobody trusts a blocking policy they can't simulate first.
Why simulation was the unlock
Every part of the solution we built with this customer depended on one capability: the ability to deploy blocking policies with confidence that they wouldn't break production.
Native's simulator runs every policy against the live environment before it goes into enforcement mode. It pulls from audit logs, from billing telemetry (which catches ephemeral CI/CD resources that point-in-time scans miss), and from current resource state. It produces a complete list of every resource, identity, and pipeline that would be affected.
That changes the conversation from "let's deploy this and hope" to "here is the impact, decide how to handle each line item." The team could grant scoped exceptions, fix the underlying issues, or accept the impact, all before a single block went live.
Without that simulation, the rest of the rollout wouldn't have happened. With it, the security team could finally deploy aggressive preventative controls and trust they would not break the business.
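The core idea behind that kind of simulation can be sketched in a few lines. This is a minimal illustration, not Native's implementation: replay historical audit-log events against a proposed deny rule and list everything the rule would have blocked. The `AuditEvent` fields, the rule shape, and the matching logic are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class AuditEvent:
    principal: str   # who performed the action
    action: str      # e.g. "rds:DeleteDBInstance"
    resource: str    # resource identifier the action targeted

def simulate(deny_rules, events):
    """Replay historical audit events against proposed deny rules and
    return every event the rules would have blocked. Running this
    before enforcement turns "deploy and hope" into a concrete list
    of impacted principals and resources to triage."""
    impacted = []
    for event in events:
        for rule in deny_rules:
            if (event.action.startswith(rule["action_prefix"])
                    and rule["resource_match"] in event.resource):
                impacted.append(event)
                break
    return impacted
```

A real simulator would also fold in billing telemetry and current resource state, as described above; replaying audit logs is just the part that is easiest to show.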
What got deployed
Four pieces of the architecture, in the order that mattered most. Together they form the baseline that lets AI agents move at full speed inside boundaries the system enforces on its own.
Resource protection at the control plane. Production databases across all four clouds (Amazon RDS and DynamoDB, Azure SQL and Cosmos DB, Google Cloud SQL and BigQuery, Oracle Autonomous Database), backup vaults, audit log streams, and organization-membership records were marked non-deletable. Enforcement happens at each cloud's control plane, not at the IAM layer. Even a compromised global admin cannot override these controls.
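On AWS, one native primitive for this kind of control-plane enforcement is a service control policy. The sketch below shows the general shape of such a deny statement as a Python dict; the tag key and the exact action list are assumptions for illustration, not the policies this team deployed.

```python
# Sketch of an AWS-style service control policy (SCP) that denies
# deletion of tagged databases at the organization level, regardless
# of what the caller's IAM permissions allow. The "protection" tag
# key is a hypothetical naming convention.
protect_prod_databases = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDeleteProtectedDatabases",
        "Effect": "Deny",
        "Action": ["rds:DeleteDBInstance", "dynamodb:DeleteTable"],
        "Resource": "*",
        "Condition": {
            "StringEquals": {"aws:ResourceTag/protection": "non-deletable"}
        },
    }],
}
```

Because an SCP is evaluated by the organization's control plane rather than the account's IAM layer, even a principal with `AdministratorAccess` in the account cannot bypass it. Azure, Google Cloud, and OCI have analogous primitives.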
AI blast radius containment. AI agents got freedom to build and deploy at speed, with a hard ceiling on what any single agent could ever do. The architecture made it structurally impossible for an agent to grant itself admin, expose data publicly, or delete protected resources. That outcome holds regardless of what the agent attempts or what permissions IAM technically grants. Agents move at agent speed, and the worst case stays bounded by the cloud control plane.
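The "hard ceiling" mechanic here resembles a permissions boundary: the agent's effective permissions are the intersection of what its role policy grants and what the boundary allows, so broad grants from AI-assisted tooling can never exceed the cap. A toy model of that intersection, with made-up action names:

```python
def effective_allow(role_grants, boundary):
    """With a permissions boundary in place, the effective permission
    set is the intersection of what the role policy grants and what
    the boundary allows. Anything the role grants outside the
    boundary is silently a no-op."""
    return role_grants & boundary

# Hypothetical agent role that inherited overly broad permissions
agent_role = {
    "s3:PutObject",
    "iam:CreateRole",          # privilege-escalation path
    "iam:AttachRolePolicy",    # privilege-escalation path
    "rds:DeleteDBInstance",    # destructive action
}

# Ceiling on what any agent can ever do
agent_boundary = {"s3:PutObject", "s3:GetObject", "lambda:InvokeFunction"}

effective_allow(agent_role, agent_boundary)  # only s3:PutObject survives
```

The point of the model: the agent keeps full speed inside the boundary, and the escalation and deletion paths are unreachable no matter what the role policy says.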
Cross-cloud parity from a single plane. The same set of security intents was translated into each cloud's native primitives and deployed consistently across AWS, Azure, Google Cloud, and OCI. The intents covered privilege-escalation paths, destructive actions against protected resources, exfiltration of root credentials and key material, and cross-account and cross-tenant trust. The team didn't rewrite policies per cloud.
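Conceptually, "write the intent once, translate per cloud" looks like a fan-out from one intent to each provider's native enforcement primitive. The primitives named below are real cloud features; the translation table itself is a simplified assumption for illustration.

```python
# Each cloud's native preventative primitive, keyed by provider.
# A real translation layer would emit full policy documents in each
# provider's format; this sketch only records which primitive carries
# the intent.
CLOUD_PRIMITIVES = {
    "aws":   "Service Control Policy",
    "azure": "Azure Policy (deny effect)",
    "gcp":   "Organization Policy constraint",
    "oci":   "Security Zone policy",
}

def translate(intent):
    """Fan a single security intent out to every cloud, pairing it
    with that cloud's enforcement primitive."""
    return {cloud: {"intent": intent, "primitive": primitive}
            for cloud, primitive in CLOUD_PRIMITIVES.items()}

translate("deny-delete-protected-databases")
```

The payoff is the one described above: the team maintains one set of intents, and parity across four clouds falls out of the translation rather than four hand-written policy sets.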
Exceptions handled in the moment. When something gets blocked, the requester gets a Slack ping with a one-click override request. Each one is scoped to a specific tag or role, logged, and reviewed before it goes through. That friction is the point. It shows up when someone is actually trying to get something done, not weeks later in a report nobody reads.
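The exception flow has three invariants worth making explicit: every override is scoped (never a wildcard), every request is logged, and approval happens out-of-band. A minimal sketch of those invariants, with hypothetical field names:

```python
import time
from dataclasses import dataclass, field

@dataclass
class OverrideRequest:
    requester: str
    blocked_action: str
    scope: str                 # a specific tag or role, never "*"
    approved: bool = False     # flipped only by an out-of-band reviewer
    created_at: float = field(default_factory=time.time)

def request_override(requester, blocked_action, scope, audit_log):
    """Sketch of the in-the-moment exception flow: reject unscoped
    requests, log everything, and return a pending request for a
    reviewer to approve (e.g. via a one-click Slack action)."""
    if scope == "*":
        raise ValueError("overrides must be scoped to a specific tag or role")
    request = OverrideRequest(requester, blocked_action, scope)
    audit_log.append(request)
    return request
```

Nothing in this sketch auto-approves: the request lands in the log as pending, which is the structural version of the friction described above showing up at the moment of work.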
Results
In the months since deployment:
Zero destructive infrastructure events across the in-scope cloud accounts.
High-risk actions blocked at execution time, including attempted deletions of protected production resources.
Preventative policies live across AWS, Azure, Google Cloud, and OCI, deployed without a single production incident attributable to rollout.
Critical resources protected by default for every new account added to the org hierarchy.
The security team shifted from triaging a CSPM backlog to operating a control plane where the most damaging actions are simply not possible. AI adoption continued to scale. The failure paths it would otherwise have opened were closed before any agent, or any human, could walk into them.
Back to the question
When engineering leaders ask what Native is doing about AI-driven infrastructure risk, this is the answer. The point isn't to constrain AI. The point is simpler than that: make the worst case impossible. Every actor in the system, agent or human, gets a bounded blast radius, and full speed inside it.
If this matches a question you're working on right now, we'd like to hear about it. Reach out here and I'll walk you through what we built with this customer in more detail.