Air Canada's chatbot invented a bereavement refund policy. A tribunal made them honor it. A Chevy dealer's bot got prompt-injected into appearing to agree to sell a Tahoe for $1, and the screenshots were on every tech feed by dinner.
Those were the public ones. The customer-facing ones. The ones we laughed at.
The agents getting wired into your cloud right now have a much bigger blast radius:
β A Slack bot with read access to S3 that summarizes "any file a teammate drops in #ops"
β A Jira agent with an IAM role that can spin up infra to "help triage tickets"
β A Copilot-style assistant with Graph API scopes across SharePoint, Outlook, and Teams
β An IR copilot reading raw CloudTrail and GuardDuty findings into its context window
Every one of those is a prompt-injection sink. And the version that bites cloud teams isn't the direct kind β it's indirect injection: the attacker never talks to your agent. They leave a string somewhere your agent will read. A malicious filename in a bucket. A poisoned Jira description. A calendar invite with hidden instructions. A log line crafted to look like a system prompt. Microsoft already shipped a CVE this year for exactly this β EchoLeak / CVE-2025-32711, zero-click data exfil out of Microsoft 365 Copilot. That's the category, not an outlier.
The control surface most people skip: the agent's IAM role and OAuth scopes are the actual blast radius β the prompt layer is just where the attacker pulls the trigger. If you're a Terraform shop, that means scoped roles, tight trust policies (ExternalId, condition keys, session tags), no * on the resource side, and tool allowlists that are deny-by-default with an audit log on every call. The prompt-side filters are a backstop, not the perimeter.
This is what "we'll figure out AI security later" actually costs in a cloud shop: β An agent with an over-scoped IAM role does something that lights up a SOC 2 CC6.1 finding β Customer data leaves through a model context window instead of an S3 bucket policy β Your detection stack never fires because the "attack" is just text.
I'm running the AI-CSL lab on exactly this problem β wiring agents into real cloud stacks and seeing where they break. So the Lab question this week:
β Where in your cloud is an agent one untrusted string away from a privileged action?
β Are you grounding it (RAG with provenance), validating it (output checks, deny-by-default tool allowlists), or gating it (HITL on anything that touches IAM, money, or customer data)?
β What scope did you almost hand an agent before someone caught it?
Drop your setup in the comments. Pressure-test each other's guardrails before the next CVE has our name on it.