Your AI agent is one untrusted string away from a privileged action
Air Canada's chatbot invented a bereavement refund policy. A tribunal made them honor it. A Chevy dealer's bot got prompt-injected into appearing to agree to sell a Tahoe for $1, and the screenshots were on every tech feed by dinner.

Those were the public ones. The customer-facing ones. The ones we laughed at. The agents getting wired into your cloud right now have a much bigger blast radius:

↳ A Slack bot with read access to S3 that summarizes "any file a teammate drops in #ops"
↳ A Jira agent with an IAM role that can spin up infra to "help triage tickets"
↳ A Copilot-style assistant with Graph API scopes across SharePoint, Outlook, and Teams
↳ An IR copilot reading raw CloudTrail and GuardDuty findings into its context window

Every one of those is a prompt-injection sink. And the version that bites cloud teams isn't the direct kind. It's indirect injection: the attacker never talks to your agent. They leave a string somewhere your agent will read. A malicious filename in a bucket. A poisoned Jira description. A calendar invite with hidden instructions. A log line crafted to look like a system prompt.

Microsoft already shipped a CVE this year for exactly this: EchoLeak (CVE-2025-32711), zero-click data exfiltration out of Microsoft 365 Copilot. That's the category, not an outlier.

The control surface most people skip: the agent's IAM role and OAuth scopes are the actual blast radius. The prompt layer is just where the attacker pulls the trigger. If you're a Terraform shop, that means scoped roles, tight trust policies (ExternalId, condition keys, session tags), no "*" on the resource side, and tool allowlists that are deny-by-default with an audit log on every call. The prompt-side filters are a backstop, not the perimeter.
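What "scoped role, tight trust policy, no wildcard resources" can look like in practice: a minimal Terraform sketch for a Slack-summarizer agent. All names (role name, bucket, variables) are illustrative assumptions, not a prescription, and session tags and additional condition keys are omitted for brevity.

```hcl
variable "agent_runner_role_arn" { type = string } # role of the orchestrator that assumes this one
variable "agent_external_id"     { type = string } # shared secret checked on AssumeRole

# Trust policy: only the orchestrator role, and only with the expected ExternalId
data "aws_iam_policy_document" "agent_trust" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = [var.agent_runner_role_arn]
    }
    condition {
      test     = "StringEquals"
      variable = "sts:ExternalId"
      values   = [var.agent_external_id]
    }
  }
}

resource "aws_iam_role" "slack_summarizer" {
  name               = "slack-summarizer-agent"
  assume_role_policy = data.aws_iam_policy_document.agent_trust.json
}

# Permissions: read-only, one bucket, one prefix -- no "*" on the resource side
data "aws_iam_policy_document" "agent_s3_read" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::ops-drop-bucket/ops/*"]
  }
}

resource "aws_iam_role_policy" "agent_s3_read" {
  name   = "s3-read-ops-prefix"
  role   = aws_iam_role.slack_summarizer.id
  policy = data.aws_iam_policy_document.agent_s3_read.json
}
```

The point of the shape: even if a poisoned file convinces the agent to "help" with something else, the role it runs under can only read one prefix of one bucket.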
This is what "we'll figure out AI security later" actually costs in a cloud shop:

↳ An agent with an over-scoped IAM role does something that lights up a SOC 2 CC6.1 finding
↳ Customer data leaves through a model context window instead of an S3 bucket policy
↳ Your detection stack never fires, because the "attack" is just text.
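The deny-by-default tool allowlist with an audit log, mentioned above as the other half of the control surface, can be sketched in a few lines of Python. `ToolGateway`, the allowlist contents, and the tool names here are all hypothetical, not a real library; the idea is simply that every tool call passes one choke point that denies anything unlisted and logs every decision.

```python
# Sketch: deny-by-default tool allowlist with an audit log on every call.
# ToolGateway and all tool names are illustrative, not a real framework.
import json
import time

class ToolDenied(Exception):
    pass

class ToolGateway:
    """Every agent tool call goes through here: deny unless allowlisted, log everything."""

    def __init__(self, allowlist, audit_sink):
        self.allowlist = allowlist    # {tool_name: set of allowed argument names}
        self.audit_sink = audit_sink  # callable that receives one JSON line per call

    def call(self, tool_name, tool_fn, **kwargs):
        # Allowed only if the tool is listed AND every argument name is expected
        allowed = tool_name in self.allowlist and set(kwargs) <= self.allowlist[tool_name]
        self.audit_sink(json.dumps({
            "ts": time.time(),
            "tool": tool_name,
            "args": sorted(kwargs),
            "decision": "allow" if allowed else "deny",
        }))
        if not allowed:
            raise ToolDenied(f"tool call blocked: {tool_name}")
        return tool_fn(**kwargs)

audit_log = []
gw = ToolGateway({"s3_get_object": {"bucket", "key"}}, audit_log.append)

# On the allowlist: goes through, and is logged
gw.call("s3_get_object", lambda bucket, key: f"{bucket}/{key}",
        bucket="ops-drop-bucket", key="ops/runbook.md")

# Not on the allowlist: denied by default, and still logged
try:
    gw.call("ec2_run_instances", lambda **kw: None, instance_type="p5.48xlarge")
except ToolDenied:
    pass
```

Note the injected prompt never touches this layer; whatever the model "decides," the gateway only ever executes what the allowlist already permitted, and the audit log gives your detection stack a signal that raw text attacks otherwise never produce.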