This post has a video example.
**TLDR:** Console injection. Anything running in a browser can print a control menu to the console. Claude reads it, learns what it can do, and controls the site without an API. No back end needed. Super simple.
Want the full picture? Keep reading. Want your LLM to build this for you? Copy/paste this whole post.
---
**THE PROBLEM**
You want your LLM to control something in your browser. Your options aren't great:
1. Claude-in-Chrome's built-in tools. It doesn't know what your site does or how to use it. It takes screenshots and guesses. Slow and token-heavy.
2. Build an API server. Now you're managing keys, hosting, and paying per call.
**THE SOLUTION: CONSOLE INJECTION**
There's a third option. Whatever you're building loads in the browser. On load, it prints a list of controls the AI can use. The LLM reads the console, sees the control list, and runs JavaScript to call those controls directly.
It's just a simple JavaScript control list sitting in the console. You can even add a workflow so the AI has an idea of what it can do with those controls.
This works for anything that runs in a browser. A SaaS tool, an internal dashboard, a local dev environment, a canvas editor, a data pipeline UI - if it renders in a browser tab, you can give an LLM a control panel for it.
Here's what it looks like in one of my tools (hit F12 to see your own console):
```
[AI-ACCESSIBLE] This app can be controlled via JavaScript.
* Sidekick.help() - Complete tools reference (returns JS object)
* Sidekick.teach() - Full teaching guide, 14 sections logged to console
* Sidekick.tool(name, input) - Execute any tool directly
* Sidekick.batchAutomap(mappings) - Smart batch mapping (recommended)
help() + teach() contain everything needed. No need to fetch external files.
```
The LLM sees that, and it's immediately trained. It can read state, click buttons, type things, batch operations - whatever you've exposed.
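Here's roughly what produces a banner like that. This is a stripped-down sketch of the pattern, not my actual Sidekick code - the `Panel` object, its two tools, and the stub state are all invented for illustration:

```javascript
// Minimal console-injection sketch. `Panel`, its tools, and the stub
// state are hypothetical; swap in your app's real handlers.
const Panel = {
  state: { title: "untitled" },
  tools: {
    // Each tool pairs a description the AI reads with a handler it calls.
    getState: {
      desc: "getState() - returns current app state as a JS object",
      run: () => Panel.state,
    },
    setTitle: {
      desc: "setTitle(text) - renames the document",
      run: (text) => { Panel.state.title = text; return "ok"; },
    },
  },
  // help() returns the control list so the AI can read it programmatically.
  help() {
    return Object.values(this.tools).map((t) => t.desc);
  },
  // tool(name, ...args) executes a control by name.
  tool(name, ...args) {
    return this.tools[name].run(...args);
  },
};

// On load, print the menu so any AI reading the console finds it.
console.log("[AI-ACCESSIBLE] This app can be controlled via JavaScript.");
Panel.help().forEach((line) => console.log("* Panel." + line));
```

From there the AI just runs JavaScript in the tab: `Panel.help()` to see the list, `Panel.tool("setTitle", "Building A")` to act. No protocol, no server, no keys.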
**IT GETS BETTER**
You can have the LLM tell you what it *wishes* it could do. It takes a screenshot, looks at your UI, and writes a list of the controls it wants. Feed that list to an LLM, and it writes the implementation. Then you expose those controls on your site.
The AI designs its own interface.
**AND IT'S FOR ANY LLM WITH BROWSER CONTROL**
When other models get browser access, they'll read the same console menu. No updates needed.
Google is actually building a browser standard called WebMCP that does something similar - lets websites expose tools for AI agents. It's in Chrome Canary right now behind a feature flag. I started building my version in October 2025, before they shipped theirs in February 2026.
**SO WHY NOT JUST USE WEBMCP?**
WebMCP could push the same buttons. It can expose tools with descriptions, and an AI that supports the protocol can call them. But it can't *teach* the AI when to push them or why.
Console injection doesn't just give the AI a tool list. It gives the AI understanding. In my version, there's a teach() command that prints 14 sections of context about how the app works, what workflows look like, what the current state means. It's the difference between handing someone a remote control vs. handing them a remote control and explaining what's on every channel.
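A teach() command can be as simple as logging titled context sections. This is a cut-down sketch of the idea - the section names and contents here are invented, and my real version has 14 sections, not 3:

```javascript
// Sketch of a teach() command: logs titled context sections so the AI
// learns workflows and gotchas, not just tool names. Content is invented.
const GUIDE = {
  "State model": "Signs belong to rooms; rooms belong to floors.",
  "Typical workflow": "1) read state 2) find unmapped signs 3) batch-map them.",
  "Gotchas": "Chunk batch calls over 500 items to avoid UI stalls.",
};

function teach() {
  Object.entries(GUIDE).forEach(([title, body], i) => {
    console.log(`--- Section ${i + 1}: ${title} ---`);
    console.log(body);
  });
  return `${Object.keys(GUIDE).length} sections logged`;
}
```

The payoff is that the AI doesn't have to rediscover your app's mental model by trial and error every session - it reads the guide once and acts like it's used the tool before.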
The other practical difference: WebMCP requires both the browser AND the AI to support the protocol. Console injection works today, in any browser, with any AI that can read a console and run JavaScript.
**REAL EXAMPLE**
I use this to map thousands of signs for large buildings. Pretty niche, but it shows what console injection can do at scale: