The LLM Does Not Touch Your Network
Before anything else, get this right in your head: the language model cannot execute code. It cannot SSH into a router. It cannot run a Python function. What it can do is produce text — and that text, if you structure it correctly, looks like a tool call request.
Your Python code reads that request. Your Python code decides whether to honor it. Your Python code runs the actual SSH session.
That separation is not a design choice you can skip. It is where every safety control lives.
---
Three Access Levels. No Exceptions.
Think of these like privilege levels on a Cisco router.
Every tool you build belongs to exactly one of three levels:
- READ — show commands, lookups, pings. Runs automatically. Cannot change anything.
- WRITE — config changes, interface bounces. Agent proposes, you type y to approve.
- ADMIN — high-impact operations. You must type YES I CONFIRM exactly.
You are running this agent against client infrastructure. The access level is not a suggestion.
Here is how you define them in Python — plain string constants, nothing fancy:
READ = "read" # privilege 1 — show commands only, auto-execute
WRITE = "write" # privilege 7 — config changes, needs y/n approval
ADMIN = "admin" # privilege 15 — destructive ops, needs YES I CONFIRM
---
ToolResult: What Every Tool Returns
Every tool returns exactly the same shape. This is important — the agent loop always knows what it is getting back.
```python
class ToolResult:
    """
    Every tool returns this. Always the same three fields:
      success — True or False
      data    — what came back, e.g. {"raw_output": "..."}
      error   — error message if success is False, empty string otherwise
    """
    def __init__(self, success: bool, data: dict, error: str = ""):
        self.success = success
        self.data = data
        self.error = error
```
Usage:

```python
result = ToolResult(success=True, data={"output": "neighbor FULL"})
print(result.success)  # True
```
No magic. No decorators. A class with three attributes.
---
BaseTool: The Template Every Tool Follows
Think of BaseTool like a standard interface config template on a Cisco router.
The template defines what every interface must have: description, IP, OSPF settings. You never activate the template itself — you apply it to a real interface and fill in the values.
BaseTool works the same way. It defines what every tool must declare:
- name — what the LLM types to call this tool
- description — what it does (the LLM reads this to decide which tool to use)
- category — READ, WRITE, or ADMIN
- parameters — what inputs it needs (the LLM reads this to know what to pass)
- execute() — the actual work
```python
class BaseTool:
    """
    Template that every tool must follow.
    Inherit from this, fill in the four fields, implement execute().
    """
    name = ""          # e.g. "run_show_command"
    description = ""   # e.g. "Execute a show command via SSH"
    category = READ    # READ / WRITE / ADMIN
    parameters = {}    # e.g. {"device": "hostname or IP"}

    def execute(self, **kwargs) -> ToolResult:
        # Subclasses must override this.
        # If they forget, they get a clear error — not silent failure.
        raise NotImplementedError(
            f"Tool '{self.name}' has no execute() method. "
            f"You must implement it before registering this tool."
        )
```
You never use BaseTool directly. You build a specific tool that inherits from it. If you forget to write execute(), you get a clear error instead of silent failure:
```python
template = BaseTool()
try:
    template.execute()
except NotImplementedError as e:
    print(f"Good — BaseTool enforces the contract: {e}")
```
---
ToolRegistry: The Agent's Contact Book
The registry solves two problems at once.
Problem 1: The LLM produces text like run_show_command. Your Python needs to find the actual tool object that matches that name. Without a registry you need an if/elif chain — one branch per tool. That breaks every time you add a tool.
Problem 2: The LLM needs to know what tools exist and what they do. Without a registry you manually paste descriptions into the system prompt and remember to update them whenever something changes.
The registry fixes both: register a tool once, and it is automatically findable by name AND automatically included in the LLM system prompt.
```python
class ToolRegistry:
    """
    The agent's toolbox. Does two jobs:
      1. Stores tools so the agent can find them by name at runtime.
      2. Generates the text block injected into the LLM system prompt.
    """
    def __init__(self):
        self._tools = {}  # { tool_name: tool_object }

    def register(self, tool: BaseTool):
        """Add a tool to the toolbox."""
        self._tools[tool.name] = tool
        print(f"Registered: {tool.name} [{tool.category}]")

    def get(self, name: str):
        """Look up a tool by name. Returns None if not found."""
        return self._tools.get(name)

    def describe_all(self) -> str:
        """Generate the tool list for the LLM system prompt."""
        lines = []
        for tool in self._tools.values():
            lines.append(
                f"{tool.name} [{tool.category}]: "
                f"{tool.description} | params: {tool.parameters}"
            )
        return "\n".join(lines)

registry = ToolRegistry()
```
When the agent loop builds its system prompt, it calls registry.describe_all() and injects the result. The LLM reads those descriptions and parameter lists, then produces structured tool call requests. Your Python looks up the tool by name in _tools and executes it. The model never touches the execution path directly.
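That round trip can be sketched end to end. `ShowVersionTool` is a hypothetical stand-in, and the stripped-down registry below mirrors the class above so the snippet runs standalone:

```python
class ToolRegistry:
    # Mirrors the registry above: store by name, describe for the prompt.
    def __init__(self):
        self._tools = {}

    def register(self, tool):
        self._tools[tool.name] = tool

    def get(self, name):
        return self._tools.get(name)

    def describe_all(self):
        return "\n".join(
            f"{t.name} [{t.category}]: {t.description} | params: {t.parameters}"
            for t in self._tools.values()
        )

class ShowVersionTool:
    """Hypothetical tool used only to exercise the registry."""
    name = "show_version"
    category = "read"
    description = "Return the device software version"
    parameters = {"device": "hostname or IP"}

registry = ToolRegistry()
registry.register(ShowVersionTool())

prompt_block = registry.describe_all()   # injected into the system prompt
tool = registry.get("show_version")      # runtime lookup by name
missing = registry.get("reload_device")  # unknown name → None, never a crash
```

The `None` return on an unknown name matters: when the model hallucinates a tool that does not exist, the loop gets a lookup miss it can report back, not an exception.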
---
Key Insight, Written Plainly
The LLM produces something like:
```
TOOL_CALL: run_show_command
PARAMS: {"device": "core-sw-01", "command": "show ip ospf neighbor"}
```
Your agent loop parses that text, finds run_show_command in the registry, calls execute(device="core-sw-01", command="show ip ospf neighbor"), and returns the result. The model sees the result and continues its reasoning.
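One way to write that parsing step — the exact wire format is up to you, and this sketch assumes the two-line TOOL_CALL/PARAMS shape shown above with the JSON params on a single line:

```python
import json
import re

def parse_tool_call(text: str):
    """Return (tool_name, params) if text contains a tool call, else None."""
    match = re.search(r"TOOL_CALL:\s*(\S+)\s*\nPARAMS:\s*(\{.*\})", text)
    if not match:
        return None                      # plain prose — no tool requested
    name = match.group(1)
    try:
        params = json.loads(match.group(2))
    except json.JSONDecodeError:
        return None                      # malformed params: refuse, don't guess
    return name, params
```

Refusing malformed JSON instead of guessing is deliberate: anything the parser cannot read with certainty is treated as no request at all, and the model gets asked to try again.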
It is a request-response loop. The model requests. You decide. You execute. You report back.
The safety checks inside that loop — which you build in Lesson 3 — are what make this production-safe.
---
Next: Lesson 2 — The SSH Show Command Tool