How to build a "Working" Local Moltbot Setup (No Cloud Required)
Hey everyone, if you're looking to run your agents locally for privacy or cost reasons, I found a breakdown of a Moltbot + Ollama setup that actually handles tool calling (exec, read, write) reliably.
Why most local setups fail: Most models under 70B struggle with Moltbot's complex system prompts; they hallucinate tool calls or constantly stop to ask for permission. This setup solves both problems.
The Setup Specs:
  • Hardware: 48GB VRAM (2x 3090s).
  • Model: Qwen 2.5 72B Instruct (Q3_K_M).
  • Throughput: ~16 t/s.
3 Critical "Gotchas" Fixed in This Guide:
  1. API Settings: You MUST use api: "openai-completions" in your clawdbot.json. If you use openai-responses, you'll get empty output.
  2. Tool Permissions: You have to explicitly allow "read" in the tools config, or the agent can't even read its own skill files.
  3. The Prompt: It includes a custom Modelfile that tells Qwen to "Act first, report results later." This stops the AI from talking about what it could do and makes it actually do it.
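To make gotchas 1 and 2 concrete, here's a minimal sketch of what the config could look like. The exact field names are illustrative (I'm assuming a JSON shape here, check the schema your clawdbot version actually uses); the important bits are `"openai-completions"` as the API mode and `"read"` explicitly in the allowed tools:

```json
{
  "api": "openai-completions",
  "baseUrl": "http://localhost:11434/v1",
  "model": "qwen2.5-72b-instruct",
  "tools": {
    "allow": ["exec", "read", "write"]
  }
}
```

And for gotcha 3, the custom Modelfile idea looks roughly like this (FROM/PARAMETER/SYSTEM are real Ollama Modelfile directives; the system prompt wording is just a sketch of the "act first" instruction, and the model tag may differ on your install):

```
FROM qwen2.5:72b-instruct
PARAMETER num_ctx 16384
SYSTEM """You are a coding agent. Act first, report results later.
When a tool can accomplish the task, call it immediately instead of
describing what you could do."""
```

Build it with `ollama create`, then point your clawdbot.json model field at the new name.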
Pro-Tip: If your local model is too wordy, add a SOUL.md to your workspace with "Brevity" instructions. It saves a ton of time and tokens.
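Something like this is all it takes (wording is just a sketch; the post's claim is that Moltbot picks up SOUL.md as style/persona guidance for the agent):

```markdown
# Style
- Be brief. One-line answers where possible.
- No preamble, no restating the task.
- Show the command or diff, don't describe it.
```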
Let me know if you're running agents locally—what hardware are you using?
Gourav J Shah — School of AI (skool.com/school-of-ai)