Cron commands fail with “pairing required” even though gateway is running and status works
I was trying to set up a simple daily reminder using openclaw cron, but all cron commands failed with:

gateway connect failed: GatewayClientRequestError: pairing required

What I tested:
• openclaw status worked
• openclaw gateway status showed the gateway running and RPC probe ok
• openclaw cron status, openclaw cron list, and openclaw cron add ... all failed with pairing required
• ran openclaw doctor
• ran openclaw doctor --fix
• regenerated gateway token through openclaw configure
• restarted gateway with openclaw gateway restart

At one point the dashboard/config flow also showed: unauthorized: gateway token mismatch

After token regeneration + restart, the mismatch cleared, but cron still failed with pairing required.

This seems like a broken local auth/pairing state for cron specifically, with no clear recovery path. The UX is especially confusing because normal status checks work while cron remains unusable.
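For reference, the same sequence condensed into one rough transcript (comments are just what I observed, and the cron add arguments are omitted here as above):

```
openclaw status             # OK
openclaw gateway status     # gateway running, RPC probe ok
openclaw cron status        # gateway connect failed: GatewayClientRequestError: pairing required
openclaw cron list          # same pairing required error
openclaw cron add ...       # same pairing required error
openclaw doctor             # ran, cron still failing afterwards
openclaw doctor --fix       # ran, cron still failing afterwards
openclaw configure          # regenerated gateway token; token mismatch cleared after this + restart
openclaw gateway restart
openclaw cron status        # still: pairing required
```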
How are you structuring your AI + automation agency brain across 30+ clients?
Running an agency with 30+ clients, each with their own automations, knowledge bases, and workflows (mostly Make.com + Claude). Curious how others are solving the organizational layer — not just the automation itself.

My current stack: ClickUp for team management, Google Docs for documentation, OpenClaw (Codex GPT primary / MiniMax backup) running locally, and Claude Code via Cowork, which honestly has been moving fast.

Background is cybersecurity (that's my major) with solid SQL and working knowledge across several languages — so I'm comfortable going deep technically, I just want to make sure I'm not building a mess at scale.

I'm weighing a move toward a local monorepo structure — one folder per client holding prompts, scenario docs, context files, API notes — something I can actually version control and build from systematically (rough sketch at the end of this post).

A few things I'd love to hear from the community:

1. How do you structure your client knowledge bases? One repo per client? Flat files? Notion? Something else?
2. Are you using Claude Projects, Claude Code, OpenClaw, or something entirely different to maintain context across clients?
3. For Make.com builders — where do you store your scenario documentation, module notes, and client-specific logic so it's actually findable later?
4. Version control — are you Git-versioning your prompts and automation docs, or is that overkill for most agency ops?

Not looking for the perfect system — just what's actually working in production for people running real client loads. Drop your setup below 👇
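To make the monorepo idea concrete, this is the kind of per-client layout I'm sketching (all folder and client names are placeholders, not a recommendation):

```
clients/
  acme-corp/            # one folder per client (name is a placeholder)
    prompts/            # system prompts and prompt variants, one file per agent or task
    scenarios/          # Make.com scenario docs and module notes
    context/            # client background, brand voice, API notes
    automations/        # exported blueprints, webhook configs
    README.md           # what runs where, who owns it, known gotchas
  another-client/
    ...
```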
Anthropic killed the $200 plan for OpenClaw. Here's what I'm building instead.
If you watched Alex Finn's video yesterday — "Anthropic just blocked OpenClaw. Here's what you need to do immediately" — you already know the situation.

On April 4th, Anthropic blocked OAuth access for third-party agent frameworks, including OpenClaw. The $200/month Max plan that gave you flat-rate access to Opus? Gone. Over 135,000 OpenClaw instances affected overnight.

The move to pay-as-you-go API pricing means what used to cost $200/month flat can now run $1,000–$5,000+ if your agent operates autonomously all day. That's a 10–50x cost increase for some users.

A lot of people are panicking. Some are leaving OpenClaw entirely. I'm not panicking. I'm building.

Alex laid out the "brain and muscle" concept in his video — use Claude Opus as the smart orchestrator for planning, and cheaper or local models for execution. That framework is exactly right. I want to break down how I'm actually implementing it, because I think the specifics matter.

🧠 Why this matters more than you think

Here's the thing most people miss — not every message your agent handles actually needs Opus. Think about what your agent does in a given day. Health checks. Routing messages. Summarizing emails. Monitoring cron jobs. Running scripts. Maybe 80% of that work is operational — important, but not complex.

Then there's the other 20% — the high-stakes stuff. Financial analysis. Complex research. Decision-making that requires real reasoning depth.

Sending ALL of that to Opus at $15/million input tokens is like hiring a senior architect to change lightbulbs.

🔧 The smart router concept

Building on Alex's brain-and-muscle framework, I'm designing a layered routing architecture that matches model capability to task complexity:

📱 Tier 1 — Local lightweight (free): Health checks, script execution, routine monitoring, simple routing decisions. Models like Llama 3.1 8B running on your own hardware. Cost: $0.

🔍 Tier 2 — Local mid-tier (free): Research, analysis, content digests, data processing. Larger local models like Gemma 4 running on a Mac Studio or similar. Still your hardware. Cost: $0.
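To make the routing idea concrete, here's a minimal sketch of the router logic in Python. The task categories, tier names, and model identifiers are placeholders, and the frontier-cloud tier stands in for whatever cloud model handles the high-stakes 20% — it's one way to wire this up, not a finished implementation.

```python
# Hypothetical sketch of a tiered task router: send each task to the cheapest
# model tier that can handle it, and reserve the frontier model for high-stakes
# work. Model names and task categories are illustrative placeholders.

from dataclasses import dataclass

TIER_BY_TASK = {
    # Tier 1: local lightweight model for operational chores
    "health_check": "local-8b",
    "script_run": "local-8b",
    "route_message": "local-8b",
    # Tier 2: larger local model for research, digests, data processing
    "summarize_email": "local-mid",
    "research_digest": "local-mid",
    "data_processing": "local-mid",
    # Tier 3: frontier cloud model for real reasoning depth
    "financial_analysis": "frontier-cloud",
    "complex_research": "frontier-cloud",
}

@dataclass
class Task:
    category: str
    prompt: str

def pick_model(task: Task) -> str:
    """Route a task to the cheapest tier that covers its category.

    Unknown categories fall back to the frontier model so nothing
    high-stakes slips onto a weak local model by accident.
    """
    return TIER_BY_TASK.get(task.category, "frontier-cloud")

if __name__ == "__main__":
    print(pick_model(Task("health_check", "ping the gateway")))     # local-8b
    print(pick_model(Task("financial_analysis", "model Q3 cash")))  # frontier-cloud
```

The fallback is the part that matters: anything the router doesn't recognize escalates to the expensive tier rather than quietly landing on cheap hardware.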
They Gave Me the First Hit Free. Now I Can't Quit.
Anthropic just killed OAuth for OpenClaw. Subsidized monthly plan gone. Per-token now. I realized I'm a drug addict. Not metaphorically — the pattern is identical.

🧪 First hit cheap. Build everything on their product. They change the terms. More dependency. Less leverage. You rent intelligence from someone who controls the price.

💊 Rate limits at 2 AM. Quotas on their schedule. Pricing changes after months of infrastructure built. We're building dependencies, not businesses.

🔓 Not going cold turkey — frontier models still best for complex reasoning. But done letting them be the foundation. Hybrid: 80% local, 20% cloud. Industry-specific LLMs on your own hardware. Cloud as utility, not foundation.

Every point shifted from cloud to local is a point nobody else controls. Not anti-AI. Not anti-cloud. Anti-dependency. The future is owning specialized intelligence.
How I Use One AI Agent to Train All the Others
Most people build AI agents that work in silos. Each one knows its own lane and nothing else. That's how mine started. DD analyst for underwriting. Builder intel for homebuilder tracking. Market scout for county scoring. None shared knowledge.

So I built Scholar — the training department for my AI org.

🔬 Scholar researches knowledge tracks daily — earnings calls, acquisition models, market trends, regulatory changes. Structured knowledge files.

📡 Pushes to every director. Scores findings for relevance, injects into memory files.

🧠 Pre-flight context injection. Every director reads shared knowledge before acting.

🔄 Directors write back. Knowledge compounds automatically. 42 entries across 5 departments per digest cycle.

Every agent gets smarter daily without my input. The real unlock isn't multiple agents. It's agents that educate each other.
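If you want to picture the mechanics, here's a stripped-down sketch of the scoring-and-injection loop in Python. The file layout, schema, and keyword-overlap scoring are simplified stand-ins, not Scholar's actual code.

```python
# Toy sketch of a shared-knowledge pipeline: a "Scholar" process appends scored
# findings to per-department memory files, and each director agent reads the
# most relevant entries before acting. Paths and the scoring rule are placeholders.

import json
from pathlib import Path

MEMORY_DIR = Path("memory")  # one JSONL memory file per department (assumption)

def score_relevance(finding: dict, department_keywords: set[str]) -> float:
    """Crude relevance score: fraction of department keywords found in the finding text."""
    text = finding["text"].lower()
    hits = sum(1 for kw in department_keywords if kw in text)
    return hits / max(len(department_keywords), 1)

def push_finding(finding: dict, departments: dict[str, set[str]], threshold: float = 0.2) -> None:
    """Scholar side: append the finding to every department memory it is relevant to."""
    MEMORY_DIR.mkdir(exist_ok=True)
    for dept, keywords in departments.items():
        score = score_relevance(finding, keywords)
        if score >= threshold:
            entry = {**finding, "score": round(score, 2)}
            with open(MEMORY_DIR / f"{dept}.jsonl", "a") as f:
                f.write(json.dumps(entry) + "\n")

def preflight_context(dept: str, top_n: int = 5) -> list[dict]:
    """Director side: read the department memory and return the highest-scored entries."""
    path = MEMORY_DIR / f"{dept}.jsonl"
    if not path.exists():
        return []
    entries = [json.loads(line) for line in path.read_text().splitlines() if line.strip()]
    return sorted(entries, key=lambda e: e["score"], reverse=True)[:top_n]

if __name__ == "__main__":
    departments = {
        "underwriting": {"earnings", "acquisition", "cap rate"},
        "builder_intel": {"homebuilder", "permits", "starts"},
    }
    push_finding({"text": "Homebuilder earnings calls flag slowing starts"}, departments)
    print(preflight_context("builder_intel"))
```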