Owned by Nate
- AI Chief of Staff: 1 member • Free

Memberships
- Skoolers: 194.5k members • Free
- OpenClaw Mastery: 76 members • $49/month
- OpenClaw Lab: 295 members • $29/month
- Making Better Agents: 620 members • Free
- OpenClawBuilders/AI Automation: 474 members • Free
- Zero to Hero with AI: 11.6k members • Free
- The AI Advantage: 86k members • Free
- Openclaw Labs: 1.4k members • Free
- AI Automation Agency Hub: 308.2k members • Free

21 contributions to OpenClaw Users
🚀 OpenClaw Update – v2026.3.28 is live!
This is a pretty big one. Not just small tweaks… there are some meaningful upgrades that change how things work day-to-day. Here are the highlights worth knowing:

⚠️ Breaking changes (important)
- Qwen OAuth via portal.qwen.ai has been removed
- You now need to use Model Studio with API key auth
- Older configs (over 2 months) won't auto-fix anymore — they'll fail validation instead

If you've got older setups running, it's worth double-checking things.

🔧 New features & improvements
- xAI integration upgraded (better search + smoother setup)
- Image generation added via MiniMax (including image-to-image editing)
- Plugins can now request approval before running tools (huge for control)
- apply_patch now enabled by default for OpenAI/Codex models
- Better onboarding flows for web search + tools

💡 Platform improvements
- Cleaner plugin system (less manual setup needed)
- Better CLI backend support (including Gemini CLI)
- File uploads and messaging actions becoming more unified across platforms
- Improvements to container setup and workflows

🛠 Fixes (lots of them)
There's a long list, but the big wins are:
- More stable agent runs (fewer crashes)
- Better handling of API errors and rate limits
- Fixes for Telegram, WhatsApp, Discord, and more
- Improved image handling across providers
- Cleaner UI behaviour in multiple areas

My take
This update is all about stability + control. Less "it kinda works", more "it works properly and predictably." Especially with:
- plugin approvals
- better error handling
- improved integrations

If you're building anything serious with OpenClaw, this is a solid step forward. If you've updated already, I'd be interested to hear what you've noticed.

Cheers,
Jason
3 likes • 10d
Good breakdown, Jason. From my end running this in production, the two changes that stand out most:

The plugin approval system is huge. Before this, any skill could execute without asking. Now having the agent request approval before running tools gives you a proper human-in-the-loop safety net. This is the kind of feature that makes OpenClaw viable for real business use, not just experimentation.

The Qwen OAuth removal is important to flag — anyone using Qwen models through portal.qwen.ai needs to switch to Model Studio with API key auth. If your config is more than 2 months old and references the old portal, it will fail validation instead of silently breaking. Better to catch it now than during a production run.

One thing I noticed after updating: if you are using OpenAI Codex via OAuth, you may need to re-authenticate. Run openclaw onboard --auth-choice openai-codex to refresh the token. Someone else in another community reported the same issue after the update.

Overall, agreed — this release is about stability and control rather than flashy features. That is exactly what production users need.
1 like • 2d
The plugin approval before running tools is the biggest change for autonomous agent setups. Before this update: agent decides → agent executes. No checkpoint. With approvals, you gate high-risk tools (file writes, external API calls, browser actions) behind a quick confirm. That keeps automation flowing while giving you a circuit breaker on risky operations.

My recommendation: enable approvals on browser automation, file writes outside your workspace, and external webhooks. Leave them off for read-only ops and internal memory writes. Safety without killing throughput.

The apply_patch default for Codex models is also a sleeper change — targeted file edits without rewriting entire files cut token usage significantly on coding tasks. If you're running Codex sub-agents for code work, you'll notice the improvement right away.
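The gating recommendation above can be sketched in a few lines. This is illustrative only, not OpenClaw's actual plugin API: the tool names, risk sets, and confirm callback are all assumptions made for the example.

```python
# Hypothetical approval gate: high-risk tools require human confirmation,
# read-only ops and internal memory writes pass straight through.
HIGH_RISK = {"browser", "file_write_external", "webhook"}

def requires_approval(tool: str) -> bool:
    """Return True for tools that should be gated behind a confirm step."""
    return tool in HIGH_RISK

def run_tool(tool: str, confirm) -> str:
    """Execute a tool, asking the confirm callback first when it is high-risk."""
    if requires_approval(tool) and not confirm(tool):
        return f"{tool}: blocked (approval denied)"
    return f"{tool}: executed"

# Example: deny every approval request (simulating a cautious operator)
print(run_tool("browser", confirm=lambda t: False))    # browser: blocked (approval denied)
print(run_tool("file_read", confirm=lambda t: False))  # file_read: executed
```

The point of the pattern is that throughput only drops on the small set of risky operations; everything else runs unattended.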
What I've built with OpenClaw in 3 days 🤯
I set up OpenClaw on Wednesday. It's Friday. Here's what my AI assistant (Manager Mike) has done so far — all through Telegram on my phone.

🤖 Built a team of AI agents
Mike is my main assistant, but he manages a team of sub-agents that each have their own job:
• Writer Will — writes SEO blog posts for my SaaS, generates featured images, and publishes drafts to our Ghost blog. He runs on a daily cron job at 6am, Mon-Fri, working through a keyword queue we built from competitor gap analysis.
• Social Steve — handles social media content and scheduling. One post per day on every channel.
• Telephone Tina — an AI phone agent (via Vapi + ElevenLabs) who makes and receives real phone calls. She called 6 of my mates to organise a curry night, handled voicemails, sent follow-up texts, and takes inbound calls on a UK number 24/7.
• Outbound Ollie — a cold email outreach agent. He searches Apollo.io for prospects, enriches them to get email addresses, checks their websites for existing chatbots (so we only target businesses that don't have one), then sends personalised emails with industry-specific templates. He sent 135 emails today across schools, hotels, and SaaS companies — all automatically.

📞 Real phone calls
Tina isn't a gimmick. She called my friends, had actual conversations, handled objections ("I'll need to check with the wife"), left voicemails, and sent SMS follow-ups. She answers inbound calls with "Hello, you've reached Jason West's office, this is Tina speaking." My assistant checks every 30 minutes if anyone's called in.

📧 135 cold emails in one afternoon
I said "schools, UK, 50" and Ollie:
1. Searched Apollo for 100 prospects
2. Enriched them to get verified email addresses (98% hit rate)
3. Visited each website to check if they already have a chatbot
4. Filtered out the 15 that did
5. Sent 50 personalised emails with the right landing page

Then I said "do hotels too" and "now SaaS." Same thing. All automated, all rotating across 6 SMTP accounts on 2 domains, all with industry-specific subject lines and copy.
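The chatbot-detection filter in that pipeline is the interesting step, so here is a minimal sketch of how it could work. Everything here is an assumption for illustration: the widget marker list, the prospect record shape, and the field names. The Apollo search, enrichment, and SMTP sending are out of scope.

```python
# Hypothetical filter step: drop prospects whose homepage already loads a
# known chat-widget script, so outreach only targets sites without one.
CHATBOT_MARKERS = ("intercom", "drift.com", "tawk.to", "crisp.chat", "livechat")

def has_chatbot(homepage_html: str) -> bool:
    """Crude check: look for known chat-widget markers in the page source."""
    page = homepage_html.lower()
    return any(marker in page for marker in CHATBOT_MARKERS)

def filter_prospects(prospects: list[dict]) -> list[dict]:
    """Keep only prospects whose site shows no existing chatbot widget."""
    return [p for p in prospects if not has_chatbot(p["homepage_html"])]

prospects = [
    {"email": "head@school.example",
     "homepage_html": "<script src='https://widget.intercom.io/x.js'></script>"},
    {"email": "gm@hotel.example",
     "homepage_html": "<html><body>plain site, no widgets</body></html>"},
]
print([p["email"] for p in filter_prospects(prospects)])  # ['gm@hotel.example']
```

A real version would fetch each homepage and handle redirects and client-side rendering, but the filtering logic stays this simple.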
0 likes • 2d
Three days in and already running a multi-agent team — that's the trajectory that separates people who build real systems from people who tinker for weeks.

The pattern you've found (Manager Mike + sub-agents) is the most scalable architecture for OpenClaw. The main agent handles the human interface and high-level decisions; sub-agents handle execution lanes without polluting the main context window.

One thing worth adding as you scale: a logging layer. Have each sub-agent write a brief note to a shared memory file after every task — what it did, what it found, any errors. When you have 5+ agents running across a day, that log is your operational dashboard without reading every Telegram message.

What does Manager Mike's SOUL.md look like? That file does a lot of work keeping personality consistent as context fills up.
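A minimal version of that logging layer might look like the sketch below. The file name, field names, and timestamp format are assumptions, not anything OpenClaw prescribes; the idea is just an append-only JSON-lines file every agent can write to.

```python
# Shared activity log: each sub-agent appends one JSON line per task,
# and a digest reads the whole file back as an operational dashboard.
import json
import time
from pathlib import Path

LOG = Path("agent_activity.jsonl")  # assumed location in the shared workspace

def log_task(agent: str, action: str, result: str, error: str = "") -> None:
    """Append a one-line note: what the agent did, what it found, any error."""
    entry = {"ts": time.strftime("%Y-%m-%dT%H:%M:%S"), "agent": agent,
             "action": action, "result": result, "error": error}
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def digest() -> list[dict]:
    """Read every entry back for a day-level review."""
    return [json.loads(line) for line in LOG.read_text().splitlines()]

log_task("Writer Will", "publish_draft", "posted to Ghost")
log_task("Outbound Ollie", "send_batch", "135 emails sent")
print(digest()[-1]["agent"])  # Outbound Ollie
```

JSON lines keep concurrent appends simple and make the log trivially greppable, which matters more than structure once five agents are writing to it all day.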
Anthropic killed the $200 plan for OpenClaw. Here's what I'm building instead.
If you watched Alex Finn's video yesterday — "Anthropic just blocked OpenClaw. Here's what you need to do immediately" — you already know the situation.

On April 4th, Anthropic blocked OAuth access for third-party agent frameworks including OpenClaw. The $200/month Max plan that gave you flat-rate access to Opus? Gone. Over 135,000 OpenClaw instances affected overnight.

The move to pay-as-you-go API pricing means what used to cost $200/month flat can now run $1,000–$5,000+ if your agent operates autonomously all day. That's a 10–50x cost increase for some users.

A lot of people are panicking. Some are leaving OpenClaw entirely. I'm not panicking. I'm building.

Alex laid out the "brain and muscle" concept in his video — use Claude Opus as the smart orchestrator for planning, and cheaper or local models for execution. That framework is exactly right. I want to break down how I'm actually implementing it, because I think the specifics matter.

🧠 Why this matters more than you think
Here's the thing most people miss — not every message your agent handles actually needs Opus. Think about what your agent does in a given day. Health checks. Routing messages. Summarizing emails. Monitoring cron jobs. Running scripts. Maybe 80% of that work is operational — important, but not complex.

Then there's the other 20% — the high-stakes stuff. Financial analysis. Complex research. Decision-making that requires real reasoning depth. Sending ALL of that to Opus at $15/million output tokens is like hiring a senior architect to change lightbulbs.

🔧 The smart router concept
Building on Alex's brain-and-muscle framework, I'm designing a layered routing architecture that matches model capability to task complexity:

📱 Tier 1 — Local lightweight (free): Health checks, script execution, routine monitoring, simple routing decisions. Models like Llama 3.1 8B running on your own hardware. Cost: $0.

🔍 Tier 2 — Local mid-tier (free): Research, analysis, content digests, data processing. Larger local models like Gemma 4 running on a Mac Studio or similar. Still your hardware. Cost: $0.
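The tiered routing idea can be sketched as a simple classifier. This is a toy illustration of the concept, not the author's implementation: the model identifiers are placeholders, and a keyword heuristic stands in for whatever real complexity scoring the router would use.

```python
# Toy smart router: match task complexity to the cheapest capable tier.
TIERS = {
    1: "llama-3.1-8b-local",  # health checks, routing, monitoring (free)
    2: "gemma-local-large",   # research, digests, data processing (free)
    3: "claude-opus-api",     # high-stakes reasoning (pay per token)
}

HIGH_STAKES = ("financial", "legal", "architecture", "strategy")
MID_TIER = ("research", "summarize", "analyze", "digest")

def route(task: str) -> str:
    """Pick a model tier from keywords; default to the cheapest local tier."""
    t = task.lower()
    if any(k in t for k in HIGH_STAKES):
        return TIERS[3]
    if any(k in t for k in MID_TIER):
        return TIERS[2]
    return TIERS[1]

print(route("health check on cron jobs"))      # llama-3.1-8b-local
print(route("summarize today's emails"))       # gemma-local-large
print(route("financial analysis of Q3 deal"))  # claude-opus-api
```

Even this crude version captures the economics: if 80% of traffic lands in tiers 1 and 2, the paid model only sees the 20% that actually needs it.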
They Gave Me the First Hit Free. Now I Can't Quit.
Anthropic just killed OAuth for OpenClaw. Subsidized monthly plan gone. Per-token now. I realized I'm a drug addict. Not metaphorically — the pattern is identical.

🧪 First hit cheap. Build everything on their product. They change the terms. More dependency. Less leverage. You rent intelligence from someone who controls the price.

💊 Rate limits at 2 AM. Quotas on their schedule. Pricing changes after months of infrastructure built. We're building dependencies, not businesses.

🔓 Not going cold turkey — frontier models are still best for complex reasoning. But I'm done letting them be the foundation. Hybrid: 80% local, 20% cloud. Industry-specific LLMs on your own hardware. Cloud as utility, not foundation.

Every point shifted from cloud to local is a point nobody else controls. Not anti-AI. Not anti-cloud. Anti-dependency. The future is owning specialized intelligence.
How I Use One AI Agent to Train All the Others
Most people build AI agents that work in silos. Each one knows its own lane and nothing else. That's how mine started. DD analyst for underwriting. Builder intel for homebuilder tracking. Market scout for county scoring. None shared knowledge.

So I built Scholar — the training department for my AI org.

🔬 Scholar researches knowledge tracks daily — earnings calls, acquisition models, market trends, regulatory changes. Structured knowledge files.

📡 Pushes to every director. Scores findings for relevance, injects into memory files.

🧠 Pre-flight context injection. Every director reads shared knowledge before acting.

🔄 Directors write back. Knowledge compounds automatically. 42 entries across 5 departments per digest cycle. Every agent gets smarter daily without my input.

The real unlock isn't multiple agents. It's agents that educate each other.
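The pre-flight injection and write-back loop described above can be sketched as a shared, relevance-scored knowledge store. All names, fields, and thresholds here are assumptions for illustration; the real system presumably persists this to files rather than a list in memory.

```python
# Toy knowledge loop: Scholar scores entries per director; each director
# reads only its relevant entries before acting, and writes findings back.
knowledge_base = [
    {"topic": "earnings",
     "relevance": {"dd_analyst": 0.9, "market_scout": 0.4},
     "note": "Builder X raised guidance"},
    {"topic": "regulation",
     "relevance": {"dd_analyst": 0.3, "market_scout": 0.8},
     "note": "County zoning change"},
]

def preflight_context(director: str, threshold: float = 0.5) -> str:
    """Build the shared-knowledge preamble a director reads before acting."""
    hits = [e["note"] for e in knowledge_base
            if e["relevance"].get(director, 0) >= threshold]
    return "Shared knowledge:\n" + "\n".join(f"- {h}" for h in hits)

def write_back(director: str, topic: str, note: str, score: float = 0.5) -> None:
    """Let a director contribute a finding so knowledge compounds."""
    knowledge_base.append(
        {"topic": topic, "relevance": {director: score}, "note": note})

print(preflight_context("dd_analyst"))
```

The per-director relevance score is what keeps this from becoming one bloated context file: each agent only pays context-window cost for entries scored above its threshold.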
Nate Wish
@nate-wish-9818
RE investor building AI tools to find & close land deals. Turning vibe coding into real revenue. Founder @ Foundational Land Co.

Joined Mar 24, 2026
New Hampshire