Memberships

Online Business Friends • 85.4k members • Free
Vertical AI Builders • 9.9k members • Free
AI Vibe Coders (Free) • 4.9k members • Free
Super Affiliate Academy (Free) • 11.8k members • Free
AI Automation Station • 2.2k members • Free
Automation University • 5.8k members • Free
Agency Growth • 4.3k members • Free
Imperium Academy™ • 47.2k members • Free
AI Automation Agency Hub • 290.6k members • Free

4 contributions to Brendan's AI Community
Personal AI Agent (Moltbot + Ollama): no tokens, no cloud, full step-by-step guide
1. Understand What You're Building
You're not installing a chatbot. You're setting up a local AI brain: an always-on agent, a private AI that lives in your chats. This setup is closer to running your own infrastructure than using a plug-and-play tool.

2. System Requirements
Minimum requirements:
- 16GB RAM (32GB preferred)
- 20GB free storage
- Modern CPU (8+ threads)
- Optional GPU for better performance

3. Install Ollama
Install Ollama based on your operating system.
macOS: brew install ollama
Linux: curl -fsSL https://ollama.com/install.sh | sh
Windows: download the installer from https://ollama.com

4. Pull an AI Model
Download a model to use locally.
ollama pull qwen3-coder

5. Test Ollama
Run a test to confirm the model works.
ollama run qwen3-coder

6. Install Node.js
Verify the Node.js installation:
node -v
npm -v

7. Install Moltbot
Install Moltbot globally using npm.
npm install -g moltbot@latest

8. Run Moltbot Onboarding
Set up the Moltbot services.
moltbot onboard --install-daemon

9. Connect Moltbot to Ollama
Launch Moltbot using Ollama.
ollama launch moltbot

10. Background Services
Ollama serves the AI model. Moltbot handles routing and logic. All processing stays local.

11. Connect Chat Platforms
You can connect Telegram, Slack, and Discord for automation.

12. Configuration
Adjust model selection, safety settings, and tools in the config folder.

13. Common Mistakes
- Installing oversized models
- Skipping the Ollama test
- Ignoring GPU VRAM limits

14. Performance Optimization
- Use quantized models
- Prefer SSD storage
- Close background applications

15. What You Can Do Now
Capture notes, manage tasks, automate workflows, respond in chat apps, and work offline.

Follow for more...
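Before running the install steps above, it can help to confirm which tools are already on the machine. Here is a minimal pre-flight sketch, assuming the tool names from the post (ollama, node, npm, moltbot); the script and its function names are mine, not part of the official setup.

```shell
#!/bin/sh
# Pre-flight check: report which of the tools from the guide are installed.
# Tool names are taken from the post above; nothing is installed by this script.

# have NAME -> exit status 0 if NAME is on PATH
have() { command -v "$1" >/dev/null 2>&1; }

for tool in ollama node npm moltbot; do
  if have "$tool"; then
    echo "ok:      $tool"
  else
    echo "missing: $tool (see the matching install step above)"
  fi
done
```

If anything prints as missing, go back to the corresponding numbered step before continuing.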
Install Moltbot (Claudebot)
Step-by-step setup guide to install Moltbot (Claudebot) on Windows 11 with WSL + Docker.

1. Make sure prerequisites are ready
- WSL2 installed and running (Ubuntu or Debian recommended)
- Docker Desktop installed with WSL2 integration enabled
- Git installed inside your WSL distro (sudo apt install git)
- Node.js & pnpm (optional, if you want to build from source)

2. Clone Moltbot
git clone https://github.com/moltbot/moltbot.git
cd moltbot

3. Configure environment
Copy the example environment file:
cp .env.example .env
Edit .env with your preferred settings:
- Add your messaging app tokens (WhatsApp, Slack, Telegram, etc.)
- Configure your LLM provider (Claude, Gemini, OpenAI, or Ollama for local models)

4. Build & run with Docker
docker compose up --build
(This will pull dependencies, build the container, and start Moltbot inside WSL.)

5. Connect to messaging apps
- Follow the docs for WhatsApp, Slack, or Telegram integration.
- Once connected, you'll be able to chat with Moltbot directly in those apps.

Performance tip: store the Moltbot repo inside your WSL filesystem (/home/username/...) instead of Windows drives (/mnt/c/...) for faster I/O.
Persistence: use Docker volumes to keep Moltbot's state across restarts.
Local models: if you want to run Ollama inside WSL, you can integrate it with Moltbot for fully local AI.
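For step 3, the edited .env might look roughly like the sketch below. The variable names here are hypothetical placeholders for illustration; the real names are whatever the repo's .env.example defines, so check that file rather than copying these.

```shell
# Hypothetical .env sketch - placeholder variable names, not the real schema.
# Consult the repo's .env.example for the actual keys Moltbot expects.
TELEGRAM_BOT_TOKEN=123456:replace-me
SLACK_BOT_TOKEN=xoxb-replace-me
LLM_PROVIDER=ollama                       # or claude / gemini / openai
OLLAMA_BASE_URL=http://localhost:11434    # default local Ollama endpoint
```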
resolved production alerting issue
We resolved a production alerting issue in our n8n monitoring system, and it reinforced some important lessons about reliability and data governance.

The issue
Our monitoring workflow was repeatedly sending WhatsApp alerts for services that hadn't actually changed status. The result was unnecessary noise, reduced trust in alerts, and operational distraction.

Why it mattered
- Teams started ignoring alerts (classic alert fatigue).
- Monitoring reliability was questioned.
- Sensitive infrastructure data risked being logged or shared unintentionally.

What we changed
- Implemented a persistent, external source of truth so alert state survives restarts and redeployments.
- Cleanly separated runtime logic from stored state, improving stability and predictability.
- Strengthened health-check validation to correctly handle timeouts and errors.
- Ensured alerts and logs are generated only from verified system state, not third-party responses.
- Added rate-limiting and redaction controls to prevent duplicate alerts and protect infrastructure details.

The result
- Alerts now trigger only on real service status changes.
- Monitoring remains stable across deployments.
- No sensitive endpoint data is stored or shared.
- Higher confidence in alerts and faster response when issues actually occur.

This was a good reminder that monitoring isn't just about uptime; it's about signal quality, resilience, and trust. If you're using n8n or similar tools and struggling with noisy alerts or unreliable state, I'm happy to share a sanitized workflow and a production-readiness checklist. Feel free to DM.
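The core of the fix, alerting only on a real status change backed by persistent state, can be sketched in a few lines of shell. This is a minimal illustration of the idea, not the production n8n workflow: the flat-file store and the function name are mine, standing in for whatever external source of truth you use.

```shell
#!/bin/sh
# Sketch: alert only when a service's status differs from the last stored one.
# A flat file stands in for the external state store; survives script restarts.
STATE_FILE="${STATE_FILE:-/tmp/alert_state}"

# status_changed SERVICE STATUS -> exit 0 (alert) only if STATUS is new
status_changed() {
  service="$1"; status="$2"
  prev=$(grep "^$service=" "$STATE_FILE" 2>/dev/null | cut -d= -f2)
  [ "$prev" = "$status" ] && return 1   # unchanged: stay quiet
  # persist the new status (replace any previous line for this service)
  grep -v "^$service=" "$STATE_FILE" 2>/dev/null > "$STATE_FILE.tmp" || true
  echo "$service=$status" >> "$STATE_FILE.tmp"
  mv "$STATE_FILE.tmp" "$STATE_FILE"
  return 0
}

if status_changed api down; then
  echo "ALERT: api is down"   # fires once; repeat "down" checks stay silent
fi
```

Because the state lives outside the runtime, a restart or redeploy does not reset it, so the workflow cannot re-fire alerts for statuses it has already reported.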
Build AI Voice Agents with Claude Code (Complete Guide)
You can now build and deploy entire AI voice agents + their connected n8n automations… just by talking to Claude.

What's the point?
→ Use Claude Code as a "master agent" to command other platforms (like Retell AI & n8n).
→ Go from a simple prompt to a fully built AI receptionist that qualifies leads.
→ Create and connect complex multi-app systems with zero coding required.

My new video breaks down the entire process:
→ A complete guide on using Claude Code for AI development.
→ Prompting a Retell AI voice agent into existence step by step.
→ Automatically building the connected n8n workflow to handle post-call tasks.
→ Watching it all come together in a live demo.

Check it out! 👇
Jogeshkumar Kumawat
@jogeshkumar-kumawat-9966
Active 2d ago
Joined Jan 26, 2026