
Memberships

AI Automation Growth Hub • 3.6k members • Free
Business Builders Club • 7.7k members • Free
AI Automation (A-Z) • 146.8k members • Free
AI Automation Agency Hub • 303.7k members • Free
AI Automation Society • 298.9k members • Free
Over 40 and Unemployed • 703 members • Free
AI Cyber Value Creators • 8.5k members • Free
CyberCircle • 84.8k members • Free
Startup Dawgs • 78 members • Free

19 contributions to Vibe Coders
Clawdbot Is Trending… and That's Exactly Why I'm Skipping It
Right now, AI enthusiasts can’t stop talking about Clawdbot - a "digital agent" that supposedly manages your life 24/7, controlled with natural language. Sounds amazing. But here’s the reality:

THE CLAWDBOT HYPE 🤖
Most "brand-new" launches are basically public beta tests. And "beta" is just a nicer word for:
✅ exciting
❌ unstable
❌ unclear limits
❌ unpredictable outcomes
If something breaks, leaks, or messes up your setup… you deal with the consequences.

WHEN YOU SHOULD TEST ON DAY 1 ✅
If you’re:
- a creator
- a news channel
- a true power user (time + technical depth + patience)
Then yes - testing early can make sense. Everyone else? You’re spending time for very little upside.

WHY WAITING WINS 🧠
Give it a few weeks and you'll get:
- clearer info on what it actually does
- real use cases (not just demos)
- fewer bugs and broken flows
- honest reviews

For most people, I see zero advantage in installing Clawdbot today (or this week).
0 likes • 8d
@Rodrigo Souza Indeed. You understand the problem you are solving, and you are the only one who can design the best architecture for it, rather than using someone else's and then fixing it. Be involved with your AI.
AI is getting confidently wrong - and it's starting to feel… human.
I’m noticing a pattern that's honestly a bit scary:
- It makes a claim with full confidence
- No relevant facts, no checks, no validation
- Then when you catch it, it backtracks smoothly
- And explains it like: "Yes, that was my narrative" (as if that makes it okay)

That behavior is not just "a mistake". It’s deceptive by design, because the confidence level looks like certainty.

My real example (today)
I configured Claude with a very strict instruction set for a "modern astro + numerology" assistant:
✅ Only go with facts
✅ Validate before suggesting
✅ Don't hallucinate
✅ Don't skip micro-signatures (like last-4-digit patterns, etc.)

And yet… it still suggested a new business phone number and made errors. Not small ones. The kind that happen when the model is trying to be helpful instead of being correct - and it didn’t even properly check the micro-signature logic before recommending.

When I pointed it out, it accepted the mistake beautifully - with a full explanation - and even admitted it was a narrative. Bro… that's the dangerous part.

The real problem
AI isn't just "sometimes wrong". AI is wrong with persuasion. It can sell you a false conclusion so cleanly that you start doubting yourself.

My takeaway for builders + power users
If you're using AI for anything that impacts:
- money
- trust
- decisions
- reputation
- health/legal/security

Then treat AI like: an intern with insane confidence + zero accountability.

Use it for:
✅ brainstorming
✅ options
✅ drafts

But for decisions: you must build verification loops, as sketched below.
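To make "verification loop" concrete, here is a minimal sketch in Python using the phone-number example from this post. The ask_model() stub and the last-4-digits rule are hypothetical stand-ins for your real LLM call and your real micro-signature checks; the point is that code, not the model's confidence, decides whether an answer is accepted.

```python
# A minimal sketch of a verification loop (hypothetical stand-ins throughout).
import itertools

# Fake answers so the sketch runs standalone; the first one deliberately
# fails the check to show the loop rejecting a confident-but-wrong answer.
_FAKE_ANSWERS = itertools.cycle(["+1 555 010 0000", "+1 555 014 7782"])

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for your actual LLM API call."""
    return next(_FAKE_ANSWERS)

def last_four_digits_ok(phone: str) -> bool:
    """Deterministic micro-signature check done in code rather than trusted
    to the model (example rule: last 4 digits must not all repeat)."""
    tail = phone.replace(" ", "")[-4:]
    return tail.isdigit() and len(set(tail)) > 1

def suggest_phone_number(max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        candidate = ask_model("Suggest a new business phone number.").strip()
        if last_four_digits_ok(candidate):  # verify before accepting
            return candidate
    raise RuntimeError("No candidate passed verification; escalate to a human.")

print(suggest_phone_number())  # -> "+1 555 014 7782" after one rejection
```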
0 likes • 8d
@Rodrigo Souza The best example is OpenClaw trying to fix itself: it didn't understand its own syntax and stopped working after breaking itself.
Pick ONE: Cursor vs Claude Code vs Codex vs Copilot (agent mode) - and defend it.
I'll go first: Claude Code.

Why (my POV):
- It feels like the most reliable coding partner when you use it correctly: clear task framing, tight scopes, and constraints.
- I’m treating my dev work like a product: versioned releases on Git, plus a personal learning.md for decisions + "memory context".
- I started this product as vibe coding, but now it’s turning into a production product - and Claude Code helps me keep structure while still shipping fast.

Tradeoff / reality check: Even with good hygiene, I’m sometimes seeing 50K context usage out of a 200K window just for it to scan and understand files. Worth it for speed, but the context budget is real.

My take: Claude Code wins when you don’t treat it like magic - you treat it like an engineer (see the sketch after this list):
- give it a mini-PRD,
- curate context (learning.md, changelog, release notes),
- force small, testable steps.

Now your turn 👇 Pick ONE tool and defend it. If nothing comes to mind, just pick from these: what's your bottleneck - context budget, accuracy, refactors, tests, or review quality?
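As a rough illustration of "curate context", here is a minimal sketch of the kind of glue script this workflow implies. learning.md comes from the post; the other file names and the character budget are hypothetical choices, not a Claude Code feature.

```python
# A minimal sketch: assemble a curated context preamble from project docs
# before handing the agent a task (file list and budget are assumptions).
from pathlib import Path

CONTEXT_FILES = ["learning.md", "CHANGELOG.md", "docs/release-notes.md"]
MAX_CHARS = 20_000  # make the context budget explicit instead of implicit

def build_context(root: str = ".") -> str:
    parts = []
    for name in CONTEXT_FILES:
        path = Path(root) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text()}")
    context = "\n\n".join(parts)
    # Trim from the front so the most recent decisions survive the cut.
    return context[-MAX_CHARS:]

# Frame the task like a mini-PRD: scope, constraint, and a testable step.
task = ("Mini-PRD: add CSV export to the reports page. "
        "Touch only reports/, and write the test first.")
prompt = build_context() + "\n\n" + task
```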
0 likes • 8d
@Rodrigo Souza True, since it is structured well to read all the other context and to know what to do when. So it's more like a brain with memory.
Weekly Vibe – Agents, Local Models, Security, and “Where Do I Even Start?”
This week’s call was a good mix of beginner questions, deep agent architecture, and some real "where is this all going?" conversations. We had five of us on: Wes, Aty, Shawn, Chris, and Gary - and the spectrum of experience in the room actually made the discussion better.

Here’s what’s in the video:

🧭 "I’m Not a Developer. Where Do I Start?"
Gary’s question was simple and honest: I’ve done some HTML and CSS… but with all this B-Mad, Claude Code, OpenClaw stuff - where do I even start?
He’s running VS Code on a Raspberry Pi (which is awesome, by the way), trying to understand the stack without breaking his main machine.
We talked about:
- Not needing to become a "developer" in the old sense
- Starting with outcome definition instead of tools
- Keeping early builds simple (MVP mindset)
- Avoiding the trap of over-architecting too soon
If you’ve felt overwhelmed by:
- context windows
- local models
- agent frameworks
- "greenfield vs brownfield" talk
You’ll relate to this part.

🧠 Sonnet 4.6, Codex 5.3, and the Shift in Model Power
We got into the recent updates:
- Sonnet 4.6 improvements
- 1M context window options
- Codex 5.3 becoming very test-driven
- Models increasingly self-checking and structuring output
There was a really interesting comparison between Claude and Codex:
- Claude tends to "get it working"
- Codex tends to enforce tests and longer-term structure
That difference matters once your projects get big.

🏗 Chris: Building an OpenClaw Alternative (Local Model Focus)
Chris shared that he’s been building his own agent framework - designed to eventually run well on local LLMs. He’s intentionally "skating where the puck is going."
Key themes:
- Preparing for local models to get strong enough
- Adding guardrails around smaller models
- Running into scaling problems as projects grow
- The importance of test coverage before things get out of control
If you’re building something serious, this part is worth watching.
0 likes • 29d
Indeed, it was a great call with an amazing discussion. Security and optimization are ongoing processes - and so is learning. There’s no fixed SOP for this; it’s something that’s always in progress.
UN Chief just said the quiet part out loud: Don't leave AI to "a few billionaires."
At India's AI Impact Summit 2026, UN Secretary-General António Guterres warned against leaving AI's future to the "whims of a few billionaires." He called for open AI access and democratic governance.

Here's why this matters for anyone running production systems:

RIGHT NOW, YOUR AI INFRASTRUCTURE DEPENDS ON:
→ A handful of closed models (OpenAI, Anthropic, Google)
→ Proprietary APIs with no public oversight
→ Rate limits, pricing changes, and terms you don't control
→ Black-box decision-making with zero auditability

Concentration risk isn't just financial. It's operational. When your business-critical AI depends on one vendor:
→ They can change pricing overnight
→ They can deprecate models you rely on
→ They can shut down your API access for policy violations (real or perceived)
→ You have no fallback when they go down

And if you think "big tech won't fail" - remember:
→ Twitter's API changes killed thousands of apps in 2023
→ Google sunsets products constantly
→ OpenAI changed ChatGPT pricing and limits multiple times

Security teams understand single points of failure. Operations teams understand vendor lock-in. Why are AI teams ignoring both?

Guterres is right: AI governance can't be centralized in a few boardrooms. Because when a "few billionaires" control:
→ Training data access
→ Compute infrastructure
→ Model weights and APIs
→ Terms of service and censorship policies

You don't have AI infrastructure. You have a dependency you can't audit, can't replicate, and can't control.

Before you build your next AI feature:
→ What's your fallback if the API goes down?
→ Can you switch providers without rewriting everything?
→ Do you have access to model weights, or just API calls?
→ What happens when they change pricing or sunset the model?

Open models, local deployment, and vendor diversity aren't just nice-to-haves. They're operational resilience. A minimal sketch of what that can look like in code follows below.

What's your AI contingency plan?
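Here is a minimal sketch of "switch providers without rewriting everything": one internal interface, interchangeable adapters, and an explicit fallback chain. Both provider classes below are hypothetical stubs, not real vendor SDK calls.

```python
# A minimal sketch of vendor diversity as code (all classes hypothetical).
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HostedVendor(LLMProvider):
    """Stand-in for a closed-model API; simulates an outage here."""
    def complete(self, prompt: str) -> str:
        raise ConnectionError("simulated outage / rate limit / deprecation")

class LocalModel(LLMProvider):
    """Stand-in for a locally deployed open model."""
    def complete(self, prompt: str) -> str:
        return f"[local model] response to: {prompt!r}"

def complete_with_fallback(prompt: str, chain: list[LLMProvider]) -> str:
    failures = []
    for provider in chain:
        try:
            return provider.complete(prompt)
        except Exception as exc:
            failures.append(f"{type(provider).__name__}: {exc}")
    raise RuntimeError("All providers failed: " + "; ".join(failures))

# Callers depend only on the interface, so swapping vendors means adding
# an adapter, not rewriting every feature.
print(complete_with_fallback("Summarize today's incidents.",
                             [HostedVendor(), LocalModel()]))
```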
Aty Paul
@aty-paul-7706
Cybersecurity & AI Solution Architect | I secure infra, clean hacks, and build smart automations with OpenClaw, n8n & Claude.

Joined Aug 4, 2025
ENTJ