Activity

Owned by Jonathan

The Open Campus for AI

88 members • Free

Learn How to Run OpenClaw 24/7 Under $200/mo (DIY/DFY/DWY). Access Everything: https://tinyurl.com/citizen-dev

Memberships

AI - OpenClaw - Code

215 members • Free

OpenClaw Users

497 members • Free

AI Craft

334 members • Free

Recess

585 members • Free

The AI Skills-Forge

79 members • Free

Vibe Coders

205 members • $40/month

The Skool Hub

5.1k members • Free

Solve4Why: Ignite.Ai

13 members • Free

🎮 OnlyLANs Gaming Group

119 members • Free

39 contributions to The AI Advantage
Kleptocracy and Power
Would love to hear the community’s thoughts on this. Are you leaning more towards fear or empowerment when you think of where we’re heading in the next 5-10 years with AI?
📰 AI News: OpenAI Hires OpenClaw Creator To Build Next-Gen Personal Agents
📝 TL;DR
Sam Altman just announced that OpenClaw creator Peter Steinberger is joining OpenAI to drive the next generation of personal agents. OpenClaw is moving into an independent foundation and staying open source, with OpenAI continuing to support it.

🧠 Overview
This is a big signal about where AI is headed next. The race is shifting from "who has the smartest chatbot" to "who can deliver agents that actually do things," like managing your inbox, booking travel, handling forms, and working across apps on your behalf. OpenClaw exploded because it made agents feel real, not theoretical. OpenAI bringing Steinberger in suggests they want to move faster on personal, multi-step, multi-agent systems, and they want the builder who already proved it can work in the wild.

📜 The Announcement
OpenAI CEO Sam Altman said Peter Steinberger is joining OpenAI to lead the next generation of personal agents. He also said OpenClaw will live in a foundation as an open source project that OpenAI will continue to support. Separately, Steinberger reinforced the same direction: he is joining OpenAI to bring agents to everyone, while OpenClaw remains open and independent under a foundation structure.

⚙️ How It Works
• Personal agents, not just chatbots - The goal is an assistant that takes actions across tools, not one that only drafts text.
• Multi-agent direction - Instead of one giant assistant doing everything, the future looks like multiple specialized agents coordinating to complete tasks.
• OpenClaw as proof of demand - OpenClaw gained massive traction because it showed real workflows like email, scheduling, and online tasks, not just demos.
• Foundation structure for OpenClaw - Moving the project into a foundation is meant to keep it open source and reduce fear that it gets locked into a single company.
• OpenAI support without full ownership - The public promise is continued support while the open source project stays independent in governance.
📰 AI News: OpenAI Hires OpenClaw Creator To Build Next-Gen Personal Agents
1 like • Feb 19
@Brian Flett I'm curious, how much troubleshooting do you do actively on n8n now vs offloading to agents? Super interesting regardless
OpenClaw and Learning
Was wondering if anyone's been down an OpenClaw rabbit hole and is seeing fruit from it recently. I've been working with a team of experts on answering this exact pain point (which would be my own Skool community). Otherwise, I'd love to learn what's been working for you.
Navigating AI Aversion vs Dependency
I've been teaching at this private school for a little over 4 months. In that time, I've had a lot of interesting takeaways that I thought to share, and I hope that if you're working with anyone born after 2000, you'll walk away with at least one new perspective.

1) 80% of classrooms are GPT-dependent and hate it. I've heard this is true about public schools too, but even in private schools, where kids are believed to have an extra "edge", they're just prompting their way through oblivion and hating it. They know it's not their own thoughts. Management frowns upon any AI use, teachers are scared of inaccuracy, and students lean into it to offload the difficulty of thinking through problems.

I posted about this revelation on Reddit, hoping to offer some middle-ground solution. The aversion and pitchforks were palpable: I got DMs and comments from all walks of teachers saying that I embody "the problem" with "tech bros coming into a space they know nothing about and trying to make a sale". I painted the post with "free", but I guess it still smelled like sales. I felt odd and rejected, and I wondered if this was just a Reddit echo chamber.

So I went to a bunch of private teaching forums and attempted to walk down the same path of bridging free AI tools to help teachers and students navigate better. It felt incredibly uphill, and I realized any mission in this direction of AI education would have to be tackled from the ground up.

I fundamentally believe, purely due to systemic bias, that we are under-equipping future generations and numbing them with media machines. Edu-informational content is meant to serve this gap: take the ones that are motivated and give them a path to trailblaze. Waiting for curriculum approval or middle management to greenlight doesn't make sense to me. I started building with this group in mind: the next generation of AI specialists.
I've also found that this generation motivates and inspires a lot of the open source efforts, and if you have them on your team, you have a novel perspective on architecture, capability, and possibility. Regardless of age, people want to help people.
Navigating AI Aversion vs Dependency
0 likes • Jan 8
@Tami Vid Thanks for this! Yeah I'm excited to see adoption happen and for them to feel that sense of empowerment. Race against the machine truly
0 likes • Jan 27
@Ahsan Raza 100% feel free to reach out. Can't post links or DM here unfortunately but you can find my LinkedIn, happy to chat
Avoid sycophancy
I asked an AI to create an image of how I treat it, and I added one condition: Avoid sycophancy. Don't try to make me happy. I want the truth. What came back stopped me cold. Because the image wasn't about how I treat AI at all. It was me. Overloaded. Surrounded by unfinished ideas that are actually good. Constantly pushing for one more fix, one more improvement, one more iteration. Never quite letting anything feel "done." And that's when it hit me: I don't treat tools this way. I treat myself this way. That relentless internal pressure? That constant optimization loop? That refusal to pause because I can see what's possible? That's EXACTLY me. Not self-hatred. Not failure. Just high standards with no recovery cycle. It's kind of wild (and a little uncomfortable) realizing that the same mindset that drives growth can also quietly drain you if you never step back and acknowledge progress. Sharing this because I can't be the only one who lives here. If this resonates, you're not broken, you're just pushing a powerful system without enough rest. And sometimes, the mirror comes from places you don't expect.
Avoid sycophancy
1 like • Jan 22
Felt this one too. The graveyard of ideas is riddled with ghosts. I try to remind myself that this is part of the process, and that the bigger picture of it all coming together will make sense in hindsight. Vision is usually 20/20 when you're looking backwards, seeing all the dots connecting. AI sycophancy is an interesting dilemma where AI tells you what it thinks you want to hear, and you believe it. This can lead to rabbit hole after rabbit hole, and part of the process has been embracing the fact that your life exists in these parallel think tanks. To take action and materialize something requires a persistent ambition, which AI can never replace, and if you have the tenacity, you can create whatever you set your mind and heart to. That's at least what I keep telling myself 🤓
Jonathan McLemore
Level 4 • 23 points to level up
@jonathan-mclemore-8563
I help owner/operators compress time and train the next generation on becoming workforce-ready with AI.

Active 6h ago
Joined Oct 30, 2025
INFJ
Toronto, Ontario