
Memberships

Creator Academy • 8.2k members • Free
Self Publishers Unite! • 500 members • Free
Voiceover Masterclass • 121 members • Free
KDP Publishing • 1k members • Free
Podcaster Pals • 96 members • Free
Creator Profits • 19k members • Free
The AI Advantage • 121.7k members • Free
Zero to Hero with AI • 12.4k members • Free

24 contributions to Content Academy
I emailed 600 people I hadn't spoken to in 14 years
I emailed 600 people I hadn't spoken to in 14 years. 5 of them became my first paying customers — within 60 minutes. Here's what I built and why.

I'm partially dyslexic. Long text has always been a struggle. Since high school I've been converting written content to audio — articles, reports, white papers, ebooks. I kept building tools to do this. Eventually one of them got good enough that content creators started asking for it.

A friend wanted it for creating custom bedtime stories for her kids. Another had a stack of ebooks he'd never read — wanted them as audio for his commute. Others were producing YouTube content and tired of paying per-character for cloud voiceover tools.

That personal tool became a full desktop voice AI studio. 63 voices, voice cloning, 23 languages, multi-speaker editing, professional mastering. Everything runs locally — no uploading scripts to someone else's server.

Then 3 days ago I emailed 600 customers from a product I built in 2012. Plain text, no design. Some of them bought. Revenue before the product even launched publicly. Tonight it goes live.

For content creators here — how much of your workflow involves voiceovers? And what's your biggest frustration with the tools you're using now?
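The core of "written content to audio" is splitting long text into segments a TTS engine can voice naturally. Here's a minimal stdlib sketch of that step; the function name and the 400-character limit are my own illustrative choices, not the tool's actual implementation.

```python
import re

def chunk_for_tts(text, max_chars=400):
    """Split long text into sentence-aligned chunks small enough
    for a TTS engine to synthesize as natural-sounding segments."""
    # Split on sentence boundaries (period, question mark, exclamation mark).
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if adding this sentence would exceed the limit.
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```

Splitting on sentence boundaries rather than fixed offsets matters for audio: a chunk that ends mid-sentence produces an audible, unnatural pause when segments are stitched back together.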
0 likes • 25d
@Ilyas Serter It's all automated
0 likes • 12d
@Giovanni Scotti Thanks, I currently run it on an M2 Pro MacBook Pro with 16 GB of RAM
WOW! GPT-5.5 and the much-awaited DeepSeek V4 Flash & Pro dropped today
What a Friday.

For two years, if you were building anything serious with AI, you were building on Claude. Not because it was a rule — because it was the right call. Anthropic set the bar for coding. They set the bar for writing. They set the quiet default that if you cared about quality, you paid the Opus premium and didn't ask questions. I didn't either. The whole builder community ran on Claude for a reason.

This week, that changed. GPT-5.5 shipped yesterday. DeepSeek V4 Pro shipped the same day. Inside twenty-four hours, the ceiling on agentic coding went up — and the open-weight floor came within striking distance of the closed frontier. Real contenders. Not "almost there." Actually here.

Three things this changes for anyone building, and none of them are in the headlines yet.

Coding: The default setting of "Claude writes the code, Claude runs the agents" breaks this week. GPT-5.5 is measurably better on the kind of long-running multi-step agent work that used to be Claude's moat. DeepSeek V4 Pro is within a fraction on real software engineering, at a price point where "run it myself" is genuinely on the table. Every tool in your stack that quietly assumed Anthropic — your IDE integrations, your review agents, your automation glue — is about to get reconsidered. That's good for you. Less lock-in. More leverage.

Marketing and writing: The price-per-draft math just flipped. We've been rationing the good model forever — the flagship handles the brand-safe stuff, volume work gets the cheap model, and we've all quietly accepted that frontier-quality writing at scale isn't possible. That's over. Frontier-quality writing at open-weight pricing means every ad variant, every email rewrite, every landing-page test, every personalization loop runs at the top tier. The whole architecture of "one good draft, fifty cheap copies" starts feeling as dated as shared creative. Everything top-tier. Everything personalized. Everything testable.

Agentic work: This is the one I am most excited about, and the most under-talked-about. For two years, "multi-model agent stacks" has been a slide in decks. Nobody actually builds them, because there hasn't been a real second option. GPT-5.5 for the reasoning step. DeepSeek V4 Pro for the long-context research step. Claude for the interpretive writing step. A cheap open model for the high-volume structured step. Not one runtime. A pipeline. Composed by you. Owned by you. That stops being a slide and starts being the default next month.
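A multi-model pipeline like the one described is, at its simplest, a routing table from step types to models. Here's a minimal sketch; the model stubs and routing names are illustrative stand-ins, with each function standing in for a real API call to a different provider.

```python
# A pipeline where each stage is routed to a different model.
# Model names and the routing table are illustrative, not real
# endpoints; each "model" here is a stub standing in for an API call.

def reasoning_model(prompt):
    return f"[plan] {prompt}"

def research_model(prompt):
    return f"[sources] {prompt}"

def writing_model(prompt):
    return f"[draft] {prompt}"

ROUTES = {
    "reason": reasoning_model,    # e.g. a frontier reasoning model
    "research": research_model,   # e.g. a long-context open-weight model
    "write": writing_model,       # e.g. a strong prose model
}

def run_pipeline(task, steps):
    """Feed the task through each step, routing to that step's model."""
    output = task
    for step in steps:
        output = ROUTES[step](output)
    return output

result = run_pipeline("compare TTS pricing", ["research", "reason", "write"])
```

The point of the routing table is exactly the "composed by you, owned by you" idea: swapping one provider for another is a one-line change to `ROUTES`, not a rewrite of the pipeline.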
Anyone played with Andrej Karpathy's "LLM Wiki" idea from the gist he dropped?
Quick version in case you missed it: instead of using RAG to re-chunk your sources every time you ask a question, you compile each source once into a persistent markdown wiki. The LLM extracts concepts, writes entity and concept pages, updates cross-references, flags contradictions, and maintains the whole thing. Future queries read the pre-synthesized wiki.

The part that clicked for me: the reason most of us abandon our second brains is that backlink and cross-reference upkeep is boring. The LLM doesn't care. It's happy to touch fifteen pages in one pass.

I spent a couple of weeks turning Karpathy's pattern into a Claude Code plugin that actually scales (atomic pages, sharded indexes, BM25 fallback past ~300 pages). It also runs in Codex, Cursor, Gemini CLI, Pi, and OpenClaw through the skills CLI.

Install in Claude Code:

/plugin marketplace add praneybehl/llm-wiki-plugin
/plugin install llm-wiki@llm-wiki

Or in any other supported agent:

npx skills add praneybehl/llm-wiki-plugin -a <your-agent>

Five slash commands (init, ingest, query, lint, stats), stdlib-only Python, no dependencies. Plays well with Obsidian if you want the graph view.

Repo: https://github.com/praneybehl/llm-wiki-plugin
Karpathy's gist: https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f

Curious if anyone here has tried the pattern themselves. What did you ingest first, and what broke before it worked?
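The "BM25 fallback" the post mentions is a classic lexical ranking function that needs no embeddings, which is why it fits a stdlib-only plugin. Here's a minimal Okapi BM25 sketch of the idea (this is my own illustration, not the plugin's actual code):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc against the query with Okapi BM25."""
    tokenized = [doc.lower().split() for doc in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    terms = query.lower().split()
    # Document frequency for each query term.
    df = {t: sum(1 for d in tokenized if t in d) for t in terms}
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        score = 0.0
        for t in terms:
            # Smoothed IDF: rare terms weigh more.
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            # Term-frequency saturation with length normalization.
            denom = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[t] * (k1 + 1) / denom
        scores.append(score)
    return scores
```

Length normalization (the `b` term) is why a short, focused wiki page can outrank a long one that merely mentions the query terms, which suits the atomic-pages design.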
I Run 10 YouTube Channels. I Don't Make a Single Video. Here's what that actually looks like
I woke up this morning to 10 fresh podcast episodes. Fully researched. Scripted. Narrated. Visuals timed to every beat. Published to YouTube, RSS, and my own website. I didn't make any of them. A machine on my desk did. While I slept.

I launched these channels at the end of February. It hasn't been a month yet. Some channels are pulling 1,000+ views and gaining subscribers - with zero ads, zero promotion, zero outreach.

But here's what I need you to understand: this is not a prompt. When people hear "automated content," they picture someone typing a topic into a chatbox and hitting publish. That's not what this is. That's not even close.

What I built is a multi-stage production pipeline. Not a single generation step - a sequence of independent systems, each with its own job, its own rules, and its own quality bar. Every stage has to pass before the next one starts. If something isn't good enough, it gets caught, flagged, and redone automatically.

Here's what that actually means in practice:

Every episode starts with real research. Not "summarise this topic." Actual source-finding, fact-checking, angle evaluation. The kind of editorial groundwork a good producer would do before writing a single word. Most automated content skips this entirely. Mine can't - the pipeline won't let it move forward without it.

Then there's the writing. And this is where I spent most of my 45 days. I didn't just generate scripts - I built an entire set of rules around how spoken language works differently from written language. How rhythm changes when someone is listening instead of reading. How a pause lands. How a transition should feel. Early versions sounded like a textbook. Now they sound like someone talking to you.

After the writing comes the part most people don't think about: quality control. Every script gets evaluated across multiple dimensions before it moves on. There's a hard pass/fail threshold. I've watched the system reject its own output dozens of times and come back with something genuinely better. Nothing mediocre gets through. That's not a nice-to-have - it's the reason the content performs.
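The gate-then-redo loop described above is the key structural idea: a stage's output must clear a threshold before the next stage runs, and failing output is regenerated. A minimal sketch, where the scoring function, threshold, and retry limit are illustrative stand-ins for the real evaluators:

```python
# Each stage must clear a quality threshold before the next runs;
# failing output is regenerated up to a retry limit. The producer
# and evaluator below are toy stand-ins for real LLM calls.

def run_stage(produce, evaluate, threshold, max_retries=3):
    """Run one pipeline stage, redoing its output until it passes."""
    for attempt in range(1, max_retries + 1):
        output = produce(attempt)
        score = evaluate(output)
        if score >= threshold:
            return output, score, attempt
    raise RuntimeError("stage failed quality gate after all retries")

# Toy stage: drafts get longer (and score higher) on each attempt.
draft = lambda attempt: "word " * (attempt * 10)
quality = lambda text: len(text.split()) / 30  # normalize to 0..1

output, score, attempts = run_stage(draft, quality, threshold=0.9)
```

The hard failure after `max_retries` is deliberate: a pipeline that silently passes sub-threshold output through would defeat the whole point of the gate.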
1 like • Mar 25
@Sheroz Samatov my bad — there was a typo. Instead of "channel," I wrote "videos." I've corrected that. So the channel has gained over a thousand views in less than a month. It isn't driven by any single video overall, but one video in particular has done remarkably well. I think it's probably because of the story behind it. https://www.youtube.com/watch?v=2phJc2Nmpk0
0 likes • Mar 27
Interesting questions, let me check. I guess these segments don't care; makes sense though.
The real takeaway from Sam Altman's "AI as electricity" comment
Altman said this week that AI will be sold like electricity. Metered. On demand. A utility. For content creators and marketers, this isn't abstract.

Think about what happened when video production tools became cheap. YouTube exploded. Not because video technology was new. Because the barrier to creating video dropped low enough that millions of people could suddenly publish.

The same thing is happening with AI-powered content. When AI tokens become a utility, the cost of generating voiceovers drops. The cost of translating content into 12 languages drops. The cost of repurposing one piece of content into 30 formats drops.

Right now, a lot of that is expensive or manual. Cloud APIs charge per character, per minute, per whatever. You're watching a meter while you create. When that meter runs close to zero, the volume of content explodes. And the competitive advantage shifts from "who can afford to produce" to "who has the best ideas and the best distribution."

I'm building AI-native content tools right now as a solo founder. Five-plus products. One person. No team. A decade ago I tried building multiple products solo and failed miserably. The infrastructure didn't exist. Today, AI is that infrastructure. One person can now ship things that would have taken entire teams.

People ask me constantly if AI is a bubble. I push back every time. I'm in it every day, building real products. This feels like electricity going mainstream, not like tulips about to crash.

For the content people here: when the cost of AI-generated content drops to nearly zero, what changes in your strategy? More volume? New formats? Different distribution? Curious how you're thinking about this.
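To make the per-character metering concrete, here's a back-of-envelope cost sketch. The $16-per-million-characters rate and the ebook length are hypothetical figures of my own for illustration, not quotes from any provider or from the post.

```python
# Back-of-envelope: what per-character metering costs at scale.
# $16 per 1M characters is a hypothetical cloud-TTS price, not a
# quote from any specific provider; the ebook size is also assumed.

PRICE_PER_MILLION_CHARS = 16.00

def voiceover_cost(num_chars, price_per_million=PRICE_PER_MILLION_CHARS):
    """Metered cost of synthesizing num_chars of text."""
    return num_chars / 1_000_000 * price_per_million

ebook = 450_000            # assumed chars in a ~75k-word ebook
one_title = voiceover_cost(ebook)              # one language, one format
thirty_variants = voiceover_cost(ebook) * 30   # repurposed into 30 formats
```

The multiplication is the whole argument: at metered prices, "repurpose into 30 formats" scales cost linearly, while a flat-cost local pipeline makes the 30th variant as cheap as the first.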
Praney Behl
Level 4 • 30 points to level up
@praney-behl-3117
Creator, Developer, Entrepreneur, Marketer, Husband & a Dad. Building Vois.so, konvy.ai, heynyx.app, volant.app and a couple more ;)

Active 6h ago
Joined Apr 28, 2025
Melbourne AUS