
14 contributions to AI Automation Society
#7dayAISChallenge - Day 1
Here is my first build - the newsletter. One thing I learned was how important it is to be specific with Claude. A few things I'd change: I'd be more specific about which resources to use for the infographics, pickier about formatting, and more detailed all around about what I want the finished product to be.
Day 1 Build Newsletter
I received a newsletter in my email, and it looks good. I will definitely keep editing it as I grow, but I am moving on.
1 like • 7h
Great job!
Welcome! Introduce yourself + share a career goal you have 🎉
Let's get to know each other! Comment below sharing where you are in the world, a career goal you have, and something you like to do for fun. 😊
2 likes • 17h
@Sam Alder thanks!!
1 like • 9h
@Frank van Bokhorst thank you Frank. I’ve visited your lovely city several years back. What a beautiful and friendly place!
Why Your Claude Setup Is Burning Tokens
Most Claude Code users create an AGENTS.md or CLAUDE.md file. A thousand lines is not unusual. They fill it with everything they want Claude to remember:
- tech stack
- coding conventions
- tone of voice
- folder structure
- review checklist
- deployment process
- project gotchas

The problem? That file loads into the context window on every single turn. If it’s 7,000 tokens, you pay 7,000 tokens just to ask Claude what file to edit. You pay it again when you ask a follow-up. You pay it again when Claude replies. By turn twenty, you may have burned 100,000+ tokens on instructions that were only relevant for maybe three of those turns.

Two things break:
1. Cost. Tokens are not free. You’re paying for context you are not using.
2. Quality. You hit the context-fill danger zone sooner. The model has less room for the actual task, and performance starts to degrade.

My hot take: Stop putting everything in one giant always-loaded instruction file. Use skills instead.

I tested one skill file:
- Full skill body: 944 tokens
- Name + description only: 53 tokens

That’s an 18x difference. And that difference compounds on every turn, every session, for every user on your team.

The better pattern is simple: Keep the always-loaded context small. Load the detailed instructions only when the task actually needs them.

Curious: how big is your AGENTS.md or CLAUDE.md file right now?
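The compounding effect is easy to check with back-of-the-envelope arithmetic. This sketch uses the numbers from the post (a 7,000-token always-loaded file, a 53-token skill stub, a 944-token skill body) plus two assumed session parameters: 20 turns total, 3 of which actually need the detailed instructions.

```python
# Back-of-the-envelope token cost: monolithic instruction file
# vs. a skill stub that pulls in the full body only when needed.
# The turn counts are assumptions for illustration.

ALWAYS_LOADED = 7_000    # tokens in the monolithic CLAUDE.md
STUB = 53                # tokens for the skill's name + description
SKILL_BODY = 944         # tokens for the full skill body
TURNS = 20               # turns in the session (assumed)
TURNS_NEEDING_SKILL = 3  # turns where the instructions matter (assumed)

# Monolithic file: full cost on every turn.
monolithic_cost = ALWAYS_LOADED * TURNS

# Skill pattern: cheap stub every turn, full body only when needed.
skill_cost = STUB * TURNS + SKILL_BODY * TURNS_NEEDING_SKILL

print(f"monolithic: {monolithic_cost:,} tokens")  # 140,000
print(f"skills:     {skill_cost:,} tokens")       # 3,892
print(f"ratio:      {monolithic_cost / skill_cost:.0f}x")
```

Under these assumptions the per-file 18x gap grows to roughly 36x over the session, because the monolithic file is paid on every turn while the skill body is only paid when it is relevant.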
Poll
7 members have voted
1 like • 21h
Thanks for posting this. I need to do some more research into this.
0 likes • 18h
@David Dacruz thanks!!
PhD Student Paid Me $1,800 to Cut Literature Review From 120 Hours to 22 Hours 🔥
PhD student facing a dissertation deadline in 4 months. Literature review: already 6 months behind schedule. Required a comprehensive review of 200+ academic papers: extract methodology, findings, and limitations from each, then synthesize into a coherent narrative demonstrating the research gap.

Manual approach: read each paper carefully (45 minutes average), take detailed notes, extract relevant quotes, log complete citations properly. Estimated total time: 120+ hours minimum for a thorough review. Progress after 2 months of dedicated work: 34 papers fully reviewed, 166 remaining. At that pace: 8 additional months needed to complete.

Critical problem: dissertation defense scheduled in exactly 4 months, with her advisor already expressing serious concern about timeline viability.

She paid me $1,800 to build an academic paper processing system that could accelerate this dramatically. System functionality: upload a research paper PDF → automatically extract key structured fields (title, authors, publication year, methodology type, sample size, key findings, stated limitations) → generate a concise one-paragraph summary → auto-tag by research method category → create a fully searchable database. Processing time per paper: 3 minutes average versus 45 minutes of manual reading and note-taking.

Implementation timeline: weekend 1, system development and testing. Weeks 1-3, systematically processed 247 papers (more relevant papers were discovered than originally planned during search expansion). Total project time including setup: 22 hours from start to complete database.

Result: comprehensive literature review completed in 3 weeks instead of the projected 8 additional months.

Unexpected, powerful benefit: the searchable database enabled sophisticated pattern analysis that was completely impossible with the manual approach. The methodology breakdown became instantly visible: 87 studies used surveys, 34 used interviews, 18 used mixed methods. Critical research gap identification emerged from simple database queries that would have required weeks of manual cross-referencing and analysis.
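A minimal sketch of the searchable-database half of a system like the one described above, using SQLite. The table and column names are my assumptions, and the PDF-extraction step is stubbed out with hypothetical rows; the point is how a "methodology breakdown" collapses into a single GROUP BY query once the papers are structured records.

```python
import sqlite3

# In-memory database standing in for the paper-processing system's
# storage layer. Schema mirrors the extracted fields named in the
# post; all names here are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE papers (
        title       TEXT,
        authors     TEXT,
        year        INTEGER,
        method      TEXT,     -- auto-tagged research method category
        sample_size INTEGER,
        findings    TEXT,
        limitations TEXT,
        summary     TEXT      -- one-paragraph summary
    )
""")

# Hypothetical rows standing in for the extraction pipeline's output.
conn.executemany(
    "INSERT INTO papers (title, year, method) VALUES (?, ?, ?)",
    [
        ("Paper A", 2019, "survey"),
        ("Paper B", 2021, "survey"),
        ("Paper C", 2020, "interview"),
        ("Paper D", 2022, "mixed methods"),
    ],
)

# The methodology breakdown that would take weeks of manual
# cross-referencing becomes one aggregate query.
for method, n in conn.execute(
    "SELECT method, COUNT(*) FROM papers "
    "GROUP BY method ORDER BY COUNT(*) DESC"
):
    print(method, n)
```

The same shape of query (filter by year range, sample size, or method, then count or list) is what makes gap-finding cheap once every paper is a row instead of a PDF.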
1 like • 21h
Nice work, and it's helpful to see how you thought through the cost of your services vs. the potential savings you provided.
Andy Hartfield
@andy-hartfield-8062
Just beginning my AI Agentic journey

Joined Apr 26, 2026