
36 contributions to Clief Notes
I used to build redundant networks. Now I build redundant thinking
When I was a younger network tech in the Army, the first thing I learned was that every critical network path needed a backup. Radios, telephones, smoke signals: anything to get a message across. You didn't build something and hope one link held.

Six months ago, using AI every day, I was making the exact mistake I was trained to avoid. Everything went through one interface: reading, drafting, problem-solving. One point of failure. Then I hit a usage limit one afternoon and had no idea what to do next. I was completely stuck.

Our bodies are really good at dumping whatever isn't being used (use it or lose it!), and at that point I was not using the intellect I'd worked so many years to build. So I started putting friction back in deliberately. Reading without it. First drafts without it. Twenty minutes on a problem before I touch anything.

I'm still using it constantly and getting better with it every day. I just didn't want one dependency I couldn't survive losing.

What have you kept separate? Curious what others are protecting.
"Engineering Challenge": Finding Time Between Diapers and Development
I often post about my wins with Claude Code or the progress on my book project, but there's one part of the equation I haven't touched on yet: time.

I'm lucky enough to be able to build a little during my work hours, but my primary focus is still being a Finance Manager. The real work happens when the house is finally quiet.

But here is the reality: I have a 3-year-old and a 9-month-old. If you've been there, you know. My "second shift" starts after they are tucked in, but it's always a gamble. Especially with a 9-month-old, you never really know how the night is going to go or how many times you'll be woken up.

The internal conflict: I'm a natural morning person. I love waking up early and feeling fresh. But to get anything done on my private projects, I often have to push late into the night.

I'm constantly trying to balance three things that all feel non-negotiable:

- Family time: This is my fuel. I refuse to sacrifice being present with my kids.
- Sleep: As an early riser, I need sleep to function as a Finance Manager and a dad.
- Development: I have a deep drive to learn, build, and move my projects forward.

The truth? Most days, it feels like I can only pick two. If I work late on a website or an automation, I'm a zombie the next morning. If I go to bed early to be a "good dad" at 6:00 AM, my projects stand still, which frustrates me. It's a constant puzzle of trying to be "efficiently lazy" with the few hours I actually have.

I don't have a "5-step master plan" for this. I'm just navigating it one night at a time: sometimes winning, sometimes just trying to stay awake during a meeting.

How do you guys balance this? For those of you with young kids, demanding jobs, and big goals: how do you find the space to create without burning out or missing the "golden years" with your family? I'm curious to hear how you prioritize when everything feels equally important. 👇
4 likes • 2d
I've been trying to do more with less. At the end of the day my daughter's development is more important than my own, and that's what's going to bring me the most fulfillment in life. So if that means I might miss out on some opportunities and my progress slows down, so be it. Slow motion is better than no motion.

I have about 2 hours in the morning to myself, before I leave for work and while the house is still asleep, and about an hour after the kid goes to bed before I enforce my own "no screens" time. In those 3 hours I have to squeeze in my workouts, my studies, my personal projects, and something fun so I don't crash out.

Limitation breeds creativity, and my time in the Army did teach me some good tricks for time management. I try to remind myself that the Grand Canyon was carved out through persistence, and I just gotta keep pushing that stone uphill.
1 like • 1d
@Allan Durhuus Ah yeah, I see. Mine starts school this year, but I remember when it was so hard to plan around everything. Things got exponentially better when we weren't worried about pull-ups and diapers anymore and she could get her own snacks. Once she gained more autonomy there was more peace of mind when I'd sit down, and I could squeeze in an extra half hour here or there. Hopefully you just have to hold out a bit longer!
Console Injection: Turn Any Browser Tab Into an AI Control Panel
This post has a video example.

**TLDR:** Console injection. Anything running in a browser can print a control menu to the console. Claude reads it. Now it knows what it can do, and controls the site without an API. So, no back-end needed. Super simple.

Want the full picture? Keep reading. Want your LLM to build this for you? Copy/paste this whole post.

---

**THE PROBLEM**

You want your LLM to control something in your browser. Your options aren't great:

1. Claude-in-Chrome's built-in tools. It doesn't know what your site does or how to use it. It takes screenshots and guesses. Slow and token-heavy.
2. Build an API server. Now you're managing keys, hosting, and paying per call.

**THE SOLUTION: CONSOLE INJECTION**

There's a third option. Whatever you're building loads in the browser. On load, it prints a list of controls the AI can use. The LLM reads the console, sees the control list, and runs JavaScript to call those controls directly.

It's just a simple JavaScript control list sitting in the console. You can even add a workflow so the AI has an idea of what it can do with those controls.

This works for anything that runs in a browser: a SaaS tool, an internal dashboard, a local dev environment, a canvas editor, a data pipeline UI. If it renders in a browser tab, you can give an LLM a control panel for it.

Here's what it looks like in one of my tools (hit F12 to see your own console):

```
[AI-ACCESSIBLE] This app can be controlled via JavaScript.

* Sidekick.help() - Complete tools reference (returns JS object)
* Sidekick.teach() - Full teaching guide, 14 sections logged to console
* Sidekick.tool(name, input) - Execute any tool directly
* Sidekick.batchAutomap(mappings) - Smart batch mapping (recommended)

help() + teach() contain everything needed. No need to fetch external files.
```

The LLM sees that, and it's immediately trained. It can read state, click buttons, type things, batch operations: whatever you've exposed.

**IT GETS BETTER**
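A minimal sketch of the pattern the post describes, with hypothetical names throughout (this is not the author's Sidekick code, just the shape of the idea): a control object with callable tools, plus a `help()` menu printed on load so an LLM reading the console knows what it can invoke.

```javascript
// Hypothetical control registry: each entry is a function the LLM can
// call directly from the browser console (no back-end, no API keys).
const AppControls = {
  tools: {
    getState: () => ({ items: 3, selected: null }), // read app state
    setTitle: (t) => `title set to "${t}"`,         // mutate the UI
  },
  // Machine-readable reference so the model doesn't have to guess.
  help() {
    return Object.keys(this.tools).map(
      (name) => `AppControls.tool("${name}", input)`
    );
  },
  // Single dispatch point: execute any registered tool by name.
  tool(name, input) {
    if (!this.tools[name]) throw new Error(`unknown tool: ${name}`);
    return this.tools[name](input);
  },
};

// Printed once on page load; an LLM reading the console is now "trained".
console.log("[AI-ACCESSIBLE] This app can be controlled via JavaScript.");
console.log(AppControls.help().join("\n"));
```

In a real page you would attach the object to `window` so it is reachable from the console, and expose whatever state reads and DOM actions your app actually supports.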
1 like • 2d
This post needs more visibility. This is extremely impressive!
Four hours of documentation a week I'm no longer doing
Six weeks ago I built a KB article pipeline. My team closes 15-20 tickets a week where someone fixed a real problem, documented it in the ticket, and we have no article for it. That adds up: a year of good fixes trapped in closed tickets nobody can find.

A Python script hits the ServiceNow API, pulls 30 days of closed tickets, strips PII, and normalizes each one into structured JSON: symptom, environment, resolution, notes. Claude drafted the first version. I've been maintaining it since.

The LLM stage reads that batch alongside the current article list. For each cluster: does coverage exist, rate it 1-5, draft a new article if the gap score clears a threshold. The threshold is a number I set in a reference file and can change without touching the code.

Six weeks in: 23 new articles published, 8 existing ones flagged for revision. About 4 hours of documentation avoided per week.

The 60-30-10 here is about as clean as I've gotten it. Normalization, dedup, and article-list comparison are all deterministic, so those live in the script. Rubric and threshold logic live in the reference file. The LLM rates coverage quality and writes articles. The first version had the LLM doing normalization too, and structure drifted week to week. Moving it into the script fixed that.

One thing to flag: it doesn't know when the technician notes are thin. The output quality follows the input quality. I've also re-calibrated the coverage rubric twice. "Quality" is harder to operationalize than I expected.

Anyone here run gap detection against a knowledge base outside IT? Sales playbook, legal, anything with a lot of tribal knowledge. The structure should transfer, but I haven't tested it.
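A rough sketch of the deterministic half of that split. The author's pipeline is a Python script; this JavaScript version only illustrates the design, and the field names (borrowed from common ServiceNow incident fields) and function names are assumptions, not his schema:

```javascript
// The threshold lives in a reference file in the real pipeline,
// so it can change without touching the code.
const config = { gapThreshold: 3 };

// Deterministic stage: normalize a raw ticket into the structured
// shape the LLM stage reads (symptom, environment, resolution, notes).
function normalizeTicket(raw) {
  return {
    symptom: (raw.short_description || "").trim(),
    environment: raw.cmdb_ci || "unknown",
    resolution: (raw.close_notes || "").trim(),
    notes: raw.work_notes || "",
  };
}

// Deterministic gate: the LLM assigns a gap score (1-5); an article
// is only drafted when the score clears the configured threshold.
function needsArticle(gapScore) {
  return gapScore >= config.gapThreshold;
}
```

The point of keeping normalization out of the LLM stage is exactly what the post reports: a fixed function can't drift week to week, and the threshold can be retuned without rewriting a prompt.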
3 likes • 0 comments
Every beginner should do this: A personal coach for prompting
I wanted proof that my prompts had improved from four months ago. The results turned into this post.

Around early January I added these instructions to my Claude.ai user preferences:

- If required information is missing, ask clarifying questions before answering.
- Before giving the final answer: list assumptions, identify missing data, state confidence level.
- If appropriate, advise on how to write a prompt more efficiently in the future.

Then I had Claude pull my chat history from before and after, and look for patterns. I figured I'd see changes in what I was asking. The actual change was in how I structured conversations around the asking, in three phases.

**Phase 1: one-line prompts (early January)**

Real prompt from January 8: "How do I set up an eSIM on a Windows laptop?" I was asking the way you'd ask a search engine. Claude wrote a generic eSIM tutorial. I bounced because it didn't match my situation, and never came back. That was my default: one-sentence prompts. No context, no constraint, no goal.

**Phase 2: Claude starts showing its work (mid-January)**

This is where the instructions started doing actual work. The "list assumptions" line forced Claude to write down what it was filling in for me. When a response opened with "Assuming this is a Windows endpoint with standard user permissions and no recent OS reimage," I could correct the wrong guesses before they corrupted the rest of the answer. About half the time, at least one was wrong.

"Identify missing data" produced a list of the questions Claude wanted to ask but was about to silently guess at. Reading that list every response taught me what to include upfront. Every "missing data" bullet was a future prompt fix.

"State confidence" forced Claude to mark which parts of the answer were solid and which to stress-test. "High confidence that one of the first three checks will identify the cause" is useful in a way that a confident-sounding wall of text just isn't.

The prompt-efficiency line pulled the other three together into a habit. After enough rounds of "next time include the OS version and whether the machine is domain-managed," I stopped needing to be told.
2 likes • 4d
@Nick Prescott Whaaaaaat!? I just ran it on one of my pipeline workspaces. I think my sleep might be in danger tonight. 😅 I do see that only Claude Code has this function, but this is a real gem. I wish there was a way to pin comments.
Alex Harrison
@alex-harrison-5965
Army Veteran | Support Engineer

Active 5h ago
Joined Mar 8, 2026
Greater Seattle Area