I'm starting from scratch here and documenting as I go.
I've set up my initial framework (D.O.E.) on Antigravity and built a simple reviewer agent that reviews all the code the parent agent builds. Nothing fancy yet; the goal is just to get something running end-to-end.
When I say "I built", I mean I said to Claude Code, "Build a reviewer agent that does xyz". A.I. is getting mental!
So far, this is my basic tech stack:
- Google Antigravity (IDE)
- Claude Code (Agent/Engineer)
- Wispr Flow (Voice prompts)
- D.O.E. framework (Directive, Orchestration, Execution)
What's working:
- The parent agent runs and builds code.
- The reviewer agent reviews that code and rates it out of 100 (rough output shape below).
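For illustration, this is roughly the shape of review output I'm nudging it toward. The field names are placeholders I made up for this post, not a schema I've actually locked in:

```python
# Hypothetical shape of a single review result (illustrative only,
# not the reviewer agent's actual output format).
review = {
    "score": 82,  # rating out of 100
    "issues": [
        "no input validation on user-supplied paths",
        "missing tests for the failure branch",
    ],
}
```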
What's broken / in progress:
- When the rating is below a certain threshold, I want the parent agent to autonomously improve the code. That essentially takes me out of the loop and frees me up to focus on the next biggest needle-moving activity (rough sketch of the loop below).
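To make that goal concrete, here's the loop I'm aiming for as a minimal Python sketch. The helpers (build_code, review_code, improve_code) and the knobs (THRESHOLD, MAX_ATTEMPTS) are all stand-ins I invented for illustration; none of this is real Antigravity or Claude Code API, and the real version will live in the D.O.E. prompts rather than a script:

```python
# Rough sketch of the review-and-improve loop, with stub functions
# standing in for the parent agent and reviewer agent.

THRESHOLD = 70      # minimum acceptable review score (out of 100)
MAX_ATTEMPTS = 3    # so the loop can't run forever

def build_code(task: str) -> str:
    # Stand-in for the parent agent building code from a directive.
    return f"# code for: {task}"

def review_code(code: str) -> dict:
    # Stand-in for the reviewer agent scoring the code out of 100.
    return {"score": 62, "issues": ["no error handling", "no tests"]}

def improve_code(code: str, issues: list[str]) -> str:
    # Stand-in for the parent agent revising the code from review feedback.
    return code + "\n# addressed: " + "; ".join(issues)

def build_with_review(task: str) -> str:
    code = build_code(task)
    for _ in range(MAX_ATTEMPTS):
        review = review_code(code)
        if review["score"] >= THRESHOLD:
            return code                              # good enough, hand it back to me
        code = improve_code(code, review["issues"])  # below threshold: improve autonomously
    return code                                      # still low after MAX_ATTEMPTS, flag it to me

print(build_with_review("build a reviewer agent"))
```

The attempt cap is there so a stubbornly low score doesn't burn tokens forever; after a few passes the code just gets surfaced back to me instead.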
I'm intentionally keeping this simple. The goal is not a perfect system yet; it's to learn through building, reviewing failures, and then iterating.
BUILDING IS THE FASTEST AND 'FUNNEST' WAY TO LEARN!
Just as AI requires feedback loops to get smarter, humans require feedback loops to learn.
I decided to still post this even though the build's incomplete. If you're a beginner, this is what "starting" looks like.
Next steps:
- Update the Directive layer prompt with instructions to always run the reviewer agent after the parent agent has built code.
- Update the reviewer agent to instruct the parent agent to improve the code autonomously if the rating is below a set threshold (e.g., 70). There's an example of the wording I have in mind below.
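I haven't written these prompt changes yet, but the Directive-layer addition will probably read something like: "After every build task, hand the code to the reviewer agent. If the score comes back below 70, revise the code and re-run the review before reporting the result." I'll refine the exact wording once I see how the agents actually behave with it.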