My agentic coding methodology of June 2025
I was chatting with some friends about how I'm using "AI" tools to write code. Like everyone else, my process has been evolving over the past few months. It seemed worthwhile to do a quick writeup of how I'm doing stuff today.
At the moment, I'm mostly living in Claude Code.
My "planning methodology" is:
"Let's talk through an idea I have. I'm going to describe it. Ask me lots of questions. When you understand it sufficiently, write out a draft plan."
After that, I chat with the LLM for a bit.
Then, the LLM shows me the draft plan.
I point out things I don't like in the plan and ask for changes.
The LLM revises the plan.
We do that a few times.
Once I'm happy with the plan, I say something along the lines of:
"Great. now write that to docs/plans/somefeature.plan.md as a series of prompts for an llm coding agent. DRY YAGNI simple test-first clean clear good code"
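For illustration, here's roughly the shape of plan document that prompt tends to produce. Everything below is an invented example (the feature, phases, and prompts are made up), not a real plan from one of my projects:

```markdown
# Plan: somefeature

## Status
Not started.

## Phase 1: Data model
Prompt: Write failing tests for the new record type first.
Then implement the minimal code to make them pass. DRY, YAGNI.

## Phase 2: Wiring
Prompt: Connect the new model to the existing entry points.
Keep the change small; update tests before code.

## Open questions
- (Answered questions from the planning chat get folded in here.)
```

The key property is that each phase is a self-contained prompt a fresh, context-free agent can pick up and run with.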
I check over the plan. Maybe I ask for edits. Maybe I don't.
And then I type /clear to blow away the LLM's memory of this nice plan it just made.
"There's a plan for a feature in docs/plans/somefeature.plan.md. Read it over. If you have questions, let me know. Otherwise, let's get to work."
Invariably, there are (good) questions. It asks. I answer.
"Before we get going, update the plan document based on the answers I just gave you."
When the model has written out the updated plan, it usually asks me some variant of "can I please write some code now?"
"lfg"
And then the model starts burning tokens. (Claude totally understands "lfg". Qwen tends to overthink it.)
I keep an eye on it while it runs, occasionally stopping it to redirect or critique something it's done, until it reports "Ok! Phase 1 is production ready." (I don't know why, but lately, it's very big on telling me first-draft code is production ready.)
Usually, I'll ask it if it's written and run tests. Usually, it actually has, which is awesome.
"Ok. please commit these changes and update the planning doc with your current status."
Once the model has done that, I usually /clear it again to get a nice fresh context window and tell it "Read docs/plans/somefeature.plan.md and do the next phase."
And then we lather, rinse, and repeat until there's something resembling software.
This process is startlingly effective most of the time. Part of what makes it work well is the CLAUDE.md file that spells out my preferences and workflow. Part of it is that Anthropic's models are just well tuned for what I'm doing (which is mostly JavaScript, embedded C++, and Swift). Generally, I find that the size of spec that works is something the model can blaze through in less than a couple hours with a focused human paying attention, but really, the smaller and more focused the spec, the better.
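To give a sense of what such a file can encode, here's a minimal CLAUDE.md sketch. The specific rules below are invented for illustration; they just restate the workflow described above, not the contents of any real file:

```markdown
# CLAUDE.md

## Workflow
- Work from the current plan in docs/plans/, one phase at a time.
- Write tests first; actually run them before calling a phase done.
- After each phase: commit, then update the plan doc's status section.
- Ask questions before coding if anything in the plan is ambiguous.

## Style
- DRY, YAGNI. Small, focused changes over big rewrites.
```

Claude Code reads this file automatically at the start of a session, so these preferences survive every /clear.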
If you've got a process that looks like mine (or is wildly different), I'd love to hear from you about it. Drop me a line at jesse@fsck.com.