AI coding best practices

AI coding agents share many of the same limitations as human engineers, and like human teams, they perform better with guidance. The spec-driven and multi-role subagent frameworks I’ve played with (e.g. Spec Kit) seemed promising at first, but have been too rigid or too action-oriented for particular scenarios.

Instead, I’ve found myself using a freeform set of basic techniques that can be applied in more or less detail depending on the product and codebase, and that evolve over time. These practices have proven durable as models and agent capabilities have changed.

I’m mentioning Claude, but these techniques can be used with all of the major agents. I’m currently using many Claude Code Opus 4.5 sessions in iTerm with VS Code and Obsidian for code and doc review.

Best practices

  1. I ask Claude to map user flows, data flows, logical code modules, etc. into .md docs with mermaid diagrams to make them faster for me to digest. Read, read, read. Go deeper and iterate until I understand the problem space sufficiently to plan what to build next.
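
    A user-flow doc might include a diagram like this; the registration flow below is purely hypothetical:

    ```mermaid
    flowchart TD
        A[Visitor opens signup page] --> B[Submit email and password]
        B --> C{Validation passes?}
        C -- no --> B
        C -- yes --> D[Create user record]
        D --> E[Send verification email]
        E --> F[User clicks verification link]
        F --> G[Account active]
    ```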

  2. I ask Claude to write instructions for itself in AGENTS.md: maintain these docs with every major unit of work shipped, and reference them during its own planning and execution.

    These two goals can be as simple as asking Claude to come up with a documentation structure that it will reference in AGENTS.md:

    To speed up your work and reduce context, we want to summarize the major components of this app into logical chunks in .md files in /docs/current-functionality, then reference these files with short summaries about them in AGENTS.md, so you can read them on init depending on tasks needed. Propose a docs file structure. Also consider guidelines to add to AGENTS.md to maintain them going forward as new development is done. Think hard about this.

    Some more explicit examples:

    Read through this repo in detail and attempt to identify the core user flows like registration, auth, authorized application user flows, public site pages, etc. Think hard and generate an exhaustive set of documents at /docs/current-implementation/user-flows/overview.md and /docs/current-implementation/user-flows/.md for each major flow. Include mermaid diagrams in these markdown files to help illustrate the core steps in the flow.

    Read through this repo in detail and attempt to identify the core data flows. Identify the key items in the schema and their relationships. Identify when these items are created, updated or deleted. Identify which users have rights to apply these actions on the data. Flag anything that has privacy considerations. Think hard and generate an exhaustive set of documents at /docs/current-implementation/data-flows/overview.md and /docs/current-implementation/data-flows/.md for each major flow. Include mermaid diagrams in these markdown files to help illustrate the core steps in the flow. Generate a high level schema description at /docs/current-implementation/data/schema.md.
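
    The resulting AGENTS.md index can stay short. A sketch of what it might look like, with paths drawn from the prompts above and summaries that are hypothetical:

    ```markdown
    ## Current-implementation docs

    Read the relevant docs below before planning or executing a task:

    - docs/current-implementation/user-flows/overview.md: all major user flows at a glance
    - docs/current-implementation/data-flows/overview.md: how core entities are created, updated, and deleted
    - docs/current-implementation/data/schema.md: high-level schema description

    After shipping a major unit of work, update the affected docs and their mermaid diagrams.
    ```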

  3. I collaborate with Claude to write business, product, and architectural vision docs that are referenced in AGENTS.md. Business and product docs may be copies from other, non-repo locations. Assertions about blessed frameworks, design libraries, infrastructure components, etc. live here. New planning and development will then skew toward that target.

  4. I ask Claude to write instructions for itself in AGENTS.md on building work plans: markdown files with todo items used to plan, test, and track major units of work. I don’t rely on Claude’s ephemeral internal todo system, which I can’t review in detail; it will use that capability automatically anyway, and the two stack together well. Simple example:

    As we work in this repo, we want to design and execute development through documented work plans at /specs/work-plans/. New work plans will be created at /specs/work-plans/todo and completed work plans should be moved to /specs/work-plans/completed when done. Work plans should include context on why we are building a feature, any dependencies and an explicit checklist of phases so we can track progress as development is progressing. When ideating new features with the user, ask questions and propose spec structure until agreement is reached and the user requests writing the work plan. Put this work plan lifecycle concept into AGENTS.md
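
    A work plan created under these instructions might look like this skeleton; the feature and phases are hypothetical:

    ```markdown
    # Work plan: saved searches

    ## Context
    Users re-enter the same filter combinations every session; saving searches removes that friction.

    ## Dependencies
    - Existing search endpoint
    - User settings storage

    ## Phases
    - [ ] Design schema change and migration
    - [ ] Implement save/load endpoints
    - [ ] Build UI for saving and selecting searches
    - [ ] Test with Playwright against the spec
    - [ ] Update /docs/current-implementation and AGENTS.md
    ```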

  5. I collaborate with Claude on every new work plan and explicitly look for refactor optimizations every time. Claude seems very willing to repeat existing patterns without seeing the repetition as an opportunity to consolidate. These refactor opportunities often become their own work plans.

  6. I use rewind liberally. When the LLM produces a strange or incorrect outcome, I rewind to either refine the prompt or sometimes just retry with the same prompt. Continuing forward with errors and asking the LLM to fix them is usually not the right move unless it got 90% of the way there. Having erroneous actions in the conversation history tends to skew future work in the wrong direction.

  7. I decouple ideation from execution. I’ll develop multiple work plans in parallel, sometimes related and sometimes not, then revisit them before implementation to ensure they reflect any changes from completed work.

  8. I often ask Claude to look for refactor opportunities. These are usually explicit prompts: look for abandoned code and possible duplicate code, and review its own self-documentation so it reasons about the whole architecture at that zoom level.

  9. I use Playwright MCP to have Claude test its own work, iterating to remove bugs and ensure visual and functional accuracy to spec.

  10. I point Claude at competitor apps to extract their likely schema, user flows, and application logic into documentation I can query later. This is more useful than asking about competitors directly, because it forces Claude to build a coherent model first.