CLAUDE.md: The Definitive Guide to AI Rules Files

Sangam, Head of AI & Digital Solutions
12 min read

I rewrote my CLAUDE.md eleven times in six months. Each rewrite came after the agent did something stupid the previous version should have prevented. Here is what survived.

The first version was seventeen lines of vague enthusiasm. "Write clean code. Use TypeScript. Prefer server components." Within a day, Claude Code had invented three utility files I did not ask for, rewritten an import I had marked as load-bearing, and decided my Tailwind tokens were "legacy." None of that was in the rules. But nothing stopped it either.

Version eleven is 78 lines and is the only reason I can leave Claude Code running unattended for ninety minutes and come back to working software instead of a landfill. This post is about what took me eleven rewrites to learn.

A CLAUDE.md is not documentation. It is a contract. And like every contract that holds up in the real world, it was not drafted at the start. It was negotiated after each breach.

Eleven rewrites in six months. Roughly one new rule every two weeks. Each rule has the bruise of a specific mistake behind it. I did not plan this cadence. It planned me.

What a Rules File Actually Is (and What It Is Not)

Let me burn down the most common misconception. CLAUDE.md is not a README. It is not a team style guide. It is not where you explain what your product does to a new hire. It is a set of standing instructions loaded into the context window every time an agent starts a session in your repo. Every token in there is a token you are paying for, on every turn, forever.

Think of it less like a wiki page and more like a mise en place list taped above a restaurant line cook's station. Nobody reads a mise en place for fun. It exists to be glanced at in the middle of chaos, with oil smoking and the ticket printer screaming, to answer one question: what am I supposed to do right now. Anything that does not answer that question is grease splatter.

According to the official Claude Code documentation, the CLAUDE.md file is loaded automatically at session start and serves as "persistent project context." That phrasing is diplomatic. What it really means: this file is the agent's only stable memory across turns, and anything not in it will be forgotten or reinvented the next time you say hello.

My working definition: a CLAUDE.md is a pre-negotiated contract with your future self and your agent, written in the imperative mood, optimized for the agent's reading comprehension rather than yours. It tells the agent what to do, what never to do, and how to verify it did the right thing. Everything else belongs somewhere else.

If you are brand new to Claude Code, start with our complete beginner's tutorial and come back here when the basics are in place.

The Seven Sections Every CLAUDE.md Needs

After eleven rewrites, I converged on seven sections. I do not think this is the only arrangement that works, but every CLAUDE.md I have seen that actually keeps an agent on the rails contains these seven things in some form. When one goes missing, a specific class of failure shows up within days.

Here they are, in the order I write them:

  1. Project context. One paragraph. What is this project, who is it for, what is the single measurable goal. Not marketing copy. Just enough for the agent to know whether a given suggestion is even in the right universe.
  2. Stack. A flat list of the actual frameworks, languages, and services in use, with versions where versions matter. This exists to stop the agent from applying patterns from the wrong generation of a framework. The model knows ten years of React. You have to tell it which year you are in.
  3. Build and dev commands. The exact shell commands to install, run, build, lint, and test. This is the most important section in the entire file, and most people treat it as an afterthought.
  4. Content or domain conventions. What kind of file lives where. What a valid record looks like. Any homegrown pattern the agent could not possibly infer from public training data.
  5. Code style. Not a lecture on clean code. A short list of the specific decisions that cause friction on this project: strict mode on or off, server or client defaults, where components go, what the design tokens are called.
  6. Non-negotiable rules. The things the agent must never do, written in the voice of someone who has already been burned. Short, imperative, marked loudly. Every rule should trace back to a specific past incident.
  7. Key files. A pointer list of the files that matter most, with a one-line explanation of why each one matters. Without it, the agent will happily rewrite something load-bearing because it looked "unused."

Notice what is missing. No "architecture philosophy." No "long-term vision." No praise for test-driven development in the abstract. Those things belong in a planning doc, a PRD, or a conversation. They do not belong in a file that gets injected into context on every single turn.

The pattern maps closely to what Stack Overflow's engineering team described in their piece on coding guidelines for AI agents and people too: the most effective agent instructions are the ones that also work on humans. Short. Imperative. Testable. If a rule cannot be verified by running a command, it probably should not be there at all.

The Mistakes I Made in Versions 1 Through 10

If version eleven is a contract, versions one through ten were me learning what a contract is. Here is the condensed obituary.

Version 1 was aspirational. I wrote what I wished my codebase looked like. I put "80% test coverage" in the rules file on a project with zero tests. Claude took that seriously, started adding tests to everything, and rewrote half my utility functions to make them "more testable." It was correct to try. I had lied in the contract.

Version 2 was a novel. Two hundred lines of lovingly detailed context about the project's history and design philosophy. This felt thoughtful. It was expensive and useless. The rules I actually needed were buried under three paragraphs of exposition.

Version 3 had no build commands. The agent had no way to verify its own work. It would "finish" a task and hand back code that did not compile. A rules file without build commands is sheet music for a band that has never agreed on what key they are playing in.

Version 4 had too many negatives. I listed twenty things the agent must never do. Negatives compete for salience. Five loud negatives get followed. Twenty become ambient noise.

Version 5 mixed user and project instructions. I put my personal preferences in the project file, and the agent kept applying them across every project I opened. More on this next.

Version 6 had a stale command. I changed the build script but forgot to update the rules file. For two weeks, Claude kept running a script that no longer existed, failing, and trying to "fix" the failure by inventing new scripts. I was debugging a ghost.
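The stale-command failure is cheap to guard against mechanically. Here is a sketch of a check that every `npm run` command mentioned in CLAUDE.md still exists as a script in package.json. The file paths are demo fixtures so the sketch runs anywhere; point them at your real files:

```shell
#!/bin/sh
# Guard against version-6 rot: every `npm run <script>` named in CLAUDE.md
# must still exist in package.json. Paths below are demo fixtures.
set -eu

cat > /tmp/CLAUDE.md <<'EOF'
## Build & Dev
- `npm run dev` - local development
- `npm run build` - production build
EOF
cat > /tmp/package.json <<'EOF'
{ "scripts": { "dev": "next dev", "build": "next build" } }
EOF

# Extract every script name referenced in the rules file, then confirm each
# one appears as a key in package.json.
for script in $(grep -o 'npm run [a-z:-]*' /tmp/CLAUDE.md | awk '{print $3}' | sort -u); do
  grep -q "\"$script\"" /tmp/package.json \
    || { echo "STALE: '$script' is in CLAUDE.md but not package.json"; exit 1; }
done
echo "All CLAUDE.md commands exist"
```

Run it in CI or a pre-commit hook and the ghost-script failure mode becomes a loud red build instead of two weeks of quiet confusion.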

Version 7 was too abstract. "Follow SOLID principles." The model knows what SOLID is. It also has its own interpretation, and that interpretation was not mine. Abstract rules invite the agent to bring its own priors.

Versions 8, 9, and 10 were small corrections. A missing "read this file before writing a post." A missing "never mutate the velite config without asking." Each one, left unfixed, cost me an hour of cleanup within a week.

Climbing instructors say you only learn what holds when you fall. CLAUDE.md works the same way. You find out what a rule is worth the moment you did not have it.

Project-Level vs User-Level vs Team-Level Rules

Three layers, not interchangeable.

Project-level rules live in CLAUDE.md at the root of your repository, checked into git. Rules that belong to this project and would not make sense in another. Build commands. File layout. Domain conventions. The "never use em dashes" rule for a content project lives here because it is a property of the project, not of me.

User-level rules live in ~/.claude/ on your machine and apply to every project you open. Personal preferences as an operator: model selection, commit style, whether you want extended thinking on by default. Mine includes "always use parallel tool calls when operations are independent" and "never run git reset --hard without asking first."
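For contrast, here is what a user-level file looks like. The `~/.claude/CLAUDE.md` location follows Claude Code's convention for user memory; the contents below are a paraphrased, illustrative sketch of mine, not a recommendation:

```markdown
# ~/.claude/CLAUDE.md (applies to every project on this machine)

- Always use parallel tool calls when operations are independent
- Never run `git reset --hard` or `git push --force` without asking first
- Commit messages: imperative mood, under 72 characters, no trailing period
```

Notice that nothing in it mentions a framework, a file path, or a build command. If a line would be wrong in a different repo, it belongs in the project file instead.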

Team-level rules are the awkward middle. There is no first-class place for them yet, so most teams put them in the project CLAUDE.md and mark them clearly. Shared norms every contributor has agreed to but that are not technically properties of the code: a commit message convention, an agreement to run the security linter before every push.

The failure mode I see most often is people dumping user-level preferences into the project file, which forces the whole team to inherit one person's operating style. This is like a driving instructor insisting you hold the wheel exactly the way they do, even when you are in a different car. Keep them in separate pockets.

I wrote a separate piece on how we block AI agents from merging PRs without human review, and a big part of that workflow is knowing which rules live where.

Editing Rules in Response to Failure (the Feedback Loop)

Every new rule in your CLAUDE.md should be traceable to a specific incident. If you cannot point at the moment an agent did the wrong thing and say "this rule exists because of that," the rule is aspirational, and aspirational rules do not hold.

The feedback loop I run now is simple. When the agent does something I did not want, I ask one question: would a reasonable rule have prevented this, and is that rule specific enough that I could verify it by running a command. If the answer to both halves is yes, I add the rule. Otherwise I correct the agent in the current session and move on.

This sounds obvious. It is not what most people do. Most people, myself included for my first four rewrites, add a rule every time anything goes wrong, regardless of whether the rule is enforceable. That is how you end up with version 4: twenty negatives, none enforced, all of them diluting the ones that matter.

A surgical checklist is the metaphor I keep coming back to. Atul Gawande wrote a whole book about why they work. The short version: every item on the list exists because someone, somewhere, forgot that exact thing and a patient died. Every line is blood-earned. No aspirational items. No "make sure the patient is comfortable" fluff. Just the specific, repeatable checks that have already failed.

Your CLAUDE.md should earn its items the same way. If a rule has not saved you from a specific failure mode you have actually experienced, it is taking up context budget for nothing.

I am still figuring out the right cadence for pruning. I do not yet know if I should delete old rules or keep them forever as institutional memory. Right now I lean toward keeping.

Anti-Patterns That Look Smart But Break Things

These are the patterns I see over and over in public CLAUDE.md templates, all of which look reasonable and all of which have hurt me or people I know.

Anti-pattern 1: The giant "architecture overview." Paragraphs explaining system design, module boundaries, and data flow. This looks like good engineering hygiene. The agent does not use this context to make better decisions. It uses it to generate more plausible-sounding wrong answers. An architecture diagram in a rules file is like a full menu taped inside a cash register: correct information, wrong location.

Anti-pattern 2: Rules phrased as principles instead of actions. "Favor composition over inheritance." "Write clean, maintainable code." "Handle errors gracefully." These read as wisdom and act as noise. The model will cheerfully violate your specific version of a principle while claiming to honor the general one. Replace every principle with the concrete rule you actually want enforced.

Anti-pattern 3: Long quoted examples. I have seen CLAUDE.md files with fifty-line code examples showing "the right way" to write a component. The agent treats the example as canonical and produces minor variations everywhere, even when the situation calls for a completely different shape. A short snippet is fine. A fifty-line example is a template the agent will copy-paste through your entire codebase.

Anti-pattern 4: Rules that contradict the actual code. If your CLAUDE.md says "use server components by default" but half your components are client components for reasons you have forgotten, the agent will either "fix" them (bad) or conclude the rule is optional (worse). If the rule and the code disagree, one of them is wrong. Decide which before the agent decides for you.

Anti-pattern 5: The "we value" section. Lists of team values: we value simplicity, correctness, readability. Corporate language applied to a system that does not read corporate language. The agent does not weigh values. It follows instructions. Replace "we value X" with "do X."

Anti-pattern 6: Rules without commands. Every non-trivial rule should have a verification path. Rules without commands are hopes. Rules with commands are tests. The METR study on agent productivity found that engineers who felt faster with AI were often measurably slower, and a big chunk of that gap came from skipped verification. The rules file is where you make verification unskippable.

The Full vibecoding.ae CLAUDE.md, Annotated

Here is the actual rules file powering this blog, version eleven. I will paste it in chunks and annotate why each section exists. The why is the part that transfers.

First, project context and stack.

```markdown
# vibecoding.ae

## Project
SEO-first media/community blog about vibe coding - building software with AI coding agents.
Target: #1 ranking for "vibe coding" related keywords globally.

## Stack
- Next.js 16 (App Router, Turbopack, React 19, React Compiler)
- Velite for MDX content (type-safe, Zod schemas)
- Tailwind CSS v4 (CSS-based @theme tokens, no config file)
- Resend + React Email for newsletters
- Deployed on Vercel
- Domain: vibecoding.ae
```

Two sentences of project context, six lines of stack. No history, no philosophy, no vision statement. The agent does not need to know why I started the blog. It needs to know what framework version it is targeting so it does not suggest an App Router pattern from three versions ago. The Tailwind line exists because v4 changed configuration radically and Claude was defaulting to v3 config for the first month.

Next, the build commands. The single most important section in the file.

```markdown
## Build & Dev
- `npm run dev` - local development (Turbopack)
- `npm run build` - production build (must pass before committing)
- `npm run lint` - ESLint check
```

Three lines. Three commands. That parenthetical, must pass before committing, is doing enormous work. It turns the build command from a suggestion into a gate. When the agent finishes a task, it now knows it has not actually finished until npm run build passes. This one phrase eliminated an entire class of "it looks done but does not compile" failures I used to see every week.
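The gate can also be made mechanical rather than rhetorical. Here is a sketch of a git pre-commit hook that enforces "must pass before committing." The `BUILD_CMD` indirection is only so the sketch runs anywhere; in the real hook you would call `npm run build` directly:

```shell
#!/bin/sh
# Sketch of .git/hooks/pre-commit: make "must pass before committing" literal.
# BUILD_CMD defaults to `true` so this demo runs without an npm project;
# replace it with: npm run build
BUILD_CMD="${BUILD_CMD:-true}"

if $BUILD_CMD; then
  echo "build passed - commit allowed"
else
  echo "build failed - commit blocked" >&2
  exit 1
fi
```

With the hook in place, "it looks done but does not compile" cannot reach the commit history at all, whether the author of the change was you or the agent.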

The next section is domain context: what a valid blog post looks like, what fields are required, the content folder structure. I will skip pasting it here because it is long and project-specific. The principle: if there is any schema, any required field, any naming convention the agent could not guess from public data, it goes in this section.

Then the non-negotiable rules. The section that has changed the most across versions and is now the shortest.

```markdown
## Content Generation Rules (NON-NEGOTIABLE)
- **ALWAYS read `EditorialDNA.md` before writing any blog post.** This is mandatory, not optional. Read it fresh every time, even if you read it earlier in the session.
- **NEVER use em dashes (—) in any content.** Use commas, periods, colons, semicolons, or hyphens (-) instead.
- Write in first-person, conversational tone with intellectual ambition
- Open with personal experience or provocative assertion, never abstractions
- Show uncertainty and process. Don't just present polished conclusions
- Metaphors should reach outside technology for source material
```

Every one of these rules has a specific bruise behind it. The "read EditorialDNA.md fresh every time" line exists because Claude would read the file once at the start of a session, generate a post, then generate a second post without re-reading it, and the second post would drift toward generic AI prose. The "never use em dashes" line exists because the voice of the publication depends on it and the model's training pulls it toward em dashes constantly. Writing the rule once was not enough. I had to write it in all caps, in bold, in a section marked NON-NEGOTIABLE. And I still run grep after every post.
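That post-publish grep is a one-liner worth showing, because it is the enforceable half of the rule. The content path is an assumption for illustration; the demo writes its own fixture so it runs anywhere:

```shell
#!/bin/sh
# The check I run after every post: no em dash may survive in the content tree.
# /tmp/content_demo stands in for the real content directory.
set -eu
mkdir -p /tmp/content_demo
printf 'A clean sentence, with commas and colons: no long dashes here.\n' \
  > /tmp/content_demo/post.mdx

# grep exits nonzero when nothing matches, so negation means "clean".
! grep -rn -- '—' /tmp/content_demo && echo "clean: no em dashes"
```

The rule in the file tells the agent what to do; the grep tells me whether it actually did it. A rule with a command like this attached is a test. Without one, it is a hope.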

Finally, the key files section.

```markdown
## Key Files
- `velite.config.ts` - Content schema (modifying affects all posts)
- `src/lib/seo.ts` - JSON-LD schema generation
- `src/app/sitemap.ts` - Sitemap generation
- `src/app/blog/[slug]/page.tsx` - Blog post page (SEO-critical)
- `src/components/mdx/mdx-components.tsx` - MDX styling overrides
- `EditorialDNA.md` - Editorial voice standards (READ before generating posts)
```

This section is a map with warnings. The parentheticals do the real work. "Modifying affects all posts" is there because Claude once cheerfully updated the velite config to "improve" a schema, and I did not catch it until my build broke across four posts. "SEO-critical" exists because the blog post page touches JSON-LD, OG images, and canonical URLs, and I would rather the agent ask first.

That is the whole file. Seventy-eight lines. Every line earned. For how this fits into our broader review workflow, see vibe coding security rules, which covers the non-CLAUDE.md half of the contract: the git hooks, the CI checks, and the human review policy that backs everything up. For a deeper tour of Claude Code itself, my 50 Claude Code tips after six months of daily use has the moves I reach for every day. And if you want to see a similar philosophy applied to general-purpose engineering, the GitHub team's write-up on Copilot repository custom instructions is worth reading alongside this post. The conventions differ; the principle is identical: short, imperative, verifiable, earned.

I am still editing this file. Version twelve is probably coming. Last week the agent did something I did not expect and I have been turning over whether it needs a new rule or whether I just need to pay closer attention. I do not know yet.

What does your CLAUDE.md look like? I am genuinely curious how other people have solved the problem of what to include and what to leave out. If you have rules that have saved you from specific, named disasters, I would love to read them. The most valuable lines in my file came from other people telling me about failures I had not had yet. The conversation continues, and I would rather hear about your bruises before I collect them myself.
