Claude Code Routines: The Complete Guide to Automated AI Development

Sangam, Head of AI & Digital Solutions
15 min read

Every Monday morning at 9:15, I did the same thing. Open the terminal. Pull the latest changes. Scan through new pull requests. Run the test suite. Check for dependency updates. Write a summary. Post it to Slack.

Fifteen minutes of work. Not hard. Not creative. Just maintenance.

I did this for months before it occurred to me that I was the bottleneck. Not because the task was complex, but because it required me to be awake, caffeinated, and at my keyboard. The moment I discovered Claude Code routines, I realized I had been hand-watering a garden that could have been irrigated all along.

Anthropic launched routines on April 14, 2026, and they represent a fundamental shift in how we think about AI-assisted development. A routine is a saved Claude Code configuration: a prompt, one or more repositories, and a set of MCP connectors, packaged once and run on Anthropic's cloud infrastructure. You set it up, define a trigger, and Claude does the work whether you are watching or not.

This is not another GitHub Actions workflow with a language model bolted on. This is Claude Code, with full agentic capability, running autonomously on a schedule you define. If you have been following our Claude Code tutorial for beginners, think of routines as the next evolution. You have learned to cook with Claude at the stove beside you. Now it is time for mise en place: prep that runs itself.

In this guide, I will walk through everything. The three trigger types. Prompt engineering for autonomous runs. Cost modeling so you know exactly what you are spending. The connector ecosystem. Error handling. And the patterns I have found that actually work after two weeks of running routines in production.

The Three Trigger Types: Scheduled, API, and GitHub

Every routine needs a trigger. Something that says "run now." Anthropic gives you three options, and understanding when to use each one is the difference between a useful automation and a noisy mess.

Scheduled Triggers

This is where most people start, and for good reason. Scheduled triggers run your routine on a recurring cadence. You get presets for hourly, daily, weekdays, and weekly runs, plus full custom cron syntax for anything more specific.

The minimum interval is one hour. You cannot run a routine every five minutes, and honestly, you should not want to. Each run consumes quota from your subscription tier, and each run spins up a full Claude Code session with agentic capabilities. This is not a lightweight ping. It is a thoughtful agent doing real work.

Here is what a daily dependency audit routine might look like when you set it up through the CLI:

```bash
claude /schedule
```

The CLI walks you through the configuration interactively. You specify the repository, write your prompt, choose your cadence, and select which connectors to attach. You can also create routines from the web UI at claude.com or from the Desktop app using "New remote task."

A few things I learned the hard way about scheduled triggers:

Be specific about time zones. If your routine posts a daily standup summary to Slack, "daily at 9 AM" needs to mean 9 AM in your time zone, not UTC. The cron syntax lets you specify this, but the presets default to UTC.

Stagger your routines. If you have five routines all set to run at midnight, they will all compete for your daily quota simultaneously. Spread them out.

Start with weekly, then tighten. I set my first routine to run hourly. It was a dependency scanner. Within a day, I had 24 sessions worth of nearly identical reports. The dependencies had not changed. I was burning quota on confirmations of "everything is fine." Weekly was the right cadence for that task.
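To make those tips concrete, here is a sketch of a staggered, timezone-aware schedule for three routines. Whether routines honor a `CRON_TZ` line is an assumption on my part; check the scheduler UI for the actual timezone control. The times and timezone are placeholders.

```
# Hypothetical crontab for three routines, staggered and pinned to a local
# timezone instead of UTC. Fields: minute hour day-of-month month day-of-week.
CRON_TZ=Asia/Dubai
0  9 * * 1-5   # standup summary: 9:00 AM local, weekdays only
15 2 * * 1     # dependency scan: weekly on Monday, off-peak
45 2 * * 1     # changelog draft: same morning, 30 minutes later
```

Staggering by even a few minutes keeps the routines from racing each other for the same daily quota window.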

API Triggers

Every routine gets its own HTTP POST endpoint. You hit it with a bearer token, and Claude spins up a session. The response gives you a session ID and URL so you can monitor progress.

```bash
curl -X POST https://api.claude.ai/v1/routines/{routine_id}/trigger \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "anthropic-beta: experimental-cc-routine-2026-04-01" \
  -H "Content-Type: application/json"
```

That beta header is required. This is still experimental, and Anthropic wants to make sure you know it.

One critical detail: API tokens are shown once when you create them. You cannot retrieve them later. If you lose the token, you regenerate it. I keep mine in a secrets manager, and I recommend you do the same. For more on structuring your Claude Code environment, our guide on CLAUDE.md rules files covers the configuration patterns that apply here too.
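Here is a minimal sketch of the save-it-immediately habit. I am using a plain file for illustration; in practice a dedicated secrets manager (1Password CLI, Vault, AWS Secrets Manager) is the better home for the token. The token value and path are placeholders.

```shell
# Save the one-time routine token the moment it is shown.
umask 077                                  # new files readable by you only
mkdir -p "$HOME/.config/claude"
printf '%s' "sk-example-token" > "$HOME/.config/claude/routine-token"

# Read it back at trigger time, so the token never lands in shell history:
TOKEN="$(cat "$HOME/.config/claude/routine-token")"
echo "token loaded: ${#TOKEN} characters"
```

The point is the workflow, not the storage mechanism: the token goes somewhere durable the moment it is displayed, because there is no second look.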

API triggers are the most powerful option because they let you chain routines into larger workflows. Your CI pipeline finishes a deploy? Hit the routine endpoint to trigger a post-deploy smoke test. Your monitoring system detects an anomaly? Trigger a routine that investigates the logs and posts findings to Slack.

The possibilities multiply when you think of routines not as standalone automations but as composable units in a larger system.

GitHub Triggers

This is where things get interesting, and also where the current limitations matter most. GitHub triggers currently support two event types: pull_request and release. That is it. Not pushes, not issues, not workflow runs. Just PRs and releases.

But within those two events, the filtering is remarkably granular. For pull requests, you can filter by:

  • Author (only trigger for specific contributors)
  • Title and body content (regex matching)
  • Base and head branch
  • Labels
  • Draft status
  • Merged status
  • Fork origin

This filtering matters because each GitHub event creates a separate session. Two PR updates mean two routine runs. Without filtering, a busy repository could burn through your daily quota before lunch.

Here is a pattern I use: a routine that triggers on pull requests labeled "needs-review" from external contributors (fork origin = true). The routine reads the PR diff, checks for common security issues, validates that tests pass, and posts a structured review comment. It only runs when I actually need it, not on every push to every branch.

The security implications are worth noting. Routines run as you. Commits carry your GitHub user. If your routine pushes code, it pushes to claude/-prefixed branches by default. This is a safety rail, not a limitation. You do not want an automated agent pushing directly to main. You want it proposing changes on a clearly labeled branch that a human reviews.

Prompt Engineering for Autonomous Routines

Writing prompts for routines is fundamentally different from writing prompts for interactive Claude Code sessions. In an interactive session, you are there. You can clarify. You can redirect. You can say "no, not that file, the other one."

In a routine, Claude is alone.

This changes everything about how you write prompts. The prompt needs to be complete, unambiguous, and defensive. It needs to handle edge cases that you would normally handle yourself by glancing at the output and saying "oh wait, skip that."

Here are the patterns I have developed after writing dozens of routine prompts.

Start with context, not commands. Tell Claude what it is looking at before telling it what to do.

You are reviewing the main branch of the vibecoding-ae repository.
This is a Next.js 16 application using MDX for content.
The content directory is /content/blog/.
Your job is to check for SEO issues in any posts modified in the last 7 days.

Define success criteria explicitly. What does "done" look like?

A successful run produces a Slack message in #content-review with:
1. Number of posts checked
2. Any posts missing required frontmatter fields
3. Any posts with meta descriptions over 155 characters
4. Any posts with fewer than 3 internal links
If all posts pass, say "All clear" with the count. Do not skip the message.

Handle the empty case. What should the routine do when there is nothing to do?

If no posts were modified in the last 7 days, post a short confirmation
to Slack: "No content changes in the last 7 days. Nothing to review."
Do NOT silently exit.

Set boundaries. What should the routine not do?

Do NOT modify any files. Do NOT create pull requests.
This is a read-only audit. Report findings only.

I cannot stress this enough: the prompt is the entire interface for an autonomous routine. There is no second chance to clarify. Spend more time on the prompt than you think you need. I typically spend 30 minutes writing and refining a routine prompt before I ever set a trigger. That is not perfectionism. That is engineering.

For more prompt patterns and configuration tips, our 50 Claude Code tips collection has a section on writing effective instructions that applies directly to routine prompts.

Cost Modeling: The Napkin Math You Need

Let me be honest. I did not think about costs when I first started using routines. I set up six routines on a Pro plan with a limit of five runs per day. Do the math on why that was a problem.

Here are the tier limits:

| Tier | Runs per day |
| --- | --- |
| Pro | 5 |
| Max | 15 |
| Team | 25 |
| Enterprise | 25 |

Each run counts against your subscription usage. This is not free compute bolted onto your existing plan. Each routine session is a full Claude Code session consuming real tokens.

Now let me do some napkin math that I wish I had done before my first week.

At Pro tier, 5 runs per day times 30 days equals 150 automated sessions per month. If each routine replaces 15 minutes of manual work, that is 2,250 minutes reclaimed monthly. Divide by 60 and you get 37.5 hours. Thirty-seven hours. That is almost a full work week recovered every month from five daily automations.
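That arithmetic is worth sanity-checking in the shell before you commit a routine slot to a task:

```shell
# Napkin math: runs per day x days x minutes saved per run
runs_per_day=5
days=30
minutes_saved=15
total_minutes=$(( runs_per_day * days * minutes_saved ))
echo "minutes reclaimed per month: $total_minutes"          # 2250
echo "hours reclaimed per month:   $(( total_minutes / 60 ))"  # 37 (37.5 exactly)
```

Swap in your own `minutes_saved` estimate per routine; as the next paragraph shows, that number varies far more than the quota does.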

But here is the catch. Not every routine replaces 15 minutes of work. My dependency scanner routine? It replaced maybe 3 minutes of manual checking. My PR review routine? That one legitimately saved 20 minutes per run because I would have had to read the diff, check the tests, and write review comments manually.

The real cost model is not "runs per day" but "value per run." A routine that saves you 3 minutes five times a day is worth 15 minutes. A routine that saves you 20 minutes once a day is worth 20 minutes. Same quota, different ROI.

At Max tier with 15 runs per day, you have enough headroom to run a mix of high-value and low-value routines. At Pro tier, you need to be selective. Every routine slot is precious.

My recommendation: start with your highest-value automation. The task that takes the most time, requires the least human judgment, and runs on a predictable schedule. Prove the value there before expanding.

Connectors: Slack, Linear, Google Drive, and the Rest

Routines are not just about reading code and writing commits. The connector ecosystem is what makes them genuinely useful in a team context.

Connectors like Slack, Linear, and Google Drive are included by default. You attach them when creating the routine, and Claude can read from and write to those services during its run.

This is where routines stop being "automated code review" and start being "automated team operations." Some patterns I have seen work well:

The morning briefing. A daily routine that scans open PRs, checks CI status, reviews the Linear board for overdue items, and posts a formatted summary to a Slack channel. Your team arrives to a digest instead of spending the first 30 minutes of the day piecing together context from five different tools.

The release notes drafter. A routine triggered by a GitHub release event that reads all merged PRs since the last release, categorizes them (features, fixes, improvements), and drafts release notes in a Google Doc. A human reviews and publishes, but the tedious aggregation work is done.

The stale PR nudger. A weekly routine that identifies PRs open for more than 5 days, checks if they have review comments that were never addressed, and sends a gentle Slack DM to the author. Not a bot message. A thoughtful note that says "Hey, PR #47 has unresolved comments from 6 days ago. Want me to help resolve them?"

The connector model is powerful because it gives Claude the same cross-tool context that a human team member would have. It is not just reading code. It is reading code and understanding the project management context and communicating findings through the channels your team already uses.

I am still figuring out the best practices for connector permissions. Right now, I grant routines access to specific Slack channels rather than my entire workspace. I limit Linear access to specific projects. The principle is the same as any automation: least privilege. Give the routine access to what it needs, nothing more.
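If it helps to picture the least-privilege setup, here is how I think about the scoping, expressed as a hypothetical config. The actual controls live in each connector's settings screen, and every field name here is an assumption:

```yaml
# Hypothetical connector scoping; actual controls are per-connector UI settings
connectors:
  slack:
    channels: ["#content-ops", "#routine-errors"]   # not the whole workspace
  linear:
    projects: ["Website"]                           # not every team's board
```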

GitHub Actions vs. Claude Code Routines: When to Use Which

This is the question everyone asks, and the answer is not "routines replace GitHub Actions." They serve different purposes, and the best setups use both.

GitHub Actions excels at deterministic, repeatable workflows. Run tests. Build artifacts. Deploy to staging. Lint the code. These are well-defined pipelines where the steps do not change and the logic is straightforward. Actions are also essentially free for public repositories and very cheap for private ones.

Claude Code Routines excel at tasks requiring judgment, synthesis, and natural language output. Review this PR and tell me if the approach is sound. Scan the codebase for patterns that violate our architecture guidelines. Read the last week of commits and write a changelog that a non-technical stakeholder can understand.

The dividing line is cognition. If the task can be expressed as a shell script or a YAML pipeline, use Actions. If the task requires an agent that reads, reasons, and writes, use routines.

The best pattern I have found is using them together. A GitHub Action runs the deterministic checks (lint, test, build). If all checks pass, the Action hits the routine's API endpoint to trigger a Claude Code review. The routine reads the PR with full context, knowing that the mechanical checks already passed, and focuses on architectural and logical review.

```yaml
# In your GitHub Actions workflow
- name: Trigger Claude review
  if: success()
  env:
    # Store the routine ID alongside the token in repository secrets
    ROUTINE_ID: ${{ secrets.CLAUDE_ROUTINE_ID }}
  run: |
    curl -X POST "https://api.claude.ai/v1/routines/$ROUTINE_ID/trigger" \
      -H "Authorization: Bearer ${{ secrets.CLAUDE_ROUTINE_TOKEN }}" \
      -H "anthropic-beta: experimental-cc-routine-2026-04-01"
```

This gives you the best of both worlds. Fast, cheap, deterministic checks from Actions. Thoughtful, contextual, intelligent review from routines.

Error Handling and Failure Modes

Nobody talks about what happens when routines fail. But they do fail, and understanding the failure modes will save you hours of confusion.

Silent failures. If your routine prompt has a logical error, like referencing a file path that does not exist, Claude will try its best and may produce partial results or no results at all. Without explicit error handling in your prompt, you might not know anything went wrong until you notice the Slack message never arrived.

Fix this by adding explicit error reporting to every routine prompt:

If you encounter any errors during this run, including missing files,
failed commands, or unexpected states, post a message to #routine-errors
in Slack with:
1. The routine name
2. The error encountered
3. What step you were on when it failed
4. Your best guess at the cause

Quota exhaustion. If you have used all your daily runs, the routine simply does not execute. There is no queuing. There is no "run it tomorrow instead." The trigger fires, nothing happens, and the next scheduled trigger tries again at the next interval.

This is why staggering and prioritizing matters. If your most important routine runs at 6 AM and a less important one already consumed the last run at 5:30 AM, you are out of luck until tomorrow.

Rate limiting on connectors. If your routine posts to Slack, it is subject to Slack's rate limits. A routine that processes 50 PRs and tries to post 50 individual messages will hit throttling. Batch your outputs. Have the routine collect all findings and post a single, well-formatted message.

Branch conflicts. Routines push to claude/-prefixed branches. If two routines target the same repository and try to create branches with similar names, you can get conflicts. Use descriptive branch naming in your prompts: "Create a branch named claude/weekly-deps-{date} for your changes."

I do not know if Anthropic will add built-in retry logic or failure notifications. Right now, you need to build that yourself, either through your prompt instructions or through external monitoring. I have a simple cron job that checks whether my expected Slack messages arrived each morning. If they did not, something went wrong. It is not elegant, but it works.

Real-World Routine Recipes

Let me share three routines I run in production right now. These are not hypothetical. They run daily and have been refined through iteration.

Recipe 1: The Content Quality Auditor

Trigger: Daily at 7 AM UTC
Repository: vibecoding-ae
Connectors: Slack

You are auditing the content quality of the vibecoding.ae blog.

Check all published MDX files in /content/blog/ for:
1. Meta descriptions over 155 characters
2. Titles over 60 characters
3. Missing or empty keywords array
4. Fewer than 3 internal links (links to /blog/*)
5. Any em dashes, which violate our style guide
6. Posts with published: true but a future date

For each issue found, note the file name and the specific problem.

Post results to #content-ops in Slack. Format as a bulleted list
grouped by severity (blocking vs. advisory).

If no issues found, post "Content audit passed. {count} posts checked."

This routine has caught three issues in two weeks that I would not have noticed until a reader complained. One post had a 210-character meta description. Another had zero internal links. The third had an em dash that slipped past my editing.

Recipe 2: The PR Reviewer for External Contributors

Trigger: GitHub pull_request (filtered: fork origin = true, label = "community")
Repository: vibecoding-ae
Connectors: None (posts review as GitHub comment)

You are reviewing a pull request from an external contributor.

Read the full diff. Check for:
1. Security issues (exposed secrets, XSS vectors, injection risks)
2. Style guide violations (check CLAUDE.md for project conventions)
3. Content quality (if MDX files: word count, frontmatter completeness)
4. Breaking changes to existing functionality

Post a review comment with your findings structured as:
## Security
## Style
## Content Quality
## Breaking Changes

End with a summary recommendation: APPROVE, REQUEST CHANGES, or NEEDS DISCUSSION.

Be respectful. This is a community contributor volunteering their time.
Thank them for the contribution regardless of your findings.

Recipe 3: The Weekly Changelog Drafter

Trigger: Weekly on Fridays at 4 PM UTC
Repository: vibecoding-ae
Connectors: Slack, Google Drive

You are drafting the weekly changelog for vibecoding.ae.

1. List all commits to main since last Friday
2. Categorize them: New Content, Site Improvements, Bug Fixes, Other
3. For new content, include the post title and a one-sentence summary
4. For site improvements, describe the user-facing change
5. Ignore merge commits and automated commits

Create a Google Doc titled "Changelog - Week of {date}" with the
categorized list. Format it for a non-technical audience.

Post a link to the doc in #team-updates on Slack with a brief summary:
"{count} changes this week: {x} new posts, {y} improvements, {z} fixes."

This one saves me about 45 minutes every Friday. The old process involved manually reading through the git log, cross-referencing with our project board, writing prose summaries, and formatting everything into a document. Now I review what Claude drafted, make minor edits, and share.

Getting Started: Your First Routine in Five Minutes

If you have read this far and want to try routines right now, here is the fastest path.

Step 1: Open Claude Code in your terminal or go to claude.com.

Step 2: If using the CLI, type /schedule. If using the web UI or Desktop app, look for "New remote task" or the routines section.

Step 3: Select your repository.

Step 4: Write a simple prompt. Start with something low-stakes:

Check this repository for any TODO or FIXME comments in the codebase.
List them with file paths and line numbers.
Post the results to Slack in #dev-tasks.
If there are no TODOs or FIXMEs, say "Codebase is clean."

Step 5: Set the trigger to weekly.

Step 6: Attach the Slack connector and select your channel.

Step 7: Save and wait for the first run.

That is it. Your first routine is live.

Start with one. See how it performs. Refine the prompt based on the output. Then add a second routine. Then a third. The compound effect builds quickly.

The coverage from The Register, 9to5Mac, and SiliconAngle focuses on the announcement. What it does not cover, and what I hope this guide provides, is the practical knowledge you need to make routines work in your daily workflow. The prompt patterns. The cost awareness. The failure modes. The recipes you can steal and adapt.

What I Am Still Figuring Out

I want to be transparent about what I do not have fully solved yet.

Observability. I do not have a great way to monitor routine health over time. I know when a routine fails because the expected output does not appear. But I do not have dashboards showing run duration trends, token usage per routine, or success rates over weeks. I am building a lightweight monitoring layer, but it is manual and clunky. I suspect Anthropic will ship better observability tooling, but for now, you are on your own.

Prompt versioning. When I update a routine's prompt, the old version is gone. There is no history, no diff, no way to roll back to a previous prompt if the new one produces worse results. I have started keeping my routine prompts in a routines/ directory in my repositories, version-controlled alongside the code they operate on. It is extra work, but it has already saved me twice when a prompt edit made things worse.
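The versioning workaround is nothing fancier than a directory and ordinary commits. The `routines/` directory name and the file below are my own convention, not anything the feature requires:

```shell
# Keep each routine's prompt in the repo it operates on
mkdir -p routines
cat > routines/content-audit.md <<'EOF'
You are auditing the content quality of the blog.
Check all published MDX files in /content/blog/ for meta description
length, title length, keywords, and internal link count.
EOF
ls routines/
# When a prompt edit makes things worse:
#   git log routines/content-audit.md
#   git checkout <sha> -- routines/content-audit.md
```

When I change a routine in the UI, I paste the same text into the file and commit. Clunky, but it gives me the diff and rollback history the platform does not.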

Multi-repo routines. Right now, each routine targets one or more repositories, but orchestrating across repositories is not seamless. I have a frontend repo and a content repo that are tightly coupled, and I want a single routine that understands both. I am working around this by having the routine clone the second repo during its run, but it feels like a hack.

Team coordination. On a Team plan with 25 runs per day shared across the team, who gets priority? We have not built a governance model yet. Right now it is first-come, first-served, and I know that will not scale past five people.

These are solvable problems. But I would rather tell you about them now than have you discover them at 2 AM when your critical routine did not run because a teammate's experimental routine ate the last slot.

The Irrigation Principle

I keep coming back to that farming metaphor. For decades, software development has been hand-watered. A human wakes up, walks to the field, and carries water to the plants. Every day. Every morning. Rain or shine.

Routines are not smarter water. They are infrastructure. They are the pipes, the timers, the drip lines that deliver exactly what is needed, when it is needed, without requiring someone to be standing there holding the hose.

The work does not disappear. Someone still needs to design the irrigation system. Someone needs to check the soil. Someone needs to decide what to plant. That is your job. That is the part that requires human judgment, taste, and context.

The carrying of water? Let Claude do that while you sleep.

I am two weeks into running routines in production, and my relationship with my own codebase has already changed. I spend less time on maintenance and more time on the work that actually requires me to be present. The code reviews are more thoughtful because the mechanical checks are already done. The changelogs are more useful because they are drafted by an agent that reads every commit, not by a human who skims the git log at 5 PM on a Friday.

The automation is just beginning. Anthropic will expand the trigger types, improve the connector ecosystem, and raise the rate limits. The routines you build today are version one. They will get better. But you need to start now, because the compound learning, the prompt refinement, the workflow integration, that takes time that no amount of future features can shortcut.

So. What will you have Claude do while you sleep?
