
Claude Code Monitor Tool: Stop Polling, Start Reacting

The Monitor tool makes Claude Code event-driven. Catch errors in real time, watch logs and deploys, and save tokens by reacting instead of polling.


Problem: Background work in Claude Code was blind. You ran a command with run_in_background and got a single notification when it finished -- no visibility into what happened along the way. The alternative was polling with /loop or scheduled tasks, which fires a complete prompt every N minutes and costs a full API call per iteration just to ask "did anything happen yet?"

Quick Win: Tell Claude to monitor your dev server for errors. One sentence:

Start my dev server and monitor it for errors

Claude launches the server, attaches a background filter, and only wakes up when something breaks. Zero tokens spent waiting.

What Changed

Anthropic shipped the Monitor tool on April 9, 2026. Noah Zweben, Claude Code PM, announced it: "Claude can now follow logs for errors, poll PRs via script, and more. Big token saver and great way to move away from polling in the agent loop."

The core shift is from time-driven to event-driven. Before Monitor, Claude checked on things at intervals. Now Claude watches things and reacts when they happen. It is the same architectural difference as between polling a database every 5 seconds and subscribing to a change stream. One wastes cycles. The other responds instantly.

Monitor works by launching a shell command whose stdout becomes an event stream. Each line of output is a notification that wakes the main session. If the command is silent, Claude spends nothing. The moment something matches your filter, it pushes into the conversation and Claude starts reacting -- while the underlying process keeps running.
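A minimal sketch of that contract, using a static file in place of a live process (the file name and log lines are illustrative; a real monitor would tail running output):

```shell
# Simulated event stream: only lines matching the filter would wake Claude.
printf '%s\n' \
  'INFO  request served in 12ms' \
  'ERROR database connection refused' \
  'INFO  request served in 9ms' > app.log

# Only the ERROR line becomes a notification; the INFO lines cost nothing.
grep --line-buffered "ERROR" app.log
```

The two INFO lines never reach the session at all; the single ERROR line is the only event.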

How It Works

Every monitor takes four parameters:

| Parameter | What It Controls |
| --- | --- |
| description | Short label shown in every notification ("errors in deploy.log") |
| command | Shell script whose stdout is the event stream |
| timeout_ms | Auto-kill the monitor after this many ms (default 300,000 / 5 min; max 3,600,000 / 1 hr) |
| persistent | Run for the full session; stop manually with TaskStop |

The command is where the real work happens. Each line it prints to stdout becomes one notification. Lines arriving within 200ms of each other batch into a single notification, so multi-line output from one event groups naturally. Stderr goes to an output file you can read later but does not trigger events.
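A small sketch of the stdout/stderr split (the file name monitor.err stands in for the monitor's output file and is illustrative):

```shell
# Only stdout lines become events; stderr is captured for later inspection.
{
  echo "EVENT: build failed"            # stdout -> wakes the session
  echo "debug: retrying fetch" >&2      # stderr -> logged, no notification
} 2> monitor.err

cat monitor.err   # read the captured stderr whenever you need it
```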

Set persistent: true for monitors that should live as long as your session -- dev server watchers, log tailers, PR monitors. For bounded tasks like watching a test run or a deploy, use timeout_ms to auto-kill the monitor when the window closes.
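A sketch of how the four parameters might fit together for a persistent log watcher. The parameter values are paraphrased from the table above and the log path is illustrative, not a real Monitor invocation syntax:

```shell
# Hypothetical monitor definition (names from the parameter table):
#   description: "errors in app.log"
#   persistent:  true       # live for the whole session; stop via TaskStop
#   timeout_ms:  (unset)    # persistent monitors run until stopped
#   command:
tail -f /var/log/app.log | grep --line-buffered "ERROR"
```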

Two Shapes of Monitors

Stream filters watch continuous output and surface matching lines:

# Watch application logs for errors
tail -f /var/log/app.log | grep --line-buffered "ERROR"
 
# Watch file system changes
inotifywait -m --format '%e %f' /watched/dir
 
# Node script that emits events from a WebSocket
node watch-for-events.js

Poll-and-if filters check a source on an interval and emit when conditions change:

# Poll GitHub PR for new comments every 30 seconds
last=$(date -u +%Y-%m-%dT%H:%M:%SZ)
while true; do
  now=$(date -u +%Y-%m-%dT%H:%M:%SZ)
  gh api "repos/owner/repo/issues/123/comments?since=$last" \
    --jq '.[] | "\(.user.login): \(.body)"' || true
  last=$now; sleep 30
done

Both shapes follow the same rule: stdout lines are events, silence means nothing to report. Claude keeps working on other tasks while the monitor runs quietly in the background.
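A self-contained poll-and-if sketch for a local resource (state.txt is an illustrative stand-in for whatever you are watching; the loop is bounded to three iterations so it terminates, where a real monitor would loop until timeout_ms or TaskStop):

```shell
# Emit a line only when the watched file's content changes.
echo "v1" > state.txt
prev=""
for i in 1 2 3; do
  cur=$(cat state.txt)
  if [ "$cur" != "$prev" ]; then
    echo "CHANGED: $cur"    # this line is the event
    prev="$cur"
  fi
  if [ "$i" -eq 2 ]; then echo "v2" > state.txt; fi   # simulate an external change
done
```

Unchanged iterations print nothing, so an idle file costs zero notifications.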

The Token Math

A /loop checking your test suite every 2 minutes over a 10-minute run costs 5 full API calls. Each call loads context, processes the prompt, and returns a response. Five calls, five charges, zero useful work on four of them.

Monitor inverts this. Claude watches the test runner's output through a filter. When test #23 fails at minute 4, that failure line pushes directly into the session. Claude starts diagnosing the error while tests 24 through 47 are still running. No wasted calls. No delayed discovery. The savings compound in longer workflows -- deploy pipelines, overnight builds, multi-hour CI runs.

Practical Use Cases

Dev server error catching. Monitor your Next.js or Vite dev server and get notified the moment a build error or crash loop appears. No more scrolling back through terminal output to find what broke.

Test suite triage. Surface failing tests the instant they fail. Claude starts writing fixes while the remaining tests finish running.

Deploy pipeline watching. Follow CI/CD output and get notified on failures, warnings, or specific deployment stages completing.

PR review polling. Watch for new comments, review requests, or status checks on GitHub pull requests. Claude reacts to each new comment as it arrives.

Log monitoring. Tail production or staging logs with a filter for specific patterns. Each matching line becomes an event Claude can act on immediately.

Three Rules for Good Monitors

  1. Always use grep --line-buffered in pipes. Without it, pipe buffering delays events by minutes. This is the single most common mistake.

  2. Handle transient failures in poll loops. Add || true after API calls so one network timeout does not kill the entire monitor.

  3. Be selective with stdout. Every line becomes a conversation message. Monitors that produce too many events are automatically stopped. Pipe raw logs through a filter. Never stream unfiltered output.

# Good: selective filter
tail -f app.log | grep --line-buffered "ERROR\|WARN\|FATAL"
 
# Bad: firehose that will auto-stop
tail -f app.log

Poll intervals matter too. Use 30 seconds or more for remote APIs (rate limits apply), and 0.5-1 second for local checks.
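A bounded skeleton that applies the rules above: a resilient check guarded by || true and a configurable interval. The interval is set to 0 here so the example terminates quickly; a real monitor would use 30+ seconds for remote APIs and around 1 second for local checks. The file status.txt is illustrative:

```shell
interval=0
rm -f status.txt
for attempt in 1 2 3; do
  # the check may transiently fail (file missing, network blip);
  # || true keeps one failure from killing the monitor
  result=$(cat status.txt 2>/dev/null || true)
  if [ -n "$result" ]; then
    echo "EVENT: $result"
  fi
  if [ "$attempt" -eq 2 ]; then echo "deploy complete" > status.txt; fi
  sleep "$interval"
done
```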

Monitor vs. Hooks vs. Scheduled Tasks

Claude Code now has three automation layers. Each fires on a different trigger.

| Tool | Trigger | Best For |
| --- | --- | --- |
| Hooks | Tool events | Validation before/after tool calls |
| Scheduled Tasks | Time | Recurring work on a fixed cadence |
| Monitor | Events | Reacting to real-time output |

Hooks fire on Claude's own actions (before a file edit, after a commit). Scheduled tasks fire on a clock. Monitors fire on external events. The strongest setups combine all three: hooks enforce guardrails, scheduled tasks handle periodic maintenance, and monitors provide real-time observability on everything else.

Monitor vs. run_in_background

This distinction trips people up. Both run things in the background. The difference is the feedback model.

run_in_background is fire-and-forget. You get one notification when the command finishes. No visibility into what happened along the way. Good for "run this build and tell me when it's done."

Monitor is a live stream. You get a notification for every matching event as it happens. Good for "run this build and tell me the moment something goes wrong." Monitor keeps Claude reactive during the process, not just after it.

Next Steps

  • Learn how hooks complement monitors by validating Claude's own actions
  • Explore autonomous agent loops where monitors provide the feedback signal
  • Read about context management to keep long monitored sessions efficient
  • See scheduled tasks for time-based automation that pairs with event-driven monitoring
  • Try feedback loops to tighten the cycle between writing code and catching issues

The Monitor tool is the missing piece between "Claude runs things" and "Claude watches things." Before this, background work was a black box that opened when the process ended. Now Claude is paying attention the entire time, reacting to what matters and ignoring what doesn't. That is a fundamentally different kind of pair programming.

