Claude Code Hooks + Cursor Automations: Total Coverage, Inside and Out
You’ve already read about Claude Code Hooks. You’ve already read about Cursor Automations. They solve different problems — but if you use them together, they cover your entire development workflow with AI: one from the inside, the other from the outside.
This article is about how to combine them into a complete governance stack.
The Mental Model: Two Layers, Zero Gaps
First, the distinction:
Claude Code Hooks live inside an agent session. They fire at specific points in Claude’s lifecycle — before a tool runs, after a file is edited, when the session ends. They control what Claude does while working. They are deterministic: if you configure a hook to block dangerous shell commands, that block happens every time, without exception.
Cursor Automations live above agent sessions. They don’t control what an agent does — they decide when it starts and what it reports. A commit to main, a Slack message, a PagerDuty alert — these trigger complete workflows without anyone at the keyboard.
Put them together:
- Automations launch the agent when something happens in your codebase or infrastructure
- Hooks enforce the rules while that agent runs
- Humans review only what requires human judgment
That’s not just productivity. It’s a governed, auditable development pipeline with standards applied automatically.
Where Each Layer Operates
| | Claude Code Hooks | Cursor Automations |
|---|---|---|
| Scope | Inside a Claude Code session | Above agent sessions |
| Trigger | Tool events, lifecycle points | Commits, Slack, timers, incidents |
| Can block actions | Yes (a `PreToolUse` hook can deny a tool call) | No (they launch agents, they don't intercept actions) |
| Lives in | `.claude/settings.json` | Cursor IDE configuration |
| Shared with the team | Commit it to the repo | Configured per workspace |
| Best for | Enforcement, formatting, security guardrails | Orchestration, scheduled reviews, incident response |
A Real Workflow: Security Review on Every Push
Here’s how the two layers work together in a concrete scenario:
Without either:
A developer pushes to main. Someone on the team eventually reviews the code — when they have time. Security issues can go unnoticed for days.
With Automations only:
An Automation triggers a security review agent on every push. The agent scans for vulnerabilities and sends findings to Slack. Better, but the agent itself has no guardrails: it can edit files, run shell commands, and propose changes without restriction.
With both:
The Automation triggers the agent. Hooks enforce the rules while it runs:
- `PreToolUse` blocks the agent from running `rm -rf` or writing credentials to files
- `PostToolUse` auto-formats every file the agent touches
- `Stop` logs the session summary with timestamps for the audit trail
- If something is flagged as high risk, the agent reports via Slack; nothing merges without human approval
The Automation decides when the work happens. Hooks decide how it happens.
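The `PostToolUse` formatter mentioned above can be sketched as a small shell script. The `file_path` field name and the formatter choices here are assumptions, not documented behavior; adjust both to your stack:

```shell
# Sketch of .claude/hooks/auto-format.sh, a PostToolUse formatter.
# Assumptions: the hook payload carries the edited file in "file_path",
# and the formatters below are installed.

pick_formatter() {
  case "$1" in
    *.py)      echo "black" ;;
    *.js|*.ts) echo "prettier" ;;
    *.go)      echo "gofmt" ;;
    *)         echo "" ;;          # unknown type: leave the file alone
  esac
}

# In the real hook, the file comes from the JSON payload on stdin:
#   file=$(cat | sed -n 's/.*"file_path": *"\([^"]*\)".*/\1/p')
file="src/handlers/payment.ts"     # example edited file
fmt=$(pick_formatter "$file")
[ -n "$fmt" ] && echo "would run: $fmt $file"
```

Keeping the formatter lookup in one function makes it easy to extend as the team adds languages, without touching the hook wiring.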
Configuring the Stack
Step 1 — Configure your Automations in Cursor
In your Cursor workspace, configure an automation triggered by PRs merged to main:
```
Trigger: GitHub PR merged → branch: main

Agent task: "Review this diff for security vulnerabilities, complexity
issues, and test coverage gaps. Report high-risk findings in the
#code-review Slack channel. Don't make changes."
```
This gives you permanent coverage without anyone having to manually initiate a review.
Step 2 — Restrict the agent with Hooks
In .claude/settings.json at the root of your repo (commit it so the whole team inherits it):
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/security-guard.sh" }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write|MultiEdit",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/auto-format.sh" }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": ".claude/hooks/audit-log.sh" }
        ]
      }
    ]
  }
}
```
Your security-guard.sh blocks dangerous patterns. Your auto-format.sh keeps code consistent. Your audit-log.sh writes the summary of each session to a log file with timestamps.
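As an illustration, `security-guard.sh` might look like the sketch below. A `PreToolUse` hook receives the pending tool call as JSON on stdin, and exiting with code 2 blocks the call and returns stderr to the agent; the payload shape shown and the blocked patterns are assumptions, not a complete policy:

```shell
# Sketch of .claude/hooks/security-guard.sh (patterns are examples only).
# Exit code 2 from a PreToolUse hook blocks the tool call.

guard() {
  # Pull the shell command out of the payload without external dependencies.
  cmd=$(printf '%s' "$1" | sed -n 's/.*"command": *"\([^"]*\)".*/\1/p')
  case "$cmd" in
    *'rm -rf'*|*'.ssh/'*|*'aws_secret'*)
      echo "security-guard: blocked: $cmd" >&2
      return 2 ;;
  esac
  return 0
}

# In the real hook the payload comes from stdin: guard "$(cat)"; exit $?
guard '{"tool_input": {"command": "git status"}}' && echo "allowed"
guard '{"tool_input": {"command": "rm -rf /tmp/build"}}' 2>/dev/null || echo "blocked"
```

Because the pattern list lives in one `case` statement, expanding the denylist is a one-line change that the whole team inherits on the next pull.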
Every agent that Automations launches — and every developer running Claude Code manually — operates under these same rules.
Step 3 — Define your human checkpoints
Not everything should be automated end-to-end. Decide in advance:
- Security findings above a certain severity → Slack alert, human reviews before merge
- Low-risk PRs (docs, minor refactors) → agent can comment, human still merges
- PagerDuty incidents → agent investigates and proposes a fix PR, human approves before deploy
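These checkpoints can be written down as a tiny routing policy so they live in code rather than in tribal knowledge. A hypothetical sketch; the severity labels and actions are illustrative, not features of either tool:

```shell
# Hypothetical routing of agent findings to human checkpoints.
route() {
  case "$1" in
    critical|high) echo "Slack alert; hold merge for human review" ;;
    docs|minor)    echo "agent comments on the PR; human merges" ;;
    incident)      echo "agent opens a fix PR; human approves deploy" ;;
    *)             echo "log only" ;;
  esac
}

route high      # Slack alert; hold merge for human review
```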
Automations handles detection and initial response. Hooks handles guardrails. Humans handle judgment calls.
For Teams: The Governance Argument
When you commit .claude/settings.json to your repository, every developer on the team inherits your Hooks configuration automatically. When you configure Automations in your Cursor workspace, every push triggers the same review pipeline.
For tech leads and CTOs, this means:
Consistent standards without having to monitor everyone. You don’t need to remind developers to run the linter, check for vulnerabilities, or update documentation after a refactor. The stack does it automatically, always.
Audit trails by default. Your Stop hook logs every Claude Code session. Your Automations log every workflow triggered in Cursor. You have a complete record of what ran, when, and what it found — without anyone having to maintain that manually.
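The Stop-hook logger can be a few lines. A sketch, assuming the hook's JSON stdin carries a `session_id` field (verify the field name against your Claude Code version):

```shell
# Sketch of .claude/hooks/audit-log.sh, the Stop hook from the config above.
log_session() {
  sid=$(printf '%s' "$1" | sed -n 's/.*"session_id": *"\([^"]*\)".*/\1/p')
  printf '%s session=%s ended\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "${sid:-unknown}"
}

# In the real hook: log_session "$(cat)" >> .claude/session-audit.log
log_session '{"session_id": "abc-123"}'
```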
Onboarding that works. A new developer clones the repo, opens Claude Code, and the hooks are already configured. Opens Cursor and the automations are already running. There’s no “remember to configure X” — configuration is part of the codebase.
Separation of concerns at scale. As you add more agents — more automations, more developers using Claude Code — the governance layer scales with them. Rules are defined once, applied everywhere.
The Honest Tradeoffs
Hooks require maintenance. Your shell scripts need to work correctly or they’ll block legitimate operations. Test them well before committing them to the team repo. Start with PostToolUse (non-blocking) before adding PreToolUse hooks (blocking).
Automations pricing isn’t fully published. Expect costs to scale with trigger volume. Start with high-value, low-frequency triggers (PRs merged to main, PagerDuty incidents) before adding high-frequency ones (every commit).
Both require calibration. Too many hooks create friction. Too many automations create noise. The goal is targeted automation — the right guardrails in the right places, not maximum automation everywhere.
The Short Version
Claude Code Hooks answer: what should always happen inside an agent session?
Cursor Automations answer: when should an agent session start?
Together, they answer: how do we run AI-assisted development at team scale, with standards and without constant babysitting?
That’s the stack.
Are you already using either tool on your team? Or both? Tell us how you configured them.