Claude Code now supports hooks

Excitement about Hooks & Capabilities

  • Many see hooks as a major step for “context engineering,” runtime verification, and enforcing enterprise/compliance rules on agent behavior.
  • Hooks are valued because they’re deterministic, unlike CLAUDE.md instructions, which Claude often ignores or forgets.
  • Users expect this pattern (scriptable, verifiable steps around an agent) to become standard across coding agents.
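To make the "scriptable steps around an agent" idea concrete: hooks are registered in Claude Code's settings file. A minimal sketch of the shape such a config takes (the matcher and the script path here are hypothetical):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "./scripts/check-command.sh" }
        ]
      }
    ]
  }
}
```

The matched script receives a JSON description of the pending tool call on stdin and can approve or block it via its exit code, which is what makes the behavior deterministic rather than prompt-dependent.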

Workflow, CI, and Safety Patterns

  • Common envisioned pipelines:
    • Pre-hook to restrict allowed commands (e.g., allow tests but block migrations or dangerous ops).
    • Pre-hook to enforce “write tests first,” then run tests, then only commit on success.
    • Post-hook for auto-formatting, linting, type-checking, saving files, or automatic commits to enable rollbacks.
  • Hooks are seen as essential because Claude Code’s commit mechanism breaks some normal git hooks, especially on the cloud / GitHub-API path, where commits are created via the API and never run local hooks.
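The command-restriction pre-hook described above can be sketched as a small shell script. The payload field (`tool_input.command`) and the exit-code convention (exit 2 blocks the call and surfaces stderr to Claude) follow the hooks documentation at launch; the blocklist itself is an invented example, not a complete policy:

```shell
#!/bin/sh
# Sketch of a PreToolUse hook that rejects dangerous Bash commands.
# The blocklist patterns below are illustrative assumptions.

is_blocked() {
  case "$1" in
    *"rm -rf"*|*migrate*|*"DROP TABLE"*) return 0 ;;  # deny
    *) return 1 ;;                                    # allow
  esac
}

# Claude Code passes a JSON description of the pending tool call on stdin.
cmd=$(jq -r '.tool_input.command // empty' 2>/dev/null)
if is_blocked "$cmd"; then
  echo "Blocked by policy: $cmd" >&2
  exit 2   # exit code 2 blocks the call; stderr is shown to Claude
fi
```

A "tests allowed, migrations blocked" policy is just more patterns in the case statement, which is why commenters see this as a practical compliance layer.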

Comparisons with Other Tools

  • Some say this closes a gap with tools like Cursor and Amazon Q, especially for linting and type-checking.
  • Opinions diverge: some feel Claude Code is leading the field; others find it too “hyperactive” and prefer more incremental tools like Aider or Cursor.
  • Cursor’s tab completion is praised; Claude Code’s “plan mode,” larger context, and IDE flexibility (JetBrains, etc.) are cited as reasons to switch.

Productivity Wins & Real-World Use

  • Reports of large projects spanning multiple repos executed in one Claude Code session, with substantial time savings, though diffs still needed manual review.
  • Examples include quickly adding subscription billing to an Android app, complex Azure PowerShell automation, and everyday scripting and troubleshooting.

Limitations, Frustrations, and Workarounds

  • Complaints that Claude:
    • Loses focus, ignores CLAUDE.md, and runs the wrong commands (e.g., missing -j or custom workflows).
    • Struggles with novel problems (e.g., a custom YouTube API app with websockets), looping or making circular edits.
  • Suggested mitigations: simplify and script common commands, TDD so the agent can converge, use hooks to reject wrong actions, and break work into small steps.
  • Some dislike having to run /clear frequently due to context limits.

Legal / Terms of Service Concerns

  • Significant debate about Anthropic’s clause banning use of services to develop “competing products or services.”
  • Some interpret it as mainly about training competing models; others say the literal wording is far broader and potentially incompatible with open-source and downstream training on generated code.
  • Edge cases (e.g., third parties later training on code you generated) are noted as unclear.

Impact on Jobs and Software Quality

  • Long subthread on whether such tools will destroy or reshape developer jobs.
  • Analogies: the shift from hand tools to power tools, or from film to digital photography (more output, not always better quality).
  • Some expect a flood of “sloppy but good enough” software before a later maturation phase; others argue cheaper development will just expand demand and custom software.
  • Consensus that LLM agents currently resemble very fast interns whose work still requires human design and review.

Technical Notes & Open Gaps

  • Hooks can use stdin JSON and scripts (e.g., with jq) to implement complex logic like monorepo directory-based linting or project-specific behaviors.
  • Some wish hooks were modeled as MCP tools so agents could auto-discover them and reuse across ecosystems.
  • Users report needing to restart Claude Code to test new hook configs, so many route logic through editable scripts.
  • There’s interest in IDE/Language Server MCP integration for richer, instant feedback beyond basic shell commands.
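The monorepo directory-based linting mentioned above might look roughly like this, assuming the hook payload exposes the edited file under `tool_input.file_path`; the two-package layout and the choice of linters are invented for illustration:

```shell
#!/bin/sh
# Sketch of a PostToolUse hook: pick a linter based on which monorepo
# directory the edited file lives in. Layout and linters are assumptions.

lint_cmd_for() {
  case "$1" in
    frontend/*) echo "npx eslint" ;;
    backend/*)  echo "ruff check" ;;
    *)          echo "" ;;            # no linter configured for this path
  esac
}

# The hook payload arrives as JSON on stdin; jq pulls out the file path.
file=$(jq -r '.tool_input.file_path // empty' 2>/dev/null)
cmd=$(lint_cmd_for "$file")
if [ -n "$cmd" ] && [ -n "$file" ]; then
  $cmd "$file" || exit 2   # exit 2 feeds the linter's complaints back to Claude
fi
```

Because the decision logic lives in an ordinary script rather than in the hook config itself, it can be edited and re-run without restarting Claude Code, which matches the workaround users report.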