Building better AI tools

Pace, incentives, and organizational reality

  • Several commenters doubt that industry will “slow down” or prioritize learning-first workflows in competitive corporate contexts.
  • Committees and shared responsibility are seen as blocking bold rethinks of tooling; people fear “rocking the boat.”
  • Others argue that the Innovator’s Dilemma and organizational incentives, not a lack of creativity, are what really block change.

Interfaces: chatbots vs “intelligent workspaces”

  • Many want richer, tool-heavy “intelligent workspaces” instead of raw chatbots: environments tightly integrated with logs, code, infra, and explicit controls.
  • It’s acknowledged that this is harder and costlier than “AI in a textbox,” so vendors skew toward selling to management rather than building better UX.
  • Team-context sharing between coding agents (Claude Code, Cursor, etc.) is desired, but concerns arise about loss of control over context and potential for abuse or miscoordination.
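The “explicit controls” the workspace advocates want can be made concrete. A minimal sketch of a team context-sharing policy, assuming a hypothetical manifest format (the `ContextPolicy` class, field names, and glob patterns are illustrative, not from any real agent product):

```python
import fnmatch
from dataclasses import dataclass, field

@dataclass
class ContextPolicy:
    """Hypothetical manifest controlling what repo content agents may read."""
    share: list = field(default_factory=list)  # glob patterns agents may access
    deny: list = field(default_factory=list)   # patterns never shared (wins over share)

    def permits(self, path: str) -> bool:
        # Deny rules are checked first, so secrets stay out even if a
        # broad share pattern would otherwise match them.
        if any(fnmatch.fnmatch(path, p) for p in self.deny):
            return False
        return any(fnmatch.fnmatch(path, p) for p in self.share)

policy = ContextPolicy(
    share=["src/**/*.py", "docs/*.md"],
    deny=["**/.env", "secrets/*"],
)
```

Making the share/deny split declarative is one way to address the “loss of control over context” concern: the team, not the agent, decides what crosses the boundary.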

Human-in-the-loop vs autonomous agents (especially in ops)

  • Strong debate over how far incident-response agents should go:
    • One side: AI should mostly suggest, not act, due to non-determinism, safety, and the need for humans to practice diagnostic skills.
    • Other side: many investigative steps (log queries, state dumps, anomaly detection) are low-risk and computers are inherently better at them; blocking automation there “wastes time.”
  • Broad agreement that unsupervised destructive actions (e.g., Terraform apply, dropping data, DB wipes) are unacceptable today.
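The two sides of the debate above can coexist behind a policy gate: investigative commands run unsupervised, anything destructive routes to a human. A minimal sketch, with hypothetical prefix lists (the command sets and `GateDecision` shape are assumptions for illustration):

```python
from dataclasses import dataclass

# Illustrative allow/deny split: read-only investigation is automated,
# mutating actions require explicit human sign-off.
READ_ONLY_PREFIXES = ("kubectl get", "kubectl logs", "terraform plan", "SELECT")
DESTRUCTIVE_PREFIXES = ("terraform apply", "kubectl delete", "DROP", "DELETE")

@dataclass
class GateDecision:
    command: str
    allowed: bool       # may the agent run this unsupervised?
    needs_human: bool   # must a human approve first?

def gate(command: str) -> GateDecision:
    """Classify an agent-proposed command before execution."""
    cmd = command.strip()
    if any(cmd.startswith(p) for p in DESTRUCTIVE_PREFIXES):
        return GateDecision(cmd, allowed=False, needs_human=True)
    if any(cmd.startswith(p) for p in READ_ONLY_PREFIXES):
        return GateDecision(cmd, allowed=True, needs_human=False)
    # Unrecognized commands fail closed: default to human review.
    return GateDecision(cmd, allowed=False, needs_human=True)
```

Failing closed on unknown commands reflects the broad agreement that unsupervised destructive actions are unacceptable today, while still letting low-risk log queries and state dumps run without blocking on a human.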

Skills, learning, and “deskilling”

  • Many worry that AI coding erodes deep fluency, much as GPS eroded navigation and calculators eroded mental arithmetic; deliberate practice (“paint-the-fence”) is seen as key to design skill and mental models.
  • Others counter that higher-level thinking is the real bottleneck; code can become an “implementation detail” if reviewed well.
  • Some report AI actually deepens learning via debugging its flawed output or using it for scaffolding while still reasoning through designs.
  • Parallel debate over creativity: is AI a “bicycle for the mind” or a “credit card for the mind” that eventually presents a cognitive bill?

Designing better AI tooling

  • Many resonate with starting from architecture, specs, and tests, then delegating implementation to AI; AI works best with clear structure, types, and docs.
  • Preference for HITL tools that guide, question, and nudge (Clippy-like) over “magic wand” agents that spit out final answers.
  • Some highlight that current models already support this via careful system prompts; the real gap is product design philosophy, not raw capability.
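The claim that the gap is product philosophy, not capability, can be shown with the same model behind two system prompts. A sketch assuming a generic chat-completion message format; the prompt wording and `build_request` helper are illustrative, not from any real product:

```python
# Two contrasting system prompts for the same underlying model. The
# difference is product design philosophy, not raw capability.

MAGIC_WAND_PROMPT = (
    "You are a coding agent. Given a task, produce the complete final "
    "implementation with no questions asked."
)

GUIDE_PROMPT = (
    "You are a pair-programming guide. Before writing code: "
    "(1) restate the task and its constraints, "
    "(2) ask about any ambiguous requirement, "
    "(3) propose an architecture and tests first, "
    "(4) only then draft an implementation, flagging assumptions."
)

def build_request(system_prompt: str, user_task: str) -> dict:
    """Assemble a provider-agnostic chat-completion style payload."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_task},
        ]
    }
```

The HITL behavior the section favors (guide, question, nudge) is reachable today by swapping the system message; the model never changes.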