A year of vibes

Productivity and “lost year” debate

  • Some see 2025 as a “lost year” for programming: discourse shifted from algorithms/architecture to tools, prompts, and AI wrappers, with “AI for X” lists and gold‑rush vibes likened to blockchain/Web3 hype.
  • Others report sharply higher personal productivity: finishing long‑standing side‑project backlogs, building many small CLIs, and feeling the “Anthropic tax” is worth it.
  • A data‑science perspective: 2025 felt like a “2.0” jump in tooling (Polars/pyarrow/ibis, Marimo, GPU‑accelerated PyMC), enabling more, faster, cheaper work.
  • Disagreement on whether learning “natural language as a new programming language” is genuine progress or meta‑work that displaced actual building.

Agentic coding, failures, and tooling gaps

  • Strong interest in preserving coding‑agent sessions: logs as primary artifacts, not just commits. Failures are seen as valuable context for preventing models from repeating the same mistakes.
  • People share workflows to export, search, and visualize Claude/agent session logs, sometimes feeding them back into skills that generate new guidelines or ADRs.
  • Git/PRs are widely viewed as inadequate for AI‑generated code: they lack prompts, intermediate reasoning, and branching attempts. Ideas include prompts folders, JSONL logs, OTel traces into ClickHouse, richer timelines, and self‑review comments.
  • Some argue full sessions are overkill for human reviewers and should be summarized; others think machine‑readable logs will be crucial for future agents and “programmer‑archaeologists.”
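Treating session logs as first‑class artifacts, as suggested above, can start with something as simple as indexing the raw JSONL transcripts. A minimal sketch in Python, assuming one JSON object per line with `role` and `content` fields — real agent log schemas vary, so the field names here are an assumption:

```python
import json
from pathlib import Path

def extract_prompts(log_path):
    """Yield user prompts from a JSONL agent-session log.

    Assumes one JSON object per line with "role" and "content"
    keys -- actual schemas differ by agent, so adapt as needed.
    """
    for line in Path(log_path).read_text().splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        if entry.get("role") == "user":
            yield entry.get("content", "")

# Example: write a tiny fake session log, then pull the prompts out.
demo = Path("session_demo.jsonl")
demo.write_text(
    '{"role": "user", "content": "add a retry to the fetch call"}\n'
    '{"role": "assistant", "content": "Done -- see patch."}\n'
    '{"role": "user", "content": "now write a test for it"}\n'
)
prompts = list(extract_prompts(demo))
print(prompts)
```

From here the same parsed entries could be shipped to fuller backends (e.g. OTel traces into ClickHouse, as some commenters propose); the point is that the transcript, not just the resulting diff, becomes queryable.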

Prompts, learning, and developer skills

  • Debate over metrics: some happily use lines‑of‑code or commit counts to gauge personal productivity; others call such metrics misleading.
  • Concerns that heavy reliance on LLMs will atrophy debugging skills; counter‑argument that “Stack Overflow coders” already existed and AI is just another accelerant.
  • Techniques emerge for handling unproductive loops: resetting code, asking models to analyze why they got stuck, and storing distilled “discoveries” for future sessions.

Parasocial bonds and human–LLM interaction

  • Many resonate with the article’s discomfort about forming parasocial relationships with LLMs; comparisons are drawn to the film “Her” and influencer culture.
  • Some recommend treating LLMs like command‑line tools (short, Google‑style queries) to avoid anthropomorphizing; others naturally use full sentences and politeness, arguing it helps clarity or personal habit.
  • Ethical and psychological questions arise: whether to be “kind” to entities that can’t suffer, whether politeness habits matter for human interactions, and how memory/recall in agents amplifies the feeling of a “someone” rather than a tool.

Emerging use cases and observability

  • Proposed “new kinds” of QA: agents repeatedly running complex onboarding flows to test UX and edge cases, and “note‑to‑self” agents that watch your screen and turn spoken ideas into implementation specs.
  • LLMs plus tools like Ghidra are making binary analysis dramatically easier, even enabling reconstruction of C++ and static vulnerability scanning.
  • Observability is seen as ripe for reinvention: LLM‑authored eBPF and a wave of small, focused OSS tools/Skills could challenge incumbent platforms whose APIs aren’t agent‑friendly.

Adoption, visibility, and industry perception

  • Some claim AI‑written code is already “everywhere but invisible” (e.g., large internal codebases); skeptics ask for concrete, public examples and distrust vendor‑curated showcases.
  • Outside tech, senior leaders reportedly see limited value in agents beyond chat/report assistance, reinforcing a gap between “tech pit” enthusiasm and broader industry expectations.