GitHub CEO: manual coding remains key despite AI boom

Terminology and Headline Framing

  • Several commenters object to the phrase “manual coding,” reading it as a belittling label for human coding that implies tedium rather than expertise.
  • Others say “manual” is neutral or even precise (“by hand”), and that any negative connotation is recent or culture-specific.
  • Multiple people think the headline misrepresents what was actually said; they see no real endorsement of old‑school “manual coding,” just a reminder humans remain in the loop.
  • There is skepticism about secondary sources (e.g., AI‑like summaries, recycled content) and whether the quote is being framed to drive clicks.

How Developers Are Actually Using AI Tools

  • Common positive uses: boilerplate generation, CRUD/UI scaffolding, migrations, test stubs, quick API exploration, planning documents, and staying “in flow” when blocked.
  • Some describe highly productive workflows: AI drafts code or design docs, humans review, refactor, and integrate; AI is treated like a junior or summer intern.
  • Others find AI most useful as a rubber‑duck/brainstorming partner rather than as an autonomous coder.
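The “boilerplate and CRUD scaffolding” use case above is the kind of routine, pattern-heavy code commenters report AI handling well. As one small, hypothetical illustration (the names and in-memory store are invented for this sketch, not from the thread), here is the sort of scaffold a tool might draft and a human would then review and integrate:

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Hypothetical in-memory CRUD scaffold -- the kind of repetitive code
# commenters describe delegating to an AI assistant, then reviewing.

@dataclass
class User:
    id: int
    name: str
    email: str

class UserStore:
    def __init__(self) -> None:
        self._users: Dict[int, User] = {}
        self._next_id = 1

    def create(self, name: str, email: str) -> User:
        user = User(self._next_id, name, email)
        self._users[user.id] = user
        self._next_id += 1
        return user

    def read(self, user_id: int) -> Optional[User]:
        return self._users.get(user_id)

    def update(self, user_id: int, **fields) -> Optional[User]:
        user = self._users.get(user_id)
        if user is None:
            return None
        for key, value in fields.items():
            setattr(user, key, value)
        return user

    def delete(self, user_id: int) -> bool:
        return self._users.pop(user_id, None) is not None
```

The value, per the thread, is not that this code is hard, but that generating it mechanically keeps the human “in flow” for the parts that are.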

Current Limitations and Failure Modes

  • Repeated experiences of AI failing at nuanced refactors, mixing up control structures, or being unable to reconcile similar patterns (e.g., if vs switch).
  • Tools often struggle with context: line numbers, larger codebases, or subtle logical conditions; they tend to bolt on new code instead of simplifying or deleting.
  • Hallucinations remain an issue: imaginary crates/APIs, incorrect references, misleading “your tests passed” messages, and broken jq/complexity examples.
  • Some argue critiques should be explicitly about today’s transformer models, not “AI” in the abstract; others see that as hair‑splitting while real users deal with concrete failures.
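The “nuanced refactor” failure mode is easiest to see with a concrete sketch. The example below is hypothetical (constructed for illustration, not quoted from the thread): an order-sensitive if/elif chain, and a plausible AI “refactor” into independent checks that silently changes behavior for one combination of inputs.

```python
# Original: an order-sensitive chain -- the first matching branch wins,
# so express shipping overrides every weight band.
def shipping_cost(weight_kg: float, express: bool) -> float:
    if express:
        return 20.0
    elif weight_kg > 10:
        return 15.0
    elif weight_kg > 1:
        return 8.0
    else:
        return 5.0

# Hypothetical naive "refactor" of the sort commenters report: the
# branches become independent assignments, and the last check now
# overrides the express flag for heavy parcels.
def shipping_cost_refactored(weight_kg: float, express: bool) -> float:
    cost = 5.0
    if express:
        cost = 20.0
    if weight_kg > 1:
        cost = 8.0
    if weight_kg > 10:
        cost = 15.0
    return cost
```

A 12 kg express parcel costs 20.0 in the original but 15.0 after the “refactor” -- exactly the kind of subtle logic change that a large noisy diff can hide from review.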

Specification, Reasoning, and “Essential Complexity”

  • Many invoke ideas similar to “No Silver Bullet”: the hard part is understanding and specifying systems, not typing code.
  • Natural language prompts don’t eliminate the need to think through architecture, business logic, non‑functionals, and long‑term evolution; they may just add an imprecise intermediate layer.
  • Several note that programming languages remain the right precision level for specifying behavior; code is still the ground truth for reasoning and verification.
  • Others say LLMs can help with this “essential” side too, by suggesting patterns or prior art for common business problems—but agreement is limited.
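One small illustration of why natural language is an imprecise intermediate layer: “round to the nearest integer” sounds like a complete specification in English, yet Python’s built-in round() uses banker’s rounding (round half to even), which may not be what the business meant. The round_half_up helper below is a hypothetical name introduced for this sketch.

```python
from decimal import Decimal, ROUND_HALF_UP

# Python's built-in round() rounds ties to the nearest even integer:
assert round(0.5) == 0
assert round(1.5) == 2
assert round(2.5) == 2

# If the requirement actually meant "round halves up", the code must
# say so explicitly -- the prose alone did not pin it down.
def round_half_up(x: str) -> int:
    return int(Decimal(x).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

assert round_half_up("0.5") == 1
assert round_half_up("2.5") == 3
```

This is the sense in which commenters say code remains the ground truth: only the executable form resolves which of the two behaviors was intended.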

Productivity, Jobs, and Management Incentives

  • Reported productivity gains range from negligible to roughly 20% up to 2×, mostly on routine tasks; many note similar step‑changes have come from past tools and frameworks.
  • Some foresee fewer developers needed for the same output; others think more software will be built and demand will absorb gains.
  • There’s strong skepticism toward narratives of full SWE automation, but broad agreement that managers and investors want automation, not mere augmentation.
  • Concern is high for juniors: AI can mask their lack of understanding, stunt learning, and make them easiest to replace, even as they’re the group that most needs to write code themselves.

Code Quality and Long‑Term Maintainability

  • Multiple anecdotes of AI‑generated changes introducing subtle logic bugs, large noisy diffs, or 4000‑line “glue code” files that are impossible to reason about.
  • Some say AI mainly adds accidental complexity; human experts must come back to rationalize, refactor, and enforce architecture.
  • Studies and experience suggesting higher error rates with tools like Copilot are mentioned as a possible reason for more cautious messaging from vendors.
  • Many emphasize that debugging, refactoring, and understanding failure modes still require “manual” expertise; when things break, you need people who truly understand the code.

Future Trajectory and Hype Calibration

  • One camp expects further big jumps and eventual automation of many programming domains; another sees a plateau in current approaches and warns against extrapolating hype.
  • Several stress that AI is “just another tool”: powerful but not reasoning like humans, not an AGI, and dangerous to over‑trust.
  • Overall, the thread leans toward: AI can greatly accelerate parts of development, but careful human coding, specification, and review remain central—and may matter more as AI‑generated complexity accumulates.