Live coding interviews measure stress, not coding skills

What live coding actually measures

  • Many describe live coding as testing performance under social evaluation and high stakes, not day‑to‑day coding ability.
  • Several report freezing on trivial tasks (even sum of evens / FizzBuzz‑level) while later solving them easily alone.
  • Others counter that extremely simple tasks are still a valid “can you code at all” screen; if stress makes a candidate fail even that, they treat the failure itself as a negative signal.
  • Some argue live coding selects for “stage performers” and high stress‑tolerance, a trait only needed in a minority of dev roles.
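For reference, the “trivial task” tier commenters mention (sum of evens, FizzBuzz) is on the order of the following sketch — the exact wording varies by interviewer, but this is roughly the difficulty level being debated:

```python
def sum_of_evens(nums):
    """Return the sum of the even numbers in nums."""
    return sum(n for n in nums if n % 2 == 0)

def fizzbuzz(n):
    """Classic FizzBuzz: return the labels for 1..n."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(sum_of_evens([1, 2, 3, 4]))  # → 6
print(fizzbuzz(5))  # → ['1', '2', 'Fizz', '4', 'Buzz']
```

The point of contention is not whether such problems are hard (they are not), but whether failing them under observation says anything about failing them on the job.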

Employer incentives and risk trade‑offs

  • Many hiring managers say the main goal is avoiding bad hires, not capturing every good one; false negatives (rejecting capable candidates) are an accepted cost.
  • Live coding is seen as a cheap filter against: non‑coders, resume inflation, and “senior” engineers who can’t write basic loops.
  • Others note this doesn’t catch the real killers of productivity: people who can code but add tech debt, complexity, or are bad collaborators.

Alternatives and interview design

  • Commonly suggested replacements or complements:
    • Short take‑home plus a follow‑up discussion / small modifications.
    • Debugging or code‑review exercises on small real‑ish codebases.
    • Pair‑programming style sessions on simple, job‑adjacent tasks.
    • Work trials / probationary periods (where labor law allows).
  • Several emphasize: questions must be very easy, interviewers trained, stress intentionally reduced, and candidates allowed tools/docs.
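As an illustration of the debugging/code-review format suggested above, an interviewer might hand a candidate a short snippet with a planted bug and ask them to find and fix it. A hypothetical example (the bug here is an off-by-one in the loop bound that drops the final window):

```python
def moving_average(values, window):
    """Return the mean of each sliding window of length `window`.

    Buggy version the candidate would be asked to spot:
        for i in range(len(values) - window):   # stops one window early
    Fixed version below: the last valid start index is
    len(values) - window, so the range needs the +1.
    """
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

print(moving_average([1, 2, 3, 4], 2))  # → [1.5, 2.5, 3.5]
```

Proponents argue this format exercises reading and reasoning about unfamiliar code, which is closer to daily work than writing a solution from scratch under a timer.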

AI, cheating, and new constraints

  • Take‑homes are now easily solvable with LLMs; interviewers worry they assess “prompting” more than independent skill.
  • Some say that’s fine if candidates can explain, adapt, and critique AI‑generated code; others insist they need evidence of unaided competence.
  • This is pushing some companies back toward in‑person, monitored sessions or obscure/problem‑specific tasks.

Bias, fairness, and who gets excluded

  • Commenters highlight disproportionate impact on:
    • People with anxiety disorders, autism, or other conditions that impair performance under direct observation.
    • Older engineers unused to LeetCode‑style puzzles.
    • Potential gender effects (commenters cite research in which women failed live coding performed in front of an observer but passed the same tasks in private).
  • Several note live coding is often copied from big tech without evidence it improves hire quality for ordinary CRUD‑style roles.

Experiences and attitudes

  • Stories range from awful “gotcha” interviews and untrained interviewers to enjoyable collaborative sessions.
  • Some genuinely like live coding and find it fun; others avoid any role that requires it and move to indie work, management, or contracting.