You're Not Interviewing for the Job. You're Auditioning for the Job Title

Interview as Performance vs Reality

  • Many commenters agree the article nails how interviews reward performance over day‑to‑day engineering: you’re auditioning for “senior architect who solves hard problems,” not demonstrating how you’d actually ship features.
  • People report being rejected for answers grounded in real‑world tradeoffs (pagination, indexing, simple architectures) because interviewers wanted textbook data structures or flashy system designs.
  • Some interviewers in the thread explicitly admit they design questions to reveal candidates who over‑engineer versus those who seek minimal, robust solutions—though candidates often can’t tell which is wanted.
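The "real‑world tradeoffs" answers mentioned above can be made concrete. Below is a hedged sketch (the `events` table and column names are hypothetical) of the kind of boring pagination answer commenters describe: keyset pagination over an indexed column, rather than a clever in‑memory structure or deep `OFFSET` scans:

```python
import sqlite3

# Hypothetical example: the "boring" pagination answer.
# A keyset query seeks via the primary-key index and stays fast at any
# page depth, whereas OFFSET must scan and discard all preceding rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 [(f"event-{i}",) for i in range(1000)])

def page_after(last_seen_id, page_size=10):
    """Keyset pagination: fetch the next page strictly after last_seen_id."""
    return conn.execute(
        "SELECT id, payload FROM events WHERE id > ? ORDER BY id LIMIT ?",
        (last_seen_id, page_size),
    ).fetchall()

first = page_after(0)
second = page_after(first[-1][0])  # resume from the last id seen
```

The tradeoff being articulated: keyset pagination cannot jump to an arbitrary page number, but it gives stable, index‑backed latency as the table grows, which is usually what shipping a feature actually needs.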

Leetcode, Standardization, and “Profession” Arguments

  • Frustration with repeated Leetcode rounds is widespread; several advocate a one‑time standardized exam or certification (analogous to bar/PE exams) instead of redoing puzzles for every job change.
  • Others push back: standardized tests and credentials are distrusted because many certified graduates are weak, while strong engineers may lack formal signals.
  • There’s tension between wanting “software engineer” to be a real profession with ethics boards and exams, and not wanting the constraints, gatekeeping, or extra hoops that come with that.

Candidate Experience: Burnout, Gameability, and “Staying Ready”

  • Long‑tenured engineers describe re‑entering the market as “interview hell”: broken automated coding tests, months spent wading through fake listings or awful roles, and multi‑round loops for mediocre pay.
  • Some deliberately “stay interview‑ready” by keeping resumes, accomplishment logs, and networks warm; others find this dystopian—unpaid marketing work just to remain employable.
  • Debate arises over whether this is reasonable professionalism (everyone has to present themselves) or a sign that the industry offloads training and vetting costs onto individuals.

Simplicity vs. Complexity and “Trick” Dynamics

  • A recurring theme is that interviews tacitly reward complexity: microservices, Kafka, Kubernetes, and advanced algorithms, even when a SQLite file or simple collection would do.
  • Others argue good interviews value fundamentals and clarity: knowing when a simple design scales sufficiently, articulating assumptions (load, latency, data size), and reasoning about failure modes.
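The "articulating assumptions" point lends itself to a worked example. A hedged sketch, with made‑up numbers, of the back‑of‑envelope arithmetic commenters say good interviews reward: checking whether one simple node suffices before reaching for Kafka or Kubernetes.

```python
# Hypothetical capacity estimate: do we need a distributed design,
# or does one boring box with an indexed database suffice?
requests_per_day = 1_000_000              # assumed traffic
seconds_per_day = 86_400
avg_rps = requests_per_day / seconds_per_day   # ~11.6 req/s on average
peak_rps = avg_rps * 10                        # assumed 10x peak-to-average

bytes_per_record = 500                    # assumed row size
storage_per_year_gb = requests_per_day * 365 * bytes_per_record / 1e9

print(f"average: {avg_rps:.1f} req/s, peak: {peak_rps:.0f} req/s")
print(f"storage after a year: {storage_per_year_gb:.0f} GB")
# ~116 req/s at peak and under 200 GB/year fits comfortably on a single
# node running SQLite or Postgres; the simple design scales sufficiently.
```

Under these assumed numbers the arithmetic, not an architecture diagram, settles the question, which is the failure‑mode reasoning the bullet above describes.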

Risk Aversion, Bias, and Structural Problems

  • Several note companies are happy to reject many good candidates to avoid a single bad hire, leading to high bars, many rounds, and heavy emphasis on puzzles.
  • Explanations for bad processes include cargo‑culting big tech, “religious” attachment to rituals, status signaling, frat‑like hazing, and possibly filtering for certain classes or visa outcomes.
  • Lack of honest feedback is seen as a major harm: candidates rarely know whether they failed on skills, fit, or arbitrary preferences.

LLMs, New Signals, and Alternatives

  • One commenter suggests reviewing candidates’ ChatGPT/Claude transcripts plus Git commits as a window into modern problem‑solving; others object that this excludes people who don’t use LLMs or who work on closed‑source code.
  • A minority argue current puzzle‑heavy processes are still the best proxy they’ve found for engineering ability and are worth the false negatives.
  • A contrasting strand: avoid this entire performance economy by running your own business, where incentives better align with practical, simple solutions.