Casey Muratori: I can always tell a good programmer in an interview
Limits of “drill‑down on a past project” interviews
- Assumes the interviewer is senior and technically sharp enough to pick good topics, ask probing questions, and spot BS; many aren’t.
- Strongly biased by interviewer’s own experience: they may reject simple, appropriate designs in favor of their preferred “proper” stack.
- Can disadvantage candidates with classified or tightly NDA’d work (government, big secretive companies) who literally cannot discuss details.
- Also hard on people with many years of experience or poor autobiographical memory; they may recall themes but not implementation details.
- Some worry candidates can just parrot team design docs, and that the method selects for memory and storytelling more than autonomous design ability.
System design discussions and tradeoffs
- Many argue the key signal is how candidates reason about tradeoffs, constraints, and “it depends,” not whether they build a buzzwordy distributed system.
- Good interviews are described as back‑and‑forth, co‑worker‑style conversations where requirements are clarified and approaches are compared.
- Others find that targeted system design problems based on the company’s own architecture give a clearer signal about real abilities, but admit this favors candidates with similar prior experience.
LeetCode, coding tests, and LLMs
- LeetCode‑style questions are seen as scalable and standardized but often poor at predicting real‑world performance; some view them as hazing or wage‑suppression tools.
- Distinction is drawn between trivial “can you code at all” questions (e.g., FizzBuzz) and hard puzzle/DP questions that mostly reward grinding.
- Remote coding interviews increasingly suffer from LLM cheating, making open‑ended discussion and pair programming relatively more trustworthy.
- Some companies respond with niche, low‑level trivia quizzes that are hard to Google or ask an LLM, but concede this only fits certain domains (e.g., bare‑metal embedded).
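To make the “can you code at all” bar concrete, here is a minimal sketch of FizzBuzz, the classic trivial screen mentioned above (the function name and exact rules shown are the conventional form, not a specific company’s version):

```python
def fizzbuzz(n: int) -> str:
    # Conventional rules: multiples of 3 -> "Fizz", multiples of 5 -> "Buzz",
    # multiples of both -> "FizzBuzz", otherwise the number itself as a string.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Typical interview framing: print the results for 1..15.
print(", ".join(fizzbuzz(i) for i in range(1, 16)))
```

The point of such a question is not algorithmic depth but a floor check: a candidate who cannot produce something like this quickly is unlikely to handle real work, while passing it says little beyond that.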
Recruiters, process design, and evidence
- Debate over non‑technical HR screens: they can filter volume but often mis‑screen strong engineers and create bad candidate experiences.
- Several comments emphasize that any technique fails in the hands of unskilled interviewers, and that interviewers themselves get little feedback or training to improve.
- One thread calls out the near‑total absence of empirical, research‑backed interview design in tech; most processes are anecdotal and rarely validated against outcomes.
Personal projects, NDAs, and ethics
- Strong disagreement over expecting side projects: some claim “real” programmers always have them; many others call this unrealistic and biased against people with families or demanding jobs.
- Concerns raised about asking for deep details of proprietary systems—potentially encouraging NDA or trade‑secret violations.
- Open‑source or personal work can work well for drill‑downs, but not everyone has presentable code they can legally or comfortably share.
Beyond competence: productivity and fit
- Multiple comments note that distinguishing “good programmer” from “productive in this environment” is much harder and largely unsolved.
- Red flags sought include inflexibility, ego, inability to handle requirements they don’t like, and poor collaboration in code review or pair settings.
- Some advocate paid trials or probation periods and fast firing as the only reliable way to catch false positives, though legal and social constraints limit this.