AI killed the tech interview. Now what?

Standardized testing & “licensing”

  • Several commenters propose SAT/ACT-style proctored exams or “licenses” for developers: take one standardized test at a test center, reuse the score for multiple employers.
  • Others note this has been tried before (vendor certifications, Triplebyte), with underwhelming results; some even report an inverse correlation between such credentials and job performance.
  • Supporters argue it could reduce prescreening cost, de-duplicate Leetcode across companies, and possibly help underrepresented candidates bypass broken recruiter funnels.
  • Critics say it just formalizes “Certified Leetcoder™”, doesn’t reflect real work, is easy to game with AI, and reintroduces known equity and bias issues.

AI, cheating, and what to measure

  • Many report candidates obviously reading AI-generated answers or covertly using LLMs during remote interviews.
  • Some teams now explicitly allow AI on take-homes, then probe understanding by asking for explanations, changes, and complexity analysis.
  • Others argue good use of AI requires underlying competence; the real signal is whether candidates can detect and correct AI mistakes.
  • A recurring suggestion: show an AI’s flawed answer and ask the candidate to critique or improve it, though some expect AIs will soon also be good at that.
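To make the “critique the AI’s answer” format concrete, here is a hypothetical exercise of the kind commenters describe (the snippet and names are illustrative, not from the thread): a plausible-looking binary search with the classic off-by-one bug, which the candidate is asked to spot, explain, and fix.

```python
# Hypothetical exercise: a "confidently wrong" AI-style answer.
# The candidate is asked why it sometimes misses elements.

def buggy_search(a, x):
    """Looks correct at a glance, but never inspects the final
    remaining slot: the loop should run while lo <= hi."""
    lo, hi = 0, len(a) - 1
    while lo < hi:               # bug: skips the lo == hi case
        mid = (lo + hi) // 2
        if a[mid] == x:
            return mid
        elif a[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def fixed_search(a, x):
    """The candidate's corrected version."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:              # fix: inspect every remaining slot
        mid = (lo + hi) // 2
        if a[mid] == x:
            return mid
        elif a[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

The follow-up questions write themselves: demonstrate an input that fails (e.g. searching for the last element of `[1, 3, 5]`), explain why, and state the complexity of the fixed version.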

Interview formats: what people like vs. what scales

  • Strong support for:
    • Pair programming on realistic tasks or existing tickets.
    • Code review / debugging exercises, especially on intentionally flawed code.
    • Conversational, resume-driven interviews about past projects, tradeoffs, outages.
    • Asking candidates to walk through their own code (but many have no public code or time for side projects).
  • Concerns:
    • Take-home projects and “weekend builds” are seen as exploitative and discriminatory against those with caregiving or financial constraints.
    • Trial employment or multi-week paid internships are viewed as the ideal signal, but they are expensive, hard for already-employed candidates to accept, and high-overhead for teams.

Remote vs. onsite and surveillance

  • Some say the obvious fix is in-person interviews or controlled test centers; others push back on cost and practicality, especially for non-local or remote roles.
  • Proposals like second webcams, room scans, or test centers for HackerRank-style exams are criticized as invasive or just shifting the same bad tests into new venues.

Leetcode, whiteboards, and FAANG-style processes

  • Widespread frustration with Leetcode/whiteboard puzzles: they select for memorization and interview prep time, not day-to-day problem solving.
  • Several note that “AI killed the interview” mostly exposes that many interviews were weak predictors to begin with.
  • Some defend algorithmic questions as rough IQ/problem-solving proxies, especially for companies needing to sift through huge applicant pools, but admit they’re often misused as pass/fail trivia.
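For readers outside this world, the archetypal question in dispute (the oft-mocked “invert a binary tree”, which the thread itself alludes to) fits in a few lines once memorized, which is exactly the critics’ point. A minimal sketch, with an assumed `Node` class:

```python
# The canonical Leetcode-style whiteboard question: mirror a binary
# tree by swapping left and right subtrees at every node.

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def invert(node):
    """Recursively swap subtrees; O(n) time, O(height) stack."""
    if node is not None:
        node.left, node.right = invert(node.right), invert(node.left)
    return node
```

Trivial with prep time, opaque without it: defenders read that as a problem-solving proxy, critics as a test of rehearsal.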

Soft skills, culture fit, and legal/HR constraints

  • Many emphasize that coding is the easy part; the hard parts are design, architecture, debugging in messy systems, and working with people.
  • There’s tension between hiring for “vibes”/team fit and the risk of discrimination claims; HR’s need for documented, “objective” processes pushes toward standardized tests.
  • Psychometrics and rigorous validation of interview signals are mentioned, but most believe almost no companies actually analyze whether their process predicts performance.

Broader unease

  • Some fear AI will shrink demand for average developers, making interviews both harsher and lower-signal.
  • Others argue the industry hasn’t changed as much as the hype suggests; most real-world hiring still values track record, adaptability, and learning ability over perfectly inverted trees.