Please stop the coding challenges
Reality of “ancient codebase” work
- Many commenters say debugging undocumented, legacy, poorly maintained systems is common or even their main job.
- Some note this often includes outdated languages, vendor DSLs, escrow code, and half‑completed migrations.
- Several argue that “debug this unknown mess” is actually a realistic senior‑level exercise, but only if the language/stack matches the role or candidates can choose a familiar stack.
Critiques of coding challenges and take‑homes
- Take‑home tasks frequently take far longer than the “suggested” time and are unpaid; some see this as exploitative and one‑sided.
- Open‑ended mini‑app assignments tend to test project scaffolding and bikeshedding, not day‑to‑day work in an existing codebase.
- Candidates often fear their work is barely reviewed, or judged idiosyncratically (style, tooling choices) without feedback.
- Many feel these processes disproportionately select people with surplus free time, fewer obligations, or higher desperation.
Defenses of coding challenges
- Interviewers report a shocking number of candidates, including “senior” ones, who cannot solve FizzBuzz‑level tasks or use basic tools (Git, an editor).
- Coding exercises are viewed as one of the few objective-ish filters vs. charm, resume puffery, and internal politics.
- Some teams design challenges tightly aligned with their real code (small bugfixes, extending existing APIs) and run them as collaborative, time‑boxed pairing sessions, which they claim work well.
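For context, the “FizzBuzz‑level” bar referenced above is the classic minimal screening exercise; a sketch of one common formulation (multiples of 3 print “Fizz”, multiples of 5 print “Buzz”, both print “FizzBuzz”):

```python
def fizzbuzz(n: int) -> str:
    # Multiples of both 3 and 5 (i.e. 15) take precedence.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Typical interview phrasing asks for the sequence 1..N:
print(", ".join(fizzbuzz(i) for i in range(1, 16)))
```

The point interviewers make is not that this problem is hard, but that it cheaply filters candidates who cannot write a working loop and conditional at all.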
Alternatives and proposed improvements
- Suggestions include: code review interviews, walking through prior work or OSS, short live pair‑programming in the candidate’s environment, realistic bugfix tasks in a sandboxed repo, and structured discussions of systems the candidate has actually built.
- Some advocate paid take‑homes or at least limiting them to later stages and always providing feedback.
- Others rely heavily on references, prior collaboration, or probationary periods, but acknowledge these can introduce bias.
LeetCode, DS&A, and “gaming the system”
- There is extensive debate over LeetCode‑style and system design interviews:
  - Critics say they reward cramming scripted patterns, not real engineering judgment or experience.
  - Defenders argue they test grit, learning ability, and fundamental CS knowledge, and serve as a standardized, “meritocratic” bar—especially in oversubscribed big‑tech roles.
- Many agree the signal is imperfect and increasingly gamed, but that no clearly superior, scalable alternative has emerged.