Many hard LeetCode problems are easy constraint problems

Constraint solvers vs “clever” LeetCode solutions

  • Many commenters agree that hard LeetCode questions often reduce to standard constraint or optimization problems (SAT/SMT, ILP, constraint programming), solvable with off-the-shelf tools such as MiniZinc or OR-Tools' CP-SAT.
  • Some interviewers say they’d view using a solver as a plus: it shows tool knowledge, abstraction skills, and realism about time-to-solution.
  • Others argue this “defeats the purpose” of the interview: they want to see loops, recursion, dynamic programming, and asymptotic reasoning, not library calls.
  • Critics of solver answers note that you typically lose runtime/space guarantees and visibility into performance; in an interview that's a serious omission unless you can discuss the tradeoffs and then produce an efficient custom algorithm.
  • Several point out that many LeetCode “hard” problems are in P; the core challenge is recognizing a known pattern (DP, sliding window, etc.), not inventing a new algorithm.
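
To make the "hard problem, easy constraint model" point concrete: N-Queens (LeetCode's "N-Queens II" is rated hard) reduces to "one queen per row and column, no two queens on a diagonal." Below is a toy pure-Python sketch of that declarative view, using brute-force search over permutations where a real solver like CP-SAT would use propagation; the function name and structure are illustrative, not from the thread.

```python
from itertools import permutations

def n_queens(n: int) -> int:
    """Count N-Queens solutions via a constraint-model view:
    rows/columns are all-different (a permutation), so only the
    diagonal constraints need explicit checking."""
    count = 0
    for perm in permutations(range(n)):
        # perm[r] is the column of the queen in row r.
        # Two queens share a diagonal iff r + c or r - c collides.
        if (len({r + c for r, c in enumerate(perm)}) == n
                and len({r - c for r, c in enumerate(perm)}) == n):
            count += 1
    return count

print(n_queens(8))  # 92 solutions on the standard 8x8 board
```

The point is the modeling step, not the search: the same three-constraint description typed into MiniZinc or CP-SAT would scale far beyond what permutation enumeration can handle.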

What interviews are really testing

  • One camp says these questions test “cleverness” or pattern recognition; another says they mostly test whether you’ve memorized ~a dozen patterns and practiced under time pressure.
  • Some interviewers say their true goal is to observe problem decomposition, communication, and basic coding competence; they deliberately use easier questions and adjust difficulty.
  • Others describe processes where only optimal solutions, all edge cases, and rubric-approved approaches pass, even for senior roles; this drives heavy grinding and high false negatives.
  • There’s tension between valuing quick-and-dirty, tool-based solutions (constraint solvers, libraries, AI) vs. insisting on hand-rolled optimal algorithms.
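
As an illustration of the "dozen memorized patterns" claim: once the sliding-window pattern is recognized, several intimidating-looking string problems collapse into a few lines. A sketch of one canonical instance (longest substring without repeating characters); the code is an illustrative example, not taken from the discussion.

```python
def longest_unique_substring(s: str) -> int:
    """Sliding-window pattern: advance the right edge one character at a
    time and move the left edge just far enough to restore the invariant
    (no repeated characters inside the window)."""
    last_seen = {}      # char -> index of its most recent occurrence
    left = best = 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1   # jump past the earlier duplicate
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best

print(longest_unique_substring("pwwkew"))  # 3 ("wke")
```

Candidates who have drilled this pattern produce it in minutes; those who haven't must derive the two-pointer invariant from scratch under time pressure, which is exactly the asymmetry commenters debate.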

Critiques of LeetCode-style hiring

  • Many see LeetCode performance as a proxy for:
    • willingness to grind on unpleasant tasks,
    • cultural conformity to big-tech norms,
    • ability to tolerate hoops and unpaid prep.
  • Commenters argue it disproportionately filters out:
    • experienced engineers with families or limited free time,
    • people whose strengths are design, debugging, or teamwork rather than timed puzzles.
  • Several anecdotes describe senior candidates failing interviews built on “stupid tricks” yet excelling at realistic take-homes, and companies with messy monoliths reinforcing bad hiring through puzzle-heavy filters.
  • Some propose more job-like assessments: discuss prior projects, debug real bugs, small take-homes, progressive problems with conversation and partial credit.

Real-world use and limits of constraint solvers

  • Practitioners report strong success using CP/ILP/SAT for scheduling, configuration, optimization, and hackathons, especially when requirements evolve.
  • Others report hitting exponential blowups with modest instance sizes and stress that modeling and heuristics expertise are essential; solvers are not magic.
  • There is broad agreement they’re under-taught and underused, but domain-specific algorithms or libraries are often simpler, faster, and easier to reason about for many day-to-day tasks.