Why don't you use dependent types?

Overall stance on dependent types

  • Many comments read the article’s “punchline” as: dependent types are not “bad,” but they’re often unnecessary; you should choose your battles.
  • Isabelle/HOL is cited as evidence that huge formalization projects (e.g., schemes from algebraic geometry, major named theorems) can succeed without dependent types or proof objects, with no obvious “expressivity wall.”

Automation, libraries, and communities

  • Several comments agree that automation, well-designed libraries, and legible proofs moved the needle more than a fancier core calculus.
  • Lean’s real win is often framed as mathlib and its GitHub-style contribution model; Isabelle’s AFP is compared to a traditional journal with refereeing.

Use cases: sizes, bounds, and invariants

  • Matrix sizes and index bounds are a recurring example: people want types like “10×5 float32 matrix” and functions whose types enforce dimension relationships.
  • Many point out that this can be partly achieved without dependent types (C/C++/Rust const generics, phantom types, TypeScript tricks, Python with type-checker plugins, Common Lisp array types).
  • Others note that the truly general case, where an index must be shown in bounds even when both the index and the bound are dynamic, is where full dependent types shine; but this often requires threading explicit proofs through the code, which is ergonomically painful (see the sketch after this list).
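
  A minimal Lean 4 sketch of both points (Matrix, zeros, transpose, mul, and getEntry are illustrative toys invented here, not mathlib's API): dimensions live in the type, shape relationships are checked by the compiler, and a dynamic index surfaces as an explicit proof argument.

      -- An m×n matrix as a function from in-range indices to entries.
      def Matrix (α : Type) (m n : Nat) : Type :=
        Fin m → Fin n → α

      -- The requested "10×5 float32 matrix" (Lean's Float is 64-bit;
      -- treat it as a stand-in for float32).
      def zeros : Matrix Float 10 5 := fun _ _ => 0.0

      -- Transpose: the dimension swap is enforced by the result type.
      def transpose {α : Type} {m n : Nat} (A : Matrix α m n) : Matrix α n m :=
        fun i j => A j i

      -- f 0 + f 1 + ... + f (n-1), written by hand to stay self-contained.
      def sumFin : (n : Nat) → (Fin n → Float) → Float
        | 0,     _ => 0.0
        | n + 1, f => f 0 + sumFin n (fun i => f i.succ)

      -- Multiplication only typechecks when the inner dimensions agree.
      def mul {m n k : Nat} (A : Matrix Float m n) (B : Matrix Float n k) :
          Matrix Float m k :=
        fun i j => sumFin n (fun t => A i t * B t j)

      -- With dynamic indices, the caller must supply proofs that they are in
      -- range: the "threading explicit proofs" cost mentioned above.
      def getEntry {α : Type} {m n : Nat} (A : Matrix α m n)
          (i j : Nat) (hi : i < m) (hj : j < n) : α :=
        A ⟨i, hi⟩ ⟨j, hj⟩

  For instance, mul zeros zeros is rejected at compile time because the inner dimensions (5 and 10) disagree, while mul zeros (transpose zeros) is accepted.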

Complexity, UX, and when DTs hurt

  • Dependent types blur the line between “type error” and “proof error”; debugging a failed typecheck can feel like debugging a complicated proof.
  • Heterogeneous equality (e.g., Vec (n+m+o) vs Vec (n+(m+o))) is highlighted as a concrete pain point (see the sketch after this list); some prefer non-indexed structures plus separate theorems.
  • Several practitioners say the “juice isn’t worth the squeeze” for most software, especially CRUD and line-of-business (LoB) systems.
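
  To make that pain point concrete, here is a Lean 4 sketch with a toy length-indexed Vec (invented here, not mathlib's Vector). Every cast below carries a propositional equality on Nat that the programmer must supply, because the two index expressions are equal as values but not by mere computation.

      inductive Vec (α : Type) : Nat → Type where
        | nil  : Vec α 0
        | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

      -- Even defining append forces index rewrites: the recursive call yields
      -- length (k + n) + 1 where the declared type demands (k + 1) + n.
      def Vec.append {α : Type} :
          {m n : Nat} → Vec α m → Vec α n → Vec α (m + n)
        | .nil, ys => cast (congrArg (Vec α) (Nat.zero_add _)).symm ys
        | .cons x xs, ys =>
            cast (congrArg (Vec α) (Nat.succ_add _ _)).symm (.cons x (xs.append ys))

      -- Appending three vectors naturally yields the index (n + m) + o ...
      def app3 {α : Type} {n m o : Nat}
          (xs : Vec α n) (ys : Vec α m) (zs : Vec α o) : Vec α (n + m + o) :=
        (xs.append ys).append zs

      -- ... so a caller who wants Vec α (n + (m + o)) must rewrite with
      -- Nat.add_assoc: the two types are provably, not definitionally, equal.
      def app3' {α : Type} {n m o : Nat}
          (xs : Vec α n) (ys : Vec α m) (zs : Vec α o) : Vec α (n + (m + o)) :=
        cast (congrArg (Vec α) (Nat.add_assoc n m o)) (app3 xs ys zs)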

Type erasure, QTT, and what DTs buy you

  • There’s an extended explanation of dependent types as:
    • functions returning types,
    • output types depending on input values,
    • later tuple components’ types depending on earlier values (each ingredient is sketched in the example below).
  • Discussion of erasure vs runtime use: Idris 2’s Quantitative Type Theory (QTT) is mentioned as clarifying which values are erased, used arbitrarily, or used exactly once (linking to linear typing).
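
  A compact Lean 4 illustration of the three ingredients above (Rep, defaultValue, and tagged are names invented for this sketch):

      -- (1) An ordinary function whose result is a type.
      def Rep : Bool → Type
        | true  => Nat
        | false => String

      -- (2) A dependent function: the type of the output is computed from
      -- the value of the input.
      def defaultValue : (b : Bool) → Rep b
        | true  => (0 : Nat)
        | false => ("" : String)

      -- (3) A dependent pair: the second component's type depends on the
      -- first component's value.
      def tagged : (b : Bool) × Rep b :=
        ⟨true, (3 : Nat)⟩

  In Idris 2, QTT quantities (0, 1, and unrestricted) would additionally annotate binders like b, recording whether the value is erased, must be used exactly once, or may be used freely at runtime.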

HOL vs dependent type theories and logical strength

  • One line of argument: HOL’s core logic is relatively weak for much post-WWII mathematics and for category theory; locales and added axioms (e.g., universes/inaccessible cardinals) partly address this, but whether that approach can scale to a mathlib-size library is seen as unresolved.
  • Others stress that for software verification, HOL-style systems (Isabelle, ACL2) have strong track records; dependent types are not demonstrably necessary.

Proof objects, LCF, and reflection

  • The article’s criticism of proof objects (their space cost and added complexity) is debated.
  • Some argue LCF-style kernels and proof scripts can support “proof by reflection” just as type-theoretic systems do, by treating certified rewrite engines or tactics as trusted extensions.
  • Others counter that soundness arguments for ML-written tactics remain informal, and that reflection inside a dependent type theory can be integrated more tightly.
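
  For readers who have not seen the technique, here is what proof by reflection looks like in miniature in Lean 4 (plain decide/native_decide usage, not a reconstruction of any system discussed in the thread):

      -- decide runs a verified decision procedure (a Decidable instance)
      -- inside the kernel: the computation itself is the proof.
      example : 2 ^ 10 = 1024 := by decide

      -- native_decide runs the same computation as compiled code, trading a
      -- larger trusted base for speed: the "trusted extension" trade-off
      -- debated above.
      example : 2 ^ 20 = 1048576 := by native_decide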

Adoption, tooling, and partial solutions

  • Multiple comments note steep learning curves, math-averse cultures, and immature ecosystems as practical barriers; people won’t adopt tools that feel too “mathy.”
  • Some favor “leaning towards” dependent types without going all the way: refinement types, row types, rich static type systems (F*, Dafny, LiquidHaskell, PureScript, advanced TypeScript), or embedding DT-like reasoning in existing languages and databases (e.g., TypeDB).
  • A recurring theme: the real skill is knowing when not to make something dependent, and using DTs selectively where their extra power clearly pays off.
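
  The non-indexed alternative mentioned earlier looks like this in Lean 4 (dup and dup_length are invented for this sketch): keep the data structure plain and state the invariant as a separate, after-the-fact theorem instead of carrying it in an index.

      -- A plain list-producing function: no length index in its type.
      def dup {α : Type} (xs : List α) : List α :=
        xs ++ xs

      -- The invariant lives in a separate theorem, proved once, rather than
      -- being re-established at every call site by index arithmetic.
      theorem dup_length {α : Type} (xs : List α) :
          (dup xs).length = xs.length + xs.length := by
        simp [dup, List.length_append]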