AI founders will learn the bitter lesson

Revisiting the “bitter lesson”

  • Many agree with Rich Sutton's original claim that history favors general, compute-heavy methods over hand-crafted, domain-specific systems in core AI research (vision, speech, games, LLMs).
  • Others argue this is overstated: many production systems (recommenders, chess engines, autonomy stacks) remain heavily specialized.
  • Some push back that current deep nets and transformers are themselves crude, compute-hungry hacks, not evidence that “less inductive bias is always better.”

Implications for AI startups and moats

  • One camp: vertical AI “wrappers” around foundation models will be steamrolled as models become more general; founders should expect erosion of technical moats.
  • Counterpoint: founders can still build big businesses in the interim by capturing users and accumulating domain knowledge, then swapping in better models as they arrive.
  • Moat ideas: proprietary data, deep domain expertise, distribution, product/UX, and long-term customer relationships, not prompts alone.

Data, context, and ETL

  • Strong theme: the real bottleneck for many applications is context and data plumbing, not model capability.
  • Startups that can integrate messy enterprise systems, extract and normalize knowledge, and feed it to models (RAG, tools, agents) are seen as durable.
  • This is framed as classic ETL and systems integration, not magic; LLMs are “ovens,” the hard part is preparing good ingredients.
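The extract–normalize–retrieve–prompt loop described above can be sketched in a few lines. This is a hypothetical illustration, not code from the thread: naive keyword overlap stands in for a real embedding index, and names like `normalize`, `retrieve`, and `build_prompt` are invented for the example.

```python
# Minimal sketch of the "ETL + retrieval" pattern: clean messy documents,
# retrieve the most relevant ones for a query, and pack them into a prompt.
# The LLM is the "oven"; this code prepares the ingredients.

def normalize(raw: str) -> str:
    """Collapse whitespace and lowercase -- a stand-in for real ETL cleanup."""
    return " ".join(raw.split()).lower()

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap (a toy proxy for vector search)."""
    q = set(normalize(query).split())
    scored = sorted(docs, key=lambda d: -len(q & set(normalize(d).split())))
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Pack the retrieved context and the question into a single prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices are stored in the billing   system under /archive.",
    "Vacation policy: 25 days per year, approved by your manager.",
    "The billing system exports CSV invoices nightly.",
]
prompt = build_prompt("Where are invoices stored?", docs)
```

A production version would swap the overlap scorer for an embedding index and add connectors for each enterprise source, but the shape of the pipeline stays the same.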

Scaling limits, data exhaustion, AGI timelines

  • Debate over whether “more compute + more data” is running out of runway:
    • Some argue we’re near “peak training data” and seeing diminishing returns.
    • Others expect orders-of-magnitude efficiency and capability gains from synthetic data, new architectures, and hardware.
  • Timelines diverge: from “office-job replacement in <5 years” to “far off / may plateau.”

UX, reliability, and product value

  • Several stress that even with strong general models, UX, workflow design, error handling, and trust remain large sources of value.
  • “Drop-in remote worker” vision is questioned: LLMs hallucinate, lack incentives, and need supervision; engineering around unreliability is still essential.
  • For many, AI today is a component in a product, not the product itself.
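The "engineering around unreliability" point above usually boils down to validating model output and retrying before trusting it. A hypothetical sketch, where `flaky_model` is a stub standing in for a real LLM call and the schema check is invented for illustration:

```python
# Sketch of wrapping an unreliable model call: parse, validate,
# and retry rather than propagating malformed output downstream.

import json

def flaky_model(prompt: str, attempt: int) -> str:
    """Stub LLM: returns malformed output on the first attempt."""
    if attempt == 0:
        return "Sure! Here is the JSON you asked for: {oops"
    return '{"answer": 42, "confidence": 0.9}'

def call_with_validation(prompt: str, max_retries: int = 3) -> dict:
    """Retry until the output parses and passes a basic schema check."""
    for attempt in range(max_retries):
        raw = flaky_model(prompt, attempt)
        try:
            out = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output -> retry instead of crashing
        if isinstance(out.get("answer"), int):
            return out  # passed the schema check; safe to hand downstream
    raise RuntimeError("model never produced valid output")

result = call_with_validation("Return JSON with an integer 'answer'.")
```

The wrapper, not the model call, is where much of the product engineering lives: the same pattern extends to grounding checks, human-review queues, and fallbacks to deterministic code.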