Obituary for Cyc
Cyc’s legacy and availability
- A partial version (OpenCyc), with a KB and inference engine, exists on GitHub, but it is aging Java code and hard to get running.
- Cycorp’s site still markets Cyc for enterprise AI and healthcare/insurance; externally it’s unclear what real capabilities remain or whether the full KB will ever be released.
- Several commenters recall isolated “whopping success” deployments, but overall evidence of broad usefulness is thin and often proprietary.
Symbolic AI vs LLMs
- Many argue Cyc showed that hand‑encoding common sense at scale is infeasible: 30M assertions, ~$200M, 2,000 person‑years, no AGI.
- Others counter that symbolic AI did succeed in narrower areas: SAT solving, theorem proving, model checking, planning/scheduling, verification—so “symbolic AI failed” is an overstatement.
- By contrast, language models delivered incremental value for decades (spellcheck, machine translation, information retrieval, etc.) and scale predictably with more data/compute.
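To make the "narrow symbolic successes" above concrete, here is a toy illustration of SAT solving in pure Python. This is a deliberately naive sketch (all names invented): industrial solvers use CDCL with clause learning rather than brute-force enumeration, but the problem being solved is the same.

```python
from itertools import product

def sat_brute_force(clauses, variables):
    """Toy SAT check: try every truth assignment.
    A clause like [("a", True), ("b", False)] means (a OR NOT b).
    Real solvers (CDCL) prune this exponential search aggressively."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[var] == polarity for var, polarity in clause)
               for clause in clauses):
            return assignment  # first satisfying assignment found
    return None  # unsatisfiable

# (a OR b) AND (NOT a OR b) AND (NOT b OR c)
cnf = [[("a", True), ("b", True)],
       [("a", False), ("b", True)],
       [("b", False), ("c", True)]]
model = sat_brute_force(cnf, ["a", "b", "c"])
print(model)  # {'a': False, 'b': True, 'c': True}
```

The point of the comparison in the thread is that this style of problem, unlike open-ended common sense, has a crisp specification, which is a large part of why symbolic methods succeeded there.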
Hybrid / neurosymbolic approaches
- Strong interest in combining LLMs with ontologies: Cyc‑like KB as a “common sense RAG” layer to prevent absurd outputs and provide auditable reasoning.
- Proposals include having LLMs generate symbolic facts/rules, calling symbolic systems as external tools (Prolog, constraint solvers, Z3, MiniZinc), and using platforms explicitly marketed as neurosymbolic.
- Concerns: if LLMs generate the KB, you inherit their garbage‑in/garbage‑out issues; and translating natural language to logic may not beat just making models better.
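The "common sense guard layer" idea above can be sketched in a few lines: a curated KB of hard constraints is checked against facts an LLM emits, producing an auditable rejection reason instead of an absurd output. Everything here (predicate names, the `audit` function) is a hypothetical illustration, not any real system's API.

```python
# Curated KB: sets of predicates that cannot hold of one entity at once.
INCOMPATIBLE = {
    frozenset({"is_alive", "is_deceased"}),
    frozenset({"liquid_at_room_temp", "solid_at_room_temp"}),
    frozenset({"located_in:Paris", "located_in:Tokyo"}),
}

def audit(facts):
    """Return every constraint violated by a set of LLM-emitted facts."""
    facts = set(facts)
    return [pair for pair in INCOMPATIBLE if pair <= facts]

llm_facts = {"is_alive", "located_in:Paris", "located_in:Tokyo"}
for pair in audit(llm_facts):
    print("rejected: facts", sorted(pair), "are mutually exclusive")
```

Note that this sketch only flags contradictions with a pre-built KB; it does nothing about the garbage-in/garbage-out concern raised above, which applies whenever the KB itself is LLM-generated.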
Concepts, fuzziness, and ontologies (“chair” debate)
- Long subthread debates whether concepts like “chair” can be captured by rules/facts:
  - One side: human categories are fuzzy and context‑dependent; deterministic symbolic logic “fundamentally misunderstands cognition.”
  - Others note probabilistic/fuzzy logics and non‑monotonic logics exist, and symbolic formalisms can model defeasible, uncertain reasoning.
  - The difficulty of even agreeing on a definition of “chair” is used both as evidence against fully symbolic cognition and as evidence that language ≠ internal representation.
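The non-monotonic reasoning mentioned above can be sketched with the classic birds/penguins example: a default conclusion ("birds fly") is overridden by a more specific rule. This is a toy illustration with hand-assigned priorities; real defeasible logics derive precedence from rule specificity rather than hard-coding it.

```python
# (priority, premises, (attribute, value)) — higher priority wins on conflict.
RULES = [
    (1, {"bird"}, ("flies", True)),      # default: birds fly
    (2, {"penguin"}, ("flies", False)),  # exception: penguins do not
    (1, {"penguin"}, ("bird", True)),    # penguins are birds
]

def conclude(seed):
    """Fixpoint over prioritized rules; higher-priority conclusions win."""
    known = {f: (True, 0) for f in seed}   # attr -> (value, priority)
    changed = True
    while changed:
        changed = False
        for prio, premises, (attr, value) in RULES:
            if all(known.get(p, (False, 0))[0] for p in premises):
                if attr not in known or known[attr][1] < prio:
                    known[attr] = (value, prio)
                    changed = True
    return {attr: value for attr, (value, _) in known.items()}

print(conclude({"penguin"}))  # {'penguin': True, 'flies': False, 'bird': True}
print(conclude({"bird"}))     # {'bird': True, 'flies': True}
```

This is the style of formalism the "symbolic logics can handle fuzziness" side points to; the opposing side's claim is that no such rule set, however layered with exceptions, captures what "chair" means to a human.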
Assessment of Cyc and the obituary
- Some see Cyc as a heroic but failed AGI attempt; others think it’s wrong to treat one secretive project as an indictment of all symbolic‑logical AGI.
- Several criticize the article’s tone as a “hostile assessment” and overly sweeping in declaring symbolic AGI a dead end.
- Others emphasize secrecy as a major lost opportunity: negative results and internal lessons could have strongly informed the field if more had been published.
Costs, scaling, and “bitter lessons”
- By comparison, Cyc’s lifetime cost is now tiny relative to current LLM burn rates; some argue a comparable investment in symbolic methods was never tried.
- “Bitter Lesson” discussion: methods that exploit massive compute and data tend to win; that doesn’t strictly exclude symbolic methods, but anything human‑curated struggles to scale.
- There’s broad agreement that future systems will likely combine statistical learning (for perception/fuzzy judgment) with structured reasoning/ontologies (for reliability and auditing).