AI is impressive because we've failed at personal computing

LLMs as tools: capabilities, limits, and expectations

  • Thread debates whether it’s “unreasonable” to expect baseline reliability from LLMs, given they’re marketed as data analysts and job-replacers.
  • The “count the B’s in ‘blueberry’” meme is used to illustrate brittle behavior: some argue this disqualifies LLMs from trusted use; others say it’s a mismatch between architecture (tokens) and task (characters).
  • Several comments stress that if a tool confidently fails trivial tasks, users are justified in distrusting it for harder ones.
  • Others counter that all tools and humans are imperfect; what matters is using LLMs where they’re the most efficient option and delegating exact tasks (like counting) to traditional code, possibly invoked by the LLM (a minimal sketch of that division of labor follows this list).
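
To make the delegation idea concrete, here is a minimal sketch, assuming nothing about any particular tool-calling API: the exact counting is done by ordinary code, the kind of deterministic tool a model could invoke instead of reasoning over tokens. The function name and wiring are illustrative only.

```python
# Illustrative sketch: exact character counting is trivial for code,
# but brittle for a model that sees tokens rather than characters.
# A tool-using LLM could call something like this instead of guessing.
def count_letter(text: str, letter: str) -> int:
    """Case-insensitive count of a single character in a string."""
    return text.lower().count(letter.lower())

if __name__ == "__main__":
    print(count_letter("blueberry", "b"))  # -> 2
```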

Why the Semantic Web didn’t happen

  • Multiple commenters argue the main blockers were incentives, not technology:
    • Publishers and companies don’t want to expose scrapeable, recombinable data that others can monetize.
    • Academia and business often hoard data for competitive or funding reasons.
  • Technical critiques: global ontologies are too complex, brittle, and bureaucratic; semantic markup gave little direct value to ordinary authors or readers.
  • Some see the Semantic Web as overhyped and never truly existing beyond breadcrumbs, schema.org, and niche RDF/OWL deployments (a sample of such markup follows this list).
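
For readers who never touched it, this is roughly what the markup overhead looked like: a schema.org-style breadcrumb trail expressed as JSON-LD, built here as a Python dict with placeholder URLs. The thread’s point is that producing this by hand gave ordinary authors and readers little direct payoff.

```python
# Illustrative only: a schema.org BreadcrumbList expressed as JSON-LD,
# built as a Python dict. URLs are placeholders.
import json

breadcrumbs = {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    "itemListElement": [
        {"@type": "ListItem", "position": 1, "name": "Home",
         "item": "https://example.com/"},
        {"@type": "ListItem", "position": 2, "name": "Articles",
         "item": "https://example.com/articles"},
    ],
}

print(json.dumps(breadcrumbs, indent=2))
```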

AI versus (and alongside) the Semantic Web

  • One camp says LLMs effectively are a new semantic layer: they infer structure from messy text and can generate SPARQL, JSON-LD, or RDF triples over existing corpora.
  • Others question whether relationships inferred by LLMs are more reliable than human-authored structure, especially given their tendency to hallucinate.
  • There’s enthusiasm for hybrids: using LLMs to:
    • Generate or refine semantic markup (e.g., for Wikidata, knowledge graphs).
    • Translate natural-language questions into structured queries (a sketch of this pattern follows this list).
  • Concern is raised that AI-generated structured data can also be “correct-looking slop,” complicating future training and search.
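
A hedged sketch of the hybrid pattern mentioned above, where the model’s only job is to produce a structured query that a conventional endpoint then executes. `ask_llm` is a hypothetical stand-in for whatever model call is used; here it returns the well-known Wikidata “instances of house cat” query, and the endpoint is Wikidata’s public SPARQL service.

```python
# Sketch of the LLM-to-structured-query hybrid: the model writes SPARQL,
# a conventional SPARQL endpoint does the exact retrieval.
import json
import urllib.parse
import urllib.request

def ask_llm(question: str) -> str:
    # Hypothetical stand-in for a real model call. Hard-coded here so the
    # sketch is self-contained: the classic "instances of house cat" query.
    return """
    SELECT ?item ?itemLabel WHERE {
      ?item wdt:P31 wd:Q146 .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    } LIMIT 5
    """

def run_wikidata_query(sparql: str) -> dict:
    # Wikidata's public query service; a descriptive User-Agent is expected.
    url = ("https://query.wikidata.org/sparql?format=json&query="
           + urllib.parse.quote(sparql))
    req = urllib.request.Request(url, headers={"User-Agent": "semweb-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

query = ask_llm("Which items are instances of house cat?")
for row in run_wikidata_query(query)["results"]["bindings"]:
    print(row["itemLabel"]["value"])
```

The division of labor mirrors the counting example earlier: the model handles the fuzzy translation, while the exact retrieval stays in conventional tooling.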

Personal computing and UX disappointment

  • Some agree with the article’s lament: computers are vastly more powerful yet feel harder to use; everything funnels users into opaque search boxes instead of transparent structure.
  • Mobile-first design is blamed for degraded desktop UX (low-density interfaces, loss of right-click/tooltips, dialog windows replaced by swipe views).
  • Others argue AI itself is becoming “personal computing”: natural-language interfaces that can orchestrate tools and data, albeit still needing human oversight for code and safety.

Incentives, software quality, and the ad-driven web

  • Strong frustration with the overall software ecosystem: many products are seen as “complete embarrassing shit,” yet succeed because users and buyers lack standards and because the money rewards speed over quality.
  • Ads and SEO are repeatedly named as corrosive forces: they discourage open structured data, degrade search, and favor engagement over utility.
  • Some see LLMs as a brute-force workaround layered atop this failure; others argue they’re simply the next step in the evolution of programming paradigms, one that exploits surplus compute.