Things I've Done with AI

Scope of AI-Built Projects

  • Many examples shared: personal assistants, note-taking tools, macropads, games, clocks with irregular ticking, drawing “towns,” fictional encyclopedias, blood-test viewers, feature boards, and support-email bots.
  • Some are clearly whimsical or experimental; others aim at concrete utility (e.g., automating life admin, viewing medical tests, support responses).

Usefulness vs. “Slop”

  • Critics argue many AI projects are trivial, duplicative, or self-referential (“tools to use AI”), and often abandoned quickly.
  • Specific criticism targets a fictional encyclopedia that fabricates facts without warning, seen as actively misleading.
  • Defenders say personal joy and learning are valid goals; demanding mass-market success or revenue as a bar is unreasonable and often ideological.

Throwaway Code & Abandonware

  • One side sees the flood of short-lived tools as evidence of no real productivity gain, just dopamine.
  • Others welcome cheap, disposable code: write one-offs, get value, then delete. Reviving abandoned open-source projects via LLMs is cited as concrete value.

Concrete Use Cases

  • Reported successful uses:
    • Reviving and modernizing an abandoned web-based editor.
    • Large-scale refactors of legacy codebases.
    • Tax workflows: renaming and extracting data from PDFs, building web UIs to summarize taxes, preparing documentation.
    • Custom CAD-like desk design tools using browser 3D and B-rep modeling.
    • Custom note-taking apps with specific editor behavior; multiple educational and puzzle games.
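The tax workflow above (renaming PDFs based on their contents) reduces to a small deterministic step once the text has been extracted. A minimal sketch, assuming a hypothetical `suggest_filename` helper and an invoice layout with an ISO date and a `From:` line; the actual PDF-to-text extraction is not shown:

```python
import re

def suggest_filename(text: str, fallback: str = "unsorted.pdf") -> str:
    """Given text already extracted from a PDF, build a normalized
    filename like '2023-04-15_acme-corp.pdf'. Returns the fallback
    name when the expected fields cannot be found."""
    date = re.search(r"\d{4}-\d{2}-\d{2}", text)          # first ISO date
    vendor = re.search(r"From:\s*([A-Za-z ]+)", text)      # sender line
    if not (date and vendor):
        return fallback
    slug = vendor.group(1).strip().lower().replace(" ", "-")
    return f"{date.group(0)}_{slug}.pdf"
```

Because the renaming logic is an ordinary script rather than a per-file LLM call, it runs the same way every time and can be reviewed once.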

Hallucinations, Safety, and Privacy

  • Several note subtle but real hallucinations, especially with large or complex data (lipid panels, tax figures); results can look correct yet be numerically or temporally wrong.
  • Mitigations discussed: have LLMs generate deterministic scripts/tools, then run them; extract structured data (JSON) first; use LLMs mainly as validators or hypothesis generators.
  • Strong disagreement over uploading sensitive data (tax, medical) to cloud models; some see it as fine, others as dangerously naive.

AI, Skills, and Careers

  • View 1: Using AI too heavily risks skill atrophy and dependence; might ultimately reduce one’s value.
  • View 2: Refusing AI means “missing out” or being “left behind” in a major computing shift.
  • Pushback: that framing is condescending; tools are easy to learn later, and some skepticism is principled or cautious.
  • Some report barely typing code themselves now, relying on tools like agentic coding environments, but still reviewing output.

Maintainability and System Design

  • Debate over “code that works” vs. maintainable systems:
    • Pro-AI-regeneration side suggests tests + LLMs can regenerate “ugly” code on demand.
    • Critics argue tests can’t capture all behavior; LLMs generate code “nodes” but not the important “edges” (assumptions, relationships).
    • Guardrail-style programming (guided by tests) is seen as insufficient for user-facing, long-lived systems.

Open Source and Bespoke Tools

  • Some predict fewer polished open-source apps: with LLMs, it’s easier to build bespoke tools that exactly match one person’s workflow, with little motivation to generalize or support others.
  • Others counter that simple, file-based storage (e.g., markdown) plus backups mitigates the risk; critics ask why one would reimplement what already exists and is actively maintained.