AI tools I wish existed

Simple tools vs AI overkill

  • Several commenters note many ideas could be done with “30-year-old tech” (bash, exiftool, ImageMagick, OCR) or basic scripting rather than LLMs.
  • Some see the list as mainly “better UI/UX over a foundation model” rather than fundamentally new capabilities.
  • Others push back on the dismissiveness, pointing out that what’s “easy” for power users is not easy for most people, and that viable multi‑million‑dollar products hide inside “simple” ideas.

Recommendation engines & feeds

  • The “read-the-whole-web-for-me” recommendation engine gets lots of attention.
  • Some say “just use RSS,” warning that web search now returns “AI slop” and SEO/LLM-optimized content; human curation is still valued.
  • Others argue the idea already exists as algorithmic feeds (Twitter, TikTok, YouTube, Google News), but these optimize for engagement and ads, not user benefit.
  • Privacy is a major concern: people don’t want random apps reading their browser history; proposals include browser-vendor or self-hosted/local implementations (a minimal local sketch follows this list).
  • A few see ChatGPT Pulse as a partial realization using chat history instead of browsing data.
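
On the self-hosted/local angle, the core loop is small enough to sketch: build an interest profile from your own browsing and rank incoming links against it, entirely on-device. This is a hedged sketch, not a product: the Firefox places.sqlite path, the candidate source (e.g. an RSS fetcher), and the embedding model are all assumptions.

```python
# Local-first recommendation sketch: build an interest profile from the user's own
# browser history, then rank candidate links against it. Nothing leaves the machine.
# Assumptions: a copy of Firefox's places.sqlite is available at HISTORY_DB, and
# candidates arrive as (title, url) pairs from somewhere like an RSS fetcher.
import sqlite3
import numpy as np
from sentence_transformers import SentenceTransformer

HISTORY_DB = "places.sqlite"                      # path is an assumption; copy it from the profile dir
model = SentenceTransformer("all-MiniLM-L6-v2")   # small embedding model, runs fine on CPU

def interest_profile(limit: int = 500) -> np.ndarray:
    """Average embedding of recently visited page titles."""
    rows = sqlite3.connect(HISTORY_DB).execute(
        "SELECT title FROM moz_places WHERE title IS NOT NULL "
        "ORDER BY last_visit_date DESC LIMIT ?", (limit,)
    ).fetchall()
    vecs = model.encode([title for (title,) in rows], normalize_embeddings=True)
    return vecs.mean(axis=0)

def rank(candidates: list[tuple[str, str]], profile: np.ndarray):
    """Return (score, (title, url)) pairs sorted by similarity to the profile."""
    vecs = model.encode([t for t, _ in candidates], normalize_embeddings=True)
    scores = vecs @ (profile / np.linalg.norm(profile))       # cosine similarity
    return sorted(zip(scores.tolist(), candidates), reverse=True)
```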

AI for reading, writing, and media

  • The AI-augmented ebook reader and “chat with the author” idea is seen as technically feasible (Chrome extensions, future Kindle mods), and there are already “chat with this book” products (a retrieval sketch follows this list).
  • Some are excited by richer, in-text, footnote-like explanations and tutoring; others see current implementations as clunky side chats.
  • For filmmaking and storyboards, commenters point to multiple existing AI storyboard tools and emerging previz apps.
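
The “chat with this book” pattern is mostly plumbing around retrieval: chunk the text, embed the chunks, and feed the best matches to a model alongside the reader’s question. A minimal sketch under those assumptions; the book_text input and the downstream ask_llm() call are placeholders, not any product’s API.

```python
# "Chat with this book" sketch: chunk the text, embed the chunks, retrieve the most
# relevant passages for a question, and hand them to whatever model backs the reader.
# book_text and the downstream ask_llm() call are placeholders, not a product API.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def build_index(book_text: str, chunk_chars: int = 1200):
    chunks = [book_text[i:i + chunk_chars] for i in range(0, len(book_text), chunk_chars)]
    return chunks, model.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, chunks: list[str], vecs: np.ndarray, k: int = 4) -> str:
    q = model.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(vecs @ q)[::-1][:k]                   # indices of the k best chunks
    context = "\n---\n".join(chunks[i] for i in top)
    # A real reader would now call something like:
    #   ask_llm(f"Answer only from these passages:\n{context}\n\nQ: {question}")
    return context
```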

Fitness, nutrition, and personal assistants

  • The Strong+ChatGPT workout coach idea resonates strongly; multiple people are building or hand-rolling similar systems (tracking sets, rest, progression, and using an LLM for planning).
  • Calorie/nutrition agents are viewed as attractive but technically tricky: visual calorie estimation is often wildly wrong, and even humans struggle to estimate portions from photos.
  • Several note big UX gains if logging could be “jumbled thoughts” that AI normalizes into structured nutrition data.
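
The “jumbled thoughts in, structured data out” step is the part an LLM genuinely makes easy. A hedged sketch, assuming the OpenAI Python SDK and a placeholder model name; the schema is arbitrary, and the returned numbers are still estimates that deserve the same skepticism as photo-based guesses.

```python
# Sketch: hand a raw food-diary line to an LLM and ask for a fixed JSON shape.
# Model name and schema are assumptions; the output is an estimate, not ground truth.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Normalize this food log into JSON with an 'items' list; each item has "
    "'name', 'quantity', 'unit', and estimated 'calories', 'protein_g', "
    "'carbs_g', 'fat_g'. Log: {log}"
)

def normalize_log(raw: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",                      # placeholder model choice
        messages=[{"role": "user", "content": PROMPT.format(log=raw)}],
        response_format={"type": "json_object"},  # force parseable output
    )
    return json.loads(resp.choices[0].message.content)

# e.g. normalize_log("had like half a bagel w cream cheese and a big iced latte")
```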

Speech, UI, and device integration

  • There’s demand for high-quality, fully local speech-to-text integrated into phone keyboards, using Whisper/Voxtral-class models and NPUs.
  • Current DIY solutions work but are awkward (keyboard switching, time limits, press‑and‑hold UX), suggesting a strong product gap; a bare-bones local pipeline is sketched after this list.
  • Apple is repeatedly cited as well-positioned to build private, context-rich assistants via deep OS integration.
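
The DIY pipeline people describe is short. A bare-bones desktop sketch, assuming the open-source openai-whisper package (which needs ffmpeg on PATH) plus sounddevice, soundfile, and pyperclip as one possible choice of recording/clipboard glue; a phone keyboard would obviously integrate this very differently.

```python
# Local dictation sketch: record a short clip, transcribe it on-device, copy the text.
import whisper
import sounddevice as sd
import soundfile as sf
import pyperclip

model = whisper.load_model("base")      # small enough for CPU; larger models are more accurate

def dictate(seconds: int = 10, path: str = "clip.wav") -> str:
    audio = sd.rec(int(seconds * 16000), samplerate=16000, channels=1)
    sd.wait()                                        # block until the recording finishes
    sf.write(path, audio, 16000)
    text = model.transcribe(path)["text"].strip()    # runs entirely on-device
    pyperclip.copy(text)                             # "paste anywhere" stands in for keyboard integration
    return text
```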

Authenticity, “AI personas,” and simulations

  • A long subthread debates tools that emulate Hemingway/Jobs/etc. for critique.
  • One side: these are inherently deceptive pastiches; you can’t know “what Hemingway would say,” only what a model guesses, which risks people confusing simulation with reality.
  • The other side: an approximate, stylized “Hemingway lens” could still be useful, analogous to a scholar channeling an author’s style; people often willingly suspend disbelief (like in movies or Star Trek holodeck episodes).
  • Some argue modern culture already runs on such mediated, partly fictional representations; LLMs just make that more explicit.

Local vs cloud, privacy, and surveillance

  • Multiple commenters want local-first versions of “life recorder” tools (screen recording + semantic summaries, Recall-like systems), citing discomfort with cloud vendors seeing everything; a minimal on-device loop is sketched after this list.
  • Others note practical constraints: local models are often too weak or too battery-hungry for mainstream users, so many current products are server-based.
  • Commenters point to pervasive existing tracking (browsers, ISPs, ad networks, intelligence agencies), but still see appeal in self-hosted or on-device alternatives.
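
A local-first “life recorder” can start very small: periodic screenshots, on-device OCR, and an append-only text log that never leaves the machine. A minimal sketch under those assumptions; the capture interval, file layout, and library choices are arbitrary, and summarizing or embedding the log with a local model would be a separate step.

```python
# Minimal local-first recorder loop: screenshot, OCR the text, append to a plain log.
import time
from datetime import datetime
from PIL import ImageGrab          # works on Windows/macOS; Linux needs a different grabber
import pytesseract                 # requires the tesseract binary installed locally

LOG = "screen_log.txt"

def capture_once() -> None:
    img = ImageGrab.grab()                         # full-screen screenshot
    text = pytesseract.image_to_string(img)        # on-device OCR, no cloud upload
    with open(LOG, "a", encoding="utf-8") as f:
        f.write(f"\n## {datetime.now().isoformat()}\n{text}\n")

if __name__ == "__main__":
    while True:
        capture_once()
        time.sleep(60)             # one snapshot per minute; tune for battery/disk
```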

Children, education, and AI devices

  • The “LLM Walkman for kids” draws both enthusiasm and strong warnings.
  • Concerns: children will treat answers as authoritative; even a 1% error rate could deeply misinform them; and dependence on the device may reduce human interaction, collaboration, and parent–child “learning together.”
  • Others counter that kids already receive lots of misinformation from adults and pre-internet myths; the real issue is reliability, value alignment, and making systems that can admit uncertainty.

Productization gap and incentives

  • Commenters note a gap between impressive demos and the scarcity of polished, widely adopted products that actually work as advertised.
  • Hypotheses include: cost of using strong models, difficulty reaching users (expensive ads, high CAC), and platforms’ misaligned incentives (features that reduce engagement or ad views don’t get built).
  • Some see most ideas as special-purpose “agents with tools,” with the real opportunity being orchestration and domain-specific context rather than novel AI capabilities.
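
The “agent with tools” framing is concrete enough to sketch: an orchestration loop that lets a model call a couple of domain tools and feeds the results back. Everything specific below is an assumption; the workout-logging tool, the JSON protocol, and the model name are placeholders, and the point is that the value sits in the loop and the domain context, not in new model capabilities.

```python
# Sketch of a special-purpose "agent with tools": orchestration loop + domain tools.
import json
from openai import OpenAI

client = OpenAI()

def log_set(exercise: str, weight_kg: float, reps: int) -> str:
    return f"logged {exercise}: {reps} x {weight_kg}kg"      # stand-in for a real tracker DB

TOOLS = {"log_set": log_set}

SYSTEM = ('Reply ONLY with JSON: {"tool": "log_set", "args": {...}} to act, '
          'or {"answer": "..."} when done.')

def run_agent(goal: str, max_steps: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM}, {"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages,          # placeholder model choice
            response_format={"type": "json_object"},
        ).choices[0].message.content
        step = json.loads(reply)
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])         # run the chosen domain tool
        messages += [{"role": "assistant", "content": reply},
                     {"role": "user", "content": f"tool result: {result}"}]
    return "stopped: step limit reached"
```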

Personalization, echo chambers, and agency

  • Several worry that many ideas amount to “give me more of what I already like,” reinforcing tastes and beliefs and intensifying echo chambers.
  • Open questions: who defines the starting state for younger generations? How do we avoid social-media-style harms as agents become better at curating everything?
  • A few argue that while convenience is appealing, we should be cautious about offloading too much choosing, exploring, and critical thinking to AI-driven filters.