Adding a feature because ChatGPT incorrectly thinks it exists

LLMs as a New Acquisition Channel & “Product-Channel Fit”

  • Many see this as classic “product‑channel fit”: a new channel (ChatGPT) is sending ready‑to‑convert users with a clear, shared expectation.
  • Commenters compare it to salespeople promising roadmap features, except now the “salesperson” is an LLM doing free marketing at scale.
  • Some argue this is just unusually cheap market research: repeated hallucinations that converge on the same plausible feature are evidence of latent demand.

Building Features from Hallucinations: Pros

  • If hallucinated features are cheap to implement and genuinely useful (e.g., ASCII tab import, formant shifting), adding them is seen as rational.
  • Several teams report using LLM hallucinations as product feedback: when the model “invents” flags, endpoints, or methods, it often reflects what developers would intuitively expect (a sketch of that feedback loop follows this list).
  • This leads to notions like “hallucination‑driven development,” or using LLMs to guess at APIs and then refactoring those APIs to be more intuitive and “guessable.”
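One way to capture that signal, shown as a minimal sketch below: a CLI that logs unrecognized flags as “guessed but missing” before rejecting the invocation. The tool name, flags, and log path are hypothetical; the point is only that rejected guesses become a feedback stream rather than silent failures.

```python
# Sketch: record flags that users (or LLM-generated snippets) pass but the tool
# doesn't support, treating them as signals of expected-but-missing features.
# Tool name, flags, and log location are placeholders.
import argparse
import json
import time

parser = argparse.ArgumentParser(prog="mytool")
parser.add_argument("--input", required=True)
parser.add_argument("--format", choices=["json", "csv"], default="json")

# parse_known_args() returns (namespace, leftover_args) instead of exiting on
# unknown flags, which lets us log them before rejecting the invocation.
args, unknown = parser.parse_known_args()

if unknown:
    with open("unknown_flags.log", "a") as fh:
        fh.write(json.dumps({"ts": time.time(), "unknown": unknown}) + "\n")
    parser.error(f"unrecognized arguments: {' '.join(unknown)}")

print(f"processing {args.input} as {args.format}")
```

Flags that keep reappearing in that log are the “cheap market research” commenters describe: candidates worth evaluating for the roadmap.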

Risks, Slippery Slopes & Spec Integrity

  • Others are wary: if you keep matching hallucinated endpoints/params, you risk an ever‑mutating API spec and degraded clarity.
  • Suggested mitigations:
    • Implement stubbed/hybrid endpoints with warning headers pointing to canonical docs.
    • Or fail loudly with a 404/501 plus an explanation that the LLM is wrong (a minimal sketch follows this list).
  • Concern that teams are reshaping roadmaps around misinformation instead of grounded user research.
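A minimal sketch of the “fail loudly, but helpfully” option, assuming a Flask service; the route, response text, and docs URL are placeholders:

```python
# Sketch of a stub for an endpoint that LLMs keep inventing: respond with 501,
# explain the situation, and point at the canonical docs via a Warning header.
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical endpoint that AI assistants claim exists but was never shipped.
@app.route("/api/v1/tab-import", methods=["POST"])
def hallucinated_tab_import():
    body = {
        "error": "not_implemented",
        "detail": (
            "This endpoint does not exist in our API. It is sometimes "
            "suggested by AI assistants; see the canonical docs for what "
            "is actually supported."
        ),
        "docs": "https://example.com/docs/api",
    }
    # Returning (body, status, headers) sets a 501 status plus a Warning
    # header that points readers at the real documentation.
    return jsonify(body), 501, {"Warning": '299 - "See https://example.com/docs/api"'}

if __name__ == "__main__":
    app.run()
```

The same handler could later be upgraded to a real implementation if the logged demand justifies it, which is the “hybrid” variant commenters suggest.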

AI Shaping Reality & Responsibility for Misinformation

  • Some note a structural asymmetry: it’s often easier to “update reality” (add the feature) than to get ChatGPT fixed, especially for small vendors.
  • There’s debate over who gets blamed: technical users may blame the LLM, but many non‑technical users treat AI answers as authoritative and will fault the product.
  • Broader worry: this exemplifies how AI systems can steer markets and behavior without direct actuator access—humans become the actuators.

LLMs as Design & UX Tools

  • Several describe using LLMs as:
    • API fuzzers (seeing what they guess and where they misuse things; a sketch follows this list).
    • Clarity testers for technical writing and scientific methods.
    • Wizard‑of‑Oz style UX evaluators, revealing missing or confusing flows.
  • A recurring theme: LLMs are weak or unreliable as oracles, but strong as “plausibility engines” and can surface mismatches between expert mental models and average‑user expectations.
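One way to operationalize the “API fuzzer” idea is sketched below using the OpenAI Python client: ask a model to write example requests from a docs snippet, extract the paths it guesses, and diff them against the real spec. The model name, docs snippet, and endpoint list are illustrative, not a recommendation.

```python
# Sketch: use an LLM as a "guessability" tester for an HTTP API.
# Assumes the openai package (>=1.0) and an API key in the environment;
# model name, docs snippet, and known-endpoint list are illustrative.
import re
from openai import OpenAI

KNOWN_ENDPOINTS = {"/api/v1/scores", "/api/v1/scores/{id}", "/api/v1/users/me"}

DOCS_SNIPPET = "Our API lets you create, list, and fetch music scores."

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You write example curl calls for REST APIs."},
        {"role": "user", "content": f"{DOCS_SNIPPET}\nShow 5 example requests."},
    ],
)
text = resp.choices[0].message.content or ""

# Pull out the paths the model guessed and compare them against the real spec.
guessed = set(re.findall(r"/api/v1/[\w/{}\-]+", text))
hallucinated = guessed - KNOWN_ENDPOINTS

print("Guessed:", sorted(guessed))
print("Not in spec (candidate features or doc gaps):", sorted(hallucinated))
```

Mismatches surfaced this way are exactly the “plausibility engine” output described above: not ground truth, but a cheap probe of what an average user (or their AI assistant) expects the API to look like.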