Don't make me talk to your chatbot

Meta: Many commenters didn’t read the article

  • A large portion of the thread treats the title as being about customer-support chatbots.
  • Multiple people point out the article is actually about humans offloading their writing/thinking to LLMs and then making others read that output.
  • Some argue HN behaves like other social media: reacting to the headline, not the content.

Customer-support chatbots: experiences and tradeoffs

  • Some users like chatbots as first-line support: instant responses, quick refunds, or painless price negotiations (e.g., deliveries, subscriptions).
  • Others say chatbots rarely solve real problems, serve mostly to stonewall, or funnel users into dead ends (broken callback systems, limited options, repeated data entry).
  • A popular “good” pattern: bot collects structured info, then hands off to a human (“smart answering machine”).
  • Frustration focuses on lack of reliable human fallback, especially for banks, ISPs, government portals, and complex billing issues.
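The “smart answering machine” pattern above can be sketched minimally: the bot refuses to dead-end the user and only hands off once the structured fields a human agent needs are present. All names here (`Ticket`, `triage`, `REQUIRED`) are hypothetical illustrations, not any vendor’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    # Structured fields the bot collects before any human sees the case
    account_id: str
    category: str              # e.g. "billing", "outage", "refund"
    summary: str
    transcript: list = field(default_factory=list)

# Fields that must be filled in before handoff to a human agent
REQUIRED = ("account_id", "category", "summary")

def triage(answers: dict):
    """Return a Ticket ready for human handoff, or None if info is still missing."""
    if not all(answers.get(k) for k in REQUIRED):
        return None  # keep asking the user; never dead-end them
    return Ticket(
        account_id=answers["account_id"],
        category=answers["category"],
        summary=answers["summary"],
        transcript=answers.get("transcript", []),
    )
```

The key design choice commenters praise is that the bot’s job ends at structured intake: a `None` result means “ask another question,” never “hang up.”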

Economics and ethics of support

  • One perspective: human support is very expensive (training, churn, facilities, fully burdened costs); trivial calls like password resets or “power cycle your router” dominate call volumes.
  • Counterpoints:
    • Big firms helped create these problems with confusing UX, fragile products, and opaque flows (esp. identity and passwords).
    • Customers have already paid; “free support” is just bundled support.
    • High-value calls (real bugs, deep technical issues) are rare but important; current triage systems make them too hard to report.
  • Debate over charging for support and refunding the fee when the problem turns out to be the company’s fault; concerns about perverse incentives for companies to deny responsibility.

“AI slop” in writing, PRs, and discussion

  • Strong dislike for generic, verbose LLM-generated prose in PR descriptions, blog posts, Slack, LinkedIn, etc.; seen as low-signal, formulaic, and often wrong on details and intent.
  • Several argue:
    • If you had to think hard enough to prompt the model well, you could have just written the thing.
    • Readers care about your reasoning and relationship to the facts, not a synthesized average of internet text.
    • LLMs act as “misunderstanding amplifiers” when given fuzzy internal concepts or jargon.
  • Others see value in LLMs as:
    • Tooling to expose complex systems via natural language interfaces.
    • Grammar/spell-check and expansion of terse points into accessible prose.
    • Triage aids that surface relevant docs or APIs, as long as humans provide a concise, honest “anchor” summary.

Broader worries about AI content

  • Concern that AI-generated “slop” will further flood an already noisy internet, making high-signal material even harder to find.
  • Calls for emerging etiquette: don’t use agents as your voice in genuine human exchanges, and don’t force others to “talk to your chatbot” when they came to talk to you.