"ChatGPT said this" Is Lazy
Expectations in conversation and advice
- Many commenters dislike replies framed as “I asked ChatGPT and it says…”, especially in response to personal questions or requests for code review.
- The complaint: they already know LLMs exist; they’re asking for your judgment, context, and experience, not for you to act as a dumb terminal.
- The pattern is compared to “let me Google that for you”: sometimes intended to shame laziness, but often it just reads as dismissive or spammy.
Disclosure, responsibility, and citations
- Some view “I asked ChatGPT…” as a useful disclosure that lets readers discount or ignore the content.
- Others see it as responsibility‑dodging: signaling “if it’s wrong, blame the AI.”
- Fear: backlash against explicit disclosure will just drive people to hide AI use.
- Another camp says tools need not be named if you have verified the content and take ownership of it, just as people consult Google or Wikipedia and then summarize in their own words.
Quality, laziness, and epistemic issues
- Strong view: LLMs are optimized for plausible language, not truth; unfiltered output is therefore “bullshit”, confident in tone but unreliable in substance.
- Dumping long AI answers offloads cognitive work onto the reader and pollutes discussions with cheap, low-effort text.
- The Wikipedia/Google analogy splits commenters: some say all sources are fallible and require verification anyway; others say LLM hallucinations make them categorically worse.
- A minority sees value in LLMs as “median opinion polls” or brainstorming tools rather than authorities.
Impact on engineering and code review
- Multiple stories of PRs, specs, and comments filled with obvious LLM text: generic summaries, irrelevant requirements, pointless changes with leftover AI remarks.
- Reviewers resent being the first human to actually read and reason about “your” code.
- Proposed norms: AI-assisted suggestions are fine, but reviewers must (a) filter them, (b) explain tradeoffs, and (c) stand behind concrete recommendations.
- Some teams run automated AI code reviews and find them genuinely helpful for spotting issues in asymmetrical review situations, where the reviewer knows the code far less well than the author (a minimal sketch of such a step follows this list).
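
As a rough illustration only (the discussion does not describe any particular team's setup), here is a minimal sketch of an automated review step that pipes a PR diff to an LLM. The model name, prompt, and use of the OpenAI Python client are assumptions, not anything from the thread:

```python
# Minimal sketch of an automated AI code-review step, assuming the
# OpenAI Python client and a CI job where `git diff` exposes the change.
import subprocess

from openai import OpenAI


def review_diff(base: str = "origin/main") -> str:
    """Ask an LLM for review comments on the current branch's diff."""
    diff = subprocess.run(
        ["git", "diff", base, "--unified=3"],
        capture_output=True, text=True, check=True,
    ).stdout
    if not diff.strip():
        return "No changes to review."

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": (
                "You are a code reviewer. Point out bugs, risky changes, "
                "and missing tests. Be concrete; skip generic praise."
            )},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Per the norms above, a human still filters these comments and
    # stands behind them before anything is posted to the PR.
    print(review_diff())
```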
Ethics, culture, and polarization
- Hardline critics refuse to use AI for engineering work at all, arguing it weakens thinking, resembles plagiarism, and is built on unethical data scraping; some liken widespread use to “mental obesity.”
- Others see this as technophobic or dogmatic, emphasizing that critical, curious users can leverage LLMs to learn faster and tackle more ambitious work.
- Broad agreement on one norm: using AI privately as a tool is fine; pasting unvetted output as your contribution is not.