The L in "LLM" Stands for Lying
Framing LLMs as “Lying” or “Forgery”
- Several argue that “lying” is technically the wrong word: lying requires intent and an understanding of truth, while LLMs merely generate probabilistic text and can be wrong without deceiving.
- Others say the user may be lying when they pass off LLM output as if it were their own authentic craft.
- Some propose viewing LLMs as “Pretend Intelligence”: useful but not to be trusted or marketed as truly intelligent.
- A strand suggests treating AI output as a forgery or at least “untrusted” until provenance or originality is demonstrated; others see this as impractical or philosophically confused.
Code Quality, Boilerplate, and “Vibe Coding”
- Many report LLM-generated code as sloppy, repetitive, buggy, and hard to reason about; you must still review and test it.
- Others say with careful prompting and architecture, LLMs significantly speed up boilerplate, glue code, and unfamiliar APIs, and can be integrated with strict linting, tests, and guardrails.
- There’s debate over whether most coding is “small novelty over lots of boilerplate” and thus ripe for automation, or whether real systems are holistic and non-trivial throughout.
- Some maintain that code is fundamentally a liability; tools that reduce handwritten code are good if outputs are proven to work.
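The “strict linting, tests, and guardrails” approach mentioned above can be sketched concretely: treat model output as untrusted, and only accept it once it passes a static check and its tests. A minimal sketch in Python, where `generated` is a stand-in for whatever an LLM actually returns (no real model call is made):

```python
import ast
import subprocess
import sys
import tempfile


def accept_generated_code(code: str, test_code: str) -> bool:
    """Gate untrusted generated code: reject unless it parses and its tests pass."""
    # Cheap static gate: the code must at least be valid Python.
    try:
        ast.parse(code)
    except SyntaxError:
        return False
    # Behavioral gate: run the code plus its tests in a fresh interpreter.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n" + test_code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True)
    return result.returncode == 0


# A pretend LLM-produced function, plus the tests that gate it.
generated = "def slug(s):\n    return s.strip().lower().replace(' ', '-')\n"
tests = "assert slug('Hello World') == 'hello-world'\n"
print(accept_generated_code(generated, tests))  # True for this snippet
```

The point of the sketch is the shape of the workflow, not the specifics: the human-authored tests, not the model, decide whether the output is kept.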
Art, Craft, and “Artisanal Coding” Analogies
- Strong analogies to textile machines, Luddites, controlled-origin foods, and museum art:
  - One side: industrialization reduces quality and erodes heritage/craft, but wins on cost and scale.
  - Other side: mass production greatly improves access; most users don’t care how something was made if it works.
- Similar arguments apply to “artisanal code”: a niche, high-quality craft versus mass-produced “vibe-coded” software.
Copyright, Authenticity, and Source Citation
- Disagreement on whether training on open-source code implies pervasive plagiarism.
- Some want models to cite sources to distinguish reuse, copying, and novelty; others doubt this is technically feasible and question whether humans are held to the same standard.
- Art forgery debates spill over: does authenticity (origin, process, geography) matter more than the end product?
Games, Procedural Generation, and AI Assets
- The claim that procedural generation “failed to deliver” is heavily disputed, with many counterexamples (Minecraft, roguelikes, etc.).
- Gamers appear to object mainly to obvious AI art assets and low-effort “slop,” not to AI-assisted code or invisible tools.
- General theme: players care about quality and fun, not whether code or assets were handmade, at least until poor quality becomes visible.
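For context on the counterexamples above, “procedural generation” means deriving content from a seed rather than hand-authoring it. A toy illustration (my own, not from the discussion) is a seeded cave map smoothed with one cellular-automaton pass, a technique common in roguelikes:

```python
import random


def generate_cave(width: int, height: int, seed: int, fill: float = 0.45) -> list[str]:
    """Toy roguelike-style cave: random fill, then one cellular-automaton smoothing pass."""
    rng = random.Random(seed)  # same seed -> same map: the core of procedural generation
    grid = [[rng.random() < fill for _ in range(width)] for _ in range(height)]

    def walls_around(x: int, y: int) -> int:
        # Count wall neighbours; out-of-bounds counts as wall so the edges stay solid.
        return sum(
            grid[ny][nx] if 0 <= nx < width and 0 <= ny < height else 1
            for ny in range(y - 1, y + 2)
            for nx in range(x - 1, x + 2)
            if (nx, ny) != (x, y)
        )

    # Smoothing pass: each cell follows the local majority, which clumps noise into caves.
    grid = [[walls_around(x, y) >= 5 for x in range(width)] for y in range(height)]
    return ["".join("#" if cell else "." for cell in row) for row in grid]


for row in generate_cave(20, 8, seed=42):
    print(row)
```

The relevant property is determinism: the same seed always yields the same map, so vast worlds can be shipped as a handful of numbers rather than hand-made assets.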
Work, Power, and Economic Impact
- Some see LLMs as tools for management to cut staff, deskill workers, and shift value upward.
- Others report personal productivity and agency gains (small businesses, solo devs, non-programmers building internal tools).
- There is worry that organizational inertia and incentives will turn “productivity gains” into more work, not better lives.
Use as Teachers and Assistants
- LLMs are praised as fast, interactive teachers and rubber ducks, especially with citations and cross-checking.
- But many worry about relying on a “compulsive liar” as a teacher; the constant overhead of verifying its answers can be stressful.
Meta Reactions to the Article and Site
- The article is characterized both as sober and insightful, and as emotional, moralizing, or “cope.”
- Some note that most enthusiastic replies are short affirmations, while detailed comments tend to be critical.
- The site’s design and interactive header/animations receive widespread admiration.