LLM Inflation
Business communication, verbosity, and etiquette
- Many argue the “bullet points → AI email → AI summary” loop reflects existing corporate norms: long, polite emails and documents were already expected, even though decision‑makers preferred short summaries.
- Others say the four‑paragraph justification example is unrealistic; in their workplaces, lengthy emails are ignored and short requests are preferred.
- Some note a two‑tier pattern: a concise ask for the approver and a verbose version for “the record” or for others who need to justify the decision.
LLMs as compression/decompression engines
- Several commenters like the framing of LLMs as a kind of lossy compression: training compresses text into a model; inference “inflates” it back.
- Others push back: LLMs aren’t near‑lossless compressors; outputs are verbose, sometimes wrong, and conceptually unlike classical compression.
- People warn of “AI loopidity”: inflating text with one LLM, then compressing it with another, accumulating errors like repeatedly re‑saving a JPEG (see the sketch below).
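As a toy illustration of that round trip, here is a minimal Python sketch in which `expand()` and `condense()` are hypothetical, deterministic stand‑ins for two separate LLM calls (no real API is assumed); each pass wraps the original points in more boilerplate, the textual analogue of re‑saving a JPEG.

```python
# Toy model of the "inflate with one LLM, summarize with another" loop.
# expand() and condense() are hypothetical stand-ins for LLM calls,
# implemented deterministically so the sketch actually runs.

def expand(points: list[str]) -> str:
    """Stand-in for a sender-side LLM that inflates bullets into polite prose."""
    return " ".join(f"I would like to note that {p}." for p in points)

def condense(prose: str) -> list[str]:
    """Stand-in for a receiver-side LLM that summarizes prose back to bullets.
    Real models paraphrase, so each pass can drop or distort details."""
    return [s.strip() for s in prose.split(".") if s.strip()]

points = ["the laptop is five years old", "the battery lasts 40 minutes"]
for generation in range(3):
    prose = expand(points)    # inflation on the way out
    points = condense(prose)  # summarization on the way back in
    print(f"pass {generation}: {points}")  # the wording drifts every cycle
```

Even with these deterministic stubs, nothing new is gained per cycle while the wrapper text compounds; with real, stochastic models the drift also includes paraphrase and outright errors.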
Information loss, novelty, and summarization limits
- Concern: LLM summaries often drop the genuinely novel or subtle parts of a text, especially in scientific work, where new ideas aren’t well represented in training data.
- Defenders counter that many long texts are not actually novel and can be safely compressed; critics reply that the risk of missing the 10–25% that matters is non‑trivial.
Bureaucracy, friction, and equipment requests
- Some see the four‑paragraph requirement as deliberate friction to discourage marginal requests; LLMs undermine this by making verbosity cheap.
- Others argue such processes are just broken bureaucracy with no deeper purpose, often persisting because someone benefits from gatekeeping or because change is hard.
- There’s debate over whether verbosity is a fair “proof of effort,” given that it penalizes some contributors (e.g., dyslexic staff) and rewards prolific writers.
Tools, data, and reliability
- A thread explores whether LLMs could replace spreadsheets/office tools; many say no, citing hallucinations, compounding errors, and auditability needs.
- A recurring suggestion: use LLMs as natural‑language interfaces to formal systems (databases, query languages), not as direct calculators; see the sketch after this list.
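As a sketch of that pattern, the example below assumes a hypothetical `nl_to_sql()` helper standing in for the LLM call (hard‑coded here so it runs); the model’s only job is to produce a query, and SQLite does the arithmetic, leaving a reviewable query behind rather than an unauditable number.

```python
# Sketch of "LLM as natural-language interface, not calculator".
# nl_to_sql() is a hypothetical stand-in for an LLM translation step,
# hard-coded here so the example is self-contained and runnable.
import sqlite3

def nl_to_sql(question: str) -> str:
    """Hypothetical LLM step: natural-language question -> SQL query."""
    return "SELECT department, SUM(amount) FROM expenses GROUP BY department"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE expenses (department TEXT, amount REAL)")
conn.executemany("INSERT INTO expenses VALUES (?, ?)",
                 [("IT", 1200.0), ("IT", 800.0), ("HR", 300.0)])

sql = nl_to_sql("How much did each department spend?")
print(sql)                       # the generated query is the audit trail
for row in conn.execute(sql):
    print(row)                   # totals are computed by the database engine
```

The appeal, per the commenters, is that the numbers come from a deterministic engine and the generated SQL can be logged and reviewed, which is exactly what the “LLM as spreadsheet” idea lacks.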
Effort signals, formality, and future norms
- Verbose corporate language is seen as social signaling—respect, time spent, or formality—across many languages, not just English.
- Several expect norms to shift toward concise “core meaning,” with LLMs optionally inflating text for politeness on the sender or receiver side.