I avoid using LLMs as a publisher and writer
LLMs in Translation and Publishing Quality
- Some publishers report dramatic gains in speed: a solo English→Korean translator can produce decent first drafts quickly with LLM assistance and use GPT for typo/grammar cleanup, compressing book timelines to under a month.
- Others who tested machine translation (MT) in real workflows found that meeting their quality bar required so much rework that the time “savings” evaporated.
- Concerns include loss of translators’ creativity and linguistic sensitivity, weak handling of rich languages (e.g., Czech) and specialized terminology, and long‑term reputational risk for houses known for exceptional human translations.
LLMs as Writing Aids vs. Replacements
- Several posters who consider themselves strong writers see little value in LLM‑generated prose: it doesn’t add ideas, and its voice feels generic or grating.
- Others use LLMs as editors or “mediocre sounding boards”: flagging unclear passages, suggesting alternative phrasings, or breaking writer’s block, while discarding most of the actual wording.
- A debate emerges around a teenager using LLM feedback instead of parental or peer critique:
  - Pro: it’s a powerful, always‑available editor that may be “better than any resource a teen has,” and feedback is emotionally safer coming from a machine.
  - Con: this displaces human relationships, community (writing groups, workshops), and the growth that comes from the friction of sharing work with people.
Coding, Tools, and “Junior Dev” Analogies
- Many see LLMs as useful for boilerplate, quick examples, and code review, especially when kept within a narrow, well‑defined scope.
- Others compare them to an endlessly scalable but incompetent teammate: they generate plausible code with subtle bugs, never truly improve, and increase maintenance load.
- There’s disagreement over whether they meaningfully reduce cognitive load or simply shift effort into verification and debugging. Some feel models have plateaued or even degraded; others expect major gains from specialized models and better tooling.
Creativity, Art, and Authenticity
- Multiple commenters argue that art is defined by dense, deliberate human choices; delegating large chunks to a model makes work feel thin and “decompressed.”
- LLMs are framed as fundamentally extractive (mining existing culture), well‑suited to constrained tasks like translation, tagging, and summarization, but not to genuine creative thought.
- Some readers now assume most online text is AI‑tainted and find this erodes trust and enjoyment; others predict average consumers won’t care as long as outputs are polished.
Skill Levels, Atrophy, and “Being Left Behind”
- For top‑tier writers, LLM output feels clearly inferior; for median or weaker writers, it may match or exceed their own capabilities, which explains much of the enthusiasm.
- There’s worry that long‑term reliance will atrophy people’s expressive and critical‑thinking skills, “median‑izing” voices.
- A recurring clash: one side insists AI adoption is inevitable and that non‑users will be “left behind”; the other cites past hype cycles (crypto, VR, etc.), rejects inevitability rhetoric, and prioritizes maintaining human craft even at the cost of lower income or slower output.