As an experienced LLM user, I don't use generative LLMs often

LLMs for Writing and Feedback

  • Multiple commenters use LLMs as “super editors”: they clean up dictated or rough drafts while preserving the author’s voice, under instructions forbidding new sentences or style changes (a sketch of such prompts follows this list).
  • The article’s trick of asking for “cynical HN comments” on a draft is widely praised as a way to get critical feedback and anticipate objections without sycophancy.
  • Some explicitly hide authorship from the LLM to avoid flattering responses.
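
As a rough illustration of both the editing and the “cynical critic” tactics, a minimal Python sketch against an OpenAI-style chat completions endpoint might look like the following; the prompt wording is a guess at the kind of instructions described above, not any commenter’s actual prompt.

```python
# Illustrative only: prompts are assumptions, not commenters' real prompts.
# Assumes an OpenAI-style chat completions API and OPENAI_API_KEY in the env.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

EDITOR_PROMPT = (
    "You are a copy editor. Fix grammar, punctuation, and dictation errors "
    "in the text I give you. Preserve my voice and word choices. Do not add "
    "new sentences, do not change the style, and do not summarize."
)

CRITIC_PROMPT = (
    "You are a skeptical Hacker News commenter. Read the draft below and "
    "list the most likely critical or cynical objections, one per line. "
    "Do not compliment the writing."
)

def run(system_prompt: str, draft: str, model: str = "gpt-4o-mini") -> str:
    """Send a draft through one of the prompts above and return the reply."""
    resp = requests.post(
        API_URL,
        headers=HEADERS,
        json={
            "model": model,
            "temperature": 0.2,  # low temperature: we want edits, not invention
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": draft},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    draft = open("draft.txt").read()
    print(run(EDITOR_PROMPT, draft))   # cleaned-up draft, same voice
    print(run(CRITIC_PROMPT, draft))   # anticipated objections
```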

Interfaces, Tooling, and Workflows

  • Many avoid consumer UIs (ChatGPT.com) in favor of provider “studio/playground” backends or APIs for finer control (temperature, system prompts, models).
  • Several CLI tools and agents are shared (general-purpose LLM CLIs, coding agents that mix models, OpenWebUI, Cursor, Aider), often logging prompts/responses locally.
  • Some prefer direct HTTP calls over SDKs for simplicity and async support (see the sketch below).
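
A minimal sketch of that SDK-free approach, assuming an OpenAI-compatible /v1/chat/completions endpoint; httpx is used here only to show the concurrent, async benefit commenters cite, and BASE_URL can point at any provider or local server speaking the same protocol.

```python
# Direct HTTP instead of a vendor SDK; endpoint and model name are assumptions.
import asyncio
import os

import httpx

BASE_URL = "https://api.openai.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

async def complete(client: httpx.AsyncClient, prompt: str) -> str:
    resp = await client.post(
        f"{BASE_URL}/chat/completions",
        headers=HEADERS,
        json={
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

async def main() -> None:
    prompts = [
        "Summarize RFC 2119 in two sentences.",
        "Explain CSS grid vs flexbox in one paragraph.",
    ]
    async with httpx.AsyncClient() as client:
        # Fire both requests concurrently -- the async advantage of raw HTTP.
        answers = await asyncio.gather(*(complete(client, p) for p in prompts))
    for answer in answers:
        print(answer, "\n---")

if __name__ == "__main__":
    asyncio.run(main())
```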

Frontend, Mockups, and UI Work

  • Strong agreement that LLMs are useful for quick UI prototypes and CSS (buttons, layouts) even when final code will be rewritten.
  • Experiences vary: some get clean React/Svelte code following detailed style instructions; others report “spaghetti” or inconsistent use of layout systems (grid vs flexbox).
  • A minority argue website builders or templates are faster for mockups.

Prompt Engineering and System Prompts

  • Several lament the lack of serious, senior-level prompt-engineering guides; Anthropic’s docs and a Kaggle whitepaper are recommended.
  • People note incentives not to share best prompts, though some open-source agents expose theirs.
  • Tactics include role-playing critics, schema-based “structured outputs” for JSON (sketched after this list), and having LLMs themselves refine prompts.
  • Debate over the real benefit of system prompts vs front-loading instructions in user messages.
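
A sketch of the structured-outputs tactic, assuming an OpenAI-style response_format parameter (other providers expose similar JSON-schema constraints under different names); the schema and field names are purely illustrative.

```python
# Schema-constrained JSON output; schema, fields, and prompts are assumptions.
import json
import os

import requests

API_URL = "https://api.openai.com/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

FEEDBACK_SCHEMA = {
    "type": "object",
    "properties": {
        "objections": {"type": "array", "items": {"type": "string"}},
        "severity": {"type": "string", "enum": ["low", "medium", "high"]},
    },
    "required": ["objections", "severity"],
    "additionalProperties": False,
}

resp = requests.post(
    API_URL,
    headers=HEADERS,
    json={
        "model": "gpt-4o-mini",
        "messages": [
            # The same instructions could instead be front-loaded into the
            # user message -- the system-prompt debate noted above.
            {"role": "system", "content": "Critique drafts as a skeptical reviewer."},
            {"role": "user", "content": open("draft.txt").read()},
        ],
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": "feedback", "schema": FEEDBACK_SCHEMA, "strict": True},
        },
    },
    timeout=60,
)
resp.raise_for_status()
feedback = json.loads(resp.json()["choices"][0]["message"]["content"])
print(feedback["severity"], feedback["objections"])
```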

Coding Agents, Reliability, and Search

  • Big split in experience: some get “massive productivity gains” with agentic tools (Cursor, Aider, VS Code agents) that run tests/compilers and iterate; others see endless loops, broken build systems, and messy codebases (“lawnmower over the flower bed”).
  • Agents often miss logical, performance, or security issues in code that compiles and runs; generated tests may share the same misunderstandings.
  • Many report models degrading over long conversations and mitigate this by frequently restarting chats or clearing history (a context-trimming sketch follows this list).
  • Newer models plus web search can now handle breaking library changes or undocumented APIs better; users sometimes paste whole docs/codebases into context or use editor-integrated docs.
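
A minimal sketch of the restart/trim mitigation; the specific trimming rule is an assumption, since tools like Cursor and Aider implement their own context-management strategies.

```python
# Keep the system prompt, drop all but the last few exchanges, or restart
# with a short summary. The keep_last rule below is an illustrative choice.
from typing import Dict, List

Message = Dict[str, str]

def trim_history(messages: List[Message], keep_last: int = 4) -> List[Message]:
    """Keep system messages plus the last `keep_last` non-system messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-keep_last:]

def restart_with_summary(summary: str, system_prompt: str) -> List[Message]:
    """Start a fresh conversation, carrying over only a summary of the old one."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Context from the previous session:\n{summary}"},
    ]

if __name__ == "__main__":
    history = [{"role": "system", "content": "You are a coding assistant."}]
    history += [{"role": "user", "content": f"step {i}"} for i in range(20)]
    print(len(trim_history(history)))  # 1 system message + 4 recent ones -> 5
    print(restart_with_summary("We refactored the parser module.",
                               "You are a coding assistant."))
```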

Societal and Economic Concerns

  • Intense debate over whether fear of AI-driven job loss is irrational: one side cites historical wage growth; the other points to recent wage suppression, offshoring, and inequality.
  • Some see LLMs as just another automation tool programmers have always built; others explicitly refuse to “automate myself out of existence” and view these tools as direct threats.

Reception of the Article

  • Several readers say the content closely matches their own selective, pragmatic use of LLMs, but find the title and tone contrarian, with an “I’m not like other users” flavor.
  • The author responds that the contrarian feel is unintended and stems from pushing against current hype while trying to stay honest.