Taste in the age of AI and LLMs
Overall reaction to the article
- Many commenters found the piece generic, formulaic, and likely AI-generated: short declarative sentences, heavy use of bullets and subheadings, vague abstractions, and lack of concrete examples or personal anecdotes.
- Some called it ironic that an essay claiming “taste is the moat” reads like AI “slop,” exhibiting the same “empty specificity, borrowed tone, and fake confidence” it criticizes.
- A few suggested it might itself be part of a “train your taste” loop, using Hacker News as a feedback source.
Is “taste” really a moat?
- One camp agrees that judgment/taste matters more as AI makes mediocre output cheap; what differentiates people is what they choose to build, how they cut scope, and how clearly they can critique work.
- Others argue “taste” is overhyped: it’s fuzzy, varies by audience, and can be approximated by data, A/B testing, or scaled models, so it’s not a durable moat.
- Several emphasize that effort, execution speed, distribution, proprietary data, and real-world constraints still matter at least as much as taste.
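The “taste can be approximated by data” argument usually points at A/B testing. A minimal sketch of that idea, using a two-proportion z-test to compare conversion rates of two variants (the counts are made-up illustrative numbers, not from the discussion):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z statistic for H0: both variants share the same conversion rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: variant B converts at 15% vs A's 12%.
z = two_proportion_z(120, 1000, 150, 1000)
print(round(z, 2))  # |z| > 1.96 would reject equality at the 5% level
```

The point of the skeptics’ argument is that this loop replaces a tastemaker’s judgment with a measured preference of the audience, at least for decisions that can be framed as measurable variants.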
What is “taste” in this context?
- Competing definitions:
  - Product/PM taste: clear vision of what to build, what to reject, and how features fit together.
  - Engineering taste: coherent abstractions, consistency, idiomatic patterns, and a “north star” for a codebase.
  - Aesthetic taste: style, fashion, and signaling (with jokes about tech uniforms and poor tech “vibes”).
- Some note that taste alone is not enough; without effort and acquired skill, it produces informed complaints rather than good work.
AI, coding agents, and “perfect code”
- There’s active discussion around “agentic coding”:
  - One approach: define in detail what “good/perfect code” means in your codebase and use LLMs under strict guidelines to raise quality and consistency.
  - Critics counter that specification, review, and evolving requirements are still hard, and that “perfect” is ill-defined and often irrelevant to business success.
- Several mention “comprehension debt” and AI-created big balls of mud: AI can rapidly generate tangled codebases whose intent even AI later struggles to untangle.
- Complex domains (e.g., GPU kernels, legacy systems, security-sensitive or obscure integrations) are cited where current models still struggle badly.
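The “strict guidelines” approach above implies making the definition of good code machine-checkable, so agent output can be gated before it lands. A hypothetical sketch of such a gate using Python’s `ast` module; the specific rules (a maximum function length, no bare `except`) are illustrative assumptions, not rules proposed in the discussion:

```python
import ast

MAX_FUNC_LINES = 30  # assumed house rule for this sketch

def violations(source: str) -> list[str]:
    """Return style-rule violations found in LLM-generated `source`."""
    problems = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNC_LINES:
                problems.append(f"{node.name}: {length} lines (max {MAX_FUNC_LINES})")
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            problems.append(f"bare except at line {node.lineno}")
    return problems

generated = """
def risky():
    try:
        pass
    except:
        pass
"""
print(violations(generated))  # flags the bare except handler
```

An agentic loop could run checks like this (alongside the real test suite) and feed the violation list back to the model, which is one concrete way to encode “taste” as enforceable guidelines rather than vibes.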
Broader impacts and concerns
- Some see everyone becoming more like investors: the scarce skill shifts from doing to making good bets and allocating effort.
- Others worry about an “ocean of crap”: AI floods content and internal docs, making it harder for high-taste work to be discovered or appreciated.
- Multiple comments highlight that aligning software with messy human needs and institutions will keep human judgment critical, regardless of AI progress.