Scoring Show HN submissions for AI design patterns
Vibe Coding, Quality, and Effort
- Many commenters distinguish between “vibe-coded” projects (LLM-heavy, fast, shallow) and engineering-driven ones (thoughtful design, tests, refactoring, docs).
- Several note you can build high‑quality products with LLMs, but the base rate is low because most people stop after a weekend MVP.
- Proposed quality signals: sustained development over months, non‑feature commits (tests, benchmarks, cleanups), and lower “sloppification” in code and UI.
- Attempts to use LOC growth as a vibe‑coding detector ran into measurement problems and false positives/negatives.
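To illustrate why such detectors misfire, here is a minimal sketch of one possible burst-of-LOC heuristic, assuming commit history has been reduced to (timestamp, lines-added) pairs. The 24-hour window and the interpretation of the ratio are hypothetical choices, not anything the thread's tool actually uses, and they show how easily legitimate patterns (an initial scaffold commit, a vendored dependency) become false positives:

```python
from datetime import datetime, timedelta

def loc_burst_ratio(commits):
    """Fraction of total lines added that landed in the busiest 24-hour window.

    commits: iterable of (datetime, lines_added) pairs, in any order.
    A value near 1.0 suggests a weekend-MVP burst; sustained projects
    score lower. Note the obvious failure modes: a big first commit or
    a vendored library inflates the ratio for perfectly careful work.
    """
    commits = sorted(commits)
    total = sum(loc for _, loc in commits)
    if total == 0:
        return 0.0
    busiest = 0
    for i, (start, _) in enumerate(commits):
        # Sum all lines added within 24h of this commit's timestamp.
        window = sum(loc for t, loc in commits[i:]
                     if t - start <= timedelta(hours=24))
        busiest = max(busiest, window)
    return busiest / total

# Two 500-line commits five hours apart: everything in one burst.
burst = [(datetime(2024, 1, 1, 0), 500), (datetime(2024, 1, 1, 5), 500)]
# 100 lines a month for five months: low burst ratio.
spread = [(datetime(2024, m, 1), 100) for m in range(1, 6)]
```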
Side Projects: Learning vs Output
- One camp uses side projects for learning and enjoyment; AI is seen as a “skip to the end” button that removes the fun and the practice.
- Another camp values speed and idea exploration; AI lets them validate many more ideas and iterate on abstractions or product concepts.
- Some split work: AI for boring glue (frontend, refactors, boilerplate), human effort for architecture, domain thinking, and “hard” engineering.
Design Homogeneity and “AI Slop”
- Commenters recognize a common “AI look”: gradients, centered hero, stat banners, rounded cards, colored left borders, trendy fonts, dark themes with marginal contrast.
- Others argue most of these patterns predate AI (Bootstrap, Tailwind, shadcn/ui), so “AI slop” detectors risk flagging lots of human‑made designs.
- Some treat generic, AI‑ish design as acceptable for MVPs; others see it as a proxy for lack of care and originality.
- There is interest in open‑sourcing the scoring tool and publishing lists of “heavy slop / mild / clean” sites to validate its usefulness.
Accessibility Debates
- Many criticize LLM‑styled UIs for poor contrast and weak adherence to accessibility guidelines, arguing it hurts all users and can be a legal risk.
- Others openly say they don’t care, prompting strong pushback citing ethics, future disability, and practical benefits (faster, lighter, more robust UIs).
- Some note AI can improve accessibility if explicitly instructed and tested (e.g., WCAG prompts, Lighthouse/MCP tools).
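The contrast complaint, at least, is mechanically checkable. The function below implements the standard WCAG 2.x relative-luminance and contrast-ratio formulas (WCAG AA requires 4.5:1 for normal-size text); only the example colors are illustrative:

```python
def _relative_luminance(hex_color):
    """WCAG 2.x relative luminance of an sRGB hex color like '#1a2b3c'."""
    hex_color = hex_color.lstrip("#")

    def channel(pair):
        c = int(pair, 16) / 255
        # Linearize the gamma-encoded sRGB channel per the WCAG definition.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = (channel(hex_color[i:i + 2]) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, always >= 1.0; order of arguments is irrelevant."""
    lighter, darker = sorted(
        (_relative_luminance(fg), _relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

Black on white gives the maximum ratio of 21:1, while a typical “AI look” light-grey-on-white combination like `#999999` on `#ffffff` falls well short of the 4.5:1 AA threshold.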
Signal-to-Noise and Show HN
- Several feel Show HN is flooded with low‑effort LLM projects that are easy to replicate and rarely maintained, eroding its value as a place to learn from others’ craft.
- Others counter that more cheap experiments mean faster exploration of the idea space; the real problem is discovery and filtering.
- Suggested responses: classifiers (even Bayesian) for “slop,” attention to maintenance history, friction mechanisms (e.g., review others’ projects before posting), or HN‑level tooling that surfaces engineering rigor rather than just polished landing pages.
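A toy version of the suggested Bayesian classifier could look like the following: a bag-of-words naive Bayes with add-one smoothing over two labels. The labels, training phrases, and feature choice (whitespace tokens) are all invented for illustration; a real filter would need far richer features (commit history, maintenance signals), as the thread itself argues:

```python
import math
from collections import Counter

def train(examples):
    """Fit a naive Bayes model from (text, label) pairs, labels 'slop'/'clean'.

    Returns per-label log-priors and smoothed per-word log-likelihoods.
    """
    counts = {"slop": Counter(), "clean": Counter()}
    doc_totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        doc_totals[label] += 1
    vocab = set(counts["slop"]) | set(counts["clean"])
    model = {}
    for label in counts:
        denom = sum(counts[label].values()) + len(vocab)  # add-one smoothing
        model[label] = {
            "prior": math.log(doc_totals[label] / sum(doc_totals.values())),
            "logp": {w: math.log((counts[label][w] + 1) / denom) for w in vocab},
            "unk": math.log(1 / denom),  # unseen words
        }
    return model

def classify(model, text):
    """Return the label with the highest posterior log-probability."""
    def score(label):
        m = model[label]
        return m["prior"] + sum(m["logp"].get(w, m["unk"])
                                for w in text.lower().split())
    return max(model, key=score)

# Hypothetical training data echoing the thread's "AI look" vocabulary.
examples = [
    ("gradient hero stat banners rounded cards", "slop"),
    ("centered hero gradient dark theme", "slop"),
    ("benchmarks tests refactoring docs", "clean"),
    ("sustained commits tests cleanup", "clean"),
]
```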