I quit. The clankers won
Role of AI in Coding and Skills
- Many see coding agents as a massive productivity boost: faster prototyping, easier refactors, and access to new tools and techniques that would have taken much longer to learn manually.
- Others argue agents mostly generate mediocre or brittle code, increase complexity, and require heavy human review; “vibe-coded” codebases are seen as fragile and hard to maintain.
- Concern that juniors using AI from day one don’t build foundational skills, echoing earlier worries about calculators, GPS, and IDEs. Some report firsthand that juniors get stuck in AI-driven rabbit holes.
- Debate whether “effective use of coding agents” is now the most important developer skill. Counterpoint: real differentiation will still come from architecture, review, security, and understanding systems.
Deskilling, Dependency, and Long-Term Risk
- Fears of widespread deskilling: fewer people able to build OSes, compilers, infrastructure, or debug complex systems without AI.
- Some see this as temporary: if things break badly enough, incentives will re-create deep expertise, as with COBOL or other legacy tech. Others worry there may not be enough time or institutional slack when that happens.
- Comparison to past tech shifts (assembly → C, classic ML → neural nets): some argue LLMs are just the next abstraction layer; critics counter that this ignores their current unreliability, hallucinations, and shallow “understanding.”
Art, Writing, and Human Voice
- Strong split on generative art/text: some call it “irredeemably bad art” and emphasize valuing human-made work (even amateur blogs, kids’ drawings).
- Others note that audiences often don’t care about “genuine creativity,” citing pop music and utilitarian content consumption; they expect AI-generated art to be widely accepted.
- Several argue blogging and journaling matter more now as a way to stay mentally sharp, develop taste, and maintain authentic human voices in an “AI dark forest.” Writing for oneself (even with few readers) is framed as its own reward.
Ethics, IP, and Training Data
- Strong resentment that models were trained on code and writing without honoring licenses (e.g., copyleft, CC BY-SA) or paying creators. Some see this as a betrayal of free/open-source ideals, not their fulfillment.
- Others argue training on public material is simply how progress works and liken the complaints to creators demanding 100% of the value from every downstream use of their work.
- Disagreement over whether models are legally/meaningfully “derivative works” and whether copyright is effectively dead in practice.
Workplace, Careers, and Management
- Mixed reports on companies investing in developer growth: some see genuine training and mentoring; others see only lip service, with management focused on short-term delivery and, now, on AI-driven “efficiency.”
- Worry that AI lets management treat engineers as interchangeable “prompt operators,” eroding craft and pushing wages toward a race to the bottom, especially if everyone can prompt but few can deeply understand systems.
- Counter-view: organizations will still need experienced people to specify, supervise, and audit AI outputs, especially for critical systems; coding may shrink, but engineering judgment remains central.
Coping Strategies and Resistance
- Proposed responses include: blocking crawlers, favoring local models over cloud, being strict on AI-generated PRs, and using AI mainly as a learning/exploration tool rather than a coding crutch.
- Some advocate collective resistance or sabotage (wasting tokens, slow-walking AI-heavy processes), though others point out this can just help organizations refine their AI usage.
- A recurring emotional theme is mourning: the sense that a beloved craft and online culture are being flooded by “good enough” machine output; others respond with enthusiasm, seeing a new, exciting tooling era.
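The “blocking crawlers” suggestion above is usually implemented with robots.txt directives aimed at known AI user agents. A minimal sketch (the user-agent names are the publicly documented crawlers for OpenAI, Anthropic, Common Crawl, and Google’s AI-training opt-out; robots.txt is advisory, so non-compliant bots can simply ignore it):

```
# robots.txt — ask known AI training crawlers to stay out.
# Advisory only: compliant bots honor this; others may not.

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

# Opts out of Google's AI training without affecting Search indexing
User-agent: Google-Extended
Disallow: /
```

Commenters who favor this approach often pair it with server-side blocking (by user agent or IP range), since robots.txt alone depends entirely on crawler good faith.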