Thesis: Interesting work is less amenable to AI assistance
Scope of AI‑Friendly Work (Boilerplate vs Core Logic)
- Many argue most real-world software is boilerplate (CRUD, auth, billing, emails), where AI can help a lot (the kind of code sketched after this list).
- Others counter that their work rarely touches CRUD; interesting niches (compilers, modeling languages, research tools) get little benefit because models lack relevant training data.
- Some say mature orgs already solved boilerplate with frameworks; regenerating it with AI risks creating new technical debt where off‑the‑shelf solutions already exist.
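To make the boilerplate half of this argument concrete, the sketch below shows the kind of CRUD plumbing commenters have in mind, written against Python's stdlib sqlite3. The `users` table and its fields are invented for illustration; they are not taken from the discussion.

```python
# Hypothetical example of "boilerplate" CRUD: a thin create/read/delete layer
# over sqlite3. The users table and its columns are invented for illustration.
import sqlite3


def connect(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)"
    )
    return conn


def create_user(conn: sqlite3.Connection, email: str) -> int:
    cur = conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
    conn.commit()
    return cur.lastrowid


def get_user(conn: sqlite3.Connection, user_id: int):
    return conn.execute(
        "SELECT id, email FROM users WHERE id = ?", (user_id,)
    ).fetchone()


def delete_user(conn: sqlite3.Connection, user_id: int) -> None:
    conn.execute("DELETE FROM users WHERE id = ?", (user_id,))
    conn.commit()


if __name__ == "__main__":
    conn = connect()
    uid = create_user(conn, "a@example.com")
    print(get_user(conn, uid))  # (1, 'a@example.com')
    delete_user(conn, uid)
```

Repetitive layers like this are what the “AI can help a lot” camp points to; the counterargument is that compiler or research-tool code looks nothing like it.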
Greenfield vs Long‑Lived Systems
- Several see LLMs as excellent for greenfield scaffolding (“Rails‑new for everything”) and quick dashboards or UIs.
- They’re considered much weaker for late‑stage changes in long‑lived systems that cut across teams and APIs and run into deep domain constraints (e.g., clustering visualization, numerical subtleties).
AI as Research, Brainstorming, and Process Tool
- A recurring pattern is using AI as a “Google/StackOverflow on steroids” for rubber‑ducking, ideation, and domain discovery, rather than as the source of final code or text.
- Some describe it as offloading “thinking time” for basic background knowledge, not for novel results.
Reliability, Hallucinations, and Verification Burden
- Many report models fabricating API fields, schema columns, or domain behavior despite explicit constraints.
- For scientists and engineers, verification is already the hardest part; AI that introduces new, non‑obvious errors makes that harder, not easier (a toy check of this kind is sketched after this list).
- There’s skepticism that current LLMs can truly “reason”; they’re seen as probabilistic pattern machines that output plausible but often wrong answers.
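To illustrate the verification burden described above, here is a minimal sketch of one narrow check: confirming that columns an LLM referenced actually exist before running its SQL. The `orders` table and the `discount_pct` column are hypothetical stand‑ins for a plausible-but-fabricated field.

```python
# Hypothetical verification step: reject model-suggested SQL that references
# columns the real schema does not have. Table and column names are invented.
import sqlite3


def existing_columns(conn: sqlite3.Connection, table: str) -> set:
    # PRAGMA table_info returns one row per column; the column name is at index 1.
    return {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}


def missing_columns(conn: sqlite3.Connection, table: str, referenced: set) -> set:
    # Columns the model mentioned that do not exist in the actual table.
    return referenced - existing_columns(conn, table)


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    # Suppose a model wrote a query against a plausible but nonexistent column.
    print(missing_columns(conn, "orders", {"id", "total", "discount_pct"}))
    # -> {'discount_pct'}
```

A fabricated column name is the easy case to catch; the non‑obvious domain errors mentioned above are what make verification genuinely hard.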
Model Quality: Local vs Frontier
- Reported experiences with small local models (3–8B parameters) are notably poor for nontrivial refactors.
- Others claim hosted frontier models (o3, Gemini 2.5, Claude 4) are much stronger, though sending proprietary code to them raises privacy and legal concerns.
Ethics, Integrity, and Creativity
- Some writers/programmers see AI‑generated creative work as a violation of personal integrity and would avoid such authors thereafter.
- Others frame codegen as “fancy automated plagiarism”: useful when work can be adapted from prior art, but ethically gray and ill‑suited for genuinely new ideas.
Security, Policy, and “Cheating”
- There’s tension between corporate bans on LLMs (for secrecy/compliance) and the expectation that employees quietly use them anyway, or run local models.
- In high‑sensitivity domains (military, medical, cutting‑edge research), several insist strict controls or local deployment are non‑negotiable.
Productivity, Labor, and the Nature of “Interesting” Work
- One camp: AI removes the easy 50%, leaving humans to focus on the hard/interesting half, increasing job quality.
- Another: management will treat “2× productivity” as “½ the staff,” or people will simply slack off rather than reinvest the freed‑up time.
- Some broaden the debate: “interesting work” has always coexisted with drudgery; LLMs automate the latter but can’t originate fundamentally new concepts.