AI Is Like a Crappy Consultant

Perceived Productivity Gains

  • Many describe a large jump in usefulness: from “idiot intern” to something like a junior/crappy consultant.
  • Strong praise for AI code completion and refactoring: described as “god mode,” and faster than static typing plus IDE refactoring tools or Vim/Emacs macros for large, mechanical edits.
  • Helpful for learning unfamiliar APIs and replacing much Stack Overflow–style searching.
  • Some say AI is far better than Google search and expect it to displace traditional search; others prefer curated search engines (e.g., Kagi) for deterministic, non‑hallucinated results.

Code Quality, Architecture, and “Vibe Coding”

  • Common theme: AI is poor at architecture and data-structure design; it tends to force new problems into existing, suboptimal patterns and assumes the current code and the user’s instructions are correct.
  • “Vibe coding” (letting the AI build systems end‑to‑end) is seen as risky; multiple anecdotes describe silent failures (e.g., broken file migration scripts) and chaotic student projects.
  • Several argue that good engineering practice hasn’t changed: tests (ideally TDD), specs, and understanding the code are still critical; a TDD-style sketch follows this list.
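  • Example (hypothetical): a minimal TDD-style sketch in Python of that point; the slugify function and test cases are invented for illustration. A human writes the failing tests first, and any AI-generated implementation has to pass them:

        import re
        import unittest


        def slugify(title):
            # The part one might hand to the AI, but only after the tests below exist.
            # Lowercase, collapse runs of non-alphanumeric characters into single hyphens.
            slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
            return slug.strip("-")


        class TestSlugify(unittest.TestCase):
            # Written first, by a human, so generated code has to satisfy it.
            def test_spaces_and_punctuation_become_hyphens(self):
                self.assertEqual(slugify("Hello, World!"), "hello-world")

            def test_existing_hyphens_are_preserved(self):
                self.assertEqual(slugify("already-slugged"), "already-slugged")


        if __name__ == "__main__":
            unittest.main()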

Search, Fact-Checking, and Hallucinations

  • Disagreement over whether modern models meaningfully “fact-check” or “cite” sources, versus merely wrapping search tools.
  • Critics stress that the core network lacks source attribution, so it can’t explain where a given code fragment or fact came from, which underlies hallucinations and licensing issues.
  • Supporters counter that tool-augmented LLMs already behave like fact‑checkers for many tasks; a retrieval-and-cite sketch follows this list.
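  • Example (hypothetical): a sketch of how a tool-augmented “fact-checking” flow typically works; web_search and ask_llm are placeholder names, not a real API. Retrieved snippets and URLs are pasted into the prompt and the model is told to cite only from them, so attribution comes from the wrapper, not from the network’s weights:

        def fact_checked_answer(question, web_search, ask_llm):
            # web_search and ask_llm stand in for whatever search tool and LLM client
            # a given product wires together; the pattern is what matters here.
            results = web_search(question, max_results=5)  # [(title, url, snippet), ...]
            sources = "\n".join(f"[{i + 1}] {title} ({url}): {snippet}"
                                for i, (title, url, snippet) in enumerate(results))
            prompt = (
                "Answer the question using ONLY the numbered sources below.\n"
                "Cite a source as [n] after each claim, and say 'not found' if the "
                "sources do not cover it.\n\n"
                f"Sources:\n{sources}\n\nQuestion: {question}"
            )
            return ask_llm(prompt)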

Ethics, Training Data, and Citation

  • Significant concern that models are built on unconsented training data, can’t track provenance, and hallucinate licenses; contrasted with human expectations around attribution.
  • Others downplay this, focusing more on practical utility than on data origin.

Roles, Metaphors, and Anthropomorphism

  • AI is compared variously to a crappy consultant, a junior engineer, a fast intern, or a dangerous tool.
  • Some warn against anthropomorphizing; others argue that these models do exhibit rudimentary reasoning and produce genuinely novel, useful “knowledge,” disputing the “stochastic parrot” label.
  • Heated subthread over what “knowledge” means and whether LLMs “understand” anything.

Prompting Strategies and Tools

  • Multiple workflow tips: use AI for tests/docs and repetitive edits; constrain tasks tightly; ask for alternative solutions; force it to ask clarifying questions; reset context often (a sketch follows this list).
  • Tools like Aider + advanced models are reported to outperform basic IDE integrations, though they introduce complexity (diff formats, configuration).
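  • Example (hypothetical): a sketch of the “constrain tightly, force clarifying questions, reset context” tips in Python; ask_llm and the template wording are placeholders, not a real client API. Each task builds a fresh message list, so context does not accumulate across tasks:

        TASK_TEMPLATE = (
            "Task: {task}\n"
            "Constraints: touch only {files}; do not change public interfaces; "
            "propose at least two approaches before writing code.\n"
            "If anything is ambiguous, ask clarifying questions before answering."
        )


        def run_task(task, files, ask_llm):
            # ask_llm stands in for whatever chat client is in use.
            # A fresh message list per call means the context is reset for every task.
            messages = [{"role": "user",
                         "content": TASK_TEMPLATE.format(task=task,
                                                         files=", ".join(files))}]
            return ask_llm(messages)


        # Example of a tightly scoped request rather than an open-ended "fix my app":
        # reply = run_task("Add input validation to parse_config",
        #                  ["config.py", "test_config.py"], ask_llm)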

Socioeconomic and Cultural Notes

  • Some see AI as aligning with executive incentives: cheap, confident answers that fuel interest in replacing developers.
  • Calls for unionization and worries about low-quality, AI‑driven software becoming widespread.