The Influentists: AI hype without proof
Skepticism about AI Hype & “Influentists”
- Many commenters see current AI discourse as advertising, not evidence: dramatic claims like “built in an hour what took a team a year” often hide that the result is a toy, a partial clone, or something heavily guided by expert prompts.
- Social platforms reward sensationalism and vagueness; clarifying follow‑ups or nuanced explanations get a fraction of the reach of viral posts.
- Influencers are seen as follower‑farming or monetizing engagement, similar to past waves of day‑trading, crypto, and FBA grifts. Astroturfed Reddit threads and “trust me bro” anecdotes deepen distrust.
- Several argue this is an incentives problem: engagement, career positioning, and national and corporate AI “races” all push toward overclaiming.
Real-World Use: Helpful but Limited and Risky
- Practitioners report LLMs are good at: quick prototypes, small tools, glue code, learning, summarizing, drafting emails, basic scripts, and “vibe‑coding” personal projects.
- They’re seen as much weaker for complex domains (e.g., Spark, legacy codebases), performance‑sensitive systems, and security‑critical or regulated work. Models tend to be verbose, duplicate code, and introduce “weird” bugs no sane human would write.
- Several highlight that getting real value requires deep domain expertise to spot subtle errors, and that non‑experts over‑trust outputs. Verification, accountability, liability, and security still rest on humans.
Why There’s Little Public “Proof”
- Reasons given for the lack of concrete, open demos:
  - Outputs are domain‑specific and not broadly reusable.
  - Prompts and pipelines are proprietary or a competitive edge.
  - Workflows and prompts are boring or embarrassing once revealed.
  - Fear of harassment or “slop” accusations when admitting AI use.
Macro Impact and Expectations
- Skeptics argue that if AI really equaled “100k digital workers,” we’d already see obvious, dramatic economic and product changes; instead we see mostly PoCs and incremental tools.
- Enthusiasts counter that progress from early models to current ones is striking, tooling is improving fast, and significant labor displacement or transformation may still be 5–15 years out.
- Some conclude AI magnifies inequality of skill: it makes competent people more productive while enabling low‑effort “slop” from others.
Desired Norms
- Multiple commenters endorse shifting admiration back to reproducible results, detailed process write‑ups, and honest acknowledgment of limitations, and away from “hype first, context later.”
- Others say discussion is largely unproductive and that individuals should simply try the tools and judge by their own results.