The AI industry is discovering that the public hates it
Overall sentiment
- Many commenters see rising public hostility toward AI, driven less by the tech itself than by how it’s being deployed and marketed.
- Some argue “the public hates AI” is overstated: outside tech and arts circles, many people are indifferent or casually positive, using it like a better search tool.
- Others say people now routinely use “AI” as a pejorative (slop, fake, low quality) even when content isn’t AI-generated.
Perceived harms and externalities
- Job loss and deskilling: fear of mass layoffs, weaker worker bargaining power, and erosion of meaningful, fulfilling work. Some devs feel forced to use AI, with dissent treated as a career risk.
- Economic inequality: perception that AI concentrates gains among a small set of corporations and investors while everyday life gets harder (housing, healthcare, wages).
- Environmental and infrastructure costs: large datacenters driving up electricity prices, straining grids and water supplies, and competing for emissions allowances, land, and even housing construction capacity in some regions.
- Cultural and informational damage: explosion of low-quality “slop,” fake videos, AI spam in media, and harder-to-detect fraud and manipulation.
- Surveillance and control: concern about AI-enhanced monitoring, automated HR decisions, and “social credit”-like systems used by states and corporations.
Industry behavior and messaging
- Many blame AI leaders’ own rhetoric: loudly predicting job “bloodbaths” and existential risks while racing to sell the tech to governments and corporations.
- Surveys presented by boosters (e.g., “93% at an AI conference are excited”) are mocked as suffering from sample bias and social pressure.
- There’s resentment over training on copyrighted works without consent, and perceived hypocrisy when companies object to others training on their outputs.
Jobs, productivity, and UBI
- Some propose taxing AI and funding UBI or welfare expansions to share productivity gains; critics say the numbers don’t add up, especially at current AI revenue levels.
- Debate over whether UBI would just entrench a two-tier society with minimal subsistence versus genuinely replacing lost careers and status.
- Skepticism that meaningful redistribution will happen given current political and corporate incentives.
Usefulness and limits of current AI
- Many programmers say LLMs are impressive but unreliable “mediocre assistants” that require extensive verification and generate tech debt.
- Others report large personal productivity gains and don’t want to code without them.
- Some highlight genuinely beneficial uses (e.g., in medicine, research), but others call these mostly aspirational compared to current visible downsides.
Broader context and resistance
- AI backlash is seen as part of wider anger about inequality, precarious work, and unaccountable elites.
- There is debate over how to respond: stronger regulation, slowing or banning “frontier” research, non-violent mass protest, and, more controversially, whether political violence has historically been effective.