2 in 3 Americans think AI will cause major harm to humans in the next 20 years [pdf] (2024)
Perceived Domains of Harm
- Several commenters think AI’s gravest risks are in news and elections: deepfakes and synthetic media could spread false beliefs at scale and, more importantly, erode trust in any information source, further entrenching echo chambers and undermining democracy.
- Others see news/elections as relatively minor compared to AI-driven harms in employment, customer service, government, and healthcare, predicting a “dystopian” experience for ordinary people.
Existential Risk vs Current Harms
- A debated book on AI extinction risk is criticized as speculative and “unserious,” with some arguing that true doom would require handing opaque systems full physical autonomy—something they see as unlikely and avoidable.
- Critics of “doomerism” argue that intelligence does not logically imply homicidal intent, and that fears of robot uprisings are projections of slave‑revolt anxieties rather than rational conclusions.
- Others insist current harms—inequality, fake news, privacy violations, IP issues—are more urgent than sci‑fi extinction scenarios.
Data Centers, Energy, and Jobs
- Strong concern about AI-centric data centers: high power and water use, local pollution, grid instability, rising utility prices, and very few jobs created relative to their economic impact.
- Some predict AI data centers will “replace a million workers” with a few hundred local staff, raising fears about what happens when it becomes uneconomic to employ humans.
- Counterpoints: if society can manage the transition, automating work and “freeing human capital” could be positive; others respond that current power structures make fair redistribution unlikely.
- Debate over whether restricting data centers would simply push them to less regulated regions, or whether it could democratize compute by incentivizing local/edge hardware instead of hyperscale cloud.
Access, Affordability, and Local Models
- One thread highlights enthusiastic uptake of AI in developing countries (e.g., improving written communication) and claims many use cases are already cheap or free.
- This is challenged as possibly VC-subsidized and unsustainable; commenters note the lack of clear profitability data for major providers.
- Some argue inference is already close to profitable and competition plus open weights will drive prices toward marginal cost.
- Long sub-discussion on local vs hosted models: hosted frontier models are still meaningfully better, but small open models can already cover simple communication tasks; many expect a gradual shift to local LLMs as hardware and software improve.
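To make the "local LLM" option concrete, here is a minimal sketch of handling a simple communication task with a small open-weights model on local hardware. It assumes the Hugging Face `transformers` library and names `Qwen/Qwen2.5-0.5B-Instruct` purely as an example of the small-open-model class the commenters describe; any comparable model would do, and hosted frontier models may still do better on harder tasks.

```python
# Minimal sketch of a local, privacy-preserving LLM call.
# Assumptions: Hugging Face `transformers` (with a PyTorch backend) is installed,
# and "Qwen/Qwen2.5-0.5B-Instruct" stands in for any small open-weights chat model.
from transformers import pipeline

chat = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

messages = [
    {"role": "user",
     "content": "Rewrite this more politely: 'Send me the report now.'"},
]

# Generation runs on the local machine; no prompt or response leaves it.
result = chat(messages, max_new_tokens=60)
print(result[0]["generated_text"][-1]["content"])  # the model's reply
```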
Mental Health, Safety, and Regulation
- A linked article about ChatGPT interactions with suicidal teenagers triggers debate:
  - Some warn AI may be as bad as or worse than social media for vulnerable users, capable of validating extreme thoughts.
  - Others note that in the reported case the model repeatedly urged the teen to seek help, and emphasize unknown counterfactuals (how many were helped vs harmed).
  - Ethical dispute: Is net benefit (many helped, a few harmed) acceptable? Utilitarian vs deontological views clash, especially around analogies to a therapist who occasionally encourages suicide.
- Many participants call for AI regulation analogous to cars, drugs, or lotteries: accept use but constrain harms and extract societal benefit, while warning about current regulatory capture and lack of enforcement power.
Responsibility and “Tools vs Agents”
- Some assert “AI doesn’t kill people, AI companies kill people,” arguing responsibility lies with designers, deployers, and business models, not the code itself.
- Others insist AI is “just a tool” without intent, warning that anthropomorphizing it (as an emerging “species”) distorts thinking.
- Counter‑argument: lethal or unsafe tools are typically recalled; persistent, predictable harm from an AI system should imply accountability and redesign, even if it has no agency.
Public Understanding and Opinion Polls
- The Pew topline and a separate survey show many Americans misunderstand how chatbots work (large shares think they look up exact answers or run fixed scripts rather than generating text token by token; see the sketch after this list).
- Some say this undermines public predictions about AI risk; others argue laypeople don’t need technical understanding to recognize real harms, just as one can oppose toxic chemicals without understanding their chemistry.
- There’s concern about misattributing harms (e.g., blaming “AI” vaguely instead of specific design choices, incentives, or laws).
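For context on the misconception noted above, the sketch below illustrates what a chatbot actually does: it scores every possible next token and samples from that probability distribution, rather than retrieving a stored answer or following a fixed script. It assumes the Hugging Face `transformers` library and uses `distilgpt2` only because it is small; the mechanism is the same for larger chat models.

```python
# Minimal sketch of next-token generation, assuming Hugging Face `transformers`
# and the small model "distilgpt2" (chosen only for illustration).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

prompt = "Two in three Americans think AI will"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]   # a score for every token in the vocabulary

probs = torch.softmax(logits, dim=-1)         # scores become a probability distribution
top = torch.topk(probs, 5)                    # the five most likely continuations

for p, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([token_id])!r}  p={p:.3f}")
```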
Broader Social and Philosophical Concerns
- Multiple commenters frame AI as an accelerant of existing internet problems: disintegrating shared reality, hyper‑personalized echo chambers, and easier mass manipulation by whoever controls platforms.
- Several see AI as the “spear tip” of a larger consolidation of power by capital and political elites, in a “casino society” where a few winners justify widespread precarity.
- Others criticize “tech” culture for optimizing quantifiable outputs (engagement, profit) while ignoring intangible foundations of society—meaning, morality, aesthetics—and treating people and their data as extractable resources.
- Fears arise that this legitimacy gap, plus economic disruption, could provoke a harsh backlash or “techlash,” with joking but pointed references to a Butlerian Jihad–style revolt against thinking machines.
Enthusiasm Amid Skepticism
- Amid the pessimism, commenters recount concrete benefits: AI as a communication aid, help with medical self‑advocacy, productivity boosts for skilled workers, and potential for local, privacy‑preserving models.
- The thread as a whole reflects strong ambivalence: AI is seen as powerful, already harmful in some ways, potentially beneficial in others, and tightly entangled with broader issues of inequality, governance, and social cohesion.