I've never been so conflicted about a technology

Environmental impact and prioritization

  • Many readers see the article’s “think of the planet” angle as weak or hypocritical, arguing that blogs, phones, cloud services, games, and cafés all consume resources, so LLMs are not uniquely harmful.
  • Several comments stress relative impact: casual ChatGPT use is likened to a few seconds of a hot shower, or to a small fraction of the energy footprint of beef consumption or video gaming (a back‑of‑envelope sketch of this arithmetic follows the list). From this view, focusing climate anxiety on LLM queries is misprioritized.
  • Others push back: yes, LLMs may be a small share of global energy today, but AI is driving rapid datacenter build‑out, local water and power strain, noise, and, in some cases, dirty on‑site generation or diversion of nuclear capacity for private use.
  • Debate over “whataboutism”: some say we must compare scales to avoid treating every molehill as a mountain; others insist we “need all fronts” and shouldn’t excuse a new, energy‑hungry industry just because meat or cars are worse.
  • Jevons paradox is raised: even if per‑query efficiency improves, cheaper AI could increase total demand so much that aggregate energy use still rises.
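
The two quantitative claims above are easy to sanity‑check. The sketch below runs both back‑of‑envelope calculations in Python; every figure in it (energy per query, shower power, query volumes, growth factors) is an illustrative assumption, not a measurement taken from the thread or the article.

```python
# Back-of-envelope sketch of the "relative impact" and Jevons-paradox arguments
# made in the thread. All figures below are illustrative assumptions; swap in
# whatever estimates you find credible.

WH_PER_QUERY = 3.0       # assumed energy per LLM query, in watt-hours
SHOWER_POWER_KW = 8.5    # assumed electric shower power draw, in kilowatts

# How many seconds of hot shower equal one query?
shower_seconds_per_query = WH_PER_QUERY / (SHOWER_POWER_KW * 1000 / 3600)
print(f"1 query ≈ {shower_seconds_per_query:.1f} s of an electric shower")

# Jevons-style scale-up: per-query energy drops, but total demand grows faster,
# so aggregate energy use still rises.
queries_per_day_now = 1e9    # assumed current daily query volume
efficiency_gain = 10         # per-query energy drops 10x
demand_growth = 50           # total query volume grows 50x

energy_now = queries_per_day_now * WH_PER_QUERY / 1e6                  # MWh/day
energy_later = (queries_per_day_now * demand_growth) \
    * (WH_PER_QUERY / efficiency_gain) / 1e6                           # MWh/day

print(f"today: ~{energy_now:,.0f} MWh/day; "
      f"after efficiency gains and demand growth: ~{energy_later:,.0f} MWh/day")
```

With these particular assumptions, one query lands around a second of electric showering, while a 10x efficiency gain swamped by 50x demand growth still quintuples total energy use, which is the Jevons point.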

Timing, governance, and necessity

  • One camp: it’s too early to close the ledger on AI; like social media, we’ll only know the net effect after years of use, and potential benefits might justify the cost.
  • Opposing camp: you must start the accounting now. Once AI is deeply embedded (jobs automated, services dependent on it), curbing its footprint will be politically and economically painful, as with cars or fossil fuels.
  • On “need”: some agree with the author that we don’t need LLMs to function, so why accept the extra emissions? Others argue almost nothing we enjoy is “needed” (web, Netflix, cafés), and utility is subjective.

Web, slop, and training data

  • Some think generative AI is just accelerating trends that already ruined the web (SEO spam, ad‑driven enshittification); the marginal damage from AI slop is small.
  • Others feel AI is qualitatively worsening the information environment and “destroying everything good” online.
  • Training‑data ethics divide commenters: some see scraping as no worse than human learning from culture; others are far more alarmed by privatized infrastructure and corporate capture than by the scraping itself.

AI usage patterns and future scale

  • Coding assistance and possible efficiency gains (e.g., fewer Electron apps, better optimization) are noted as potential offsets, though skeptics doubt many developers actually move to leaner stacks.
  • A detailed thread on Model Context Protocol (MCP) argues that when non‑programmers can turn natural‑language prompts into scheduled “programs” (weather alerts, email triage, etc.), the number of automated AI calls could explode, magnifying emissions far beyond current developer‑centric use.
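
To make that scaling concern concrete, here is a rough Python sketch comparing today’s interactive, developer‑style usage with a hypothetical future of always‑on scheduled automations. All user counts, run frequencies, and per‑call energy figures are assumptions picked only to illustrate the shape of the argument, not data from the thread.

```python
# Rough sketch of the scaling worry raised in the MCP thread: if scheduled,
# user-created automations each fire a handful of LLM calls per run, total call
# volume is no longer bounded by how often humans sit down to type a prompt.
# Every number here is an assumption chosen only to make the arithmetic concrete.

WH_PER_CALL = 3.0    # assumed energy per LLM call, in watt-hours

def daily_energy_mwh(users: float, automations_per_user: float,
                     runs_per_day: float, calls_per_run: float) -> float:
    """Total daily energy (MWh) for a given pattern of LLM calls."""
    calls = users * automations_per_user * runs_per_day * calls_per_run
    return calls * WH_PER_CALL / 1e6

# Today: mostly developers and enthusiasts prompting interactively.
interactive = daily_energy_mwh(users=50e6, automations_per_user=1,
                               runs_per_day=20, calls_per_run=1)

# Hypothetical future: non-programmers wiring up weather alerts, email triage,
# etc., each running on a schedule whether or not anyone is watching.
automated = daily_energy_mwh(users=500e6, automations_per_user=5,
                             runs_per_day=24, calls_per_run=3)

print(f"interactive use: ~{interactive:,.0f} MWh/day")
print(f"scheduled automations: ~{automated:,.0f} MWh/day "
      f"({automated / interactive:.0f}x)")
```

Under these made‑up numbers, scheduled automations outweigh interactive use by more than two orders of magnitude, which is the commenters’ point: the ceiling on call volume stops being human attention.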

AI and climate solutions

  • Some are pessimistic: we already know the main climate fixes (less fossil fuel use); AI won’t change the political will problem.
  • Others outline mechanisms for AI as a net positive: lowering design/construction costs of solar, wind, and nuclear; robotics‑assisted deployment; better grid and climate modeling. If AI tips the economics toward clean energy, its own footprint could be more than offset.