Using ChatGPT is not bad for the environment

Scope of the claim

  • Many commenters argue the title is misleading: the article really shows that per‑user ChatGPT emissions are small relative to major sources (transport, heating, food), not that “using ChatGPT is not bad” in any absolute sense.
  • Several people think such messaging risks trivializing AI’s aggregate impact, coming just as hyperscalers walk back or miss their climate targets, partly because of AI growth.

Inference vs. training energy

  • Broad agreement: inference for casual personal use is relatively low impact; one commenter notes estimates for GPT‑4 training around 50–60 GWh, comparable to a few hundred long‑haul flights.
  • Critics stress training is not a one‑time cost: models are retrained, scaled up, and many experimental models are discarded. Calling training “one‑off” is seen as misleading.
  • Some highlight that comparisons often rely on outdated GPT‑2/BERT‑era statistics or conservative assumptions; others say newer hardware and software make inference far more efficient than those early estimates suggest.
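The 50–60 GWh training estimate cited above can be put in context with a back‑of‑envelope conversion. All constants below (grid carbon intensity, per‑flight emissions) are illustrative assumptions, not figures from the thread:

```python
# Back-of-envelope: convert an assumed training-energy estimate into CO2
# and compare it with long-haul flights. All constants are rough
# assumptions; real figures vary widely with grid mix and aircraft.

TRAINING_GWH = 55            # midpoint of the 50-60 GWh estimate cited
GRID_KG_CO2_PER_KWH = 0.4    # assumed average grid carbon intensity
FLIGHT_TONNES_CO2 = 150      # assumed CO2 for one long-haul flight (whole aircraft)

training_kwh = TRAINING_GWH * 1_000_000
training_tonnes_co2 = training_kwh * GRID_KG_CO2_PER_KWH / 1000
equivalent_flights = training_tonnes_co2 / FLIGHT_TONNES_CO2

print(f"~{training_tonnes_co2:,.0f} t CO2, roughly {equivalent_flights:.0f} long-haul flights")
```

Swapping in a cleaner grid (say ~0.05 kg/kWh for a hydro‑heavy region) drops the total by an order of magnitude, which is one reason such comparisons swing so widely between commenters.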

Data centers, grid and water

  • Multiple comments emphasize that while data centers are ~1–1.3% of grid demand today, AI loads are highly localized and can stress regional grids, reduce flexibility, and drive new fossil peaker plants.
  • Others respond that siting near renewables, nuclear, or new generation mitigates this and that major cloud providers have aggressive clean‑energy targets.
  • “Water usage” is debated: some point out it mostly means evaporative cooling water (at power plants and data centers); others note this still matters in water‑stressed regions and in specific watersheds.
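A per‑query water estimate can be sketched the same way. The per‑query energy and water‑intensity constants below are assumptions for illustration, not figures from the thread:

```python
# Rough per-query water estimate: on-site evaporative cooling plus water
# consumed upstream at thermoelectric power plants. All constants are
# illustrative assumptions.

WH_PER_QUERY = 0.3       # assumed inference energy per query
ONSITE_L_PER_KWH = 1.8   # assumed data-center cooling water per kWh
OFFSITE_L_PER_KWH = 1.9  # assumed power-plant water consumption per kWh

kwh_per_query = WH_PER_QUERY / 1000
liters_per_query = kwh_per_query * (ONSITE_L_PER_KWH + OFFSITE_L_PER_KWH)

print(f"~{liters_per_query * 1000:.2f} mL of water per query")
```

Per query this is tiny, but as the comments note, aggregate demand still matters where the load concentrates in a water‑stressed watershed.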

Individual vs systemic responsibility

  • One camp: debating per‑query footprints is a distraction created by “personal carbon footprint” framing; real leverage is in regulation, grid decarbonization, and industrial policy.
  • Another camp: individual choices and cultural shifts (e.g., less meat, fewer flights, working from home) do scale up and influence policy; dismissing them entirely is harmful.
  • Some argue using ChatGPT contributes to demand signals that justify ever‑larger training runs, so even if one query is cheap, widespread adoption isn’t neutral.

Usefulness and necessity of LLMs

  • Supporters say LLMs are clearly useful for coding, writing, analysis, and some medical triage–like tasks; they see them as just another electricity‑using tool whose benefits must be weighed against costs.
  • Skeptics contend LLMs rarely beat specialized tools (search, chess engines, classical ML) on efficiency, there’s no evident productivity boom yet, and current “omnipotent chatbots” may not justify their environmental burden.

Comparisons to other activities

  • Frequent comparisons: flights, driving, heating, video streaming, meat and dairy, and plastic pollution.
  • Many argue climate efforts must tackle all major sources in parallel, not excuse new ones by saying “other things are worse.”
  • Others counter that activist attention is finite; over‑indexing on LLMs could pull focus from far larger, older emitters.