Can I ethically use LLMs?
Energy and environmental impact
- Several commenters say the article overstates LLM energy use: datacenter GPUs are far more efficient per query than local models, and people confuse instantaneous power with total energy.
- Others argue that AI’s footprint is significant but small relative to everyday activities (e.g., beef consumption, driving), so focusing on LLMs alone is inconsistent.
- The “3 bottles of water per query” claim is widely criticized as misleading and clickbait; critics note large uncertainties and poor assumptions.
- Some say energy use is ethically neutral and that higher AI demand could accelerate clean energy (especially nuclear). Others counter that betting the planet on future tech is itself unethical.
- There’s a side debate about blockchain: several commenters condemn “blockchains” broadly, while others insist only Proof-of-Work chains are energy-heavy and that most current chains are not.
Training data, creators, and compensation
- A strong distinction is drawn between search crawlers (which drive traffic back to sites) and LLM training (which extracts value without attribution or return traffic).
- Some want technical or legal mechanisms to block training on their content; others emphasize copyleft concerns and argue LLMs “launder” GPL/AGPL code.
- A recipe-planning app is used as a concrete dilemma: it pulls structured value from ad-supported sites without returning much value to them. Suggestions include revenue sharing, automatic micro-payments, or requiring users to open the original site.
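The technical opt-out mechanisms mentioned above are today mostly implemented through the Robots Exclusion Protocol: publishers list known AI training crawlers in `robots.txt`. A minimal example using two publicly documented crawler tokens (GPTBot for OpenAI, CCBot for Common Crawl) might look like this; note that compliance is entirely voluntary on the crawler’s side, which is why commenters also ask for legal mechanisms:

```
# robots.txt — opt out of known LLM training crawlers
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```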
Jobs, automation, and product quality
- One camp argues automation has always driven material progress and job loss is not inherently unethical, provided society handles redistribution (UBI, safety nets).
- Others fear a “peak happiness” point, past which further automation degrades meaning in work, hollows out crafts, and pushes everything toward “good enough” mass output.
- There’s concern that AI tools empower companies to replace people rather than empower workers, especially at large scale.
Bias, misinformation, and surveillance
- Some see hallucination and bias as non-fundamental problems that will improve with time and can be mitigated by user awareness.
- Others view biased LLMs as scalable propaganda machines: if models systematically distort “truth,” they become tools of manipulation.
- A feedback loop is feared: pervasive data capture → better behavioral modeling → cheaper automation of human tasks → concentrated control and surveillance.
- Existing uses of AI for policing, facial recognition, and mass monitoring are cited as precedent.
Personal use, abstention, and capitalism
- A few refuse to use LLMs at all, seeing them as dangerous cognitive prosthetics, especially when run by “state-aligned” or corporate actors.
- Others argue that under capitalism almost all consumption is exploitative; LLMs are another such contradiction: likely built on “stolen” data, yet potentially life-saving (medicine, education).
- This creates a prisoner’s dilemma: collectively abstaining might be better, but individually most people gain by using AI, and non-users may be economically sidelined.
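The dilemma described above has the classic prisoner’s-dilemma structure, which a tiny payoff table makes concrete. The numbers below are invented purely for illustration; they only encode the ordering the commenters assert (using AI dominates individually, yet mutual abstention would beat mutual use):

```python
# Hypothetical payoffs for "use AI" vs "abstain", keyed by (my choice, others' choice).
# Values are (my payoff, others' payoff); the numbers are illustrative only.
payoffs = {
    ("use", "use"):         (1, 1),  # everyone uses: arms race, modest gain
    ("use", "abstain"):     (3, 0),  # I use while others abstain: I pull ahead
    ("abstain", "use"):     (0, 3),  # I abstain while others use: I'm sidelined
    ("abstain", "abstain"): (2, 2),  # collective abstention: the claimed optimum
}

# "use" strictly dominates "abstain" for the individual...
assert payoffs[("use", "use")][0] > payoffs[("abstain", "use")][0]
assert payoffs[("use", "abstain")][0] > payoffs[("abstain", "abstain")][0]
# ...yet mutual abstention beats the dominant-strategy outcome.
assert payoffs[("abstain", "abstain")][0] > payoffs[("use", "use")][0]
```

The assertions pass, showing why individually rational choices can still leave everyone worse off than coordinated abstention.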
Open vs closed models and power concentration
- Some believe near–state-of-the-art models will commoditize, with open-source efforts (e.g., fully open models with transparent training data) offering a more ethical path.
- Others think corporate, closed LLMs will still centralize power even if open models exist, because scale, capital, and surveillance infrastructure sit with a few firms.