Jevons paradox
Jevons Paradox in AI and Nvidia
- Many comments apply Jevons to AI: cheaper or more efficient training/inference (e.g., DeepSeek R1, synthetic data, RL) could drive more total AI usage and thus more total GPU demand.
- Some argue the DeepSeek paper is actually bullish for Nvidia: synthetic data pipelines and “thinking models” imply more and better foundation models, hence more GPU usage overall.
- Others counter that Nvidia’s current margins rely on a few mega-buyers building huge, differentiated datacenters. If AI becomes cheap and commoditized, demand for ultra‑expensive datacenter GPUs and $500B buildouts may shrink even if total AI use rises.
Stock Valuation, Market Dynamics, and Politics
- Comparisons are made between NVDA and AMZN in the dotcom era. Detractors say the analogy fails because Nvidia already has massive operating income; supporters still see bubble‑like speculation and hope for a “dot‑com‑style” AI crash as a buying opportunity.
- Several note Jevons applies to resource consumption, not directly to stock prices. Market moves reflect perceptions of future margins and competition, not just volume.
- A side thread debates a recent high‑profile Nvidia stock sale: some see normal trading on public news; others speculate about political insider knowledge, without evidence.
Efficiency, Constraints, and Demand Curves
- One line of argument: theory of constraints and finite use cases mean there isn’t an infinite GPU demand curve; at some efficiency level, “good enough” caps spending.
- Others claim that large labs will always find ways to saturate any available compute (larger, more frequent, or more specialized models), so efficiency gains still increase total consumption.
- Analogies are drawn to SMT solvers: huge efficiency gains and price drops didn't yield massive mass‑market demand; adoption is limited by people and workflows, not just cost.
Access to LLMs and Price Sensitivity
- Several commenters say price does lock out users and organizations:
  - Paid add‑ons for office suites were too expensive for many SMBs.
  - Local SOTA inference often needs 400–768 GB of RAM/VRAM, with hardware costing $15–30k, which is out of reach for most individuals.
- Lower costs plus local trainability are seen as alleviating:
  - Lack of tuning control,
  - Data ownership/privacy issues,
  - Power waste per useful unit of work.
- Some remain skeptical, arguing many end users dislike current AI features and that LLMs are “solutions in search of problems.”
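The memory figures cited above follow from a common rule of thumb: weights need roughly one byte per parameter per byte of precision, plus overhead for KV cache and activations. A minimal sketch (the function name, the ~20% overhead, and the 405B parameter count are illustrative assumptions, not from the discussion):

```python
# Rough memory estimate for local LLM inference (hypothetical figures).
# Weights need params * bytes_per_weight; KV cache and activations
# add overhead on top (assumed ~20% here).

def inference_memory_gb(params_billions: float, bytes_per_weight: float,
                        overhead: float = 0.20) -> float:
    weights_gb = params_billions * bytes_per_weight  # 1B params * 1 byte ~ 1 GB
    return weights_gb * (1 + overhead)

# A hypothetical 405B-parameter model at different precisions:
print(round(inference_memory_gb(405, 2.0), 1))  # fp16: ~972 GB
print(round(inference_memory_gb(405, 1.0), 1))  # int8: ~486 GB
print(round(inference_memory_gb(405, 0.5), 1))  # 4-bit: ~243 GB
```

Under these assumptions, only aggressive quantization brings a frontier-scale model near the 400–768 GB range mentioned by commenters, which is why the hardware cost estimates land in the tens of thousands of dollars.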
Induced Demand, Rebound Effect, and Definitions
- Multiple comments tie Jevons to induced demand and the rebound effect:
  - Rebound: efficiency → more use, partially offsetting savings.
  - Jevons: efficiency → more than full offset, total resource use rises.
- Debate centers on whether induced demand is:
  - Just "realized latent demand" along a standard demand curve, or
  - A genuine shift of the demand curve itself.
- Highway and housing examples illustrate how cheaper travel or lighting can permanently change behavior and urban form.
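The rebound-vs-Jevons distinction above can be made concrete with arithmetic. A minimal sketch with made-up numbers (the function name and the specific multipliers are illustrative assumptions):

```python
# Rebound vs. Jevons, per the definitions above (hypothetical numbers).
# efficiency_gain: fraction of resource saved per unit of service
#                  (0.5 = half the resource per unit)
# usage_growth:    multiplier on total service demanded afterward

def total_resource_use(baseline: float, efficiency_gain: float,
                       usage_growth: float) -> float:
    per_unit = 1 - efficiency_gain  # resource needed per unit of service
    return baseline * per_unit * usage_growth

baseline = 100.0  # arbitrary units of resource

# Ordinary rebound: 50% efficiency gain, usage grows 1.5x
# -> total use falls from 100 to 75 (savings partially offset)
print(total_resource_use(baseline, 0.5, 1.5))

# Jevons / backfire: usage grows 3x
# -> total use rises from 100 to 150 (more than full offset)
print(total_resource_use(baseline, 0.5, 3.0))
```

The crossover is exactly where usage growth equals the inverse of the per-unit saving: with half the resource per unit, anything beyond a 2x usage increase is Jevons territory.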
Energy, Lighting, Transport, and Other Examples
- Home insulation: cited research suggests initial gas savings erode as occupants raise thermostats, matching Jevons‑like behavior.
- LEDs: strong disagreement over whether 10x efficiency led to similar or greater increases in total lighting energy:
  - Some point to far more fixtures (accent lights, outdoor, screens) and historical data showing consumption rising >100x as lighting got cheaper.
  - Others doubt a full 10x usage increase and focus on per‑fixture savings and reduced replacement.
- Transport: commenters discuss EVs and 1950s travel levels; cheaper per‑km driving may increase total kilometers driven, but lifestyle and urban design constraints complicate this.
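The LED dispute above reduces to a single ratio: with a 10x efficiency gain, total lighting energy rises only if light output (lumen-hours consumed) grows by more than 10x. A minimal sketch (function name and sample multipliers are illustrative assumptions):

```python
# New energy use relative to old (1.0 = unchanged), given an
# efficiency multiplier and a usage multiplier (hypothetical numbers).

def energy_ratio(efficiency_gain_x: float, usage_growth_x: float) -> float:
    return usage_growth_x / efficiency_gain_x

# Usage up 8x against a 10x efficiency gain: energy still falls 20%
print(energy_ratio(10, 8))   # 0.8

# Usage up 15x: Jevons, total energy rises 50%
print(energy_ratio(10, 15))  # 1.5
```

This is why the two camps can both be right about their own observations: per-fixture energy clearly fell, while the open empirical question is whether aggregate lumen-hours crossed the 10x threshold.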
Scope and Misuse of Jevons
- Several see Jevons being casually invoked as “cope” to defend high AI and chip valuations, ignoring time lags and competitive dynamics.
- Others stress that Jevons is empirically uncommon compared to ordinary rebound effects and that its relevance must be analyzed case by case, not assumed.