AI is a front for consolidation of resources and power
Language, Translation, and “Babel” Claims
- Some argue LLMs have effectively solved interlingual communication, likening it to lifting the “Tower of Babel” curse.
- Others counter that humans already communicated adequately via English, gestures, and pre-LLM machine translation; they worry about cultural flattening and “everyone talking the same.”
- Heavy disagreement over past vs. current translation quality: several say pre-LLM Google Translate was unusable for many languages, while others recall it as “good enough.”
- Critics emphasize that true linguistic understanding includes shared culture and worldview, not just semantic transfer; defenders respond that partial understanding is still far better than none.
Bubble, Hype, and Consolidation of Power
- Many see a classic bubble: massive capex into GPUs and datacenters, valuations untethered from proven value, AI interfaces bolted onto everything.
- The article’s thesis — AI as a front for land, energy, and water capture — resonates with some, who fear “energy cities” giving private firms quasi-state power and reshaping energy policy.
- Others call this overblown: it’s ordinary capitalism and standard capex, with no conspiracy beyond the usual resource extraction; the same utilities still own most of the infrastructure.
- Several note bubbles can still leave behind useful infrastructure (like dot‑com fiber) even if many firms die.
What AI Is Actually Good At (So Far)
- Strong consensus that LLMs are most useful for small, well-bounded tasks: code completion, boilerplate, configs, test generation, bug hints, summarizing docs, and bulk text edits (see the sketch after this list).
- Many developers report personal productivity boosts (often 4–5× on narrow tasks), but say these gains haven’t clearly translated into higher-value outputs or better products.
- Others report the opposite: AI-generated code full of subtle bugs, overengineering, poor abstractions, and worsening junior learning; “vibe-coded” PRs increase review and cleanup work.
- Several argue AI is far weaker at high-level design, large-scale refactors, and writing good documentation or specs; syntax is not the real bottleneck.
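To ground the “small, well-bounded task” consensus, here is a minimal sketch of the workflow commenters describe: hand an LLM a single function, ask for unit tests, and review the output by hand. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompts, and the `slugify` example are illustrative choices, not drawn from the thread.

```python
# Minimal sketch: an LLM doing a small, bounded task (pytest generation).
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable; model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

SOURCE = '''
def slugify(title: str) -> str:
    """Lowercase a title; collapse runs of non-alphanumerics into '-'."""
    import re
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
'''

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You write concise pytest unit tests. Output only code."},
        {"role": "user",
         "content": "Write pytest tests for this function:\n" + SOURCE},
    ],
)

# Per the thread's caveats, generated tests still need human review before merge.
print(resp.choices[0].message.content)
```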
Jobs, AGI/ASI, and Long-Term Trajectories
- Fear that software engineering is being “automated away,” with LLMs used first to enable offshoring or cut headcount; others see this as yet another failed “no‑programmer” dream.
- Debate over AGI/ASI: some see continuous scaling plus automated AI R&D leading to hard takeoff; others say the true bottlenecks are compute, capital, and physical limits, not researcher count.
- Philosophical dispute over whether consciousness is purely physical/computational and whether more compute alone could yield it; several call “rainbows → pot of gold” analogies misleading.
Surveillance, Spam, and Post‑Truth
- Strong concern that AI will supercharge surveillance: automated analysis of ubiquitous video and data to track individuals, summarize activities, and enable predictive policing.
- Others note much of this (e.g., automated license plate readers (ALPR), classical computer vision) predates LLMs; LLMs mainly make querying and narrativizing such data cheaper.
- Many see current main uses as spam, slop content, and academic cheating, further eroding already fragile “truth” norms. Some call AI “rocket fuel” for a post‑truth world.
Centralization vs Local Models and Social Outcomes
- Several worry AI will deepen inequality: cutting labor costs, weakening worker power, and pushing society toward a resource oligarchy while public services (education, academia) erode.
- Others point to open and local models as a possible countertrend, distributing capability back to individuals as hardware improves (see the sketch after this list).
- Across the thread, people broadly agree AI is “useful but overhyped”; the sharp divide is whether current trajectories primarily enable broad empowerment or further consolidate power and surveillance.
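As a concrete illustration of the local-model countertrend, the sketch below runs a small open-weight checkpoint on one’s own machine via the Hugging Face transformers pipeline. The specific model is an illustrative assumption; any small open-weight checkpoint would serve, and inference itself involves no hosted API.

```python
# Minimal sketch: running a small open-weight model on local hardware.
# Assumes `pip install transformers torch`; the checkpoint below is an
# illustrative choice of a small open-weight model (weights download once
# from the Hugging Face Hub, then inference runs entirely locally).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # ~0.5B parameters; CPU-friendly
)

out = generator(
    "List two trade-offs between local and hosted language models.",
    max_new_tokens=120,
    do_sample=False,  # deterministic decoding for reproducibility
)

# No request leaves the machine at inference time.
print(out[0]["generated_text"])
```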