Don't push AI down our throats
Aggressive AI Push & User Backlash
- Commenters see AI marketed relentlessly (e.g., TV ads, OS integration, dedicated Copilot keys), often via dark patterns and upsells rather than in response to organic demand.
- Many argue: if AI were genuinely transformative, it wouldn’t need this much pushing; current behavior signals investor-driven hype and sunk-cost theater.
- Comparisons are drawn to earlier tech fads (cloud, blockchain, Google+, 3D TVs), though commenters note the AI push has far greater consumer visibility and resource use.
Coercive Data Practices & Product Degradation
- Strong anger at being forced to trade privacy for basic functionality: Pixel watches and Android voice commands break unless Gemini data collection is enabled; AI chat history is disabled unless you consent to training; devices feel “rented,” not owned.
- Users report removing or trashing devices, or switching to LineageOS, GrapheneOS, or Linux to escape AI creep, but note that employers, banks, and healthcare providers often mandate the official Google/Microsoft stacks.
- Multiple examples of regressions: Windows copy/paste and text editors misbehaving because of AI hooks; console and TV UIs degrading with ads and “recommended” content.
Calls for Consumer Protection vs “Let the Market Decide”
- One camp wants laws to prevent functional regressions (“if it did X when I bought it, it should keep doing X”) and/or require rollbacks or refunds. Some lawyers suggest “implied warranty of merchantability” theories.
- Others argue such mandates are technically and economically infeasible (version explosion, the burden of backporting security fixes) and that government dictating product features amounts to overreach, even censorship.
- This sparks a broader fight over democracy vs. free markets: whether it is legitimate for voters to regulate dominant platforms’ UX, given the practical lack of alternatives.
Economics, Bubble Fears & Resource Waste
- Several see AI as a liquidity and prestige play: executives racing to “have an AI strategy,” justify GPU spend, and signal “winning” to investors rather than to serve user needs.
- Others push back: there is still a GPU shortage for training, and wide deployment is partly experimentation and data gathering.
- Concern over the massive energy and water use of AI datacenters, and over the broader misallocation of capital if/when the bubble pops, with possible contagion to the wider economy.
Usefulness, Limits, and Copyright Concerns
- Some share genuinely positive use cases (code hints, documentation help, simple Q&A, even spotting plumbing issues from photos) but emphasize that current tools still “kind of suck”: unreliable, or only marginally helpful.
- Coding copilots are seen as especially double-edged: productivity gains for boilerplate vs long-term skill erosion and fragile, incomprehensible codebases.
- Strong resentment toward training on copyrighted works without consent; proposals range from “pay creators fairly” to forcing models trained on uncompensated data to be open-sourced.