Hacker News, Distilled

AI powered summaries for selected HN discussions.


Let us git rid of it, angry GitHub users say of forced Copilot features

Alternatives & Centralization Concerns

  • Multiple commenters say they’re moving or donating to Codeberg, Forgejo, or self‑hosted GitLab; some note GitLab is also pushing AI and expect eventual community forks.
  • Debate over whether GitHub is “critical infrastructure” or just a fancy git server with PRs. Some say outages kneecap companies and act as a CDN; others argue that’s bad engineering practice, not inherent criticality.
  • Strong regret that so much FOSS landed on a proprietary, VC‑funded platform, making the community hostage to a corporate owner; others reply that convenience and network effects made this outcome predictable.
  • GitHub stars, free CI (including macOS/Windows), and packages are seen as major lock‑in mechanisms beyond pure git hosting.

Reality of Copilot PR/Issue Spam

  • Several maintainers of popular projects report seeing zero Copilot‑authored PRs or issues; they suspect the scale of the problem is overstated.
  • Clarification: Copilot does not automatically open PRs/issues; a human has to trigger it. The main GitHub discussion is about blocking the copilot bot account, not banning all AI‑authored content.
  • Others worry about LLM‑generated “sludge” from any tool (ChatGPT, Claude, etc.), especially around events like Hacktoberfest or bounty programs.

Forced AI Features & User Hostility

  • Strong frustration with Copilot being surfaced everywhere: GitHub UI, VS Code, Visual Studio, Office 365, and other products. Many describe it as “forced” or dark‑patterned, with limited or hidden off‑switches.
  • Some report Copilot review comments blocking automerge for trivial remarks, and accounts shown as “enabled” for Copilot even when settings say otherwise; GitHub support is described as evasive.
  • Comparison to other “enshittified” products (Google Docs, GCP console) where core quality stagnates while AI buttons proliferate.

Metrics, Hype, and Business Incentives

  • Skepticism about claims like “20M Copilot users” when access is auto‑provisioned or mandated by management and often goes unused.
  • Many see the AI push as driven by KPIs, investor expectations, and ecosystem self‑interest (e.g., GPU vendors), not organic developer demand.
  • Parallels drawn to crypto and self‑driving hype cycles and to the McNamara fallacy: chasing engagement numbers while ignoring user experience.

Usefulness vs. Cost of LLMs

  • Some developers report substantial productivity gains for prototyping in unfamiliar languages, exploratory scripts, or navigating large new codebases.
  • Others find LLMs useful mainly as fuzzy search / brainstorming tools, with limited or negative net productivity once review and corrections are included.
  • Environmental and infrastructure costs are raised; critics argue the benefits don’t yet justify the scale or the aggressive rollout.

Control, Policy, and Mitigations

  • Workarounds mentioned: hiding AI features in VS Code (Chat: Hide AI Features), Org‑level Copilot disable in GitHub, Visual Studio “hide Copilot” option, and uBlock filters to block Copilot commit‑message generation.
  • Proposals include blocklists for AI‑slop contributors and allowing maintainers to block the copilot bot like any other user.

Corporate Behavior & Regulation

  • Long thread on why Microsoft was allowed to buy GitHub, whether it was already “critical” at acquisition, and the role of antitrust (compared to Adobe/Figma).
  • Some argue corporations are doing exactly what they’re designed to do—maximize profit—and that only regulation and better initial choices (FOSS forges) could have prevented this dynamic.

Why language models hallucinate

Evaluation, Multiple-Choice Analogies, and Incentives

  • Many comments pick up on the article’s multiple-choice test analogy: current benchmarks reward “getting it right” but don’t penalize confident wrong answers, so models are implicitly trained to guess rather than say “I don’t know.”
  • Some compare this to standardized tests with negative marking or partial credit for blank answers, arguing evals should similarly penalize confident errors and allow abstention.
  • Others note this is hard to implement technically at scale: answers aren’t one token, synonyms and formatting complicate what counts as “wrong,” and transformer training doesn’t trivially support “negative points” for incorrect generations.

What Counts as a Hallucination?

  • One camp insists “all an LLM does is hallucinate”: everything is probabilistic next-token generation, and some outputs just happen to be true or useful.
  • Another camp adopts the article’s narrower definition: hallucinations are plausible but false statements; not all generations qualify. Under this view, the term is only useful if it distinguishes wrong factual assertions from correct ones.
  • There’s pushback that “hallucination” is anthropomorphic marketing; alternatives like “confabulation” or simply “prediction error” are suggested.

Root Causes and Architectural Limits

  • Several comments reiterate the paper’s argument: next-word prediction on noisy, incomplete data inevitably leads to errors, especially for low-frequency or effectively random facts (like birthdays).
  • Others argue the deeper problem is lack of grounding and metacognition: models don’t truly know what they know, can’t access their own “knowledge boundaries,” and separate training from inference, unlike humans who continuously learn and track uncertainty.
  • Some see hallucinations as an inherent byproduct of large lossy models compressing the world; with finite capacity and imperfect data, there will always be gaps.

Can Hallucinations Be Reduced or Avoided?

  • Many are positive about training models to express uncertainty or abstain (“I don’t know/I’m unsure”), but question how well uncertainty can be calibrated in practice.
  • There’s broad agreement that you can build non‑hallucinating narrow systems (e.g., fixed QA databases + calculators) that abstain (“I don’t know”) outside their domain; disagreement is whether general LLMs can approach that behavior.
  • Multiple commenters note a precision–recall tradeoff: fewer hallucinations means more refusals and less user appeal; current business incentives and leaderboards push vendors toward “always answer,” encouraging hallucinations.

Broader Critiques and Meta-Discussion

  • Some see the post as PR or leaderboard positioning rather than novel science; others welcome it as a clear, shared definition and a push for better evals.
  • A recurring complaint is that much public discourse about hallucinations projects folk-psychology onto systems that are, at core, just very large stochastic language models.

Rug pulls, forks, and open-source feudalism

Building from source and packaging models

  • Several comments argue that routinely building from source shifts power to users: switching remotes is easier than abandoning vendor binaries, and cherry‑picking fixes doesn’t require maintainer releases.
  • Guix (and likely Nix) is praised for “source by default” with binary caches and easy local patching; Debian/Devuan cited as long‑standing, reproducible‑build ecosystems, though not as “source‑transparent” as Guix.

CLAs, copyleft, and power asymmetry

  • Many see CLAs that grant unilateral relicensing as the core enabler of rug pulls, especially when combined with copyleft: the company can go proprietary while others remain bound.
  • Others note some CLAs (e.g., certain nonprofit/foundation ones) explicitly promise continued free licensing and are seen as acceptable when backed by strong governance.
  • Copyleft without a CLA (e.g., Linux) spreads copyright to many contributors, making a lock‑in relicensing practically impossible.
  • AGPL+CLA is described as particularly lopsided for SaaS: the company can run a closed service while competitors must publish their changes; Stallman’s view is summarized as prioritizing user freedom over contributor symmetry.

What is a “rug pull”?

  • One camp says there’s no rug pull in FLOSS: old code and GPL/MIT versions “exist forever,” and maintainers owe no future labor. Under this view, a “rug pull” can only mean stopping maintenance, which is always allowed.
  • Another camp stresses dependency lock‑in, branding/network effects, active marketing of “open source forever,” and explicit promises (e.g., around core licenses). Under those conditions, relicensing is seen as betrayal.
  • Some distinguish “snapshot and fork” from the large, ongoing effort of sustaining a competitive fork.

Hyperscalers, SaaS, and sustainability

  • Strong resentment toward large cloud providers that monetize popular OSS as services without funding maintainers; examples like Elastic/Mongo/Redis are framed as defensive license changes against this.
  • Others counter that clouds contribute heavily to core infrastructure (kernel, toolchains) and free marketing; they’re just using permissive licenses as written.
  • There’s disagreement on whether criticizing rug pulls is “toxic purism” that distracts from the larger structural issue (hyperscaler dominance), or a necessary defense of community trust.

Funding, responsibility, and entitlement

  • Multiple comments emphasize that most of us are “free riders”; OSS is gift‑giving, and it’s legitimate for maintainers to stop or change direction.
  • Others argue gifts given repeatedly and heavily promoted create moral obligations, especially when users invest labor, integrations, and advocacy.
  • There’s growing interest in more deliberate funding models: sponsoring foundations, directly paying maintainers, industry coordination mechanisms, or even government/sectoral funds.
  • Some enterprises report being burned by license/business changes (Chef, CentOS, VMware/Tanzu) and are pivoting toward funding upstream OSS (e.g., Proxmox/QEMU) instead of proprietary vendors.

SSPL, AGPL, and license design

  • SSPL is seen by some as “almost good”: a stronger anti‑SaaS copyleft, but criticized for vague scope (what counts as the “service”) and incompatibility with GPL/AGPL, making it risky in practice.
  • Several participants wish for a clearer, OSI‑acceptable “AGPL‑plus” that targets proprietary hosted services without sweeping in generic infrastructure or breaking compatibility.

Developing a Space Flight Simulator in Clojure

Clojure / Lisp Readability and Syntax

  • Several commenters coming from C-like or Scheme backgrounds find Clojure visually foreign and “noisy,” especially due to vectors and destructuring.
  • Others argue that once destructuring and Clojure’s maps/EDN are understood, the syntax becomes highly readable and pragmatic, with more compact data representation than JSON.
  • There’s broad agreement that the real shift isn’t parentheses but immutable, high‑performance data structures and the resulting coding style.

Macros, “Code as Data,” and REPL Workflow

  • Some emphasize Lisp’s advantage: code is data, enabling powerful macros (e.g., custom control constructs, threading operators) with tiny code changes compared to non‑Lisps.
  • Others push back that in professional Clojure, macros are used sparingly, mostly in libraries and “with‑context” helpers; application code should prefer functions.
  • A separate thread praises the “live” Lisp/REPL experience (Emacs, babashka, Fennel) and the feeling of “playing” the system by changing running code.

Clojure and Functional Languages in Game Development

  • One camp sees projects like this and native Clojure variants (e.g., Jank) as potentially transformative for some developers: REPL‑driven iteration, good language design, C++‑like performance.
  • Others argue that programming is a small slice of game development; most indie devs are focused on engines like Unity/Unreal/Godot or Lua/C#/C++/Rust, not functional styles.
  • Skeptics call Clojure-as-orchestrator over C++ engines a “beautiful dead end” for mainstream gamedev, citing low FP adoption and art/design priorities, plus GC concerns.
  • Counterpoint: using a high‑level language for the logic while delegating rendering/physics to C++ is exactly the value proposition; maintainability of game logic matters.

Engines, Performance, and Low-Level Concerns

  • Some nostalgia for “rolling your own engine,” but others note that’s now rightly seen as wasteful unless engine building is the goal.
  • The project in question uses OpenGL and a C++ physics engine (Jolt); the author previously prototyped physics in Guile but prefers leveraging specialized C++ for performance.
  • There’s discussion of GC pauses (with mention of ZGC) and of alternative approaches: GC‑free FP (e.g., Carp), high‑level metalanguages generating low‑level code, and functional‑friendly VMs.

Project-Specific Reactions and Wishlist

  • Visuals and technical ambition are widely praised, especially given the non‑traditional stack.
  • Suggested future features include docking, the Moon and eclipses, richer atmospheric/lighting effects, shared planetary datasets, and even elaborate “space war” and ocean simulations.

A sunscreen scandal shocking Australia

Regulation, Enforcement, and Trust

  • Several comments stress that regulations are only meaningful if enforced; lax enforcement lets anti-regulation rhetoric argue “regulation doesn’t work, so scrap it.”
  • Others push back that “more regulation” isn’t obviously the answer, but agree there’s a clear regulatory failure when SPF 50 products test near SPF 4.
  • The deeper concern is a trust gap: products can pass for years, then fail. Suggested fixes: transparent test methods, batch-level public results, routine independent re-testing, and proper recalls.

How Sunscreen Is Tested

  • Many are surprised SPF testing still relies heavily on human volunteers being exposed to UV to see when they burn.
  • Proposals: more in‑vitro / physical testing (standard surfaces, precise application, optical measurement) to screen out failures cheaply, with human tests as a final step.
  • Counterpoint: absorption, sweat, skin condition, and formulation interactions require in‑vivo testing, similar to drugs; labs already combine non‑human and human methods.
  • Anecdotes describe paid test subjects in Australia (Jacuzzi, then UV exposure on treated vs untreated skin).

SPF Numbers, Protection, and Cancer Risk

  • Repeated clarification: SPF is about transmission (1/SPF), not intuitive “percent blocked.” SPF 4 transmits 25% of UV, SPF 30 about 3.3%, SPF 50 about 2%.
  • Debate:
    • One view: benefits rapidly diminish after SPF 30; higher numbers add little in practice.
    • Others argue higher SPF halves transmission again (e.g., 98% vs 99% blocking) and matters over years of exposure; also gives more margin for uneven application and degradation over time.
    • Disagreement over whether SPF meaningfully affects “how long you can stay out” vs just instantaneous dose.
  • Some are unimpressed that even “good” sunscreens might only halve cancer incidence; others see that as still materially valuable at population scale.
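The SPF arithmetic in these bullets is easy to check directly. A minimal sketch (my own helper, not from the discussion) that converts an SPF rating into transmitted and blocked percentages:

```python
def transmitted_pct(spf):
    """Percentage of UV transmitted through sunscreen.

    SPF is the ratio of UV dose received without vs. with protection,
    so the transmitted fraction is 1/SPF.
    """
    return 100.0 / spf

for spf in (4, 30, 50, 100):
    t = transmitted_pct(spf)
    # SPF 4 transmits 25%, SPF 30 about 3.3%, SPF 50 about 2%,
    # SPF 100 about 1% -- matching the figures quoted above.
    print(f"SPF {spf:>3}: transmits {t:.1f}%, blocks {100 - t:.1f}%")
```

This also illustrates the “halving” argument: going from SPF 50 to SPF 100 cuts transmission from 2% to 1%, i.e., half the dose again, even though the “blocked” numbers (98% vs. 99%) look nearly identical.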

UVA vs UVB and Ingredient Safety

  • Commenters note some sunscreens (especially in the US) historically focused on UVB, preventing burns while allowing substantial UVA exposure.
  • Europe, Australia, and Japan are cited as having stronger UVA‑related labelling rules; the US lags.
  • There is concern about contaminants like benzene and about reef‑ and human‑safety of certain chemical filters; others argue background benzene exposure (e.g., from cars) is already significant.

Real-World Use: Clothing vs Cream

  • Many Australians say sunscreen is unreliable in practice because it washes/sweats off and people don’t reapply correctly, especially in water sports.
  • Surf instructors and Queenslanders reportedly favor long-sleeve rash vests, wide‑brim hats, and zinc oxide on high-risk areas; sunscreen is treated as secondary.
  • Others report good results with high‑SPF products when applied heavily and frequently, but still prefer sun‑protective clothing for convenience and certainty.
  • Multiple commenters emphasize hats (not just baseball caps) and UPF clothing as more effective and less fussy than lotion.

Local Brand Perceptions and Scandal Reaction

  • Some Australians say certain major brands “never worked” and had a longstanding reputation as weak; the scandal feels like confirmation of years of folk wisdom.
  • Others, looking at test charts, note those brands often underperform their label but are not uniformly catastrophic; water resistance may be the biggest weakness.
  • Influencer marketing of the failed products is widely criticized: influencers profit, followers are exposed, and there are effectively no consequences for promoters.

Tesla changes meaning of 'Full Self-Driving', gives up on promise of autonomy

Redefining “Full Self-Driving” and broken promises

  • Many see adding “(Supervised)” to “Full Self-Driving” as an implicit admission Tesla won’t deliver unsupervised autonomy to existing buyers, after nearly a decade of “next year” claims.
  • Others argue the change is mostly legal/PR framing: Tesla is now describing what the system does today without explicitly abandoning long‑term Level 4/5 ambitions.
  • Several commenters point to early marketing (e.g. “driver only there for legal reasons”) as clearly implying unsupervised operation, now walked back in practice.

Fraud, regulation, and refunds

  • A large contingent calls this straightforward fraud, securities fraud, or false advertising, noting stock gains and FSD sales driven by undelivered autonomy promises.
  • Skepticism that US regulators (SEC/FTC, states) will act; some blame “late‑stage capitalism” and weak consumer protection, though others say agencies have probably pushed as far as they can.
  • People who bought FSD years ago feel cheated; talk of class actions is tempered by Tesla’s arbitration clauses and spotty enforcement history.
  • A minority insists early timelines were naïve rather than malicious, but acknowledges they were “irresponsible.”

Waymo, autonomy levels, and what counts as FSD

  • Repeated comparison: Waymo is geofenced Level 4 with remote assistance, Tesla is still Level 2. Debate whether L4 in limited cities “counts” as full self‑driving.
  • Some argue “full” should mean “can drive nearly everywhere humans can”; others say transformative tech doesn’t need universal coverage (analogy to early cell phones and gas stations).
  • There’s disagreement over how often remote assistance occurs and whether that undermines “full” autonomy.

Sensors: vision‑only vs lidar/radar

  • Big fault line: critics say Tesla’s vision‑only bet was “short‑sighted,” rejected decades of sensor‑fusion research, and is now effectively being abandoned.
  • Many engineers and practitioners in the thread argue lidar/radar + cameras are clearly superior for safety, redundancy, and latency; several cite Waymo and Chinese systems as evidence.
  • Defenders counter that:
    • Humans drive mostly on vision, so vision‑only is theoretically sufficient.
    • Extra sensors add cost, complexity, and failure modes; the “best part is no part.”
  • Strong pushback: cameras are not human eyes, current ML is far from human semantics, and engineering safety normally favors redundant modalities.

Human driving, edge cases, and environment

  • Long subthreads on how well humans adapt to foreign driving cultures and conditions vs how localized today’s AVs are.
  • Severe weather (snow, ice, heavy rain, glare, fog) and chaotic traffic (e.g. parts of India, Africa, rural icy roads) are repeatedly cited as unsolved for all vendors.
  • Some argue the bar for machines should be “better than humans,” not merely “as good,” given existing human crash rates.

Current Tesla FSD performance

  • Some owners report FSD now handles 90–95% of their driving, including complex Bay Area/Boston routes, with rare safety interventions. They see rapid progress and consider Tesla far ahead of legacy OEMs.
  • Others report phantom braking, poor behavior in unusual geometry, and camera reliability issues, saying they must intervene every few miles and find it terrifying in real use.
  • There’s a clear split between “it’s already better than average rideshare drivers” anecdotes and “I wouldn’t trust it in bad weather or unfamiliar areas.”

Broader views on Tesla and Musk

  • One camp argues Musk’s leadership created enormous value (EV market, rockets, energy storage) and that overpromising is typical of ambitious tech.
  • Another emphasizes poor governance, hype‑driven valuation, the trillion‑dollar pay package, and a pattern of big, undelivered narratives (robotaxis, cheap cars, tunnels) as warning signs.
  • Some suggest Tesla’s real long‑term play is batteries/energy, with cars and FSD as a bootstrapping and hype vehicle.

Is OOXML Artificially Complex?

Origins and Design of OOXML

  • Several commenters argue OOXML is essentially a direct XML serialization of Office’s legacy binary formats, carrying decades of cruft tied to in‑memory data structures and performance constraints of the 80s/90s.
  • Backward compatibility for “hundreds of millions” of users and regulatory pressure (especially in Europe) are seen as key drivers; designing a clean new format or fully adopting ODF was viewed as too slow and risky internally.

Complexity: Necessary, Accidental, or Malicious?

  • One camp: complexity is “inevitable” given Office’s enormous feature set and commitment to lossless round‑tripping of old documents. Cutting features to simplify the spec would have broken real users.
  • Another camp: much of the complexity is unnecessary for an interoperable standard and exists because Microsoft just dumped internal representation into XML. That’s framed as technical debt and “self‑interested negligence,” not careful design.
  • A more critical camp: the format and spec are intentionally hostile—full of “works like Word95/97” behavior tied to undocumented software, making faithful third‑party implementation effectively impossible.

Interoperability and Standards Politics

  • Strong accusations that Microsoft “bought” or stacked national standards bodies to push OOXML through fast‑track ISO approval, over technical objections and despite overlapping with existing ODF.
  • Some see this as classic embrace‑extend‑extinguish: creating a nominally open but practically proprietary standard to block ODF adoption in government procurement.
  • Others argue both motives can coexist: backward compatibility and strategic obstruction.

Comparison with ODF and Other Formats

  • ODF is praised for clearer, more “markup‑like” structure in simple cases, but also criticized as ambiguous, underspecified, and itself complex once all referenced specs are counted.
  • Debate over which is more “open in practice”: OOXML’s detailed but messy serialization vs. ODF’s cleaner model but reliance on de facto behavior of LibreOffice.

Developer and User Experience

  • Implementers report OOXML as painful: gigantic specs, odd date handling, namespace verbosity, implicit caches, and hidden coupling to Office behavior.
  • Nonetheless, for many tasks (scripts that read/write documents, extract images, simple spreadsheets) OOXML’s zipped‑XML container is seen as a big improvement over old binary formats.
  • Users largely prioritize fidelity over openness; this is cited as why Office remains dominant despite OOXML’s flaws and LibreOffice/Google Docs’ existence.

The math of shuffling cards almost brought down an online poker empire

Article focus and 52! discussion

  • Many commenters find the article’s early emphasis on “52! is huge” largely irrelevant to the real issue, though some enjoy the perspective on how large 52! is.
  • Others note that in “computer terms” 52! is < 2²²⁶, so not astronomically large compared with common key sizes, though still enormous for brute-force enumeration.
  • Several stress that no one sensible generates a random deck by enumerating all 52! permutations anyway.

RNG and seeding failures in the poker system

  • Core bug: the RNG was seeded with the time of day at millisecond resolution, capping the possible seeds, and hence deck arrangements, at about 86.4 million (the number of milliseconds in a day).
  • This small state space allowed precomputation or clock-synchronization attacks; with observed community cards (especially after the flop), an attacker could narrow down or determine all players’ cards.
  • Thread links to the original technical paper, which describes both a biased shuffle algorithm and the weak PRNG seeding.
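An illustrative reconstruction of the flaw (in Python, not the original system's code; `flawed_deal` and the specific seed are hypothetical): when the whole shuffle is a deterministic function of a small seed, an attacker who knows the approximate server clock can brute-force it.

```python
import random

MS_PER_DAY = 24 * 60 * 60 * 1000   # only 86,400,000 possible seeds

def flawed_deal(ms_since_midnight):
    # The deck order is fully determined by the seed, so the effective
    # state space is ~86.4M decks, not 52! -- regardless of how good
    # the PRNG itself is.
    rng = random.Random(ms_since_midnight)
    deck = list(range(52))
    rng.shuffle(deck)
    return deck

# With a rough idea of the server clock, the seed falls out of a
# small search window:
target = flawed_deal(12_345_678)
for guess in range(12_345_000, 12_346_000):
    if flawed_deal(guess) == target:
        print("recovered seed:", guess)
        break
```

In the real attack, observing a few community cards was enough to filter candidate decks down to one, revealing every player's hole cards.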

Shuffle algorithms and correctness

  • Strong consensus: Fisher–Yates (Knuth shuffle) with a cryptographically secure RNG gives an unbiased, effectively optimal shuffle.
  • Several criticize the article’s implication that computers “cannot replicate” human shuffles; commenters argue computers are typically more random than human dealers, whose physical shuffles are measurably biased.
  • Naïve or ad-hoc shuffling schemes (e.g., repeatedly simulating riffle shuffles or sorting by random keys) are viewed as risky unless mathematically proven unbiased.
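The consensus fix is short. A sketch of Fisher–Yates driven by a CSPRNG (using Python's `secrets` module as one readily available OS-entropy source):

```python
import secrets

def fisher_yates(cards):
    """Unbiased Fisher-Yates shuffle with a cryptographically secure RNG.

    Every permutation is equally likely, and seeding from OS entropy
    removes the guessable-seed problem entirely.
    """
    deck = list(cards)
    for i in range(len(deck) - 1, 0, -1):
        j = secrets.randbelow(i + 1)   # uniform on 0..i, no modulo bias
        deck[i], deck[j] = deck[j], deck[i]
    return deck

deck = fisher_yates(range(52))
```

The subtlety the thread warns about lives in `randbelow`: drawing `j` uniformly from exactly `i + 1` values (rather than, say, `rand() % (i + 1)`) is what keeps the shuffle provably unbiased.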

Randomness sources and hardware

  • Commenters mention /dev/urandom, CPU instructions like RDRAND/RDSEED, and quantum/thermal noise–based TRNGs as practical entropy sources capable of generating hundreds of megabits per second.
  • Some note that hardware RNGs can be subverted (e.g., via microcode or virtualization), so system design and threat model still matter.

Security standards and blame debate

  • One camp calls the 1990s poker RNG design grossly negligent, arguing that even then probability theory and correct shuffling algorithms were well-known.
  • Another camp is more sympathetic, pointing out that many systems—even by smart teams—have shipped with weak RNGs, and that harm and intent matter when judging “negligence.”

Other games and perceptions

  • Magic: The Gathering Online/Arena shuffles are discussed; some players feel online shuffles “feel different,” with notes about deliberate “smoothing” of opening hands in some modes.

The Universe Within 12.5 Light Years

Tools and Visualizations of the Local Neighborhood

  • Multiple readers recall or suggest 3D navigable star maps and planetaria (100,000 Stars, Stellarium, Celestia, CHView, Galaxy Map, games like Elite Dangerous and Space Engine).
  • There’s frustration that good, modern, interactive 3D maps of nearby stars are rare or outdated compared to the abundance of satellite/solar-system visualizers.
  • Some share physical/artistic maps (laser-etched crystals, posters), and one person mentions building scale-walk tools and videos.
  • Several note the Atlas page itself looks like a “1995 website” but praise its charm and longevity; others point out the map is outdated (e.g., missing objects like Luhman 16).

Interstellar Probes and Propulsion

  • Strong interest in sending unmanned probes to nearby stars, with acceptance that 100+ year missions are plausible.
  • Power is a central problem: RTGs decay too quickly for deep interstellar communication; fission reactors raise reliability and heat-dissipation issues.
  • Beamed-sail concepts (e.g., Starshot) are discussed; critics highlight beam divergence and the need to impart most momentum close to Earth.
  • Some argue tech will improve so fast that later probes might overtake earlier ones; others say we should launch anyway.
  • Generational ships are debated: technical feasibility (size, maintenance, collisions, delta‑v) and ethical/social questions about people born and dying aboard.

Interstellar vs Interplanetary Focus

  • A substantial thread argues our next logical step is thorough exploration and settlement of Solar System bodies rather than nearby stars, both for practicality and to mature ethically as a species.
  • Others still see interstellar craft as an eventual, though distant, goal.

Fermi Paradox, FTL, and Tech Trajectories

  • Some suggest stalled propulsion progress may imply interstellar travel is effectively impossible, offering a bleak answer to “where are the aliens?”.
  • Others push back, citing fitful, unpredictable tech progress and speculative ideas like warp drives, though skeptics note we likely already would see evidence if FTL were feasible.
  • Explanations range from “we’re early/rare” to self-destruction, “prime directive”-style non‑interference, or simply non-overlapping civilizations in space and time.
  • Several insist known physics effectively rules out faster‑than‑light travel; attempted counterexamples (e.g., Cherenkov radiation) are corrected.

Scale of Space and Human Timescales

  • The local 12.5 ly neighborhood feels surprisingly small in terms of viable targets, underscoring how even with big propulsion advances, reachable places remain finite.
  • Long comments use Voyager’s speed and light‑year distances to illustrate how inconceivably slow current travel is, and how even c is “too slow” relative to galactic scales.
  • Relativistic travel and time dilation are discussed: you can reach distant places within a human lifetime on the ship, but millennia pass externally.
  • Some note that returning to a far‑future Earth might be more astonishing than any barren exoplanet.
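The Voyager comparison is easy to reproduce with round figures (my own back-of-envelope numbers, roughly matching those in the comments):

```python
LIGHT_YEAR_KM = 9.461e12   # one light-year in kilometres
VOYAGER_KM_S = 17          # Voyager 1's speed, roughly 17 km/s

seconds_per_ly = LIGHT_YEAR_KM / VOYAGER_KM_S
years_per_ly = seconds_per_ly / (365.25 * 24 * 3600)
print(f"~{years_per_ly:,.0f} years per light-year at Voyager speed")

# Proxima Centauri sits at ~4.25 ly:
print(f"~{4.25 * years_per_ly:,.0f} years to the nearest star")
```

The result, on the order of 17,000 years per light-year, is why even the "small" 12.5 ly neighborhood is effectively out of reach with current propulsion.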

Physics Sidebars (Light, Gravity, Magnetism)

  • One subthread clarifies “age” of sunlight: energy takes ~hundreds of thousands of years to random-walk from the core to the surface, then ~8 minutes to Earth; photons reaching us are emitted near the photosphere.
  • Another explores relativity: from a photon’s “frame,” no time passes; time dilation and length contraction are explained informally.
  • Magnetism and gravity are discussed as “spooky” action-at-a-distance, leading to historical quotes and field-based explanations.
  • Gravity’s propagation at light speed is mentioned in the context of galaxy-scale effects.

Why Study Beyond the Solar System?

  • Several responses to “why care beyond the Solar System?”:
    • Comparing other systems helps gauge how typical Earth and the Sun are, informing climate and habitability understanding.
    • Astrophysics drives advances in imaging, detectors, and computation that spill over into technology and medicine.
    • Nearby stars and supernovae pose environmental and existential risks; knowing the neighborhood helps quantify them.
    • Distant objects (quasars, pulsars) define stable celestial reference frames and can aid navigation and timekeeping.
    • Historically, stellar observation underpinned calendars, agriculture, and navigation; the same pattern continues at higher tech levels.

Aesthetics, Emotion, and Fiction

  • Many express nostalgia and affection for old-school star maps and game-like galaxy views; comparisons to classic Elite and National Geographic posters are common.
  • The map evokes mixed feelings: awe, insignificance, hope, and a kind of existential sadness.
  • Discussions of galactic empires note that realistic scales make classic sci‑fi political setups and anti‑machine universes (e.g., Dune) administratively dubious without massive automation.

Tesla offers mammoth $1T pay package to Musk, sets lofty targets

Pay Package Structure & Intent

  • Package is entirely stock-based and vests only if Tesla’s valuation increases roughly 7.5–8x over a decade, plus hitting operational milestones.
  • Supporters say this aligns incentives: if the “nearly impossible” targets are met, shareholders get rich alongside Musk; if not, they pay nothing.
  • Critics see it as an “open invitation” to manipulate stock price and definitions of milestones (e.g., what counts as a “robotaxi” or “FSD subscription”).
  • Some view it as a psychological tool to keep investors from exiting a hype-driven bubble.

Current Business, Competition & Brand

  • Several comments argue Tesla’s early-mover advantage in EVs is gone: cheaper and/or better EVs (BYD, European brands) are cited, plus commoditization of batteries.
  • There are claims of falling sales, revenue, profits, and EV market share, along with brand damage from Musk’s public persona and politics.
  • Others counter that Tesla remains profitable, with low debt and leading products (e.g., Model Y, Powerwall), especially compared to money-losing rivals.
  • Disagreement over whether Tesla is still “revolutionizing” solar or just dominating a narrow accessory niche.

Robots, Autonomy & “Next S-Curve”

  • Bulls see robots, robo-taxis, and new products as the real growth story; some assert Tesla’s humanoid robot could become “the most advanced consumer product ever.”
  • Skeptics point to decades of overpromised timelines (notably Full Self Driving), Boring Company’s modest Vegas tunnels, and practical issues of home robots (dirt, damage, safety).
  • Comparisons are made to robotics competitors (Chinese firms, Figure, Boston Dynamics/Unitree); some argue there’s no moat and Tesla is behind, others dismiss rivals as vaporware.
  • One view: Tesla’s edge in autonomy is not technical superiority but willingness to ship at lower safety readiness and lean on regulatory capture.

Valuation, Bubble Concerns & Macro

  • Some argue that after seeing other mega-cap stocks break psychological ceilings, “any valuation is possible,” even multi-trillion for Tesla.
  • Others say Tesla’s P/E and market cap are “disconnected from reality,” describing it as a bubble sustained by hype and fear of missing out.
  • A few tie future valuation to macro factors like inflation, political moves against Fed independence, and geopolitical instability, though the causal links remain speculative and contested.

Musk’s Behavior & Focus

  • Some hope the package nudges Musk to focus on Tesla instead of social media and culture wars; others doubt larger numbers will change his behavior.
  • His public promotion of controversial political ideas is seen by some as implicitly endorsed by a board willing to grant this package.

Kenvue stock drops on report RFK Jr will link autism to Tylenol during pregnancy

Evidence on Tylenol and Autism

  • Commenters link to large observational and meta-analytic studies that variously find no association or a small positive association between prenatal acetaminophen use and ASD/ADHD.
  • Reported effect sizes are modest (odds ratios ~1.1–1.2), implying tiny absolute risk changes (e.g., ~0.2–0.4 percentage points; NNH ≈ 500+ if causal).
  • Multiple people stress these are observational data with confounding, publication bias, and diagnostic differences; causation is not established.
  • Some note sibling-controlled studies still show only weak signals, mostly for long-duration use.
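The odds-ratio-to-absolute-risk conversion behind those figures can be sketched as follows. The ~2% baseline ASD prevalence here is an assumption chosen to make the arithmetic concrete, not a number from the thread:

```python
# Illustrative only: convert an odds ratio (OR) into an absolute risk
# difference and a number-needed-to-harm (NNH), assuming a hypothetical
# baseline prevalence of ~2%. Baseline and ORs are assumptions.
def risk_from_or(baseline_risk: float, odds_ratio: float) -> float:
    odds = baseline_risk / (1 - baseline_risk)   # risk -> odds
    exposed_odds = odds * odds_ratio             # apply the OR
    return exposed_odds / (1 + exposed_odds)     # odds -> risk

baseline = 0.02
for or_ in (1.1, 1.2):
    diff = risk_from_or(baseline, or_) - baseline
    print(f"OR {or_}: +{diff:.2%} absolute risk, NNH ~ {1 / diff:.0f}")
```

Under these assumptions an OR of 1.1 works out to roughly +0.2 percentage points (NNH ~500), and 1.2 to roughly +0.4 points, matching the ranges commenters cite.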

RFK Jr., Politics, and Credibility

  • Many participants dismiss the claim primarily because of RFK Jr.’s long history of anti‑vaccine and fringe health positions, and the AI‑tainted “gold standard” MAHA report.
  • Others criticize this as a genetic fallacy: his untrustworthiness doesn’t automatically falsify every specific claim.
  • Some see his move as part of a broader strategy to erode trust in mainstream medicine in favor of “natural” or wellness narratives, and possibly to further restrict women’s autonomy.
  • A minority say he’s reflecting genuine distrust in U.S. health institutions and that some of his targets (e.g., processed foods, additives) may be non‑crazy even if his reasoning is poor.

Autism Rates, Heritability, and Alternative Explanations

  • Several emphasize strong heritability: autistic parents and siblings, twin studies, and likely genetic factors dominating over any single environmental exposure.
  • Rising autism prevalence is often attributed to broadened diagnostic criteria and reduced stigma, analogized to the historical rise in reported left‑handedness.
  • Others raise speculative environmental contributors (pollution, microplastics, pesticides, EM signals), but these are explicitly flagged as unproven.

Pregnancy Risk Tradeoffs

  • Debate over whether precautionary bans on Tylenol in pregnancy are justified given current evidence.
  • Some argue pregnant people should avoid it for anything short of serious fever, relying on non‑drug measures; others counter that untreated pain and especially fever carry well‑documented fetal risks and that alternatives (NSAIDs, opioids, aspirin) are often worse.
  • Concern that simplistic messaging (“Tylenol causes autism”) will drive unsafe substitutions (e.g., aspirin in children, or no fever control).

Acetaminophen Safety and Culture

  • Extensive side discussion on liver toxicity: narrow margin between therapeutic and toxic doses, overdose common in ERs, but recommended dosing is considered safe.
  • Cultural contrast: in the UK it’s ubiquitous and recommended for almost everything; some HN users find this too casual, others find U.S. hostility overblown.

Markets, Lawsuits, and Science Communication

  • Some see the stock drop and public panic as ripe for plaintiff attorneys and perhaps opportunistic traders.
  • Several complain that media and political actors turn nuanced, inconclusive science into absolutist slogans (“no link” vs “proven cause”), further degrading public trust.
  • General worry that politicizing autism causation — whether via anti‑vax or anti‑Tylenol narratives — harms autistic people, parents, and serious research alike.

Nest 1st gen and 2nd gen thermostats no longer supported from Oct 25

What’s Being Ended and What Still Works

  • Google is ending app/API support for Nest 1st/2nd gen thermostats; they will still function as standalone thermostats.
  • On-device scheduling and “learning” modes reportedly continue, but mobile apps, Home app control, and third‑party integrations (e.g. Home Assistant, utility programs) will stop working.
  • Some see this as “not mass bricking,” others say losing remote/app control is effectively losing the core value they paid for.

Trust, Lifetimes, and Google’s Reputation

  • Strong sentiment that Google kills too many products; multiple commenters say this is the last straw for buying any Google hardware or depending on Google services.
  • Debate over expected support duration:
    • Some argue 20–30+ years is reasonable for a thermostat tied to a home and HVAC that can last decades.
    • Others counter that buyers got ~10–14 years, which they view as acceptable for a complex connected device.
  • Several call for regulation: minimum advertised support lifetimes, or mandatory release of keys/APIs/firmware when cloud support ends, to avoid e‑waste.
  • A minority argues Google’s only obligation is to shareholders and that minimal support until it’s legally safe is “normal business.”

Cloud vs Local: Design and Business Models

  • Thread-wide “lesson”: avoid IoT devices that require a vendor cloud and don’t offer local or self-hosted control.
  • Complaints that almost all “smart” gear routes LAN‑to‑LAN control through remote servers and logins, often justified under “security” or account UX.
  • Others tie cloud-dependence to VC‑style subscription valuation and forced upgrade incentives, not technical necessity.
  • One former early Nest engineer notes that adding secure local APIs or modern protocols to 2010-era Linux devices is non-trivial, but many still argue Google could at least keep basic cloud endpoints up or expose a simple local API.

Alternatives and Local-First Setups

  • Many recommend Ecobee, though it also has cloud/API quirks; praise for HomeKit mode and open-source tools (e.g. beestat) for history/analytics.
  • Other suggested options: Z‑Wave/Zigbee thermostats with Home Assistant, Honeywell Z‑Wave and T6, Sinopé, Venstar (documented local JSON API), cheap Zigbee/Z‑Wave units from AliExpress, Insteon, Amazon’s thermostat.
  • Repeated advice: favor devices with:
    • Local protocols (Z‑Wave, Zigbee, Matter, HomeKit, LAN APIs).
    • Optional or no cloud; no forced OTA; ideally hackable/3rd‑party firmware (e.g. Tasmota).
    • Integration with Home Assistant and isolation on dedicated VLANs.

“Smart” vs “Dumb” Thermostats

  • Pro‑smart arguments: remote control when traveling, pre‑heating/cooling before returning home, using remote sensors, handling system thermal lag, better UI than legacy programmable units.
  • Anti‑smart or skeptical views: old mechanical or simple digital thermostats last 30–50+ years, are cheap, reliable, and not hostage to corporate decisions; many “smart” features (learning, AI) are seen as gimmicky or annoying.

Hacking and Community Rescue

  • Mention of other ecosystems rescued by open source (e.g., Squeezebox/Lyrion, Tasmota), and calls for similar openness from Google.
  • One commenter is building an open-source replacement PCB for Nest 2nd gen using ESP32‑C6, reusing the existing enclosure and integrating with Home Assistant, as a way to keep the hardware useful after Google’s cutoff.

I kissed comment culture goodbye

Experiences with Friendship and Connection

  • Several commenters report making close friends, partners, business contacts, even political allies via comment-based communities (forums, Nextdoor, Reddit, HN, IRC, gaming voice chat).
  • Others say they’ve never formed a single offline connection through comments, especially on HN and Reddit, which feel anonymous and transient.
  • Many note a life-stage effect: as they aged and built offline networks, the drive and energy to form new online friendships declined.

Platform Design and Its Consequences

  • HN’s lack of avatars, PMs, and notifications is seen as intentionally content-focused but connection-poor.
  • Older forums and BBSs (phpBB, LiveJournal, IRC) are remembered as better for relationship-building due to stable identities, signatures, and easier one-to-one follow-up.
  • Modern platforms prioritize engagement via endless feeds and upvote/downvote mechanics, which reward jokes, outrage, and conformity over vulnerability or depth.
  • Some praise smaller, topic-focused spaces (niche subreddits, Discord servers, local FB groups, livestream chats) as still capable of fostering real community.

Polarization, Toxicity, and “Enshittification”

  • Many feel that comment culture degraded around the mid‑2010s with polarization, troll farms, and engagement optimization.
  • Comment sections on big sites are described as angry, repetitive, meme-driven, and hostile to dissent; good answers get buried.
  • Up/downvotes become “like/dislike” tools in emotional topics, driving hive-mind behavior and pushing out subject-matter experts.

Authenticity and the Rise of Bots/AI

  • Multiple commenters now doubt whether interlocutors are human, citing bot farms and LLM‑generated content.
  • One anecdote about a meme mis-handled by an AI model triggers broader concern that subtle cultural context is being lost or flattened.
  • Some argue bots aren’t even required: platform dynamics alone can create “false pluralities” and distorted perceptions of consensus.

Why People Still Comment

  • Many say they comment primarily to think, learn, and practice writing, not to make friends; drafting then deleting is common and still useful.
  • Others admit to a commenting “addiction” driven by dopamine from replies and arguments.
  • There’s disagreement over “ROI”: some see comment time as wasted socially, others as high‑value for intellectual growth, career serendipity, or modest connection—especially in smaller, “cozy web” communities.

Anthropic agrees to pay $1.5B to settle lawsuit with book authors

Nature of the case & what was actually punished

  • Many commenters stress this lawsuit was about piracy, not about whether training on copyrighted books is fair use.
  • Anthropic allegedly downloaded large “shadow library” datasets (LibGen, Books3, PiLiMi), then later bought physical books and destructively scanned them.
  • Settlement terms (as extracted from filings):
    • $1.5B fund, estimated ~$3,000 per copyrighted work (500k works; more money if more works are proven).
    • Destruction of pirated datasets from shadow libraries.
    • Release only for past infringement on listed works, not for future training or for model outputs.

Fair use and model training

  • A prior ruling by the judge found that training on legally acquired books was fair use and “transformative”; the illegal act was downloading pirated copies.
  • Several participants underline: settlement creates no binding precedent, but the earlier district ruling is now persuasive authority others will cite.
  • Others argue fair use was never meant for massive LLM training, and that “reading” vs. “perfect recall & regurgitation” remains unresolved in other cases (e.g., Meta, OpenAI).

Economic & strategic takes

  • Many see $1.5B as a “cheap” price for having rushed ahead using pirated data, given Anthropic’s multi‑tens‑of‑billions funding and valuation.
  • Some think investors likely pushed to settle to remove existential downside and avoid an appellate precedent.
  • Debate over proportionality: $3,000 per $30 book seems high to some, but others note statutory damages can reach $150,000 per work, so this is a discount.

Impact on competitors & open source

  • Widespread speculation about pressure on OpenAI, Meta, Microsoft; some think this effectively “prices in” book piracy as a one‑off cost of doing business.
  • Concern that only giant, well‑funded players can now afford clean book corpora (buy + scan), further squeezing startups and open‑source efforts.
  • Some fear this accelerates consolidation; others argue data cost is still tiny compared to compute.

Books, libraries & data sourcing debates

  • Long subthread on whether buying/borrowing physical books then scanning them is ethically/legally different from torrents, and whether this is “scalable.”
  • Comparisons to Google Books and the Internet Archive:
    • Google’s scanning for search/preview was upheld as fair use; IA’s full book lending remains contested.
    • Commenters note irony that destructive scanning for AI is OK while non‑AI archives are punished.

Ethics, corruption & “move fast” culture

  • Strong resentment toward the “break the law at scale, pay later” startup playbook, with analogies to Uber and other tech firms that used illegality as a growth strategy.
  • Some argue this normalizes a regime where only rich entities can afford to violate the law, then settle—eroding the social contract and confidence in institutions.

Authors’ perspective & payouts

  • Authors in the thread actively look up whether their works are in LibGen and register with the settlement site; some note they may earn more from this than from sales.
  • Dispute over who really benefits: large publishers vs individual authors; many expect much of the money to go to rights‑holding corporations, not creators.

International & future legal landscape

  • Discussion of jurisdictions (EU text‑and‑data‑mining exceptions, Japan, Singapore, Switzerland) where training may be broadly allowed if data is lawfully accessed.
  • Some foresee countries explicitly carving out AI‑training exceptions to attract AI companies, while others warn that Chinese labs, less constrained by Western copyright, may gain a long‑term data advantage.
  • Ongoing uncertainty flagged: future rulings on outputs (regurgitation, style emulation), contract‑based restrictions (EULAs barring training), and new litigation (e.g., NYT‑style cases) are still “live.”

What to do with an old iPad

Locked-down hardware, ownership, and e‑waste

  • Strong frustration that old iPads are perfectly fine hardware but “functionally useless” because Apple stops OS support and locks bootloaders.
  • Many argue users should be allowed to install alternative OSes once Apple drops support, instead of being funneled into upgrade-or-landfill.
  • Recycling is seen as inferior to reuse; some view Apple’s stance as profit-driven churn, others also blame internal security/lockdown culture.
  • A minority defends Apple’s approach via trade‑ins and recycling, even framing shredding→new iPad as the “unlock” path.

Alternative OSes, Linux, and jailbreaking

  • Desire to run Linux or even macOS on iPads, especially newer M‑series models, but current reality is: locked bootloader + per‑model SoC complexity.
  • Non‑x86 hardware is described as poorly standardized, making general-purpose OS ports hard; efforts like postmarketOS are cited as struggling here.
  • Jailbreaking is seen as the only route, but it’s fragile: version‑specific, semi‑tethered, dependent on shady tools, and often requires an Apple ID some refuse to create.
  • People mention prior work (Linux on iPad, macOS userspace on iPhone), UTM for virtualized OSes, and iSH for userspace Linux, but none solve the base-OS lock.

Practical reuses and limitations

  • Examples of repurposing: self-hosted blog on an iPad 2, Home Assistant / AppDaemon dashboards, AV room controllers, status panels, PDF music scores, and offline video players (e.g., VLC on treadmill).
  • But old Safari and frozen web standards break many modern browser-based dashboards and apps.
  • Some devices are effectively doomed by bulging batteries or broken touchscreens.

Battery behavior, charging bugs, and “spicy pillows”

  • Concern about battery swelling on always‑plugged devices; some mitigate by unplugging or using smart plugs/timers to cycle charge levels.
  • Reports that certain iPads sometimes drain battery even while plugged in under heavy load (e.g., dashboards), possibly due to weak chargers or OS bugs.
  • Others share decade‑old iPads still holding charge well, highlighting very mixed longevity experiences.

Hosting, Cloudflare, and ISP concerns

  • The blog’s iPad server sits behind Cloudflare; outages were due to tunnels or local network, not HN load.
  • Back-of-envelope numbers suggest HN front-page traffic is only a few to ~10 requests/sec, easily handled by simple static setups.
  • Several argue consumer ISPs rarely care about that kind of upstream use, though contracts often technically forbid “servers.”
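The back-of-envelope above can be reproduced with assumed (not measured) numbers for a typical front-page spike:

```python
# Rough estimate with assumed inputs: visitors, duration, and
# requests per visit are illustrative guesses, not measurements.
visitors = 50_000           # assumed total visitors from the front page
hours = 6                   # assumed spread of the traffic
requests_per_visit = 3      # assumed page + asset fetches

rps = visitors * requests_per_visit / (hours * 3600)
print(f"~{rps:.1f} requests/sec on average")  # peaks a few times higher
```

Even generous inputs land in the single digits per second on average, comfortably within what a static site on modest hardware (or an iPad) can serve.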

Freeway guardrails are now a favorite target of thieves

Rising metal theft and examples

  • Commenters report widespread theft of metals beyond guardrails: copper streetlight wiring, bridge lighting, brass plaques and hydrant fixtures, graveyard sculptures, EV charging cables, telecom and power lines, even cobblestones and plumbing.
  • Similar anecdotes come from multiple countries (US, Europe, South America, Africa, Australia), with impacts ranging from dark streets to weeks-long train outages and even whole countries briefly offline.

Why now? Causes debated

  • Suggested drivers:
    • Higher commodity prices, especially copper/brass, possibly amplified by tariffs.
    • Economic desperation, addiction (meth/fentanyl), and lack of opportunity or social support.
    • Perception that property crime is rarely punished and that local police don’t prioritize it.
    • Dramatic improvements in cordless power tools (recip saws, angle grinders, battery cut-off saws) make cutting infrastructure fast and quiet; the tools themselves are often stolen too.
  • Some argue drugs and mental health issues are the main cause; others emphasize inequality, institutional decay, and weak social safety nets. There is disagreement on which factor dominates.

Economics and incentives

  • Scrap value is low compared with repair costs, but often sufficient for an addict or someone living extremely cheaply; examples given of earning tens or hundreds of dollars for minutes of work (catalytic converters, EV cables).
  • Guardrail repair numbers in the article are seen as small in the context of overall public budgets, but still large relative to the thieves’ take.
  • Some note that “legit” curbside scrap-scavenging is common and useful, contrasting with destructive infrastructure theft.

Infrastructure and material choices

  • Discussion of why LA uses aluminum guardrails: softer impact behavior and corrosion resistance vs galvanized steel, though some say steel can be engineered to be equally “soft.”
  • Officials are reportedly considering fiberglass/composite rails and aluminum instead of copper wiring to remove scrap value.
  • EV chargers, power lines, and railway cables are frequent targets; some operators already use aluminum cables or design de-energized systems to reduce danger and attractiveness.

Scrapyards, fencing, and enforcement

  • Many argue thieves are just one link; the real chokepoint is scrapyards and intermediaries willing to buy obviously stolen material.
  • Proposed responses: strict ID requirements, bans or heavy regulation on buying certain items, major fines, or even criminal liability for yards that accept suspect loads.
  • Others note the volume and randomness of legitimate scrap (e.g., damaged guardrails, HVAC units, industrial scrap) makes perfect screening difficult; thieves can route through licensed “scrappers” or shops that fabricate paperwork.
  • UK-style ID rules and prior US crackdowns are cited; results are mixed, with theft shifting rather than disappearing.

Broader societal interpretations

  • Several comments frame the phenomenon as “third world behavior” or a symptom of societal decline: inequality, eroding institutions, and underfunded public services.
  • Others push back, saying theft exists in rich countries too and is more about addiction, impulsivity, or thrill-seeking than pure poverty.
  • A recurring theme: it’s often cheaper to prevent destitution than to repair the damage caused by those driven (or enabled) to strip public infrastructure for scrap.

Why Everybody Is Losing Money On AI

Cursor, Anthropic, and Weird Channel Economics

  • Commenters found it striking that Cursor reportedly passes essentially all its revenue to Anthropic, which is both its core supplier and direct competitor.
  • Some see this as unsustainable and question what happens to users if Cursor fails; others assume they will just shift to alternative AI coding tools.
  • From Anthropic’s side, selling heavily discounted capacity to a reseller who loses money is also seen as odd but consistent with land-grab strategies.

Training vs. Inference and Real Unit Economics

  • Several argue that model inference appears to have decent gross margins (e.g., ~50%), and that losses are driven mainly by huge training and research spend.
  • Others counter that you can’t ignore ongoing training, data licensing, salaries, and overhead; treating training as a one-off capex is misleading if the competitive race never stops.
  • A recurring point: AI breaks the old “software has near-zero marginal cost” assumption—every query consumes costly compute.

Will Costs Come Down?

  • One camp insists cost curves will improve via hardware, architectures, and software optimizations, citing massive historical drops in storage/compute prices and recent per‑token price reductions.
  • Skeptics argue the article’s point: costs haven’t fallen fast enough so far, structural constraints (GPUs, power, data centers) are real, and not all tech follows a Moore-like curve.
  • There’s disagreement over whether current reasoning/agentic usage patterns are erasing per-token price gains.

Why Keep Losing Money? (VC and Strategy Logic)

  • Many say this is normal VC behavior: burn cash now to capture market share in a potentially huge, winner-take-most space; analogous to early Amazon or Google.
  • Others object that this only makes sense if AI really is a $10T “golden goose,” which some are beginning to doubt.

Profitability, Pricing, and Competition

  • Some argue AI could be profitable today if firms stopped training new models and/or raised prices; competition and expectations, not intrinsic economics, keep margins thin.
  • Others respond that pausing training would sacrifice freshness and advantage, and that high compute, hardware, and energy costs limit how far prices can rise before demand drops.

Adoption, Value, and Skepticism

  • Mixed experiences: some users feel LLMs deliver huge personal value and would pay much more; others have abandoned them with no noticeable loss in productivity.
  • Debate over whether AI usage will become a de facto job requirement, similar to IDEs or smartphones, or remain optional for many “boring” software and business tasks.
  • A few worry about long‑term dependence on AI platforms that may later become “enshittified” once pricing power is consolidated.

Historical Analogies and Bubble Talk

  • Comparisons range from PCs and smartphones (transformative, compounding value) to Segways, Zeppelins, and dot‑com flops (hyped but limited or mispriced).
  • Some expect an AI bubble burst that wipes out weak players while leaving underlying behavioral and technical shifts intact.

European Commission fines Google €2.95B over abusive ad tech practices

Deterrence: Fines vs. Criminal Liability

  • Many argue that repeated antitrust violations show fines are “cost of doing business”; they call for three‑strikes–style rules and personal criminal liability for executives or decision‑makers.
  • Others question who exactly should go to jail in a committee-driven corporation, but some respond: “everyone who knowingly approved illegal conduct.”

How Big and How Effective Is €2.95B?

  • Debate over whether ~€3B is a meaningful penalty: some note it’s ~15% of Google’s annual EU net profit and therefore not trivial; others call it a slap on the wrist for a company of that size.
  • Several note fines can be repeated and increased, and are accompanied by mandated changes to business practices, which is what regulators really want.

Passing Costs On & “Cost of Doing Business”

  • One camp insists any fine or cost will be fully passed on to advertisers and consumers; therefore fines function as an indirect tax on everyone else.
  • Others counter that higher costs reduce competitiveness and margins, so companies can’t always fully pass them on—especially if competitors are not fined for similar behavior.

Google’s Adtech Conduct

  • Commenters summarize the ruling as: Google used dominance in tools for publishers and advertisers plus its AdX exchange to self‑preference, with practices like:
    • Steering Google Ads demand mainly to AdX.
    • Using privileged information about rival bids.
    • Contractual limits on using competing ad tech.
  • Many see inherent conflict in letting a dominant market-maker also be a major market participant.

Ads, Marketing, and the Web

  • A long subthread debates whether targeted online advertising should be radically constrained or even banned.
  • Some want “marketing” or the sale of attention outlawed; others say advertising is structurally necessary for competitive markets and product discovery, but tracking-based, behavior‑modifying ads may not be.

EU vs US, “Leaving the EU,” and Geopolitics

  • Multiple commenters dismiss the recurring threat that Google or other giants will “leave the EU” given the huge profits there.
  • Some worry a future US administration could retaliate via tariffs or pressure to shield US tech, while others argue the EU must not base its laws on shifting US politics.

EU Institutions, Rule of Law, and Tech Scene

  • Disagreement over whether the European Commission wielding both rule‑making and enforcement powers is healthy; some see risks of politicization versus court‑centric systems.
  • Broader argument over why Europe has few global tech giants: suggestions include culture (comfort vs. competitiveness), fragmented markets, weaker VC, and the impact of US megacorp dominance.

Interview with Geoffrey Hinton

Hinton’s Expertise and Credibility

  • Some argue he’s not an LLM/transformer specialist and openly says he doesn’t fully understand them, so they discount his predictions.
  • Others stress his foundational role in deep learning and mentoring key figures, seeing attacks on him as ignorant or disrespectful.
  • Several commenters highlight his history of confident but wrong forecasts (e.g., radiologists being “already over the cliff”), calling him speculative and inconsistent.
  • There’s debate over “hero worship” vs. fair respect for major contributors, and whether citation counts or prizes should matter in judging his current statements.

Is AI Actually “Intelligent”?

  • Hinton’s line that “by any measure AI is intelligent” alarms some, who see it as unusually sweeping for him and likely to age badly.
  • Long subthread on the lack of a clear definition of “intelligence”:
    • Some say this makes the “is it intelligent?” question basically philosophical and unhelpful.
    • Others argue we can still use human-like behavior, or operational tests like the Turing test, as practical proxies.
    • Some insist current systems only mimic intelligence and that calling them intelligent is mostly marketing.

Economic and Labor Effects

  • Core claim discussed: AI will let rich people replace workers, boosting profits for a few and impoverishing many; blame placed on capitalism, not AI itself.
  • Many see this as just a continuation of existing trends in capital–labor imbalance and automation.
  • Others dispute inevitability: past tech often increased overall wealth and reduced poverty, though inequality rose.
  • Radiology and self‑driving cars are cited as examples where “imminent replacement” narratives failed; more likely outcome is job transformation, not mass elimination—at least in the near term.

Capitalism, Regulation, and Possible Responses

  • Strong skepticism that US (or allied) governments will seriously regulate AI; “reverse regulation” to protect corporate interests is seen as more likely.
  • Concerns about extreme concentration of wealth and power if AI + robots allow production without human labor or consumers.
  • Ideas floated: robot/AI taxes, socialism, stronger safety nets, or “techno‑anarchist” visions where personal, decentralized AIs help people coordinate and organize beyond current social‑media platforms.

MentraOS – open-source Smart glasses OS

Openness, “OS” Definition, and Architecture

  • Debate over whether MentraOS is a true OS or mainly an SDK + cloud platform sitting atop AOSP and minimal firmware.
  • Some see it as genuinely open source (including cloud components); others note the Android base and argue crucial low-level code isn’t in the repo.
  • Clarification from project participants: current “AI glasses” model runs AOSP; a 2026 HUD model will use a lightweight MCU client.

Cloud Dependence, Edge Limits, and Privacy

  • Strong criticism that without the cloud MentraOS isn’t much of an OS and becomes a privacy risk, especially with cameras and mics.
  • MentraOS team says the “Mentra Cloud” / relay can be fully self-hosted and that developers host their own apps.
  • Architecture uses the cloud to let multiple apps run concurrently and share “context,” and to save phone battery; an edge mode will exist but will be limited to one app, with heavier phone battery drain.
  • Some argue cloud should be optional, not the core model, and that “cloud apps” inherently increase surveillance and latency.

Device Compatibility and Hardware Trade-offs

  • MentraOS claims to target multiple glasses (Even Realities G1, Vuzix Z100, others), but cannot support locked-down devices like Meta Ray-Bans yet.
  • Discussion that many smart glasses simply run Android; HUD-only devices use lighter stacks.
  • Several users want “just a display” driven by phone/laptop (Xreal, Rokid, Viture, Lenovo Legion, Vufine mentioned), without cameras/mics for privacy and simplicity.
  • Counterpoint: microphones and sensors enable key features like captions, translation, head tracking.

Use Cases, AR Expectations, and “Dumb” vs Smart

  • Desired features: live translation, subtitles, navigation, minimal AR overlays, and even ad blocking (with concern about “subtractive reality”).
  • Some argue today’s products are mostly HUDs, not true AR; others insist full spatial AR is the real goal.
  • A sizable camp prefers “dumb” glasses: act as camera + Bluetooth/USB display for phone apps, no app store or on-device AI. Others respond this breaks down with multiple apps and shared sensors, which is what MentraOS aims to solve.

Business Model, Culture, and Longevity Concerns

  • The careers page (996, “transhumanist hackers,” anti–work-life balance) triggers backlash as emblematic of VC-driven, unsustainable culture.
  • Skepticism that any VC-backed “open” platform will stay open; comparisons to other projects that started open and shifted toward control.
  • Persistent doubt that smart glasses in general will achieve mainstream, lasting utility given ergonomics, battery, and social acceptability.