Hacker News, Distilled

AI-powered summaries for selected HN discussions.


How can England possibly be running out of water?

Privatisation, Profit and Regulation

  • Many blame England’s water crisis on 1980s–90s privatisation: firms loaded up on debt, paid out large dividends, underinvested in maintenance and reservoirs, and now seek big bill rises to fix decaying assets.
  • Others argue the real driver is political: the regulator kept prices artificially low and demanded investment, forcing companies into debt and making long‑term planning unattractive.
  • Counterpoint: nationalised utilities can also be corrupt or inefficient; examples from Scotland, Ireland, LA and the USPS are cited to show state ownership isn’t automatically better.
  • Several note a “natural monopoly” like water is structurally ill‑suited to shareholder profit, since every penny of profit comes from higher bills, poorer service, or deferred maintenance.

Leaks, Infrastructure and Planning Constraints

  • ~20% of treated water is said to leak from pipes; annual replacement rates are tiny. Commenters see fixing leaks as the obvious near‑term “solution” that private firms lack incentives to pursue.
  • Pre‑privatisation, new reservoirs were built regularly; since then almost none. Water companies claim they have proposed reservoirs (e.g. Abingdon) but have been blocked by regulators and local NIMBY campaigns.
  • The planning system is widely criticised as slow and veto‑ridden, making any large reservoir or canal project a multi‑decade effort.

Climate Change, Responsibility and Behaviour

  • Several point to shifting rainfall patterns: more intense downpours and longer dry spells require much more storage even if annual rainfall rises.
  • Debate over responsibility: some stress oil‑producing states and companies that hid research and lobbied against renewables; others insist consumers and voters share blame for high‑carbon lifestyles and voting against “green” policies.
  • Individual action vs systemic change is contested; some highlight personal emissions cuts, others argue only state‑level coordination and regulation can matter at scale.

Population, Immigration and Demand

  • One camp frames the issue as infrastructure failing to keep pace with ~20–25% population growth, sometimes explicitly tying this to immigration.
  • Others counter that this growth is modest by rich‑country standards, that water abstraction has been flat or declining, and that privatisation and leaks, not migrants, are the core problem.
  • There is concern that blaming population becomes a distraction from governance and investment failures.

Desalination, Energy and Technical Fixes

  • Desalination is debated: some cite sub‑$1/m³ costs and widespread use in arid countries; others highlight high energy needs, capital intensity, and point out that feeding leaky networks with expensive desal water is wasteful.
  • A few suggest pairing desal with surplus renewables; critics respond that “excess” power is intermittent and markets plus batteries will erode such opportunities.
  • Many say building reservoirs and fixing pipes would be vastly cheaper and easier for the UK than large‑scale desalination.

Pricing, Metering and Usage Patterns

  • Commenters contrast England’s widespread flat‑rate or unmetered billing with more metered systems in Germany and elsewhere; some see metering as essential to curb household waste.
  • Others note big users are agriculture and industry; focusing only on domestic hosepipe bans and toilet flushing is seen as symbolic rather than structural.
  • Examples from Scotland, Ireland, Quebec and Chicago show a range of “water as public good” models, some leading to overuse and underfunded infrastructure.

Broader Governance, Neoliberalism and “Nothing Works”

  • The thread repeatedly zooms out: water is cited alongside rail, energy, health care and housing as evidence of a wider UK pattern of underinvestment, short‑termism and “Thatcherite” asset‑stripping.
  • Some defend markets and competition in non‑monopoly sectors, but condemn the UK’s hybrid model as combining “all the disadvantages of private ownership with all the disadvantages of state control.”
  • Others emphasise vetocracy and NIMBYism: even when companies or government want to build, local and legal obstacles stall projects for a decade or more.

Stop writing CLI validation. Parse it right the first time

Scope of parsing in everyday code

  • Some commenters say they rarely write parsers beyond Advent of Code or using JSON/YAML libraries.
  • Others argue parsing underlies most work: user input, CLI args, API payloads, server responses, and that many security bugs stem from not parsing properly at all.

“Parse, don’t validate” explained and debated

  • Supporters describe it as: turn loose data into a stricter type once, then use that type everywhere; don’t validate a loose type and keep passing it around.
  • Emphasis on reflecting invariants in the type system (“illegal states unrepresentable”), reducing scattered defensive checks.
  • Critics say it’s vacuous because “someone is still validating” (often a library like Zod/Pydantic); they see it more as an injunction to reuse good libraries.
  • Clarifications distinguish a parser (returns a new, constrained type) from a validator (predicate on existing value), and note structural vs nominal typing issues in TypeScript.
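The parser-versus-validator distinction above can be sketched in a few lines. This is a minimal illustration in Python (not the featured TypeScript library); `Port`, `parse_port`, and `connect` are hypothetical names chosen for the example:

```python
from dataclasses import dataclass

def validate_port(value: str) -> bool:
    """Validator: a predicate on the loose type; callers still pass around a str."""
    return value.isdigit() and 0 < int(value) < 65536

@dataclass(frozen=True)
class Port:
    """Parsed type: constructing one proves the invariant, once."""
    number: int

def parse_port(value: str) -> Port:
    """Parser: turns loose data into a stricter type, or fails loudly."""
    if not value.isdigit():
        raise ValueError(f"not a number: {value!r}")
    n = int(value)
    if not 0 < n < 65536:
        raise ValueError(f"port out of range: {n}")
    return Port(n)

# Downstream code takes Port, not str, so the invariant never needs re-checking.
def connect(port: Port) -> str:
    return f"connecting to port {port.number}"
```

The difference the thread circles around: `validate_port` returns a bool and leaves callers holding a `str`, while `parse_port` returns a new, constrained value that carries its proof of validity with it.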

CLI parsing, Optique, and parser combinators

  • The featured TS library is seen as a specialized parser combinator toolkit for CLI, with strong type inference from parser declarations.
  • Comparisons are made to argparse, clap, docopt, Typer, PowerShell’s parameter system, and various TS/JS libraries; some say it’s closer to schema validation tools like Pydantic or Zod than to basic flag parsers.
  • Several note that parser combinators are conceptually simple and a good fit for argv streams.

Error reporting and invalid states

  • Concern: if you fully encode invariants, can you still report multiple errors or must you fail fast?
  • Responses: use applicative-style validation that accumulates errors into arrays/aggregates; have intermediate representations that allow invalid states but don’t leak them past the parsing boundary.
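The error-accumulation idea can be sketched without any combinator machinery. A hand-rolled Python version (field names and the `Config` shape are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    host: str
    port: int

def parse_config(raw: dict) -> Config:
    """Applicative-style validation: collect every error, then fail once."""
    errors: list[str] = []

    host = raw.get("host")
    if not isinstance(host, str) or not host:
        errors.append("host: expected a non-empty string")

    port = raw.get("port")
    if not (isinstance(port, int) and 0 < port < 65536):
        errors.append("port: expected an integer in 1..65535")

    if errors:
        # All problems are reported together, instead of failing fast
        # on the first bad field.
        raise ValueError("; ".join(errors))
    return Config(host=host, port=port)
```

The invalid intermediate state (a half-checked dict plus an error list) exists only inside the parsing boundary; nothing past `parse_config` ever sees it.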

Design of CLI interfaces

  • Some argue complex dependent options indicate poor UX; prefer simpler schemes (positional args, enums like -t TYPE, combined host:port, DSNs).
  • Others accept required/related options as normal and value explicit named options over ambiguous positional arguments.

Type systems, layers, and safety

  • Disagreement over how much validation belongs in the I/O layer vs domain core.
  • General consensus that parsing to rich types at boundaries aids structure, but it doesn’t replace concerns like SQL injection; type safety is helpful but not absolute.

Languages, runtimes, and tooling

  • Debate over whether CLIs should be native binaries only vs being fine in Node/Python, especially internally.
  • Side discussion about static vs dynamic linking, libc compatibility, and appreciation for ecosystems with strong, type-aware CLI tooling (Rust’s clap, PowerShell, etc.).

Meta reactions

  • Some suspected the article was LLM- or machine-translated based on style; others found it novel, clear, and enjoyable, praising the concrete application of “Parse, don’t validate” to CLI design.

Oldest recorded transaction

Beer, fermentation, and early civilization

  • Discussion notes that beer and bread co-evolved: old bread as beer starter, live beer used in bread dough for flavor.
  • Evidence of grain soaking/light fermentation thousands of years before the tablet suggests nutrition and palatability were key drivers long before “beer as leisure.”
  • Some argue that large-scale grain agriculture and even semi-permanent settlements may have been motivated primarily by fermentation/beer; others treat this as speculative.
  • Debate on why even children historically drank weak beer: one view is pathogen-killing alcohol; another says that “unsafe water” is overstated and that dense, portable calories were the bigger factor.

Receipts, complaints, and what early writing recorded

  • Commenters highlight how striking it is that one of the very oldest texts is a receipt, not a story or prayer.
  • The ubiquity and forgettability of numbers are suggested as a reason writing started with accounting: we remember stories; we don’t remember quantities or debts.
  • Links to the famous ancient customer-complaint tablet show that transactional records and disputes are among the earliest preserved genres.

How writing emerged and evolved

  • Several posts discuss early Mesopotamian writing as logographic/semasiographic: symbols for commodities and quantities without grammar, possibly readable across languages.
  • There’s extended debate over how quickly phonetic use emerged (via the rebus principle) and how to classify modern Chinese characters:
    • One side stresses that modern usage is fundamentally phonetic (characters represent syllables), with historical semantics in the background.
    • Another emphasizes the mixed, messy legacy of logographs, phono-semantic compounds, and bound morphemes, and that Japanese kanji are often less phonetic in practice.
  • More speculative side-thread: language constraining thought vs ideas too complex for speech, and other media (like Dynamicland) as ways to express such ideas.

Durability, survivor bias, and “rock solid” storage

  • The article’s joke about 5000-year durability prompts pushback: tablets survived partly by accident (burning cities firing clay); most are lost.
  • Still, some argue that no modern digital record is realistically likely to survive 5000 years without highly active migration, whereas clay can passively persist.
  • Others note that survival of tablets is contingent (e.g., lost archives where cities didn’t burn or sank below the water table).

Storing ancient dates in modern databases

  • Multiple commenters say museums effectively store ancient dates as text (“circa 2000 BC”, ranges, qualifiers) and keep separate numeric ranges for sorting.
  • One practitioner describes mapping free-form date strings to year ranges in a side table; another links a library built and tested on the Met’s ~470k-object dataset.
  • PostgreSQL’s date range (4713 BCE to far future) is discussed; people are surprised by the asymmetric range and how it fits into 31-bit day counts.
  • ISO 8601’s treatment of year 0000 as 1 BCE (with negative years for earlier dates) is criticized as baking in an off‑by‑one for human-readable BCE.
  • Some suggest richer types for imprecise dates (value + margin, ranges), and note that historical calendars (“year of King X”, consular years, religious calendars) vastly complicate a simple “integer year” model.
  • A few muse about extreme solutions like overloading comparison operators to call an LLM for fuzzy date reasoning, though this is clearly speculative/playful.
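The side-table approach and the ISO 8601 off-by-one can both be shown in a few lines. A minimal sketch, assuming a made-up rule that "circa" widens the range by ±50 years (real museum data uses far messier qualifiers):

```python
import re

def bce_to_astronomical(year_bce: int) -> int:
    """ISO 8601 / astronomical numbering: 1 BCE is year 0, 2 BCE is -1, ...
    This shift is the off-by-one the thread complains about."""
    return 1 - year_bce

def parse_museum_date(label: str) -> tuple[int, int]:
    """Map a free-form label like 'circa 2000 BC' to a sortable
    (earliest, latest) astronomical-year range, mimicking the
    side-table approach described above."""
    m = re.fullmatch(r"(circa\s+)?(\d+)\s*BC\b.*", label.strip(), re.IGNORECASE)
    if not m:
        raise ValueError(f"unrecognized date label: {label!r}")
    year = bce_to_astronomical(int(m.group(2)))
    slack = 50 if m.group(1) else 0   # assumed margin for 'circa'
    return (year - slack, year + slack)
```

The display string stays as free text; only the derived numeric range is used for sorting, which is exactly the split the practitioners in the thread describe.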

Politics, museums, and ownership

  • The blog’s quip about a British Museum manager wanting to store “theft inventory” draws mixed reactions:
    • Some say it’s inappropriate “politics” that undermines neutrality.
    • Others counter that recent thefts and colonial acquisition histories are factual, and that a lighthearted blog can acknowledge them.
  • Related tangent on Tintin: stories where artifacts are “rescued” from non‑European locales and placed in European museums now read as uncomfortably colonial.
  • Another thread notes that as ancient DNA reveals dramatic population replacements, claims that artifacts “belong” to whoever lives on the land today will grow more contentious.

“Oldest” versus “oldest known”

  • One commenter is persistently annoyed by phrases like “oldest recorded transaction” without qualifiers like “known” or “surviving.”
  • Others reply that language is typically understood to mean “oldest surviving example we know of,” though some agree that explicitly saying “oldest surviving/known” would be more precise.

AI surveillance should be banned while there is still time

Policy and regulation proposals

  • Requests for concrete policies: suggestions include mandatory on-device blurring of faces/bodies before cloud processing, and strong limits on training models with user data.
  • Some propose strict liability frameworks: large multipliers on damages and profits for harms caused, to realign incentives.
  • Another thread argues for treating AI like a fiduciary: privileged “client–AI” relationships, bans on configuring AIs to work against the user’s interests, and disclosure/contestability whenever AI makes determinations about people.

Training data, copyright, and data ownership

  • Several argue LLMs should only train on genuinely public-domain data, or inherit the licenses of their training data, with individuals owning all data about themselves.
  • Others stress the “cat is out of the bag”: enforcing new rules now would advantage early violators.
  • There is anger at low settlements for book datasets and claims that current practices are systemic copyright infringement.

Chatbots, persuasion, and privacy risks

  • Strong concern that long-lived chat histories plus personalization enable “personalized influence as a service” (political, financial, emotional).
  • People highlight how future systems could use all past chats (with bots and humans) as context for targeted manipulation or even court evidence.
  • Some see privacy-focused chat products as meaningful progress; others see them as marketing that still leaves users exposed (e.g., 30‑day retention, third-party processors).

Skepticism about bans and institutions

  • Many doubt AI surveillance can be effectively banned: illegal surveillance isn’t stopped now, laws lag by years, and fines are tiny relative to profits.
  • Some view belief in regulatory fixes as naïve given concentrated wealth, lobbying, and revolving doors.
  • Others argue “do something anyway”: build civil tech, secure communications, and new organizing spaces.

Geopolitics, power, and arms-race framing

  • One camp: surveillance AI is like nuclear weapons; unilateral restraint means strategic defeat by more authoritarian states.
  • Counterpoint: nukes already constrain war; “winning” with ASI or AI surveillance may be meaningless or catastrophically dangerous for everyone.

Corporate behavior and trust

  • Persistent distrust of big AI firms: comparisons to therapist/attorney privilege are seen as incompatible with monitoring, reporting, and ad-driven incentives.
  • DuckDuckGo is both praised for pushing privacy and criticized for “privacy-washing” and reliance on third-party trackers/ads.

Platform moderation and everyday harms

  • Numerous anecdotes of AI or semi-automated moderation wrongly banning users on large platforms, with no meaningful appeals.
  • Concern that AI-driven enforcement plus corporate dominance creates undemocratic, opaque control over speech, jobs, and services.

Advertising, manipulation, and surveillance capitalism

  • Debate over targeted ads: some users like relevance, others emphasize ads as adversarial behavioral modification, not neutral product discovery.
  • Worry that granular profiling lets firms push each person to their maximum willingness to pay, shifting surplus from users to corporations and AI providers.

Cultural and technical responses

  • Suggestions include: locally running models, hardware-based business models, avoiding anthropomorphizing AIs, opting out of smartphones/social media, and building privacy-preserving or offline alternatives.
  • Underneath is a shared fear that pervasive AI surveillance will normalize self-censorship and make genuine privacy practically unreachable.

996

What 996 Is and Who It’s “For”

  • 996 = 9 a.m.–9 p.m., 6 days/week. Many see it as acceptable only for founders or owners with huge upside, not for regular employees on normal salaries or tiny equity.
  • Several note that founders’ work (meetings, selling, decisions) is qualitatively different from 12 hours/day of deep technical work, which is far less sustainable.
  • Some people do similar hours on their own projects and don’t experience it as “work” in the same way as employment.

Burnout, Health, and Actual Output

  • Numerous anecdotes: PhD labs, startups, banking, and medicine where long hours helped careers but caused burnout, health issues, and damaged relationships.
  • People describe “pseudo-work”: doomscrolling, socializing, staying late for optics, or shipping low‑quality code that others must fix.
  • Many argue you realistically get ~4–6 good hours of deep work per day; beyond that productivity and judgment crater, especially for engineers.

Power, Culture, and Optics

  • 996 is framed as a power imbalance: when hours aren’t bounded, “flexibility” benefits employers, not workers.
  • Some say 996 culture is mostly theater—for investors, bosses, or “face”—with Slack responsiveness and butts-in-seats mistaken for output.
  • Others connect this to erosion of labor rights, noting the weekend and 40‑hour week were hard‑won and are being quietly rolled back.

Geography and Labor Systems

  • In China, 996 is seen by some as failed management: people “摸鱼” (literally “touch fish”, i.e. mentally check out) for large chunks of the week. There are long lunch/nap breaks, so 12 hours in the office isn’t 12 hours of work.
  • China technically bans 996 without overtime pay; enforcement is patchy.
  • European commenters highlight legal hour caps, mandatory overtime compensation, and culturally enforced work–life balance as a contrast.

Equity, Class, and Incentives

  • Strong theme: 996 only makes sense if you capture founder‑level upside. Early employees with 0.1–3% equity are taking similar lifestyle risk for a tiny share of reward.
  • Several frame this as class: owners vs workers, builders vs “redistributors,” and see glorified overwork as wage‑slavery in startup wrapping.

Life Stage, Privilege, and Personal Choice

  • Some defend intense grind early in life, especially from poorer backgrounds, as a rational escape strategy.
  • Others counter that normalizing 996 harms everyone—especially parents, older workers, and those with other commitments—and that “choice” is constrained by economic desperation.
  • Broad agreement: voluntary crunch in short bursts can be meaningful; enshrining 996 as company culture is exploitative and counterproductive.

We hacked Burger King: How auth bypass led to drive-thru audio surveillance

Security system design and vulnerabilities

  • Commenters are stunned that a national chain’s drive‑thru monitoring stack had such basic security flaws (client‑side “auth”, hard‑coded passwords, weak signup flows) despite handling live audio and metrics across many stores.
  • Several note this level of sloppiness is common in corporate “digital transformation” projects, often outsourced or rushed, where analytics and dashboards are prioritized over security.

Surveillance and labor micromanagement

  • A major thread focuses less on the hack and more on the system’s purpose: recording and algorithmically analyzing every interaction to enforce scripted behavior (“positive tone,” slogans).
  • Many find this dystopian given wages and working conditions; some argue low‑paid workers are surveilled and disciplined far more harshly than well‑paid professionals.
  • Others point out this is an efficiency play tied to how replaceable workers are, not personal cruelty, and that similar pressures exist at the very top of white‑collar ladders.

Ethics, legality, and risk of “rogue” security research

  • Multiple commenters warn that targeting companies without an explicit bug bounty or testing authorization risks prosecution under the CFAA or similar laws, regardless of “good faith.”
  • Others push back that “only hack where permitted” neuters the hacker ethos and leaves the field to malicious actors; they see public write‑ups as socially useful pressure.
  • There’s debate over whether this specific post is “responsible”: some stress that issues were fixed the same day, others argue any unauthorized access is still illegal and self‑incriminating.

Disclosure norms and bug bounty economics

  • Discussion distinguishes “coordinated” vs “responsible” disclosure; some say implying non‑coordinated disclosure is inherently “irresponsible” is itself loaded framing.
  • Researchers describe experiences with low payouts, hostile NDAs, and vendors burying vulnerabilities, leading some to favor immediate or at least time‑boxed public disclosure.
  • Others emphasize that early full disclosure reliably harms users by enabling exploitation before patches, and say they wouldn’t hire researchers who ignore coordination.

DMCA takedown and platform leverage

  • The blog was taken down after a DMCA complaint apparently filed via a takedown‑as‑a‑service vendor; many see this as abusive use of copyright law to suppress embarrassing but lawful criticism.
  • People note the power imbalance: hosts/CDNs reflexively honor complaints, leaving targets little recourse; some even propose startups to fight DMCA abuse.

Recording drive‑thru audio: legal and privacy questions

  • Commenters argue over whether recording drive‑thru conversations without notice is legal:
    • Some say there’s no reasonable expectation of privacy in a public‑facing lane on private property open to the public.
    • Others cite all‑party‑consent and wiretap laws in certain US states, plus GDPR in Europe, as potential problems, especially if recordings are stored, analyzed, and linked to PII (cards, plates, profiles).
  • Beyond legality, many find the practice ethically troubling and symptomatic of wider surveillance capitalism, especially if tied to voice profiling or resale.

Qwen3 30B A3B Hits 13 token/s on 4xRaspberry Pi 5

Technical approach and scaling

  • The setup uses distributed-llama with tensor parallelism across Raspberry Pi 5s; each node holds a shard of the model and synchronizes over Ethernet.
  • Scaling is constrained: max nodes ≈ number of KV heads; current implementation requires 2^n nodes and one node per attention head.
  • People question how well performance would scale beyond 4 Pis (e.g., 40 Pis), expecting diminishing returns due to network latency, NUMA-like bottlenecks, and synchronization overhead.
  • Some ask about more advanced networking (RoCE, Ultra Ethernet), but there’s no indication it’s currently used.
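The scaling constraints summarized above reduce to simple arithmetic. A sketch, assuming (as the comments state) that the node count must be a power of two and cannot exceed the model's KV-head count:

```python
def valid_node_counts(n_kv_heads: int) -> list[int]:
    """Cluster sizes usable under the stated distributed-llama
    constraints: powers of two, capped at the KV-head count."""
    counts = []
    n = 1
    while n <= n_kv_heads:
        counts.append(n)
        n *= 2
    return counts
```

For a model exposing, say, 4 KV heads, the usable sizes are just 1, 2, and 4 nodes, which is why commenters doubt a 40-Pi cluster would help even before network latency enters the picture.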

Performance vs hardware cost

  • Several commenters find 13 tok/s for ~$300–500 in Pi hardware “underwhelming,” suggesting used GPUs, used mini PCs, or old Xeon/Ryzen boxes yield better cost/performance.
  • Multiple comparisons favor:
    • Used Apple Silicon (M1/M3/M4) with large unified memory as a strong local-inference option.
    • New Ryzen AI/Strix Halo mini PCs with up to 128GB unified RAM as another path, though bandwidth limitations are noted.
    • Cheap RK3588 boards (Orange Pi, Rock 5) offering competitive or better tokens/s than Pi 5 for some models.
  • Others note that GPUs still dominate raw performance, but are expensive, power-hungry, and VRAM-limited at consumer price points.

Local models, capability, and hallucinations

  • Many see local models like Qwen3-30B A3B as “good enough” for many tasks, comparable to last year’s proprietary SOTA.
  • There’s debate on whether “less capable” models are worthwhile for developer assistants:
    • Some argue only top-tier models avoid subtle technical debt and poor abstractions.
    • Others report real value from smaller coder models (4–15B) as fast local coding aids.
  • Hallucinations are seen as the main blocker for “killer apps.” Proposed mitigations include RAG and agentic setups that validate outputs (especially clear in coding), but commenters note this is harder in non-code domains and far from solved.

Consumer demand and killer apps

  • Opinions diverge on whether consumers care about local AI:
    • One camp says hardware is ahead of use cases; people “don’t know what they want yet” and killer apps are missing.
    • Another argues people have been heavily exposed to AI and largely don’t want more of the same (meeting notes, coding agents).

Children’s toys and ethics

  • Some are excited by Pi-scale LLMs enabling offline, story-remembering, interactive toys—likened to sci‑fi artifacts.
  • Others strongly oppose LLMs in kids’ toys, citing parallels with leaving children alone with strangers and concerns over shaping cognition and social norms.
  • A middle view emphasizes “thoughtful design” and intentionality in how children interact with AI, rather than blanket enthusiasm or rejection.

Hobbyist and cluster culture

  • Several acknowledge Pi clusters as more “proof-of-concept” or tinkering platforms than practical inference hardware.
  • Many hobbyists accumulate multiple Pis or SBCs from unfinished projects; repurposing them for distributed inference is seen as fun, if not strictly rational.
  • There’s recognition that for serious, cost‑sensitive workloads, used desktops, mini PCs, or a single strong machine often beat small ARM clusters.

Enterprise and labor implications

  • One long comment argues that even modest-speed, cheap local LLMs can automate large fractions of structured white‑collar tasks documented in procedures and job aids.
  • This view sees near-term disruption in “mind-numbingly tedious” office work, with human‑in‑the‑loop oversight, and raises questions about future work hours and the relative value of “embodied” service jobs that can’t be automated.

Let us git rid of it, angry GitHub users say of forced Copilot features

Alternatives & Centralization Concerns

  • Multiple commenters say they’re moving or donating to Codeberg, Forgejo, or self‑hosted GitLab; some note GitLab is also pushing AI and expect eventual community forks.
  • Debate over whether GitHub is “critical infrastructure” or just a fancy git server with PRs. Some say outages kneecap companies and that GitHub effectively acts as a CDN; others argue depending on it that way is bad engineering practice, not inherent criticality.
  • Strong regret that so much FOSS landed on a proprietary, VC‑funded platform, making the community hostage to a corporate owner; others reply that convenience and network effects made this outcome predictable.
  • GitHub stars, free CI (including macOS/Windows), and packages are seen as major lock‑in mechanisms beyond pure git hosting.

Reality of Copilot PR/Issue Spam

  • Several maintainers of popular projects report seeing zero Copilot‑authored PRs or issues; they suspect the scale of the problem is overstated.
  • Clarification: Copilot does not automatically open PRs/issues; a human has to trigger it. The main GitHub discussion is about blocking the copilot bot account, not banning all AI‑authored content.
  • Others worry about LLM‑generated “sludge” from any tool (ChatGPT, Claude, etc.), especially around events like Hacktoberfest or bounty programs.

Forced AI Features & User Hostility

  • Strong frustration with Copilot being surfaced everywhere: GitHub UI, VS Code, Visual Studio, Office 365, and other products. Many describe it as “forced” or dark‑patterned, with limited or hidden off‑switches.
  • Some report Copilot review comments blocking automerge for trivial remarks, and accounts shown as “enabled” for Copilot even when settings say otherwise; GitHub support is described as evasive.
  • Comparison to other “enshittified” products (Google Docs, GCP console) where core quality stagnates while AI buttons proliferate.

Metrics, Hype, and Business Incentives

  • Skepticism about claims like “20M Copilot users” when access is auto‑provisioned or mandated by management, often unused.
  • Many see the AI push as driven by KPIs, investor expectations, and ecosystem self‑interest (e.g., GPU vendors), not organic developer demand.
  • Parallels drawn to crypto and self‑driving hype cycles and to the McNamara fallacy: chasing engagement numbers while ignoring user experience.

Usefulness vs. Cost of LLMs

  • Some developers report substantial productivity gains for prototyping in unfamiliar languages, exploratory scripts, or navigating large new codebases.
  • Others find LLMs useful mainly as fuzzy search / brainstorming tools, with limited or negative net productivity once review and corrections are included.
  • Environmental and infrastructure costs are raised; critics argue the benefits don’t yet justify the scale or the aggressive rollout.

Control, Policy, and Mitigations

  • Workarounds mentioned: hiding AI features in VS Code (Chat: Hide AI Features), Org‑level Copilot disable in GitHub, Visual Studio “hide Copilot” option, and uBlock filters to block Copilot commit‑message generation.
  • Proposals include blocklists for AI‑slop contributors and allowing maintainers to block the copilot bot like any other user.

Corporate Behavior & Regulation

  • Long thread on why Microsoft was allowed to buy GitHub, whether it was already “critical” at acquisition, and the role of antitrust (compared to Adobe/Figma).
  • Some argue corporations are doing exactly what they’re designed to do—maximize profit—and that only regulation and better initial choices (FOSS forges) could have prevented this dynamic.

Why language models hallucinate

Evaluation, Multiple-Choice Analogies, and Incentives

  • Many comments pick up on the article’s multiple-choice test analogy: current benchmarks reward “getting it right” but don’t penalize confident wrong answers, so models are implicitly trained to guess rather than say “I don’t know.”
  • Some compare this to standardized tests with negative marking or partial credit for blank answers, arguing evals should similarly penalize confident errors and allow abstention.
  • Others note this is hard to implement technically at scale: answers aren’t one token, synonyms and formatting complicate what counts as “wrong,” and transformer training doesn’t trivially support “negative points” for incorrect generations.

What Counts as a Hallucination?

  • One camp insists “all an LLM does is hallucinate”: everything is probabilistic next-token generation, and some outputs just happen to be true or useful.
  • Another camp adopts the article’s narrower definition: hallucinations are plausible but false statements; not all generations qualify. Under this view, the term is only useful if it distinguishes wrong factual assertions from correct ones.
  • There’s pushback that “hallucination” is anthropomorphic marketing; alternatives like “confabulation” or simply “prediction error” are suggested.

Root Causes and Architectural Limits

  • Several comments reiterate the paper’s argument: next-word prediction on noisy, incomplete data inevitably leads to errors, especially for low-frequency or effectively random facts (like birthdays).
  • Others argue the deeper problem is lack of grounding and metacognition: models don’t truly know what they know, can’t access their own “knowledge boundaries,” and separate training from inference, unlike humans who continuously learn and track uncertainty.
  • Some see hallucinations as an inherent byproduct of large lossy models compressing the world; with finite capacity and imperfect data, there will always be gaps.

Can Hallucinations Be Reduced or Avoided?

  • Many are positive about training models to express uncertainty or abstain (“I don’t know/I’m unsure”), but question how well uncertainty can be calibrated in practice.
  • There’s broad agreement that you can build non‑hallucinating narrow systems (e.g., fixed QA databases + calculators) that say IDK outside their domain; disagreement is whether general LLMs can approach that behavior.
  • Multiple commenters note a precision–recall tradeoff: fewer hallucinations means more refusals and less user appeal; current business incentives and leaderboards push vendors toward “always answer,” encouraging hallucinations.

Broader Critiques and Meta-Discussion

  • Some see the post as PR or leaderboard positioning rather than novel science; others welcome it as a clear, shared definition and a push for better evals.
  • A recurring complaint is that much public discourse about hallucinations projects folk-psychology onto systems that are, at core, just very large stochastic language models.

Rug pulls, forks, and open-source feudalism

Building from source and packaging models

  • Several comments argue that routinely building from source shifts power to users: switching remotes is easier than abandoning vendor binaries, and cherry‑picking fixes doesn’t require maintainer releases.
  • Guix (and likely Nix) is praised for “source by default” with binary caches and easy local patching; Debian/Devuan cited as long‑standing, reproducible‑build ecosystems, though not as “source‑transparent” as Guix.

CLAs, copyleft, and power asymmetry

  • Many see CLAs that grant unilateral relicensing as the core enabler of rug pulls, especially when combined with copyleft: the company can go proprietary while others remain bound.
  • Others note some CLAs (e.g., certain nonprofit/foundation ones) explicitly promise continued free licensing and are seen as acceptable when backed by strong governance.
  • Copyleft without a CLA (e.g., Linux) spreads copyright to many contributors, making a lock‑in relicensing practically impossible.
  • AGPL+CLA is described as particularly lopsided for SaaS: the company can run a closed service while competitors must publish their changes; Stallman’s view is summarized as prioritizing user freedom over contributor symmetry.

What is a “rug pull”?

  • One camp says there’s no rug pull in FLOSS: old code and GPL/MIT versions “exist forever,” and maintainers owe no future labor. A “rug pull” can only mean stopping maintenance, which is always allowed.
  • Another camp stresses dependency lock‑in, branding/network effects, active marketing of “open source forever,” and explicit promises (e.g., around core licenses). Under those conditions, relicensing is seen as betrayal.
  • Some distinguish “snapshot and fork” from the large, ongoing effort of sustaining a competitive fork.

Hyperscalers, SaaS, and sustainability

  • Strong resentment toward large cloud providers that monetize popular OSS as services without funding maintainers; examples like Elastic/Mongo/Redis are framed as defensive license changes against this.
  • Others counter that clouds contribute heavily to core infrastructure (kernel, toolchains) and free marketing; they’re just using permissive licenses as written.
  • There’s disagreement on whether criticizing rug pulls is “toxic purism” that distracts from the larger structural issue (hyperscaler dominance), or a necessary defense of community trust.

Funding, responsibility, and entitlement

  • Multiple comments emphasize that most of us are “free riders”; OSS is gift‑giving, and it’s legitimate for maintainers to stop or change direction.
  • Others argue gifts given repeatedly and heavily promoted create moral obligations, especially when users invest labor, integrations, and advocacy.
  • There’s growing interest in more deliberate funding models: sponsoring foundations, directly paying maintainers, industry coordination mechanisms, or even government/sectoral funds.
  • Some enterprises report being burned by license/business changes (Chef, CentOS, VMware/Tanzu) and are pivoting toward funding upstream OSS (e.g., Proxmox/QEMU) instead of proprietary vendors.

SSPL, AGPL, and license design

  • SSPL is seen by some as “almost good”: a stronger anti‑SaaS copyleft, but criticized for vague scope (what counts as the “service”) and incompatibility with GPL/AGPL, making it risky in practice.
  • Several participants wish for a clearer, OSI‑acceptable “AGPL‑plus” that targets proprietary hosted services without sweeping in generic infrastructure or breaking compatibility.

Developing a Space Flight Simulator in Clojure

Clojure / Lisp Readability and Syntax

  • Several commenters coming from C-like or Scheme backgrounds find Clojure visually foreign and “noisy,” especially due to vectors and destructuring.
  • Others argue that once destructuring and Clojure’s maps/EDN are understood, the syntax becomes highly readable and pragmatic, with more compact data representation than JSON.
  • There’s broad agreement that the real shift isn’t parentheses but immutable, high‑performance data structures and the resulting coding style.

Macros, “Code as Data,” and REPL Workflow

  • Some emphasize Lisp’s advantage: code is data, enabling powerful macros (e.g., custom control constructs, threading operators) with tiny code changes compared to non‑Lisps.
  • Others push back that in professional Clojure, macros are used sparingly, mostly in libraries and “with‑context” helpers; application code should prefer functions.
  • A separate thread praises the “live” Lisp/REPL experience (Emacs, babashka, Fennel) and the feeling of “playing” the system by changing running code.

Clojure and Functional Languages in Game Development

  • One camp sees projects like this and native Clojure variants (e.g., Jank) as potentially transformative for some developers: REPL‑driven iteration, good language design, C++‑like performance.
  • Others argue that programming is a small slice of game development; most indie devs are focused on engines like Unity/Unreal/Godot or Lua/C#/C++/Rust, not functional styles.
  • Skeptics call Clojure-as-orchestrator over C++ engines a “beautiful dead end” for mainstream gamedev, citing low FP adoption and art/design priorities, plus GC concerns.
  • Counterpoint: using a high‑level language for the logic while delegating rendering/physics to C++ is exactly the value proposition; maintainability of game logic matters.

Engines, Performance, and Low-Level Concerns

  • Some nostalgia for “rolling your own engine,” but others note that’s now rightly seen as wasteful unless engine building is the goal.
  • The project in question uses OpenGL and a C++ physics engine (Jolt); the author previously prototyped physics in Guile but prefers leveraging specialized C++ for performance.
  • There’s discussion of GC pauses (with mention of ZGC) and of alternative approaches: GC‑free FP (e.g., Carp), high‑level metalanguages generating low‑level code, and functional‑friendly VMs.

Project-Specific Reactions and Wishlist

  • Visuals and technical ambition are widely praised, especially given the non‑traditional stack.
  • Suggested future features include docking, the Moon and eclipses, richer atmospheric/lighting effects, shared planetary datasets, and even elaborate “space war” and ocean simulations.

A sunscreen scandal shocking Australia

Regulation, Enforcement, and Trust

  • Several comments stress that regulations are only meaningful if enforced; lax enforcement lets anti-regulation rhetoric argue “regulation doesn’t work, so scrap it.”
  • Others push back that “more regulation” isn’t obviously the answer, but agree there’s a clear regulatory failure when SPF 50 products test near SPF 4.
  • The deeper concern is a trust gap: products can pass for years, then fail. Suggested fixes: transparent test methods, batch-level public results, routine independent re-testing, and proper recalls.

How Sunscreen Is Tested

  • Many are surprised SPF testing still relies heavily on human volunteers being exposed to UV to see when they burn.
  • Proposals: more in‑vitro / physical testing (standard surfaces, precise application, optical measurement) to screen out failures cheaply, with human tests as a final step.
  • Counterpoint: absorption, sweat, skin condition, and formulation interactions require in‑vivo testing, similar to drugs; labs already combine non‑human and human methods.
  • Anecdotes describe paid test subjects in Australia (Jacuzzi, then UV exposure on treated vs untreated skin).

SPF Numbers, Protection, and Cancer Risk

  • Repeated clarification: SPF is about transmission (1/SPF), not intuitive “percent blocked.” SPF 4 transmits 25% of UV, SPF 30 about 3.3%, SPF 50 about 2%.
  • Debate:
    • One view: benefits rapidly diminish after SPF 30; higher numbers add little in practice.
    • Others argue higher SPF halves transmission again (e.g., 98% vs 99% blocking) and matters over years of exposure; also gives more margin for uneven application and degradation over time.
    • Disagreement over whether SPF meaningfully affects “how long you can stay out” vs just instantaneous dose.
  • Some are unimpressed that even “good” sunscreens might only halve cancer incidence; others see that as still materially valuable at population scale.
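The 1/SPF relationship the commenters keep restating can be checked in a few lines (a minimal illustration of the arithmetic, not a claim about any real product):

```python
def transmission(spf: float) -> float:
    """Fraction of erythemal UV transmitted through sunscreen at its rated SPF."""
    return 1.0 / spf

# SPF 4 transmits 25%, SPF 30 about 3.3%, SPF 50 about 2% --
# so going from SPF 50 to SPF 100 halves transmission (2% -> 1%)
# even though "blocked" only moves from 98% to 99%.
for spf in (4, 30, 50, 100):
    print(f"SPF {spf}: transmits {transmission(spf):.1%}, blocks {1 - transmission(spf):.1%}")
```

This is why both sides of the debate can be right: the “percent blocked” number barely moves above SPF 30, while the transmitted dose keeps halving.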

UVA vs UVB and Ingredient Safety

  • Commenters note some sunscreens (especially in the US) historically focused on UVB, preventing burns while allowing substantial UVA exposure.
  • Europe, Australia, and Japan are cited as having stronger UVA‑related labelling rules; the US lags.
  • There is concern about contaminants like benzene and about reef‑ and human‑safety of certain chemical filters; others argue background benzene exposure (e.g., from cars) is already significant.

Real-World Use: Clothing vs Cream

  • Many Australians say sunscreen is unreliable in practice because it washes/sweats off and people don’t reapply correctly, especially in water sports.
  • Surf instructors and Queenslanders reportedly favor long-sleeve rash vests, wide‑brim hats, and zinc oxide on high-risk areas; sunscreen is treated as secondary.
  • Others report good results with high‑SPF products when applied heavily and frequently, but still prefer sun‑protective clothing for convenience and certainty.
  • Multiple commenters emphasize hats (not just baseball caps) and UPF clothing as more effective and less fussy than lotion.

Local Brand Perceptions and Scandal Reaction

  • Some Australians say certain major brands “never worked” and had a longstanding reputation as weak; the scandal feels like confirmation of years of folk wisdom.
  • Others, looking at test charts, note those brands often underperform their label but are not uniformly catastrophic; water resistance may be the biggest weakness.
  • Influencer marketing of the failed products is widely criticized: influencers profit, followers are exposed, and there are effectively no consequences for promoters.

Tesla changes meaning of 'Full Self-Driving', gives up on promise of autonomy

Redefining “Full Self-Driving” and broken promises

  • Many see adding “(Supervised)” to “Full Self-Driving” as an implicit admission Tesla won’t deliver unsupervised autonomy to existing buyers, after nearly a decade of “next year” claims.
  • Others argue the change is mostly legal/PR framing: Tesla is now describing what the system does today without explicitly abandoning long‑term Level 4/5 ambitions.
  • Several commenters point to early marketing (e.g. “driver only there for legal reasons”) as clearly implying unsupervised operation, now walked back in practice.

Fraud, regulation, and refunds

  • Large contingent calls this straightforward fraud or securities/false‑advertising abuse, noting stock gains and FSD sales driven by undelivered autonomy promises.
  • Skepticism that US regulators (SEC/FTC, states) will act; some blame “late‑stage capitalism” and weak consumer protection, though others say agencies have probably pushed as far as they can.
  • People who bought FSD years ago feel cheated; talk of class actions is tempered by Tesla’s arbitration clauses and spotty enforcement history.
  • A minority insists early timelines were naïve rather than malicious, but acknowledges they were “irresponsible.”

Waymo, autonomy levels, and what counts as FSD

  • Repeated comparison: Waymo is geofenced Level 4 with remote assistance, Tesla is still Level 2. Debate whether L4 in limited cities “counts” as full self‑driving.
  • Some argue “full” should mean “can drive nearly everywhere humans can”; others say transformative tech doesn’t need universal coverage (analogy to early cell phones and gas stations).
  • There’s disagreement over how often remote assistance occurs and whether that undermines “full” autonomy.

Sensors: vision‑only vs lidar/radar

  • Big fault line: critics say Tesla’s vision‑only bet was “short‑sighted,” rejected decades of sensor‑fusion research, and is now effectively being abandoned.
  • Many engineers and practitioners in the thread argue lidar/radar + cameras are clearly superior for safety, redundancy, and latency; several cite Waymo and Chinese systems as evidence.
  • Defenders counter that:
    • Humans drive mostly on vision, so vision‑only is theoretically sufficient.
    • Extra sensors add cost, complexity, and failure modes; the “best part is no part.”
  • Strong pushback: cameras are not human eyes, current ML is far from human semantics, and engineering safety normally favors redundant modalities.

Human driving, edge cases, and environment

  • Long subthreads on how well humans adapt to foreign driving cultures and conditions vs how localized today’s AVs are.
  • Severe weather (snow, ice, heavy rain, glare, fog) and chaotic traffic (e.g. parts of India, Africa, rural icy roads) are repeatedly cited as unsolved for all vendors.
  • Some argue the bar for machines should be “better than humans,” not merely “as good,” given existing human crash rates.

Current Tesla FSD performance

  • Some owners report FSD now handles 90–95% of their driving, including complex Bay Area/Boston routes, with rare safety interventions. They see rapid progress and consider Tesla far ahead of legacy OEMs.
  • Others report phantom braking, poor behavior in unusual geometry, and camera reliability issues, saying they must intervene every few miles and find it terrifying in real use.
  • There’s a clear split between “it’s already better than average rideshare drivers” anecdotes and “I wouldn’t trust it in bad weather or unfamiliar areas.”

Broader views on Tesla and Musk

  • One camp argues Musk’s leadership created enormous value (EV market, rockets, energy storage) and that overpromising is typical of ambitious tech.
  • Another emphasizes poor governance, hype‑driven valuation, the trillion‑dollar pay package, and a pattern of big, undelivered narratives (robotaxis, cheap cars, tunnels) as warning signs.
  • Some suggest Tesla’s real long‑term play is batteries/energy, with cars and FSD as a bootstrapping and hype vehicle.

Is OOXML Artificially Complex?

Origins and Design of OOXML

  • Several commenters argue OOXML is essentially a direct XML serialization of Office’s legacy binary formats, carrying decades of cruft tied to in‑memory data structures and performance constraints of the 80s/90s.
  • Backward compatibility for “hundreds of millions” of users and regulatory pressure (especially in Europe) are seen as key drivers; designing a clean new format or fully adopting ODF was viewed as too slow and risky internally.

Complexity: Necessary, Accidental, or Malicious?

  • One camp: complexity is “inevitable” given Office’s enormous feature set and commitment to lossless round‑tripping of old documents. Cutting features to simplify the spec would have broken real users.
  • Another camp: much of the complexity is unnecessary for an interoperable standard and exists because Microsoft just dumped internal representation into XML. That’s framed as technical debt and “self‑interested negligence,” not careful design.
  • A more critical camp: the format and spec are intentionally hostile—full of “works like Word95/97” behavior tied to undocumented software, making faithful third‑party implementation effectively impossible.

Interoperability and Standards Politics

  • Strong accusations that Microsoft “bought” or stacked national standards bodies to push OOXML through fast‑track ISO approval, over technical objections and despite overlapping with existing ODF.
  • Some see this as classic embrace‑extend‑extinguish: creating a nominally open but practically proprietary standard to block ODF adoption in government procurement.
  • Others argue both motives can coexist: backward compatibility and strategic obstruction.

Comparison with ODF and Other Formats

  • ODF is praised for clearer, more “markup‑like” structure in simple cases, but also criticized as ambiguous, underspecified, and itself complex once all referenced specs are counted.
  • Debate over which is more “open in practice”: OOXML’s detailed but messy serialization vs. ODF’s cleaner model but reliance on de facto behavior of LibreOffice.

Developer and User Experience

  • Implementers report OOXML as painful: gigantic specs, odd date handling, namespace verbosity, implicit caches, and hidden coupling to Office behavior.
  • Nonetheless, for many tasks (scripts that read/write documents, extract images, simple spreadsheets) OOXML’s zipped‑XML container is seen as a big improvement over old binary formats.
  • Users largely prioritize fidelity over openness; this is cited as why Office remains dominant despite OOXML’s flaws and LibreOffice/Google Docs’ existence.

The math of shuffling cards almost brought down an online poker empire

Article focus and 52! discussion

  • Many commenters find the article’s early emphasis on “52! is huge” largely irrelevant to the real issue, though some enjoy the perspective on how large 52! is.
  • Others note that in “computer terms” 52! is < 2²²⁶, so not astronomically large compared with common key sizes, though still enormous for brute-force enumeration.
  • Several stress that no one sensible generates a random deck by enumerating all 52! permutations anyway.
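The “52! is < 2²²⁶” observation is easy to verify directly:

```python
import math

perms = math.factorial(52)              # number of distinct deck orderings
print(perms.bit_length())               # 226: 52! fits in 226 bits
print(2**225 < perms < 2**226)          # True: between a 225- and 226-bit key
```

So a shuffled deck carries about as much entropy as a 226-bit key: far beyond brute-force enumeration, but well within the range of key sizes cryptographic RNGs routinely handle.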

RNG and seeding failures in the poker system

  • Core bug: the RNG was seeded from time-of-day with millisecond or second resolution, capping possible deck arrangements at about 86 million.
  • This small state space allowed precomputation or clock-synchronization attacks; with observed community cards (especially after the flop), an attacker could narrow down or determine all players’ cards.
  • Thread links to the original technical paper, which describes both a biased shuffle algorithm and the weak PRNG seeding.
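The seeding flaw can be sketched as follows (assumed mechanics for illustration, not the actual poker code): if the seed is milliseconds since midnight, there are at most 86,400,000 reachable seeds, and each seed determines the entire deck.

```python
import math
import random

MS_PER_DAY = 24 * 60 * 60 * 1000        # 86,400,000 possible seeds

def deal(seed_ms: int) -> list:
    """Deck is fully determined by the seed: same seed, same 'shuffle'."""
    rng = random.Random(seed_ms)
    deck = list(range(52))
    rng.shuffle(deck)
    return deck

print(MS_PER_DAY)                                    # 86,400,000 reachable decks
print(math.factorial(52) // MS_PER_DAY > 10**59)     # True: a vanishing fraction of 52!
```

An attacker who roughly knows the server clock, or who can match observed community cards against precomputed decks, only has to search this tiny space.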

Shuffle algorithms and correctness

  • Strong consensus: Fisher–Yates (Knuth shuffle) with a cryptographically secure RNG gives an unbiased, effectively optimal shuffle.
  • Several criticize the article’s implication that computers “cannot replicate” human shuffles; commenters argue computers are typically more random than human dealers, whose physical shuffles are measurably biased.
  • Naïve or ad-hoc shuffling schemes (e.g., repeatedly simulating riffle shuffles or sorting by random keys) are viewed as risky unless mathematically proven unbiased.
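The consensus fix described above is short enough to show in full: Fisher–Yates driven by a cryptographically secure RNG (here Python’s `secrets` module, one reasonable choice among several).

```python
import secrets

def secure_shuffle(deck: list) -> None:
    """In-place Fisher-Yates shuffle: unbiased because each position is
    swapped with a uniformly chosen index from the unshuffled prefix."""
    for i in range(len(deck) - 1, 0, -1):
        j = secrets.randbelow(i + 1)    # uniform over 0..i, no modulo bias
        deck[i], deck[j] = deck[j], deck[i]

deck = list(range(52))
secure_shuffle(deck)
assert sorted(deck) == list(range(52))  # still a permutation, nothing lost
```

The two failure modes in the article map onto the two lines that matter: a biased loop bound (the shuffle algorithm bug) and a predictable randomness source (the seeding bug).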

Randomness sources and hardware

  • Commenters mention /dev/urandom, CPU instructions like RDRAND/RDSEED, and quantum/thermal noise–based TRNGs as practical entropy sources capable of generating hundreds of megabits per second.
  • Some note that hardware RNGs can be subverted (e.g., via microcode or virtualization), so system design and threat model still matter.

Security standards and blame debate

  • One camp calls the 1990s poker RNG design grossly negligent, arguing that even then probability theory and correct shuffling algorithms were well-known.
  • Another camp is more sympathetic, pointing out that many systems—even by smart teams—have shipped with weak RNGs, and that harm and intent matter when judging “negligence.”

Other games and perceptions

  • Magic: The Gathering Online/Arena shuffles are discussed; some players feel online shuffles “feel different,” with notes about deliberate “smoothing” of opening hands in some modes.

The Universe Within 12.5 Light Years

Tools and Visualizations of the Local Neighborhood

  • Multiple readers recall or suggest 3D navigable star maps and planetaria (100,000 Stars, Stellarium, Celestia, CHView, Galaxy Map, games like Elite Dangerous and Space Engine).
  • There’s frustration that good, modern, interactive 3D maps of nearby stars are rare or outdated compared to the abundance of satellite/solar-system visualizers.
  • Some share physical/artistic maps (laser-etched crystals, posters), and one person mentions building scale-walk tools and videos.
  • Several note the Atlas page itself looks like a “1995 website” but praise its charm and longevity; others point out the map is outdated (e.g., missing objects like Luhman 16).

Interstellar Probes and Propulsion

  • Strong interest in sending unmanned probes to nearby stars, with acceptance that 100+ year missions are plausible.
  • Power is a central problem: RTGs decay too quickly for deep interstellar communication; fission reactors raise reliability and heat-dissipation issues.
  • Beamed-sail concepts (e.g., Starshot) are discussed; critics highlight beam divergence and the need to impart most momentum close to Earth.
  • Some argue tech will improve so fast that later probes might overtake earlier ones; others say we should launch anyway.
  • Generational ships are debated: technical feasibility (size, maintenance, collisions, delta‑v) and ethical/social questions about people born and dying aboard.

Interstellar vs Interplanetary Focus

  • A substantial thread argues our next logical step is thorough exploration and settlement of Solar System bodies rather than nearby stars, both for practicality and to mature ethically as a species.
  • Others still see interstellar craft as an eventual, though distant, goal.

Fermi Paradox, FTL, and Tech Trajectories

  • Some suggest stalled propulsion progress may imply interstellar travel is effectively impossible, offering a bleak answer to “where are the aliens?”.
  • Others push back, citing spurty, unpredictable tech progress and speculative ideas like warp drives, though skeptics note we likely already would see evidence if FTL were feasible.
  • Explanations range from “we’re early/rare” to self-destruction, “prime directive”-style non‑interference, or simply non-overlapping civilizations in space and time.
  • Several insist known physics effectively rules out faster‑than‑light travel; attempted counterexamples (e.g., Cherenkov radiation) are corrected.

Scale of Space and Human Timescales

  • The local 12.5 ly neighborhood feels surprisingly small in terms of viable targets, underscoring how even with big propulsion advances, reachable places remain finite.
  • Long comments use Voyager’s speed and light‑year distances to illustrate how inconceivably slow current travel is, and how even c is “too slow” relative to galactic scales.
  • Relativistic travel and time dilation are discussed: you can reach distant places within a human lifetime on the ship, but millennia pass externally.
  • Some note that returning to a far‑future Earth might be more astonishing than any barren exoplanet.

Physics Sidebars (Light, Gravity, Magnetism)

  • One subthread clarifies “age” of sunlight: energy takes ~hundreds of thousands of years to random-walk from the core to the surface, then ~8 minutes to Earth; photons reaching us are emitted near the photosphere.
  • Another explores relativity: from a photon’s “frame,” no time passes; time dilation and length contraction are explained informally.
  • Magnetism and gravity are discussed as “spooky” action-at-a-distance, leading to historical quotes and field-based explanations.
  • Gravity’s propagation at light speed is mentioned in the context of galaxy-scale effects.

Why Study Beyond the Solar System?

  • Several responses to “why care beyond the Solar System?”:
    • Comparing other systems helps gauge how typical Earth and the Sun are, informing climate and habitability understanding.
    • Astrophysics drives advances in imaging, detectors, and computation that spill over into technology and medicine.
    • Nearby stars and supernovae pose environmental and existential risks; knowing the neighborhood helps quantify them.
    • Distant objects (quasars, pulsars) define stable celestial reference frames and can aid navigation and timekeeping.
    • Historically, stellar observation underpinned calendars, agriculture, and navigation; the same pattern continues at higher tech levels.

Aesthetics, Emotion, and Fiction

  • Many express nostalgia and affection for old-school star maps and game-like galaxy views; comparisons to classic Elite and National Geographic posters are common.
  • The map evokes mixed feelings: awe, insignificance, hope, and a kind of existential sadness.
  • Discussions of galactic empires note that realistic scales make classic sci‑fi political setups and anti‑machine universes (e.g., Dune) administratively dubious without massive automation.

Tesla offers mammoth $1T pay package to Musk, sets lofty targets

Pay Package Structure & Intent

  • Package is entirely stock-based and vests only if Tesla’s valuation increases roughly 7.5–8x over a decade, plus hitting operational milestones.
  • Supporters say this aligns incentives: if the “nearly impossible” targets are met, shareholders get rich alongside Musk; if not, they pay nothing.
  • Critics see it as an “open invitation” to manipulate stock price and definitions of milestones (e.g., what counts as a “robotaxi” or “FSD subscription”).
  • Some view it as a psychological tool to keep investors from exiting a hype-driven bubble.

Current Business, Competition & Brand

  • Several comments argue Tesla’s early-mover advantage in EVs is gone: cheaper and/or better EVs (BYD, European brands) are cited, plus commoditization of batteries.
  • There are claims of falling sales, revenue, profits, and EV market share, along with brand damage from Musk’s public persona and politics.
  • Others counter that Tesla remains profitable, with low debt and leading products (e.g., Model Y, Powerwall), especially compared to money-losing rivals.
  • Disagreement over whether Tesla is still “revolutionizing” solar or just dominating a narrow accessory niche.

Robots, Autonomy & “Next S-Curve”

  • Bulls see robots, robo-taxis, and new products as the real growth story; some assert Tesla’s humanoid robot could become “the most advanced consumer product ever.”
  • Skeptics point to decades of overpromised timelines (notably Full Self Driving), Boring Company’s modest Vegas tunnels, and practical issues of home robots (dirt, damage, safety).
  • Comparisons are made to robotics competitors (Chinese firms, Figure, Boston Dynamics/Unitree); some argue there’s no moat and Tesla is behind, others dismiss rivals as vaporware.
  • One view: Tesla’s edge in autonomy is not technical superiority but willingness to ship at lower safety readiness and lean on regulatory capture.

Valuation, Bubble Concerns & Macro

  • Some argue that after seeing other mega-cap stocks break psychological ceilings, “any valuation is possible,” even multi-trillion for Tesla.
  • Others say Tesla’s P/E and market cap are “disconnected from reality,” describing it as a bubble sustained by hype and fear of missing out.
  • A few tie future valuation to macro factors like inflation, political moves against Fed independence, and geopolitical instability, though the causal links remain speculative and contested.

Musk’s Behavior & Focus

  • Some hope the package nudges Musk to focus on Tesla instead of social media and culture wars; others doubt larger numbers will change his behavior.
  • His public promotion of controversial political ideas is seen by some as implicitly endorsed by a board willing to grant this package.

Kenvue stock drops on report RFK Jr will link autism to Tylenol during pregnancy

Evidence on Tylenol and Autism

  • Commenters link to large observational and meta-analytic studies that both find no association and a small positive association between prenatal acetaminophen use and ASD/ADHD.
  • Reported effect sizes are modest (odds ratios ~1.1–1.2), implying tiny absolute risk changes (e.g., ~0.2–0.4 percentage points; NNH ≈ 500+ if causal).
  • Multiple people stress these are observational data with confounding, publication bias, and diagnostic differences; causation is not established.
  • Some note sibling-controlled studies still show only weak signals, mostly for long-duration use.
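The back-of-envelope arithmetic behind those numbers can be reproduced, under two stated assumptions: a baseline ASD prevalence near 2% (an assumed round figure), and the small-risk approximation that an odds ratio is close to a risk ratio.

```python
baseline = 0.02                      # ASSUMED ~2% baseline prevalence
for odds_ratio in (1.1, 1.2):
    exposed = baseline * odds_ratio  # OR ~ RR when risk is small
    delta = exposed - baseline       # absolute risk change
    nnh = 1 / delta                  # number needed to harm, if causal
    print(f"OR {odds_ratio}: +{delta:.1%} absolute risk, NNH ~ {nnh:.0f}")
```

An OR of 1.1 works out to roughly a 0.2-percentage-point absolute change (NNH around 500), and 1.2 to about 0.4 points, matching the range quoted in the thread.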

RFK Jr., Politics, and Credibility

  • Many participants dismiss the claim primarily because of RFK Jr.’s long history of anti‑vaccine and fringe health positions, and the AI‑tainted “gold standard” MAHA report.
  • Others criticize this as a genetic fallacy: his untrustworthiness doesn’t automatically falsify every specific claim.
  • Some see his move as part of a broader strategy to erode trust in mainstream medicine in favor of “natural” or wellness narratives, and possibly to further restrict women’s autonomy.
  • A minority say he’s reflecting genuine distrust in U.S. health institutions and that some of his targets (e.g., processed foods, additives) may be non‑crazy even if his reasoning is poor.

Autism Rates, Heritability, and Alternative Explanations

  • Several emphasize strong heritability: autistic parents and siblings, twin studies, and likely genetic factors dominating over any single environmental exposure.
  • Rising autism prevalence is often attributed to broadened diagnostic criteria and reduced stigma, analogized to the historical rise in reported left‑handedness.
  • Others raise speculative environmental contributors (pollution, microplastics, pesticides, EM signals), but these are explicitly flagged as unproven.

Pregnancy Risk Tradeoffs

  • Debate over whether precautionary bans on Tylenol in pregnancy are justified given current evidence.
  • Some argue pregnant people should avoid it for anything short of serious fever, relying on non‑drug measures; others counter that untreated pain and especially fever carry well‑documented fetal risks and that alternatives (NSAIDs, opioids, aspirin) are often worse.
  • Concern that simplistic messaging (“Tylenol causes autism”) will drive unsafe substitutions (e.g., aspirin in children, or no fever control).

Acetaminophen Safety and Culture

  • Extensive side discussion on liver toxicity: narrow margin between therapeutic and toxic doses, overdose common in ERs, but recommended dosing is considered safe.
  • Cultural contrast: in the UK it’s ubiquitous and recommended for almost everything; some HN users find this too casual, others find U.S. hostility overblown.

Markets, Lawsuits, and Science Communication

  • Some see the stock drop and public panic as ripe for plaintiff attorneys and perhaps opportunistic traders.
  • Several complain that media and political actors turn nuanced, inconclusive science into absolutist slogans (“no link” vs “proven cause”), further degrading public trust.
  • General worry that politicizing autism causation — whether via anti‑vax or anti‑Tylenol narratives — harms autistic people, parents, and serious research alike.

Nest 1st gen and 2nd gen thermostats no longer supported from Oct 25

What’s Being Ended and What Still Works

  • Google is ending app/API support for Nest 1st/2nd gen thermostats; they will still function as standalone thermostats.
  • On-device scheduling and “learning” modes reportedly continue, but mobile apps, Home app control, and third‑party integrations (e.g. Home Assistant, utility programs) will stop working.
  • Some see this as “not mass bricking,” others say losing remote/app control is effectively losing the core value they paid for.

Trust, Lifetimes, and Google’s Reputation

  • Strong sentiment that Google kills too many products; multiple commenters say this is the last straw for buying any Google hardware or depending on Google services.
  • Debate over expected support duration:
    • Some argue 20–30+ years is reasonable for a thermostat tied to a home and HVAC that can last decades.
    • Others counter that buyers got ~10–14 years, which they view as acceptable for a complex connected device.
  • Several call for regulation: minimum advertised support lifetimes, or mandatory release of keys/APIs/firmware when cloud support ends, to avoid e‑waste.
  • A minority argues Google’s only obligation is to shareholders and that minimal support until it’s legally safe is “normal business.”

Cloud vs Local: Design and Business Models

  • Thread-wide “lesson”: avoid IoT devices that require a vendor cloud and don’t offer local or self-hosted control.
  • Complaints that almost all “smart” gear routes LAN‑to‑LAN control through remote servers and logins, often justified under “security” or account UX.
  • Others tie cloud-dependence to VC‑style subscription valuation and forced upgrade incentives, not technical necessity.
  • One former early Nest engineer notes that adding secure local APIs or modern protocols to 2010-era Linux devices is non-trivial, but many still argue Google could at least keep basic cloud endpoints up or expose a simple local API.
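The “simple local API” commenters are asking for could be little more than an HTTP endpoint bound to the device’s LAN address. A minimal sketch in Python of what that might look like — the endpoint paths, field names, and state shown here are hypothetical, not Nest’s actual interface:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical thermostat state; a real device would read its sensors here.
STATE = {"current_f": 70.5, "target_f": 68.0, "mode": "heat"}

class LocalAPIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path == "/status":
            body = json.dumps(STATE).encode()
        elif url.path == "/set":
            # e.g. GET /set?target_f=66 adjusts the setpoint on the LAN,
            # with no vendor cloud or account login involved.
            params = parse_qs(url.query)
            if "target_f" in params:
                STATE["target_f"] = float(params["target_f"][0])
            body = json.dumps(STATE).encode()
        else:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request console logging.
        pass

def serve(port=0):
    """Start the LAN-only API on a background thread; returns the bound port."""
    server = HTTPServer(("127.0.0.1", port), LocalAPIHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]
```

The Nest engineer’s caveat still applies: doing this *securely* (authentication, TLS, firmware updates) on 2010‑era embedded Linux is harder than the sketch suggests.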

Alternatives and Local-First Setups

  • Many recommend Ecobee, though it also has cloud/API quirks; praise for its HomeKit mode and for open-source tools (e.g. beestat) for history/analytics.
  • Other suggested options: Z‑Wave/Zigbee thermostats with Home Assistant, Honeywell Z‑Wave models such as the T6, Sinopé, Venstar (documented local JSON API), cheap Zigbee/Z‑Wave units from AliExpress, Insteon, Amazon’s thermostat.
  • Repeated advice: favor devices with:
    • Local protocols (Z‑Wave, Zigbee, Matter, HomeKit, LAN APIs).
    • Optional or no cloud; no forced OTA; ideally hackable/3rd‑party firmware (e.g. Tasmota).
    • Integration with Home Assistant and isolation on dedicated VLANs.
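Venstar’s documented local JSON API is the kind of interface this advice favors: plain HTTP on the LAN, state returned as JSON. A short Python sketch of parsing such a response — the field names follow Venstar’s public ColorTouch local-API docs as best I can tell, but the payload itself is made up for illustration:

```python
import json

# Illustrative Venstar-style /query/info payload (values invented).
SAMPLE_INFO = json.dumps({
    "name": "hallway",
    "mode": 1,          # 0=off, 1=heat, 2=cool, 3=auto
    "spacetemp": 69.5,  # current reading, degrees F
    "heattemp": 68.0,   # heat setpoint
    "cooltemp": 75.0,   # cool setpoint
})

MODES = {0: "off", 1: "heat", 2: "cool", 3: "auto"}

def summarize(info_json: str) -> str:
    """Render a /query/info-style response as a one-line status string."""
    info = json.loads(info_json)
    return (f"{info['name']}: {info['spacetemp']}F "
            f"({MODES[info['mode']]}, setpoints {info['heattemp']}/{info['cooltemp']})")

# Against a real unit, the payload would come from something like:
#   urllib.request.urlopen(f"http://{thermostat_ip}/query/info")
```

Because the API is local and documented, tools like Home Assistant can poll it directly, and the device keeps working even if the vendor disappears.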

“Smart” vs “Dumb” Thermostats

  • Pro‑smart arguments: remote control when traveling, pre‑heating/cooling before returning home, using remote sensors, handling system thermal lag, better UI than legacy programmable units.
  • Anti‑smart or skeptical views: old mechanical or simple digital thermostats last 30–50+ years, are cheap, reliable, and not hostage to corporate decisions; many “smart” features (learning, AI) are seen as gimmicky or annoying.

Hacking and Community Rescue

  • Mention of other ecosystems rescued by open source (e.g., Squeezebox/Lyrion, Tasmota), and calls for similar openness from Google.
  • One commenter is building an open-source replacement PCB for Nest 2nd gen using ESP32‑C6, reusing the existing enclosure and integrating with Home Assistant, as a way to keep the hardware useful after Google’s cutoff.

I kissed comment culture goodbye

Experiences with Friendship and Connection

  • Several commenters report making close friends, partners, business contacts, even political allies via comment-based communities (forums, Nextdoor, Reddit, HN, IRC, gaming voice chat).
  • Others say they’ve never formed a single offline connection through comments, especially on HN and Reddit, which feel anonymous and transient.
  • Many note a life-stage effect: as they aged and built offline networks, the drive and energy to form new online friendships declined.

Platform Design and Its Consequences

  • HN’s lack of avatars, PMs, and notifications is seen as intentionally content-focused but connection-poor.
  • Older forums and BBSs (phpBB, LiveJournal, IRC) are remembered as better for relationship-building due to stable identities, signatures, and easier one-to-one follow-up.
  • Modern platforms prioritize engagement via endless feeds and upvote/downvote mechanics, which reward jokes, outrage, and conformity over vulnerability or depth.
  • Some praise smaller, topic-focused spaces (niche subreddits, Discord servers, local FB groups, livestream chats) as still capable of fostering real community.

Polarization, Toxicity, and “Enshittification”

  • Many feel that comment culture degraded around mid‑2010s with polarization, troll farms, and engagement optimization.
  • Comment sections on big sites are described as angry, repetitive, meme-driven, and hostile to dissent; good answers get buried.
  • Up/downvotes become “like/dislike” tools in emotional topics, driving hive-mind behavior and pushing out subject-matter experts.

Authenticity and the Rise of Bots/AI

  • Multiple commenters now doubt whether interlocutors are human, citing bot farms and LLM‑generated content.
  • One anecdote about a meme mis-handled by an AI model triggers broader concern that subtle cultural context is being lost or flattened.
  • Some argue bots aren’t even required: platform dynamics alone can create “false pluralities” and distorted perceptions of consensus.

Why People Still Comment

  • Many say they comment primarily to think, learn, and practice writing, not to make friends; drafting then deleting is common and still useful.
  • Others admit to a commenting “addiction” driven by dopamine from replies and arguments.
  • There’s disagreement over “ROI”: some see comment time as wasted socially, others as high‑value for intellectual growth, career serendipity, or modest connection—especially in smaller, “cozy web” communities.