Hacker News, Distilled

AI powered summaries for selected HN discussions.

Diablo hackers uncovered a speedrun scandal

Nature of speedrunning “ethics” and norms

  • Many comments say the core norm is simple: “don’t lie.” Any exploit is fine if it’s fully disclosed and the run is correctly categorized.
  • Debate centers on whether this is a unique “ethics” or just standard competitive honesty shared with most sports and society.
  • Some distinguish between ethics (cheating vs honesty) and norms/rules (what categories exist, what tricks are allowed where).

Categories, glitches, and TAS vs real‑time runs

  • Games usually have multiple categories: any%, 100%, glitchless, “no major glitches,” etc., each with detailed rules.
  • Communities draw lines between “major” and “minor” glitches based on fairness, difficulty, and watchability (e.g., Super Metroid, Ocarina of Time).
  • Tool-Assisted Speedruns (TAS) are separate: they use emulators, savestates, and scripts to push theoretical limits, serve as proofs of concept, and often inspire later human routes.
  • TAS itself has rules (input files must beat an unmodified game; no hacked ROMs or spliced videos). Hardware TAS devices are used to show runs on real consoles.

Verification and cheating detection

  • Serious communities require video, seeds/saves where applicable, and frame-by-frame scrutiny.
  • RNG-based games often require posting the seed; fixed-seed or “fastest known seed” categories can emerge.
  • Cheaters frequently rely on splicing, external tools, or impossible RNG; detection often mixes tooling, reverse engineering, and statistical analysis.
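The "statistical analysis" mentioned above can be as simple as a tail-probability check: how likely is the luck a run claims? A minimal sketch in Python, with made-up numbers (the 10% drop rate and hit counts are illustrative, not from any real case):

```python
from math import comb

def p_at_least(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of luck at least this good."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical run: a 10%-probability drop claimed to land 9 times in 12 attempts.
p_value = p_at_least(12, 9, 0.10)  # vanishingly small, so the run merits scrutiny
```

A tiny p-value never proves splicing or save editing on its own, but it tells verifiers where to point the frame-by-frame tools.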

The Diablo scandal and rule ambiguity

  • The Diablo run mixed segments from different seeds/versions, used impossible item drops and boss damage, and likely edited game state (e.g., fireball damage).
  • It was submitted as a segmented run, which was allowed; the controversy is how segmentation was used.
  • Older Speed Demos Archive rules banned hardware cheats and mid-run media tricks but were vague about offline save editing and continuity between segments; later rules explicitly ban file edits and require continuity.
  • Some argue the run clearly violated the spirit of the rules even then; others note the written rules were looser at that time.

Spectatorship, fun, and community dynamics

  • Categories are also shaped by what’s fun and humane: e.g., banning tedious RNG abuse or ultra-fragile tricks, allowing turbo to avoid RSI, or preferring glitchless for watchability.
  • Many see speedrunning as “community vs game” as much as runner vs runner, with researchers and TAS authors valued alongside top competitors.
  • Some dislike glitch-heavy runs and watch only glitchless; others enjoy game-breaking TAS and arbitrary code execution.

Guinness and meta‑commentary

  • Guinness’ involvement in gaming records is widely criticized as low-quality and pay-to-play.
  • Several people treat speedrun cheating exposés and verification deep dives as a kind of technical drama: a “soap opera” mixing human psychology, statistics, and reverse engineering.

Twelve months at 1.5 °C signals earlier than expected breach of Paris Agreement

Perceived inevitability and tipping points

  • Many commenters assume 3–4 °C (or worse) this century is now likely, with some predicting 4 °C by 2050 or even 6 °C longer term.
  • Strong concern over “cliff edge” effects: Gulf Stream/AMOC collapse, extreme winters in parts of Europe, agriculture disruption, permafrost melt, food scarcity and inflation.
  • Others argue impacts in rich northern countries will be manageable compared to devastation in poorer states and LDCs.

Politics, responsibility, and denial

  • Recurrent theme: denial evolves from “it’s not happening” to “too late to fix,” used to justify inaction.
  • Blame is variously assigned to billionaires, fossil fuel interests, corporate media, and a public unwilling to sacrifice comfort.
  • Some frame it as a species-level problem (incentives, desire for growth); others as primarily a problem of concentrated wealth and power.

Global actors: US, China, India

  • Debate over whether US middle-class consumption is central, versus new coal in China and India.
  • Some note China’s large renewables build-out and EV exports; others highlight its coal expansion and tech export controls that slow India’s transition.
  • View that no major state truly prioritizes climate, because rich countries can buffer impacts while the poorest suffer most.

Economic pain, inequality, and transition

  • Tension between calls for fossil fuel taxes/subsidy removal and fears of higher energy prices amid existing hardship.
  • One side argues fossil fuels are already more expensive than electricity and that delay mainly benefits fossil companies; the other stresses massive infrastructure constraints (old grids, low-amp service, slow retrofit cycles).
  • Agreement that poorly designed “green” policies that hit the poor hardest drive backlash.

Protest, democracy, and behavior change

  • Suggested levers: youth protests, general strikes, higher turnout, especially among young voters.
  • Skepticism that protests or voting will deliver rapid change; generational trends and polarization cited.
  • Disagreement over whether systemic change must follow mass individual behavior change, or vice versa.

Geoengineering and technological fixes

  • Some argue geoengineering (stratospheric aerosols, marine cloud brightening) must be researched urgently as a stopgap.
  • Others counter that we already have cheaper clean tech (solar, wind, storage) and that political obstruction, not technology, is the bottleneck.
  • A minority pins hope on unforeseen technological breakthroughs; others are wary of treating AI as a climate “savior.”

AI, crypto, and other sectors

  • Brief debate over whether crypto mining and modern AI meaningfully accelerate warming; one cites estimates of Bitcoin’s small but non-zero share of global emissions.
  • General sense: these loads are harmful and mostly unnecessary, but small compared to total energy growth.
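The cited "small but non-zero" estimate is easy to reproduce as a back-of-envelope calculation. All three inputs below are assumed round numbers for illustration, not figures sourced from the thread:

```python
# Assumed inputs (illustrative only, not sourced):
btc_twh = 150          # annual Bitcoin electricity use, TWh
gco2_per_kwh = 480     # average grid carbon intensity, gCO2/kWh
global_gt_co2 = 37     # global energy-related CO2 emissions, Gt/yr

btc_mt_co2 = btc_twh * 1e9 * gco2_per_kwh / 1e12   # -> MtCO2 per year (~72)
share = btc_mt_co2 / (global_gt_co2 * 1e3)         # fraction of global emissions (~0.2%)
```

Under these assumptions Bitcoin lands well under one percent of global emissions, matching the thread's "harmful but small" framing.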

Paris Agreement metrics and enforcement

  • Criticism that using a 20‑year running mean to define thresholds bakes in a decade of delay and is “sclerotic by design.”
  • Others respond that 20–30 year windows are standard in climatology, and dramatizing single years is misleading.
  • Broader frustration that Paris-style agreements lack enforcement and accountability, undermining future trust.
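The "decade of delay" complaint has a simple arithmetic core: for any steadily rising series, a trailing 20-year mean reports the value from roughly 9.5 years earlier. A sketch with an assumed, purely illustrative, linear warming rate:

```python
slope = 0.02                               # assumed warming, °C per year (illustrative)
temps = [slope * yr for yr in range(200)]  # linear anomaly series

def trailing_mean(series, t, window=20):
    """Mean of the `window` values ending at index t."""
    return sum(series[t - window + 1 : t + 1]) / window

t = 150
lag_years = (temps[t] - trailing_mean(temps, t)) / slope  # -> 9.5
```

The counterargument in the thread stands too: the same smoothing that introduces the lag is what keeps a single hot year from being mistaken for a threshold breach.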

Psychology, communication, and public perception

  • Many note the “boiling frog” problem: people don’t connect climate change to coffee/cocoa prices, local disasters, or future mortgage risk.
  • Some argue constant focus on abstract averages (1.5 °C) fails; propose communicating extremes, droughts, and concrete local impacts instead.
  • There is tension between “doomerism” (nothing meaningful will be done) and “better catastrophe” thinking (trying to improve outcomes even if bad scenarios are locked in).

Geopolitics and civilizational futures

  • Speculation that warming could drive great-power grabs for Canada/Greenland/Arctic resources if the “politeness” of the current order erodes.
  • References to “fall of civilizations” narratives, mass die-off scenarios, underground billionaire bunkers, and whether any future catastrophe would finally induce serious action.

What situations in classical physics are non-deterministic? (2018)

Chaos vs. Determinism

  • Several commenters stress that classical chaos (three‑body problem, coupled/compound pendulums, molecular collisions) is still deterministic: identical initial conditions produce identical trajectories.
  • Chaos limits predictability due to exponential sensitivity to initial conditions and finite measurement precision, but does not introduce fundamental non‑uniqueness.
  • Some confusion appears where “stochastic” or “chaotic” is conflated with “non‑deterministic”; others explicitly correct this.
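The distinction the commenters draw, deterministic yet unpredictable, can be shown in a few lines. The logistic map at r = 4 is a standard chaotic toy system (my choice of example, not one from the thread):

```python
def orbit(x0, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), a textbook chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

same = orbit(0.2, 50) == orbit(0.2, 50)         # identical inputs -> identical orbits
a, b = orbit(0.2, 50), orbit(0.2 + 1e-10, 50)   # perturb the input by 1e-10
spread = max(abs(x - y) for x, y in zip(a, b))  # divergence grows to order 1
```

Determinism holds exactly (`same` is true), while the tiny perturbation is amplified until the two orbits are unrelated, which is precisely why finite measurement precision ruins prediction without making the dynamics non-deterministic.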

Norton’s Dome: Claimed Classical Non‑Determinism

  • The dome is shaped so a particle can be rolled up and come to exact rest at the apex in finite time.
  • By time‑reversal symmetry of Newtonian mechanics, there exist mathematically valid trajectories where a particle sits motionless at the apex for an arbitrary time, then spontaneously rolls off in any direction.
  • Crucial point: multiple distinct trajectories satisfy the same initial state (position and velocity zero at the top) and the same Newtonian equation of motion ⇒ non‑unique evolution under the model.
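For reference, the construction the bullets describe, in Norton's formulation (r is arc distance from the apex, g gravitational acceleration):

```latex
h(r) = \frac{2}{3g}\, r^{3/2}
\quad\Longrightarrow\quad
\ddot r = \sqrt{r},
\qquad r(0) = 0,\ \dot r(0) = 0 .
```

Both the trivial solution and a one-parameter family satisfy this initial-value problem:

```latex
r(t) \equiv 0
\qquad\text{and}\qquad
r(t) =
\begin{cases}
0, & t \le T,\\
\dfrac{(t-T)^4}{144}, & t \ge T,
\end{cases}
\qquad T \ge 0 \text{ arbitrary},
```

as direct substitution confirms: \(\ddot r = (t-T)^2/12 = \sqrt{(t-T)^4/144}\).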

Mathematical Subtleties and Critiques

  • The differential equation for the dome violates the usual Lipschitz condition required for the standard existence‑and‑uniqueness theorem for ODEs; this is why multiple solutions are allowed.
  • Some argue the “paradox” is just exploiting incomplete axiomatization of classical mechanics: if one assumes forces always give unique second‑order ODE solutions, Norton’s dome is simply disallowed by definition.
  • Others highlight related math facts: you can have smooth functions with all derivatives zero at a point yet non‑zero elsewhere, undercutting naive “all derivatives zero ⇒ never moves” intuition.
  • Skeptical voices call the construction “nonsense,” “sleight of hand,” or analogous to division by zero: a breakdown of the model rather than proof of physical indeterminism.
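The Lipschitz point is easy to make concrete: near the apex the difference quotients of the force term are unbounded,

```latex
\frac{\bigl|\sqrt{r} - \sqrt{0}\bigr|}{|r - 0|} = \frac{1}{\sqrt{r}} \;\longrightarrow\; \infty
\quad\text{as } r \to 0^{+},
```

so the Picard–Lindelöf existence-and-uniqueness theorem does not apply at r = 0, and nothing in the model forbids multiple solutions through that point.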

Physical Realizability and Idealization

  • Debate over whether the dome “exists in nature”: perfect shapes, infinitesimal contact points, infinite curvature at the apex, and absence of thermal motion or air currents are all unrealistic.
  • One camp: as with perfect spheres or cubes, idealized shapes are legitimate objects in the classical model; the question is about the model’s determinism, not real materials.
  • Opposing camp: if constructing such a shape is impossible and the pathology is destroyed by any small perturbation, it should not count as a serious physical counterexample.

Thermodynamics, Stochastic Models, and Emergence

  • Heat transfer and thermodynamics often use stochastic descriptions, but several comments emphasize this can emerge from underlying deterministic particle dynamics.
  • Macroscopic randomness and entropy are framed as emergent/statistical, not necessarily fundamental non‑determinism in classical mechanics.

Quantum vs. Classical Non‑Determinism; Model vs. Reality

  • Commenters distinguish classical “multi‑solution” non‑determinism (model under‑specification) from genuinely probabilistic evolution posited by some quantum interpretations.
  • Some suggest that if a classical model admits non‑deterministic solutions (Norton’s dome, “space invaders”), that might instead signal those configurations are unphysical or the model incomplete.

The Impact of Generative AI on Critical Thinking [pdf]

What the study actually measures

  • Several commenters stress the paper studies self‑reported critical thinking within AI-assisted tasks, not whether AI makes people generally less intelligent.
  • Core finding as interpreted: higher trust in GenAI correlates with less perceived critical-thinking effort, while higher confidence in one's own skills correlates with more.
  • Some argue this mainly reveals a general human tendency: when a tool gives plausible answers, people stop digging unless they have a clear reason.

Methodological criticisms

  • Strong pushback on survey design:
    • “Critical thinking” is self-assessed, not objectively tested.
    • Items like “I always trust AI” vs “I question AI’s intentions” blur constructs (frequency, trust, and critical thinking intertwined).
    • Concerns about small, self-selected sample and lack of control group.
  • Some still find the qualitative examples helpful for articulating how work shifts from creation to verification.

AI as cognitive tool: shifting vs losing skills

  • One camp: technology historically reallocates cognition (printing press, calculators, GPS, high-level languages). We offload rote skills and move “up the stack” to abstraction and design. Losing obsolete skills can be fine if outcomes improve.
  • Opposing camp: some “obsolete” skills (navigation, writing, arithmetic) are still practically and psychologically important (resilience, autonomy, motivation). Tech-driven offloading may contribute to alienation and dependence.

Experiences using LLMs at work

  • Mixed utility reports:
    • Helpful for boilerplate code, style rewrites, intros, search-like Q&A, and tutoring/quiz generation.
    • Others find them slow things down: more time spent “babysitting” or rewriting “AI slop” than coding directly.
    • For complex or novel systems, AI often fails; by the time humans take over, the learning curve is steeper.
  • Concern that replacing junior work with AI undermines experiential learning.

Verification, trust, and critical thinking

  • Strong emphasis that LLM outputs must be verified; hallucinations are frequent, especially dangerous for learners who don’t know what to check.
  • Some compare AI to GPS: great for most cases, but failure modes can be severe and people tend to follow it uncritically.
  • Others argue AI, like a pry bar, gives “cognitive leverage,” freeing time for higher-level reasoning—if users understand limitations and switch to other tools (books, experts) when precision matters.
  • Multiple commenters note that people already over-trusted Google; LLMs accelerate an existing problem of treating convenient outputs as ground truth.

Education and cognitive development

  • Worries about students using AI/Grammarly instead of learning spelling, writing, or problem solving, and about a generation habituated to an “easy button.”
  • Counterargument that strict spelling and similar norms are largely social conventions whose importance might be overstated.
  • Desire for longitudinal studies on how AI changes what skills are learned and which atrophy, rather than just how people feel about thinking effort.

Knowledge ecosystem and “truth decay”

  • Some fear that as more sources become AI-generated or polluted by hallucinations, independent verification of facts may become impossible, worsening existing misinformation dynamics.
  • This is framed as a potential threat not just to individual critical thinking but to shared knowledge and social trust.

Societal and governance concerns

  • Skepticism that even strong evidence of cognitive harm would slow AI deployment, given money and inertia; climate change is cited as an analogy with mixed precedent.
  • Concerns about future AR+AI scenarios where people are continuously guided by models, becoming “meat robots”; others note society is already heavily shaped by education, media, and platforms.
  • Some see a paradox: AI vendors warn users to verify outputs while simultaneously marketing systems as highly capable, making widespread over-trust predictable.

Jane Street's Figgie card game

Game concept & purpose

  • Figgie is seen as a compact simulation of markets: fast trading in a single asset with asymmetric information.
  • Several comments reference a linked blog post arguing Figgie teaches trading concepts better than poker, though the author later softened that claim after playing more serious poker.
  • Some commenters like how the game isolates price discovery and information inference without poker’s heavy rules and psychology overhead.

In‑person play with physical cards

  • Many want to play with real cards and brainstorm ways to generate the uneven suit distribution:
    • Two‑person protocols (one sorts, the other randomizes/marks piles; then roles swap).
    • Single‑person combinatorial schemes to pull specific positions from a patterned deck.
    • Simpler methods: multiple pre‑made decks, post‑round resorting by suit, or using many standard decks combined.
  • There is disagreement over practicality: some find these methods clever; others say they’re too slow and fussy for a game meant to be played every four minutes.

App implementation and UX

  • Several people try the app; some report layout glitches and unclear feedback (e.g., username already taken).
  • It’s identified as a React Native app; a few excuse rough edges given the firm’s OCaml focus, while others use it as an example of RN pain and argue it could just be a web game (which also exists).
  • Observers note the explicit recruiting banner as unusual but see the game as a typical high‑finance recruiting tool.

Gameplay, stakes, and strategy

  • Questions arise about playing one‑off rounds with no real money: does it still “work” as a game, or does it need tournaments/long sessions like poker and backgammon?
  • People share strategies:
    • Infer likely goal suit from skew in your initial hand, then trade toward that suit.
    • Estimate card values probabilistically and trade whenever market prices deviate.
    • Or ignore the goal suit and just “market‑make” (buy low, sell high) for small consistent gains.
  • Some dislike the emphasis on speed and bots instantly exploiting mispricings; others calculate that you can play reasonably slowly and still beat the ante.
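The "infer the goal suit from hand skew" strategy has a clean Bayesian form. A sketch under assumed rules (hedged: the 40-card deck with suit sizes 12/10/10/8 dealt uniformly at random is my paraphrase of Figgie's setup, and only the "which suit has 12 cards" inference is modeled):

```python
from itertools import permutations
from math import comb

SUITS = ("spades", "hearts", "diamonds", "clubs")
SIZES = (12, 10, 10, 8)  # assumed deck composition, assigned to suits at random

def posterior_over_12_suit(hand_counts):
    """P(suit s is the 12-card suit | your dealt hand), via multivariate hypergeometric."""
    weights = dict.fromkeys(SUITS, 0.0)
    for layout in set(permutations(SIZES)):      # 12 distinct, equally likely layouts
        like = 1.0
        for size, seen in zip(layout, hand_counts):
            like *= comb(size, seen)             # comb returns 0 when seen > size
        weights[SUITS[layout.index(12)]] += like
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}

# A 10-card hand heavily skewed toward spades:
post = posterior_over_12_suit((5, 2, 2, 1))      # spades gets the highest posterior
```

From there, the value-trading strategies in the thread amount to comparing posteriors (times payoff) against quoted prices.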

Finance, trading, and morality

  • A major thread debates whether games like Figgie are a “trap” to funnel engineers into morally dubious finance jobs.
  • Critics argue traders, hedge funds, and private equity extract wealth without giving back, profit from crises, and resemble gamblers.
  • Defenders emphasize price discovery, liquidity for pensions and investments, and the honesty of traders about pursuing money compared with “change the world” tech culture.
  • There is friction over whether “trading” vs “speculation” are meaningfully distinct and whether middlemen add value or simply scalp buyers and sellers.

Related writing and off‑topic tangents

  • The linked eulogy for a co‑creator of Figgie is widely praised as moving and insightful, and surfaces other niche resources on AI poker.
  • Discussion spins off into health‑optimization, “seed oil” skepticism, and how easily food and health debates veer into conspiracy territory—seen by some as rational caution and by others as pseudoscience.

Are DOGE's Claims of Social Security Payments to 150-Year-Olds Way Off Base?

Technical Explanations for “150-Year-Olds”

  • Many commenters argue this is almost certainly a data/ETL/reporting issue, not evidence of benefits actually being paid to 150‑year‑olds.
  • A common hypothesis: a birthdate field uses a sentinel or epoch value (e.g., 0 or a fixed historical date) to represent “unknown/unchecked” rather than real age.
  • Others note SSA has formal procedures for people without birth certificates; age can be established via work history, children’s records, or other documents, and the system may use estimated or special-coded dates internally.
  • Several point out that at large scale, data is messy: FNU/LNU names, lost or burned records, foreign births, undocumented people, and contradictory dates are normal.
  • Some suggest the anomaly could arise in a downstream reporting system (casting error, missing joins, treating text dates as numeric, etc.) rather than in the core SSA database.
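The sentinel-value hypothesis is easy to illustrate. The specific date below is the unverified "1875 epoch" value debated in the thread; any placeholder birthdate produces the same class of artifact once a naive downstream query treats it as real:

```python
from datetime import date

SENTINEL_DOB = date(1875, 5, 20)  # hypothetical "unknown birthdate" placeholder

def naive_age(dob, today=date(2025, 6, 1)):
    """What a reporting query computes if every stored date is taken at face value."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

age = naive_age(SENTINEL_DOB)  # -> 150: the placeholder surfaces as a "150-year-old"
```

Nothing in this sketch says anyone was paid; it only shows how a bookkeeping convention becomes a headline number.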

Debate Over the “1875 COBOL Epoch” Story

  • A widely repeated story claims a COBOL/ISO epoch of May 20, 1875 explains the “150 years” figure.
  • Others are skeptical: no clear preexisting documentation of that epoch is produced; COBOL implementations use various epochs, but 1875 as a standard is unproven.
  • One commenter cites SSA NUMIDENT documentation suggesting dates are stored as strings (YYYYMMDD / MMDDYYYY), making an 1875 epoch less likely.
  • Conclusion in the thread: some legacy system might use 1875 as a sentinel, but this is unverified and possibly internet folklore.

Fraud vs. Anomaly

  • Several participants stress that “very old” entries are expected anomalies in such a huge system; they do not imply payments or fraud by themselves.
  • Others note that real fraud exists (dead people paid, SSNs misused), but that finding outliers by naive “age > X” queries is not a serious fraud analysis method.

Criticism of DOGE / Musk

  • Repeated criticism that DOGE and its leader make confident, public claims without basic verification, treating technical anomalies as political ammunition.
  • Some see this as part of a broader strategy to flood media with misinformation and to delegitimize Social Security.
  • There is concern about unqualified or junior staff misinterpreting complex legacy systems, and about how they obtained such deep access to sensitive government data.

Broader Context and Concerns

  • Commenters emphasize that long‑running national systems are inherently complex, with decades of ad‑hoc decisions and edge cases.
  • Several are alarmed that, regardless of the technical details, the episode will harden into a “fact” in certain political circles that Social Security pays 150‑year‑olds, with little incentive for public retractions.

If you believe in "Artificial Intelligence", take five minutes to ask it

Limits of LLM Knowledge & Hallucinations

  • Many comments agree the article’s core point is valid: LLMs are often wrong on niche, non-verifiable factual questions and will confidently invent details.
  • Several note this is unsurprising given finite model size and lossy compression of training data. Expecting encyclopedic, perfectly reliable recall is seen as a category error.
  • Some argue that if a system is only safe to use when the answer can be independently checked, its value is limited for lay users who can’t verify.

Intelligence vs Knowledge (and Metacognition)

  • Strong debate over whether current LLMs exhibit “intelligence” or are merely large statistical parrots.
  • One camp: LLMs predict tokens; they don’t reason, lack metacognition, and don’t “know that they don’t know,” so outputs must be treated like hearsay.
  • Another camp: prediction is a form of reasoning; internal representations and emergent “reasoning-style” behavior suggest a primitive world model.
  • Multiple examples show models sometimes honestly say “I don’t know,” but skeptics call this surface-level roleplay, not genuine uncertainty.

Appropriate Use Cases & Verification

  • Widely reported sweet spots:
    • Coding (snippets, refactors, bug hints) where compilers/tests verify output.
    • Summarization, extraction, rewriting, translation, itinerary planning, and brainstorming.
  • Many only trust LLMs where correctness is self-evident (code runs, script behaves, travel plan is reviewable) and avoid them for unverified summaries, medical notes, or legal/technical reports.
  • A repeated warning: the most dangerous zone is ~90% correctness with no easy way to tell which 10% is wrong.

Model Progress, RAG, and Tooling

  • Several reproduce the dinosaur-taxonomy question with newer models (or with web/search integration) and get largely correct, nuanced answers.
  • This is used to argue the article is already dated and that real value comes from combining LLMs with retrieval (RAG/GraphRAG) and search, not from raw pretrained models alone.

Hype, Marketing, and Societal Risk

  • Some are tired of “LLMs hallucinate niche trivia” posts; others say such critiques remain crucial because LLMs are marketed and perceived as general knowledge oracles.
  • Comparisons split between “another iPhone moment” and “another crypto bubble”; skepticism focuses on overhyped claims, regulatory capture, and use in surveillance and automation.
  • Epistemology and human parallels are discussed: humans also confabulate and misremember, but unlike LLMs, they learn from feedback and have embodied experience, which many consider a key missing ingredient.

Bookshop.org launches Kindle alternative, sends e-book sales to local bookstores

HN Meta: Why This Post, Not the Earlier One?

  • Some wonder why a similar earlier submission “flopped.”
  • Others note timing, competing topics, and second-chance mechanics drive visibility more than content quality.

Business Model, Affiliates, and Bookstore Support

  • Bookshop.org pays independent bookstores ~30% on their own storefront sales and ~10% to general affiliates; remaining margin is shared.
  • Confusion and disagreement over how clearly mainstream coverage explained this, especially when affiliate vs. bookstore cuts overlap.
  • Some suggest routinely linking Bookshop instead of Amazon to boost exposure; others won’t bother as long as DRM is prevalent.

DRM, Licensing, and Device Compatibility

  • Many titles will be DRM-encumbered; FAQ confirms purchases are licenses, not true ownership, even for DRM-free files.
  • Users worry about long‑term access if Bookshop shuts down; policy promises continued access or refunds but no real guarantees.
  • Officially no Kindle support due to Amazon DRM; users discuss workarounds for DRM‑free epubs (Calibre, email-to-Kindle).
  • Only iOS/Android apps for now; no direct file download, so most e‑ink readers can’t use it unless they run Android and allow app installs.
  • Lack of a DRM‑free filter and e‑ink support is seen as a dealbreaker by many.

Reader Needs and Competing Ecosystems

  • Requests: deep discounts/sales, edition unification, robust typo reporting, better annotation/formatting, and gift card integrations.
  • Several already avoid Amazon by using ThriftBooks, Alibris, Biblio, etc., but note Amazon’s UX and reach are still strong.
  • Some highlight other models: Tolino in Central Europe, Hive in the UK, libro.fm and ebooks.com (with DRM indicators).

Value of Supporting Local Bookstores

  • Pro‑Bookshop camp: divert money from Amazon, keep indie stores alive, and preserve bookstores as social/cultural hubs.
  • Critics: for digital downloads, local stores add little; if Bookshop stops supporting them, it becomes just another retailer.
  • One explanation: Bookshop is effectively an Ingram-powered co-op for small stores that can’t build good e‑commerce, and has already sent tens of millions to indies.

Skepticism, Politics, and Trust

  • Some see the pitch as typical “tech for social change” that could later ratchet down commissions.
  • Counterarguments stress the founder’s track record, niche market, dependence on bookseller trust, and five years of sending online profits to stores.
  • A side debate surfaces over perceived political slant in indie bookstores and Bookshop’s public advocacy; others argue curation has always reflected staff values.

Geoblocking and Global Reach

  • Users outside the US report being blocked entirely, leading to frustration and downvotes when they complain.
  • This raises questions about how “alternative to Amazon” it can be if it remains US‑only or heavily geo‑restricted.

Self‑Publishing and Exclusivity

  • Several note many desired self‑published titles are Amazon‑exclusive (often via Kindle Unlimited), which limits Bookshop’s catalog relevance.
  • Others point out that producing and distributing epubs outside Amazon is technically easy; exclusivity is a choice driven by Amazon incentives, not necessity.

Ask HN: What country would you like to relocate to and why?

Southern Europe: Cyprus, Portugal, Spain, France, Italy

  • Cyprus praised as “best in Europe” for lifestyle if you already have easy EU/Schengen access; its non-Schengen status makes visas hard for many and is seen as a major drawback.
  • Some like the “shady” freedoms offered by the unrecognized north of Cyprus; others worry about Turkish troops and political instability, though a UK base and EU status are cited as security guarantees.
  • Portugal and Spain repeatedly recommended as next-best: good climate, relatively low costs, and Schengen access. Downsides mentioned: slow and painful residence processes (especially Portugal, but also Spain), high income taxes, and very low local salaries unless you keep a foreign (often US) remote income.
  • France, Italy, and Lisbon/Barcelona-type cities appeal for culture, food, and climate; language integration remains a major barrier.

UK & Ireland

  • Views on the UK are sharply split.
    • Negative: “miserable,” social decay, antisocial behavior and youth crime (especially in London), bad weather, political fracturing, and NHS under strain.
    • Positive: great global city in London, good public transport, cultural depth, free-at-use healthcare, and “normal life” security vs US medical risk.
  • Ireland debated: some see it as tightly linked to US corporations and less “visible” in continental Europe; others insist it is deeply European, with its own strong culture and politics, and explain its non-Schengen status via the open border (Common Travel Area) with the UK.

Nordics, Switzerland, Netherlands

  • Norway often cited as ideal for families: safety, strong welfare state, nature, and pragmatic culture, but very expensive.
  • Switzerland frequently singled out as a dream: wealth, mountains, high salaries, low taxes, and direct democracy; harder integration and language challenges acknowledged.
  • Netherlands/Amsterdam highlighted for walkability, bikes, strong expat scene, English-friendliness, and good work–life balance.

Anglosphere: US, Canada, Australia, NZ

  • Some Europeans aspire to the US for its dynamism, innovation, and “alive” feeling, despite politics and healthcare concerns; others see the US as collapsing politically and socially.
  • Canada generally liked; New Zealand loved for nature and calm but criticized for weak infrastructure and housing costs.
  • Australia (Sydney/Melbourne) praised as a US-adjacent, English-speaking compromise with better politics and climate; distance, extreme weather, and fauna noted as drawbacks.

Asia & Global Nomadism

  • Thailand, Malaysia (especially Kuala Lumpur), Vietnam, Japan, China, and Singapore appear as options for either deep cultural immersion or high-productivity “efficient but boring” lifestyles.
  • Long-term expats stress:
    • Learning the local language and avoiding expat bubbles for real integration.
    • Remote income from richer countries dramatically changing the equation.
    • “Continual travel” can shed place-specific problems but not personal ones.

Politics, Security, and Ethics

  • Several comments tie relocation hopes to geopolitical events (e.g., postwar Ukraine boom, possible Russian collapse), with pushback on the human cost and unpredictability.
  • Concerns about liberal democracy’s fragility appear across regions; some seek high-functioning social democracies (Nordics, Switzerland), others prefer more “freewheeling” places (US, Thailand).

Meta: Expat vs Immigrant & Expectations

  • Discussion around “expat” vs “immigrant” and class/race dynamics: Westerners abroad are often called expats; others are “immigrants” or “migrants.”
  • Several argue most people don’t truly want to move into a foreign-language culture and accept second-class status; many just want their home-country freedoms, space, and low taxes without its problems.

The 20 year old PSP can now connect to WPA2 WiFi Networks

Technical feasibility of WPA2 on PSP

  • Several comments clarify that WPA2 usually requires firmware (not special radio hardware), and the PSP’s Wi‑Fi chipset already supported the necessary crypto.
  • The PSP uses a Marvell Libertas 88W8010 + 88W8380; datasheets show 802.11i/WPA2 capability was present but unused by Sony.
  • WPA/WPA2 sit above the physical layer, so encryption can be done in software or firmware; newer devices just add hardware accelerators.
  • The new plugin reportedly patches the PSP’s WPA kernel module (pspnet_apctl.prx) to call the chip’s native WPA2 functions instead of WPA ones, mainly changing management frames and key exchange.

Homebrew achievement and reverse engineering

  • Commenters are impressed that this feature sat “so close” for 20 years and has only now been enabled.
  • Old-timers recall early PSP firmware reverse‑engineering, exploits, and community tooling, and see this as a continuation of that collaborative tradition.
  • Some express interest in a detailed technical write‑up of the WPA2 work; current understanding comes from commits and scattered discussions.

Wi‑Fi limitations and 2.4 GHz discussion

  • PSP remains 2.4 GHz only; commenters agree this is a hard hardware limit.
  • Discussion notes that even early PS4 models were 2.4‑only and that 5 GHz wasn’t common in consumer gear at PSP’s launch.
  • A few speculate whether the Marvell radio could be swapped for a 5 GHz‑capable sibling, but this is purely hypothetical.

Nostalgia and perception of the PSP

  • Many recall the PSP as mind‑blowing for its time: near‑PS2 graphics, movies, music, web browsing, and a strong homebrew scene.
  • Others argue it was less impressive, with many “diet ports” and a fragile‑looking form factor, though some defend its physical robustness.
  • Specific titles (e.g., Monster Hunter, GTA, Wipeout, God of War, AC: Bloodlines) are cited as standout experiences; opinions differ on overall library quality.

Homebrew, services, and longevity

  • PSP homebrew is credited with getting people into programming, emulation, and security.
  • Some still daily‑drive modded PSPs or use them offline; swollen batteries and cheap replacements are mentioned as maintenance issues.
  • There’s interest in PSP‑side homebrew stores; one existing online repository with an installable channel is linked.
  • Multiple people wish for similar WPA2/WPA3 hacks for DS/3DS, but note tighter hardware/firmware coupling makes it harder.
  • A closing reminder asks people not to badger developers for install help since the WPA2 plugin is still under active development.

A decade later, a decade lost (2024)

Emotional impact of the essay

  • Many readers say there is “nothing to add” beyond bearing witness; the piece felt like a rare, intimate glimpse into another person’s world.
  • Several comment that it was one of the few Hacker News posts to make them cry; some could barely get through it, especially parents of young children.
  • A recurring reaction: reading it while their own child slept nearby, then being overwhelmed with fear, love, and gratitude.

Personal stories of loss

  • Numerous commenters share their own losses: infants and young children (illness, birth complications, accidents, daycare negligence), siblings, parents, and deeply loved pets.
  • People describe long‑term effects: anxiety, compulsive checking that kids are breathing, changed personalities, and relationships permanently marked by grief.
  • Several emphasize that grief resurfaces at milestones (birthdays, school entry) and “comes in waves,” even decades later.

Parenthood, empathy, and anxiety

  • Many say such stories hit far harder after having children; scenes from movies, news (earthquakes, tsunamis, Gaza), or literature that once seemed abstract now feel unbearable.
  • Some mention deliberately avoiding the author’s older posts because they know they will be devastated, especially if they’ve faced childhood cancer in their own families.
  • Others note that imagining losing a child is like “cracking open the door to a horrible world”—even a brief mental glimpse is too much.

Time, healing, and meaning

  • Debate over “time heals all wounds”: some argue time only adds distance and dulls immediacy; the core pain and self‑blame can remain unchanged.
  • Metaphors (like a shrinking ball hitting a “pain button”) illustrate how grief episodes become less frequent, not necessarily less deep.
  • Historical perspective: in the past, high child mortality created more social and religious “infrastructure” for such losses; today, bereaved parents may feel uniquely alone.
  • Coping frameworks mentioned include religious faith, Stoicism, Buddhist ideas, and speculative multiverse notions; others feel the world itself already resembles hell.

Technology, AI, and authenticity

  • The essay’s raw humanity sparks a strong anti‑AI sentiment in some: they do not want machine‑generated grief narratives, which feel hollow and “not real.”
  • Others counter that much human communication is already constrained, performative, or “meat‑AI‑like,” and that the effect on the reader may matter more than provenance.
  • There is concern that pervasive AI will erode trust in whether text, images, or people online are genuinely human, reducing meaningful connection at a distance.

Legacy, design, and remembrance

  • Several recall the author’s earlier writing about Rebecca (including the origin of “rebeccapurple”) and how it changed their views long before they were parents.
  • His use of his family’s crisis to argue for trauma‑aware, accessible design in hospital and other critical websites is remembered as especially powerful.
  • Commenters are struck by how tools and standards they use daily hide deep personal stories, and resolve to better appreciate the humans behind the code.

We were wrong about GPUs

Nvidia, Virtualization, and Why GPUs Were Hard on Fly

  • Several comments dig into Fly’s technical story: Nvidia’s vGPU licensing and “phone‑home” checks don’t mesh with Fly’s fast‑start microVM model.
  • MIG is described as paravirtualized and tied to Nvidia’s userland stack, not clean PCI devices, making secure cross‑VM sharing difficult without heavy custom work.
  • Ideas like virtio‑cuda, using Nvidia’s vCS via QEMU, or disaggregated emulation are discussed, but generally seen as high‑maintenance and possibly in conflict with Nvidia’s terms.
  • Some argue QEMU startup cost is overstated and that Fly’s Cloud Hypervisor work essentially rebuilt similar VFIO‑style plumbing.

Mismatch Between Fly’s Users and GPU Demand

  • A recurring theme: Fly’s core audience wants a PaaS‑like “git push” DX, not low‑level GPU primitives.
  • Commenters say GPU buyers either want: a) big, dedicated clusters for heavy training/inference, or b) fully managed LLM APIs. Fly sits awkwardly between.
  • People note that customers who pay hyperscaler‑level GPU prices usually prefer hyperscalers or specialist GPU clouds, not a mid‑tier app platform.

Cost, Reliability, and Alternatives

  • Hobbyists and small teams largely find Fly (and its GPUs) too expensive versus homelabs, cheap VPSes, or dedicated servers; GPU marketplaces like Runpod, Vast, Voltage Park, and others are frequently cited.
  • Some praise Fly’s GPU DX (fast on‑demand machines, simple CLI) but say ongoing costs and storage pricing make continuous or casual use hard to justify.
  • There is skepticism about Fly’s overall reliability history; Fly staff claim it has improved and emphasize autosuspend/auto‑stop as key to cost control.

Do Developers Want GPUs or Just LLMs?

  • Many agree with the article’s claim that most developers “want LLMs, not GPUs”: they’d rather call OpenAI/Anthropic/Cloudflare Workers AI than manage drivers, models, and cold starts.
  • Others push back, citing non‑LLM GPU use (vision, “classic” ML, data science) and open‑source LLM self‑hosting as real but more niche workloads.
  • There’s broad agreement that GPU serverless suffers from long cold starts and that today’s API pricing and performance are “good enough” for many apps.

Fly’s Positioning and Takeaways

  • Several commenters say the outcome was predictable: Fly’s brand and DX attract app developers, not infra buyers; succeeding in GPUs would require a different product and focus.
  • Others think Fly exited too early, arguing demand for simpler private LLM and ML pipelines is only beginning.
  • The candid “we were wrong” post is widely respected, but many frame this as a classic product‑market fit miss, not a verdict on cloud GPUs in general.

Complex dynamics require complex solutions

Perceived Value of Taxes and Government Services

  • Strong disagreement over whether US taxes are “too high” for what people receive.
  • Some argue you “get nothing”; others list major programs (Social Security, Medicare/Medicaid, infrastructure, military, education, fire services) and say the real issue is misaligned values, not absence of benefits.
  • Critics highlight neglected or failing services (e.g., firefighting capacity in Los Angeles, poor education, inadequate transport, persistent poverty) and feel the system functions for elites while leaving many in a “cage” of working poverty.
  • There is frustration that despite paying high combined tax rates (federal, state, local, FICA), many feel one crisis away from disaster.

Tax Structure, Social Programs, and Funding Confusion

  • Detailed debate over how Social Security and Medicare are funded: payroll (FICA) vs income taxes, trust funds, and general revenue; participants correct each other using official figures and acknowledge shortfalls, especially in Medicare.
  • Some emphasize that FICA makes taxes higher than people think; others stress that “money is fungible” and general taxes effectively subsidize health programs.
  • Disagreement on what counts as “government’s money” vs citizens’ contributions.

“Tax the Rich,” Inequality, and Power

  • One side sees “taxing the 1% solves everything” as a common but simplistic slogan; others call this a strawman, arguing the real motivation is curbing dangerous wealth concentration and political capture.
  • Counterarguments: rich can often pass on tax burdens, complex systems are easy to game, and increasing competition or regulating rents may be more effective than headline tax hikes.
  • Historical references to higher past top tax rates and weaker billionaire power are contrasted with claims that jobs, immigration, and unemployment dynamics matter more.

Complex Systems, Simple Slogans, and Cognition

  • Original article’s theme—complex dynamics requiring complex solutions—is widely endorsed but also critiqued as “obvious” or sometimes misapplied.
  • Software/codebase analogies are used: naive rewrites of messy systems rediscover hidden requirements; complexity often encodes real constraints, but some legacy complexity is genuinely bad and benefits from refactoring.
  • Several comments stress human limits: most people lack time/attention to grasp policy complexity, so memes and slogans dominate (“taxes are too high,” “big government,” “tax the rich”).
  • Others warn that “there are no easy solutions” itself becomes a slogan used to discourage public engagement and defer to elites.

Information Environment, Ideology, and Trust in Experts

  • Concerns that fragmented media and soon AI-generated content create incompatible “alternative realities,” making shared understanding of complex solutions nearly impossible.
  • Observations that ideologies come with pre‑baked rebuttals (e.g., “you can’t tax the rich,” “nothing can be done”), making wealth concentration and systemic reform feel inevitable.
  • Several argue that we already rely on experts for every complex system (food, infrastructure, tech), so rejecting expertise in policy and science is inconsistent, though corruption and institutional decay are real worries.

Complex vs Simple Remedies in Complex Systems

  • Some emphasize system dynamics and feedback loops: well‑chosen feedbacks can be relatively simple interventions that shape complex behavior.
  • Others note complex behavior can sometimes be tamed by simple “damping”-type moves, but poorly chosen simplifications can worsen instability.
  • Applied examples include climate change (no single silver bullet; massive, distributed, and mostly uncoordinated transition already underway) and social policy (few policies can be welfare‑enhancing, popular, and simple at once).

The hardest working font in Manhattan

Overall response to the article

  • Many readers found the piece exceptional: deeply researched, beautifully photographed, and a great example of “peak internet” long-form writing.
  • Several praised the image-centric layout and the careful integration of history, typography, industrial design, and standards (ANSI, MIL-SPEC, DIN).
  • Some noted performance issues (slow load / site going down) due to its popularity, with archival links shared.

Origin, lineage, and what “Gorton” really means

  • A major thread debates the article’s framing of this lettering as a “Gorton” font line.
  • One side argues the style fundamentally comes from long-established drafting lettering standards: single‑stroke, simple, easily drawn forms taught in 19th‑century textbooks and later codified in standards like DIN and ANSI.
  • Others counter that the article is explicitly about one concrete instantiation: pantograph-engraver templates and their descendants, where machine tooling turned a style into a de facto font standard.
  • Skeptics say the article over-attributes later variants (e.g., MIL‑SPEC‑33558, ANSI Y14.2) to the Gorton/Taylor-Hobson line based mainly on visual similarity, when they could just as well derive independently from general drafting practice.
  • Defenders point to licensing documents, machine lineages (Gorton → Leroy, etc.), and very specific shared letter quirks (G, 4, Q, 7, 8) as evidence of actual transmission, not just convergent style.

Aesthetics: ugly, beautiful, or both

  • Strong split on whether the font is “ugly”:
    • Some typographically trained readers agree with the author’s technical critique: awkward balance, monoline, odd bowls and proportions; fine for labels, bad for body text.
    • Others say they genuinely like it, finding the stark, unstyled forms pleasing and highly legible.
  • Several note that its emotional impact comes from context: metal plates, control panels, elevators, spacecraft, military and industrial gear—so it reads as “serious, consequential information.”
  • One theme: its “honest,” workmanlike imperfection mirrors philosophies in software (“worse is better,” don’t let perfect kill useful).

Legibility, function, and standards

  • Commenters highlight strengths: high legibility at distance, tolerance for squashing/stretching, and compatibility with engraving/CNC and stencils.
  • Weak spots are also noted, especially the confusable 0/O and some awkward numerals; variants like slashed zero are mentioned elsewhere.
  • The link to drafting education (Normschrift, technical drawing classes) comes up repeatedly as the cultural substrate that made these forms feel natural and ubiquitous.

Digital versions and related fonts

  • Multiple digital recreations and cousins are shared: Routed Gothic, Gorton Digital, Brass Mono, “open Gorton,” national-park-style faces, Simplex/CAD fonts, Hershey stroke fonts.
  • The article’s own appendix of recreations is called out, as well as official committee/standards fonts (e.g., ANSI Y14.2) and commercial releases.
  • Some keyboard enthusiasts note these letterforms on classic keycaps (e.g., double-shot manufacturing), tying the font’s spread into computing hardware history.

Personal nostalgia and professional angles

  • Several readers recall learning hand lettering or using Leroy sets, and feeling conflicted as templates replaced craft.
  • Others share memories of seeing this lettering on lab gear, building control systems, keyboards, or 8‑bit computers, associating it strongly with mid‑20th‑century technology.
  • A game developer and CAD/IFC tools developer mention the article directly helping them choose or identify appropriate “authentically technical” fonts.

Meta about writing and typography

  • People enjoyed the mid-essay “reveal” about the essay’s own strange Century variant, which retroactively explains why some found the body text subtly off.
  • Several remark on the rarity and value of long, carefully structured, image-rich essays that reward slow reading and deep curiosity.

LinkedIn is the worst social media I've ever seen

Role of LinkedIn in Hiring and Careers

  • Many see LinkedIn as a necessary evil: a bad platform that persists only because it’s where recruiters and jobs are.
  • Several commenters report that most or all of their jobs over the past decade came via LinkedIn recruiters or connections, especially in tech and in some geographies (e.g., Australia).
  • Others say they’ve deleted their accounts (sometimes years ago) and have done fine via direct outreach, personal networks, or other job boards (Indeed is mentioned positively).
  • Recruiters and hiring managers reportedly use LinkedIn to quickly check employment history and network connections; lack of a profile can be perceived as a red flag by some.

Feed, “Social” Layer, and Enshittification

  • Widespread contempt for the feed: self‑aggrandizing, fabricated “life lesson” stories, business-influencer cringe, political noise, and AI/bot slop.
  • Many view LinkedIn as “the labor market masquerading as social media,” with the social aspects considered useless or actively harmful.
  • Some note that, with careful tuning (unfollowing, hiding, upvoting selectively, chronological sort), the feed can surface decent business/policy content, especially after Twitter/X’s decline.
  • Others simply block or erase the feed with browser extensions or uBlock filters and use the site only as a resume/contact directory.

Account Bans, Verification, and Dark Patterns

  • Multiple reports of sudden suspensions or shadow bans, sometimes after benign security actions (MFA, password change, VPN use).
  • Restoring access often requires uploading passport or government ID via a third-party tool (Persona), which users describe as privacy-invasive and poorly justified, especially given rampant bot/AI spam that apparently goes unaddressed.
  • Concerns that ID and biometric data may be retained or monetized despite deletion claims; comparisons to KYC/AML spillover from finance.
  • Some cite aggressive emails, persistent “resurrecting” accounts, and paywalled features as signs of “enshittification.”

Alternatives, Coping Strategies, and Ambivalence

  • Suggested coping:
    • Use it only for inbound recruiter messages and company lookups.
    • Disable notifications, unfollow liberally, or hide the feed entirely.
  • Alternatives mentioned: personal networks, email/phone, GitHub/portfolio sites, Mastodon/Bluesky, lobste.rs, and traditional job boards.
  • Many feel trapped: they hate LinkedIn’s culture and dark patterns but fear losing career opportunities if they leave.

Sunsetting Create React App

How CRA Got Sunset and Docs Updated

  • React 19 broke Create React App (CRA), which triggered public complaints and an umbrella issue detailing breakage and urging formal deprecation.
  • React team fixed the breakage, published the sunset blog post, updated the “Create a Project” docs, and adjusted SEO so old CRA docs stop ranking.
  • Commenters appreciate the explicit deprecation after years of CRA effectively being dead, but think it came too late.

Docs Messaging: SPA vs Frameworks

  • Many welcome mentioning Vite as an option, but criticize the docs for:
    • Not plainly stating that “React + Vite SPA” is a first-class, valid way to use React.
    • Linking “Build Your Own Framework” but using it mostly to warn users off DIY setups instead of giving concrete Vite/Parcel recipes.
    • Overusing “framework” and sounding paternalistic (“we know what’s best; don’t build your own”).
  • Several people feel the React team is implicitly hostile to SPAs, despite survey data showing most React usage is still SPA-centric.

Next.js, Vercel, and Lock‑In Concerns

  • The new guidance effectively reads as “use a framework,” with Next.js front and center.
  • Multiple commenters are uncomfortable with React pushing a framework largely controlled by a hosting company (Vercel), blurring community vs profit motives.
  • Specific worries:
    • Features like API routes and image optimization incentivize using Vercel.
    • Static export + <Image> support is debated; some say it works with custom loaders, others argue the first‑class path is clearly Vercel.
  • Others counter that Next.js works fine self‑hosted and provides real value (SSR, RSC, routing).

What Replaces CRA in Practice

  • Many treat “Vite (+ React plugin)” as the practical CRA successor: fast, simple, good for embedding React into existing apps; several teams report smooth CRA→Vite migrations.
  • Rsbuild and similar tools are also suggested; some prefer “just React + Vite + tiny router” to avoid Next’s complexity.
  • Some argue Vite itself feels heavy for single-page widgets; they wish for an even simpler, opinionated “React compiler” with minimal config.
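For teams treating “Vite (+ React plugin)” as the CRA successor, the setup the thread describes amounts to one scaffold command plus a near‑empty config. A minimal sketch; the project name and port choice are arbitrary examples:

```typescript
// Scaffolded with: npm create vite@latest my-app -- --template react-ts
// vite.config.ts — the React plugin is the only required addition.
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  server: { port: 3000 }, // CRA's old default port, for a familiar dev workflow
});
```

Compared to a CRA project, there is no hidden webpack config to eject: this file is the whole build configuration until a project actually needs more.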

Wider Frontend Backlash and Alternatives

  • Strong current of frustration with frontend complexity: “tool to start using a tool,” fragile dependency trees, and constant churn.
  • Many argue most projects could be SSR HTML with light JS; SPAs and RSC frameworks are over-applied.
  • Alternatives mentioned: Vue/Nuxt, Svelte(Kit), Astro, htmx, Go templating, Blazor, Phoenix LiveView, Ruby on Rails.
  • Ongoing meta-debate: frameworks as necessary support for truly interactive apps vs overkill driven by trends, resumes, and vendor incentives.

The NBA Apple Vision Pro app now has a 3D tabletop view

Overall reaction to the tabletop NBA view

  • Many see the tabletop mode as a “cool gimmick” or tech demo rather than a serious way to watch a full game.
  • Critics find the tiny cartoon players “conceptually baffling” and less immersive than a conventional 2D broadcast with professional camera work.
  • Supporters argue it becomes compelling when combined with a standard live feed, letting viewers track off‑screen positioning and see the whole court at once.

Courtside and immersive viewing vs. tabletop

  • Several commenters say a true courtside or “life‑size” perspective, even in plain 2D VR, would be far more appealing than a toy‑like tabletop.
  • Some recall past Oculus/NextVR NBA courtside experiences as genuinely impressive.
  • Others doubt that any VR view can beat either:
    • attending in person, or
    • just watching on a TV with friends, beer, and atmosphere.

Technical feasibility and production constraints

  • Broad agreement that fully immersive, multi‑angle 3D is technically feasible today, but financially unjustified for a tiny VR user base.
  • One side claims you’d need 30–60 cameras and major upgrades to broadcast infrastructure; another, with VR production experience, counters that far fewer lenses are needed for a fixed‑seat 3D view.
  • Some speculate the app may rely on existing tracking systems that reconstruct player motion from skeletal data rather than full video capture.

Comfort, ergonomics, and social use

  • Users are split on headset comfort: some can wear Vision Pro for hours; others find any VR headset too heavy or tiring, especially for passively watching long games.
  • Tabletop viewing requires looking down, which some think is a bad fit for a heavy, front‑loaded device.
  • VR’s solitary, headset‑per‑person nature is seen as worse for social sports watching than TV or theaters.

Alternative applications and broader sentiment

  • Commenters propose richer uses: D&D and tabletop RPGs, wargames, strategy games, motorsports tracking, architecture, and training/analysis tools for coaches and hardcore fans.
  • Opinions on VR/AR are polarized: some see this as exciting experimentation in a stagnating device landscape; others view it as pointless “demoware” and symptomatic of wasteful, environmentally harmful innovation.

Nearly half of Steam's users are still using Windows 10

Why many Steam users remain on Windows 10

  • Several cite hardware lockouts: otherwise-capable CPUs (e.g., pre–8th gen Intel, some 7xxx series) or motherboards without enabled/working TPM/fTPM fail Windows 11’s requirements. Gamers often upgrade GPU/RAM but keep older high-end CPUs/boards.
  • Some deliberately bought Windows‑11‑capable hardware but then chose Windows 10 after hearing about more telemetry, Start-menu ads, and Edge lock‑in.
  • Others say serious gamers usually upgrade rigs frequently, so most Steam systems should meet 11’s requirements; they argue Steam’s user base is more hardware‑current than the general Windows population.

Windows 11 experience and perceptions

  • Many describe 11 as “almost the same as 10” with minimal compelling features; some can’t name a single upgrade reason beyond continued support.
  • Reported annoyances: Start menu and taskbar regressions (no vertical taskbar, worse pinning), removed shortcuts, fragmented settings vs Control Panel, ads/promotions, stronger push to Microsoft accounts/OneDrive, and random rebooting for updates.
  • Others report smooth usage: no visible ads after turning options off, better snapping (especially with ultrawides), improved multi‑monitor behavior, snipping tool enhancements, and better performance on newer AMD CPUs.
  • There’s disagreement on how intrusive ads and nags actually are; some never see them, others find them pervasive and a primary reason to avoid 11.

EOL timing and Microsoft’s strategy

  • Debate over whether ending Windows 10 support ~4 years after 11’s release is unusual.
    • One side: Windows versions have long had ~10‑year lifecycles; 10 ending now is “as planned.”
    • Other side: historically you could comfortably skip at least one major version; the short overlap between 10 and 11 feels like pressure to move people to a more locked‑down, monetized platform.
  • Speculation on what happens when a majority are on unsupported 10: extending EOL, loosening 11’s requirements, or more aggressive nudging/crippling behaviors; no consensus.

Linux, SteamOS, and alternatives

  • Multiple commenters already game primarily on Linux (Mint, Pop!_OS, Arch derivatives, Bazzite, Steam Deck). Proton is widely praised; many say most of their library “just works,” with issues mainly around some online/anti‑cheat titles or niche peripherals.
  • Some plan to abandon Windows entirely when 10 loses support or when Steam drops it, accepting that they’ll skip incompatible titles.
  • Others still see desktop Linux as a “hobby” with fragile edges (NVIDIA, HDR, mixed-refresh setups), and explicitly reject switching.
  • There’s interest in a “just works” gaming distro; some pin hopes on Valve/SteamOS, while others worry Proton’s success reduces incentives for native Linux ports.

MacOS and other notes

  • A subset has moved to macOS to escape Windows’ ads/bloat, accepting reduced game choice.
  • Others dislike macOS for needing third‑party (often paid) apps for behaviors Linux/Windows users expect to tweak natively.
  • One game developer shared their own telemetry: roughly the mid‑50s percent on Windows 11, mid‑40s on Windows 10, and tiny fractions on Wine/Steam Deck/older Windows.

Siren Call of SQLite on the Server

Edge and distributed SQLite

  • Several comments connect server-side SQLite (Fly, Cloudflare, Turso, LiteFS) to “edge compute”: colocating DB and compute to reduce latency.
  • This is seen as especially attractive for read-heavy or mostly-static datasets that can be “file-copied” to many edge locations.
  • Others argue that existing distributed SQL systems (e.g. Cockroach) or traditional client–server DBs could serve similar purposes in practice, with fewer bespoke trade-offs.
  • There’s interest but also caution: distributed SQLite is still seen as young, with tricky semantics around reads/writes and high availability.

When SQLite on the server makes sense

  • Advocates highlight: no network hop, very low latency on NVMe, tiny operational footprint, easy backups via file copy or tools like Litestream, gobackup, sqlite_rsync, and good fit for small/medium apps or on-prem tools.
  • Multiple production anecdotes: SQLite + Litestream + simple web servers running for years with minimal downtime; low-cost self-hosting after leaving managed cloud databases; hermetic unit tests.
  • Detractors counter that network latency to a colocated Postgres/MySQL instance is negligible and the operational overhead of a DB server is small; starting with Postgres avoids painful migration when scaling beyond a single webserver.

Per-user / per-tenant database pattern

  • A major pro-SQLite theme is “datastore-per-user/tenant”: one SQLite DB per user/org, plus a small global metadata DB.
  • Claimed benefits: easy sharding and horizontal scaling, strong isolation (no accidental cross-user leaks), simpler reasoning, and backups via whole-file copies.
  • Critics question global constraints (e.g., unique email), cross-DB atomicity, and the need to implement consistency and messaging in the application. Responses suggest: immutable user IDs, global indexes for identifiers, eventual consistency, and careful partition boundaries.
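The datastore‑per‑tenant pattern above can be sketched in a few lines. Everything here (paths, schema, the tenant‑id check) is illustrative, not from the thread:

```python
import re
import sqlite3
from pathlib import Path

def open_tenant_db(base_dir: Path, tenant_id: str) -> sqlite3.Connection:
    """One SQLite file per tenant: isolation by construction, backup = file copy."""
    # Never build filesystem paths from raw user input.
    if not re.fullmatch(r"[a-z0-9_-]{1,64}", tenant_id):
        raise ValueError(f"bad tenant id: {tenant_id!r}")
    base_dir.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(base_dir / f"{tenant_id}.db")
    conn.execute("PRAGMA journal_mode=WAL")  # concurrent readers + one writer
    conn.execute(
        "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"
    )
    return conn
```

Cross‑tenant queries are impossible by construction; the critics’ “unique email” objection is typically answered by keeping a small global metadata DB that maps identifiers to tenant IDs.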

Technical drawbacks and concerns

  • Concurrency: default locking model, single writer, and multi-process access require care; WAL mode and busy_timeout mitigate but don’t remove limits.
  • Schema migrations: SQLite’s ALTER TABLE is weaker because it stores raw SQL definitions; complex migrations can be fragile.
  • Data integrity: default “dynamic typing” and disabled foreign keys worry some; strict tables and pragma settings help but must be explicitly enabled.
  • Tooling ecosystem: mainstream ELT/BI tools rarely target SQLite directly; workarounds involve copying files or using other engines (e.g., DuckDB) for analytics.
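The mitigations listed above are all opt‑in pragmas. A sketch of a connection‑setup helper that applies them; the 5‑second timeout is an arbitrary example value:

```python
import sqlite3

def hardened_connect(path: str) -> sqlite3.Connection:
    """Apply the opt-in settings SQLite ships with disabled by default."""
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")    # readers don't block the single writer
    conn.execute("PRAGMA busy_timeout=5000")   # wait up to 5 s on a locked database
    conn.execute("PRAGMA foreign_keys=ON")     # FK enforcement is off by default
    conn.execute("PRAGMA synchronous=NORMAL")  # common pairing with WAL mode
    return conn
```

Note that `foreign_keys` is per‑connection, so helpers like this belong at every connection site, not just in a one‑time setup script.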

DA, sheriff, who shared woman's nude photos on phone are covered by QI

Qualified Immunity: How It Works in Practice

  • Commenters describe QI as requiring a previous case with nearly identical facts where a court has already ruled the conduct unconstitutional.
  • Courts often treat small factual differences as enough to make a case “novel,” preserving immunity even when behavior is obviously abusive.
  • This leads to a dynamic where the first time a right is violated in a specific way, the official is shielded, and only future identical cases would have a chance.

Origins and Expansion of QI

  • Several note QI is a court‑created doctrine, not in the Constitution or statute, originating in late‑1960s Supreme Court civil-rights cases.
  • Later cases in the 1980s and 2000s broadened it, especially by allowing courts to dismiss on QI grounds without fully deciding whether rights were violated, reinforcing a “catch‑22.”

Debate: Should QI Exist at All?

  • Abolitionists argue QI and related immunities (sovereign immunity for agencies, absolute immunity for prosecutors) effectively put officials above the law and block civil remedies for even egregious misconduct.
  • Others argue some form of protection is needed so officers/judges aren’t personally liable for enforcing laws later struck down, and to avoid floods of frivolous lawsuits.
  • Proposed reforms include:
    • Letting more cases reach juries instead of being cut off early by QI.
    • Colorado-style statutory limits on QI and capped personal liability.
    • Replacing QI with mandatory professional liability insurance for officers.

Case-Specific Issues and Legal Nuances

  • Some see the underlying Fourth Amendment violation as “cut and dry”; others note the facts are legally complex:
    • The woman consented to a search by Idaho police, who copied her phone.
    • Oregon officials later accessed/shared that data, ostensibly to investigate her deputy boyfriend.
  • The court found a constitutional violation but still granted QI because no prior case in that circuit addressed this specific combination (consented extraction by one agency, later sharing by another).
  • Commenters highlight that this decision now likely creates precedent for future, similar cases, but offers no remedy to this victim.

Accountability, Culture, and Democracy

  • Many see this as cultural failure as much as legal: leadership tolerated officers gossiping about and viewing intimate photos, an obvious abuse of power.
  • There is frustration that sheriffs and DAs are elected and yet rarely removed by voters, weakening practical accountability.
  • Several advocate stronger oversight boards, better leadership standards, and use of tools like decertification registries (not always allowed in union contracts).

Practical Takeaways and Privacy Concerns

  • Recurrent advice: never consent to searches, don’t talk to police without a lawyer, and be wary of keeping sensitive material on phones given current legal realities.
  • Some push back that this borders on victim-blaming but agree that, given QI and law-enforcement behavior, individuals must act defensively.