Hacker News, Distilled

AI-powered summaries for selected HN discussions.

A Rust shaped hole

Language Alternatives to Fill the “Rust-Shaped Hole”

  • Several commenters argue the author’s criteria actually point more to languages like OCaml, Nim, Swift, or Zig than to Rust:
    • OCaml: native, GC’d, expressive; seen as an excellent fit but hampered by a smaller ecosystem and historically weak multithreading.
    • Nim, Odin, Zig, Gleam: each proposed as “Rust without the pain” in different ways; trade-offs are ecosystem maturity, ergonomics, or explicit allocators (Zig).
    • Swift is highlighted repeatedly as an underrated, cross‑platform, memory‑safe, high‑performance alternative with good C/C++ interop and an increasingly decent toolchain.

TypeScript, JS Runtimes, and Native Binaries

  • Several participants share the author’s affection for TypeScript’s type system and abstraction level but see major drawbacks:
    • Native binary story is weak: Bun and Deno can “compile” TS/JS to binaries, but outputs are large (≈60–70MB) and bundle full runtimes.
    • npm ecosystem and tooling configuration (ts-node, ESLint, lints like no-floating-promises) are described as painful and slow on large codebases.
  • There’s interest in “TypeScript-like but compiled” languages:
    • C# is suggested but rejected as too nominal and rigid compared to TS’s unions, literal types, and type manipulation (see the sketch after this list).
    • AssemblyScript, ReasonML, and OCaml are mentioned, but each has gaps in documentation, ecosystem, or TS-like expressiveness.
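
To make the comparison concrete, here is a minimal sketch of the TypeScript features commenters say nominal systems like C#’s lack (literal types, discriminated unions, derived types); the specific types are invented for illustration:

```typescript
// Literal types restrict a string to specific values.
type LogLevel = "debug" | "info" | "warn" | "error";

// A discriminated union: the `kind` literal tags each variant.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  // The compiler narrows `s` by its `kind` tag in each branch; a nominal
  // system would typically model this as a class hierarchy instead.
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "rect":
      return s.width * s.height;
  }
}

// Type manipulation: derive one type from another.
type ShapeKind = Shape["kind"]; // "circle" | "rect"
```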

Rust: Memory Management, Complexity, and Refactoring

  • Strong debate over whether Rust counts as “manual memory management”:
    • Pro-Rust side: memory is explicitly modeled, but freed automatically via ownership; most code feels closer to automatic management than C’s malloc/free.
    • Critics: the burden moves into the type system and borrow checker; you must choose between ownership, cloning, Rc/Arc, RefCell, etc., which is perceived as complexity.
  • Many report Rust makes large refactors easier:
    • Changing types and ownership patterns lets the compiler point out all required updates; if it compiles, memory and thread-safety invariants are likely preserved.
  • Others find Rust more difficult than C or Haskell in practice, citing:
    • Lifetimes, closure traits (Fn/FnOnce), trait resolution, macros, and the need to pull in many crates for basic tasks.
    • Perception that Rust rivals C++ in complexity, though defenders argue Rust’s complexity is more principled and better supported by tooling and error messages.

C, C++, and “Simplicity” Debated

  • The article’s claim that C is simple and easy to review is heavily contested:
    • Commenters point to undefined behavior, data races, null propagation, aliasing with restrict, and global state as “spooky action at a distance.”
    • Annex J.2’s long UB list is cited as evidence that C’s apparent syntactic simplicity hides deep semantic fragility.
  • C++ is widely seen as more complex than Rust due to decades of backwards compatibility, multiple initialization modes, template and move-semantics intricacies, and a “hoarder’s house” of overlapping features.

Performance, GC, and “Solid” Native Programs

  • Some argue GC’d languages (Java, Go, Haskell) can achieve competitive or superior throughput, especially when allowed large heaps; tracing GCs are framed as converting RAM into speed.
  • Others emphasize memory footprint, predictability, and fewer runtime dependencies as reasons to prefer native, statically linked binaries (Rust, Go, Swift), which “feel solid” by depending mainly on the OS kernel rather than external runtimes or shared libraries.

Miscellaneous Technical Points

  • The article’s RAM-latency analogy (days vs minutes) is dissected; several posts clarify the distinction between latency and throughput and caution against mixing them in metaphors.
  • Go’s “simplicity” is criticized as pushing complexity into application code (especially error handling), whereas Rust and some ML-family languages encode more invariants in types at the cost of a steeper learning curve.

Code highlighting extension for Cursor AI used for $500k theft

Supply-chain risk and dev setups

  • Many commenters see this as yet another supply-chain attack and describe hardening their workflows: per‑project VMs/containers, minimal host installs, Nix, Flatpak, or LXC.
  • There’s frustration that modern stacks require opaque binaries and network access just to build, making true from-source, offline bootstrapping rare.
  • Some share simple container workflows (e.g., one container per project with only that folder mounted), and distrust “devcontainers” that expose too much of the host (.ssh, full FS).

VS Code, Cursor, and Open VSX responsibilities

  • Debate centers on who bears responsibility: Cursor (a commercial product), Open VSX (volunteer‑run registry), or the user.
  • One side: Cursor is a high‑funding company effectively outsourcing a critical security component to under‑resourced volunteers; they should fund or provide hardened infrastructure and vetting.
  • The other side: Cursor merely exposes a third‑party registry; users choosing random extensions must accept the risk, similar to package managers.
  • Microsoft’s tweet about blocking the malicious extension “in 2 seconds” is seen by some as marketing; others note VS Code’s marketplace still lets many malware extensions through.

Extension security model and sandboxing

  • Multiple people were surprised to learn that VS Code/Cursor extensions are not sandboxed and inherit full user permissions: filesystem, network, and ability to spawn processes (e.g., PowerShell).
  • Comparisons are made to browser extensions (sandboxed) vs Electron apps (browser UI, fewer OS protections).
  • Several argue editors should implement permission systems and sandboxing (Docker, WASM, or OS sandboxes); others claim perfect sandboxing of arbitrary code is unrealistic.

Crypto storage and user behavior

  • Strong criticism of keeping ~$500k of crypto on a general dev machine; many argue such amounts should live on hardware wallets or isolated “bank‑like” devices.
  • Counterpoint: in practice, few people can realistically audit all software they run, and modern computing makes true vigilance hard.

Mitigation strategies proposed

  • Use hardware wallets and testnets; segregate “money machines” from dev machines.
  • Restrict or whitelist extensions, pin versions (e.g., via Nix), and monitor network traffic.
  • Run IDEs in containers/VMs with limited filesystem access; keep sensitive data in separate encrypted locations.

LLM Inevitabilism

Debating “Inevitability” vs Choice

  • Many see LLMs as effectively inevitable: once a technology is clearly useful and economically powerful, multiple actors will pursue it, making rollback unrealistic short of major collapse or coordinated bans.
  • Others argue “inevitability” is a rhetorical move: if you frame a future as unavoidable, you delegitimize opposition and avoid debating whether it’s desirable.
  • Several comments distinguish between:
    • LLMs as a lasting, ordinary technology (like databases or spreadsheets), and
    • Stronger claims that near‑term AGI or mass human obsolescence are destined.

Comparisons to Earlier Tech Waves

  • Supporters liken LLMs to the internet or smartphones: rapid organic adoption, hundreds of millions of users, clear individual utility (search-like Q&A, coding help, document drafting).
  • Skeptics compare them to Segways, VR, crypto or the “Lisp machine”: loudly hyped, heavily funded, but ultimately niche or re-scoped.
  • Counterpoint: none of those “failed” techs had current LLM‑level usage or integration into many workflows.

Economics, Sustainability, and a Possible AI Winter

  • Disagreement over whether current LLM use is fundamentally profitable or heavily subsidized:
    • Some operators claim ad‑supported consumer usage and pay‑per‑token APIs can be high‑margin.
    • Others point to multibillion‑dollar training and datacenter spend, rising prices, and “enshittification” signs (nerfed cheap tiers, opaque limits).
  • Concerns include: energy and water use for data centers, finite high‑quality training data, and diminishing returns in model scaling.

Real‑World Usefulness vs Hype

  • Many developers report genuine productivity gains for boilerplate, refactoring, docs, glue code, and “junior‑engineer‑level” tasks, especially with careful prompting and tests.
  • Others find net‑negative value on complex, legacy codebases: non‑compiling patches, subtle bugs, and high review overhead. Studies are cited suggesting AI‑assisted programmers feel faster but are often slower or introduce more defects.
  • Similar splits appear outside coding (writing, law, finance, customer support): from “game‑changer” to “unreliable toy.”

Societal, Psychological, and Ethical Concerns

  • Strong unease about AI companions, AI‑generated social sludge, mass disinformation, and loss of genuine human interaction; social media is repeatedly referenced as a warning case.
  • Fears that gains will accrue mainly to model owners, deepening inequality and centralization, and that LLM‑based tools will be used to cut labor costs rather than improve lives.
  • Some emphasize environmental and geopolitical risks: AI as leverage in trade or sanctions, and as another driver of emissions.

Agency and Governance

  • Several argue that past “inevitable” trajectories (industrialization, nuclear, social media) were shaped—though not fully controlled—by policy, labor action, and public resistance.
  • The thread repeatedly returns to the idea that LLMs are very likely to persist, but how and where they are deployed, who controls them (centralized clouds vs local/open models), and what is off‑limits remain political choices, not fixed destiny.

The new literalism plaguing today’s movies

Phones, Attention, and “Second-Screen” Storytelling

  • Many see hyper-literal dialogue and constant signposting as a response to distracted audiences watching while on their phones.
  • Others invert the causality: people reach for their phones because the movies are shallow, overlong, and empty.
  • Several cite streaming execs explicitly wanting shows to work as “background” content, requiring characters to spell out what’s happening.
  • Some argue the issue is less “short attention span” and more impatience and different pacing norms vs. older, slower films.

Global Markets, Censorship, and Cost

  • A recurring theory: blockbusters must now work for non‑native English speakers and pass foreign censors, pushing toward simple visuals, repeated flashbacks, and easily dubbed exposition.
  • Huge budgets and reliance on international box office encourage lowest‑common‑denominator storytelling and easy moral clarity.

Is “New Literalism” Actually New?

  • Several commenters say mainstream films have always telegraphed emotions and themes; the article cherry‑picks recent examples.
  • Past hits like “Good Will Hunting” or “Titanic” were also obvious and heavily signposted, while subtler films existed in parallel.
  • Others counter that today’s combination of on‑the‑nose dialogue, repetition, and “ruined punchlines” (“did you just say X?”) feels qualitatively worse.

Bad Writing, Not Just Literalism

  • A strong thread: the real plague is weak, committee‑driven writing and executives who don’t care about scripts, not literalism as an aesthetic choice.
  • Literal explanation can work (anime, some games, some experimental films) when used deliberately; the problem is when it’s there only to protect confused or hostile viewers.

Blockbusters vs. Indie and Foreign Films

  • Several suggest that people wanting nuance already gravitate to indie, festival, and foreign films, which still embrace ambiguity and “show, don’t tell.”
  • Examples cited as counter to “new literalism” include recent European and Asian films, as well as some high‑profile awards titles.
  • Others are skeptical that arthouse output is fundamentally better; it may just be smaller‑scale and less market‑tested.

Audiences, Politics, and Message-First Cinema

  • One line of argument: studios now fear misinterpretation (e.g., antiheroes idolized, satire co‑opted), so they hammer home a single, “safe” message.
  • Another: writers themselves are increasingly explicit about using movies as vehicles for social or political statements, which pushes toward didactic, literal storytelling.
  • Some point to franchises (superheroes, certain sequels) as emblematic: themes are spelled out, metaphors explained, and moral ambiguity avoided.

Theaters, Economics, and Changing Habits

  • Ticket sales per capita peaked long before streaming; declines are tied to home tech improvements, piracy, rising prices, and COVID, not just film quality.
  • Many now reserve theaters for “event” movies (IMAX, spectacle) and watch everything else at home, where rewinding and pausing reduce the need for spoon‑feeding.
  • Others insist the cinema experience still offers unmatched engagement—when audiences aren’t distracted by phones and noise.

Reception of the Article

  • Some find the “new literalism” frame insightful; others call it shallow, elitist, or indistinguishable from the perennial “movies were better before” complaint.
  • Criticism that the piece leans on negative examples, offers few concrete positive counter‑examples, and blurs together very different films under one label.

AWS Lambda Silent Crash – A Platform Failure, Not an Application Bug [pdf]

Core technical issue: Lambda execution lifecycle

  • Many commenters say the described “silent crash” is just Lambda’s documented behavior: once the handler returns a response, execution is frozen and in‑flight async work (like HTTP requests) may be interrupted.
  • The code (as inferred from the write‑up and AWS sample) appears to queue an async task (e.g., sending an email) and then immediately return 201, expecting the background task to complete reliably (see the sketch after this list).
  • Several point out that Lambda is not guaranteed to run any code after the response is returned, except for a fuzzy, non‑deterministic “post‑invocation” phase mostly meant for extensions/telemetry.
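
A hedged sketch of that anti-pattern as a Node.js/TypeScript handler (the handler body and email call are invented for illustration, not taken from the PDF; `fetch` assumes Node 18+):

```typescript
import type { APIGatewayProxyResult } from "aws-lambda";

// Hypothetical background task: an HTTP call to a mail provider.
async function sendWelcomeEmail(to: string): Promise<void> {
  await fetch("https://mail.example.com/send", {
    method: "POST",
    body: JSON.stringify({ to }),
  });
}

export const handler = async (): Promise<APIGatewayProxyResult> => {
  // Fire-and-forget: the promise is deliberately not awaited...
  void sendWelcomeEmail("user@example.com");

  // ...so once this response is returned, Lambda may freeze the execution
  // environment with the HTTP request still in flight. (For async handlers
  // the runtime finalizes when the returned promise settles, so
  // context.callbackWaitsForEmptyEventLoop does not rescue this pattern.)
  return { statusCode: 201, body: JSON.stringify({ status: "created" }) };
};
```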

Bug vs. user error

  • Majority view: this is not a Lambda bug but an architectural mistake and misunderstanding of the platform.
  • Minority view: some think there might have been intermittent VPC/network issues and that Lambda should not appear to “crash mid‑await” if the handler is truly awaiting the HTTP request.
  • A few note that context.callbackWaitsForEmptyEventLoop and Node.js handler semantics complicate the picture, but nothing clearly contradicts the basic “return = end of execution” model.

Proper patterns for background work

  • Recommended pattern: have the HTTP‑facing Lambda enqueue a job (e.g., SQS) or invoke another Lambda asynchronously, then return (see the sketch after this list).
  • Relying on Node.js event emitters or async logging/reporting after return is described as fragile and known to fail with tools like logging/monitoring SDKs.
  • Some mention that on busy Lambdas you can sometimes “abuse” reuse of execution environments for caches or non‑critical background tasks, but it’s explicitly non‑guaranteed.
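
A minimal sketch of the enqueue-then-return pattern, assuming an SQS queue whose URL arrives via a hypothetical EMAIL_QUEUE_URL environment variable:

```typescript
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";
import type { APIGatewayProxyResult } from "aws-lambda";

const sqs = new SQSClient({});
const QUEUE_URL = process.env.EMAIL_QUEUE_URL ?? ""; // hypothetical config

export const handler = async (): Promise<APIGatewayProxyResult> => {
  // The enqueue is awaited, so it completes before the response is sent;
  // a separate worker Lambda consumes the queue and sends the email.
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: QUEUE_URL,
      MessageBody: JSON.stringify({ to: "user@example.com" }),
    }),
  );
  return { statusCode: 201, body: JSON.stringify({ status: "queued" }) };
};
```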

AWS documentation and support

  • Several say the behavior is clearly documented in Lambda runtime docs, but acknowledge it’s easy to miss if you don’t read them carefully.
  • Mixed views on AWS Support: some describe it as ineffective, call‑center‑like, or unwilling to admit faults; others think support likely did explain the issue and the author either didn’t hear it or omitted it.
  • A recurring theme: AWS will not debug user application code in depth unless you’re a very large spender.

Reactions to the write‑up and author

  • Many readers find the 23‑page PDF overwrought, hostile toward AWS, and built on a fundamental misunderstanding, turning it into a cautionary tale about hubris and confirmation bias.
  • Some think it’s an example of how emotionally charged, “forensic” write‑ups can backfire, damaging credibility more than helping.
  • There is additional criticism of time spent on a weeks‑long “investigation” and re‑architecture instead of shipping features, especially for a startup.

Protecting my attention at the dopamine carnival

AI, Coding, and Cognitive Load

  • Some commenters report clear wins from AI tools (surfacing dead documentation, enabling tasks they couldn’t do before, “infinite” time savings for code snippets and logos).
  • Others describe losing days to bad AI guidance, reviewing dangerously wrong code, and chasing fixes that would’ve been faster to do manually.
  • A recurring “best practice” pattern: use AI for first drafts/ideas, then take over; avoid endless back-and-forth trying to get AI to perfectly fix small issues.
  • Concern that offloading too many “hard nuts” to AI may atrophy problem-solving skills, akin to overusing calculators or forklifts.

Skepticism about the Article’s Cited Studies

  • Multiple people challenge the headline stats (brain connectivity −50%, 8x worse recall, “reverse 10 years of decline in 2 weeks,” 19% slower with AI).
  • Critiques: small samples, narrow tasks, not peer-reviewed, misframed metrics (less brain activity may mean efficiency, not “damage”).
  • Some see the dopamine framing as pop-neuroscience or even “security theater”–style rhetoric for tech.

Phones, Apps, and Addiction Management

  • Strong split: some liken app timers to an alcoholic’s “just one drink” rationalization and advocate deleting addictive apps outright (especially TikTok).
  • Others argue phones/apps are now socially mandatory (kids’ activities on WhatsApp, events on Facebook, photos on IG), so the question is how to dip in without getting sucked into infinite scroll.
  • Tactics mentioned: grey-scale displays, browser extensions to strip YouTube’s “enshittified” features, using websites instead of apps, or uninstalling to test whether a service is truly missed.

MFA, Security, and Cognitive Distraction

  • Frustration that many crucial services (banks, some employers) require phone-based MFA, forcing the phone into the workspace and undermining attention.
  • Alternatives discussed: FIDO/U2F tokens, smartcards, desktop/authenticator apps, password-manager-based TOTP (sketched after this list); but some banks only allow their own proprietary app.
  • Recognition that phone MFA is partly about security, partly about driving app usage and gaining a “foothold” on users’ devices.
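
As an illustration of why TOTP needn’t live on a phone, a minimal RFC 6238 sketch using Node’s crypto module (secret provisioning and storage omitted): any device holding the shared secret, a desktop password manager included, generates the same codes.

```typescript
import { createHmac } from "node:crypto";

// Minimal TOTP (RFC 6238): HMAC-SHA1 over a 30-second time counter,
// then "dynamic truncation" down to a 6-digit code.
function totp(secret: Buffer, stepSeconds = 30, digits = 6): string {
  const counter = BigInt(Math.floor(Date.now() / 1000 / stepSeconds));
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(counter); // 8-byte big-endian counter
  const hmac = createHmac("sha1", secret).update(msg).digest();
  const offset = hmac[hmac.length - 1] & 0x0f; // dynamic truncation offset
  const binary =
    ((hmac[offset] & 0x7f) << 24) |
    (hmac[offset + 1] << 16) |
    (hmac[offset + 2] << 8) |
    hmac[offset + 3];
  return (binary % 10 ** digits).toString().padStart(digits, "0");
}
```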

“Brain-Growth” vs Junk Content

  • Question raised: if time on the “dopamine carnival” is spent on science, blogs, or lectures, is it still harmful?
  • Common view: short-form and skim-based consumption mostly yields shallow, quickly forgotten knowledge unless followed by application, reflection, or deeper study.
  • Others note that even “mindless” content can spark creative ideas; often it’s enough to know concepts exist and can be revisited when needed.

Attention, Planning, and Everyday Design

  • Strong resonance with the idea of designing one’s day instead of relying on raw willpower.
  • An ADHD perspective: pre-planning micro-steps (e.g., self-checkout flow) dramatically improves performance; hypothesis that social media overuse may impose ADHD-like attentional costs on neurotypical people.
  • Some report that laptops, not phones, are their real attention sink.

Perceived Cognitive Decline and Generational Notes

  • Several report noticing more “goofy,” spaced-out behavior and ultra-short attention in everyday interactions.
  • Examples: party guests zoning out mid-magic-trick, the meme-ified “Gen Z stare,” and FOMO-driven demands to “do it again” after missing something.
  • Economic stress and social media are cited as likely contributors; AI is seen as a secondary factor.

Ads, Engagement, and Enshittification

  • Some users genuinely like Instagram’s hyper-targeted ads and even invest in Meta because of their own high click-through rate.
  • Others warn that many such products are optimized for clicks and conversion funnels, not quality—“QVC for millennials” with frequent scams or disappointments.

Tools, Gadgets, and Minimalism

  • Mention of pricey “elegant” Faraday/lock boxes vs cheap Faraday bags or simply airplane mode; debate over whether expensive time-lock boxes are worth it.
  • A minority advocates going all the way to dumbphones (calls/SMS only), GPS and camera as separate devices, as the only truly clean break.

The Collapse of the FDA

Chelation, “Natural” Therapies, and FDA Limits

  • Commenters identify RFK’s “chelating compounds” as EDTA, legitimately used for heavy metal poisoning but abused by “detox” and autism quacks, sometimes fatally (e.g., calcium depletion causing cardiac arrest).
  • EDTA is at once a prescription drug, a supplement ingredient, a lab reagent, and a food additive; the fight is mostly about what can be marketed as medicine.
  • Some argue preventing self‑medication was a mistake; others say this is exactly where FDA protection is most needed.
  • Similar tensions show up around raw milk and “clean foods”: some seek enzymes/bacteria they believe are beneficial; others see unjustified culture-war hostility to basic safety like pasteurization.

How Well Is the FDA Working Now?

  • Several posts cite “Bottle of Lies” and ProPublica pieces to argue the FDA is already failing on overseas generics: manipulated records, contamination, and secret exemptions granted to avoid drug shortages.
  • Others say this is less “collapse” than underfunding, limited jurisdiction, and impossible political tradeoffs between safety and supply.
  • There’s debate over whether current manufacturing standards are unrealistically strict and drive production offshore, or whether import laxity is the core failure.
  • Some stress that, despite real scandals, the FDA clearly blocks vast numbers of unsafe/ineffective drugs; calling its performance a “crapshoot” is rejected as misleading.

Regulation vs. Freedom and the “Nanny State”

  • One camp wants major deregulation: sees FDA as slow, heavy‑handed, blocking lifesaving drugs (e.g., narcolepsy treatment TAK‑861) and overburdening innovation; suggests post‑hoc policing of “top used” products instead.
  • Critics respond that in a snake‑oil market honest R&D loses to aggressive fraud, and history shows regulations are “written in blood” after mass poisonings.
  • Arguments over whether we should let “Darwin” cull reckless individuals clash with concerns about children, misled patients, and corporate incentives to harm for profit.

Advertising and Pharma Power

  • Many see direct‑to‑consumer prescription drug ads as corrosive: they pressure doctors, distort demand, and are framed as “speech” to evade control, unlike tobacco ads.
  • Others say ultimate responsibility lies with prescribers, not advertisers, though that view is challenged as ignoring real market and liability pressures.

Devices, Cybersecurity, and Broader Institutional Decay

  • Commenters note the FDA’s crucial role in medical devices, from scammy or unsafe implants to networked monitors with serious cybersecurity and national‑security implications.
  • Some fear broader institutional erosion (courts, agencies, public health) is pushing the U.S. toward “failed state” dynamics; others attribute that sense to media‑driven fear and polarization.

RFK, COVID, and the Future FDA

  • Opinions diverge on new FDA leadership and RFK’s agenda: some defend evidence‑based critiques of blanket COVID vaccination policy; others see the same work as selectively framed “crazy MAGA” anti‑vax rhetoric.
  • A few hope the current shock could clear ossified structures and eventually yield a better regulatory regime; many others worry that what replaces the FDA will be far worse, especially for food and drug safety.

Roman dodecahedron: 12-sided object has baffled archaeologists for centuries

Knitting / textile tool hypothesis

  • One camp insists these are tools for making glove fingers or knitted/“loom-knit” tubes (wool or metal chains).
  • Supporters claim the varied hole sizes fit different finger or chain diameters, that a multi-face object is a convenient multi-size tool, and that northern find spots align with colder climates where gloves were needed.
  • Some argue “loom knitting” or nalbinding could predate the usual “invention of knitting” date, so chronological objections are overstated.

Counterarguments to knitting theory

  • Multiple commenters stress there is no firm archaeological evidence; it remains one speculative hypothesis among dozens.
  • Objections:
    • No wear marks where yarn or wire would rub.
    • Earliest known knitting appears centuries later; Roman textiles are overwhelmingly woven.
    • Only five pegs per face and hole geometry don’t match how knitting/loom-knitting actually works.
    • Similar icosahedra exist; hard to reconcile with a glove-tool story.
  • Several call the “grandma solved it on YouTube” narrative pseudoarchaeology that journalists repeat uncritically.

Other functional tool ideas

  • Coin gauge: dodecahedrons found in coin hoards suggest a size-checking tool, but ancient coins varied by weight more than diameter, and simpler gauges would suffice.
  • Surveying/range-finding: differing hole pairs could give fixed sighting distances, but lack of markings and decorative knobs weaken this.
  • Chain or chainmail tool: wrapping wire around corner balls to make chains; disputed as hard to detach and again lacking wear patterns.
  • Glove templates: personal sizing jigs for outsourced glove-making.

Status symbol, amulet, or game

  • Some favor a “masterpiece” or craft test object: difficult bronze casting that proves skill. Critics ask why they’re regionally clustered and often buried with women.
  • The article’s “cosmic symbol / amulet” idea is noted; others add generic “ritual object” skepticism but admit lack of practical explanations.
  • Several propose toys, puzzles, or gambling devices; one likens them to ancient fidget spinners or novelties.

Manufacture, distribution, and evidence gaps

  • Likely bronze lost-wax castings; high craftsmanship and cost suggest non-trivial value.
  • Found mainly in Gallo‑Roman areas (France, Britain, etc.), not Italy or the East, and in graves (both sexes), coin hoards, camps, and refuse.
  • Variability in size and lack of standardization argue against a calibrated measuring system.
  • Broader discussion notes how much everyday practice goes undocumented, and how our own digital culture may leave similar mysteries.

Apple's MLX adding CUDA support

What the PR Actually Does

  • Adds a CUDA backend for MLX, targeting Linux with CUDA 12 and SM 7.0+ GPUs.
  • It’s not CUDA on Apple Silicon, and not a reimplementation of the CUDA API.
  • Intended use: write MLX code on a Mac (Metal/Apple Silicon), run it on Nvidia clusters/supercomputers via CUDA.
  • Early tests show mlx-cuda wheels exist (currently for Python 3.12 only).

Why This Matters

  • Makes MLX a more serious competitor to PyTorch/JAX by giving it access to mainstream Nvidia infrastructure.
  • Improves developer experience for Mac users: prototype locally on Apple hardware, deploy at scale on Nvidia.
  • Some speculate this could slightly increase overall AI capacity if it eases use of existing clusters.
  • Others stress this does not threaten Nvidia; abstraction layers typically still land on Nvidia GPUs in production, which reinforces Nvidia’s position.

Unified Memory & Performance Discussion

  • MLX leans on unified memory; CUDA’s “Unified Memory” is implemented via page migration and on-demand faulting, not physically shared RAM.
  • On Apple Silicon, CPU and GPU truly share physical memory; on most CUDA systems, data must still be moved, just hidden by the runtime.
  • Several commenters note that CUDA Unified Memory can cause severe memory stalls without manual prefetching, especially for ML training; performance is highly workload-dependent.
  • High-end Nvidia setups (Grace Hopper, NVLink, Jetson) offer tighter CPU–GPU memory integration, but behavior and speed still differ from Apple’s UMA.

Legal / IP and CUDA Compatibility

  • Thread repeatedly clarifies: this PR does not reimplement CUDA APIs, so copyright/API issues aren’t directly engaged.
  • Google v. Oracle is cited as important precedent for reimplementing APIs under fair use, but people caution that the ruling is narrow and legally nuanced.
  • Multiple comments emphasize that CUDA is an ecosystem (compilers, libraries, tools, debuggers, profilers), not “just an API”, and cloning it fully would be enormously difficult and expensive, even aside from IP questions.

Broader Ecosystem & Apple Strategy

  • Some hope this is a step toward MLX as a vendor-neutral layer; others see it simply as Apple making its stack usable in Nvidia-centric research environments.
  • There is frustration that open standards (OpenCL, Khronos) failed to counter CUDA, with some blame placed on Apple for abandoning OpenCL just as demand rose.
  • Debate continues over Apple’s AI strategy, lack of Nvidia support on Macs, and whether Apple will ever ship datacenter- or Nvidia-based solutions; no consensus, and no concrete evidence in the thread.

Anthropic, Google, OpenAI and XAI Granted Up to $200M from Defense Department

Contract Scale and Structure

  • Several commenters note that “up to $200M” per company is a ceiling over multiple years, likely via time-and-materials style orders, not guaranteed revenue.
  • Relative to DoD’s budget and the companies’ own revenues/compute costs, many see it as strategically symbolic “testing the waters” rather than a huge procurement.
  • Some speculate that if the pilots work, much larger follow-on spending is likely.

Which Companies, and Who’s Missing

  • Confusion initially over whether $200M is split or per company; clarifications show it’s per vendor.
  • Debate over Amazon and Meta “losing out”: others point out they already have large defense and GovCloud contracts, and that AWS will likely still capture much of the compute spend.
  • There is criticism of Amazon’s own models as being behind state of the art.
  • Some find xAI’s inclusion suspicious; others argue Grok is a real product and omitting them would also look political.
  • Ethical concerns raised about a recent government–employee-to-founder “revolving door” and about Grok’s recent extremist outputs.

Big Players vs Startups

  • A strong thread argues the money should be split into many $10M awards to smaller AI startups to foster innovation and competition.
  • Pushback: this is procurement of concrete capabilities, not an innovation grant program; a few frontier providers are best positioned to deliver secure, integrated systems at scale.
  • Others note that many “AI startups” just wrap or fine-tune the big models, so funding them would often pay the same incumbents indirectly anyway.

Weaponization, Safety, and Misuse

  • Some fear “agentic” LLMs contributing to autonomous weapons or “hallucinating enemies.”
  • Others counter that current LLMs are ill-suited for real-time targeting and that existing military AI is mostly specialized vision/target-ID systems.
  • There is concern about LLMs being misused in bureaucratic decision-making (e.g., screening grants by ideology) even if not directly in weapons.

Broader Political/Economic Themes

  • Comparisons with EU AI funding; some claim Europe is “sleeping,” others cite large EU AI investment plans.
  • Discussion of contracts as selective industrial policy or “planned economy” via military spending.
  • Worries about AI accelerating white-collar job loss, including in government IT roles.

Anthropic signs a $200M deal with the Department of Defense

Scope and Money of the Deal

  • Multiple links clarify this is “up to” $200M, and not just Anthropic: Google, OpenAI, and xAI reportedly have similar ceilings.
  • Several commenters note this is likely a contracting “vehicle” / cap, not guaranteed spend; actual initial budgets may be 10–100x smaller.
  • Comparisons are made to other defense contracts (e.g., billions for AR headsets), implying this is modest by Pentagon standards and may mostly yield consulting-style outputs (use-case lists, best practices, prototypes).
  • Some argue the reputational damage isn’t worth the relatively small guaranteed money; others see it as a rational “foot in the door.”

Ethical Debate: Selling AI to the DoD

  • One side views doing business with the U.S. military as inherently unethical: “exporter of death,” involvement in current conflicts, and likely use in targeting and surveillance.
  • They worry about AI in life-or-death decisions and diffusion of moral responsibility (“the computer said so”), referencing AI-assisted targeting in current wars.
  • The opposing side argues:
    • Every major power will use AI; abstaining won’t stop militarization.
    • Better for more safety-focused companies to be involved than less constrained actors.
    • Paying taxes already funds the DoD; corporate participation is a continuum of involvement.
  • There’s a deeper philosophical exchange about complicity in “empire,” analogies to religion, historical wartime contexts (WWII, Cold War), and whether all participation in the system is morally tainted.

LLMs, Surveillance, and Technical Role

  • Some see LLMs as transformative for intelligence: turning massive surveillance data into actionable insights, enabling near-total analysis of unencrypted communications.
  • Concerns: a panopticon becomes technically feasible; hallucinated “facts” could put innocents on watchlists with little recourse; pressure to weaken or ban encryption may rise.
  • Others push back on the “LLM as database” framing:
    • LLMs are poor, expensive storage/query engines but strong as interfaces over traditional databases and as tools for document parsing and report synthesis.
    • Classic NLP + rules are cheaper at scale; LLMs may be reserved for complex or edge cases.
  • Mention of “agentic” systems: LLMs writing and iterating on code to query data, but current reliability remains questionable for serious automation.

Broader System and Cultural Comments

  • Side thread on “rebooting” government: complexity, Gall’s Law, and the difficulty of designing simple systems that “work” for hundreds of millions of people.
  • Some note Hacker News culture feels more corporate/LinkedIn-like now; others openly celebrate tech–military collaboration, while a few users say they’ll cancel Anthropic subscriptions over this.
  • xAI’s inclusion is questioned; commenters are unsure what it contributes relative to the other firms.

LIGO detects most massive black hole merger to date

Nature of Black Hole Mergers

  • Consensus: two black holes merge into a single, more massive black hole; mass, spin and charge combine, with some energy radiated away as gravitational waves.
  • Mass determines horizon “size”; larger black holes are less dense on average (radius ∝ mass; see the check after this list).
  • Commenters debate “consume vs merge”; the better analogy offered is two droplets joining or two tears in fabric fusing into one.
  • Event horizons are described as geometric boundaries, not physical surfaces; crossing is defined by escape velocity reaching c.
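
The “less dense on average” claim follows directly from the Schwarzschild radius; a quick check:

```latex
r_s = \frac{2GM}{c^2} \propto M
\qquad\Rightarrow\qquad
\bar{\rho} = \frac{M}{\tfrac{4}{3}\pi r_s^3} \propto \frac{M}{M^3} = M^{-2}
```

Doubling the mass doubles the horizon radius but cuts the average density by a factor of four.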

Shape, Spin, and Horizons

  • Non-rotating (Schwarzschild) black holes have spherical horizons; rotating (Kerr) black holes have oblate horizons and additional structures like ergospheres and Cauchy horizons.
  • During mergers, the horizon can be highly distorted (“peanut-shaped”) but must relax to a smooth spherical/oblate shape; GR doesn’t allow permanent “lumpy” horizons.
  • Some discussion on how to even define “volume,” “density,” or “shape” in curved spacetime; several people flag this as conceptually tricky.

Time Dilation and What We Can See

  • From a distant observer’s frame, infalling matter (or another black hole) appears to slow and “freeze” at the horizon, redshifting into invisibility.
  • This leads to confusion about whether black holes “really” form or merge; multiple comments stress that what happens inside the horizon, or at the singularity, is fundamentally inaccessible.
  • Numerical simulations deliberately treat the interior as untrustworthy; errors are “trapped” inside the horizon while the exterior evolution is modeled accurately.

Energy Release and Gravitational Waves

  • The merger into a 225-solar-mass black hole implies ~15 solar masses converted to energy, mostly as gravitational waves.
  • Commenters quantify this as more instantaneous power than all stars in the observable universe combined, comparable to tens of thousands of Sun lifetimes released in seconds (see the estimate after this list).
  • Gravitational waves are incredibly weak by the time they reach Earth (strain ~10⁻²⁰), illustrating both the stiffness of spacetime and the huge energy at the source.
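
A back-of-envelope check of the “tens of thousands of Sun lifetimes” figure, using rounded constants:

```latex
E \approx 15\,M_\odot c^2
  \approx 15 \times (2\times10^{30}\,\mathrm{kg}) \times (3\times10^{8}\,\mathrm{m/s})^2
  \approx 2.7\times10^{48}\,\mathrm{J},
\qquad
\frac{E}{L_\odot \times 10\,\mathrm{Gyr}}
  \approx \frac{2.7\times10^{48}}{1.2\times10^{44}} \approx 2\times10^{4}
```

With the Sun radiating roughly 1.2×10⁴⁴ J over a ~10 Gyr lifetime, the ratio indeed lands in the tens of thousands.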

Thought Experiments on Collisions

  • Head-on, high-speed collisions: kinetic energy largely ends up in the final black hole’s mass, minus what escapes as gravitational waves; momentum and energy conservation still hold.
  • Grazing encounters could, in principle, briefly share apparent horizons without forming a single global horizon, but once a true shared horizon forms, separation is impossible.

Cosmological Analogies

  • Some discussion on whether a black hole with the mass of the (observable) universe would be about the size of the universe, and whether the early universe “was” a black hole; participants highlight unresolved and unclear aspects here.

Detectors, Funding, and Networks

  • Multiple comments worry about proposed U.S. funding cuts to NSF and LIGO, including risk of shutting one U.S. interferometer.
  • Triangulation and sky localization currently rely on a small global network (LIGO sites, Virgo, KAGRA, GEO600); losing a LIGO site would significantly degrade localization.
  • LISA (the planned space-based detector) is led by ESA; some concern is expressed about NASA’s role and U.S. budget decisions, but ESA’s core mission is moving forward.

Usefulness and Spin-Offs

  • Direct “practical uses” are unclear; commenters emphasize that fundamental experiments often pay off via enabling technologies: ultra-stable lasers, precision metrology, isolation systems, advanced detectors, and software pipelines.
  • Gravitational-wave astronomy may probe the very early universe, beyond the photon-based cosmic microwave background, potentially informing new physics.

Awe, Scale, and the ‘Chirp’

  • Many express a sense of existential smallness and awe at energies and scales involved.
  • The audible “chirp” from the signal, if up-shifted into hearing range, corresponds to massive black holes orbiting hundreds–thousands of times per second; listeners find it eerie and “insane.”

Cognition (Devin AI) to Acquire Windsurf

What actually got bought?

  • Many are confused by the sequence: OpenAI’s rumored deal collapses → Google pays ~$2.4–2.5B for a perpetual license plus an acquihire of the founders/top talent → Cognition now “acquires Windsurf.”
  • Commenters debate whether Cognition is buying a hollow shell (brand, IP, remaining staff, user base) while Google already took the key people and long‑term rights to the tech.
  • Some speculate this is an example of a “blitzhire” structure: big tech gets talent + IP license quickly while avoiding full M&A scrutiny.

Where did the billions go, and did employees get shafted?

  • Strong disagreement over who benefits from the Google money: many assume most went to investors and the execs who left for Google, with “left‑behind” employees getting little.
  • Cognition’s blog claims 100% of Windsurf employees “participate financially” with cliffs waived and vesting accelerated, but commenters call this PR‑speak without numbers; “participate financially” could still mean trivial sums or illiquid stock.
  • Some argue this outcome is still better than a typical failed startup; others say it poisons trust in startups if founders can cash out via backdoor deals while rank‑and‑file are stranded.

Impact on products and users

  • Users ask what happens to Windsurf’s IDE and plugins, especially JetBrains support. Some fans say Windsurf’s agentic behavior, tab model, and code indexing were superior to Cursor; others report almost no adoption in their circles.
  • Several say the rapid ownership churn makes Windsurf hard to trust going forward and expect higher prices, nerfing, or eventual shutdown.
  • Devin itself is polarizing: some call it overhyped and underwhelming; others report using it successfully for smaller features.

Valuations, moats, and “AI bubble” concerns

  • Commenters question the logic of paying billions for licenses and for “AI software engineer” wrappers with no clear moat over foundation model providers or IDE vendors.
  • Many see this as strong evidence of an AI bubble disconnected from fundamentals; others counter that real revenue (e.g., from leading model labs) and personal productivity gains justify large bets, even if many players will still be wiped out.
  • There is broad skepticism that tools like Devin/Windsurf have durable defensibility if model providers or Microsoft/JetBrains decide to bundle comparable agents directly.

Data brokers are selling flight information to CBP and ICE

Scale of Data Brokerage vs “Big Tech”

  • Many comments argue that data brokers are far more invasive than widely blamed platforms like Google or Facebook.
  • Big ad platforms are said to mostly keep data in-house for targeting, whereas brokers directly sell detailed dossiers.
  • People in the industry claim the scale is “10–1000x” worse than most HN readers imagine, and that this has been true for years.

Where the Data Comes From & How It’s Combined

  • Claimed sources include credit‑card networks, POS terminals, mobile carriers, auto manufacturers, retailers, loyalty programs, airports, license-plate cameras, tax and property records, professional associations, and public records.
  • A key value of brokers is joining messy, heterogeneous data sets (often public but hard to work with) into unified, individual-level profiles.
  • Example: combining “anonymized” purchase data by postal code with a unique address can fully de‑anonymize a household.
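
A toy TypeScript illustration of that joining step; the record shapes and single-household assumption are invented for the example:

```typescript
interface AnonPurchase { postalCode: string; item: string }
interface Household   { postalCode: string; address: string }

// If a postal code maps to exactly one known household, every "anonymous"
// purchase in that code is trivially re-identified.
function deanonymize(purchases: AnonPurchase[], households: Household[]) {
  const byCode = new Map<string, Household[]>();
  for (const h of households) {
    const list = byCode.get(h.postalCode) ?? [];
    list.push(h);
    byCode.set(h.postalCode, list);
  }
  return purchases.map((p) => {
    const candidates = byCode.get(p.postalCode) ?? [];
    return {
      ...p,
      address: candidates.length === 1 ? candidates[0].address : undefined,
    };
  });
}
```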

Government Use & Legal End‑Runs

  • Buying from brokers lets agencies like CBP/ICE bypass warrant processes and inter‑agency data‑sharing constraints that would apply if they went through TSA or airlines directly.
  • Some see this as a direct workaround of constitutional/search protections; others note it’s not clearly illegal under current US law.
  • In the EU, commenters think airlines and intermediaries like ARC/IATA could face serious GDPR risk if they sell identifiable flight data.

Skepticism, Proof, and Concrete Examples

  • Several commenters demand concrete evidence and pricing for hyper‑granular data (e.g., “35‑year‑old dentists on Elm Street”) and are unconvinced by vague “trust me” claims.
  • Others respond with examples of known brokers and news stories (e.g., Kochava, credit-card data sales, carrier location fines), but exact price lists and demo receipts are rarely provided.
  • Some insist individual‑transaction histories by named person are routine; others say they’ve only seen targeting by segment/zip code.

Privacy Harms, Apathy, and Mitigation

  • There’s a recurring theme that trust was destroyed (telemetry misuse, repeated scandals), so people now assume the worst.
  • Many lament broad public indifference: even knowledgeable users underestimate non‑tech industries’ role.
  • Mitigation ideas: ad blockers, minimal social media, cash, privacy‑focused services, GDPR/CCPA requests, and specific opt‑outs (e.g., emailing ARC). Several argue true “digital rebirth” is nearly impossible.

Reconstructing Flight Histories Without First‑Party Data

  • One contributor describes reconstructing individuals’ flight histories at scale from spatiotemporal “breadcrumbs” (social media, ad logs, IoT), inferring flights from impossible travel speeds and matching to public schedules (see the sketch after this list).
  • Others press for details and remain skeptical, but generally agree that pervasive location and event metadata make powerful inference models feasible even without direct airline feeds.
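
A speculative TypeScript sketch of that inference; the speed threshold and record shape are assumptions, not details from the thread:

```typescript
interface Ping {
  lat: number; // degrees
  lon: number; // degrees
  t: number;   // Unix time, seconds
}

// Great-circle distance in km (haversine formula).
function distanceKm(a: Ping, b: Ping): number {
  const R = 6371; // mean Earth radius, km
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLon = rad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Legs implying speeds far beyond ground travel (default 500 km/h) are
// flagged as probable flights, to be matched against public schedules.
function likelyFlights(pings: Ping[], minKmh = 500): Array<[Ping, Ping]> {
  const legs: Array<[Ping, Ping]> = [];
  for (let i = 1; i < pings.length; i++) {
    const hours = (pings[i].t - pings[i - 1].t) / 3600;
    if (hours <= 0) continue;
    if (distanceKm(pings[i - 1], pings[i]) / hours >= minKmh) {
      legs.push([pings[i - 1], pings[i]]);
    }
  }
  return legs;
}
```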

Oakland cops gave ICE license plate data; SFPD also illegally shared with feds

Flock Safety, YC, and surveillance capitalism

  • Many see Flock as purpose‑built for mass surveillance and inter‑agency data sharing, not just “crime prevention.”
  • Criticism extends to its VC/YC roots: profit- and founder-first culture, weak ethical constraints, and marketing claims like “eliminate crime.”
  • A former employee describes a literal “eliminate all crime” mindset, misleading transparency pages, and aggressive cross‑agency sharing.
  • Activists describe local campaigns to block deployments and map camera locations (e.g., community-led inventories and teardowns).

Legality of data sharing (SB 34, SB 54, supremacy clause)

  • Commenters dig into California’s SB 34: AG guidance says ALPR data may not be shared with private, out‑of‑state, or federal agencies, regardless of use case.
  • Some initially confuse SB 34 (ALPR) with SB 54 (sanctuary law) and argue sharing is only barred for immigration enforcement; others rebut with statute/AG text.
  • Debate over whether Oakland PD itself broke the law vs. other California agencies that queried Oakland’s Flock data “on behalf of” ICE/FBI and then relayed results.
  • Supremacy‑clause arguments (“federal law > state law”) are countered with the anti‑commandeering doctrine: states can’t be forced to enforce federal law and may prohibit local cooperation.

Who is responsible: builders vs users vs law

  • One camp: blame law enforcement; this misuse was predictable and explicitly illegal.
  • Another: blame those who created/approved the dataset and ignored predictable abuse; once such systems exist they will be repurposed.
  • A third view: both are culpable; you must design for abuse resistance and also prosecute misuse. Existing CA law is civil, with weak, individualized remedies and no real deterrent.

Policing, impunity, and the “defund” / reform debate

  • Long subthread on police behavior: qualified immunity, DAs reluctant to prosecute cops, unions and informal “strikes” or “quiet quitting.”
  • Disagreement over whether “defund the police” was actually tried; some cite modest budget trims and bail reform, others say police budgets mostly rose and cops simply refused to do their jobs.
  • Several argue the only proven lever is changing incentives, rebooting departments, and imposing real consequences, not just passing new rules.

Immigration enforcement and civil liberties

  • Strong disagreement over ICE: from “just enforcing duly enacted, harsh laws” to descriptions of dragnet operations, racial targeting, revocation of status without due process, and deportations to abusive foreign prisons.
  • Some defend tracking vehicles of undocumented immigrants; others stress false positives, political targeting, and the ease of repurposing such data against legal immigrants, minorities, or dissidents.
  • Nazi comparisons are contested: some see clear historical rhymes in data‑driven targeting and deportation; others call that trivializing the Holocaust.

Data collection, privacy, and historical parallels

  • Recurrent theme: once large-scale personal datasets exist (ALPR, DNA, phone, payments), they will be reused—often beyond original scope—and become magnets for abuse and breaches.
  • Historical examples cited: Nazi use of registries and IBM tabulating systems; Dutch debates over the civil registry; modern DNA and commercial datasets later opened to law enforcement.
  • Some push for radical data minimization and strong consent-based privacy law; others argue you can’t “defang” the state with paper rules—rights require continuous political engagement.

Local crime vs civil liberties in Oakland

  • Oakland residents describe extremely high rates of car thefts, home-invasion style robberies, and armed crews using stolen cars—often gone before 911 can respond.
  • Some report Flock cameras materially help identify and arrest repeat offenders (similar claims made for SF drones), and see them as one of the few working tools.
  • Others argue the same tech is quickly diverted to ICE and federal task forces, and that local quality-of-life concerns are being leveraged to normalize a broader surveillance and deportation regime.

Two guys hated using Comcast, so they built their own fiber ISP

Wired vs wireless, and the appeal of local fiber

  • Many commenters are happy to see real wired infrastructure instead of big carriers’ push toward wireless, which is seen as cheaper to deploy but lower quality.
  • Fiber is praised as dramatically more reliable than DSL/cable, eliminating whole classes of faults (water in copper, lightning, marginal lines).
  • People who’ve had local cable/fiber ISPs report much better support, pricing, and reliability than national incumbents.

Support burden and “home internet plumbers”

  • Several ex‑ISP and helpdesk workers say most tickets are not plant failures but user issues: Wi‑Fi range, email setup, lost passwords, “TV on wrong input”, or even no‑computer dial‑up stories.
  • Others note fiber simplifies troubleshooting (ISP can see up to ONT; often just send a tech).
  • There’s a recurring analogy: you don’t call the water company for a clogged sink, but ISPs are expected to support everything from Wi‑Fi to printers. Some wonder why “home network handymen” aren’t more common.

Monopolies, competition, and Comcast behavior

  • Strong hostility toward Comcast and similar incumbents: data caps, unreliability, scripted support, and exploitative pricing in low‑competition areas.
  • Multiple anecdotes of Comcast (and Cox, etc.) removing or softening caps, improving offers, or calling customers aggressively once a fiber competitor appears.
  • People highlight lobbying against municipal broadband and “captured” state/local governments that slow or block new deployments.

Building an ISP: capital, trenches, poles, and law

  • Comments push back on the idea that “anyone could have done this”: you need technical skill, millions in capital, and the ability to handle legal work, permitting, and physical plant.
  • Underground vs pole attachments is a major trade‑off: underground is robust and aesthetic but expensive and permit‑heavy; poles are cheap but vulnerable and subject to incumbent obstruction.
  • Some argue “captured government” is overstated; others cite pole‑owner and permitting games that have even hampered Google Fiber.

CGNAT, IPv6, and network design choices

  • Prime‑One and similar small ISPs often use CGNAT and locked‑down routers; power users complain (no inbound services, no public IPs).
  • There’s a big debate on IPv6:
    • Pro‑IPv6: avoids CGNAT, enables direct connectivity, can reduce CGNAT hardware costs, and is considered “table stakes” by some.
    • Skeptical small‑ISP operators: almost no customers ask for it; CPE support is inconsistent; dual‑stack introduces extra failure modes for little visible gain.
  • Alternatives like NAT64/464XLAT, MAP‑T, and DS‑Lite are discussed but are seen as limited by current CPE support.

Starlink and rural/US vs EU comparisons

  • Starlink is seen as a strong option for rural/mobility use, but data‑heavy households can’t realistically replace wired with it.
  • European and some Asian commenters note cheap symmetric gigabit or multi‑gigabit with no caps, contrasting sharply with many US markets.
  • Others stress US experience is highly local: some cities have excellent cheap fiber; many suburbs and towns still face de facto monopolies.

Do we really need gigabit (or 10G)?

  • Some say 300 Mbps is enough for a family; others point to upload bottlenecks, multiple 4K streams, cloud backups, and work‑from‑home needs.
  • Technically, gigabit+ is often the “natural” minimum speed for modern fiber gear; oversubscription means advertised speed ≠ guaranteed rate, but higher tiers provide useful headroom.
  • A common stance: once the trench is dug, bandwidth is cheap; the expensive part is building the fiber in the first place.

Random selection is necessary to create stable meritocratic institutions

What “sortition” is and why it’s proposed

  • Commenters note the idea is long‑studied under “sortition”/“demarchy”: offices filled by lottery rather than election.
  • Motivation: elections systematically select for charisma, money, and sociopathy rather than public‑spirited competence; lobbying and staffers/lawyers effectively write laws.
  • Randomly selected citizens are argued to be more representative and less corruptible, since they don’t need to fundraise or seek reelection.

Arguments in favor of sortition or partial sortition

  • Juries and citizens’ assemblies (Ireland, France, local experiments) are cited as proof that random citizens can deliberate, absorb expert input, and reach nuanced, workable recommendations.
  • Several propose hybrid systems:
    • Randomly selected lower or upper houses, or a fixed fraction of seats filled by lottery.
    • Policy‑specific “citizen juries” that vet, amend, or approve legislation.
    • “Election by jury” where a random panel interviews and chooses between candidates.
  • Others suggest expanding legislatures (e.g., US House) and filling some of the new seats by sortition to dilute partisanship and money.

Design variants and safeguards

  • Ideas include:
    • Eligibility pools (basic education, clean record, prior local service).
    • Training periods and good pay to make service attractive and feasible.
    • Stratified sampling or quotas to ensure demographic balance.
  • These proposals draw criticism: eligibility tests risk recreating Jim‑Crow‑style exclusion or being captured by existing elites.

Objections and perceived failure modes

  • Fear of “randos” writing law; lawmaking is seen as more complex and gameable than jury decisions.
  • Concern that power would simply shift to unelected staff, experts, and lobbyists, as with term limits.
  • Juries themselves are criticized as biased and manipulable; some prefer professional judges or mixed panels.
  • Sortition‑based bodies can be steered by facilitators/secretariats, as alleged in Irish and French examples.

Meritocracy, metrics, and alternatives

  • Thread debates whether meritocracy is achievable or just money/elite reproduction in disguise; Campbell/Goodhart’s laws are invoked (metrics get gamed).
  • Some see “meritocracy” mainly as a way to stop elites kicking away ladders; others say the word now masks entrenched privilege.
  • Direct or “liquid” electronic democracy is floated but criticized for rational ignorance, agenda control, and Californian‑style proposition failures.
  • Many conclude some mix of qualification, randomness, and structural reforms (campaign finance, voting systems, institutional design) is needed; no consensus on how far to push sortition.

You Are in a Box

Mobile/desktop “boxes” and user agency

  • Many feel “boxed in” most acutely on phones: instead of the phone acting as a user’s agent, siloed apps control data and interactions.
  • Desired fix: clear, open APIs and better semantics so agents can compare options (e.g., restaurant menus) and transact on the user’s behalf, rather than platforms and gatekeepers steering business.
  • iOS Shortcuts is cited as an example of powerful but artificially limited tooling; app vendors often avoid exposing automation hooks because it threatens engagement metrics.
  • Sandboxing and data exfiltration (especially via cloud AI) create justified mutual distrust, which also blocks interoperability.

OS, shells, and interoperability models

  • Several comments riff on “objects and actions” as the real primitives, but note it’s hard to expose safely and generally.
  • Comparisons:
    • Bash-style text pipes (“exterior” design) vs richer-but-incompatible structured shells (PowerShell, Nushell).
    • COM and Java/JVM as earlier attempts at language‑level interop within one runtime “box.”
  • One commenter argues shells must remain “exterior glue” (text/bytes between processes) to scale across heterogeneous systems; typed, in‑VM designs create extra layers of glue and complexity.

Plan 9, Unix philosophy, and security

  • Multiple people say the post echoes Plan 9’s “everything is a file” and per‑process namespaces: the environment as a composable space, not a prison.
  • Debate over whether Plan 9 treated security as an afterthought or had a coherent story that evolved (Factotum, TLS services).
  • Some dismiss Plan 9 as a failed, over‑hyped Unix alternative; others push back, calling that an uninformed take.

Data formats, schemas, and models

  • Several frame the problem as primarily about data, not code: data is locked in proprietary models; there’s little standardization of representations.
  • Skepticism about a universal “model of everything” registry; suggestion that LLMs might dynamically translate schemas between programs.
  • Discussion of SOAP vs GraphQL: some see them as equivalent in power; others argue GraphQL is superior when decoupled from underlying DB schemas.
  • Apache Arrow/Parquet gets praise as a way to share columnar data without repeated (de)serialization, but concerns are raised about mutation performance and the distinction between “data” and a “data model.”
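
For the Arrow half of that praise, a minimal pyarrow sketch (file name and columns invented): the on‑disk Arrow IPC layout matches the in‑memory layout, so a memory‑mapped read needs no per‑row deserialization, and a process in another language can map the same file.

```python
import pyarrow as pa
import pyarrow.feather as feather

# Build a small columnar table and write it as an uncompressed Arrow IPC
# ("Feather v2") file, so the bytes on disk keep the in-memory layout.
table = pa.table({"user": ["a", "b", "c"], "score": [1.0, 2.5, 3.2]})
feather.write_feather(table, "scores.arrow", compression="uncompressed")

# Memory-mapping exposes the columns without row-by-row decoding. Parquet,
# by contrast, must be decoded on read but suits compressed long-term storage.
reopened = feather.read_table("scores.arrow", memory_map=True)
print(reopened["score"])
```

The mutation caveat stands: Arrow arrays are immutable, so in‑place updates effectively mean rebuilding columns.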

Style and capitalization flamewar

  • A large subthread fixates on the article’s unconventional capitalization (all‑lowercase or, via referrer‑based CSS, ALL CAPS for HN readers).
  • Some find it cognitively tiring, disrespectful, or “pretentious”; others see it as expressive, conversational, or as signaling non‑AI, non‑corporate voice.
  • Meta‑point: several note that style complaints drown out substantive discussion and violate HN’s guideline about griping over formats.

Other proposed perspectives/solutions

  • Emacs/Smalltalk/Pharo and personal OS experiments are cited as “more open” environments, but criticized as fragile, weakly typed, and impractical.
  • A DSL for web pipelines that passes JSON between dynamically loaded steps is offered as a composable, extensible alternative to monolithic apps (sketched after this list).
  • One commenter claims the boring but effective answer is simply: keep your own data in normal filesystem files; SaaS and mobile platforms mostly just hide that universal interface away again.
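
The pipeline idea is small enough to sketch. Here each step is a function from a JSON‑able dict to a JSON‑able dict; a real version would load steps dynamically (e.g., importlib.import_module) from independently distributed modules. All step names and contents are invented:

```python
import json

# Toy steps: each takes a JSON-able dict and returns one. In the DSL being
# described these would be dynamically loaded modules, not inline functions.
def fetch(doc):
    doc["body"] = f"<html>...{doc['url']}...</html>"  # stand-in for an HTTP call
    return doc

def extract(doc):
    doc["title"] = "page at " + doc["url"]            # stand-in for real parsing
    return doc

STEPS = {"fetch": fetch, "extract": extract}

def run_pipeline(doc, step_names):
    for name in step_names:
        doc = STEPS[name](doc)  # the only contract between steps is JSON
    return doc

print(json.dumps(run_pipeline({"url": "https://example.com"},
                              ["fetch", "extract"]), indent=2))
```

Because the interchange format is plain JSON, steps can be added, reordered, or swapped without touching the others, which is the composability claim being made.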

On doing hard things

Perception of “Hard Things” and the Title

  • Several readers felt the story is more about psychological courage, grit, and persistence than about conventionally “hard” achievements, so the title feels slightly mismatched.
  • Others argue that the core lesson generalizes: hard things take time, require daily small efforts, and progress is usually only obvious in hindsight.

Learning, Talent, and Looking Dumb in Public

  • A recurring takeaway: the real “hard thing” is being okay with repeatedly looking foolish in public.
  • People connect this to first attempts at running, going to the gym, or learning games/sports.
  • “Talent” is reframed as often being long, playful exposure since childhood rather than innate ability.

Immersive Practice and Environment

  • The “immersive calibration of self to environment” resonated: examples include spearfishing, long bike commutes, rowing, and plastering.
  • With time, body and perception adapt; tasks go from overwhelming to fluid, even beautiful, despite initial discomfort or failure.

Fitness, Health, and Consistency

  • Multiple stories echo the same pattern: modest, consistent exercise (running, walking, boxing, rowing, weightlifting) beats sporadic high-intensity efforts.
  • Heart-rate–based training is praised for making running sustainable and enjoyable.
  • Debate arises over whether very intense endurance training harms the heart or joints; evidence is cited on both sides, with no clear consensus in the thread.

Fear, Status, and Adult Learning

  • Many note people avoid new experiences because they fear looking dumb; this costs them rich experiences.
  • Learning with AI is valued because it allows low-status, judgment-free trial and error.
  • Some enjoy being beginners in areas where they’re not “the expert,” as a release from professional pressure.

Kayak Stability and Body Factors

  • Discussion clarifies that kayak stability varies widely: racing/sprint kayaks can be extremely tippy, while most recreational kayaks are very stable.
  • Center of gravity, kayak width/length, and stroke technique all affect perceived difficulty.

Value of “Pointless” Hard Things

  • Several commenters appreciate doing difficult but externally “useless” things (Rubik’s cube, juggling, basic piano, martial arts) purely for the joy and personal growth.
  • The closing sentiment many highlight: there’s “quiet dignity” in almost-success stories, not only in spectacular wins.

Lenovo Legion Go S: Windows 11 vs. SteamOS Performance, and General Availability

SteamOS vs Windows performance on Legion Go S

  • Commenters describe SteamOS/Proton results as dramatically better than Windows on the same hardware, with Cyberpunk 2077 called out for ~28% higher FPS and ~25% better battery life.
  • People want deeper breakdowns (CPU vs GPU vs OS overhead, resource graphs) to understand where the gains come from, especially given that the Ryzen Z2 Go APU is only modestly ahead of the Steam Deck’s APU on paper.

Linux graphics stack momentum

  • Mesa 25.2 improvements to AMD’s next-gen geometry pipeline and better culling are cited as ongoing gains.
  • AMD’s shift away from proprietary GL/Vulkan drivers toward fully open-source ones is seen as a long‑term win that should keep pushing Linux performance up.

Why Linux/Proton might beat Windows

  • Some argue the main differences are:
    • Vulkan/DXVK outperforming native DirectX, even on Windows.
    • Lower OS overhead and fewer background services, especially network‑calling telemetry, improving both FPS and battery life.
  • Others speculate about subtle feature mismatches (e.g., driver‑reported capabilities, missing shadows) but note that broad, cross‑title gains point to platform/stack effects, not single‑game quirks.

Windows, gaming, and Microsoft’s direction

  • Several users say they’ve largely abandoned Windows except for gaming, citing slow Explorer, confusing settings, ads, and poor Start menu UX.
  • There’s disagreement over whether new “AI” and UX features meaningfully impact performance, but consensus that Windows has many small background services that add up.
  • Some hope these benchmarks push Microsoft to fix low‑level performance; others hope complacency drives gamers to Linux or consoles.
  • Multiple comments suggest Microsoft now prioritizes cloud and Office/M365 over Windows itself, with less dogfooding and more internal macOS/Linux use.

macOS and Linux for development

  • Strongly mixed views: some find macOS a joy to develop on, others complain about API churn, poor docs, and Swift/Obj‑C complexity.
  • WSL2 is praised as vastly better than Docker for Mac for Linux‑targeted dev; others say OrbStack makes containers on macOS “almost native.”
  • One thread notes Windows+PowerShell can be pleasant if you don’t try to force Unix workflows; another counters that disk I/O and compile times remain weaker than native Linux.

General‑purpose vs gaming OS debate

  • One side argues comparing Windows to SteamOS is “apples to oranges”: Windows must run legacy business apps; SteamOS is purpose‑built for gaming.
  • Others respond that:
    • The devices are marketed as Windows gaming handhelds, so comparison is exactly what matters to buyers.
    • SteamOS is effectively a general‑purpose Linux distro with a KDE desktop mode and can run non‑gaming workloads (sometimes via Wine).
    • For a handheld use case like “play Outer Wilds on a plane,” general‑purpose legacy support is irrelevant.

OEMs, licensing, and dual‑boot skepticism

  • Some suspect a familiar pattern: vendors publicly flirt with Linux but ship and promote Windows SKUs due to OEM licensing incentives and fear of support calls (e.g., anti‑cheat games not working).
  • Historical examples with BeOS and Windows OEM contracts are cited to illustrate how dependent large PC makers can be on Windows‑related margins.

Win32 as Linux’s de facto stable ABI

  • Several comments note the irony that Linux, which deliberately avoided a frozen kernel ABI/HAL, now effectively has one in user space via Wine/Proton and Win32.
  • There’s debate:
    • One side thinks lack of a stable ABI is what has held back “Year of the Linux Desktop” and that distros should layer one on top.
    • Another defends Linux’s ability to “move fast” by not ossifying low‑level interfaces, pointing to long‑term stability offerings like RHEL/Ubuntu LTS instead.
  • Someone characterizes Wine itself as the missing stable ABI/HAL, joking about its “20‑years‑in‑the‑making overnight success.”

Adoption barriers and user experience

  • Anti‑cheat remains a major blocker for competitive/multiplayer gamers even as many single‑player titles now work well on Proton.
  • Some report early Steam Deck quirks (slow/no boot when offline, docking issues), though others can’t reproduce them and assume they may have been fixed.
  • A few users have already moved entirely to Linux/macOS for daily use, keeping Windows only when forced, with ads in Windows cited as a tipping point.

Frame generation on handhelds

  • One Legion Go owner sticks with Windows primarily for advanced driver‑level frame generation (AFMF 2.1), claiming it can double/triple apparent FPS and is ideal for handheld screens.
  • Others counter that SteamOS already supports FSR-based frame generation (and via GE‑Proton and mods even newer variants), and that Valve is unusually fast at shipping such improvements in the Linux world.
  • There’s disagreement over input lag: some say framegen adds too much latency for action games; others report recent implementations add ~10–25 ms, which they find acceptable on small handheld displays.
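
Those latency figures can be sanity‑checked with back‑of‑envelope math: interpolation‑based frame generation must hold back the newest real frame so the synthesized in‑between frame can be shown first. A rough model (the half‑to‑one‑frame buffering bound and the 2 ms interpolation cost are assumptions, not measurements):

```python
def added_latency_ms(native_fps, interp_cost_ms=2.0):
    """Crude bounds on latency added by interpolation-based frame generation:
    roughly half to one native frame interval of buffering, plus the cost
    of synthesizing the in-between frame."""
    frame_time = 1000.0 / native_fps
    return (0.5 * frame_time + interp_cost_ms, frame_time + interp_cost_ms)

for fps in (30, 60, 90):
    lo, hi = added_latency_ms(fps)
    print(f"{fps} fps native: ~{lo:.0f}-{hi:.0f} ms added")
# 30 fps native: ~19-35 ms; 60 fps: ~10-19 ms; 90 fps: ~8-13 ms
```

On that model, the thread’s ~10–25 ms reports would correspond to roughly 45–60 fps native rendering, which is plausible for handheld settings.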