Hacker News, Distilled

AI-powered summaries for selected HN discussions.

U.S. government takes 10% stake in Intel

Deal Structure & Mechanics

  • Government now holds ~10% of Intel common stock, bought at ~$20.47/share using:
    • ~$5.7B in CHIPS Act grants that had been awarded but not yet disbursed.
    • ~$3.2B from a “Secure Enclave” defense program.
  • This replaces earlier profit‑sharing / clawback terms attached to those grants.
  • Debate over whether the shares were newly issued or drawn from the authorized-but-unissued pool; either way, the economic effect is roughly 10% dilution for existing shareholders.
  • Intel confirms no board seat or formal governance rights; government gets standard shareholder voting plus warrants for an additional 5% if Intel ever loses majority control of its foundry business.
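
A quick back-of-envelope check of the figures in the list above (all inputs come from the summary itself; the implied share count is derived here, not sourced):

```typescript
// Consistency check of the figures quoted above; share counts are implied, not sourced.
const chipsGrants = 5.7e9;      // ~$5.7B in undisbursed CHIPS Act grants
const secureEnclave = 3.2e9;    // ~$3.2B from the "Secure Enclave" program
const pricePerShare = 20.47;    // ~$20.47 per share

const totalInvested = chipsGrants + secureEnclave;    // ≈ $8.9B
const sharesBought = totalInvested / pricePerShare;   // ≈ 435M shares

// If ~435M shares make up ~10% of the company, the implied total share count
// is ≈ 4.35B, which is consistent with the "~10% stake" framing in the thread.
console.log({ totalInvested, sharesBought, impliedTotalShares: sharesBought / 0.10 });
```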

Is This Investment, Bailout, or Shakedown?

  • Supporters frame it as:
    • Belatedly sensible: equity in exchange for public money instead of “free” grants.
    • A step toward a US-style sovereign wealth/development fund (Temasek, Mubadala analog).
    • Aligning taxpayer upside with industrial policy.
  • Critics see:
    • Retroactive change of terms on already‑awarded grants—described as blackmail or extortion (“give us 10% or wait years in court”).
    • A precedent that government can arbitrarily rewrite deals and seize equity.
    • A de facto partial nationalization done by a single administration, not Congress.

National Security vs. Market Distortion

  • Pro‑deal arguments:
    • Intel is one of the few US-based advanced logic manufacturers; x86 and onshore fabs are viewed as strategic (defense, supply resilience if Taiwan/TSMC disrupted).
    • Older nodes are still adequate for most military systems; having any domestic capacity matters more than absolute process leadership.
    • A government stake may stabilize Intel’s funding and justify further preferential policies.
  • Skeptical views:
    • This props up a poorly run incumbent instead of fostering new competitors or distributed, older‑node fabs.
    • Creates moral hazard and a conflict of interest: state now has reason to tilt regulation, contracts, and export policy toward Intel and against rivals (AMD, GlobalFoundries, TI, Micron).
    • Some argue a DARPA‑style multi‑vendor fab program or forcing Apple/Nvidia‑type firms to co‑invest would be healthier than a single‑firm bet.

Rule of Law & Political Hypocrisy

  • Concern that the executive is effectively overriding Congress’s intent for CHIPS (pure grants) and using appropriated funds as leverage for equity.
  • Debate over whether CHIPS language (“financial assistance”) legally permits this interpretation.
  • Commenters note contrast with past Republican rhetoric against “socialism” and 2008 nationalizations; many label this state capitalism or the “nationalization phase of fascism.”
  • Others, including some on the left, welcome the principle of taking equity where the state puts in large strategic subsidies, but still distrust the current administration’s motives and methods.

From $479 to $2,800 a month for ACA health insurance next year

Pandemic-Era ACA Subsidies Expiring

  • Many note the core issue is a COVID-era enhancement of ACA tax credits ending, not a raw 5x price jump across the board.
  • Commenters say the big increases mainly hit those earning above 4× the federal poverty level; lower-income subsidies stay intact.
  • Several call the NPR example an “outlier” and “rage bait,” pointing out that final 2025 premiums aren’t published yet and that geography (e.g., West Virginia ZIPs) massively skews prices.
  • Others push back that even a one-time or partial increase is catastrophic for many households, especially retirees or near-retirees.

Universal vs. US-Style Healthcare

  • One camp argues universal or single-payer care would cut US costs by eliminating insurance middlemen, advertising, denial-management staff, and complexity; they cite estimates of ~15% of US spend going to insurance-related administration.
  • Another camp replies that universal coverage alone doesn’t fix the underlying cost structure: cross-subsidies (e.g., private plans subsidizing Medicare), entrenched interests, and decades of half-reforms created a system that’s extremely hard to unwind.
  • Some stress that simply “spending more” is not the answer; the US already spends roughly twice peer countries’ share of GDP with worse outcomes.

Quality, Access, and Wait Times

  • Defenders of the US system say that with “good insurance” in a major metro, care can be “amazing” and fast, with top hospitals and drugs covered.
  • Others counter with long waits for specialists even on high-tier plans and note that this good experience applies to a minority; many are uninsured or underinsured.
  • Experiences from Canada and Quebec highlight serious wait times and backlogs; defenders of socialized systems reply that this is triage, not denial, and contrast it with Americans simply forgoing care or going into debt.
  • Several say the US now manages to combine the worst of all rationing methods: high prices, access problems, and quality gaps.

Cost Drivers in US Healthcare

  • Proposed villains include: insurance bureaucracy, private equity–owned hospital chains, drug pricing, and regulatory capture.
  • Some blame high physician salaries; others point to research showing administration and overhead dwarf clinician pay.
  • Commenters discuss how profit caps on insurers push profits into vertically integrated sister companies (pharmacies, clinics, PBMs).

Work, Freedom, and Health Insurance

  • Multiple stories describe “job lock”: people staying in unwanted jobs, abusive relationships, or abandoning entrepreneurship and education to keep coverage.
  • Some argue this is an economic and equality issue: tying healthcare to employment gives employers disproportionate power.
  • Others warn that any universal scheme risks huge tax burdens and must actively control prices, not just shift costs from premiums to taxes.

Politics and ACA Sabotage Narratives

  • Several believe Republicans are intentionally letting ACA supports expire (e.g., individual mandate removal, subsidy sunset) to trigger a “death spiral” while blaming Obama/Biden.
  • Others note the official justification is simply that pandemic “emergency” subsidies were temporary.
  • There’s debate over whether Democrats will effectively campaign on these hikes; some are pessimistic about their electoral strategy and broader media framing.

International Comparisons and Exit Talk

  • Commenters compare US care with systems in Australia, Canada, Europe, Mexico, and Germany/France, debating tax rates, wait times, and private top-up insurance.
  • Views range from “US with good insurance is the best in the world” to “US is by far the worst system I’ve experienced.”
  • A minority seriously discuss emigration (e.g., Mexico, Europe) as a rational response to US healthcare costs.

Our Response to Mississippi's Age Assurance Law

Mississippi law vs UK rules

  • Commenters highlight Bluesky’s own distinction: UK Online Safety Act targets specific features/content, with optional age checks and no tracking of minors’ status.
  • Mississippi’s law is described as requiring site-wide age verification, persistent tracking of who is a child, and collection of “sensitive” IDs, seen as far more intrusive.

Bluesky’s IP block and community reaction

  • Many support blocking Mississippi as the “only correct response” to an overreaching law and a way to pressure courts and lawmakers.
  • Others argue this isn’t “fighting” but surrendering part of the market to stay safe legally.
  • Some suggest fully exiting US jurisdictions or leaning more on the underlying protocol so alternative apps can circumvent state rules.

Child safety, porn, and social media harms

  • Strong split between “parental responsibility and filters are enough” vs “that’s technically hard and many parents can’t or won’t do it.”
  • Some see the real problem as addictive algorithmic feeds and social media’s correlation with teen mental health issues, more than porn itself.
  • Others note obvious workarounds (VPNs, shared devices, open Wi‑Fi) will limit effectiveness, making these laws mostly “security theater.”

Motives and competence of lawmakers

  • Widespread belief that legislators don’t understand internet technology or second‑order effects.
  • Some speculate that think tanks and culture‑war agendas (anti‑LGBTQ, anti‑porn, broader surveillance) drive these laws under “protecting children.”
  • Hypocrisy is noted: same factions talk about parental autonomy but seek broad content control.

Impacts on speech, small platforms, and power

  • Many see age‑verification mandates as de facto speech regulation that burdens small and emerging platforms most, while big players and politically allied sites may be shielded.
  • There’s concern this delegitimizes the state and legal system when enforcement is visibly selective.

Age verification schemes and infrastructure

  • Ideas raised: centralized or app-based age-verification, government-backed digital IDs with zero-knowledge proofs, browser/HTTP headers marking “adult” content.
  • Skeptics warn these create markets for “age-verified” accounts, push toward device-identity binding, and are a dream for surveillance and ad targeting.

Circumvention, enforcement, and broader trend

  • VPNs, Tor, and private tunnels are seen as the only practical workarounds; some ask for low-friction, privacy-focused options.
  • Questions arise about how states can fine out-of-state or foreign platforms; answer: interstate commerce and pressure on local ISPs.
  • Other states (Wyoming, South Dakota) are cited as even more extreme, and there’s a sense this era is pushing toward wider internet fragmentation and control under the banner of child safety.

Why was Apache Kafka created?

Perception of the article & naming

  • Some initially suspected the post was AI-generated or “blogspam” due to Substack’s look and heavy bullet/bold formatting; author clarified only the thumbnail was AI, content was hand-written.
  • Several jokes and side-comments about the Kafka name (writing-optimized system, “Kafkaesque” configuration, dark literary references).

When Kafka fits vs when it doesn’t

  • Strong theme: Kafka is often misused as “just a queue” or generic pub/sub because of hype, resume-building, or sales pressure; this can cost a lot in money and complexity.
  • Multiple commenters: if you only need a simple queue, better options include RabbitMQ, Redis, SQS, MQTT, or even named pipes/ZMQ; Kafka becomes needless overhead for small/internal apps.
  • Others argue Kafka is appropriate once you have many producers, large event volumes, and multiple independent consumers that need durable, ordered, replayable streams.

Kafka’s core value: distributed log & replay

  • Defenders stress that Kafka is not a traditional queue but a distributed log:
    • Very high write throughput.
    • Retention and replay for minutes to days.
    • Decoupled producers/consumers; new consumers can re-read from any offset.
    • At-least-once delivery with ordering within partitions.
  • Replayability is cited as a key original motivation at LinkedIn: enabling new consumers on historical data without rebuilding pipelines.
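
As a concrete illustration of the replay point above, a minimal sketch using the kafkajs client; the broker address, topic name, and group ID are placeholders:

```typescript
import { Kafka } from "kafkajs";

// Broker address, topic, and consumer group ID are placeholders.
const kafka = new Kafka({ clientId: "replay-demo", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "new-analytics-consumer" });

async function replayHistory(): Promise<void> {
  await consumer.connect();
  // fromBeginning: a brand-new consumer group starts at the earliest retained
  // offset instead of the latest one, i.e. it replays the topic's history.
  await consumer.subscribe({ topic: "page-views", fromBeginning: true });

  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      // Ordering is guaranteed only within a partition; the offset is the
      // position in that partition's log, so the same read can be repeated.
      console.log(`${topic}[${partition}]@${message.offset}:`, message.value?.toString());
    },
  });
}

replayHistory().catch(console.error);
```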

Alternatives and comparisons

  • MQTT: good for lightweight pub/sub and IoT, but weak on persistence and backpressure; often paired with Kafka/RabbitMQ rather than replacing them.
  • NATS / Jetstream:
    • Praised for simplicity and flexibility, but nuanced analysis notes tradeoffs: per-stream/consumer overhead, scaling limits, 1MB message cap, more complex topologies and semantics.
  • Redis Streams and cloud services (Kinesis, Pub/Sub) suggested as simpler, cheaper Kafka alternatives at non–Fortune-100 scale.
  • Redpanda highlighted as Kafka-compatible with less operational/JVM overhead.

Operational & complexity concerns

  • Recurrent complaints: Kafka is “Kafkaesque” to run; HA clusters are hard, especially on unreliable cloud infra. Managed services (AWS MSK) help but migrations (e.g., to KRaft) are non-trivial.
  • Stream-processing systems and replay tooling (offset management, separate replay consumers, DLQs) are described as much harder than batch/S3+SQL approaches, especially for small teams.

LinkedIn scale and evolution

  • Discussion of LinkedIn’s reported Kafka scale (tens of trillions of records/day, ~17 PB/day) and how that drives very different design choices.
  • LinkedIn is now moving from Kafka to an internal system (Northguard) to address metadata scalability and partitioning limitations; seen as a “clean-sheet” redesign for extreme scale.

Java / JVM side-thread

  • Long subthread debates whether Kafka’s Java/JVM basis makes it “bloated” or just widely adopted and well-tooled; disagreement centers on memory use, GC tradeoffs, and real-world performance vs theory.

Show HN: JavaScript-free (X)HTML Includes

Browser & standards status of XSLT

  • Chrome is adding a flag to disable XSLT to measure breakage; some commenters initially misread this as immediate deprecation.
  • Google, Mozilla, and Apple all support removing XSLT from the HTML standard; only XSLT 1.0 is referenced there.
  • The XSLT spec itself is at 3.0, but browsers only ever implemented 1.0, and even non‑browser tools rarely go beyond 1.0.
  • Some see this as normal standards evolution (no implementers → remove), others as large vendors unilaterally killing features.

Security, maintenance & rationale

  • Pro‑removal arguments: old code (libxslt/libxml2) full of security issues, no embargoes, and an attack surface overlapping with XML parser bugs (entities, “logic bombs”, XXE).
  • Counterpoint: many of these problems are solvable with better defaults and limits; XML’s bad reputation is partly historical.

Developer reactions & real‑world usage

  • Some developers rely on client‑side XSLT for personal sites and niche domains (e.g., technical publishing, TEI, documentation pipelines) and feel abandoned.
  • Others argue that many vocal defenders had never used XSLT before, and that widely cited examples of “real usage” are overblown or already have server‑rendered HTML equivalents.

Client vs server transforms & performance

  • Concern: client‑side XSLT can cause request waterfalls (XML → includes → XSLT → assets), echoing early SPA “spinner” patterns.
  • Mitigations mentioned: caching static includes, embedding styles/templates in the XSLT, or running the transform in CI/CD or server‑side (PHP, Java, CMSs).
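
For readers unfamiliar with the client-side path being debated, a minimal sketch of driving the same in-browser (XSLT 1.0) engine explicitly from script via XSLTProcessor, rather than relying on an xml-stylesheet processing instruction; the URLs are placeholders, and the two separate fetches illustrate the waterfall concern above:

```typescript
// Fetch the XML data and the XSLT stylesheet, transform in the browser, and
// inject the result. This is the same engine the declarative path uses.
async function renderXmlWithXslt(xmlUrl: string, xslUrl: string): Promise<void> {
  const [xmlText, xslText] = await Promise.all([
    fetch(xmlUrl).then(r => r.text()),
    fetch(xslUrl).then(r => r.text()),   // second request: part of the waterfall
  ]);

  const parser = new DOMParser();
  const xmlDoc = parser.parseFromString(xmlText, "application/xml");
  const xslDoc = parser.parseFromString(xslText, "application/xml");

  const processor = new XSLTProcessor();
  processor.importStylesheet(xslDoc);

  const fragment = processor.transformToFragment(xmlDoc, document);
  document.body.replaceChildren(fragment);
}

renderXmlWithXslt("/data/article.xml", "/styles/article.xsl");
```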

Alternatives & related technologies

  • CSS-only styling of XML can handle simple display but not templating, includes, or logic.
  • HTML Imports were abandoned as redundant and poorly integrated with JS modules; HTML Modules are suggested as a future direction.
  • Some wish browsers had safe external entities or a simple declarative “HTML includes” mechanism.
  • Others propose JSON + template engines (a hypothetical “JSLT”) as a modern parallel.

Ergonomics, complexity & tooling

  • XSLT is praised for declarative power (XPath, templates) and rich XML workflows, but criticized as verbose, unintuitive, and inconsistent across processors.
  • Several note that general-purpose languages (Ruby, PHP, JS) are often easier for transforms.
  • A few report mixed results using LLMs to write XSLT: good for syntax scaffolding, poor at nontrivial logic.

The first Media over QUIC CDN: Cloudflare

MoQ, WARP, and Streaming Formats

  • WARP is a media format candidate for MoQ, built around CMAF/fMP4 segments, so it can share caches and remain backward-compatible with HLS/DASH during rollout.
  • MoQT deliberately separates transport (fan-out, priorities, groups/tracks) from media details, allowing different streaming formats and use cases to coexist on the same relay/CDN infrastructure.

MoQ vs WebRTC and Transport Behavior

  • QUIC/WebTransport can drop media on streams or datagrams, giving WebRTC-like congestion behavior.
  • Current issue: common QUIC congestion controllers favor throughput over latency, unlike WebRTC’s GCC; server-side tuning can help but browser changes are also needed.
  • Latency is largely under receiver control: players must build their own jitter buffers and choose when to render frames.
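
A rough sketch of the two delivery options discussed above, using the browser WebTransport API (Chromium-style; typings may need a recent DOM lib); the URL is a placeholder and real MoQT framing (groups, tracks, priorities) is omitted:

```typescript
async function connect(url: string): Promise<void> {
  const transport = new WebTransport(url);
  await transport.ready;

  // Reliable, ordered bytes: suitable for data that must arrive
  // (e.g. init segments or catalog/track metadata).
  const stream = await transport.createUnidirectionalStream();
  const streamWriter = stream.getWriter();
  await streamWriter.write(new TextEncoder().encode("init-segment"));
  await streamWriter.close();

  // Unreliable datagrams: may be dropped under congestion, which is what
  // gives the WebRTC-like latency behavior mentioned above.
  const datagramWriter = transport.datagrams.writable.getWriter();
  await datagramWriter.write(new TextEncoder().encode("low-priority frame"));

  // Receiving side: the player decides how long to buffer (its own jitter
  // buffer) before rendering, so latency is largely under receiver control.
  const reader = transport.datagrams.readable.getReader();
  const { value } = await reader.read();
  console.log("received datagram bytes:", value?.byteLength);
}

connect("https://relay.example:4443/moq");
```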

QUIC, NAT, and Infrastructure

  • For client–server, QUIC has no special NAT traversal problem: 1 RTT to connect; main issue is environments that block UDP.
  • P2P over QUIC in browsers is considered impractical today; WebRTC+TURN/ICE remains the path there.

CDN / Relay Architecture and Load Balancing

  • Public MoQ relay uses GeoDNS today; anycast plus QUIC preferred-address is seen as the better long-term option but cloud anycast+UDP is limited.
  • Others warn that anycast can be fragile at low QPS, while DNS steering has caching trade-offs but more control.

Performance and Goodput

  • Some discuss research showing lower goodput/bitrate with HTTP/3 vs HTTP/2; consensus is that QUIC implementations aren’t yet as tuned as TCP, especially for very high bitrates and low loss.
  • Hardware offload and better GSO/GRO-style optimizations are seen as key future improvements.

Browser Support and Bugs

  • WebTransport works in Chrome and Firefox desktop; Safari has partial support behind a flag and is still buggy.
  • Firefox has known WebTransport and HTTP/3 quirks; people share bug links and debugging tips (DoH, HTTPS DNS records, Alt-Svc behavior).

Demos, UX, and Rendering Issues

  • Many report the MoQ demos as “instant” and extremely low-latency, praising “click-to-streaming time.”
  • Several mobile users see fast-moving horizontal black lines when using canvas-based rendering; others see aspect-ratio issues. Suspected causes include browser rendering bugs and canvas vs <video> behavior, with some discussion of power usage and autoplay blocking.

Other Use Cases and Features

  • MoQ could support low-latency multiplayer/game networking by mapping game data onto MoQT objects/tracks and using relays for fan-out.
  • Some ask about multicast; responses argue CDNs already approximate multicast at L7, and true IP multicast likely only helps the very largest events or P2P.
  • Multiple publisher bitrates/simulcast and receiver-driven track switching are already possible; subgroup-based and sender-side adaptation need more experimentation.

Ecosystem, Tooling, and Meta

  • Third-party implementations (e.g., MediaMTX integration with WebTransport + native QUIC) show early ecosystem growth.
  • Questions arise about adoption in OBS/YouTube and relationship to existing APIs like WebRTC, WebCodecs, and MSE.
  • One thread branch debates HN’s “original source” rule regarding this blog vs the Cloudflare announcement; moderators ultimately restore the blog post, with recognition that it adds distinct technical content and tooling.

Should the web platform adopt XSLT 3.0?

XML/XHTML/XSLT: Promise vs Reality

  • Several comments recall an “alternate universe” where XML and XHTML won, yielding strict, beautiful, long-lived tooling.
  • Many others say that universe briefly existed: XML “took hold” in the 2000s, then developers abandoned it after painful real-world experience.
  • XHTML’s strictness (fatal errors on trivial mistakes) is cited as a major reason it lost to forgiving HTML, though some still view that strictness as a feature.

Why XML and XSLT Lost Ground

  • Core complaints: verbose markup, awkward data model vs typical programming objects, confusing decisions about attributes vs elements, and hated namespaces.
  • XML specs often became “design-by-committee tire fires” (SOAP, SAML, XML-DSIG, multiple schema systems), which soured people on XML generally.
  • XSLT is widely described as powerful but hard to read, debug, and reason about; many tried it for HTML rendering and later regretted the complexity.
  • JSON gained favor because it matches language data structures, is simpler to type and experiment with, and works natively with JavaScript.
  • Some argue XML was unfairly blamed for bad schema and API design in an immature era; others insist devs just didn’t like using it and moved on.

Disagreements on Causes and Merits

  • One camp blames Google and the search/advertising ecosystem for preferring a sloppy, JS-centric web and starving XHTML/semantic web efforts.
  • Others strongly reject that, saying XML “died because it sucks” and the semantic web was flawed conceptually.
  • There’s debate over streaming: some say XML/JSON both fit poorly; others note proper streaming parsers (SAX, XmlReader) work fine if used correctly.

Current and Potential Uses of XSLT

  • Real-world uses mentioned: publishing pipelines (e.g., JATS), RSS display, document-heavy sites, and static site generation.
  • Supporters emphasize XSLT’s role in separating content from presentation, its concision via XPath, and its longevity vs churn in JS frameworks.
  • Skeptics see browser-side XSLT as niche and mostly obsolete compared with server-side templating, HTML <template>, and JS/WASM.

XSLT 3.0 on the Web Platform

  • Pro-XSLT-3.0 arguments: modern features (JSON, maps/arrays, streaming, better iteration) could make in-browser XML toolchains more useful and future-proof.
  • Anti arguments: browsers shouldn’t carry complex, rarely used engines; XSLT can live in server-side or WASM-based tools instead.
  • Some suggest if XSLT remains, upgrading to 3.0 and deprecating 1.0 is preferable to outright removal.

The US Department of Agriculture Bans Support for Renewables

What the USDA Policy Change Actually Does

  • USDA will no longer use its programs to fund solar projects on “prime American farmland” or projects using panels from “foreign adversaries.”
  • Support is shifted toward biofuels, biomass, natural-gas-derived hydrogen, and “American energy” on national forest land.
  • Several commenters stress this is not a legal ban on renewables, but a ban on support / subsidies via USDA, which may still be critical because farmers mainly interact with USDA, not DOE.

Perceived Climate and Energy Consequences

  • Many see the move as deeply shortsighted given drought, crop stress, and the need to decarbonize electricity.
  • Commenters argue renewables (especially solar + batteries and offshore wind) are already the cheapest new generation and will “win anyway,” but policy can significantly delay deployment.
  • Others note that energy used in production and industry is large and must also be decarbonized; full electrification is argued to reduce total energy demand.

Renewables vs Nuclear and Grid Reliability

  • A long sub-thread debates whether high-renewables grids are viable without massive storage.
  • One side: renewables are now the cheapest energy in history; batteries are improving; rare “Dunkelflaute” events can be handled by overbuild, flexible demand, or limited fossil/bio/synfuel backup. Nuclear is portrayed as too expensive and slow.
  • The other side: examples like Germany and Spain are cited as warning signs—needing huge multi-day storage (estimates run to tens of TWh) and new gas plants; proponents argue that if past subsidies had gone to nuclear, countries could already have fully decarbonized grids at lower prices.

Motivations and Politics

  • Many commenters see the policy as driven by fossil-fuel and corn/ethanol interests, culture-war hostility to “woke” solar/wind, and a broader “burn it all down” nihilism.
  • Others frame it as “America first” and food-security policy: prioritizing cropland for food, curbing farmland price inflation from subsidized solar, and blocking Chinese panels.
  • There is sharp disagreement over whether this genuinely protects national interests or merely entrenches legacy industries and raises emissions.

Impacts on Farmers, Land Use, and Biofuels

  • Several point out that farmers often want solar and wind for extra income and reduced costs; hybrid agrivoltaic setups (shade + crops/grazing) may be especially harmed.
  • Land-use arguments against solar are widely criticized: wind has tiny ground footprint; solar can coexist with ag; by contrast, 30–40% of U.S. corn/soy is tied to biofuels according to one summary, though another commenter notes USDA data show biofuels use much less than a majority of total cropland.
  • Overall sentiment: the policy favors biofuels and fossil-adjacent systems over much more land-efficient PV and wind, with unclear long-term benefits for food security or farmers.

Leaving Gmail for Mailbox.org

Gmail vs mailbox.org & PGP

  • mailbox.org integrates PGP in the web UI (including mobile browsers), unlike Gmail, which requires external tools and doesn’t manage keys natively.
  • Some warn that provider-managed PGP weakens the threat model: you’re protected from others but not from the provider itself.
  • Others note that switching from Gmail for privacy is still a win even without full end‑to‑end encryption, simply by leaving Google’s data ecosystem.

Owning your domain & portability

  • Very strong consensus: use your own domain so you can move providers without changing your public address.
  • Benefits: painless migration (just update MX records), diversification away from any single provider, and easy aliasing per service.
  • Risks discussed: forgetting to renew a domain (especially “premium” ones), high annual costs, and catastrophic consequences if someone else takes it and starts receiving your email.
  • Mitigations: multi‑year renewals, stacked payment methods, calendar reminders, choosing common TLDs (.com/.net/.org) to avoid validation and human‑error issues.
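
As a small companion to the "just update MX records" point above, a sketch that checks a domain's MX records from Node after switching providers; the domain and the expected hostname suffix are examples only (use whatever your new provider documents):

```typescript
import { resolveMx } from "node:dns/promises";

// Sanity-check that a domain's MX records point at the new provider.
async function checkMx(domain: string, expectedSuffix: string): Promise<void> {
  const records = await resolveMx(domain);          // [{ exchange, priority }, ...]
  records.sort((a, b) => a.priority - b.priority);  // lowest priority value wins

  for (const { exchange, priority } of records) {
    const ok = exchange.endsWith(expectedSuffix);
    console.log(`${priority}\t${exchange}\t${ok ? "OK" : "unexpected"}`);
  }
}

checkMx("example.com", ".mailbox.org").catch(console.error);
```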

Provider reliability, spam & self‑hosting

  • Several report excellent reliability from large independents (Fastmail, Proton, Migadu, Zoho, Purelymail, etc.), but note that Gmail remains a high bar for deliverability and uptime.
  • Self‑hosting is seen as attractive but risky: deliverability to big providers, ongoing security, hardware/ISP dependence, and reputation building. Some suggest hybrid models (self‑host inbound, third‑party for outbound).
  • Many emphasize that backing up email (IMAP, local Dovecot, cloud storage) is as important as choosing a provider.

Experiences with mailbox.org

  • Positives: integrated PGP, IMAP support, EU hosting/GDPR, ability to bring your own client and domain, decent price.
  • Negatives/concerns raised:
    • Anti‑spoofing and outbound spam‑scanning behavior (including reports of silently dropped outbound mail).
    • Some spam filtering and 2FA rollout issues; business plans lacking proper 2FA in at least one account’s experience.
    • UI considered mediocre; some sites reportedly reject @mailbox.org addresses.
    • Address recycling after account lapse (90 days on some plans) is seen as a security risk.

Other providers & trade‑offs

  • Fastmail receives repeated praise: excellent support, strong UI, good aliasing/catch‑all handling, reliability with large inboxes. Concerns: US hosting, a relatively expensive base plan, and its address-reuse policy.
  • Proton and Tutanota: appreciated for e2e encryption within their ecosystems, but IMAP limitations and reliance on their apps/bridge are seen as significant trade‑offs.
  • Migadu, Zoho, NameCrane/CraneMail, Purelymail, disroot.org, mxroute and others are mentioned for specific niches (price, family domains, generous aliasing), usually with some quota or feature caveats.

De‑Googling & broader ecosystem

  • Many describe broader “de‑Googling” journeys: moving email, calendars, storage, and photos (e.g., Immich, Nextcloud, Ente), and experimenting with alternative Android ROMs (GrapheneOS, LineageOS, Calyx).
  • Others are satisfied stopping at a non‑Google mail provider, arguing that further steps (self‑hosting, custom ROMs) are unnecessary or too fragile for their threat model.

Limits of email privacy & PGP

  • Multiple comments stress that email is fundamentally not private: no ubiquitous e2e, metadata leakage, and most correspondents still sit on Google/Microsoft.
  • PGP is seen as powerful but niche: few people use it, discovery and onboarding are awkward, and usability vs security remains a core tension.
  • Some propose using email only for “boring” communication and moving truly sensitive conversations to modern encrypted messengers instead.

XSLT removal will break multiple government and regulatory sites

Perceived impact and real-world usage

  • One side argues XSLT-in-browser is barely used; cited examples all have prominent HTML equivalents, so removal won’t meaningfully harm users.
  • Others counter that certain government/regulatory and personal sites rely on XML+XSLT specifically to make machine-targeted data human-readable at the same URL (e.g., laws, feeds, IoT/admin UIs).
  • Some worry especially about “silent” breakage where XML pages suddenly become “robot barf” again.

Metrics, telemetry, and “how much is enough?”

  • Pro-removal commenters lean on Chrome usage stats showing very low XML/XSLT page loads compared with text or PDF.
  • Critics say these metrics are biased (enterprise users, technical users, telemetry-off users) and too crude (page loads ≠ importance). They propose survey-like methods and weighting by user groups, but are challenged for lacking counter-data.

Security, maintenance cost, and standards simplification

  • XSLT engines are old, C-based, and have accumulated serious, long-lived vulnerabilities; keeping them is seen as ongoing security and maintenance “tax” for a tiny audience.
  • Others note the same vendors are happy to add complex, risky APIs (WebUSB, WebSerial, etc.), so “security/complexity” feels selectively applied.
  • Some argue browsers should have modernized to newer XSLT versions instead of letting implementations rot, then declaring them too painful to maintain.

Alternatives: server-side, polyfills, and extensions

  • Suggested workarounds: server-side transforms, JS/WASM polyfills, moving functionality to extensions, or just serving HTML instead of XML.
  • Pushback: automatic application of xml-stylesheet on bare XML URLs can’t be replicated purely with a client-side polyfill unless browsers add new hooks.
  • A WASM-based polyfill and a PR to allow script tags in XML are mentioned as partial mitigations.

XSLT’s unique value: format reuse and declarative web

  • Supporters highlight XML+XSLT as a clean separation of data and presentation, with one canonical XML source serving both machines and humans.
  • Use cases cited: RSS/Atom feeds with explanations, admin tools, low-power/IoT devices offloading rendering to the browser.
  • There’s broader frustration that browsers lack good, standardized, low-JS ways to build interactive, data-driven pages (XForms/XPath-like ideas).

Governance, process, and trust

  • Some see this as routine: deprecations can take a decade; removal from spec != immediate engine removal; precedents include applet and mutation events.
  • Others see inconsistency with the “never break the web” ethos and suspect commercial or anti-competitive motives from dominant browser vendors.

Google did not unilaterally decide to kill XSLT

Scope of the Decision and Standards Process

  • Multiple vendors (Chrome, Firefox, WebKit) are described as wanting to remove XSLT; it is not a solely Google-initiated move, though a Google-linked issue triggered the public reaction.
  • Participants note that WHATWG issues are primarily coordination tools for spec editors, not community referendums, which helped fuel misunderstandings.
  • Some stress that standards cannot force implementations: “the web is what engines ship and sites use.” If all engines drop a feature, the spec becomes moot.
  • Others argue that decisions by a small set of vendors are effectively unilateral for “tech that affects billions,” and that there should be a clearer, formal public process and venue for feedback.

Resources, Power, and Responsibility

  • One side: browser teams have limited budgets; maintaining a niche, complex, security-sensitive feature is a poor use of resources. Companies are entitled to allocate their money as they like.
  • Other side: “we don’t have resources” is viewed as a political choice, especially for a company with vast overall wealth. With monopoly-like power comes a moral duty to preserve web stability.
  • There’s a split between “Play Nice” (persuade vendors respectfully) vs “Fight the Power” (public shaming and resistance are ethically justified when powerful actors shape the web).

Backwards Compatibility vs Cleanup

  • Some see removing underused features as essential hygiene, citing security risk and maintenance cost of XML/XSLT, with prior precedents (FTP, unload, mutation events, Flash, cookies).
  • Others counter that XSLT is currently implemented and used; removing a baseline browser feature that can entirely break pages is qualitatively different and should have an extremely high bar.
  • Several note that timeline matters: people running sites, factories, and legacy enterprise systems need years of warning and viable migration paths.

Technical Merits and Alternatives

  • Supporters of XSLT value client-side XML→HTML templating, separation of data/presentation, and zero-build, low-JS workflows; some still use it for hobby sites, reporting, or glue systems.
  • Critics call XSLT and XML “impossible to implement correctly,” attack-surface heavy, and rarely seen on today’s web outside niche/legacy cases.
  • Proposed mitigations include WebAssembly polyfills (criticized as large and impractical unless bundled), proxies, or replacing XSLT with CSS/JS and new HTML features (e.g., includes/partials).
  • There’s broader debate about whether new declarative features (HTML includes, partial updates) should become native, versus “just use a few lines of JS.”

Texas Instruments’ new plants where Apple will make iPhone chips

Government subsidies, equity, and state ownership

  • Strong disagreement over whether CHIPS Act money should come with ownership stakes.
  • One camp: grants are to “buy” a domestic chip industry; equity or bailouts distort markets and risk conflicts of interest, corruption, and regulatory capture.
  • Another camp: giving “stupidly large” sums to profitable firms without equity is a taxpayer giveaway; some support government taking shares or stronger conditions.
  • Ohio’s constitutional ban on state equity/credit in private firms is cited as a model to avoid financial risk and corruption, though workarounds via “development corporations” exist.

Industrial policy, national security, and self‑sufficiency

  • One side argues chips are strategically like agriculture: you must be able to meet minimal domestic needs in a crisis, even if production is uncompetitive without subsidies.
  • Critics say this logic can justify protectionism in “everything,” reduces aggregate output, and often creates crony, inefficient industries; comparative advantage and global supply chains matter.
  • Debate over how risky it really is to rely on allies like Canada/Mexico, with some claiming war with such allies is essentially implausible within the next 100 years and others calling that hubristic.
  • Russia’s ability to produce basic missiles under sanctions is used to argue that even legacy chips suffice for many military needs and emergency ramp‑up is feasible.

TI’s role, process nodes, and what “matters”

  • Several comments clarify that TI’s core business is analog and mixed‑signal parts; calculators and MSP430s are a tiny share of revenue, though probably high margin.
  • Skepticism that TI will or can compete with TSMC/Samsung at leading nodes; these Sherman fabs target 45–130 nm, which some see as underwhelming but others note are the “boring” chips in everything from cars to appliances.

Sites, water, and environment

  • Concern about siting fabs in water‑stressed Texas/Arizona; Sherman’s draw on Lake Texoma is highlighted, with claims the fab nearly doubles city water use.
  • Counterpoints: Texas is large, eastern reservoirs like Texoma are often full, and fab water is heavily recycled, though not perfectly and with potential wastewater issues.

Politics of CHIPS/IRA and regional siting

  • Discussion on why investment clusters in red states: claims of faster permitting, weaker regulation, cheaper land and labor, and business‑friendly politics versus claims that it’s primarily political allocation of federal funds.
  • Some note backlash in parts of the Midwest to foreign (especially Chinese/Korean) investors despite promised jobs, seeing this as driven by xenophobic or conspiratorial politics.

Free market vs managed economy and labels

  • Many see CHIPS and forced deals (e.g., mooted Intel–TSMC tie‑ups) as “Soviet‑style” or fascistic state–corporate collusion; others counter that the U.S. has long been a mixed, heavily regulated economy, not pure capitalism.
  • Disputes over the correctness of calling this “socialism” vs “authoritarian capitalism” vs “fascism,” with some emphasizing suppression of labor and corporate favoritism over ideology.

Terminology, calculators, and transparency

  • Multiple nitpicks about “fabric” vs “factory/fab” in the title.
  • Side thread on whether graphing calculators are still mandatory in U.S. schools and the evolution of TI calculators.
  • Some frustration that major industrial policy decisions, tariffs, and corporate aid appear to be made with little transparent debate or congressional constraint.

Waymo granted permit to begin testing in New York City

NYC as a Testbed: Hard or Overhyped?

  • Many see downtown Manhattan (irregular grid, canyons, multi-level roads) as a uniquely hard environment; others argue LA/SF are comparably complex, just different.
  • Distinctive NYC traits repeatedly cited: dense and assertive drivers, constant rule-bending (blocking the box, “NY lefts”), heavy jaywalking, sidewalk overflow, chaotic bikes/scooters.
  • Some think claims of NYC exceptionalism are exaggerated; others with experience in both coasts insist NYC/NJ driver psychology is qualitatively harsher and more adversarial.

Weather and Winter Driving

  • Snow and ice are widely viewed as the real new challenge vs. prior Waymo cities.
  • Debate over whether AVs can truly learn localized winter behavior (migrating lanes, drifts, black ice, whiteouts) versus just deciding not to operate in marginal conditions.
  • Thread notes prior Waymo testing in Buffalo/Michigan but questions commercial viability of full-winter robustness.

Pedestrians, Game Theory, and “Assertiveness”

  • Concern that if robots are strictly lawful, NYC pedestrians will endlessly step in front of them; cars need to “signal willingness” to move or they’ll be stuck.
  • Reports from SF/LA that Waymos are already becoming more humanlike and assertive (timing yellows, inching at crosswalks, taking gaps at 4-way stops) but sometimes freeze or create awkward standoffs.
  • Some fear humans will “bully” AVs—cutting them off, refusing to let them merge—unless they become more aggressive.

Safety, Enforcement, and Law

  • Many riders say Waymos feel consistently safer and more predictable than average human drivers, though occasionally weird (odd routes, strange stopping points, rare dangerous edge cases).
  • Broader concern: correlated software failures vs. uncorrelated human errors.
  • Long subthread on decayed traffic enforcement, red-light cameras, and whether automated evidence from AV sensors should be used for mass ticketing; opinions split between safety gains and dystopian surveillance.
  • One line of criticism: AV-caused crashes may be treated differently in law, effectively putting owners in a special category.

Transit, Taxis, and Economics

  • Mixed views on urban impact: some see Waymo as “modern transit” that complements or substitutes for weak bus/rail; others insist real solution is walkability, bikes, and transit, not more cars.
  • Concerns in NYC about medallions, congestion pricing, and whether Waymo will worsen traffic or must buy into capped fleets.
  • Multiple riders strongly prefer Waymo over Uber (no tipping drama, no bad drivers, consistent experience), but some feel guilty about displacing human drivers.

Long-Term Urban Effects

  • Hopes: calmer traffic, fewer human-caused deaths, eventual restriction of human driving to niche/recreational contexts, rural coverage where taxis are scarce.
  • Fears: vandalism, political backlash if early deployments snarl traffic, corporate resistance to street pedestrianization/“low-traffic neighborhoods,” and further entrenchment of car-dependence in a city that could be bike- and transit-first.

Trees on city streets cope with drought by drinking from leaky pipes

Known phenomenon: tree roots and pipes

  • Many commenters note that trees invading water and sewer pipes is very old news; the novelty is the study’s quantified comparison: street trees show less water stress than park trees, traced to city water leakage.
  • Multiple anecdotes of roots clogging residential sewage and sprinkler lines (cast iron, clay, and PVC), sometimes causing very expensive repairs and even flooding.

Lead isotopes and tracing water sources

  • Clarification that the study’s finding relies on different lead isotope “signatures”:
    • Lead in street trees matches local lead water pipes (old, geologically distinct ore).
    • Lead in park soils/trees matches atmospheric lead from gasoline additives, whose ore sources were more centralized and globally traded.
  • Several comments elaborate how ore source, uranium/thorium content, and industrial supply chains lead to different isotope ratios that can be used as tracers.

Health and contamination concerns

  • One camp argues leaky pressurized water mains mostly leak outward, so intrusion of contaminants is rare except during depressurization events (power outages, shutoffs), which is why boil-water advisories are sometimes issued.
  • Others counter that unannounced shutoffs are common, global infrastructure is often intermittently pressurized, and contamination via rivers downstream of sewage discharges can be serious even without pipe intrusion.

Scale of leakage and whether to fix it

  • Montreal’s reported 500 million litres/day loss prompts shock and comparisons:
    • Self-reported losses elsewhere: ~20–30% typical, up to ~40–50% in some cities.
  • Debate on fixing leaks:
    • Pro-fix: leaks waste treated water, shorten pipe life by attracting roots, and indicate underinvestment and “technical debt.”
    • Skeptical/nuanced: in water-rich systems returning to the same basin, ecological benefits to trees and cooling might outweigh costs; moreover, digging up roads and coordinating utilities is enormously expensive and politically difficult.

Urban hydrology and design choices

  • Several commenters argue the real design failure is routing rainwater rapidly into storm drains instead of deliberately watering street trees.
  • Others respond that “it’s cheaper not to do it,” while some push back that short-term cost focus and fragmented incentives, not absolute cost, drive these decisions.

Copilot broke audit logs, but Microsoft won't tell customers

Scope and Severity of the Issue

  • Many see this as a serious security/compliance bug: an AI-assisted feature could expose document contents without a corresponding, expected audit trail.
  • Others downplay it as a regular defect that was reported and fixed, arguing it doesn’t automatically imply catastrophic HIPAA or regulatory failure.
  • There is concern that customers weren’t proactively notified, despite clear implications for audits and incident investigations.

CVE and Vulnerability Classification

  • Strong disagreement over whether this deserves a CVE:
    • Some argue CVEs are just standardized IDs for specific vulnerabilities and should apply even to cloud services and single-vendor systems.
    • Others claim CVEs are for broadly distributed software or issues requiring customer action; since Copilot is auto-patched, they say a CVE is unnecessary.
  • Several commenters suspect Microsoft’s interpretation of CVE scope is influenced by PR concerns rather than technical criteria.

How Copilot Likely Interacts with Data and Logs

  • Many infer that Copilot is not directly opening files; it’s using an indexed or RAG-based search layer over M365 data.
  • The consensus guess: audit events are emitted by the surrounding “scaffolding” or search/index services, and instrumentation was placed in the wrong spot (e.g., only when content is surfaced, not when it is retrieved).
  • Some stress that logging should be deterministic and tied to data access at the storage/search layer, not to LLM prompts or behavior.
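
A minimal sketch of that principle, assuming a hypothetical retrieval layer: the audit event is written where the document is actually read, independent of whether the model later surfaces it (all types and function names are illustrative):

```typescript
interface AuditEvent {
  actorId: string;      // the user on whose behalf retrieval happens
  documentId: string;
  action: "read";
  via: "copilot-rag";
  timestamp: string;
}

async function retrieveForPrompt(
  actorId: string,
  documentIds: string[],
  fetchDocument: (id: string) => Promise<string>,
  writeAudit: (e: AuditEvent) => Promise<void>,
): Promise<string[]> {
  const contents: string[] = [];
  for (const documentId of documentIds) {
    // Log before (or transactionally with) the read, so the trail exists
    // even if the LLM never quotes this document in its answer.
    await writeAudit({
      actorId,
      documentId,
      action: "read",
      via: "copilot-rag",
      timestamp: new Date().toISOString(),
    });
    contents.push(await fetchDocument(documentId));
  }
  return contents;
}
```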

Compliance, HIPAA, and Audit Implications

  • Commenters familiar with compliance note:
    • HIPAA does not literally require every access be logged, but regulators strongly encourage detailed auditing and “reasonable and appropriate” controls.
    • Any path where sensitive info can be surfaced without a reliable audit trail undermines SOC 2 / HIPAA / ISO-style assurances.
  • Several note this is especially dangerous where users can ask about medical, HR, or other regulated data via Copilot and have no corresponding record of access.

Microsoft Security Culture and AI Push

  • Many see this as fitting a pattern: “insecure by default,” product sprawl, rushed AI integrations, and competing internal KPIs (security vs growth/engagement).
  • References are made to prior Microsoft security criticisms and marketing claims about “security above all else,” contrasted with behavior in this case.
  • Strong frustration at Copilot being “crammed into everything” (VS Code, M365, Excel, etc.), sometimes re-enabling itself or being hard to disable.

Technical Debate: Secure RAG, Indexing, and Permissions

  • Long subthread on how to do access-controlled AI search:
    • Some argue this is a well-known, solved problem in enterprise search: store ACLs as metadata, pre-filter candidates by permissions, then pass only allowed documents to the LLM (sketched after this list).
    • Others counter that real environments have complex, changing rights across multiple systems, making per-user or per-query filtering and reindexing hard, race-prone, and potentially leaky.
  • Concerns that separate search indexes (or vector stores) can become effectively a second, under-audited copy of sensitive data.
  • Debate over embeddings: some say vectors are like irreversible hashes; others note that embeddings can leak semantic information if the model is known.
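
A minimal sketch of the permission pre-filtering approach mentioned above, under the simplifying assumption that each indexed document carries its ACL as metadata (all types and names are illustrative; in practice the check must track the system of record at query time):

```typescript
interface Doc {
  id: string;
  text: string;
  allowedPrincipals: Set<string>;   // ACL stored alongside the index entry
}

function authorizedCandidates(candidates: Doc[], userPrincipals: Set<string>): Doc[] {
  // Only documents the querying user can already read are eligible to be
  // placed into the model's context window.
  return candidates.filter(doc =>
    [...doc.allowedPrincipals].some(p => userPrincipals.has(p)),
  );
}

async function answer(
  query: string,
  userPrincipals: Set<string>,
  search: (q: string) => Promise<Doc[]>,      // vector or keyword search
  llm: (prompt: string) => Promise<string>,   // any completion endpoint
): Promise<string> {
  const allowed = authorizedCandidates(await search(query), userPrincipals);
  const context = allowed.map(d => d.text).join("\n---\n");
  return llm(`Answer using only this context:\n${context}\n\nQuestion: ${query}`);
}
```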

Trust, Governance, and Responsibility

  • Repeated theme: customers’ trust in Microsoft for security and compliance is eroding; some organizations are actively trying to move off the stack.
  • Several argue executives often prefer “vibes” and short-term AI wins over deeply understanding risks; “the AI did it” is seen as a future accountability shield.
  • For internal AI chatbot projects, commenters warn that unless authorization is enforced at every data access point, sensitive leaks are inevitable—and that raising this with leadership is often met with resistance.

How Not to Buy a SSD

Prevalence of Fake / Misrepresented SSDs and HDDs

  • Multiple anecdotes of clearly counterfeit or tampered drives from major marketplaces (Amazon, eBay, AliExpress, eMag), sometimes even labeled “new” and apparently sealed.
  • Common scam pattern on eBay: 4TB “brand-like” SSDs with lookalike Samsung/WD styling but no logo, containing ~100GB of flash and firmware that lies about capacity, then bricks once full.
  • Reports of HDDs with reset or forged SMART data, including drives showing >1 year powered-on time sold as “new.”

Marketplaces, Commingling, and Counterfeits

  • Strong criticism of Amazon’s commingled inventory: items marked “Ships from/Sold by Amazon” may actually be fulfilled from third-party stock, enabling counterfeits and returns fraud loops.
  • Some users say they’ve never seen a counterfeit from Amazon; others report multiple fake SSDs, SD cards, batteries, chargers, and even books.
  • Perception that Amazon has shifted from trusted store to chaotic marketplace with search spam, fake reviews, and lots of low-quality China-sourced goods.
  • Similar warnings about other marketplaces (AliExpress, eMag, etc.): deep discounts (70–80% off) are seen as a red flag.

How People Detect or Test Drives

  • Heuristics: suspiciously low weight, too-good-to-be-true price, missing major brand logo, odd packaging, or limited/odd SMART data.
  • Recommended tools:
    • Destructive full-disk write/read (e.g., f3's f3write/f3read) to detect capacity lies; f3fix can then repartition a fake drive to its real capacity.
    • ValiDrive (Windows, non-destructive spot checks across the drive).
  • Note that some testing is destructive; people suggest doing it before putting drives into real use.
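
A toy sketch of the kind of capacity check that tools like f3 perform far more thoroughly: write a unique marker at spaced offsets, then read everything back. It is destructive, assumes an empty target, ignores the cache-flushing and O_DIRECT concerns real tools handle, and the device path and sizes are examples only:

```typescript
import { open } from "node:fs/promises";

// DESTRUCTIVE: only run against an empty disk or image file.
async function checkCapacity(path: string, sizeBytes: number, stepBytes: number) {
  const fh = await open(path, "r+");
  try {
    const block = Buffer.alloc(4096);

    // Pass 1: write a unique, terminated marker every stepBytes.
    for (let pos = 0; pos + block.length <= sizeBytes; pos += stepBytes) {
      block.fill(0);
      block.write(`marker@${pos};`);
      await fh.write(block, 0, block.length, pos);
    }

    // Pass 2: read back and verify. Fake drives that wrap or discard writes
    // return the wrong marker (or garbage) at the probed offsets.
    let bad = 0;
    const readBuf = Buffer.alloc(4096);
    for (let pos = 0; pos + readBuf.length <= sizeBytes; pos += stepBytes) {
      await fh.read(readBuf, 0, readBuf.length, pos);
      if (!readBuf.toString("utf8", 0, 64).startsWith(`marker@${pos};`)) bad++;
    }
    console.log(bad === 0 ? "all markers verified" : `${bad} probe blocks failed`);
  } finally {
    await fh.close();
  }
}

// e.g. a nominal 4 TB drive, probed every 1 GiB:
checkCapacity("/dev/sdX", 4_000_000_000_000, 1024 ** 3).catch(console.error);
```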

Buying Strategies and Trusted Channels

  • Many prefer:
    • Direct-from-vendor (WD, Seagate, etc.).
    • Reputable specialized retailers (B&H, Micro Center, local camera/PC shops).
  • Several users avoid buying any critical electronics, storage, or health/beauty items from Amazon or generic marketplaces.

Used Enterprise SSDs: Mixed Views

  • Some strongly favor second-hand enterprise SSDs (often SAS/U.2, with power-loss protection and high DWPD ratings), usually from eBay, Taobao, or forum marketplaces; claim excellent longevity.
  • Others argue pricing often overlaps with new consumer SSDs with warranties, making used enterprise less compelling unless you find real bargains.
  • Example strategies include mirrored arrays, ZFS, and Optane for metadata to mitigate risk.

Drive Quality Notes

  • Kingston A400 line is called out as genuinely poor even when authentic (firmware issues, high failure rates).
  • Debate over old SLC vs newer MLC/TLC endurance, with conflicting anecdotal evidence about reliability vs cost/capacity.

Anna's Archive: An Update from the Team

Access, Blocking, and Censorship

  • Commenters report Anna’s Archive being blocked differently by country and ISP: HTTP 451 via Cloudflare in Belgium, DNS blocks or connection resets on some UK and Dutch ISPs, while others in the same countries have full access.
  • People discuss using VPNs, Apple Private Relay, Tor and alternative DNS to bypass blocks, and note that Cloudflare must comply with local legal orders or risk being blocked wholesale.
  • There’s unease that Cloudflare is effectively becoming a filter on individuals’ web access. Some want Ofcom/EU regulators to make blocking policies more consistent and transparent.
  • HTTP status 451 (“Unavailable for Legal Reasons”) is discussed as a censorship marker; other status codes appear as well (523, etc.).

Role, Mission, and Ethics of Anna’s Archive

  • Many see AA as “one of the last good things on the internet,” a modern Library of Alexandria preserving scientific papers, textbooks and books for the whole world, especially where legal access is limited or prohibitively expensive.
  • Others push back on the site’s rhetoric (“attacks on our mission”), arguing it is fundamentally a piracy operation, even if its archival side effects are valuable.
  • Some distinguish between AA’s role in liberating scientific/academic content (often produced with public funding and locked behind paywalls) and its distribution of recent commercial ebooks that directly affect authors’ income.

Impact on Authors, Copyright, and Fairness

  • One author in the thread is furious that a book they worked on for decades is freely downloadable; others reiterate that many writers already earn very little per sale and piracy feels like “mind theft.”
  • Supporters counter that most downloads are not lost sales; many users say they use AA to discover or preview works and then buy physical or DRM‑free copies, especially for niche or older titles.
  • Multiple people cite studies suggesting weak or no robust evidence that piracy significantly displaces overall sales, though skeptics argue these effects are hard to measure and likely non‑zero.
  • Libraries vs AA: physical and controlled digital lending buy/licence limited copies and replace them over time; AA distributes unlimited perfect copies. Some see that as a crucial legal and moral difference.

LLMs, Training Data, and Shadow Libraries

  • Several comments state or assume that OpenAI, Meta and others have trained on data from LibGen, Z‑Library, AA and similar sites; a few claim to have direct knowledge of small payments to AA‑like projects for bulk access.
  • There’s a deep argument over whether training on copyrighted books is or should be “fair use,” and whether companies that don’t use all available (including pirated) data will be outcompeted.
  • Some argue that if training on books is judged fair use, rights‑holders must “just accept it”; others insist that changing this should require democratic reform of copyright, not unilateral corporate decisions.
  • A separate line of debate asks whether the social benefit of powerful models built on shadow‑library data justifies those libraries, and whether models should be open‑weights if built on such material.

Shadow Library Ecosystem and Preservation

  • The AA blog update notes: massive scrapes from Internet Archive’s Controlled Digital Lending, HathiTrust, DuXiu, WorldCat, Google Books; partnerships with LibGen forks, STC/Nexus, Z‑Library; and the disappearance of a LibGen fork.
  • Commenters worry that explicitly bragging about scraping IA’s lending system could harm IA in court, by letting publishers argue that even “controlled” lending leaks into unrestricted piracy.
  • WeLib is called out by AA as mirroring AA’s collection and code but not sharing new material or code back; some agree this is parasitic and dangerous for preservation, others say any extra mirror improves decentralization.
  • AA publishes large torrent sets (e.g., sci‑hub, libgen) so anyone can help seed. Some individuals discuss the feasibility and cost of personally mirroring ~100–200 TB of scientific knowledge and whether high‑quality PDFs vs deduplicated text should be preserved.

Funding, Paywalls, and Non‑profit Claims

  • AA uses “soft” throttling: free downloads are slow/queued; donations unlock faster mirrors. Some users are suspicious, comparing this to commercial file‑host monetization; others point out that bandwidth, storage, and legal risk are expensive and volunteers are likely not “getting rich.”
  • There’s debate over calling AA a “non‑profit” when it’s an illegal, opaque operation with no formal status or audits. Some argue “non‑profit” should be reserved for regulated entities; others say it’s about intent and non‑distribution of profits, not paperwork.
  • Anonymous funding and crypto: Monero and indirect methods (e.g., buying gift cards with crypto) are discussed; some worry that large money flows plus opacity make AA vulnerable to greed or accusations of money laundering.

Internet Design, Privacy, and Piracy Culture

  • A side thread argues the internet should be redesigned to resist DDoS, spam, surveillance, and mass scraping; replies note trade‑offs between openness, decentralization, and control, and that many “attacks” are features for powerful actors.
  • Tools like Hashcash, Tor, Freenet, I2P, and proof‑of‑work schemes are mentioned as partial mitigations with significant usability or efficiency costs.
  • Broader piracy ethics recur: some see most pirates as simply wanting free stuff and rationalizing; others emphasize that heavy pirates are often heavy buyers and that streaming/DRM and high prices helped create the demand for shadow libraries in the first place.

Show HN: OS X Mavericks Forever

Scope of the Project / “Show HN” Debate

  • Some argue this isn’t a typical “Show HN” because it targets a narrow slice of old hardware, but others say that’s fine: the guide, custom tools (Aqua Proxy, plugins, patched widgets), and detailed instructions are substantial “to show.”
  • Several commenters thank the author, noting they’ve actually used the guide to revive old Macs.

Why Mavericks? Aesthetics, UX, and Era

  • Many praise OS X 10.9 as a visual and UX high point: last broadly “Aqua-ish,” fast, and still feeling like a “real computer” rather than an appliance.
  • Others prefer Snow Leopard, Tiger, El Capitan, or Mojave as their personal “peak Mac,” but generally agree the modern iOS‑style design, margins, and iconography are regressions.
  • There’s nostalgia for old QuickTime, Dashboard widgets, colored sidebar icons, and “Quake-style” drop‑down terminals/Finders.

Installation, Hardware, and Recovery Quirks

  • Discussion about which Macs can run Mavericks (roughly 2008–2014) and oddities in macOS Recovery: different key combos (Cmd+R, Opt‑Cmd‑R, Shift‑Opt‑Cmd‑R) yield different target versions; firmware level also matters.
  • Some use older releases like Catalina or High Sierra on unsupported hardware as a compromise between age and usability.

Security, Browsers, and Networking Workarounds

  • Strong concern about running a 9‑year‑unpatched OS on the internet: attack surface, sensitive data exfiltration, and outdated SSL/TLS.
  • Workarounds:
    • Modern browsers backported to legacy macOS (e.g., Firefox forks, Chromium Legacy).
    • An HTTPS proxy (Aqua Proxy) to offload TLS to a modern stack (the idea is sketched after this list).
    • Native VPN protocols vs third‑party VPN apps that bypass proxies.
    • Running modern browsers in VMs or on another machine and remoting in.
  • Some think this is still “insane” for daily‑driver use; others accept the risk with backups and a limited threat model.
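
The HTTPS‑proxy workaround boils down to this pattern: the legacy machine speaks plain HTTP to a nearby proxy, and the proxy’s up‑to‑date TLS stack talks to the real site. The snippet below is a minimal standard‑library sketch of that idea; it is not how Aqua Proxy is actually implemented:

    # Toy TLS-offloading proxy: a browser configured to use this plain-HTTP proxy
    # never negotiates TLS itself; the upstream HTTPS fetch happens here, on a
    # stack with current certificates and cipher suites.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import urllib.request

    class TLSOffloadProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # A proxy-configured client sends absolute URLs, e.g. "http://example.com/".
            url = "https://" + self.path.removeprefix("http://")
            try:
                with urllib.request.urlopen(url) as upstream:   # modern TLS happens here
                    body = upstream.read()
                    self.send_response(upstream.status)
                    self.send_header("Content-Type",
                                     upstream.headers.get("Content-Type",
                                                          "application/octet-stream"))
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)
            except Exception as exc:
                self.send_error(502, f"Upstream fetch failed: {exc}")

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), TLSOffloadProxy).serve_forever()

A real deployment also needs POST support, redirects, streaming, and a certificate policy; the sketch only shows why offloading TLS takes the ancient SSL/TLS stack out of the equation.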

Hackability vs Lock‑Down

  • Mavericks is praised for being easy to tinker with: deletable stock apps, SIMBL plugins, Objective‑C method swizzling, hex‑patching system libraries, and no SIP/SSV.
  • Counterpoint: immutable or locked‑down systems (SIP, signed system volumes, Linux images like Bazzite/NixOS) make it much harder for users to brick machines and easier to say “just try things.”
  • Long sub‑thread debates tradeoffs: freedom vs safety, admins vs normal users, and whether SIP should be easy to disable.

Alternatives: Linux/BSD, Hackintosh, and Re‑creations

  • Several commenters say this level of effort to cling to Mavericks should instead go into Linux/BSD desktops or GNUstep/NeXT‑style systems; some report being very happy on modern Linux (e.g., Bazzite, KDE).
  • Others feel Linux/Windows still trail macOS on consistency, input feel, and UI polish despite progress.
  • Projects like helloSystem, ravynOS, NEXTSPACE, and GSDE are mentioned as attempts to recreate classic Mac/NeXT UX, but seen as immature or skin‑deep so far.

Sentiment About Apple’s Direction

  • Strong thread of discontent: iOS‑ification, locked‑down design, nagging dialogs, hardware that isn’t user‑serviceable, and focus on services/ads/cloud.
  • A minority argue macOS keeps getting better, citing world‑class dev tools and Apple Silicon performance, and say its restrictions rarely impede serious work.

Who Invented Backpropagation?

Automatic differentiation, gradient descent, and backprop

  • Commenters distinguish:
    • Gradient descent (very old, “obvious” once you have gradients).
    • Automatic differentiation (AD) as an efficient way to compute gradients.
    • Backpropagation as reverse‑mode AD applied to neural networks.
  • Reverse‑mode AD:
    • Applies the chain rule “backwards,” caching intermediate values.
    • Is efficient for many-input / few-output functions (e.g., a scalar training loss over many parameters); a minimal sketch follows this list.
    • Is conceptually dual to forward mode, which is better suited to few-input / many-output functions.
  • Explanations compare reverse vs forward mode to memoized vs naive recursion, and to standard vector calculus derivations.
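
A minimal sketch of what “caching intermediate values and applying the chain rule backwards” looks like on a scalar computation graph; the Value class below is illustrative, not taken from any particular framework:

    # Toy reverse-mode automatic differentiation. Each node records its parents
    # and the local partial derivatives during the forward pass; backward() then
    # walks the graph once in reverse order, accumulating gradients.
    class Value:
        def __init__(self, data, parents=()):
            self.data = data
            self.grad = 0.0
            self.parents = parents          # pairs of (parent_node, local_derivative)

        def __mul__(self, other):
            return Value(self.data * other.data,
                         parents=[(self, other.data), (other, self.data)])

        def __add__(self, other):
            return Value(self.data + other.data,
                         parents=[(self, 1.0), (other, 1.0)])

        def backward(self):
            # Topologically order the graph, then propagate d(output)/d(node).
            order, seen = [], set()
            def visit(node):
                if id(node) not in seen:
                    seen.add(id(node))
                    for parent, _ in node.parents:
                        visit(parent)
                    order.append(node)
            visit(self)
            self.grad = 1.0
            for node in reversed(order):
                for parent, local in node.parents:
                    parent.grad += node.grad * local

    # f(x, y) = x*y + x, so df/dx = y + 1 and df/dy = x.
    x, y = Value(3.0), Value(4.0)
    f = x * y + x
    f.backward()
    print(x.grad, y.grad)   # 5.0 3.0

One backward pass yields the gradient with respect to every input at once, which is why reverse mode suits a scalar training loss over many parameters; forward mode would need a separate pass per input.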

Control theory, Apollo, and adjoint methods

  • Several commenters link early backprop-like ideas to optimal control and adjoint/gradient methods from the 1960s:
    • Papers on optimal flight paths and lunar mission thrust programming using steepest descent and adjoint gradients.
    • Classic optimal control texts that derive a procedure essentially identical to backprop using Lagrange multipliers.
  • There is debate whether a popular essay’s line about “optimizing Apollo thrusts” referred specifically to backprop or more generally to control theory.
  • Some note that many neural nets can be cast as state‑space systems, but say that reframing learning as optimal control is usually not practically useful.

“Just the chain rule?” Novelty vs triviality

  • One camp: backprop is “just the chain rule,” so asking who invented it is uninteresting; any 17th‑century calculus inventor could have done it.
  • Counterpoint (echoing the article): the novelty is the efficient application of the chain rule to large computation graphs; many inefficient ways exist.
  • There’s a technical side debate:
    • One view: symbolic differentiation and AD are fundamentally different, and naive symbolic methods blow up in expression size.
    • Opposing view: with DAG representations and common subexpression elimination, symbolic differentiation and AD are effectively equivalent implementations of the same math (a toy size comparison follows this list).
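
A toy illustration of the expression‑swell point, under the assumption that “naive symbolic” means writing every subexpression out with no sharing: differentiating the iterated square e_0 = x, e_{k+1} = e_k * e_k with the product rule yields a derivative whose fully expanded tree grows exponentially in the depth, while the same result stored as a DAG with shared subexpressions grows only linearly (the kind of sharing AD implementations rely on):

    # Iterated square: e_0 = x, e_{k+1} = e_k * e_k, differentiated via the
    # product rule. Written out as a plain tree the derivative explodes in size;
    # stored as a DAG with shared subexpressions it stays small. Toy code only.

    def diff(expr, cache):
        """Symbolic d/dx, memoized so equal subexpressions share one derivative object."""
        if expr in cache:
            return cache[expr]
        op = expr[0]
        if op == "var":
            result = ("const", 1)
        elif op == "const":
            result = ("const", 0)
        elif op == "add":
            result = ("add", diff(expr[1], cache), diff(expr[2], cache))
        else:  # ("mul", a, b): product rule, reusing the original operand objects
            a, b = expr[1], expr[2]
            result = ("add", ("mul", diff(a, cache), b), ("mul", a, diff(b, cache)))
        cache[expr] = result
        return result

    def tree_size(expr):
        """Node count when every subexpression is written out in full (no sharing)."""
        return 1 + sum(tree_size(c) for c in expr[1:] if isinstance(c, tuple))

    def dag_size(expr, seen=None):
        """Node count when each shared subexpression is stored only once."""
        seen = set() if seen is None else seen
        if id(expr) in seen:
            return 0
        seen.add(id(expr))
        return 1 + sum(dag_size(c, seen) for c in expr[1:] if isinstance(c, tuple))

    e = ("var",)
    for _ in range(12):
        e = ("mul", e, e)            # both children are the same object, so e is a DAG

    d = diff(e, cache={})
    print(tree_size(d), dag_size(d))   # roughly 1e5 tree nodes vs about 50 shared nodes

With twelve squarings the fully written‑out derivative has on the order of 10^5 nodes while the shared form has a few dozen; whether one calls the memoized version “symbolic differentiation with common subexpression elimination” or a form of AD is largely the terminological point being argued above.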

Attribution fights and awards

  • Multiple commenters say backprop has been “invented and forgotten” many times; they question the value of awarding priority at all.
  • Others argue that careful historical credit matters, especially for overlooked groups (e.g., Japanese researchers).
  • The article’s author is seen by some as doing serious archival work; others see it as “sour grapes” about major prizes for deep learning pioneers.
  • There’s extended back-and-forth about:
    • Whether certain AI researchers deserved a Nobel in physics or only a computing award.
    • Whether the physics community actually views ML contributions as worthy physics.
    • The broader pattern of a North American establishment over‑crediting its own.

Why backprop only paid off later

  • Commenters note that neural networks and backprop were long viewed skeptically because deep nets were hard to train.
  • They emphasize that:
    • Backprop alone wasn’t enough; practical success required architectural innovations (CNNs, recurrent variants, transformers), better optimizers and activation functions, and ways to mitigate exploding/vanishing gradients.
    • GPU computing and differentiable-programming frameworks (Theano, TensorFlow, PyTorch, JAX) were major enabling factors.
  • Some share personal anecdotes of early enthusiasm for NNs, evolutionary training, and regret at leaving AI before the 2010s deep-learning boom.

The road that killed Legend Jenkins was working as designed

System design and POSIWID framing

  • Several commenters apply “the purpose of a system is what it does” to US road design: if roads consistently endanger or kill pedestrians, that reveals the real priorities.
  • High-speed arterials through neighborhoods are seen as intentionally prioritizing car throughput, often historically routed through politically weaker communities.
  • Debate over the article’s line that the system “worked as designed”:
    • Critics say it’s misleading to imply anyone designed roads to kill children.
    • Supporters respond that no one needed child deaths as an explicit goal; they’re a predictable side effect of favoring cars.

The specific road and crossing choices

  • Commenters inspect the Gastonia location via maps/street view: wide, fast “stroad,” narrow sidewalk on one side, few signalized crossings, dangerous median.
  • Some emphasize a marked crosswalk ~300–350 feet away and argue it was reasonable to expect kids to use it.
  • Others counter that such detours are long in practice, that the crosswalk itself appears poorly designed, and that many locals likely cross at the median because that’s where life actually connects (apartments ↔ shops).

Legal culpability vs systemic failure

  • Strong disagreement over charging the parents with manslaughter:
    • Some see obvious parental negligence in allowing a 7‑year‑old (even with a 10‑year‑old) to cross a quasi‑highway.
    • Others argue escorting by an older sibling is reasonable care; if a 10‑year‑old can’t cross safely, the environment is at fault.
  • Multiple comments stress that sidewalks imply “fit for walking”; if a sidewalked road is lethal, that’s a design failure, not user error.
  • “Jaywalking” is criticized as car-supremacist framing that shifts blame to pedestrians and historically enabled selective enforcement.

Car‑centric urban form and impacts on children

  • Many describe US suburbs as fundamentally hostile to pedestrians: wide, straight, fast roads; sparse crosswalks; large retail blocks; mandatory driving for basic errands.
  • Links and anecdotes reference US pedestrian death crises versus cities that have reached or approached zero traffic deaths.
  • Several parents say cars are their primary fear for children, above water or crime, and tie kids’ reduced outdoor independence and even falling fertility to car-dominated environments.

Proposed fixes and constraints

  • Suggested interventions: narrower lanes, traffic calming, roundabouts/“traffic beans,” bollards, more and better crosswalks, or even car-free areas in dense neighborhoods.
  • Grade-separated crossings (bridges/tunnels) are noted as declining due to cost, maintenance, perceived crime, and homeless use; some argue they’re the wrong fix versus making streets inherently crossable.
  • One thread proposes civil liability for unsafe road design; critics warn that blanket liability plus grandfathering could freeze new development.
  • Cost and politics recur: vast existing car-centric infrastructure, voter attachment to driving/parking, and fragmented incentives make change slow and contentious.

Lived pedestrian experience and international contrast

  • Commenters who walk in US suburbs describe it as a game of “Frogger”: missing sidewalks, hostile arterials, dead ends, and dangerous improvisation just to reach nearby stores.
  • Visitors from more pedestrian‑friendly countries express shock at how aggressive US suburban design feels toward people on foot and ask why pedestrian needs are so ignored; the thread offers partial answers but no consensus history.