Hacker News, Distilled

AI powered summaries for selected HN discussions.


New York Times, AP, Newsmax and others say they won't sign new Pentagon rules

Refusal to Sign & Nature of the New Rules

  • Many commenters praise outlets refusing to sign as rare examples of institutional backbone amid growing pre‑emptive compliance.
  • A shared copy of the rules highlights the most controversial change: reporting classified national security information (CNSI) or “controlled unclassified information” (CUI) could cost outlets their Pentagon access unless pre‑cleared by officials.
  • Critics see this as converting independent press into a Defense Department PR arm; supporters argue it’s about protecting sensitive information, not all reporting.

First Amendment, Access, and “Terms of Service”

  • Debate over whether the Constitution requires physical press access to facilities; some think outlets would lose in court, others think punitive access denial based on content is unconstitutional.
  • One side frames the policy as a neutral rule everyone must “agree to,” like any ToS.
  • Opponents counter that the requirement itself is arbitrary, that refusing to sign is a protected act, and that equal application doesn’t make an unconstitutional condition legitimate.

Press Freedom, Propaganda, and Autocracy Concerns

  • Many see this as part of a broader “assault on the press” and a deliberate chilling of scrutiny of the military and executive branch.
  • Strong fears that this is one step in a “speedrun to autocracy”: normalizing military involvement in domestic affairs, tightening control over information, then manipulating elections.
  • Some predict militarized “securing” of polling places and chain‑of‑custody of ballots; others think outright cancellation of elections is unlikely but acknowledge serious risks.

Right‑Wing Media & Access Politics

  • Discussion notes that one fringe-right outlet reportedly intends to sign, reinforcing its reputation as a loyal propaganda outlet.
  • Another right‑leaning channel declining to sign surprises some, who assume it expects to benefit when power changes hands.

Distrust of Both Pentagon and Legacy Media

  • Several argue major outlets already act as tools of elites and have long failed on issues like wars, surveillance, financial crises, and political scandals.
  • Others push back that this history doesn’t justify further state control or retaliation against critical coverage.

Tone, Competence, and “Terminally Online” Governance

  • Commenters criticize the defense secretary’s social‑media taunting of reporters as unserious and lowbrow.
  • Broader frustration surfaces about politicians’ competence, online performativity, and the public’s appetite for leaders who wield power cruelly rather than responsibly.

Don’t Look Up: Sensitive internal links in the clear on GEO satellites [pdf]

Scale and Nature of the Exposure

  • Commenters are stunned by the paper’s examples: unencrypted satellite backhaul carrying T‑Mobile SMS/voice and web traffic, AT&T Mexico user traffic, TelMex VoIP calls, Mexican government and military traffic, Walmart Mexico corporate emails and credentials, and SCADA/utility control systems.
  • Some of the most sensitive leaks include real-time military object telemetry and ship identifiers.
  • A few affected organizations reportedly fixed issues after disclosure (e.g., T‑Mobile, Walmart, KPU), but many others remain unclear.

Why Links Remain Unencrypted

  • Cited reasons from the paper/Q&A: encryption overhead on already scarce bandwidth, extra power and hardware cost for remote receivers, paid “encryption licenses” from vendors, and operational pain (troubleshooting, emergency reliability).
  • Commenters add: very old satellite hardware lifecycles, vendor excuses (e.g., 20–30% “capacity loss” with IPsec), and a culture that undervalues security versus “build and sell.”
  • Economic incentives are weak: decision-makers rarely face personal consequences; liability is often diffused or shielded by EULAs and weak data‑protection enforcement.

Where Encryption Should Happen

  • One camp: satellites can be dumb repeaters; all endpoints and intermediate networks should assume the link is hostile and use TLS/IPsec/application-level crypto.
  • Others counter that average users (e.g., airline passengers) can’t reasonably be blamed for unencrypted DNS and other leaks; satellite ISPs or airlines should enforce encryption by default, similar to cellular networks.
  • Metadata leakage is discussed: even with “dumb pipes,” unencrypted headers and identifiers can reveal location and activity.
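
The “assume the link is hostile” position amounts to doing authentication and encryption at the endpoints, regardless of whether the transport is fiber, cellular, or a GEO bent pipe. A minimal sketch in Python using the stdlib ssl module (the fetch function and its use of a HEAD request are illustrative, not from the paper):

```python
import socket
import ssl

# Default context: verifies the server certificate and checks the hostname,
# so a passive satellite eavesdropper sees only ciphertext and a MITM fails.
context = ssl.create_default_context()

def fetch_head_over_tls(host: str, port: int = 443) -> str:
    """Open a TLS-protected connection over an untrusted transport."""
    with socket.create_connection((host, port), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode()
                        + b"\r\nConnection: close\r\n\r\n")
            return tls.recv(1024).decode(errors="replace")
```

Even with this in place, the unencrypted TCP/IP headers still leak endpoints and timing, which is the metadata concern raised above.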

TLS Everywhere and Centralization

  • Several comments connect the paper’s finding (“almost all consumer web/app traffic used TLS/QUIC”) to the long push for HTTPS‑by‑default.
  • Debate over what drove adoption: Google search ranking, Let’s Encrypt, HSTS/Chrome warnings, and post‑Snowden surveillance concerns vs. more cynical takes that big platforms mainly wanted to protect commercial data and ad revenue from ISPs.
  • Some argue the TLS push both improved privacy and pushed traffic through large intermediaries like Cloudflare, creating new centralization and operational burdens.

Broader Security & Threat Perspectives

  • The satellite situation is framed as part of a wider pattern: pagers, hospital/government systems, and industrial control links still send highly sensitive data in cleartext.
  • Some downplay the risk due to volume and difficulty of sifting traffic; others note that targeted interception of backhauled cellular/SMS or SCADA traffic is clearly exploitable, especially by intelligence services.

South Africa's one million invisible children without birth certificates

Comparisons to Other Countries’ Documentation Gaps

  • Commenters note the US and other states have ad‑hoc processes for people without birth certificates (fires, midwife births, older cohorts, overseas births).
  • China before 1996 and rural China more broadly had many births without hospital certificates, but alternative local documents (hukou, village attestations) usually anchored identity.
  • Amish and some religious groups in the US illustrate that even today, some people deliberately remain lightly documented, though this is constrained by law.

Citizenship, “Natural Born” Status, and Legal Ambiguities

  • Long subthread on US citizenship law: children of citizens born abroad, territorial status (Philippines, Panama Canal Zone), and shifting statutes (Expatriation Act, Cable Act, INA).
  • Disagreement over whether figures like John McCain were “citizens at birth” or only later recognized as citizens by statute.
  • Examples show how technical legal definitions and missed paperwork can create years of unnecessary immigration hardship.

Risks of Statelessness and Disenfranchisement

  • Several posts link lack of documentation to vulnerability: detention, deportation, inability to vote, or being written out of social benefits.
  • Historical parallels drawn to stateless populations in WWII and their role in enabling mass atrocities.
  • Some see modern US voter-ID and citizenship controversies as early warnings about using documentation gaps to disenfranchise.

Banking, KYC/AML, and Crypto Proposals

  • KYC/AML rules are criticized for excluding undocumented people from financial systems while failing to seriously hinder well‑resourced criminals.
  • One side argues crypto could give “invisible” people a form of digital money and savings.
  • Others counter that crypto doesn’t solve the core problem: without legal identity, children still can’t attend school, access healthcare, or join official leagues, and face usability and volatility issues.

Bureaucratic Failure and Lost Records

  • Multiple anecdotes from South Africa, Europe, and North America describe records “disappearing” or people only being “properly entered” into systems years later.
  • South African Home Affairs offices are portrayed as slow, often offline, and hard to access for people in precarious work.

Is South Africa in “Steady Decline”? – Disputed

  • One camp cites severe load‑shedding, water outages, high crime, corruption, underinvestment in infrastructure, manufacturing weakness, and falling GDP per capita as evidence of decline.
  • Another camp emphasizes dramatic post‑1994 gains: near‑universal formal access to water, electricity and schooling; expanded middle class; free public healthcare; end of racial legal discrimination; and recent political shifts (coalition government, some privatization) as signs of long‑term improvement despite serious problems.
  • Debate touches on foreign investment trends and whether current woes stem mainly from apartheid’s legacy vs contemporary governance.

Historical and Demographic Context

  • Apartheid‑era authorities allegedly undercounted or ignored Black South Africans in censuses, making today’s “invisible children” unsurprising to some.
  • Discussion of who counts as “native” in South Africa (Khoisan vs Bantu vs later European and Asian settlers) becomes contentious, with concern that such debates can be weaponized in modern politics.

Philosophical Concerns About Identification Systems

  • Some argue people should be able to exist outside “The System,” comparing modern birth registration to older religious registries.
  • Others respond that large‑scale welfare states and social insurance systems practically require robust identification to avoid abuse and collapse, making some form of universal documentation hard to escape.

SpaceX launches Starship megarocket on 11th test flight

Mission Reaction & Presentation

  • Commenters widely regard Flight 11 as a “smashing success,” with a notably clean profile for both booster and ship.
  • Many praise the livestream: clearer technical explanations, better visuals, and playful touches (e.g., “crunchwrap” tile jokes).
  • Several describe these launches as personally inspiring and morale-boosting, especially compared to earlier decades with little visible progress.

Orbital vs. (Near-)Suborbital Trajectory

  • Some question why Starship is still not doing full orbital missions.
  • Others explain that current flights fly intentionally near-orbital / “transatmospheric” trajectories: up to ~98–100% of orbital speed, but on a steep path that ensures reentry over oceans and avoids long-lived debris.
  • Discussion covers debris risk corridors (Caribbean, Africa), targeted splashdown near Australia, and how failure timing affects where hardware falls.
  • Rationale given for delaying a true orbit: stabilize engines (especially V3), improve tile retention, and be ready for controlled deorbit and possible “catch” tests.

Reuse, Heat Shield, and Remaining Technical Hurdles

  • Booster reuse is seen as largely demonstrated (reflown Block 2 boosters and engines), though only a few times so far.
  • Upper-stage reuse is viewed as the hard part: tile losses, flap heating damage, and the gap between surviving once vs. rapid turnaround.
  • Commenters stress that overall success still depends on:
    • Reliable full-stack reuse
    • Turnaround time and cost
    • Long-term reliability across many flights
    • Whether marginal cost beats building new vehicles

Timelines, Artemis, and Economics

  • Critical voices argue Starship is behind its early promises: reduced payload versus initial claims, missed lunar timelines, and no completed orbital insertion yet.
  • They question whether orbital refueling and many tanker flights will make lunar missions complex and possibly not cheaper than SLS once realistic launch costs are applied.
  • Counterarguments emphasize:
    • Much lower development cost versus Apollo/Shuttle/SLS
    • NASA knowingly chose a high-risk, high-payoff HLS path under tight budgets
    • Multiple Starship configurations (e.g., non-reentry HLS variants) and shared challenges with other landers needing refueling.
  • Several note that even with impressive engineering, commercial viability (A380/Concorde analogy) is not guaranteed.

Why Go to Space? Philosophical Debate

  • Pro-space commenters cite communications, navigation, medical research, species survival, resource access, and inspiration.
  • Skeptics respond that many benefits are incremental or overstated, and that justifying exploration with vague possibilities feels weak.
  • Others frame space capability as strategic (military and geopolitical), as well as inherently exploratory, even if near-term payoffs are uncertain.

Shifting Sentiment & Aesthetics

  • Some observe dramatic swings in public/online sentiment: from assumed inevitability, to “hubris,” back to optimism after two good flights.
  • A recurring theme is that “the last 20%” (true rapid reuse and economics) remains non-trivial.
  • A few note they still find Saturn V more elegant; Starship is admired more for capability than looks.

DDoS Botnet Aisuru Blankets US ISPs in Record DDoS

Why ISPs Don’t Aggressively Block Botnet Traffic at the Source

  • Several commenters argue there’s little direct economic incentive: outbound DDoS traffic often doesn’t hurt the ISP as much as it hurts others, and mitigation costs money and risks angering customers.
  • Many residential networks are heavily asymmetric (much more inbound than outbound), so there’s often “room” for large outbound attacks before the ISP feels pain.
  • Abuse handling is labor‑intensive: building convincing reports and coordinating with remote networks is seen as not worth the effort compared to just mitigating inbound traffic.
  • Only now, with multi‑terabit outbound attacks from residential networks, are some ISPs reportedly starting to feel operational pain and consider more serious outbound controls.
  • Some examples exist (e.g., ISPs that quarantine users via captive portals), showing it’s possible but not widespread.

How End Users and Routers Could Help

  • Suggestions: ISPs cut off or rate‑limit compromised customers, routers snapshot per‑device traffic before disconnection, and users hire local services to locate infected devices.
  • Power users note there’s no simple, mainstream way to know if they’re in a botnet; proposals include router‑level monitoring, Pi‑hole DNS anomaly checks, jailed/guest LANs, and better traffic graphs (e.g., opnsense, IPFire).

IoT Insecurity and Regulation Proposals

  • Many see insecure IoT as the core problem: devices re‑infect “within minutes” after reboot. Some say such products are defective and should effectively become bricks; others insist vendors should be forced to patch and support them.
  • Policy ideas:
    • Mandatory recalls for devices participating in DDoS, with strong manufacturer liability.
    • Hard caps on IoT outbound bandwidth (e.g., 10 Mbps) unless explicitly justified.
    • No default passwords, secure onboarding flows, signed firmware, long‑term updates, and possibly ISP‑mandated routers that filter DDoS traffic.
  • Critics warn this risks over‑lockdown, erosion of software freedom (signed‑only ecosystems), and black‑market imports; some prefer periodic DDoS to a “highly regulated internet.”

IPv6, CGNAT, and Blocking Strategies

  • One camp argues widespread IPv6 would let operators block individual compromised addresses or /64s instead of entire CGNAT ranges, making botnet suppression easier and restoring end‑to‑end connectivity.
  • Others with DDoS experience say IPv6 doesn’t fundamentally change the problem: attackers can control large prefixes; defenders still end up blocking bigger ranges, risking collateral damage.
  • There are also privacy concerns around IPv6 address stability, and questions about what business incentives ISPs actually have to deploy IPv6.
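
The /64-level blocking idea is easy to express concretely: with IPv6, one compromised host maps to one subscriber-sized prefix, whereas an IPv4 CGNAT address can hide many unrelated users. A sketch using Python’s stdlib ipaddress module (the addresses are made-up documentation addresses):

```python
import ipaddress

def covering_prefix(addr: str):
    """Return the block-list entry for one compromised host:
    the whole /64 for IPv6 (a typical single-subscriber delegation),
    or the single /32 for IPv4 (which may front a CGNAT pool)."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 6:
        return ipaddress.ip_network(f"{addr}/64", strict=False)
    return ipaddress.ip_network(f"{addr}/32")

blocked = covering_prefix("2001:db8:abcd:1234::17")
# Other hosts in the same subscriber /64 are covered...
assert ipaddress.ip_address("2001:db8:abcd:1234::ff") in blocked
# ...but the neighbouring /64 (a different subscriber) is untouched.
assert ipaddress.ip_address("2001:db8:abcd:1235::1") not in blocked
```

The counterargument in the thread is that attackers who control routing can rotate across much larger prefixes, pushing defenders back toward broad blocks anyway.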

Attack Scale and DDoS Mitigation Market

  • Commenters note a jump from ~5 Tbps to nearly 30 Tbps in about a year, overwhelming many DDoS mitigation providers and some traditional hosts (Hetzner, OVH mentioned as seeing issues).
  • Smaller/cheaper mitigation providers are reportedly struggling; large players with huge edge capacity (e.g., Cloudflare, possibly a few others) appear to cope better, raising concerns that serious protection may become affordable only at high monthly cost.
  • Some are surprised that the dominant strategy is “absorb and scrub” rather than blocking near sources; others mention cooperative schemes (like shared routing/flowspec blackholing) but doubt broad ISP participation.

Bandwidth, Hardware, and Botnet Power

  • Contributors link the new scale of attacks to:
    • Widespread FTTH with high upstream (1–2 Gbps) in some regions.
    • Cheap SoCs capable of saturating gigabit links and generating high‑rate traffic.
    • CGNAT making it hard to block individual compromised users without impacting many others.
  • There’s debate over how common symmetric gigabit really is; some say it’s routine on fiber, others call it rare outside specific markets.

Targets and Motives (Minecraft, Games, Extortion)

  • Many attacks reportedly focus on Minecraft and other online games.
  • Hypotheses: extortion (“buy DDoS protection or stay down”), emotional players paying to knock out rivals, or low‑profile targets that avoid attention from law enforcement and big security teams.
  • Some note the engineering challenge of building very large botnets, but acknowledge diminishing returns once they’re already huge.

Governance, Freedom, and “Authoritarian” Risks

  • A visible thread worries that every major DDoS incident will be used to justify tighter control over networking, devices, and software.
  • Speculative comments suggest that powerful intermediaries (e.g., CDNs, DDoS vendors) benefit from a threat landscape that drives everyone onto their platforms, prompting suspicion about incentives.

User‑Level Concerns and Practicalities

  • Some ask for concrete tools to detect compromised devices at home; responses mostly mention router graphs, separate VLANs/guest networks, and ISP usage meters (with skepticism about what ISPs actually share).
  • Others suggest simple baseline rules: no remote login for IoT outside the local network, mandatory guest networks/proxies, and default network isolation for untrusted devices.

Responsibility and Liability

  • Strong calls appear for:
    • Regulating ISPs to detect, alert, and disconnect compromised customers.
    • Regulating device makers and retail/logistics platforms so insecure or noncompliant devices can’t be sold.
    • Potential tort liability for harm caused by grossly insecure devices.
  • Counterpoints emphasize cost to consumers, dead vendors (no one left to patch), and the risk that over‑broad rules would also hit general‑purpose computers or encourage locked‑down “appliance” designs.

Sony PlayStation 2 fixing frenzy

Accessing the Article

  • Original site was down (“hug of death”), so people shared multiple archive links (Wayback, archive.is).

Repairing PS2s and Devkits

  • For PS2 devkits, retail-style TEST units (DTL-H) can mostly follow standard PS2 teardown guides.
  • TOOL units (DTL-T10000/T15000) are more specialized; a detailed disassembly/maintenance guide was linked on archive.org.
  • The article’s refurb project apparently couldn’t even recoup parts and time at ~$150/unit, despite HDD mods.

Reliability: Consoles and Controllers

  • Mixed PS2 reliability reports: some fats/slims still work flawlessly; others had repeated optical drive or spindle failures, sometimes making PS2 their only dead console compared with still-working GameCube/Wii/Genesis.
  • DualShock 2 durability is debated: some report >10 years of use, others frequent failures, especially on thumbsticks; generic pads were seen as worse.
  • Newer hardware feels less robust to some: PS3 pads still fine vs multiple PS5 controllers with stick issues.
  • Sticky rubberized coatings on controllers/cases are a common age-related problem; people remove it with methanol or isopropyl alcohol. Some speculate it’s plasticizers/oils migrating, especially on items left in storage.

Analog Button, Pressure Buttons, and Adaptive Triggers

  • Several people asked what the PS2 “Analog” button does. Consensus:
    • It toggles the sticks between true analog and digital/D-pad emulation, mainly for PS1 backward compatibility.
  • Clarifications that this is separate from PS2/PS3 pressure‑sensitive face buttons (256–1024 levels), which a few racing and action games exploited.
  • Those analog face buttons caused some players to over-press and develop hand strain.
  • PS5 adaptive triggers are polarizing: some love the added tactility and buy games on PS5 for it; others report hand ache and reduce resistance in settings.

Controller Backward Compatibility and Lock-In

  • Debate over PS5 refusing PS4 pads for PS5 titles:
    • One side: justified because adaptive-trigger‑based mechanics wouldn’t translate well and would confuse players; Sony cert assumes DualSense.
    • Other side: it’s technically solvable via thresholds/remapping, and the restriction mainly encourages hardware sales and e‑waste.
  • Noted inconsistency: PS5 games streamed to PS4 do work with DualShock 4.
  • Comparisons with Xbox: newer Xbox consoles generally honor older controllers, but have their own BC gaps (e.g., Xbox 360 wired accessories).

Evolution of Dual-Stick Controls

  • Long subthread on when modern dual‑stick camera/aim controls became standard.
  • Early examples:
    • FPS: Alien Resurrection (PS1) and Turok (N64) had proto-modern dual-stick/d-pad + stick schemes that reviewers originally found awkward.
    • Third-person: Ico and other PS2 titles used the right stick for at least horizontal camera movement; debate over which game first offered fully “free” third-person camera rotation.
  • People contrast early “tank controls” (Tomb Raider, Mega Man Legends) with later movement-relative-to-camera and dual-stick FPS layouts.

Getting a Reliable PS2 Today vs Emulation

  • Suggestions for hardware:
    • Look for slim models; often considered more reliable.
    • Buy from tested/guarantee-oriented second-hand shops, game stores, or platforms like Etsy (for modded units).
    • Thrift/pawn shops can still be cheap if you can test or gamble.
    • Some recommend replacing the laser as routine maintenance or using HDD/SATA/SSD mods and running games from disk instead of the DVD drive.
  • Regional buying tips mentioned (Mercari + Buyee, EU stores), sometimes requiring reshippers.
  • Emulation (PCSX2) is widely recommended but not perfect:
    • Some games (Stuntman series, certain Ace Combat titles) are cited as still having physics or rendering issues that make original hardware preferable.

Video Output and Latency Issues

  • Hooking PS2 to modern HDMI TVs can introduce deinterlacing latency, making games feel laggy or nauseating.
  • Workarounds:
    • Component/RGB output plus upscalers like RetroTink or GBS-C for low-latency conversion (cost and import fees can exceed the console price).
    • For purists, a CRT is still ideal.

Storage Choices: HDD vs CF/SSD

  • Some question why the project used HDDs instead of CompactFlash/SSD:
    • CF is electrically IDE, but many cards present as “removable,” which may cause compatibility issues; industrial CF that behaves like fixed disks is expensive.
    • High-capacity CF historically suffered from stuttering and firmware quirks; some SSDs also boot too slowly for certain BIOSes.
    • HDDs remain cheaper per GB and “good enough” for console use, so likely chosen for cost and simplicity.

Miscellaneous Nostalgia and Details

  • Sticky PS2 controller coating is jokingly described as a “badge of honor” for 2000s gaming.
  • Some recall specific mod packs (like a “FHDB Noobie Package”) for HDD-based PS2 setups having tens of thousands of downloads, illustrating how big the PS2 modding scene became.

Thoughts on Omarchy

Technical value of Omarchy

  • Some commenters dismiss Omarchy as “r/unixporn in ISO form,” predicting it will break like other highly opinionated Arch-based setups (e.g., Manjaro/LARBS-like scripts).
  • They argue competent users can install Arch + i3/Hyprland in minutes and that relying on someone else’s dotfiles without understanding them is a long-term handicap.
  • Others with decades of Linux experience say Omarchy is highly productive and fun: strong TUI focus, good launchers, fast “from ISO to working dev environment,” and simple text-based customization.
  • One user highlights technical strengths: Btrfs + snapper + Limine provide multiple bootable rollbacks, directly countering claims of “no rollbacks.”
  • Some report practical pain points: dislike of Hyprland, difficulty integrating Flatpaks, heavy AUR dependence, and complexity beyond what they want.

“Distro vs dotfiles” and user elitism

  • Debate over whether Omarchy is really a distro or just packaged dotfiles; several say it’s essentially “convenient repackaging” and another base layer to customize.
  • Accusations that Omarchy users “don’t know what they’re doing” are criticized as elitist; defenders note not everyone wants to tinker endlessly.
  • A side argument devolves into “nerds vs geeks” stereotypes and parasocial attitudes toward creators.

Ethics, politics, and open source

  • One camp insists there is “no ethics complication”: open source licenses forbid discrimination, and judging software by its author’s politics is seen as misguided “complicity” thinking.
  • Another camp argues “everything is political,” especially OSS, and explicitly avoids Omarchy and related projects due to the creator’s alleged xenophobic/racist statements.
  • Others say users should be informed of the controversy and decide for themselves; some ask what happens when people are informed but use it anyway, prompting sarcastic replies about exaggerated moral purity.
  • A meta-thread explores whether “no discrimination” principles should also constrain community behavior (e.g., racist maintainers), and whether forking over such behavior is itself “politicizing.”
  • Separate but related debate erupts over whether current US politics are “fascist,” with arguments hinging on definitions and historical analogies rather than Omarchy itself.

Alternatives and practicalities

  • Suggested alternatives include “roll your own Hyprland setup,” CachyOS (Arch with preconfigured Hyprland/Niri), and Pop!_OS for something simpler.
  • Some question the article’s torrent-vs-HTTP critique: HTTP supports resumable downloads; download managers or wget -c are suggested.
  • Minor side topics: Omarchy’s pronunciation (tied to “Arch Linux,” not Greek “-archy”) and whether the article fits the site’s stated mission.
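
The resumable-download point above is that wget -c just sends an HTTP Range request, which any cooperating server honours. A minimal stdlib sketch of the same mechanism (the function and URL handling are illustrative):

```python
import os
import urllib.request

def resume_download(url: str, dest: str) -> None:
    """Continue a partial download via an HTTP Range request,
    the same mechanism wget -c relies on."""
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0
    req = urllib.request.Request(url, headers={"Range": f"bytes={offset}-"})
    with urllib.request.urlopen(req) as resp, open(dest, "ab") as out:
        # 206 Partial Content means the server honoured the range;
        # a plain 200 means it restarted from byte zero.
        if resp.status == 200 and offset:
            out.seek(0)
            out.truncate()
        while chunk := resp.read(64 * 1024):
            out.write(chunk)
```

Torrents still add swarm bandwidth and built-in integrity checks, so the critique is about framing HTTP as non-resumable, not about torrents being useless.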

Don't Be a Sucker (1943) [video]

Role of Propaganda vs. Structural Causes of Fascism

  • One line of discussion argues Nazi power ultimately depended on direct media control and censorship, so a film that focuses on street-corner demagogues understates economic and political factors (e.g., crises of the 1920s–30s).
  • Others counter that the Nazis got to the point of controlling media partly through exactly the kind of divisive rhetoric depicted in the film; public speeches and agitation did matter.
  • There’s debate over gaps in scholarship on fascism’s buildup: destroyed records, Cold War taboos, and underexplored roles of foreign actors and post–WWI Allied decisions.

Media Control, Censorship, and Social Platforms

  • Several comments generalize from Nazi media control to today’s environment: social media as a “public square” run by a few billionaires or states.
  • Dispute over whether recent US administrations pressured platforms to suppress certain outlets; some provide partisan sources as evidence, others reject them as unreliable.
  • One commenter reads the film as implicitly attacking mass organizing and public agitation; others insist it’s about how people think (skepticism toward demagogues), not about restricting speech.

Contemporary US Politics, Law Enforcement, and Division

  • Many see the film as urgently relevant to current US polarization and ethnonationalist rhetoric.
  • A heated subthread debates “masked agents kidnapping people”: whether some ICE/federal actions are unlawful or abuses of power vs. legitimate law enforcement under democratically enacted immigration laws.
  • There’s conflict between “rule of law” arguments and concerns about due process, proportionality (misdemeanor vs. paramilitary tactics), and targeting based on appearance.
  • Multiple commenters point out that framing half the country as “bad” voters itself fuels division; others argue some recent political movements are precisely the kind of scapegoating warned about in the film.

Propaganda, Nationalism, and Churches

  • Many acknowledge the film is explicit US government propaganda, with overt nationalism and idealized depictions of American industry, liberty, and churches.
  • Some see that as acceptable or even admirable given the anti-fascist message; others criticize the glossing over of US racism, restrictive immigration laws of the era, and the more ambivalent historical role of churches.
  • Discussion of what “propaganda” means: biased vs. necessarily misleading; several argue propaganda can be truthful and used for good, provided we remain aware it is propaganda.

Relevance, Manipulation, and Human Nature

  • Commenters link the film’s anti-immigrant rabble-rouser to contemporary media figures and blog posts lamenting demographic change.
  • Some stress that the “cartoon villain” bigot is still effective; others warn the subtler influencers—commentary framing unequal treatment as policy “nudges”—are more dangerous.
  • A pessimistic thread suggests humans are inherently manipulable “suckers” whose switches can be flipped by good or bad narratives; a counterpoint says widespread trust and expectation of good intentions are crucial to resisting extremist propaganda.

LLMs are getting better at character-level text manipulation

Prompting, Guardrails, and Safety Orientation

  • Early Claude system prompts explicitly instructed the model to “think step by step” and count characters one at a time; that guidance disappears in later models, suggesting improved post-training or a desire to reclaim context for other rules.
  • Some see extremely long safety/system prompts as “guard rails” that trade creativity and performance for brand safety, while others argue this is precisely the responsible way to uncover and mitigate dangerous behaviors before real-world deployment.

Counting, Tools, and “Cheating”

  • Many commenters argue LLMs could reliably handle character-level tasks via tools (e.g., Python), and in practice already do so when explicitly asked.
  • Frustration: users must micromanage models (“use your Python tool”, include certain files, etc.), which undermines the promise of intuitive, general intelligence.
  • There’s tension between wanting “pure” model ability vs. accepting tool use as legitimate intelligence, analogous to humans using calculators.

Tokenization and Architectural Limits

  • Modern LLMs tokenize at subword/morpheme level, so character-level detail is below their native resolution; models must effectively “reverse engineer” tokenization to count letters.
  • Tokenizing by character would help these tasks but greatly reduces effective context and efficiency under current architectures, though newer architectures (Mamba, RWKV, byte-level experiments) may mitigate this somewhat.

Training, Overfitting, and Emergent Skills

  • Some see improvements (e.g., correct “count the r’s in strawberry”) as overfitting to viral test questions rather than true reasoning. Others note related tests like “b’s in blueberry” don’t show the same pattern, suggesting broader skill.
  • Base64 decoding is discussed as likely emergent from web data, not explicitly optimized, whereas custom base-N encodings expose limits and inconsistencies.

Real-World Use Cases and Remaining Weaknesses

  • Character-level skills matter in word games (Quartiles, Wordle-like puzzles), language-learning tasks that dissect morphology, and possibly toxicity detection where users obfuscate insults.
  • Despite progress, models still fail on structured symbol tasks like Roman numerals and can hallucinate in constrained word puzzles or spelling-by-phone scenarios.

Debate Over Testing Relevance

  • One side: these tests are “hammer vs screw” misuse of LLMs; just use deterministic algorithms.
  • Other side: it’s informative and important that systems touted as near-human intelligence still break on seemingly simple symbolic tasks.

Ask HN: Has AI stolen the satisfaction from programming?

Loss of Satisfaction and Sense of “Ownership”

  • Several commenters resonate with the feeling that AI makes both philosophy and programming feel less “theirs”: if an LLM can generate or endorse an idea, it feels less meaningful; if it can’t, the idea feels invalid.
  • For coding, some say: doing it by hand now feels slow and pointless; doing it with AI feels like the work doesn’t “count,” as if credit flows to the model.
  • This feeds into impostor‑syndrome feelings and a sense that once-rigorous crafts (philosophy, politics, programming) are being cheapened.

AI as Accelerator, Not Thinker

  • Many argue the premise “AI automates the thinking” is wrong in practice: models can’t truly reason, and using them without understanding causes technical debt and emergencies.
  • Others see AI as a junior dev or a library: you still design the system, decompose problems, direct the architecture, and review everything.

Learning, Hobbies, and “Worthwhile” Problems

  • A core lament: toy projects (toy DBs, Redis clones, parsers) used to be joyful learning; now they feel “one prompt away” and thus not worth doing.
  • Counterpoints:
    • People already could have copied GitHub repos; this didn’t previously kill the joy.
    • Hobbies are intrinsically “inefficient” (like touring by bike instead of plane); it’s okay to keep doing small projects for learning.
    • New “games” exist, like trying to outperform the LLM or tackling areas with little training data.

Quality, Reliability, and Copyright

  • Some find LLM output banal, wrong, or only suitable for boilerplate and tests—dangerous for critical or novel work unless deeply reviewed.
  • Others report large productivity gains (rewriting major apps, adding many features solo).
  • Debate over whether common code is “boilerplate” or protected expression; some worry AI hides de facto code copying.

Workplace Culture and Expectations

  • Several say the real problem is organizational: pressure to ship AI‑generated code without understanding, and expectations of “10x output.”
  • Others report the opposite culture: devs are expected to fully understand and be responsible for AI‑assisted code.

Analogies, History, and Diverging Reactions

  • Analogies range from Ikea assembly vs woodworking, to hand saw vs table saw, to cameras vs painting and record players vs instruments.
  • Historical parallels are drawn to prior tooling waves (digitalization in surveying, IDEs, libraries).
  • Reactions are split: some feel joy and empowerment are higher than ever; others avoid AI entirely to preserve the “grim satisfaction” of solving problems themselves.

America's future could hinge on whether AI slightly disappoints

Access and framing

  • Original post is paywalled; discussion quickly shifts to macroeconomy and AI rather than the article’s specific arguments.
  • Several commenters think focusing on “AI share of GDP growth” is cherry‑picking, and that tech capex has been rising for a long time due to cloud, not just AI.

Is the economy already “crashed”?

  • Some argue core indicators (unemployment low, GDP positive) look fine while lived experience is “Great Recession–level” sentiment: high housing costs, food inflation, medical bills, and stagnant wages.
  • Others see early signs of a downturn: rising unemployment, weak non‑AI GDP growth, customers cutting spend, and packaging/retail demand falling.
  • There’s debate over how much of recent GDP growth is “real” vs stimulus and low rates, and how much blame belongs to different administrations.
  • Real estate and asset inflation are framed as a hidden tax on younger/working people, subsidizing older asset‑owners.

AI as macro risk / AI bubble

  • Many see the US as “one big bet on AI”: tech is driving a large share of capex and market cap (Nvidia, Microsoft in particular).
  • Concern: even a mild AI disappointment could unwind data‑center spending, trigger corporate defaults (especially where capex is debt‑financed), and puncture stock valuations.
  • Others argue AI is a small slice of overall GDP; even a big AI bust would be macro‑manageable compared to housing or credit bubbles.
  • Some think market cap and capex numbers are being double‑counted (same dollars cycling through vendors, investors, and partners).

Jobs, productivity, and inequality

  • Scenario if AI “works”: massive productivity gains, but potentially widespread white‑collar redundancy, fiercer competition for remaining jobs, and downward wage pressure in already‑low‑paid service roles.
  • Optimists reply that previous technology (plow, electricity) raised living standards and shifted labor into services/experiences rather than eliminating work entirely.
  • Skeptics note AI can often automate both new and old roles, unlike past tools that still required humans in the loop.

Energy, infrastructure, and sector mix

  • Some expect “skyrocketing” electricity costs from AI data centers; others counter that solar and storage costs are falling and could enable local or off‑grid solutions.
  • There’s worry the US has under‑invested in broader infrastructure, manufacturing, and health/biotech while China spreads bets across EVs, batteries, solar, and AI.

AI capabilities vs hype

  • Heavy skepticism that current LLMs are on a straight extrapolation path to AGI: benchmarks like SWE‑bench may be overfit and poor proxies for real‑world autonomy.
  • Daily users report LLMs are genuinely useful accelerants (especially for breadth of tasks) but still unreliable, hallucination‑prone, and bad at systems thinking.
  • Education and medicine are highlighted as domains where AI currently causes harm (cheating, shallow learning) or faces high regulatory and reliability barriers.

Content pollution and social impact

  • Multiple commenters worry that AI‑generated text is flooding the internet, reducing the value of online discourse and undermining trust in what’s human.
  • There’s broader anxiety about AI exacerbating inequality, enabling dangerous biotech, and being used as a political smokescreen amid deeper structural and governance problems.

Environment variables are a legacy mess: Let's dive deep into them

Security of environment variables for secrets

  • Many argue env vars are a poor channel for secrets: same-UID processes can usually read each other’s /proc/<pid>/environ, so any tool or plugin running as the user (LLM agents, editors, extensions) can exfiltrate tokens meant for a single script.
  • There’s debate about how bad this is: one side says since 2012 env access effectively requires ptrace rights, and any process with ptrace can already read all memory; others counter that on default systems same-UID ptrace is broadly allowed, so this is still effectively wide-open.
  • Containers somewhat improve isolation (one container can’t see another’s env), but not against a host process, and “containers as security boundary” is treated skeptically.

Alternatives for handling secrets

  • Suggested approaches:
    • Permissioned files (e.g. config files, ~/.ssh, .netrc), sometimes encrypted and decrypted on demand (SOPS, sqlite-based stores).
    • Secret managers (Vault/OpenBao, CyberArk, AWS/GCP secrets, Conjur) accessed via libraries or sidecars; criticized for lock-in and operational fragility (uptime, upgrades).
    • Systemd’s credential system and encrypted credstore; k8s secrets mounted as files or env vars; TPM-backed secrets; TPM/OAuth/IAM to avoid static secrets.
    • Newer primitives like memfd_secret and FIFOs/pipes where secrets never hit disk or long-lived env.
  • Disagreement over whether pointing to a config file (CONFIG_PATH) is actually more secure than env; SELinux and similar can help but are not cross-platform.
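The pipe-based approach from the list above can be sketched in a few lines. This is a minimal illustration (secret value and setup hypothetical): the parent hands the child an inherited pipe file descriptor, so the secret never appears on disk or in the child’s `/proc/<pid>/environ`:

```python
import os
import subprocess
import sys

# Hypothetical secret; in practice this would come from a secret manager.
secret = b"s3cr3t-token"

# Pass the secret over a pipe the child inherits, rather than via env.
read_fd, write_fd = os.pipe()
os.set_inheritable(read_fd, True)

child = subprocess.Popen(
    [sys.executable, "-c",
     "import os, sys; print(os.read(int(sys.argv[1]), 1024).decode())",
     str(read_fd)],
    close_fds=False,  # allow the child to inherit read_fd
    stdout=subprocess.PIPE,
)
os.write(write_fd, secret)
os.close(write_fd)

out, _ = child.communicate()
print(out.decode().strip())
```

Unlike an environment variable, the secret here is consumed once and is gone; nothing is left for same-UID snoopers to read later.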

Unix security model and isolation

  • Core tension: Unix equates “user account” with “security domain”; many commenters want finer-grained, user-controlled isolation so untrusted tools can’t access all their data.
  • Namespaces and containers are seen as partial, leaky barriers; some recommend real VMs for strong isolation. Others mention seccomp, Landlock, AppArmor/SELinux, Yama, but treat them as mitigations, not cures.

API and implementation quirks

  • setenv() is called fundamentally unsafe on POSIX: getenv() returns raw pointers, so overwriting variables can break other code; some OSes “fix” this by leaking memory instead. Consensus: avoid setenv in libraries; use execve to set env for children.
  • There’s discussion of getenv_r, tracing env access (e.g., Node’s --trace-env), and the ARG_MAX “argument list too long” limit, with xargs as an imperfect workaround.
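The recommended pattern — never mutate your own environment, set it for children at exec time — looks like this in Python, where `subprocess.run(env=...)` ultimately passes the dictionary to `execve()` (`MY_FLAG` is a hypothetical variable used only for illustration):

```python
import os
import subprocess
import sys

# Build a fresh environment for the child instead of calling setenv()
# on ourselves, which in C can invalidate pointers getenv() handed out.
child_env = dict(os.environ, MY_FLAG="1")

result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['MY_FLAG'])"],
    env=child_env,          # handed to execve(); parent env is untouched
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
print("MY_FLAG" in os.environ)  # False: we never mutated our own env
```

The same discipline — treat the environment as immutable in-process, set it only across an exec boundary — sidesteps the setenv() hazards the thread describes.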

Configuration UX and philosophy

  • Complaints about the fragmented, non-persistent ways to set env vars on Linux vs Windows’ single GUI; systemd’s /etc/environment(.d) is cited as a partial unifier.
  • Some see env vars-as-config as abuse: they’re global, opaque, typo-prone, and differ across shells, SSH, cron, etc. Others defend them as the simplest, most portable configuration surface, especially for containers.
  • Conceptually, env vars are compared to globals or dynamically scoped variables that hurt determinism; some advocate more hermetic, fully specified runtimes (NixOS, containers) instead.

$19B Wiped Out in Crypto's Biggest Liquidation

How big was the crash?

  • Some argue it was “business as usual” for Bitcoin: price briefly flash‑crashed (~$104k) but mostly returned to levels seen two weeks earlier.
  • Others stress that, in dollar terms, this was crypto’s largest liquidation event ever, especially because it happened extremely fast and triggered mass forced liquidations rather than voluntary selling.
  • Much of the carnage was in altcoins, with many dropping 60–80% (or more, briefly) in minutes before partially rebounding.

Leverage, liquidations, and exchange mechanics

  • Commenters highlight extreme leverage (100–1000x) and lack of risk management as primary causes of the liquidation cascade.
  • Auto‑deleveraging systems on exchanges closed positions en masse once markets moved, particularly in thinly traded coins with absent market makers.
  • A linked analysis claims attackers exploited a collateral/oracle loophole on Binance’s “Unified Account” system to crash certain collateral assets and trigger liquidations.

Tether’s role and backing debate

  • One view: Tether “printed” around $1B USDT during the drop, providing crucial liquidity and cushioning the fall; Tether is described as a de facto central bank for Bitcoin.
  • Supporters say Tether is now likely fully backed, hugely profitable via Treasury yields and other investments, and has strong incentives not to commit fraud.
  • Skeptics question whether all reserves are real or risk‑free, pointing to historical under‑backing findings, the absence of a full Big‑4 audit, opaque investments, and the difficulty of redemptions.
  • There’s disagreement over whether new US stablecoin regulations (e.g., GENIUS Act) meaningfully address concerns, especially since Tether is not currently US‑regulated.

Insider trading and manipulation concerns

  • Multiple comments allege large, precisely timed short positions were opened shortly before the President’s tariff announcement, yielding hundreds of millions in profit.
  • Some argue insider trading in Bitcoin is illegal under CFTC rules but practically unenforced, especially for politically connected actors.
  • Others point to additional alleged manipulation vectors: price oracles, exchange behavior, and concentrated liquidity providers.

Bitcoin, macro factors, and value debates

  • Many see Bitcoin trading as a high‑beta risk asset, correlated with equities and macro shocks (tariffs, central bank moves).
  • Long‑term holders frame this as a normal, temporary drawdown in a volatile but deflation‑resistant asset.
  • There is extended debate over Bitcoin vs gold, “intrinsic value,” whether Bitcoin is money or a speculative security, and whether it could ever realistically go to zero.

Android's sideloading limits are its most anti-consumer move

What Google’s New Policy Changes

  • Android will require all apps installed on certified devices to be tied to a verified developer identity, even outside Play Store.
  • APK installs will still be possible via adb (e.g., from Android Studio), and there is mention of a free, low‑volume dev tier without ID, but bulk distribution and “just share an APK link” workflows break.
  • Many see this as shifting from “you can install what you want” to “you can only install software from people Google has approved.”

Security vs. Control Debate

  • Pro‑change side:
    • Argues the main goal is to slow malware iteration by forcing attackers to burn real identities and accounts, making cleanup and attribution easier.
    • Frames it as analogous to ID checks at airports or code‑signing prompts on macOS/Windows: annoying for power users but safer for the vast majority who don’t understand security.
  • Skeptical side:
    • Notes Play Store already hosts scams and malware; sandboxing and permissions, not central vetting, are the real defenses.
    • Sees “security” as cover for business goals: protecting ads (e.g., NewPipe/ReVanced), data collection, and cementing gatekeeping power.
    • Emphasizes that restrictions ratchet in one direction; “temporary workarounds” are boiling‑the‑frog.

Impact on FOSS, F-Droid, and Developers

  • F-Droid warns this effectively kills anonymous/open distribution on stock Android: every package ID must be tied to a verified developer Google can ban.
  • Solo devs cite opaque account terminations and permanent bans as already career‑threatening; this raises the stakes and eliminates low‑friction hobby/experimental distribution.
  • Some say this makes Android unsuitable for private/internal apps or niche hardware tools where only an APK exists.

Alternatives and Workarounds

  • Custom ROMs (GrapheneOS, LineageOS, /e/OS) are widely discussed as an escape hatch, but:
    • Hardware support is limited and getting harder (e.g., Pixel device trees, Play Integrity attestation).
    • Banking/government apps increasingly refuse to run on rooted or uncertified systems.
  • Linux phones (postmarketOS, Ubuntu Touch, Sailfish, PinePhone, Fairphone‑based options) are mentioned but seen as immature, with poor app coverage and banking support.
  • Some argue that if Android loses sideloading as a USP, many will just move to iPhone for better hardware/UX and similar lock‑in.

Ownership, Language, and Antitrust

  • Strong sentiment that “if you can’t freely install software, you don’t own the device.”
  • Several argue that even the term “sideloading” is manipulative; they prefer “direct install” or simply “installing software.”
  • Calls for stronger regulation (EU DMA‑style or new laws) and even breaking up Google/Apple; others counter that current US law likely permits these moves, so only new legislation would help.

NanoChat – The best ChatGPT that $100 can buy

Course and educational focus

  • nanochat is positioned as the capstone project for an upcoming LLM101n course from Eureka Labs; materials and intermediate projects (tensors, autograd, compilation, etc.) are still in development.
  • Many see this as high‑leverage education: small, clean, end‑to‑end code that demystifies transformers and encourages tinkering, similar to earlier nanoGPT work.
  • Several commenters relate their own “learn by re‑implementing” projects and expect nanochat to seed new researchers and hobby projects.

Societal, ethical, and IP concerns

  • Supporters hope this kind of open teaching recreates the open‑source effect for AI: broad access to know‑how, not just closed corporate models.
  • Critics argue current AI is largely controlled by big corporations with misaligned incentives; worry about surveillance, censorship, dictatorships, and concentration of power.
  • Strong debate around “strip‑mining human knowledge”: some call large‑scale training data use theft; others argue strict IP over ideas mainly enriches a small owner class and harms the commons.
  • Concerns about LLMs lowering demand for human professionals and creative workers, and about a future full of low‑quality “LLM slop”.

Cost, hardware, and accessibility

  • Clarification: “$100” means renting 4 hours on an 8×H100 cloud node ($24/h), not buying hardware.
  • The trained model is small (~0.5–0.6B params) and can run on CPUs or modest GPUs; only training needs large VRAM.
  • Discussion of running on 24–40 GB cards by reducing batch size, with big speed penalties; some share logs from 4090 runs and cloud W&B setups.
  • A few see dependence on VC‑subsidized GPU clouds and Nvidia as reinforcing an “unfree ecosystem”; others argue the actual contribution is tiny relative to the broader AI bubble.

Model capabilities and practical use

  • nanochat is explicitly “kindergartener‑level”; example outputs (e.g. bad physics explanations) are used to illustrate its limitations, not to claim utility.
  • For domain‑specific assistants (e.g. psychology texts or Wikipedia‑like search), multiple commenters advise using a stronger pretrained model with fine‑tuning and/or RAG rather than training such a tiny model from scratch.

Technical choices: data, metrics, optimizers

  • Training draws on web‑scale text (FineWeb‑derived corpora) plus instruction/chat data and subsets of benchmarks like MMLU, GSM8K, ARC.
  • The project incorporates newer practices (instruction SFT, tool use, RL‑style refinement) and the Muon optimizer for hidden layers, praised for better performance and lower memory than AdamW.
  • Bits‑per‑byte is highlighted as a tokenizer‑invariant loss metric; side discussion covers subword vs character tokenization and the compute/context trade‑offs.

AI coding tools and “vibe coding”

  • The author notes nanochat was “basically entirely hand‑written”; code agents (Claude/Codex) were net unhelpful for this off‑distribution, tightly engineered repo.
  • This sparks an extended debate:
    • Many developers report large productivity gains for CRUD apps, web UIs, boilerplate, refactors, and test generation.
    • Others find agents unreliable for novel algorithms or niche domains, and criticize overblown claims about imminent AGI or fully autonomous coding.
  • Consensus in the thread: current tools are powerful assistants and prototyping aids, but still require expertise, verification, and realistic expectations.

Reception and expectations

  • Many commenters are enthusiastic, calling this “legendary” community content and planning to use it as a learning baseline.
  • Some were misled by the title into expecting a $100 local ChatGPT‑replacement; once clarified as an educational from‑scratch stack, most frame it as a teaching and research harness rather than a production system.

America is getting an AI gold rush instead of a factory boom

AI vs Manufacturing Investment

  • Many see the “AI gold rush” as soaking up capital and power that could have gone into factories and durable productive assets; others note data doesn’t yet show data-center capex crowding out overall equipment investment.
  • Power consumption of AI datacenters and its impact on electricity prices is a recurring concern.
  • Some argue US manufacturing value added is at record highs but growth is modest and jobs are down; others say unit output is flat and GDP hides industrial decline.

AI’s Role in Factories and Robotics

  • Optimists: AI (especially vision and transformer-based control) could drastically expand what robots can do—handling messy, context‑rich tasks, lowering the minimum scale at which automation pays off, and enabling more flexible, “general-purpose” robot workcells.
  • Skeptics: LLMs excel at flexible, fuzzy tasks—the opposite of mass manufacturing’s need for tiny, exact, repeatable instruction sets. Current industrial automation already uses “AI” (ML, vision) where it helps.
  • Some see LLMs’ main manufacturing impact as assisting engineers (design, programming, workflow), not running robots directly.
  • Several commenters dislike that “AI” is used to lump together control/robotics and LLM chatbots, which drives confusion and hype.

Jobs, Wages, and Desirability of Factory Work

  • Fears: AI plus automation could further hollow out the middle class and crush new‑grad and creative jobs, without delivering widely shared gains.
  • Others say current hiring weakness is macro (rates, tariffs, politics), not AI, though belief in AI makes managers more willing to cut headcount.
  • Long subthread on why US factories struggle to hire: monotonous, physically demanding, often unsafe work versus similar or lower-paid service jobs with easier conditions.
  • Disagreement over whether “higher pay and better benefits” claims from employers are substantial or illusory; unions and working conditions (breaks, music, respect) are central themes.

US vs China Industrial Capabilities

  • Multiple commenters argue China has quietly built deep process knowledge, heavy automation, and broad tech leadership, while the US financialized and offshored its industrial base.
  • Others note many Chinese factories still rely on labor‑intensive assembly, and demographic decline will force more automation globally.
  • Debate over whether US can realistically rebuild manufacturing capacity at scale after losing tooling ecosystems and skills, versus targeted, highly automated reshoring (EVs, chips, defense).

Trade, Tariffs, and National Security

  • Competing views: tariffs as necessary to preserve strategic industries vs tariffs as a regressive tax that makes everyone poorer.
  • Strong argument that some domestic manufacturing is essential for leverage and security (we must be able to “build it ourselves” if trade breaks down), but not everything can or should be onshore.
  • Japan’s protectionist playbook and China’s import substitution are cited as examples where import barriers worked only with long‑term, coordinated industrial policy.

AI Bubble, ROI, and AGI Bets

  • Widespread unease that AI resembles past bubbles: enormous capex into rapidly obsoleting hardware, unclear sustainable business models, and “too big to fail” political backing.
  • Practitioners report real but modest productivity gains (coding help, summarization) alongside new costs: reviewing AI-generated “slop,” hallucinations, and brittle integrations.
  • Intense argument over the “AGI race”: some claim whoever reaches AGI first will dominate geopolitically, justifying massive overinvestment; many others doubt LLMs can reach or safely control AGI and question the wisdom of betting an economy on that assumption.

Structural Barriers to a Factory Boom

  • Experienced founders describe capital markets, startup culture, and exit environment as heavily biased toward software and asset‑light “middleman” models; financing bus‑sized machines in rich countries is hard, exits are scarce, and supply‑chain fragility is rising.
  • Even where demand exists, regulatory delay, permitting, and fragmented policy make standing up new plants slow and risky compared with software or Chinese manufacturing.

Ofcom fines 4chan £20K and counting for violating UK's Online Safety Act

Enforceability and Symbolism of the Fine

  • Many argue the fine is effectively unenforceable: 4chan is US‑based, apparently has no UK presence or assets, and is unlikely to pay or materially suffer.
  • Others counter that it’s not “just symbolic” because non‑payment can, in principle, lead to arrest if responsible individuals travel to the UK (and potentially to countries that cooperate with UK enforcement).
  • There’s disagreement over how serious a loss it is to be effectively barred from the UK; some see it as trivial, others as a meaningful restriction on freedom of movement.

Jurisdiction, Extradition, and “Police State” Claims

  • Long subthreads debate extra‑territorial jurisdiction: whether a country can demand compliance from foreign sites merely because they’re accessible locally.
  • Some draw parallels to Russian fines against Google and Belarus/Russia‑style tactics; others insist the UK remains a flawed democracy enforcing bad laws, not an outright authoritarian state.
  • The risk of unexpected diversions (flights rerouted to the UK) is raised as a non‑obvious way people could be exposed to arrest.

Censorship, Blocking, and VPNs

  • Many expect Ofcom’s real endgame is to order UK ISPs to block 4chan after demonstrating “non‑compliance” through steps like this fine.
  • Commenters note the UK already blocks some sites (e.g. Pirate Bay) and see 4chan as a “think of the children” test case to justify stronger blocking powers.
  • VPNs and Tor are discussed as workarounds; some mention recurring political interest in restricting VPNs or compelling key disclosure, and the massive infosec and corporate IT fallout such moves would create.

Online Safety Act Goals vs Overreach

  • Supporters emphasize the Act’s stated aim: preventing children from accessing porn and harmful content (including suicide‑encouraging sites like “Sanctioned Suicide”).
  • Others doubt technical feasibility or effectiveness, argue that parenting and existing laws should be primary controls, and worry that bans on suicide discussion could hinder harm‑reduction or research.
  • There’s recognition that “child protection” rules can be used to push platforms out indirectly instead of overtly censoring them.

Platform Power, Precedent, and Internet Fragmentation

  • Several note that large platforms can afford compliance, while smaller sites cannot, so laws like this entrench incumbents.
  • Comparisons are drawn to GDPR and Russian data‑localization demands as precedents for extraterritorial regulation that can conflict across jurisdictions.
  • Some advocate sites simply blocking the UK; others fear this leads toward a balkanized, nation‑firewalled internet where the most restrictive law effectively governs everyone.

Ofcom’s Strategy and Regulatory Politics

  • One view: Ofcom is just mechanically enforcing a bad law, following a statutory escalation path (information requests → fines → potential blocking).
  • Another view: targeting a notorious but relatively poor US site with legal support is tactically dumb and will show other US companies they can defy Ofcom.
  • Skepticism extends to UK regulators generally (Ofcom, Ofwat, Ofgem), with accusations of incompetence, regulatory capture, and political pressure from moral‑panics and “think of the children” constituencies.

Software update bricks some Jeep 4xe hybrids over the weekend

Car Software Safety and Aviation Comparisons

  • Several argue car software needs airline-level rigor; others note avionics rely on strict processes, manual updates, and redundancy that automakers haven’t adopted.
  • Some think only a mass‑casualty incident (or a high‑profile death) will force that level of seriousness.
  • Others counter that computer‑controlled cars are still a huge net win for performance, emissions, and safety; the problem is implementation, not the idea.

OTA Updates, System Isolation, and Failure Mode

  • Many are shocked that an OTA “infotainment”/telematics update can disable the powertrain mid‑drive; they expected strict isolation between entertainment and drive systems.
  • Others explain that modern OTAs routinely update ECUs, TCMs, BCMs, etc., with the infotainment unit acting as gateway; that’s how serious defects can be fixed remotely—but also how cars can be “borked.”
  • Some insist mission‑critical components should never be updated OTA at all, or at least only when parked at home, with clear rollback paths and user‑controlled timing.
  • There is confusion over what exactly was updated and in what order (infotainment vs telematics vs core controllers).

Rollback, Testing, and Cost/Process Pressures

  • Multiple commenters describe robust A/B or dual‑image update schemes used in cheap IoT devices and industrial gear, and are baffled these aren’t standard in high‑end cars.
  • Others note A/B only protects against interrupted flashes, not deeply buggy new firmware, and that auto bootloaders often forbid downgrades.
  • Strong suspicion that cost‑cutting, outsourced development, and deadline pressure (including pushing fleet‑wide updates on a Friday) trumped good engineering and QA.
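The dual-image scheme commenters describe can be sketched in a few lines (field names hypothetical). It also makes the stated limitation visible: the fallback fires only when the new image fails to boot cleanly, not when it boots fine but behaves badly:

```python
# Minimal sketch of A/B boot-slot selection. The bootloader boots the
# active slot; the OS marks it good only after a verified successful
# boot, and repeated failed attempts trigger fallback to the other slot.
MAX_ATTEMPTS = 3

def select_boot_slot(state: dict) -> str:
    active = state["active"]
    slot = state[active]
    if slot["boot_attempts"] >= MAX_ATTEMPTS and not slot["marked_good"]:
        # New image never came up cleanly: fall back to the known-good slot.
        return "b" if active == "a" else "a"
    slot["boot_attempts"] += 1
    return active

state = {
    "active": "b",  # slot B holds the freshly flashed image
    "a": {"marked_good": True, "boot_attempts": 1},
    "b": {"marked_good": False, "boot_attempts": 3},  # three failed boots
}
print(select_boot_slot(state))  # falls back to slot "a"
```

A deeply buggy firmware that still reaches the “mark good” checkpoint defeats this scheme entirely — which is why A/B alone is not the whole answer.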

Ownership, Control, and Regulation

  • Many question whether they truly “own” a car that can be remotely altered or disabled, and worry about kill‑switch capabilities being abused by creditors, governments, or attackers.
  • Several call OTA access to core vehicle systems a national‑security risk and argue it should be illegal or tightly regulated, with aviation‑style accountability and possibly criminal liability for safety‑critical bugs.
  • Others are pessimistic that regulators or markets will fix this soon.

Experiences with Jeep and Other Brands

  • Numerous anecdotes portray Stellantis/Jeep electronics as glitchy for years: random warnings, failing cameras, climate and seat issues, electronic parking brakes misbehaving.
  • A 4xe owner describes zero clear communication from Jeep, contradictory forum guidance, clueless dealers, and no way to know if one has the bad update or the fix, while still risking sudden power loss.
  • Similar infotainment‑quality complaints surface about Land Rover, Mercedes, Mazda, Hyundai, etc., often traced to underpowered hardware and outsourced, low‑priority software.

AI-Assisted Coding Speculation

  • Some immediately blame “AI‑assisted coding,” citing Stellantis’ recent AI‑adoption announcement.
  • Others push back, noting no concrete evidence ties this specific failure to AI; at most, the timing is troubling but unproven.

Backlash and Desire for Simpler Cars

  • Many commenters express renewed desire for “dumb” cars: no OTA, no connectivity, physical buttons, minimal ECUs, even older Japanese models or simple off‑roaders, accepting fewer digital features in exchange for predictability and control.

Smartphones and being present

Managing notifications and attention

  • Many describe aggressively taming notifications: batching them a few times per day, permanent Do Not Disturb with a tiny whitelist, or using Focus Modes to hide almost all alerts on both Android and iOS.
  • Some physically separate themselves from the phone (phone box, leaving it in another room, using a wristwatch instead of checking the lock screen for time).
  • Tools like Screen Time, app timers, focus modes, “flip-to-shh,” and third‑party blockers (Lock Me Out, Bloom+Freedom, Clearspace) are used to add friction or hard lockouts; people differ on whether these are enough if motivation is low.
  • One camp insists you must address the underlying “escape” need behind doomscrolling; others report dramatic improvements from strict technical limits, even with ADHD.

Social media, short-form video, and addiction mechanics

  • Short-form video with infinite scroll is repeatedly likened to slot machines, cigarettes, and hard drugs: fast dopamine loops, suspense, and constant novelty make it hard to stop, and kids are seen as especially vulnerable.
  • Several people recount immediate gains in sleep and mood after removing phones from bedrooms or quitting Reels/TikTok; others notice involuntary relapse once the phone returns.
  • Some feel “immune” to TikTok-style content but admit similar compulsions toward text-based forums, drama threads, or news comments, arguing it’s the variable‑reward loop, not the medium.

Apps, hostile mobile web, and recommendation engines

  • There’s strong resentment toward being forced into apps (QR-only townhalls, school/sports platforms, social links that are unusable without installing the app). Workarounds include desktop-mode browsing, uBlock/annoyance lists, custom CSS, and Telegram download bots.
  • YouTube is a major battleground: some see its recommendation engine as uniquely valuable—like a knowledgeable librarian—while others say recommendations inevitably hijack attention, so they delete history, disable recs, or use extensions (Unhook, Untrap, SocialFocus, alternative clients).
  • Reddit, Instagram, and TikTok mobile experiences are widely criticized as intentionally broken or manipulative, pushing users toward apps and deeper engagement loops.

Alternative devices and “dumbification”

  • Strategies include tiny phones, e‑ink Androids, old iPhones, de-Googled devices, disabling browsers/App Store, or even abandoning smartphones entirely and relying on cash and offline tablets.
  • Critics argue smartphones can also be powerful creative tools (camera, audio, notes, field work), and that blanket claims like “phones are pure consumption” ignore younger generations who do real work on them.

Presence, boredom, and context

  • Many try to reclaim “being present” by reading paper books, taking walks, traveling, or just letting the mind wander instead of reflexive phone use.
  • Others push back: in lonely, unsafe, or hyper-digital societies, the phone is described as a necessary escape or social lifeline, and not everyone wants to be more present in their immediate environment.

No science, no startups: The innovation engine we're switching off

Innovation, control, and short‑termism

  • Several comments argue that incumbents (companies, elites, nations) see startups and radical innovation as threats to control, so they instinctively suppress novelty.
  • This is framed as “short‑term thinking” reinforced by quarterly earnings and election cycles; decision‑makers optimize for extracting wealth now and being gone before long‑run consequences arrive.
  • Some see the US as in a leveraged‑buyout phase: strip assets, underinvest, and let future stability be someone else’s problem.

Why corporate labs declined

  • One camp accepts the article’s narrative: mid‑century corporate labs (Bell, Xerox PARC, IBM, etc.) were funded by monopoly profits and high tax rates that made R&D a smart way to avoid tax; financialization + stock buybacks shifted surplus to shareholders instead of basic science.
  • Others say buybacks are overstated: firms could always return cash via dividends, and the real drivers were:
    • End of regulated monopolies.
    • Antitrust actions.
    • Bayh‑Dole moving basic research into universities via exclusive licensing.
    • Management failures and bureaucracy; research groups as internal power centers that leadership disliked.
  • There’s sympathy for the loss of “pure” corporate labs, but also skepticism: many of those firms failed to capitalize on their own breakthroughs, so the model was commercially fragile.

Stock buybacks, dividends, and incentives

  • Long subthread dissects whether buybacks inherently crowd out R&D:
    • One side: buybacks are economically similar to dividends, just more tax‑efficient and timing‑flexible; they simply move capital to where markets see better returns.
    • Other side: tying executive comp and investor expectations to stock price makes buybacks a politically easy way to juice metrics, unlike uncertain, slow‑payoff basic research.
  • There’s debate over who benefits most (option‑holding executives, frequent sellers, wealthy margin borrowers vs all shareholders) and whether legal/fiduciary norms (“maximize shareholder value”) force short‑termism.
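The claimed buyback/dividend equivalence can be checked with stylized arithmetic. A minimal sketch, assuming a frictionless market with no taxes or signaling effects (all numbers are invented for illustration): a firm worth $1,000 with 100 shares returns $100 either way, and per‑share wealth comes out the same.

```python
firm_value = 1_000.0
shares = 100
payout = 100.0

# Dividend: every share receives cash; firm value drops by the payout.
div_per_share = payout / shares                         # $1.00 cash
price_after_dividend = (firm_value - payout) / shares   # $9.00 share
dividend_wealth = price_after_dividend + div_per_share  # $10.00 total

# Buyback: firm repurchases shares at the pre-payout price of $10.
price = firm_value / shares
repurchased = payout / price                            # 10 shares retired
price_after_buyback = (firm_value - payout) / (shares - repurchased)
buyback_wealth = price_after_buyback                    # $10.00 per remaining share

print(dividend_wealth, buyback_wealth)  # 10.0 10.0 -- equivalent pre-tax
```

The disagreement in the thread is precisely about what this sketch assumes away: tax treatment, option‑holding executives whose compensation tracks share price rather than total return, and the timing flexibility of buybacks.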

Role of government, universities, and “planned” science

  • Many agree that only government can consistently fund long‑horizon, high‑risk basic science; companies and VCs mostly do applied work and optimization.
  • Others describe public science funding as a “planned economy” dominated by committees, politics (including DEI fights and “kissing the ring” in grant language), and bureaucracy; they see academia as a status racket often hostile to practical innovation.
  • There’s concern that US science agencies are being politicized and that cuts will erode the innovation base; counter‑voices argue the existing system was already misallocating talent and failing to turn discoveries into domestic industry.

Is there still anything to discover? Science vs engineering

  • A minority claims “there’s nothing left to research, only optimizations”; most commenters strongly reject this as naïve, pointing to ongoing frontier work in materials, quantum computing, biology, batteries, etc.
  • Multiple comments stress the distinction and interdependence of:
    • Science: generating new explanatory knowledge, generally in universities and national labs.
    • Engineering: turning that knowledge into rockets, LLMs, drugs, chips, etc. (SpaceX, Ozempic, GPT‑4 are cited as engineering atop decades of prior science).
  • Some argue recent slowdown in visible “game‑changers” may be real, making science look lower‑ROI and politically vulnerable; others see that as an illusion of perspective.

Incentives and careers in science

  • Commenters describe academic science as funding‑driven rather than curiosity‑driven, with harsh career funnels (PhD → postdoc → rare tenure), heavy grant‑writing load, and sometimes misaligned evaluation metrics.
  • There’s frustration that PhDs can be treated as hiring “red flags” in industry and that pitch‑deck/VC styles are invading research evaluation.
  • Despite all this, several insist that basic science remains essential infrastructure for future startups and national prosperity, even if the return is diffuse, delayed, and hard to measure.