Hacker News, Distilled

AI-powered summaries for selected HN discussions.

Out of Africa: celebrating 100 years of human-origins research

Out of Africa vs. Multiregional / “It’s Complicated” Models

  • Some argue genetic data has decisively killed classic multiregionalism; alternatives are framed as crackpot or religious.
  • Others counter that leading geneticists describe a more uncertain picture between ~2M–500k years ago: ancestors of modern humans may have been in both Africa and Eurasia and later mixed.
  • A commenter lists challenges to a simple, recent single-exodus model: very old “sapiens-like” fossils in Eurasia, archaic DNA in non-Africans not seen in Africans, divergent Y-chromosome lineages, tool continuity in Asia, and early American evidence.
  • Replies emphasize:
    • Earlier out-of-Africa migrations and multiple bottlenecks are already part of the mainstream model.
    • Admixture with Neanderthals, Denisovans, and unknown archaics fits “OOA + interbreeding,” not multiregional continuity.
    • Fossil dates before Homo sapiens or from small, extinct side lineages don’t overturn African origins of the main lineage.

Religion, Ethnicity, and Origin Myths

  • Discussion over which religious groups prefer multi-regional vs. single-origin myths.
  • Nation of Islam’s Yakub story is cited as an example of modern myth-making; there’s debate over whether it is truly “Islamic” or a separate, syncretic religion.
  • Idea that many ethnic/tribal religions articulate distinct origin myths for their own group.

Politics, Racism, and How Theories Are Used

  • Some say multiregionalism historically appealed to racists and nationalists (e.g., as a Han-Chinese homeland narrative); OOA undercuts claims of deep racial separation.
  • Others argue OOA has also been weaponized to claim post-African groups are “more evolved,” showing that racist frameworks simply absorb new data.
  • One commenter explicitly defends presenting OOA in a way that maximizes its anti-racist, unity-promoting power, even as details get more complex.

Genetics, Race, and Behavior

  • Long subthread on whether scientists avoid uncomfortable interpretations about population differences in cognition and behavior.
  • Points raised:
    • Population structure is easily detectable in genomes; alleles affecting traits should, in principle, differ in frequency between groups.
    • Critics respond that within-group variation exceeds between-group differences, environment and confounders are huge, and much “race IQ” work has been methodologically flawed or later discredited.
    • Some defend strong taboos around “scientific racism,” citing historical abuse; others say suppression invites conspiracy thinking.

Methodology, Bias, and Survivor Effects

  • Question about fossil survivor bias: why so many early remains in Africa vs. elsewhere?
  • Responses: preservation differences (e.g., bogs vs. tropics) explain uneven fossil records, and multiple converging lines of evidence (morphology, founder effects, molecular clocks) all point to Africa; fossil find locations alone are not treated as marking the exclusive point of origin.

Neanderthals, Taxonomy, and Disease

  • One commenter notes several sequenced Neanderthals carry a variant implicated in congenital adrenal hyperplasia, which could depress their population size and exert immune-system selection pressure; they say this appears under-discussed.
  • Another predicts future reclassification of many “archaic species” as H. sapiens subspecies, arguing genetic distances are small and current taxonomy may reflect prestige-seeking.
  • Some prefer to view all Homo members as “human,” suggesting labels like Homo gregarius to emphasize sociality over supposed superior “wisdom.”

Books and Miscellany

  • Recommended reading on human origins and related topics includes narrative works on fossil hunters, Leakey’s memoir, Neanderthal life, and specific discoveries.
  • Brief aside connects “Pontic Steppe” not to human origins but to leading theories on the Proto–Indo-European language homeland.

Aluminum batteries outlive lithium-ion with a pinch of salt

Missing energy density & article criticism

  • Many commenters focus on the line “energy density will need to be improved” and note that neither the article nor headline numbers clearly quantify it.
  • This omission is seen as a major red flag: without energy density, cost, and charge/discharge characteristics, you can’t judge commercial viability.
  • IEEE Spectrum is criticized for:
    • Using misleading “typical Li-ion” cycle life (300–500 cycles) when many modern chemistries achieve thousands.
    • Glossing over trade-offs and failing to contextualize the research paper’s data.

Li-ion performance corrections & lifespan nuances

  • Commenters note:
    • LFP (LiFePO4) routinely achieves ~3000+ cycles to 80% capacity, with claims up to ~6000.
    • NMC/NCA chemistries in EVs and Powerwall-type products show much better lifetime than the article suggests.
  • 80% state of health (SoH) is the industry-standard “end of life” threshold; practical runtime can deteriorate faster than that single percentage implies.
  • Depth of discharge, charge limits (e.g., capping at 80%), and temperature strongly affect lifetime.

Potential applications: grid, stationary, and devices

  • Debate on whether energy density “matters” for grid storage:
    • One side: mass/volume are secondary; cost, safety and longevity dominate.
    • Other side: footprint, structural load, monitoring complexity, and round-trip efficiency still make density relevant.
  • Aluminum’s long cycle life and potential safety advantages (less fire-prone) are seen as promising for:
    • Grid-scale and building storage.
    • Second-tier use cases (plug-in hybrids, possibly gadgets) where ultra-high density isn’t critical.

Lithium vs aluminum: cost, abundance, sustainability

  • Disagreement over how “rare” or “expensive” lithium is; its price has been volatile but is still a significant multiple of aluminum’s.
  • Aluminum is far more abundant in the crust and benefits from mature, efficient recycling; lithium mining and recycling remain more resource-intensive.
  • Several argue that, at very large scale, aluminum-based storage would be more sustainable if technical hurdles are solved.

Technical characteristics of the Al-ion approach

  • The paper uses a solid-state electrolyte with aluminum fluoride and fluoroethylene carbonate; fluorinated species raise toxicity questions but are compared to existing Li battery salts.
  • 99% capacity retention after 10,000 cycles is highlighted as impressive, though commenters want total energy-delivered metrics rather than just percentage retention.
  • Dimensional change during cycling—historically a big Al-ion concern—is reported as small, which, if accurate, is a meaningful advance.
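
The total-energy-delivered metric commenters ask for can be sketched with a back-of-envelope calculation. The 1 kWh rated capacity is hypothetical and linear capacity fade is a simplifying assumption; the cycle counts and retention figures are the ones quoted in the thread.

```python
def total_energy_delivered(rated_kwh: float, cycles: int,
                           final_retention: float) -> float:
    """Lifetime energy throughput, assuming capacity fades
    linearly from 100% to `final_retention` over `cycles`."""
    avg_capacity_kwh = rated_kwh * (1 + final_retention) / 2
    return avg_capacity_kwh * cycles

# Hypothetical 1 kWh cells, using the thread's figures:
al_ion = total_energy_delivered(1.0, 10_000, 0.99)  # Al-ion: 99% after 10k cycles
lfp = total_energy_delivered(1.0, 3_000, 0.80)      # LFP: 80% after ~3k cycles
print(f"Al-ion: ~{al_ion:.0f} kWh  LFP: ~{lfp:.0f} kWh")
```

On these assumptions the Al-ion cell delivers roughly 9,950 kWh against LFP's 2,700, which is why throughput, and not just a retention percentage, is the metric commenters want reported.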

Alternative chemistries & competitive landscape

  • LFP is repeatedly cited as a strong incumbent: cheap, safe, long-lived, and already in many EVs and stationary systems.
  • Other non-Li options discussed: iron flow batteries, nickel–iron (extremely long-lived but heavy and self-discharging), heated-sand storage.
  • Consensus: if any non-lithium chemistry gains traction, it will likely start in stationary/grid applications, but it must beat rapidly improving LFP and other mature tech on cost, safety, and practicality.

Hype, skepticism, and expectations

  • Many frame this as another “Better Battery Bulletin”: exciting lab result, but far from market, with missing key metrics.
  • Some suspect such stories can encourage “wait for the next thing” attitudes toward EV adoption.
  • Others remain optimistic that steady, incremental progress across many chemistries will cumulatively reshape energy and mobility, even if no single breakthrough dethrones lithium soon.

US Cloud soon illegal in EU? US punches first hole in EU-US Data Deal

Legal conflict: US surveillance law vs EU privacy rules

  • Many see a fundamental clash between GDPR and US laws like the CLOUD Act, FISA 702, and EO 12333, which can compel access to data held by US companies anywhere in the world.
  • Key point: data location (EU datacenter) is increasingly viewed as irrelevant; control and ownership by a US entity is what matters.
  • There is confusion and disagreement over whether EU subsidiaries of US firms are directly subject to the CLOUD Act, or whether it applies only to the (often US‑based) data controller.
  • Commenters note that adequacy decisions and Standard Contractual Clauses work for many countries, but the US is seen as uniquely incompatible.

Sovereign cloud and corporate structuring

  • US hyperscalers are building “sovereign cloud” offerings (AWS, Oracle, Google+T‑Systems), with EU‑only infrastructure and staff, but many argue that as long as a US parent exists, FISA/CLOUD Act risk remains.
  • Others speculate about forcing divestiture: requiring US firms to sell EU operations or ensure majority EU ownership, but expect they’ll prefer local subsidiaries/JVs over true sales.
  • European alternatives mentioned: Hetzner, OVH, IONOS, Scaleway, Stackit (Lidl), Evroc, various government clouds. Concerns remain about capacity and fragmentation.

Practical impact and feasibility

  • Some welcome a hard break with US cloud/SaaS (“please let this happen”), expecting growth of EU providers and renewed technical competence (e.g., running own email).
  • Others predict “economic suicide” and “biblical chaos” if US cloud and SaaS (AWS/Azure/GCP, Office 365, Google Workspace, GitHub, WhatsApp, iCloud, Gmail, etc.) were suddenly unusable for EU business and government.
  • Several note that regulations roll out slowly with long grace periods; a sudden cutoff is seen as politically impossible. Likely pattern: new interim deals, later struck down again.

Digital sovereignty, geopolitics, and industrial policy

  • Strong current of frustration that EU talent and markets “subsidize” US tech giants; some call for bans, tariffs, or “localization” similar to China/India.
  • Others warn that import‑substitution and state‑picked “national champions” usually produce worse, more expensive services and burden consumers.
  • Debate extends into security (US military role vs EU+UK capabilities), future EU–Russia relations, and media influence, but these points remain contested and speculative within the thread.

Trust, government clouds, and OS stack

  • Proposal: EU‑run citizen cloud with free storage, treating cloud as critical infrastructure; opponents distrust their own governments as much or more than foreign ones.
  • Consensus among security‑minded commenters: treat any cloud provider as a potential adversary; use client‑side encryption, though this is too complex for most users.
  • Some extend sovereignty concerns down to Linux distros and binaries, worrying about compelled backdoors in US‑controlled components; others see this as overreach but indicative of rising distrust.

Hotline for modern Apple systems

Security and Protocol

  • Original Hotline traffic is unencrypted plaintext over TCP; no TLS/SSL in the classic Mac/Windows clients.
  • Some unofficial *nix ports added basic encryption, but this wasn’t interoperable with the official clients.
  • Commenters note that at the time (late 90s), even email, IRC, and web browsing were typically unencrypted, with HTTPS used mostly for credit-card pages.
  • One person argues encryption isn’t needed if you’re unconcerned about MITM for this use case; others note there was a TLS-enabled successor (KDX) but it came late and never became dominant.

Core Features and UX

  • Described repeatedly as a “TCP/IP BBS” or “community in a box”: file sharing, live chat, message boards (news), and user lists in a single UI.
  • Trackers let you discover servers from within the client; some remember “tracker trackers” that indexed trackers themselves.
  • Some servers enforced upload–download rules or “requests” folders to gate access; banners and puzzles were sometimes used to distribute passwords.
  • There were hidden Ctrl-F12 commands for secret icons, joke modes (e.g., pig/oink), basic ciphers, and ratio reporting.

Community, Culture, and Use Cases

  • Strong memories of Hotline as a Mac-centric, semi-underground culture: piracy (apps, games, ROMs, MP3s), niche music scenes, anime, and tech/hacker communities.
  • Servers each had their own vibe and cliques; getting a non-guest account was a status milestone.
  • Several people credit specific servers (e.g., programming or REALbasic-focused ones) with kickstarting their careers, friendships, and long-term interests.
  • Stories include sneaking dial-up access at night, running servers on university T1/T3 lines or early cable/ADSL, and learning sysadmin skills managing disks and bandwidth.

Relation to Other Systems

  • Compared with Napster/Limewire/Kazaa/Soulseek: Hotline is seen as more community-centric, less purely search/transfer oriented.
  • Parallels drawn to BBSes, Citadel, Reticulum-based tools, Freenet, and BeOS tools like BeShare.
  • Carracho and especially KDX are remembered as spiritual or direct successors; KDX added TLS and a more futuristic UI.

Ongoing Scene and Modern Revival

  • Trackers and some original servers still run; hltracker.com was repurposed so vintage clients “just work.”
  • New FOSS clients exist (e.g., Qt-based), and archival sites host classic binaries and documentation.
  • Many express deep nostalgia for Hotline’s “cozy,” small-community feel versus today’s social media.

Software Efficiency Tangent

  • A long subthread contrasts Hotline’s ~10 MB RAM footprint with modern bloat (Electron apps, heavy browsers).
  • Debate centers on trade-offs: developer time vs. efficiency, user hardware growth, UX improvements (HiDPI, compositing) vs. resource use, and whether poor performance has been normalized.

Some terminal frustrations

Skill Issues vs. Design Problems

  • One camp frames most terminal pain as “skill issues”: Unix tools are old, deeply embedded, and powerful; users should invest in learning them (man, info, readline, job control, etc.).
  • Others push back that this becomes gatekeeping: dismissing legitimate UX problems as laziness, ignoring empathy and modern expectations, and treating memorizing flags as moral virtue.
  • There’s tension between respecting historical constraints (80x24, tiny RAM) and acknowledging that hardware and user expectations have changed.

Editing, Copy/Paste, and the TTY Model

  • Many want terminals to behave like normal text fields: click-to-edit commands, GUI-standard shortcuts for copy/paste, ESC to reliably “escape” a program.
  • Replies note this is mostly about shells/REPLs, not terminal emulators: the screen is a log, input is separate. Readline, zsh, fish, rlwrap already support cursor editing; some terminals support mouse placement.
  • Copy/paste shortcuts are a major annoyance: Ctrl-C vs SIGINT, Ctrl-Shift-C/V, Command-C/V on macOS, Windows-style behaviors, X11 primary selection (select + middle-click) vs “clipboard”.
  • Some describe key remapping setups (Super/Command as copy/paste, Ctrl reserved for terminal signals) as partial fixes.

History, Discoverability, and “Second Brains”

  • Remembering commands/flags is widely seen as the real bottleneck.
  • Tools like fzf, Atuin, mcfly, hishtory, zsh+Ctrl‑R, long HISTSIZE, and project-scoped history are praised as “second brains” for complex commands.
  • Others prefer curated “playbooks” over raw history search.
  • Discoverability of tools and features is poor; people report learning via random tips over years. Sites like Terminal Trove and cheat.sh/tldr try to fill this gap.
  • Several describe using LLM-based helpers (llm, gh copilot, how.sh, Pal) to generate commands or explain errors directly in the terminal.

CLI Argument Conventions and Help

  • Strong sentiment that --help (and usually -h) should always work and do nothing but print help. Programs that reject --help or distinguish -help/--help are called hostile.
  • Complaints about:
    • Inconsistent flags (-h = help vs host vs halt).
    • Single-dash long options (common in Go/Java tools, find).
    • Tools that spew pages of help on minor errors vs concise diagnostics.
  • Some argue man pages are the true “standard”; others note many tools ship no man pages, so in-band help is essential.
  • Writing good CLIs (clear --help, good defaults, meaningful errors, examples) is viewed as a serious UX skill.
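
The convention is easy to honor in most ecosystems; a minimal sketch using Python's argparse, where -h/--help are auto-wired to do nothing but print help and exit (the `mytool` name and `--host` flag are hypothetical, not from the thread):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # argparse registers -h/--help automatically: both print usage
    # and exit 0, and nothing else is ever bound to them.
    parser = argparse.ArgumentParser(
        prog="mytool",  # hypothetical tool name
        description="Demonstrates conventional --help behavior.")
    # A real option should use a distinct long name; -h stays reserved.
    parser.add_argument("--host", help="server to connect to")
    return parser

args = build_parser().parse_args(["--host", "example.org"])
print(args.host)  # example.org
```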

Colors, Terminfo, and Terminal Standards

  • Hard-coded ANSI escapes (bypassing terminfo) are blamed for broken colors and capabilities across terminals and SSH hops.
  • Proposed direction: use terminfo or XTGETTCAP-like queries to detect capabilities in-band; never output escape codes when stdout isn’t a TTY.
  • Frustration that many developers don’t even know terminfo exists; terminal knowledge seen as a “lost” foundation.

Alternative Tools and Ecosystems

  • Frequent recommendations: tmux (with tweaks like inheriting current directory), zellij, fzf, vivid for LS_COLORS, eza, fd, ripgrep, bat, ugrep, Atuin, Emacs/ansi-term, Lightkeeper GUI, and more.
  • PowerShell sparks a split: some praise piped objects and structured serialization; others find it verbose, non-portable, and conceptually misguided compared to plain-text Unix pipelines.

Philosophy and Legacy

  • Some celebrate the resilience and composability of the “dumb” text-only stack (SSH → shell → tmux → editor) and its backward compatibility, even with 1980s hardware.
  • Others argue the terminal-as-emulator model is archaic and blocks better interaction designs, but recognize replacing it without breaking everything is extremely hard.

Linux Running in a PDF

Novelty and limitations of Linux-in-a-PDF

  • Commenters find the project amusing and impressive: Linux (and even vi) running inside a PDF via a RISC‑V virtual machine.
  • Many jokes about running Doom (and DOOM Emacs), recursive “Linux.js in a PDF in a browser in Linux…”, and tongue‑in‑cheek ideas like “proxmox.pdf” and “mobile Kubernetes cluster” on a USB stick.
  • Some note that, despite the headline, it’s effectively “Linux in a PDF in Chromium,” since the file only works in Chromium-based browsers and not in Adobe Reader, Firefox, Evince, Safari, or typical printer workflows.

Runtime environment: Chromium, JavaScript, and VMs

  • The implementation relies on JavaScript execution inside the PDF, which is well-supported by Chromium’s PDF engine but not consistently elsewhere.
  • People connect this to earlier “code in PDF” exploits: Tetris, Doom, Z‑machine interpreters, and other PostScript-based hacks.
  • There’s discussion about Turing completeness: PostScript is Turing-complete, but PDF only includes a restricted subset; later, JavaScript was added, reintroducing full programmability.

Security concerns and attack surface

  • Several commenters highlight that PDFs are a long-standing malware vector, especially with complex interpreters (JS, PostScript) and large, vulnerable codebases like Acrobat and Ghostscript.
  • Some suggest turning off scripting in viewers (e.g., a specific Firefox setting) or converting PDFs through Ghostscript or to formats like DjVu or OpenXPS.
  • Others warn that Ghostscript itself is risky on untrusted input and should be sandboxed (e.g., containers, gVisor).
  • One person notes VirusTotal flags this PDF with a few detections, though the significance is unclear.
  • There’s speculation about future attacks as LLMs begin “ingesting” PDFs with embedded active content.

PDF viewers, UX frustration, and alternatives

  • Strong dislike for modern Adobe Reader: AI assistant overlays, cluttered UI, and perceived hostility to simple reading.
  • Users mention Sumatra (including portable builds), Okular, built-in browser viewers, Edge’s PDF support, MuPDF, and mobile readers.
  • On locked-down systems, executable viewers are often blocked while PDFs are allowed, making “PDF as app platform” oddly attractive.

“Because you can” vs. usefulness and accessibility

  • Debate over whether such stunts are a waste of time versus valuable curiosity-driven exploration and security education.
  • Some lament “dynamic PDFs” that are just shells for JS-loaded web content, calling them anti‑PDF: worse for accessibility, longevity, and screen readers, which often see only a “Please wait…” placeholder.

Subway crime plummets as ridership jumps significantly in congestion pricing era

Overall safety vs perception

  • Many commenters argue NYC subways are very safe given scale: ~3.8M daily rides and 147 reported crimes in a month is seen as remarkably low.
  • Several people say media and social networks amplify rare, horrific incidents (burning, track shoves) to create a distorted “crime-ridden” narrative.
  • Others counter that focusing on per-capita rates minimizes the impact a single high-profile attack can have on public fear and ridership.
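
As a back-of-envelope check of the figures above (assuming a 30-day month; both inputs are the round numbers quoted in the thread):

```python
daily_rides = 3.8e6        # ~3.8M daily rides, per the thread
crimes_per_month = 147     # reported crimes in one month
rides_per_month = daily_rides * 30  # assumes a 30-day month

per_million_rides = crimes_per_month / rides_per_month * 1e6
print(f"~{per_million_rides:.1f} reported crimes per million rides")
```

That comes to about 1.3 reported crimes per million rides, the arithmetic behind the “remarkably low” framing.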

Underreporting and statistics

  • Multiple comments claim substantial underreporting: victims don’t bother if they expect no response, and police sometimes refuse or downgrade reports.
  • One self-identified officer alleges systemic “management” of crime stats: downgrades, dismissals, desk appearance tickets that later “disappear,” suggesting official numbers understate reality.
  • Skeptics reply that even if stats are imperfect, trends over time still show improvement, and lived experience is a biased metric.

Subway vs cars and other risks

  • Long subthread compares subway violence to car crashes:
    • Cited figures: ~10 subway murders/year vs ~250 traffic deaths in NYC, with fewer daily drivers than riders.
    • Several insist cars are objectively far more dangerous per person, but society normalizes vehicular harm as “accidents.”
  • Some argue “agency” matters: people feel more in control driving than trapped in a train car, even if that’s statistically misleading.

Policing, cameras, and causes of decline

  • Disagreement over what reduced crime:
    • Explanations include: more cops and National Guard in the system, congestion pricing boosting ridership (“eyes on the street”), and pandemic-era crime spikes naturally receding.
  • Some see police presence as effective deterrence; others say it’s mostly theater, and that long-term crime reduction comes from social services, not punitive responses.
  • Cameras are viewed by many as a net positive for solving crime; a few doubt they deter the kind of extreme, impulsive violence being discussed.

Rider experiences, fear, and social conditions

  • Long-time riders report rarely seeing serious crime and feeling safe; others recount frequent harassment, especially women and Asian riders.
  • Homelessness, mental illness, and “uncomfortable incidents” are said to be common but mostly non-criminal, yet psychologically impactful.
  • Comparisons to London and other global cities split: some say NYC feels uniquely menacing; others say it’s comparable or safer, with perception heavily colored by culture and media.

OpenWrt 24.10.0 – First Stable Release

Package management and new release features

  • Commenters note APK (Alpine’s package manager) migration is planned for a later release, not 24.10.
  • Some are already using 24.10 RCs; upgrades generally preserve configuration cleanly, though targets moving from swconfig to DSA require manual reconfiguration.
  • New features like better tunnel support (e.g., IPIP6) are appreciated by people on IPv6‑native / tunneled setups.

Hardware choices & recommended devices

  • Three broad approaches emerge:
    • All‑in‑one consumer routers: GL.iNet Flint 2/MT‑6000, OpenWrt One, various TP‑Link/Asus/Netgear/Dynalink boxes; NanoPi and Banana Pi boards for more DIY.
    • x86/mini‑PC routers (Lenovo Tiny, Protectli, Teklager boxes, custom rackmount builds) plus separate OpenWrt APs.
    • Full UniFi stacks, with some later reflashing UniFi APs to OpenWrt.
  • Mediatek Wi‑Fi chipsets and ath9k/ath10k/ath11k are repeatedly cited as well‑supported; Realtek 2.5G NIC support in this release is called out.
  • Some caution that new Wi‑Fi 7 / Qualcomm SoCs (e.g., IPQ53xx) will lag in OpenWrt/Linux support.

High‑speed links (1–10 Gbit)

  • Several users run 1–10 Gbit home connections; OpenWrt on modest x86 (Core i3, Xeon‑D, etc.) can route 10 Gbit at low CPU usage.
  • FreeBSD‑based pfSense/OPNsense is reported to hit 5–7 Gbit ceilings on identical hardware in at least one case.
  • Hardware offload in OpenWrt is still seen as immature; CPU can become the bottleneck at multi‑gig speeds on embedded SoCs.

Configuration management & complexity

  • One theme: OpenWrt is excellent for features and security vs stock firmware, but long‑lived configs become hard to reason about (defaults vs changes, auto‑generated cruft).
  • Suggested mitigations:
    • Use /rom/etc vs /etc diffs where available; keep “firstboot” backups for comparison.
    • Track /etc in git or compare downloaded backup archives.
    • Cron jobs to record installed packages and include that in backups.
  • Some prefer NixOS or similar “config as code” systems on router‑class hardware and relegate OpenWrt to dumb APs; others feel NixOS images are too large or not well‑supported on typical ARM router SoCs.
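
The /rom/etc-vs-/etc diffing idea can be sketched as a small tree comparison; the function and its name are illustrative, and whether a pristine /rom/etc snapshot exists depends on the target's overlay layout.

```python
import filecmp
import os

def config_drift(baseline: str, live: str) -> tuple[list[str], list[str]]:
    """Return (changed, added) file paths in `live` relative to a
    pristine `baseline` tree -- the same idea as comparing OpenWrt's
    read-only /rom/etc snapshot against the live /etc overlay."""
    changed: list[str] = []
    added: list[str] = []

    def walk(cmp: filecmp.dircmp) -> None:
        # Files present in both trees but with differing content/stat:
        changed.extend(os.path.join(cmp.right, f) for f in cmp.diff_files)
        # Files that exist only in the live tree (user additions):
        added.extend(os.path.join(cmp.right, f) for f in cmp.right_only)
        for sub in cmp.subdirs.values():
            walk(sub)

    walk(filecmp.dircmp(baseline, live))
    return sorted(changed), sorted(added)
```

On a device this would be called as `config_drift("/rom/etc", "/etc")`; the same function works on any two directory trees, e.g., a firstboot backup against a current one.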

Security and package quality

  • One commenter criticizes non‑core packages as under‑audited, citing an SSTP client script disabling TLS validation by default.
  • Others argue that despite such flaws, OpenWrt remains far more secure and up‑to‑date than most vendor firmwares (e.g., ancient 2.6 kernels, infrequent updates).

Flashing, official hardware, and ease of use

  • Many note that most supported routers can be flashed via the stock web UI or TFTP, without hardware mods; Xiaomi is mentioned as a common exception.
  • OpenWrt One is praised as a “no‑screwdriver” device: decent specs, 2.5G WAN, hacker‑friendly design (JTAG header), and good stability on 24.10.
  • Some warn specific boards (e.g., Banana Pi R4) have rough edges: broken Wi‑Fi/SFP in current kernels and immature upstream support.

Mesh networking and fleet management

  • OpenWrt supports mesh; OpenWISP is suggested for centralized management, though perceived as overkill for small home setups.
  • An improved third‑party “Table of Hardware” frontend is highlighted for picking devices by detailed criteria.

Comparisons and alternatives (pfSense, OPNsense, OpenBSD, Merlin, UniFi)

  • pfSense/OPNsense are viewed as easier for some routing‑only use cases but weaker for Wi‑Fi/AP roles; a common pattern is OPNsense router + OpenWrt APs.
  • Some move to NixOS or OpenBSD routers for clearer, versioned configuration, or to Linux/Alpine on small PCs.
  • Asuswrt‑Merlin is described as “enhanced vendor firmware”; OpenWrt wins on longevity once vendors end support.
  • One person replaces a fragile DIY stack with UniFi gear for family reliability; another reports the opposite (aging UniFi abandoned, revived via OpenWrt).

Community and project direction

  • Enthusiastic praise is common: long‑term stability, painless upgrades, and powerful QoS/adblock setups on cheap hardware.
  • One contributor expresses disillusionment: small PRs were merged but larger, multi‑year efforts were ignored; they perceive maintainers focusing on “fun” targets (GPUs, Doom, custom hardware) over merging substantial improvements.

I believe 6502 instruction set is a good first assembly language

6502 as a First Assembly Language

  • Many commenters report 6502 as their first assembly (often on Apple II, C64, NES) and find it approachable: few core registers, simple addressing modes, easy to hand-assemble, and runs on very simple systems without OS complexity.
  • Zero page is often described as a “pseudo-register file,” which, once understood, makes the ISA feel richer than the 3 visible general registers suggest.
  • Some praise how constraints (8‑bit arithmetic, no MUL/DIV, tiny stack) force you to understand multi-byte math, pointers, and low‑level behavior deeply.
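
The multi-byte arithmetic those constraints force can be modeled compactly; this sketch mirrors the 6502's CLC/ADC carry chain for a 16-bit addition done one byte at a time, with Python standing in for assembly.

```python
def add16(a: int, b: int) -> tuple[int, int]:
    """16-bit add performed as two 8-bit adds with an explicit
    carry, mirroring 6502 code: CLC; ADC low bytes; ADC high bytes."""
    lo = (a & 0xFF) + (b & 0xFF)      # ADC on the low bytes (carry clear)
    carry = lo >> 8                   # carry flag out of the low-byte add
    hi = (a >> 8) + (b >> 8) + carry  # ADC on the high bytes, carry in
    result = ((hi & 0xFF) << 8) | (lo & 0xFF)
    return result, hi >> 8            # 16-bit result, final carry flag

print(hex(add16(0x12FF, 0x0001)[0]))  # 0x1300
```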

Critiques of 6502 for Beginners

  • Others argue it’s a poor first ISA: highly idiosyncratic, pointer width > register width, heavy reliance on zero page, awkward stack, and patterns that don’t map well to modern CPUs.
  • Scaling beyond toy programs exposes pain points: manual management of global scratch bytes, difficulty with recursion, and hacks around the small stack.
  • Several note you spend disproportionate time fighting limitations rather than learning generally transferable assembly skills.

Alternative “First” Architectures

  • 6809 and 68000 are frequently suggested as more orthogonal, “C‑like,” and pleasant, with more registers and better addressing modes; PDP‑11 and Z80 also have strong advocates.
  • For modern relevance, many recommend RISC‑V or ARM (especially Cortex‑M / ARMv6‑M), citing clean load/store designs, toolchain support, and cheap hardware.
  • There is sharp disagreement on RISC‑V: some call it the best, simplest ISA; others criticize missing features (e.g., overflow flags, indexed addressing) as making real-world assembly harder.

Pedagogy, Context, and Motivation

  • Several emphasize that the platform matters as much as the ISA: simple 8‑bit micros or NES‑class machines make buses, memory-mapped I/O, and cycles easy to visualize.
  • Others prefer teaching assembly via modern compilers and disassembly (godbolt, objdump) on x86/ARM so students see directly how their usual languages map to machine code.
  • A common theme: assembly is most useful not as a primary language, but to build mental models of hardware, understand C/pointers, and reason about performance.

Microplastics in the human brain

Headline, Units, and Framing

  • Many focus on the “spoonful of plastic” phrasing as misleading and ambiguous:
    • Article actually refers to mass equivalent to a plastic spoon, not volume filling a spoon.
    • Others note spoons vary in size; “credit card’s worth of plastic” is seen as a clearer analogy.
  • One commenter claims a likely unit conversion error from another news piece: ~4.8 mg in the brain vs ~4 g for a plastic spoon (i.e., ~1000× less).
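
The commenter's arithmetic can be checked directly; both masses are the thread's figures (~4.8 mg claimed for the brain, ~4 g for a disposable spoon), not values taken from the paper itself.

```python
brain_plastic_mg = 4.8     # commenter's claimed corrected value
spoon_mass_mg = 4_000.0    # ~4 g disposable spoon, in milligrams

ratio = spoon_mass_mg / brain_plastic_mg
print(f"A spoon would be ~{ratio:.0f}x the claimed brain mass")
```

That works out to roughly 830×, consistent with the commenter's rounded “~1000× less.”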

Study Methodology and Quantification Disputes

  • The study extrapolates whole‑brain values from tiny tissue samples, assuming uniform distribution; several see this as a weak assumption.
  • Reported ~25% within‑sample variation in gas chromatography is seen as a large uncertainty that becomes problematic when scaled to whole organs.
  • Some argue that scaling uncertainty doesn’t justify confident “spoonful” claims; others counter that the paper’s main value is showing longitudinal trends, not exact totals.
  • Use of pyrolysis GC is criticized: it breaks down all polymers, and some note that natural fatty acids can produce similar signatures to plastics, risking mis‑attribution.

Do Microplastics Actually Harm Us?

  • Several note the paper is a preprint, emphasizing “may,” i.e., presence and harm are both uncertain.
  • There is concern about correlations with dementia, but also pushback that many things correlate with dementia and causality is unclear.
  • Others argue that lack of clear mechanistic proof isn’t comforting, given ubiquity and difficulty of establishing control groups (similar to early leaded gasoline or smoking debates).
  • Comparisons are made to sand dust, plant fibers, and other particulates; some question why nanoplastics should be uniquely dangerous vs historically present particles.
  • One explanation: nanoplastics can cross gut lining and the blood–brain barrier, enter cells, mimic existing molecular “keys,” and potentially interfere with cellular processes, even if most ingested plastic is excreted.

Bioaccumulation, Clearance, and Trends

  • Study reportedly finds:
    • No correlation between total brain plastic and age, suggesting possible clearance or equilibrium rather than simple lifetime accumulation.
    • Roughly 50% increase in brain plastic concentrations in the last ~8 years, consistent with rising environmental levels.
  • This leads to cautious optimism that reducing exposure could lower body burden, but concern that exposure is still rapidly increasing.

Sources of Exposure and Individual Actions

  • Car tires are repeatedly cited as a major source of microplastics; some note EVs’ higher weight may worsen tire wear, while others counter that regenerative braking and future lighter EVs complicate the picture.
  • Microplastics are said to be in rainwater, food, and drink; moving away from cities or roads might reduce but not eliminate exposure.
  • Suggested personal mitigations include:
    • Using reverse osmosis or similar filtration, growing some food, reducing driving, biking/walking more.
    • Regular blood donation is speculatively mentioned as a way to reduce body burden, though this is half‑joking and not evidence‑backed.
  • Some argue it’s practically impossible to avoid plastics given packaging and infrastructure, so systemic rather than purely individual changes are needed.

Policy, Transport, and Urban Form

  • Many tie the issue to urban planning and transport:
    • Calls for more mass transit, rail, cycling, and “15‑minute cities.”
    • Counter‑arguments that most US regions lack density or infrastructure to rely mainly on transit/bikes today.
  • Vehicle design:
    • Advocacy for lighter cars and against SUV/truck “bloat,” with criticism of regulatory incentives that favor heavier vehicles.
    • Ideas like biodegradable tires or mandating plastic‑free tire compounds are floated, with skepticism about political will.
  • Some express pessimism: voters and markets keep demanding bigger vehicles, and policy changes (especially in the US) are seen as unlikely in the near term.

Risk Communication and Public Reaction

  • Several criticize “panic porn” headlines and fear‑based framing that jump from “may” to “you have a spoonful of plastic in your brain.”
  • Others argue the opposite: having plastic in the brain at all should be alarming enough to justify precautionary action, even before detailed harm is quantified.
  • There is meta‑discussion about how to communicate emerging environmental risks:
    • Tension between dispassionate accuracy vs. framing that actually motivates change.
    • Concern that microplastics haven’t yet attracted the sustained public focus seen for topics like vaccines, despite potentially global, inescapable exposure.

What does it mean that MP3 is free?

Ongoing relevance of MP3

  • Many argue MP3 is far from obsolete: tiny files, “good enough” quality, and near‑universal playback keep it dominant, especially for legacy devices and cars.
  • Others note that typical users haven’t manually downloaded MP3s in years; streaming has replaced file‑based listening for most people.

Competing formats: AAC, FLAC, Opus, Vorbis

  • AAC-LC and HE-AAC are described as widely supported and, according to some, now effectively patent‑free; at common streaming bitrates (160–320 kbps) AAC and MP3 are similar in size.
  • FLAC is valued for being lossless and space‑cheap on modern drives; some rip CDs only to FLAC, then transcode to a lossy format for phones. Others see FLAC’s complexity and imperfect seeking as unnecessary versus simple PCM or high‑bitrate MP3.
  • Opus is praised as dramatically better than MP3 at low and mid bitrates (e.g., 32 kbps for voice, ~96 kbps for music) but criticized for spotty ecosystem support, especially on Apple platforms.
  • Ogg Vorbis is remembered as technically strong but hampered by poor early software/hardware support and an off‑putting name.

Device compatibility and practical tradeoffs

  • Compatibility is a recurring reason to stick with MP3: old car stereos, portable players, and miscellaneous gadgets reliably support it, while FLAC/Opus often do not.
  • Storage constraints (phones without SD slots) still push some users to lossy formats; others prioritize having one unified collection over juggling lossless + lossy copies.

Patents, “freedom,” and business impact

  • Several note MP3 patents actually expired years ago; the “now” in the article is seen as misleading.
  • Many individuals had long used MP3 encoders/decoders freely; the main impact of patents was on businesses distributing MP3s at scale.
  • Discussion expands to H.264: some patents are expiring now, though the landscape is complex; AV1 is highlighted as a royalty‑free‑by‑design successor, though not guaranteed “patent‑free” in the absolute sense.

UX, branding, and platform quirks

  • FOSS audio adoption issues are linked to poor UX, naming, and marketing (e.g., “Ogg Vorbis,” confusing Linux codec install paths).
  • Apple’s iOS is criticized for making local MP3 management cumbersome compared to Android’s simple file access, even though playback support is longstanding.

Okta Bcrypt incident lessons for designing better APIs

Bcrypt truncation and API design

  • Many commenters focus on bcrypt’s 72‑byte input limit and silent truncation as a fundamentally bad API design for security‑sensitive code.
  • Several languages/libraries expose both “raw bcrypt” and “non‑truncating” variants; criticism is that the unsafe, interoperable one is the default, while safer versions are longer‑named or hidden.
  • Suggested alternatives:
    • Fail loudly (error/exception) on >72 bytes or on NUL bytes.
    • Make the strict/safe API the default, and move legacy/“raw” bcrypt into an explicitly dangerous/hazmat namespace.
  • Some defend the current behavior for interoperability and legacy compatibility, but others argue that “compat” is not worth the footgun.
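
The "fail loudly" suggestion can be sketched without tying it to any particular bcrypt binding; `check_bcrypt_input` below is a hypothetical strict front end that rejects inputs bcrypt would otherwise mangle, with the actual hashing call elided:

```python
BCRYPT_MAX_BYTES = 72  # bcrypt ignores input beyond this length

class UnsafePasswordInput(ValueError):
    """Raised instead of silently truncating."""

def check_bcrypt_input(password: bytes) -> bytes:
    # NUL bytes terminate the input in some bcrypt implementations,
    # and bytes past 72 are silently dropped; reject both loudly.
    if b"\x00" in password:
        raise UnsafePasswordInput("password contains a NUL byte")
    if len(password) > BCRYPT_MAX_BYTES:
        raise UnsafePasswordInput(f"password exceeds {BCRYPT_MAX_BYTES} bytes")
    return password
```

Making a check like this the default path, and exposing raw truncating bcrypt only under an explicitly dangerous name, is the API shape commenters are asking for.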

Password hash vs KDF and misuse of bcrypt

  • Strong agreement that bcrypt is a password hashing function, not a general key derivation function; it’s designed for short, low‑entropy secrets plus salt.
  • Okta’s usage (hashing userId + username + password for a cache key) is seen as misusing a password hash where a general KDF or plain hash would have been more appropriate.
  • There’s extended discussion clarifying distinctions:
    • Password hashes: slow, salted, produce verifier strings.
    • KDFs: derive keys of specific sizes from higher‑entropy inputs, often with different cost tradeoffs.
  • Some note the naming confusion in the ecosystem (bcrypt historically called a KDF) and the resulting developer misunderstanding.
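
The distinction can be shown with a stdlib KDF: `hashlib.pbkdf2_hmac` derives key material of an explicit, caller-chosen size from a secret and salt, which is exactly what bcrypt's fixed-format verifier string is not designed to provide (parameters here are illustrative, not recommendations):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# A KDF yields keys of whatever size the caller requests.
key_32 = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
key_64 = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=64)
assert len(key_32) == 32 and len(key_64) == 64

# A password hash, by contrast, produces a fixed-format verifier
# (e.g. "$2b$12$...") intended only for later comparison, not as key material.
```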

Why mix username/userId/password into a cache key?

  • Hypotheses:
    • Ensure cache entries differ per user and password.
    • Auto‑invalidate cached auth data when a password changes.
  • Several commenters argue this is overcomplicated and risky: better to store user‑scoped data (e.g., password version or last‑credential‑change timestamp) and/or the bcrypt hash itself, instead of the raw password.
  • Others point out that pre‑hashing before bcrypt introduces additional gotchas (NUL handling, encoding).
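
The "store a credential version, not the password" alternative can be sketched with a plain hash over non-secret, user-scoped fields; the function name and field choices are hypothetical:

```python
import hashlib

def auth_cache_key(user_id: str, username: str, credential_version: int) -> str:
    # Length-prefix each field so concatenation ambiguity ("ab"+"c" vs "a"+"bc")
    # cannot cause collisions, then hash. No secret enters the key, and bumping
    # credential_version on password change invalidates old entries.
    encoded = [p.encode() for p in (user_id, username, str(credential_version))]
    material = b"".join(len(e).to_bytes(4, "big") + e for e in encoded)
    return hashlib.sha256(material).hexdigest()
```

A fast general-purpose hash is fine here because nothing being hashed is secret, sidestepping both bcrypt's truncation and its cost.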

Was bcrypt the real bug?

  • Some argue the deeper bug is algorithmic: treating a hash as a unique key without validating the underlying data, ignoring that any fixed‑width hash can collide.
  • Others counter that with a strong 192‑bit bcrypt output, real‑world collision risk is effectively negligible; the practical issue was the truncation behavior, not generic hash collisions.
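
The negligible-collision claim follows from the birthday bound; a rough sketch with a generously large, hypothetical number of entries:

```python
# Birthday-bound approximation: p ≈ n^2 / 2^(k+1) for n values in a k-bit space.
k = 192        # bcrypt output size as cited in the thread
n = 10**9      # hypothetical: a billion distinct cache entries

p_collision = n**2 / 2 ** (k + 1)
print(f"collision probability ≈ {p_collision:.1e}")  # on the order of 1e-40
```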

Library ecosystem and examples

  • PHP’s password_* API is cited as harder to misuse: no caller‑supplied salts to get wrong, just password_hash and password_verify.
  • Other libraries (e.g., Common Lisp’s Ironclad) are praised for explicitly rejecting overly long inputs.
  • Rust, Zig, and other ecosystems are discussed as partial successes/partial failures in API design around truncation flags and “non_truncating_*” functions.

Broader lessons and reactions

  • Many characterize Okta’s design as a “rookie mistake,” especially for an auth company, and use it to argue that security vendors often employ ordinary generalist developers without deep crypto review.
  • The thread repeatedly reinforces: don’t invent ad‑hoc constructions; use well‑designed KDFs (PBKDF2, scrypt, Argon2, HKDF, libsodium primitives) and APIs that fail safely.

Google kills diversity hiring targets

Access and Article Context

  • Several commenters had trouble accessing the paywalled WSJ link; others pasted large excerpts or alternative mirrors.
  • Summary of article as discussed: Google is dropping explicit hiring/leadership diversity targets, removing strong DEI language from its annual report, and citing recent court decisions and new federal executive orders as reasons to “evaluate changes” to DEI programs.

Motivations for Dropping Targets

  • Many see this as a business/legal move, not an ideological one:
    • Desire to align with the current US administration, avoid antitrust or DEI-related legal risk, and protect government and military contracts.
    • As a federal contractor, Google is now under a new executive-order regime replacing older affirmative-action rules.
  • Others call it “craven” symbolism either way: DEI was adopted and is now being abandoned primarily for optics.

Is DEI Racist or Corrective?

  • One camp argues diversity targets are just a sanitized form of racial discrimination, particularly against white and Asian men, and were likely illegal all along.
  • Another camp argues DEI is about counteracting well-documented bias, expanding candidate pools, and improving decision quality through diversity of perspectives.
  • Several note a gap between “ideal” DEI (e.g., bias-aware, race-blind processes) and how many corporate programs actually operated (e.g., de facto quotas, bonus incentives, explicit “diversity headcount”).

Experiences and Anecdotes

  • Some hiring managers report explicit pressure or mechanisms to prioritize certain genders/races (e.g., extra headcount, quota-like expectations) and say it produced weak teams and resentment.
  • Others who ran or participated in DEI initiatives say:
    • The main real effect was diversifying who got into the interview funnel.
    • Diverse candidates who made it to interviews were often stronger on average because they had to clear higher informal barriers.
  • Individual stories include:
    • “Diversity recruit” processes that didn’t change the hiring bar but altered sourcing.
    • Candidates feeling excluded or deprioritized for not disclosing demographic data.

Representation, Metrics, and Categories

  • Debate over what “diversity” should be measured against:
    • US population, global talent pool, or top-university pipelines.
    • Race vs. childhood household income vs. other disadvantage indicators.
  • Multiple comments highlight Asians being overrepresented in tech and often treated as “effectively white” in DEI frameworks.
  • Several argue that focusing on race entrenches racial thinking instead of moving toward a truly color-blind system; others counter that ongoing racism makes color-blindness aspirational, not current reality.

Meritocracy, Standards, and Interviews

  • Some insist DEI lowered standards and distracted Google from technical competitiveness, especially in AI.
  • Others argue tech leadership was never a true meritocracy; informal networks (school, family, “clubby” culture) already distort merit.
  • There is skepticism that Leetcode-style interviews are an accurate or fair “pure merit” filter; DEI or not, hiring is inherently subjective.

Pipeline vs. Hiring-Stage Fixes

  • Significant support for shifting effort upstream:
    • Early childhood education, K–12 support, and STEM access rather than late-stage hiring targets.
  • Several note that underrepresentation of some groups in CS degrees and elite universities constrains what any corporate hiring program can realistically do.
  • Others caution that dismantling DEI at big companies can still harm outreach partnerships that were nudging the pipeline.

What Comes Next

  • Some fear a flip to “prove you’re not doing DEI,” where every hire is scrutinized to show the absence of diversity preference, potentially re-normalizing old biases.
  • Others welcome the rollback as the end of what they see as divisive, coercive corporate ideology.
  • Several point to a broader trend: big tech moving toward “no politics at work,” shrinking DEI orgs, and re-centering on “mission first” and legal compliance.

Tell HN: Cloudflare is blocking Pale Moon and other non-mainstream browsers

Cloudflare challenges and browser blocking

  • Many users report Cloudflare making sites unusable on Pale Moon, SeaMonkey, Falkon, Min, qutebrowser, some Librewolf/Zen setups, and older Firefox builds: endless “Verifying…” loops, repeated captchas, or outright “You have been blocked.”
  • Similar issues occur on mainstream setups: Firefox/Chrome on Linux, Firefox on macOS, Arc, Chromium, TV/phone browsers, and iOS with Brave or iCloud Private Relay.
  • Newer Cloudflare challenges appear to depend on Web Workers / Service Workers and specific JS APIs; blocking or lacking those can cause infinite loops or even browser crashes.
  • Some sites behind Cloudflare (e.g., shops, GitLab instances, parcel tracking) become impossible to use for affected users, who often simply abandon the site or vendor.

Misconfiguration vs Cloudflare defaults

  • Captchas and challenges frequently appear on machine‑facing endpoints such as robots.txt, sitemap.xml, JSON/XHR APIs, and RSS/Atom feeds, breaking crawlers and feed readers.
  • Some argue this is site‑owner misconfiguration; others note that even Cloudflare’s own blog RSS feed is affected and that large sites misconfigure it too, suggesting CF’s defaults are poor for “machine‑consumption” URLs.
  • Cloudflare customers can tune rules or exempt paths, but most never do; Cloudflare’s dashboard doesn’t clearly expose the tradeoffs or consequences.

Privacy tools, fingerprinting, and spoofing

  • Users who clear cookies, block trackers, use strict anti-fingerprinting, containers, VPNs, Tor, or iCloud Relay report far more Cloudflare friction or total blocks.
  • Commenters say Cloudflare goes well beyond User-Agent: TLS/JA3 fingerprints, browser feature tests, JS behavior, and “browser integrity” signals; UA spoofing often fails.
  • Privacy-focused settings (e.g., Firefox resistFingerprinting, disabled Web Workers/Service Workers) commonly trip challenges or cause silent failures.

Security benefits vs harms and centralization

  • Pro-Cloudflare side: at scale, sites face DDoS, AI scrapers, card-testing fraud, aggressive crawlers, and volumetric abuse that basic rate limiting can’t handle; Cloudflare’s free/easy WAF, caching, and bot filtering are seen as essential.
  • Skeptical side: many long-time operators report few serious DDoS incidents; simple tools (fail2ban, local rate limiting, good app design) handle most problems without outsourcing MITM control.
  • Several call Cloudflare “security theater”: sophisticated scrapers bypass protections with headless browsers and residential proxies, while normal users are blocked.

Open web, discrimination, and future direction

  • Strong concern that a single intermediary now effectively decides which browsers and network paths are “legitimate,” creating a de facto whitelist of “major up‑to‑date” browsers and default configs.
  • This is seen as hostile to new engines and niche browsers: if they can’t pass Cloudflare challenges, they can’t reach “half the web,” making innovation and diversity infeasible.
  • Cloudflare’s MITM role (TLS termination, potential content modification, origin–edge TLS weaknesses) plus concentration of traffic is viewed as a systemic risk to the open, decentralized web.
  • Some suggest legal angles (public nuisance, accessibility/ADA, regulation) or simply boycotting Cloudflare-backed sites; others see no practical replacement given current abuse levels.

Kill the "user": Musings of a disillusioned technologist

Respect, coercion, and ad-driven design

  • Commenters highlight four questions (respect, mental health, lifestyle fit, non‑coercion) as a powerful lens; most modern consumer software fares poorly.
  • Strong sentiment that users are often not the real customers; software is optimized for advertisers or internal KPIs.
  • Advertising is debated: some call it “root of the rot”; others say it’s necessary but has become adversarial.
  • Distinction is made between informative vs persuasive ads; proposals include segregated “yellow pages”-style spaces, bans on certain ads, and tighter regulation of digital tracking.

CLI as refuge and its limits

  • Several people gravitate to CLI tools as “respectful”: no sign‑in, composable, long‑lived, less manipulation.
  • Others argue this is only because CLIs filter for literate, skeptical users; once mass‑marketed, CLIs would also accumulate telemetry, dark patterns, and bloat.

From HCI to UX to “late capitalism”

  • Many see a shift from “interfaces as tools” (HCI, platform HIGs) to “user experience” as a vehicle for branding, engagement, and sales.
  • Critiques: persona decks and landing pages replace understanding feedback, consistency, and workflows across apps.
  • Several tie this to “late capitalism”: infinite growth, MBAs, VC incentives, and engagement metrics overpowering user welfare.

Old vs new software layers

  • Photoshop, macOS, Windows, Word are cited as “tree rings”: a solid, user‑centric core layered with subscriptions, sign‑ins, web‑views, telemetry, and in‑product ads.
  • Some nostalgia for pre‑subscription eras; others note that old software also had UX warts and inconsistent dialogs.
  • The web is blamed both for normalizing bloated, ad‑centric UIs and for destroying the market for paid, native “real software.”

Lock‑down, ownership, and bifurcation

  • Strong worry that “personal computing” is being replaced by locked‑down ecosystems (app stores, mandatory signing, integrity checks).
  • Android is cited as an early example (payments/features disabled on unlocked devices).
  • Some predict a split: secure, entertainment‑capable consumer devices vs separate, more open “developer machines.”
  • Linux is viewed as a fragile but crucial refuge, yet criticized for unstable, fragmented desktop UX.

AI, agency, and the future UX

  • Some hope AI will disrupt incumbent big tech and act as a translator between human intent and machine operations (“augmented” or “calm” computing).
  • Others fear further loss of human agency: automation and AI may deepen dependency and shallow engagement.
  • A speculative vision emerges: humans interact mainly with AI agents; the web becomes a backend of APIs, with “adversarial prompt engineering” as the new dark pattern.

Open questions

  • How to actually enable non‑experts to create “folk” or personal software remains unclear.
  • Renaming “user” is seen as insufficient; commenters argue misaligned economic incentives must change.

Ingesting PDFs and why Gemini 2.0 changes everything

Perceived strengths of Gemini 2.0 for PDFs

  • Many commenters report Gemini 2.0 Flash / 1.5 Flash as “good enough” or better than legacy OCR for:
    • Financial PDFs (KYC/due diligence, fintech ingestion, SEC filings).
    • Healthcare lab reports.
    • Mixed text/tables/diagrams where schema is defined (JSON output).
  • Ease-of-use, multi‑modal support, huge context windows, and simple prompts (“OCR this PDF into this JSON schema”) are repeatedly cited as major advantages over prior cloud OCR products.
  • Some see it as a breakthrough for RAG ingestion and semantic chunking: model can both extract and suggest meaning‑preserving chunks.

Accuracy, benchmarks, and limitations

  • Reported table benchmark score (~0.84 vs ~0.90 for a specialist model) is debated:
    • Author and others argue many “errors” are superficial structural differences; numerics are “almost never” wrong.
    • Specialist vendors counter that in production, hallucinated rows, checkbox states, and subtle sentence rewrites still occur, and their customers need near‑deterministic behavior.
  • Several practitioners emphasize that traditional OCR is only ~80–85% accurate anyway, but LLM hallucinations are qualitatively worse: they can rewrite or invent entire phrases.
  • For high‑stakes domains (finance, healthcare, legal), multiple commenters say even “very few” numeric errors are unacceptable; they layer multiple models, validation, or human review.

Bounding boxes, layout, and attribution

  • Strong consensus that Gemini currently struggles with precise bounding boxes and spatial reasoning on digital docs, even if text recognition is good.
  • Workarounds:
    • Use classic OCR/layout engines (Textract, Tesseract, Unstructured, Docling, Chunkr, etc.) for boxes + text, then feed segments to an LLM for understanding.
    • Two‑pass LLM approaches: first extract entities, then ask the model to locate them among OCR’d chunks.
    • Some open/commercial systems offer accurate layout segmentation with rich JSON (Docling, Marker, Chunkr, Reducto, others) and then call VLMs only on complex pieces (tables, formulas, charts).

LLMs vs traditional OCR / specialist services

  • Experiences vary:
    • Some replaced well‑known OCR vendors with Gemini, cutting latency from minutes to seconds and cost by an order of magnitude, accepting ~4–10% residual error.
    • Others found Sonnet, GPT‑4o, or Qwen‑VL outperform Gemini on certain PDFs, especially technical papers and long tables.
    • Specialist document‑AI vendors argue that pure‑LLM pipelines are brittle at scale; they combine VLMs with classic CV models, layout detection, heuristics, and human‑in‑the‑loop to meet strict SLAs.
  • Open‑source options (Tesseract + Tika, Docling, Marker+Surya, Qwen2.5‑VL, Chunkr, edgartools, etc.) are widely discussed as cheaper, local, or more controllable, but usually require more engineering.

Cost, scale, context, and determinism

  • Flash models are praised as extremely cheap per page, especially with batch/Vertex pricing, though some commenters recalculate a lower “pages per dollar” figure than the article claims.
  • Several note that all major LLM APIs are subsidized; long‑term pricing and vendor lock‑in are concerns.
  • Mixed reports on long‑context reliability:
    • Some users successfully work at 100–200K tokens.
    • Others see degradation beyond ~20–40K, with hallucinations when asking multiple questions over large docs.
  • Non‑determinism (even at temperature 0) is flagged as a real issue, especially for pipelines that depend on reproducible outputs.

RAG, semantic chunking, and workflows

  • A recurring pain point: naive fixed‑size chunking of PDFs hurts RAG recall; users are excited about using Gemini to produce semantically coherent chunks directly for indexing.
  • Suggested patterns:
    • Use Gemini for OCR + semantic chunking + schema‑filled JSON.
    • Store both structured data and raw model outputs; sometimes also embed for vector search.
    • Mix lexical search (BM25) with semantic search to reduce “zero‑result” failures.
  • Ideas like multi‑model cross‑checking (two models + a third arbiter), reasoning‑based re‑queries, and explicit citations/bounding boxes are proposed to mitigate hallucinations.
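
The lexical half of such a hybrid is small enough to sketch in full; `bm25_scores` below is a minimal BM25 implementation over pre-tokenized chunks (in a hybrid setup its scores would be blended with vector-similarity scores):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized doc in `docs` against the tokenized `query`."""
    n_docs = len(docs)
    avgdl = sum(len(d) for d in docs) / n_docs
    df = Counter(term for d in docs for term in set(d))  # document frequency
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores

chunks = [["gemini", "ocr", "pdf"],
          ["tax", "form"],
          ["pdf", "table", "extraction", "pdf"]]
print(bm25_scores(["pdf", "extraction"], chunks))  # highest score for the third chunk
```

Because BM25 only matches literal terms, it returns a hard zero for the second chunk, which is exactly the "zero-result" failure mode semantic search is meant to paper over.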

PDFs, standards, and philosophy

  • Many lament PDF as a “dead‑tree emulation” that discards structure; entire industries now exist just to re‑extract machine‑readable data that began life digitally.
  • Some note that PDF does support logical structure and embedded metadata (Tagged PDF, hybrid PDFs, iXBRL, Factur‑X/ZUGFeRD), but these features are underused by real‑world producers.
  • Several argue that, despite hype, Gemini 2.0 doesn’t “change everything”:
    • It meaningfully expands the feasible set of RAG/ingestion tasks and pressures legacy OCR vendors.
    • But fundamental challenges—hallucinations, attribution, high‑stakes accuracy, and messy real‑world layouts—remain unsolved and still demand careful system design.

Tesla sales plummet in the UK, France, and Germany

Perceived causes of the sales drop

  • Several commenters argue the main drivers are economic and competitive: many more viable EVs exist from European groups (VW, BMW, Mercedes, Stellantis, Renault-Nissan) and Korean brands, plus rising prices and less generous tax treatment (e.g. UK benefit-in-kind and road tax changes).
  • Others insist politics is now a major factor, especially in Europe, where Tesla is no longer seen as the default EV and negative sentiment toward the brand is visible in polls and personal anecdotes.

Competition and Chinese EVs

  • There’s disagreement on how important Chinese EVs are in Europe.
    • One side: “Chinese have far better and cheaper EVs,” with BYD, MG, and others gaining ground, and tariffs (around 27–35%) only a partial brake.
    • The other side cites industry data showing Chinese brands still relatively small vs. established European groups, with Geely/Volvo the most successful so far.
  • Concerns are raised about relying on “a brutal dictatorship” and on the financial health and parts availability of many Chinese carmakers.

Musk’s politics and brand damage

  • Many posts link collapsing demand, especially in Europe, to Musk’s behavior: Nazi-like salutes, support for far‑right parties (AfD) and figures (e.g. Tommy Robinson), and antisemitic conspiracy amplification.
  • In several countries, people reportedly refer to Teslas as “Nazi cars” and some owners are debadging or selling their cars to avoid the association.
  • A minority argue ideology is overstated and pocketbook/competition effects dominate; others counter that once alternatives exist, political toxicity matters a lot.

Product, FSD, and service

  • Mixed ownership reports: some say their Teslas have been exceptionally reliable and low-maintenance; others complain about poor build quality, cheap interiors, and “nightmare” service/parts delays in both US and Europe.
  • FSD is heavily contested: some call it Tesla’s main advantage; others describe it as unsafe, stressful to “babysit,” and still far from true self-driving, with earlier hardware now admitted inadequate.

Charging, alternatives, and buyer behavior

  • In the US, Tesla’s Supercharger network is still seen by some as a decisive advantage for road trips, though others report successful long trips in non-Tesla EVs using CCS networks.
  • Many commenters say they will switch to Hyundai/Kia, Rivian, or other brands once NACS access is ubiquitous.
  • Broader theme: cars are strong social signals; owning a Tesla now communicates a political stance for many observers, which some buyers and ex‑buyers find unacceptable.

DOGE employees ordered to stop using Slack

FOIA, Slack, and Recordkeeping Status

  • Several commenters say the key move isn’t “Slack vs not-Slack” but shifting DOGE from under OMB to being a “presidential component” under the Presidential Records Act (PRA).
  • Under that interpretation, DOGE records would be PRA, not Federal Records Act, meaning they would generally not be FOIA-accessible until years after the president leaves office.
  • Others note a former National Archives official expects this status to be litigated, with courts deciding if DOGE is really just presidential advice or an oversight-like agency.
  • Some argue FOIA often fails in practice anyway because hostile agencies can stonewall or overuse exemptions.

Legality of DOGE and Presidential Powers

  • One camp says DOGE itself is legal: created via executive order by repurposing USDS and structured as a temporary organization authorized under existing statute.
  • Another camp argues that what DOGE is doing (e.g., de facto shutting down or freezing agencies like USAID, mass spending pauses) likely violates Congress’s “power of the purse” and anti‑impoundment rules.
  • There’s detailed back‑and‑forth over whether freezing or redirecting funds constitutes unlawful impoundment, and whether the president can abolish or “transform” agencies established in statute.
  • Some distinguish DOGE (advisory) from the president (who signs EOs and bears legal responsibility), others counter that advisors selected to match the president’s agenda blur that line.

Transparency, Oversight, and Authoritarian Drift

  • Many see moving DOGE communications out of immediate FOIA reach and off Slack as a direct attempt to avoid accountability, especially given DOGE’s role in sweeping government changes.
  • Commenters connect this to a broader pattern: fire or disable inspectors general, overwhelm courts with rapid changes, then rely on slow litigation to normalize overreach.
  • Comparisons are drawn to “self‑coup” dynamics and pre‑authoritarian transitions; others stress this is the predictable exploitation of long‑existing constitutional and procedural holes.

DOGE’s Role, Access, and Civil Service Purge Concerns

  • Reports that DOGE is locking people out of systems and even modifying code lead some to claim it is acting less like an auditor and more like an operational authority.
  • Multiple comments frame DOGE as a tool to purge “uncooperative” civil servants and hollow out agencies by creating chaos so people quit and positions remain unfilled.
  • Others emphasize that, formally, DOGE only investigates and recommends; the president (and agency heads) execute actual changes.

Alleged Corruption and Spending Examples

  • Supporters point to DOGE‑amplified “receipts”: large federal grants to NGOs (e.g., religious refugee/child services), foreign aid projects (e.g., reproductive health in Gaza), and Politico Pro subscriptions, calling them corruption or partisan slush.
  • Critics respond that:
    • Much of this looks like standard humanitarian, soft‑power, or information‑service spending.
    • The Politico story in particular is mostly about agencies buying subscription data; USAID’s direct funding to Politico appears relatively small.
    • Musk and allies selectively highlight numbers without context, in a reprise of the “Twitter Files” style: big insinuations first, details and corrections later (if ever).

System Design, Checks and Balances, and Political Polarization

  • Several discuss structural issues: filibuster‑driven Senate paralysis, gerrymandering, two‑party lock‑in, and a judiciary slow or unwilling to constrain a determined executive.
  • Some argue the US system assumed “good faith” and restraint; once a president and party reject those norms, the legal architecture is easily abused.
  • There are calls for deeper reforms, even suggestions of a constitutional convention, versus resignation that both recent parties have stretched legality.

Slack, Alternatives, and Data Control

  • Separate from FOIA, some question why sensitive government work ever used Slack, given data sits on a third‑party’s servers.
  • Others note government Slack instances have been treated as fully FOIA‑able, with all content presumptively disclosable.
  • Side discussion debates Slack vs Teams vs Discord vs Rocket.Chat in cost, UX, and suitability for long‑term knowledge; many prefer Slack but acknowledge its expense and centralization.

Public and Emotional Reactions

  • The thread shows sharp polarization: some call DOGE “obviously illegal” and urge boycotts of Musk companies; others claim it is “obviously legal,” popular, and exposing entrenched corruption.
  • A recurring worry: either DOGE reflects a massive unlawful power grab, or it exposes enormous unguarded backdoors in US governance; both are seen as alarming.

Are LLMs able to notice the “gorilla in the data”?

Causes of “gorilla blindness”

  • Some commenters initially attribute the failure to ethics/“woke” anti-bias filters around primate recognition, drawing analogy to earlier Google photo incidents.
  • Others push back, calling that speculative and noting the setup is different (statistical EDA + scatterplot, not person-labeling).
  • Alternative explanations raised:
    • Architectural limits: the model is doing text/statistics-first reasoning, not deep visual pattern search.
    • RLHF/behavioral training: models are strongly optimized to agree with user framing and not question assumptions.

Image vs raw data, prompting, and context

  • Key point: in the article, the model mostly “saw” the code and statistical framing, not the plotted image it generated.
  • When people upload the PNG directly and ask “What do you see?”, many models do identify a “monkey/gorilla/cartoonish figure” or at least “artistic pattern.”
  • Results vary across models (GPT-4o, Claude, Gemini, DeepSeek, Mistral) and even across runs; randomness and exact prompt phrasing matter.
  • Several suggest the prior conversation about summary statistics biased the model away from visual interpretation.

Is the experiment fair? What should EDA include?

  • One camp: expecting an AI to automatically do pareidolia-like shape finding in scatterplots is unreasonable and wasteful; if you want that, ask explicitly.
  • Opposing camp: if an AI is acting as an “expert analyst,” it should flag glaring anomalies or contrived structure (like the gorilla), akin to Anscombe’s quartet/Datasaurus.
  • Some note ambiguity: the model may have “seen” a pattern but judged it irrelevant given the user’s stated goal.
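The Anscombe's quartet point raised above can be made concrete with a short sketch: two of Anscombe's classic 1973 datasets share nearly identical summary statistics, yet plot completely differently, which is exactly why stats-only EDA can miss a "gorilla." This is an illustrative example, not code from the article under discussion.

```python
# Sketch: summary statistics can agree while the plotted shapes differ wildly
# (Anscombe's quartet, 1973). A stats-only analysis sees no difference here.
from statistics import mean, stdev

def summary(xs, ys):
    """Return (mean_x, mean_y, std_x, std_y, pearson_r), rounded to 2 decimals."""
    n = len(xs)
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    r = cov / (stdev(xs) * stdev(ys))
    return tuple(round(v, 2) for v in (mx, my, stdev(xs), stdev(ys), r))

# Datasets I and II of Anscombe's quartet: same x values, different y values.
x1 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

print(summary(x1, y1))  # roughly linear scatter
print(summary(x1, y2))  # same summary tuple, but the plot is a parabola
```

Only rendering the scatterplot (or explicitly looking at the image, as commenters did with the PNG upload) reveals the difference.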

Human parallels and broader vision failures

  • Multiple references to the “Invisible Gorilla” inattentional-blindness experiments; humans also miss obvious patterns when a task misdirects their attention.
  • Anecdotes of misclassification (cats as people, dogs as humans speaking, Gemini mislabeling a bald person as a plant) illustrate general brittleness in vision systems.
  • A few argue anti-primate mislabeling scars (e.g., earlier gorilla incidents) might make models overly cautious about primate-like shapes.

LLMs as agreeable assistants and weak statisticians

  • Several stories show models blithely accepting absurd steps (“and then a gorilla appears”) as if they were technical terms.
  • Concern that models act as “yes-men”: they affirm user claims (e.g., “roughly normal distributions”) and rarely challenge underlying data quality.
  • Commenters highlight this as the deeper “gorilla”: models don’t “trust but verify,” and RLHF encourages outputs that match user expectations over rigorous scrutiny.

Eggs US – Price – Chart

Bird flu and the supply shock

  • Many comments attribute the spike almost entirely to the current H5N1 outbreak, which has killed or forced culling of tens of millions of US laying hens.
  • Several note the impact is especially visible because eggs are largely regional products; outages in major producing states quickly hit local shelves.
  • Others point out that H5N1 is a longstanding, global avian pandemic affecting wild birds and multiple regions, not just the US, and that it’s now spilling into other animals.

Why mainly the US? International comparisons

  • Explanations offered for milder price moves abroad:
    • Smaller average flock sizes (e.g., Canada, Denmark), so each cull removes fewer birds.
    • Stricter biosecurity and salmonella controls in some European countries.
    • Supply‑management systems in Canada that cap farm size and stabilize prices.
  • Multiple comments contrast US policy with Mexico and Canada, where poultry vaccination against avian flu is more common.

Factory farming, farm size, and resilience

  • One camp argues US industrial methods (millions of birds per site, dense housing, heavy antibiotic use) make the system extremely vulnerable to disease and create “disease factories.”
  • Others counter that the main vector is wild birds, so concentration is less about cause and more about the scale of loss once a virus enters.
  • There’s recurring debate over whether food systems should optimize for maximum efficiency and low prices versus resilience and redundancy.

Vaccination and trade policy

  • Several note that US producers largely avoid H5 vaccination because vaccinated flocks can be barred from export markets under existing trade agreements.
  • Some argue that vaccinating at least a core of birds—as Mexico does—would dramatically stabilize supply and prices, but would require rethinking export‑oriented policy.

Cage‑free / free‑range rules and disease risk

  • New cage‑free mandates (e.g., in California, Michigan) are cited by some as contributing to higher costs and possibly higher exposure to wild birds.
  • Others clarify that “cage‑free” mostly means large indoor barns without individual cages, not true outdoor free‑range, so biosecurity remains crucial either way.
  • Evidence and anecdotes conflict on whether free‑range vs indoor housing is the dominant factor in current outbreaks.

Local eggs, backyard flocks, and decentralization

  • Many report that small local farms and backyard producers have had stable prices and better availability; in some areas these eggs are now cheaper than supermarket brands.
  • Others note local flocks are also at risk from H5N1 and predators, and that true cost (labor, infrastructure, losses) often exceeds the nominal feed cost.
  • There’s a strong thread in favor of decentralizing food production (local farms, backyard chickens, even quail), but with pushback that this cannot realistically supply large cities at current consumption levels.

Price‑gouging vs genuine cost increases

  • Some commenters see the spike as mostly genuine supply shock, pointing to flock losses and historical correlations between H5N1 waves and prices.
  • Others highlight past price‑fixing cases in the egg and potato industries and note recent record profits at large egg companies, arguing that firms are using disease and “inflation” narratives to mask opportunistic hikes.
  • Several suggest eggs—and food generally—illustrate a broader pattern where corporate concentration allows margins to widen during crises.

Politics, public health, and communication

  • Multiple comments criticize the current US administration for muzzling federal health agencies, restricting communication on H5N1, and cutting infectious‑disease capacity.
  • There’s a long digression into culture‑war issues (language policing around “women,” DEI, “woke” vs right‑wing extremism), with some arguing these distractions crowd out serious focus on food prices and pandemics.
  • Others frame egg prices as one visible symptom of deeper structural choices: deregulation, trade priorities, and tolerance for fragile, highly concentrated food systems.