Hacker News, Distilled

AI-powered summaries of selected HN discussions.


Unlocking free WiFi on British Airways

Technical Approaches to Bypassing Paywalled WiFi

  • Discussion centers on exploiting “free messaging” tiers by:
    • Spoofing SNI to look like permitted apps (e.g., WhatsApp) while tunneling arbitrary HTTPS through a proxy (a minimal sketch follows this list).
    • Using domain fronting–style techniques, where the visible hostname differs from the true backend.
    • Running VPNs over unusual ports (notably UDP 53) and DNS-tunneling tools like iodine to smuggle traffic in TXT/subdomain payloads.
    • Using pluggable transports (e.g., Lyrebird, Xray) that hide proxy traffic behind seemingly legitimate TLS handshakes to allowed domains.
  • Several commenters report success with WireGuard/OpenVPN on nonstandard ports or over DNS, but also note that many modern captive portals now block everything except specific IPs/hosts.
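
A minimal Python sketch of the SNI-spoofing idea from the first bullet above. The proxy IP and spoofed hostname are placeholders; the point is that the portal sees an allowed name in the ClientHello while the bytes actually go to a server you control. Certificate verification must be disabled, since the proxy cannot present a genuine certificate for the spoofed name:

```python
import socket
import ssl

PROXY_IP, PROXY_PORT = "203.0.113.10", 443   # hypothetical proxy you control
SPOOFED_SNI = "g.whatsapp.net"               # a hostname the portal whitelists

ctx = ssl.create_default_context()
ctx.check_hostname = False        # the proxy can't present a real certificate
ctx.verify_mode = ssl.CERT_NONE   # for the spoofed name, so skip validation

raw = socket.create_connection((PROXY_IP, PROXY_PORT))
tls = ctx.wrap_socket(raw, server_hostname=SPOOFED_SNI)  # SNI claims "WhatsApp"
tls.sendall(b"tunneled bytes go here")
```

This is exactly what the certificate-verifying firewalls in the next section are designed to catch: the proxy's certificate will not match the claimed SNI.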

How Airlines and Cruises Enforce Restrictions

  • Many providers inspect TLS ClientHello:
    • Basic setups only check SNI against a whitelist (e.g., airline site, messaging apps, visa sites).
    • More advanced firewalls (e.g., Fortinet-style) verify that the certificate CN/SAN and CA match the SNI (modeled in the sketch after this list).
  • Some systems allow a few initial packets of any TCP flow, then classify and reset connections if not whitelisted.
  • “Free messaging” often also whitelists push-notification services so onboard apps can receive messages.
  • There’s debate on whether IP whitelisting is feasible:
    • Hard in general due to CDNs and changing IPs.
    • Easier when platforms cooperate and publish ranges or provide zero-rating integrations.
  • Cruise lines and airlines sometimes block the websites of known circumvention tools and may ban travel routers or personal satellite gear.
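
A toy model of the two enforcement tiers described above (the whitelist and function names are illustrative, not from the thread). Note the stricter check is only directly possible where the middlebox can observe the server certificate, e.g., TLS 1.2 or an intercepting proxy:

```python
ALLOWED_HOSTS = {"www.britishairways.com", "g.whatsapp.net"}  # illustrative

def basic_portal_allows(sni: str) -> bool:
    # Tier 1: trust whatever hostname the ClientHello claims.
    return sni in ALLOWED_HOSTS

def strict_portal_allows(sni: str, cert_names: set[str], trusted_ca: bool) -> bool:
    # Tier 2 (Fortinet-style): the certificate actually served must cover the
    # claimed SNI and chain to a trusted CA, which defeats the naive spoofing
    # sketch shown earlier (a proxy can't get a real cert for g.whatsapp.net).
    return sni in ALLOWED_HOSTS and sni in cert_names and trusted_ca
```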

Broader Protocol and Censorship Context

  • SNI is criticized for enabling easy traffic classification and censorship; its historical role in enabling HTTPS virtual hosting is noted.
  • Encrypted ClientHello (ECH) is mentioned as a future obstacle to SNI-based filtering and “free messaging” offers.
  • These techniques are also linked to evading national-level censorship (e.g., Tor transports, Great Firewall–style probing).

Ethics, Legality, and Risk

  • Ethical views split:
    • Some see this as theft of service and unnecessary for well-paid professionals.
    • Others view it as harmless use of spare capacity and praise the educational value.
  • Legal risk on aircraft is highlighted:
    • Concern about broad interpretations (e.g., “tampering with aircraft systems”) and possible severe consequences, even if actual safety impact is unclear.
  • A few commenters emphasize that the annoyance or danger of legal trouble far outweighs saving a modest WiFi fee.

User Experience, Capacity, and Business Models

  • Multiple anecdotes from flights and cruises:
    • Pricing (e.g., ~$50/day on cruises) seen as excessive, especially when performance can be poor.
    • Others report very usable Starlink-backed service, suggesting variability by ship/installation.
  • Some argue bandwidth is now sufficient (Starlink, specialized LTE backhaul), so strict gating is mainly revenue-driven.
  • Counterpoint: providers must still limit access to keep shared links workable.

Security Culture and Pen-Testing

  • BA’s overall security posture is critiqued, with references to past web compromises.
  • Pen-tests are described as useful for regression detection but insufficient as a sole security strategy; organizations often over-rely on them instead of listening to internal engineers.

Miscellaneous

  • Some readers enjoy being forced offline and worry about more ubiquitous inflight connectivity.
  • Accessibility point: this case is cited as exactly why proper alt attributes for images matter—when images can’t load, content should remain understandable.

First convex polyhedron found that can't pass through itself

Clarifying the Rupert Property and Problem Scope

  • Discussion centers on the “Rupert property”: one copy of a convex polyhedron can pass straight through a hole in another congruent copy, leaving nonzero material (“not cutting it in half”).
  • In practice, this is phrased as: does there exist an orientation where one 2D projection (“shadow”) of the shape fits strictly inside another projection of the same shape? (Formalized after this list.)
  • Commenters stress that equality of shadows is trivial and uninteresting; a strict margin is required.
  • The result concerns convex polyhedra only; several people note the article’s “shape” title is misleading without that qualifier.
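
In symbols, the shadow formulation reads as follows (a standard formalization of the Rupert property, not quoted from the article):

```latex
% P \subset \mathbb{R}^3 convex; \pi(x, y, z) = (x, y) is orthogonal projection.
\exists\, R_1, R_2 \in \mathrm{SO}(3),\; t \in \mathbb{R}^2 :\quad
\pi(R_1 P) + t \;\subset\; \operatorname{int}\!\bigl(\pi(R_2 P)\bigr)
```

Containment in the open interior encodes the required nonzero margin; shadows that merely coincide fail the test.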

Spheres, Limits, and Nonconvex Shapes

  • Many initially point to a sphere (and donuts, cylinders) as obvious shapes that can’t pass through themselves.
  • Others counter: spheres and tori are not convex polyhedra, so they were never part of the conjecture.
  • Attempts to treat a sphere as a “limit” of increasingly fine polyhedra are rejected: limiting behavior is subtle, the limit object is no longer a polyhedron, and properties like Rupertness need not carry over.
  • Nonconvex examples (donut, T-tetromino) are easy noperts, reinforcing why convexity is central.

Computation and Search Strategy

  • The core difficulty is ruling out all orientations for a candidate polyhedron; brute force is impossible.
  • The proof strategy uses projections and parameter-space pruning: if a protruding “shadow” requires large rotations to fix, whole regions of orientations can be discarded (the underlying containment test is sketched after this list).
  • More faces and symmetry make checking Rupertness harder; earlier work (e.g., triakis tetrahedron) already revealed extremely tight fits.
  • The computational part is implemented in SageMath and shared openly; some plan to 3D-print the resulting Noperthedron from provided STL files.
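
A minimal NumPy/SciPy sketch of the containment primitive such a search is built on (hypothetical, and far simpler than the paper's SageMath code; a full search also optimizes over a 2D translation of the inner shadow):

```python
import numpy as np
from scipy.spatial import ConvexHull

def shadow(vertices: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Rotate the 3D vertex set by R, then drop z (orthogonal projection)."""
    return (vertices @ R.T)[:, :2]

def fits_strictly_inside(inner: np.ndarray, outer: np.ndarray,
                         margin: float = 1e-9) -> bool:
    """True if every inner point lies at least `margin` inside outer's hull."""
    hull = ConvexHull(outer)
    # hull.equations rows [a, b, c] satisfy a*x + b*y + c <= 0 inside the hull.
    A, b = hull.equations[:, :2], hull.equations[:, 2]
    return bool(np.all(inner @ A.T + b < -margin))
```

A Rupert witness is a pair of rotations (plus a translation) for which this returns True; disproving Rupertness means excluding all such pairs, which is where the parameter-space pruning does the real work.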

Rotation and Motion

  • Several ask whether twisting or helical motion (sofa-around-a-corner style) could allow passage where straight motion cannot.
  • Replies note the standard Rupert problem assumes straight-line passage and, for convex shapes, rotation during transit likely doesn’t fundamentally change the feasibility condition defined via shadows.

Communication, Naming, and Audience

  • Multiple comments criticize the title’s looseness (“shape” vs “convex polyhedron”) but praise the article’s level of detail as accessible yet substantial.
  • Debate arises over whether Quanta targets laypeople or a technically inclined audience, and whether its headlines verge on clickbait.
  • The coined name “Noperthedron” triggers a deep side thread on how portmanteaus work in English (and even comparisons to Mandarin), illustrating the community’s fondness for linguistic as well as mathematical play.

Value and Funding of Pure Math

  • Some question why such problems are studied at all; others defend curiosity-driven math as legitimate and historically fruitful, with applications often emerging decades later.
  • There’s discussion about who pays for such work (sometimes hobbyists, sometimes institutions), and analogies to past “useless” mathematics that later underpinned computer graphics and logic.

Broader Context and Cultural Reactions

  • Commenters connect this result to a recent popular video exploring Rupert/nopert problems and attempts to show familiar solids (e.g., snub cube) are non-Rupert.
  • There’s enthusiasm for the aesthetics of the shape, suggestions to include it (and other recent mathematical curiosities) on future space probes, and general appreciation for the whimsy, history, and bet-driven origins of the problem.

Asahi Linux Still Working on Apple M3 Support, m1n1 Bootloader Going Rust

Asahi’s Pace vs Apple’s Chip Cadence

  • Some see supporting each new M‑series chip as a Sisyphean task; others (including a contributor) say most non‑GPU/NPU interfaces evolve incrementally, so once a base of drivers exists, a small team can keep up.
  • Many note that even an M1 remains very capable for years, so lagging behind the latest hardware is acceptable, especially for Linux users who often prefer older/used machines.

Openness, Secure Boot, and General‑Purpose Computing

  • Several worry that Macs are a last bastion of general‑purpose computing as platforms drift toward locked‑down, signed‑only ecosystems.
  • Apple deliberately allowed other OSes to boot on Apple Silicon Macs, unlike iOS/iPadOS, but people fear this could be revoked in future generations.

User Experience: What Works Well, What Doesn’t

  • Many report Asahi on M1/M2 as remarkably polished: smooth installation, good daily usability, strong performance, even working 3D gaming for some.
  • Key missing pieces remain: no Thunderbolt/DP Alt‑Mode on some models, no reliable suspend‑to‑RAM or hibernate, and notable sleep battery drain. These keep some users on macOS as primary OS, with Asahi only for specific tasks.

Bare‑Metal Linux vs Virtualization

  • Several insist VMs/containers (Docker, Orbstack, UTM, Apple’s container project) can’t replace bare‑metal Linux for things like Wi‑Fi promiscuous mode, low‑level debugging, or obscure kernel features.
  • Others argue macOS + a well‑integrated Linux VM is more pragmatic than fighting incomplete hardware support.

Mac Hardware vs Linux‑Native Laptops

  • Strong divide: some claim no PC vendor matches MacBook build quality, battery life, and reliability; others point to ThinkPads, Framework, and Linux‑preloaded OEMs as “good enough” or ethically preferable, despite worse battery life.
  • Cost, repairability, and upgradability (soldered RAM/SSD vs modular designs) are major axes of disagreement.

Project Health and Strategy

  • Commenters note key reverse‑engineering figures leaving and worry Asahi is “on life support.”
  • Others counter that the current focus is upstreaming and maintenance; GPU for M3+ is hard because Apple changed the instruction set, but core platform support continues.

Apple’s Incentives and Documentation

  • Many argue Apple has little financial reason to fund Linux drivers: profit comes from ecosystem lock‑in, not from selling Macs to Linux users.
  • Apple is seen as “hands‑off but not hostile”: they neither document nor actively block Linux, which forces Asahi to continue its reverse‑engineering approach.

A sharded DuckDB on 63 nodes runs 1T row aggregation challenge in 5 sec

Sharded / Distributed Query Engines

  • Question about open-source sharded planners over DuckDB/SQLite led to mentions of Apache DataFusion Ballista and DeepSeek’s “smallpond” as comparable approaches.
  • GizmoEdge itself is not open source; the author intends to mature it into a product. Smallpond is cited as an OSS alternative for similar distributed DuckDB-style workloads.
  • Other systems suggested as “already built for this”: Trino, ClickHouse, Spark, BigQuery, and Redshift; some see GizmoEdge as re-implementing a familiar MapReduce-style pattern (per-worker SQL plus a combining merge query).

Hardware Scale, Cost, and Practicality

  • The cluster used 63 Azure E64pds v6 nodes (64 vCPUs, ~500 GiB RAM each), totaling ~4,000 vCPUs and ~30 TiB RAM.
  • Multiple commenters argue this is “overpowered” and question whether it’s cheaper than Snowflake/BigQuery.
  • Rough cost math in the thread: about $236/hour on-demand (~$0.33 for a 5-second query) vs a single Snowflake 4XL at ~$384/hour, but critics note this ignores cluster setup, engineering, and always-on costs (arithmetic reproduced below).
  • A single-node DuckDB setup by the same author reportedly did the challenge in ~2 minutes for about $0.10, raising questions about where the scale-out point really pays off.
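
The thread's cost comparison, reproduced as plain arithmetic (rates are the figures quoted in the discussion, not verified against current Azure or Snowflake pricing):

```python
cluster_per_hour = 236.0                       # 63 on-demand nodes, $/hour
cost_per_query = cluster_per_hour / 3600 * 5   # one 5-second query
print(f"${cost_per_query:.2f}")                # -> $0.33

snowflake_4xl_per_hour = 384.0                 # thread's comparison point
# Caveats raised in the thread: this ignores the 1-2 minutes of data
# loading, cluster setup, engineering time, and keeping the cluster warm.
```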

Challenge Methodology & Fairness

  • Key caveat: the 5-second time excludes loading/materializing data. Workers spend 1–2 minutes downloading Parquet from cloud storage and converting to DuckDB files on local NVMe first.
  • Some argue this violates the spirit of the “One Trillion Row Challenge,” which they interpret as timing from raw files to result; pre-materializing and then measuring only query latency is called “cheating” or at least misleading.
  • Others request explicit cold-vs-hot-cache benchmarks and clearer disclosure; filesystem caching and lack of cache dropping may affect comparability.

Architecture & Implementation Choices

  • Each node ran ~16 worker pods (3.8 vCPU, 30 GiB RAM) due to Kubernetes overhead and cloud quota; the author admits shard sizing is heuristic, not fully optimized.
  • Workers execute DuckDB queries locally and stream Arrow IPC results back to a central server via WebSockets; the server merges the partial results (pattern sketched after this list).
  • A long subthread debates WebSockets vs raw TCP/UDP:
    • Pro-WebSocket arguments: easy framing, TLS termination, existing libraries, multiplexing via HTTP routing.
    • Skeptical views: extra protocol complexity, HTTP parser, and SHA-1 for minimal benefit in a non-browser context; alternatives like raw sockets, ZeroMQ, or Arrow Flight are mentioned.
  • Filesystem choice (ext4 vs XFS) and Linux page cache behavior are raised as potentially material to performance; reproducibility concerns are noted.
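
The worker/coordinator split follows the classic partial-aggregation pattern. A minimal Python sketch using the duckdb and pyarrow packages (schema and column names are invented, and the WebSocket transport is omitted; partials simply arrive as Arrow tables):

```python
import duckdb
import pyarrow as pa

def worker_partial(shard_glob: str) -> pa.Table:
    # Each worker aggregates its local shard and ships a small partial
    # result (sums and counts per key) rather than raw rows.
    return duckdb.sql(
        f"SELECT key, sum(val) AS s, count(*) AS c "
        f"FROM read_parquet('{shard_glob}') GROUP BY key"
    ).arrow()

def coordinator_merge(partials: list[pa.Table]) -> pa.Table:
    # Summing the partial sums and counts yields exact global averages.
    con = duckdb.connect()
    con.register("partials", pa.concat_tables(partials))
    return con.sql(
        "SELECT key, sum(s) / sum(c) AS avg_val FROM partials GROUP BY key"
    ).arrow()
```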

OLAP vs OLTP and Other Databases

  • Several comments contrast DuckDB (columnar, OLAP) with OLTP systems like MSSQL, explaining why analytical aggregations can be orders of magnitude faster on OLAP engines.
  • DuckDB’s “OLAP-ness” is briefly questioned due to writer blocking readers, but others clarify “online” refers to interactive analytics, not realtime streaming.
  • ClickHouse is cited as a market leader in real-time analytics, though some note it still favors throughput over ultra-low-latency ingestion.
  • DuckLake is described as solving upserts over data lakes; some confusion remains about what it adds beyond reading Parquet directly.

Use Cases, Robustness, and Skepticism

  • One commenter worries that DuckDB’s strength is single-node, one-off analytics and that bolting it into a persistent Kubernetes cluster sidesteps hard problems (fault tolerance, re-planning on failure, multi-query resource management, distributed joins).
  • Others see the experiment as a “fun demo” and a proof-of-possibility for edge/observability scenarios, but not yet production-grade.
  • A notable criticism is that sustaining this performance implies keeping 30 TiB of RAM and 4,000 vCPUs warm, which many organizations would balk at paying for continuously.

Miscellaneous Technical Points

  • COUNT DISTINCT at scale is discussed: approximate HLL-based sketches vs exact bitmap-based methods, with mention of a DuckDB extension (example below).
  • Some joking asides: Tableau generating huge queries, quantum-computing hype, and sortbenchmark.org’s insistence on including I/O in benchmarks.
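
On the COUNT DISTINCT point: DuckDB ships both forms out of the box, which makes the trade-off easy to see (table and sizes are illustrative):

```python
import duckdb

con = duckdb.connect()
con.sql("CREATE TABLE t AS "
        "SELECT (random() * 1000000)::BIGINT AS key FROM range(10000000)")

exact = con.sql("SELECT count(DISTINCT key) FROM t").fetchone()[0]
approx = con.sql("SELECT approx_count_distinct(key) FROM t").fetchone()[0]
print(exact, approx)  # the HLL estimate is typically within a few percent,
                      # in constant space instead of tracking every key
```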

Typst 0.14

Role of Typst vs Other Tools

  • Commenters stress that Typst is a typesetter and LaTeX competitor, not a converter like Pandoc.
  • Pandoc is framed as a powerful but different tool: it converts between markup formats and calls external typesetters.
  • Compared with LaTeX, Typst is praised for a cleaner language, single-pass compilation, easier styling, integrated scripting, and a self‑contained binary rather than gigabyte distributions.
  • Compared with Markdown/Asciidoc/Org, Typst is seen as better for complex documents (contracts, specs, books) while still feeling lightweight.

New 0.14 Features & PDF Handling

  • Native PDF-as-image support is widely celebrated as removing a major blocker to leaving LaTeX.
  • The new Rust PDF engine (hayro) impresses people with speed, portability, and standalone reuse; large PDFs render almost instantly.
  • Character‑level justification and early microtypography work are viewed as a big quality upgrade.
  • PDF/UA‑1 export and accessibility checks are praised; some note LaTeX now has tagging too but with more complexity and gaps in package support.

Ecosystem, Tooling, and Business Model

  • Core compiler/CLI is open source; the web editor is proprietary. Many use only the CLI plus TinyMist language server in VS Code and other IDEs.
  • The open‑core model and relatively generous pricing are generally viewed positively, with some caution that many OSS companies change later.
  • Typst’s built‑in package manager and growing ecosystem (slides packages like Touying/Slydst, drawing via cetz, indexing with in‑dexter, games, Tufte‑style templates) are highlighted.

Use Cases and Strengths

  • Users report successfully replacing LaTeX, PowerPoint/Marp, Markdown+Pandoc, and Asciidoc for: theses, books, lecture slides, posters, invoices, CVs, specs, and e‑reader article conversion.
  • Fast incremental compilation, clear diagnostics, Unicode support, and simpler layout/footers are recurring themes.
  • Single‑binary deployment makes it attractive for embedding in Rust/Go services to generate PDFs on the fly.
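
For the embedding use case in the last bullet, PDF-on-the-fly reduces to shelling out to the compiler. A minimal Python sketch (a Rust or Go service would do the equivalent; assumes the `typst` binary is on PATH):

```python
import pathlib
import subprocess
import tempfile

def render_pdf(typst_source: str) -> bytes:
    """Compile a Typst document to PDF bytes via the `typst` CLI."""
    with tempfile.TemporaryDirectory() as d:
        src = pathlib.Path(d) / "doc.typ"
        out = pathlib.Path(d) / "doc.pdf"
        src.write_text(typst_source)
        subprocess.run(["typst", "compile", str(src), str(out)], check=True)
        return out.read_bytes()

pdf = render_pdf("#set page(width: 10cm, height: auto)\n= Invoice 42")
```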

Limitations, Academic Adoption, and Missing Features

  • Major blockers: lack of official support from journals and arXiv, weaker collaborative web experience vs Overleaf, and incomplete parity with LaTeX’s Beamer and TikZ (though Touying/cetz narrow the gap).
  • Other issues mentioned: locale‑aware decimal formatting, citation-style glitches, video/animation in slides, indexing depth, and still‑incomplete accessibility (tables).
  • Backwards‑compatibility policy is seen as unclear; some expect breaking changes until 1.0.

LLMs and Learning Curve

  • Experiences with LLMs generating Typst are mixed: some find them very helpful for templates and snippets, others report constant syntax errors and hallucinations.
  • Regardless, documentation quality and language simplicity make Typst approachable compared with LaTeX.

Poker fraud used X-ray tables, high-tech glasses and NBA players

NBA, Gambling, and Fan Alienation

  • Several commenters say this story is “the last straw” in their relationship with the NBA, tying it to:
    • The league’s aggressive embrace of gambling and constant betting ads.
    • Long, foul-heavy games, load management, tanking, and a very long season.
    • Fragmented TV rights that require multiple subscriptions.
  • Some argue gambling promotion is like cigarette advertising: socially harmful and especially predatory toward kids, with language like “fun” and “play” normalizing addictive behavior.
  • Others note smoking-prevention campaigns and regulation worked only as part of a broader mix (taxes, public bans, de-normalization) and worry similar tools are being abandoned for gambling.

Should the State Police Cheating in Illegal Gambling?

  • One camp calls this a waste of resources: gambling is socially harmful by default, so “fair” vs “unfair” games shouldn’t matter, and legitimizing enforcement could even boost trust in other crooked games.
  • Opponents counter that:
    • This was organized crime, not a kitchen-table game: fraud, extortion, and money laundering are squarely in law enforcement’s remit.
    • $7m in cash/crypto is far more valuable to crime families than equivalent taxed, traceable business revenue.
    • If police don’t intervene, victims may resort to violence.

Cheating Tech: “X-Ray” Tables, Shufflers, and Marked Cards

  • Multiple commenters doubt literal X-rays; the consensus is:
    • Likely IR or similar wavelengths through an IR-transparent tabletop, misbranded as “X-ray” by media or prosecutors.
    • Rigged shufflers that read deck order (often via barcode-like marks on edges) and either:
      • Re-stack decks algorithmically, or
      • Are swapped with pre-arranged decks.
  • Marked “reader” cards plus special glasses/contacts are described as relatively old-school; many note there are simpler, low-tech ways to cheat once you control the environment.
  • Broader point: there are so many cheating methods that playing in private games with strangers is inherently risky.

Economics and Purpose of the Scam

  • Some think $7m over years, split among ~30 people and multiple families, is barely worth the risk and effort.
  • Others suggest:
    • That figure is likely a floor, not the full take.
    • The real leverage may be blackmail and sports betting/fixing tied to indebted NBA figures.
    • The thrill, access to celebrities, and untraceable cash can matter as much as pure ROI.

Poker, Gambling, and Morality

  • Mixed attitudes toward poker:
    • Critics see it as paying to sit for hours, deceive people, and take their money.
    • Fans defend it as a deep skill game (math + psychology) and a structured social activity; low-stakes home games are framed as paying for entertainment, not “trying to get rich.”
  • Several emphasize that pros target wealthy “recreational” players and that variance makes “just play better poker” an unrealistic alternative to guaranteed cheating.

Twake Drive – An open-source alternative to Google Drive

Tech stack & architecture

  • Backend is TypeScript/Node.js with MongoDB, which triggers debate:
    • Some see Node/TS as reasonable for I/O‑heavy services and code-sharing with frontend.
    • Others argue a file‑sync system is also CPU‑heavy (hashing, crypto, concurrency) and that JS performance and the single‑threaded model will become a bottleneck, echoing the criticism leveled at PHP‑based Nextcloud/ownCloud.
  • MongoDB choice is contentious:
    • Several report bad experiences and warn against using it for critical data; others say it’s been “rock solid” for years with WiredTiger.
    • Some note it’s at odds with a “fully open” mission; FerretDB is mentioned as an alternative.
  • Long back‑and‑forth on whether a database is needed at all:
    • One camp says filesystem/ACLs/snapshots/xattrs could store users, permissions, versions, and shareable links (see the sketch after this list).
    • Others counter that complex metadata, joins, transactions, version history, and scalable sync essentially demand a DB.
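
The "filesystem is enough" position can be made concrete with extended attributes; a Linux-only Python sketch (file and attribute names are invented), which is also exactly the kind of design the pro-database camp argues collapses once you need transactions, joins, or version history:

```python
import os

# Attach share metadata directly to the file, no database involved.
os.setxattr("report.pdf", "user.share_token", b"k3yZ9...")
os.setxattr("report.pdf", "user.shared_with", b"alice,bob")

token = os.getxattr("report.pdf", "user.share_token")
```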

Comparison with existing tools

  • Twake is compared heavily to Nextcloud/ownCloud:
    • Critics: Nextcloud seen as bloated, slow, and painful to install/maintain (especially outside their AIO stack).
    • Defenders: report years of stable use with Docker or Snap, good ecosystem, but admit rough edges and “2015‑era” web UI.
  • Seafile is praised as fast and reliable but upgrades can be painful.
  • Syncthing widely liked for peer‑to‑peer sync, but mobile and large‑file use cases are weaker.
  • Simple alternatives: filebrowser, Samba shares, rsync; plus other projects like CryptPad, Peergos, Seafile, Immich for photos.

UX, clients, and core features

  • Unclear whether Twake has polished native or mobile clients; screenshots exist but app‑store links are missing.
  • Many emphasize must‑haves for any Drive replacement:
    • Sync that is predictable and explainable to non‑technical users.
    • Simple conflict handling.
    • Zero‑drama upgrades and easy, testable backups.
    • Selective sync with placeholders (Dropbox/OneDrive‑style) is seen as a major gap in many OSS tools.
  • Integration with collaborative editors is crucial; Twake reportedly bundles OnlyOffice for realtime Docs/Sheets‑style editing.
  • Some users care strongly about advanced search (image/content understanding) where Google remains far ahead.

Security, deployment & sustainability

  • Strong warnings against exposing Samba to the internet; VPN (Tailscale/WireGuard) recommended.
  • Concerns about whether Twake can build a durable community and business model so it doesn’t disappear; corporate backing (Linagora, ex‑Cozy Cloud/Cozy Drive) is noted but not deeply analyzed.
  • Debate over name “Twake” and domain; some think it’s hard to say/spell and thus hurts adoption.

Debian Technical Committee overrides systemd change

Context: /run/lock permission change and Debian TC override

  • systemd upstream made /run/lock root‑writable only, citing security and robustness.
  • Debian’s systemd maintainer followed upstream, which broke older software assuming a world‑writable lock directory.
  • The Debian Technical Committee overrode this, restoring the previous behavior for now in the interest of stability and compatibility.
  • Some argue this is exactly Debian’s role; others see it as an unhealthy clash between upstream and a distro maintainer wearing both hats.

Legacy serial tools and lockfile behavior

  • A side thread debates serial console tools: cu vs minicom, picocom, screen. Some prefer cu for simplicity and ssh‑like escapes; others find it outdated.
  • The traditional UUCP‑style locking model (/var/lock, LCK..device) is still used by some tools; others use flock or newer mechanisms.

Security vs compatibility of world‑writable lock dirs

  • Pro‑change side:
    • World‑writable shared dirs are long known footguns: symlink attacks on root processes and DoS by exhausting tmpfs inodes/space.
    • Modern practice favors flock() and per‑user runtimes ($XDG_RUNTIME_DIR = /run/user/$uid) instead of global /var/lock (sketched after this list).
    • Given increased threat models (untrusted code, supply‑chain issues, AI‑generated bugs), the old design is seen as indefensible long‑term.
  • Skeptical side:
    • The concrete risk from /var/lock is seen as theoretical or niche compared to other attack surfaces.
    • Many legacy or unmaintained tools cannot realistically be fixed; making /run/lock root‑only forces awkward workarounds or containers.
    • Some suggest separate mounts or quotas as less disruptive mitigations.
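
A minimal sketch of the modern pattern the pro-change side describes: flock() on a file in the caller's own runtime directory instead of a LCK..ttyUSB0 file in a world-writable /var/lock (the tool name is hypothetical):

```python
import fcntl
import os

# The lock file lives in the user's private runtime dir, not a shared one.
runtime_dir = os.environ.get("XDG_RUNTIME_DIR", f"/run/user/{os.getuid()}")
lock_file = open(os.path.join(runtime_dir, "myserialtool.ttyUSB0.lock"), "w")

try:
    # Kernel-mediated advisory lock, released automatically if the holder dies.
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    raise SystemExit("device already in use")
```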

FHS, UAPI, and filesystem layout politics

  • One camp says FHS 3.0 is effectively abandoned: it hasn’t tracked /run, /sys, /run/user, /usr‑merge, or container realities, and contains obsolete details (/var/games, /var/mail, UUCP locks).
  • Another argues a filesystem standard should be slow‑moving; “not updated” can mean “mature”, not “dead”.
  • systemd’s file‑hierarchy spec and the Linux UAPI Group are seen by some as a needed de‑facto successor; others view them as systemd/Fedora capturing standardization to legitimize their own layout choices.

Debian culture and pace vs “modernization”

  • Many commenters defend Debian’s “slow‑cooking” ethos: they value never having to reinstall and high upgrade stability, even if it delays changes like this.
  • Others criticize Debian for resisting long‑foreseen cleanups (global writable dirs, /usr merge) and making life hard for upstreams.

Views on systemd and its maintainers

  • Strongly mixed sentiment:
    • Supporters credit systemd with dramatically better service management, logging, and consistency across distros.
    • Critics see a pattern of arrogance, dismissing “niche” breakages, using warnings like “degraded/tainted” for unmerged /usr, and pushing the world to conform to systemd’s assumptions.
    • Some inject distrust over large‑vendor employment and speculate about motives; others push back, noting that upstream reasonably says “distros can patch behavior they want”.

Overall framing of the conflict

  • One reading: a straightforward distro‑vs‑upstream division of labor—systemd tightens defaults, Debian restores legacy behavior for its users.
  • Another reading: a recurring governance and culture clash where systemd unilaterally redefines long‑standing interfaces and Debian must either absorb the fallout or actively resist.

Interstellar Mission to a Black Hole

Primordial / Small Black Holes in the Solar System

  • Some imagine discovering an asteroid‑mass primordial black hole locally, avoiding interstellar travel.
  • Multiple comments stress that black holes are not “cosmic vacuums”: a Moon‑mass black hole would gravitationally behave like the Moon; tides and orbits would remain essentially unchanged.
  • The danger is from Hawking evaporation, not accretion: very small black holes could undergo runaway evaporation if their Hawking temperature exceeds that of the cosmic microwave background, potentially ending in intense gamma bursts (made quantitative after this list).
  • Detection would be hard:
    • Gravitational effects or microlensing are primary options.
    • Hawking radiation might be detectable only in the final stages.
    • Some argue dust accretion should create faint but detectable X‑rays; others counter that matter densities are too low for significant accretion.
  • Ideas surface about black holes captured inside asteroids, making them anomalously dense.
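
The evaporation threshold can be made quantitative with the standard Hawking temperature formula (the numbers here are mine, not from the thread):

```latex
T_H = \frac{\hbar c^3}{8\pi G M k_B}
    \approx \frac{1.2 \times 10^{23}\ \mathrm{K \cdot kg}}{M},
\qquad
T_H > T_{\mathrm{CMB}} \approx 2.7\ \mathrm{K}
\iff M \lesssim 4.5 \times 10^{22}\ \mathrm{kg}
```

That cutoff is roughly 60% of the Moon's mass (~7.3 × 10²² kg), consistent with the bullets above: a Moon-mass black hole is still slightly colder than the CMB and would gain mass, while anything much lighter evaporates, faster and faster as it shrinks.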

Compact Objects as Megastructures / Sci‑Fi Concepts

  • Thought experiments: replacing the Moon with a black hole; building a mini‑Dyson shell around a black hole or neutron star to create a 1g “mini‑world”.
  • Limits noted: white dwarfs likely can’t be Moon‑sized; black holes/neutron stars make more sense.
  • Stability of Dyson‑like structures is highlighted as a major unsolved issue.

Light Sails, Steering, and Relativistic Hazards

  • Clarifications: Breakthrough Starshot–style designs are laser‑driven light sails, not solar‑wind sails; “light sail” is the generic term.
  • Stopping/steering:
    • You can tilt a sail to change direction; destination‑star light or a second reflector could in principle brake the craft.
    • Practically, deceleration forces at high speed and large distances are tiny, making orbital insertion extremely challenging; flyby missions seem more realistic.
  • Concerns raised about relativistic travel:
    • Interstellar medium impacts at ~0.5c could be catastrophic; “deflectors” à la Star Trek are invoked as a useful fiction.
    • Time dilation at 0.1–0.33c is acknowledged but calculated to be small (percent‑level), not millions of years.
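
The percent-level claim checks out against the Lorentz factor (arithmetic is mine):

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
\gamma(0.1c) \approx 1.005, \qquad
\gamma(0.33c) \approx 1.06
```

Onboard clocks lag by roughly 0.5% to 6%, nowhere near a millions-of-years effect.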

Mission Feasibility: Trajectory Control and Communication

  • Several readers argue the key issue—how a ~1 g probe changes trajectory at ~0.3c—is largely hand‑waved in the referenced paper.
  • Proposed workarounds:
    • Fire large swarms of probes and rely on statistics (criticized as still inadequate in vast space).
    • Accept unbound flybys and use multiple daughter probes for local experiments and comparative trajectory measurements.
    • Use the sail itself for steering; more speculative ideas include paired probes with springs, which are dismissed as extremely inefficient “rockets” with terrible specific impulse.
  • Communication challenges:
    • Skepticism that a 1 g craft can transmit useful data over tens of light‑years; Voyager‑style high‑gain antennas and power sources are far too massive.
    • Suggestions include probe relays, return‑trajectory probes, or nuclear/betavoltaic power, but none are worked out in detail.
    • One commenter notes we also haven’t actually located a nearby black hole; relying purely on statistics is itself a “blocking” issue.

Scientific Payoff vs Alternatives and Priorities

  • Some see an interstellar black hole mission as inspirational but question the practical return: “nothing to see” versus strong counter‑claims about rich physics from accretion disks and lensing.
  • The Solar Gravitational Lens (SGL) mission and large orbital interferometric telescopes are proposed as more realistic, near‑term “aggressive” projects with clear payoff (e.g., imaging exoplanet surfaces).
  • Meta‑discussion laments funding going to AI‑pornbots and near‑term commerce rather than deep‑space infrastructure, though others note that profitable tech tends to get built, whereas pure exploration struggles.
  • A few broaden to long‑term human constraints: need to solve launch costs, longevity/aging, and perhaps FTL or cryosleep, or else missions become multi‑generation endeavors.

Apple will phase out Rosetta 2 in macOS 28

Timeline, Precedent, and Apple’s Philosophy

  • Rosetta 2 launched with M1 in 2020; removal in macOS 28 (2027) gives ~7 years of support.
  • Some argue this is generous and consistent with previous transitions (68k→PPC, PPC→Intel, Rosetta 1, 32‑bit drop); Apple has never prioritized long‑term backward compatibility.
  • Others say 6–7 years is short compared to Windows, where very old binaries often still run, and see this as planned obsolescence rather than necessity.

Impact on Existing Mac Apps and Plugins

  • Users rely on Intel‑only apps: scanner software, OCR tools, audio plugins, Photoshop plugins, DAW ecosystems, even current products that still tell users to run DAWs under Rosetta.
  • Many expect “long tail” software (older audio plugins, games, niche tools, studio setups) will simply die; some plan to freeze machines or keep old Macs offline for stability and compatibility.
  • There is concern about losing access to old creative projects that depend on discontinued plugins or formats.

Gaming, Wine/Crossover, and Apple’s “Subset for Games”

  • Apple says a subset of Rosetta will remain for “older unmaintained gaming titles” using Intel‑based frameworks.
  • Debate over what that actually covers: games touch large portions of Cocoa, Metal/OpenGL, AVFoundation, input, etc.; unclear how Apple will support games but not other apps using similar APIs.
  • Wine/Crossover and Game Porting Toolkit rely on the same translation tech; some fear newer Windows‑only games or Mac ports via Wine could be collateral damage despite the “games” carve‑out.

Containers, Docker, and Dev Workflows

  • Big concern from developers: x86‑only Docker images (e.g., SQL Server, corporate stacks) and desire to run exactly the same images as x86 production.
  • Confusion over whether Rosetta for Linux VMs and Apple’s containerization framework (which uses Rosetta) are affected; some read the notice as Mac‑app‑only, others note Apple’s language is vague.
  • Many report they already ship multi‑arch images; others say duplicating builds for ARM adds cost and isn’t always justified.

Virtualization and Technical Details

  • Rosetta never emulated whole VMs; Parallels/QEMU can emulate x86 independently but are much slower without Rosetta.
  • Apple Silicon includes hardware assists (TSO, flag instructions) for fast x86 translation; those will likely remain as long as any Rosetta subset exists, so chip die area isn’t saved by dropping Mac‑app support alone.

Reactions and Alternatives

  • Supporters see this as a necessary push to finish the ARM64 transition and reduce maintenance/QA burden.
  • Critics emphasize loss of user trust, broken workflows, and contrast with Linux/WINE or Windows’ longer compatibility horizons; some recommend not upgrading or switching platforms.
  • Several wish Apple would open‑source Rosetta so the community could maintain long‑term x86 support independently.

Alaska Airlines' statement on IT outage

Compensation policy and source confusion

  • One early subthread debates a quoted list of remedies (hotels, ground transport, meals, rebooking on other carriers).
  • Confusion arose because this text was on a linked “flexible travel policy” page, not the main statement page.
  • People argue over citation norms: whether quoting from a linked document without an explicit link is misleading, and whether linked pages should be treated as part of “the document.”

Passenger experiences during the outage

  • Multiple passengers report 4–8+ hour delays, tarmac waits, and arrivals at 3am.
  • Communication is described as poor: ground stops and repeated system failures weren’t clearly explained to passengers or gate staff.
  • Crew duty-time limits created extra uncertainty, with some flights ultimately canceled when crews “timed out.”
  • Offered compensation ranged from small meal vouchers ($12–$24) to potential discount codes, seen by some as inadequate given airport prices and lost time.

Legal and financial compensation debates (EU vs US)

  • Commenters contrast EU261-style compensation (250–600 EUR for long delays) with weaker or dismantled protections in the US.
  • Many recount European airlines resisting payouts, requiring escalation to regulators, small-claims court, or third‑party claim services.
  • There’s discussion of airlines exploiting technicalities (e.g., cancel vs delay, “extraordinary circumstances”) to avoid liability.

Operational impact and flight diversions

  • Some flights in the air were diverted or even returned to origin, possibly to avoid gate gridlock at Seattle.
  • Commenters note that once airborne, core operational IT needs are limited; the choke point is gates and ground operations.

Speculation about technical root cause

  • Some joke about expired certificates or DNS; others cite the airline’s wording about a “failure at a primary data center.”
  • One commenter claims many certificates are manually managed and prone to expiry; “autorotate everything” is the suggested best practice (a minimal expiry check is sketched below).
  • Others question the vague phrase “IT outage” and whether it masks internal mistakes vs external attacks.
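
Whatever the actual root cause, the monitoring half of “autorotate everything” is cheap. A minimal Python expiry check (the hostname is illustrative):

```python
import datetime
import socket
import ssl

def days_until_expiry(host: str, port: int = 443) -> int:
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((host, port), timeout=5),
                         server_hostname=host) as s:
        not_after = s.getpeercert()["notAfter"]  # e.g. 'Jun  1 12:00:00 2026 GMT'
    expiry = datetime.datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expiry - datetime.datetime.utcnow()).days

print(days_until_expiry("example.com"))  # alert when this drops below ~14
```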

Airline IT culture, infrastructure, and pay

  • Several threads describe Alaska’s infrastructure as old, fragmented, and dominated by internal “fiefdoms” resistant to modernization or best practices.
  • There are anecdotes of critical processes hinging on fragile components (e.g., SMTP), lack of cross‑team collaboration, and high turnover.
  • Reported compensation for engineers and SREs is considered low for mission-critical roles in the Seattle market.
  • Some defend older, mainframe-based cores (e.g., TPF) as stable, arguing that outages usually arise in newer middleware and integration layers.
  • Debate centers on culture and incentives more than raw technology: reliable systems could be built with 2015-era tech, but organizations don’t prioritize or staff that work.

Broader concerns about airline reliability and regulation

  • Commenters note that all major US carriers have had large IT failures recently, with repeated nationwide ground stops.
  • Perceived lack of consumer or regulatory pressure leads to minimal investment in resilience; many expect such disruptions to remain common.
  • Outsourcing to large IT vendors is blamed by some for systemic fragility.

Website / UX side-notes

  • The outage statement page is criticized for heavy weight due to a 2.4MB SVG logo that embeds an unoptimized PNG.
  • Commenters view this as emblematic of sloppy implementation and easy, low‑hanging performance fixes being ignored.

'Attention is all you need' coauthor says he's 'sick' of transformers

Dominance of Transformers and Research Monoculture

  • Several comments argue that transformers’ success has created an unhealthy monoculture: funding, conferences, and PhD work overwhelmingly chase incremental transformer gains instead of exploring other paradigms.
  • One analogy compares this to the entire food industry deciding to only improve hamburgers; another frames it as an imbalance between “exploration vs. exploitation.”
  • Others counter that this is just natural selection in research: the approach that works best (right now) wins attention and resources.

How Transformative Have Transformers Been?

  • Supporters say transformers have radically changed NLP, genomics, protein structure prediction (e.g., AlphaFold), drug discovery, computer vision, search/answer engines, and developer workflows.
  • Some practitioners describe LLM coding assistants as personally “transformative,” turning stressful workloads into mostly AI-assisted implementation.
  • Critics claim impacts in their own fields are “mostly negative,” with transformers driving distraction, noise, and shallow work rather than genuine scientific progress.

Slop, Spam, and Societal Harms

  • A recurring theme: transformers drastically lower the cost of producing plausible but wrong or low‑quality content (“slop”).
  • People highlight spam, scams, propaganda, astroturfing, robocalls, and degraded student learning as domains where LLMs currently excel.
  • Others argue models can also be used to filter and analyze such content, but acknowledge that incentives currently favor mass low-quality generation.

Architecture Debates and Alternatives

  • Some view transformers as an especially successful instance of a broader class (probabilistic models over sequences/graphs) and expect future gains from combining them with older ideas (PGMs, symbolic reasoning, causal inference).
  • Others emphasize architectural limits: softmax pathologies, attention “sinks,” positional encoding quirks, and scaling/energy costs. Various papers and ideas (e.g., alternative attention mechanisms, hyper-graph models, BDH) are mentioned as promising.
  • A minority is skeptical that a radically new architecture is the key; they see more upside in better training paradigms (e.g., reinforcement learning, data efficiency) than in replacing transformers.

AGI, Deduction, and Cognition

  • Some argue transformers are fundamentally inductive and can’t truly perform deduction without external tools; others respond that stochasticity doesn’t preclude deductive reasoning in principle.
  • A long subthread debates whether LLM capabilities imply “nothing special” about the human brain vs. the view that human cognition is grounded in desire, embodiment, and neurobiology in ways transformers do not capture.
  • There’s disagreement over whether LLM-generated work is genuinely “original” or just sophisticated plagiarism, and whether hallucination makes them categorically unlike human reasoning or just a noisier analogue.

Research Culture, Incentives, and Productization

  • Commenters note short project horizons (e.g., 3-month cycles) aimed at top conferences and benchmarks, favoring shoddy but fast incremental work.
  • Much of what the public sees as “AI” is described as 90% product engineering (RLHF, prompt design, UX) built on a small core of foundational research.
  • True non-transformer research is perceived as a small, underfunded fraction, overshadowed by the “tsunami of money” for transformer-based products.

Hardware, Energy, and Lock‑In

  • Transformers are praised for aligning extremely well with parallel GPU hardware, in contrast to RNNs; this hardware match is seen as a major reason they won.
  • Some worry that massive investment in GPU-style infra could trap the field on suboptimal architectures; others say good parallel algorithms are inherently superior, and hardware will evolve with any better approach.
  • Energy use and data center build‑out are flagged as looming constraints; some hope this will force more fundamental innovation.

Reactions to the Sakana CTO’s Anti‑Transformer Stance

  • Some dismiss the “sick of transformers” line as fundraising theater—positioning around “the next big thing” without specifying it.
  • Others see it as a normal researcher reaction: once a technique is “solved” and industrialized, curious people move on to more open problems.
  • A few compare this to artists abandoning a popular style, driven by boredom, stress, or ambition rather than purely by money.

Roc Camera

Purpose and Concept

  • Device is a Raspberry Pi–based camera that attaches a cryptographic proof (marketed as a ZK proof) to each image, claiming to attest that a given photo came from that camera and is unmodified.
  • Many commenters note it does not prove that the depicted scene is “real” or non‑AI, only that “this file was produced by this device with this firmware and metadata.”

Attacks and Limitations

  • “Analog hole” is repeatedly raised: you can photograph a screen, projection, or high‑quality print of an AI image and still get a valid proof.
  • Depth/LiDAR and extra sensors (IMU, GPS, ambient audio, etc.) are suggested to make such rebroadcasting harder, but others point out those signals can be spoofed (e.g., FPGA feeding CSI-2, HDMI‑to‑CSI adapters, fake sensor boards).
  • Even perfect attestation cannot address staging or selective framing; you can cryptographically prove a real photo of a misleading or manipulated scene.
  • Without a secure element on the sensor or SoC, several argue the current design cannot meaningfully prevent fully synthetic input.

Existing Standards and Alternatives

  • Multiple references to C2PA and camera-vendor schemes (Sony, Leica, Nikon, Canon). These sign images and/or edit histories; some earlier implementations were cracked.
  • Some say a simple per‑camera signing key is enough and ZK is just hype; others emphasize that richer, chained provenance (device + software edits) is the more mature direction.
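
For reference, the “simple per-camera signing key” baseline the ZK pitch is measured against is only a few lines (a sketch using the pyca/cryptography package; in a real camera the private key would sit in a secure element and the public key in a vendor registry):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()     # provisioned once at manufacture
photo = open("IMG_0001.jpg", "rb").read()

signature = device_key.sign(photo)            # "this device emitted these bytes"

try:
    device_key.public_key().verify(signature, photo)
    print("unmodified and from this device")
except InvalidSignature:
    print("modified, or not from this device")
```

This attests provenance of the bytes only; none of the analog-hole or staging attacks above are addressed, which is the thread's point.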

Hardware and Product Design Reactions

  • Strong criticism of the $399 price for what appears to be a Pi 4, off‑the‑shelf IMX519 module, and visibly 3D‑printed case with cheap buttons.
  • Concerns about image quality (tiny sensor), Pi boot time and power draw, lack of current export function, and janky marketing site (scroll hijacking).
  • A minority defend it as a scrappy hardware experiment worth supporting even if rough; others call it a “toy” or “crypto gimmick.”

Open Source, Security Model, and ZK Debate

  • One side claims open-sourcing would break trust (users could sign AI images); others explain secure boot / HSM designs where user-modified firmware simply doesn’t get the vendor’s attestation key.
  • Several people ask what the ZK proof is actually proving beyond what a standard signature would, and note the site gives almost no technical detail.

Use Cases, Trust, and Social Implications

  • Suggested serious uses: journalism, courts, law enforcement, insurance, bodycams, real estate documentation.
  • Others argue that in practice authenticity will remain a matter of institutional and personal reputation, and that cryptographic “realness” may be overvalued, dystopian, or quickly undermined, much like DRM or NFTs.

Computer science courses that don't exist, but should (2015)

Resource-Constrained & Historical Computing

  • Many commenters like the idea of “Classical Software Studies”: writing games or apps on 64KB-era machines (Commodore, Game Boy, PDP/VAX, etc.) to relearn discipline around memory, modularity, and system limits.
  • Others warn this can give warped intuitions for modern tradeoffs (maintainability vs. micro-optimizing for RAM/CPU). Good as an introductory or elective experience, not the core of a curriculum.

Real-World Software Engineering Skills

  • Strong support for courses on:
    • Handling shifting requirements and “moving goalposts” via realistic group projects where specs change late.
    • Version control theory and practice (branches, merges, bisect, strategies).
    • CI/CD, build systems, containers, deployment pipelines, and debugging real systems.
    • Unix/Linux fundamentals (shell, text tools, processes), basic sysadmin/networking.
    • Teamwork, project management, and client communication.
  • Some report such courses exist in pockets (e.g., sneaky spec changes, devops-focused Java courses), but they’re rare and hard to fit into standard curricula.

CI/CD, DevOps, and What Counts as “Computer Science”

  • One side: CI/CD, Kubernetes, Jenkins, etc. are “not CS” but tooling; curriculum should focus on theory.
  • Other side: there’s plenty of CS in these topics (distributed systems via k8s, algorithms/policies in merging, conflict resolution, scheduling) and they are central to modern practice.
  • Practical obstacle: teaching CI/CD often devolves into teaching specific vendor stacks, and students may lack prerequisite skills (shell, Git, Linux).

History of Computing & “Classical Software Studies”

  • Many argue computing should treat its history more like physics or civil engineering: understanding past systems (VisiCalc, MacPaint, early hypermedia, Smalltalk-like systems) prevents constant reinvention and can inspire better designs.
  • Pushback: computing’s history is short and deeply tied to changing hardware “substrate,” so 1970s constraints may not generalize to today.
  • Counter-pushback: for most everyday applications, hardware stopped being the limiting factor decades ago, so historical software and interaction ideas are still highly relevant.

Object-Oriented vs Functional Programming / “Unlearning OOP”

  • The proposed “Unlearning OOP” course triggers heated debate:
    • Critics: this reflects a shallow or strawman view of OOP; many large systems rely on it successfully, especially in enterprise environments.
    • Supporters: OOP as commonly practiced (especially mutation-heavy, inheritance-centric designs) creates tightly coupled “webs of state” that resist refactoring, whereas functional styles encourage small, composable, testable units.
  • Several note that OOP and FP are often complementary in practice (e.g., functional core, imperative shell), and that the real goal is to loosen the dogma that “everything must be an object.”

Performance, “Slow Languages”, and Algorithmic Complexity

  • Enthusiasm for a “Writing Fast Code in Slow Languages” course:
    • Show that algorithmic improvements (O(N) vs O(N²)) often dominate language speed (sketched after this list).
    • Teach vectorization, leveraging C-backed libraries (NumPy, Python sort), and architecture-aware design.
  • Some argue Big-O ignores constants and hardware effects; in practice, “worse” algorithms on linear, vectorizable data can beat “better” algorithms on pointer-heavy structures.
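
A sketch of that course's core lesson on the classic two-sum problem (illustrative): dropping from O(N²) to O(N) in plain Python beats any constant-factor win from rewriting the quadratic version in a faster language.

```python
def two_sum_quadratic(xs: list[int], target: int) -> bool:
    # O(N^2): examines every pair.
    return any(a + b == target
               for i, a in enumerate(xs) for b in xs[i + 1:])

def two_sum_linear(xs: list[int], target: int) -> bool:
    # O(N): one pass with a hash set. For large N this wins even against
    # the quadratic version compiled in a language 100x faster than Python.
    seen = set()
    for b in xs:
        if target - b in seen:
            return True
        seen.add(b)
    return False
```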

Debugging, Legacy Code & Software Archaeology

  • Many want explicit training in:
    • Systematic debugging (beyond print statements): debuggers, logs, profilers, memory tools; avoiding “debugging hypnosis.”
    • Reading and modifying large, messy, or abandoned codebases; understanding tickets, wikis, version history, and “digital archaeology.”
  • Suggestions include lab-style courses: students get a legacy system riddled with bugs and performance issues, and the course ends when tests pass and scaling goals are met.

Programmer Psychology & Workplace Dynamics

  • Popular proposed courses:
    • “Refusal Lab” and ethics labs: practicing saying no to unethical or impossible demands.
    • “Obsessions of the Programmer Mind”: code formatting, type wars, over-refactoring, novelty-driven development, hype-chasing vs. principled skepticism.
    • Team dynamics, meeting behavior, blame management, and career realities (cheating analogies, “success has many parents; failure is an orphan”).

Broader Curriculum & CS vs Software Engineering

  • Recurrent theme: many of the proposed classes are really software engineering, devops, or professional practice, not “computer science” in the narrow theoretical sense.
  • Some fear CS degrees are drifting into coding bootcamps; others argue current programs already overemphasize theory and underprepare students for actual software development.
  • Shared undercurrent: whatever the division, students benefit from exposure to multiple paradigms (imperative, OO, functional, logic), historical systems, and at least one deep dive into how real software is built and maintained.

Counter-Strike's player economy is in a freefall

Artificial scarcity and digital ownership

  • Many see CS skins as classic artificial scarcity: trivially reproducible assets whose supply is constrained by a central database (Valve), similar in structure to NFTs but centralized.
  • Others note this mirrors long-standing real-world practices: luxury brands limiting runs, art markets where provenance drives price, designer sneakers, beanie babies, trading cards.
  • Skeptics argue digital items differ because their existence and rules depend entirely on a single company, which can change value “at the press of a button.”

China, capital controls, and shadow banking

  • Several comments describe CS skins as a de facto RMB↔USD bridge under Chinese capital controls.
  • Skins can be bought and sold in both currencies and transferred across regions; this enables informal FX and “discounted” Steam balance.
  • There’s disagreement on scale: some say it’s meaningful shadow banking; others claim Steam’s volume is too small to matter compared with other, more established channels.

Gambling, lootboxes, and kids

  • Strong sentiment that lootboxes are unregulated gambling, often targeted at minors, with case openings openly marketed on Twitch/YouTube.
  • Comparisons made to Pokémon/TCG packs, arcade ticket games, coin pushers, and casino gambling; several want lootboxes banned outright, at least for under‑18s.
  • Debate over responsibility: some blame parents for lax controls; others argue it’s unreasonable to expect families to individually counter highly-optimized gambling systems.

Status signaling and “irrational” value

  • High-priced skins are compared to designer bags, watches, sneakers, gold, and fine art: largely about status signaling and group norms, not intrinsic utility.
  • Some note online identities (pseudonyms) can be as important as offline status, so digital flexing feels meaningful to many players.
  • Others remain baffled that a $20k cosmetic knife can seem rational to anyone, seeing it as speculation, money laundering, or pure vanity.

Reactions to Valve’s changes

  • Many active players welcome cheaper knives and the blow to third‑party gambling sites, hoping this “re‑centers the game” on gameplay.
  • Traders and “investors” describe large paper losses and fear further unexpected rule changes; some mention extreme outcomes like reported suicides.
  • Several frame this as a textbook lesson in closed, centrally controlled economies: one policy change can wipe out billions in notional value.

Valve’s incentives and ethics

  • Two main theories: (1) product/legal strategy—reduce gambling risk and external speculation; (2) economic strategy—shift trades back on‑platform, where Valve captures marketplace fees.
  • An ex‑developer describes long‑standing internal concerns about off‑platform markets, scams, and players grinding purely for money.
  • Some view Valve as relatively pro‑consumer compared to the industry; others see the entire skins/lootbox system as “running a casino for kids” regardless of this tweak.

Apple loses UK App Store monopoly case, penalty might near $2B

Scope of Monopoly and the 30% Figure

  • Debate over how broad Apple’s power really is: critics frame it as “every business pays 30% to reach mobile users”; others counter that:
    • Only iOS app distribution is affected, not all businesses.
    • Historically 30% applied to paid apps and digital goods; now many apps pay 15% or nothing.
  • Clarification that pre‑2020 it was effectively a blanket 30% on paid apps and most in‑app digital purchases; some categories (Uber, Amazon, etc.) were exempt from IAP but still paid the 30% “distribution” cut on paid apps.

How the Tribunal Got to 17.5%

  • People ask where 17.5% comes from.
  • Quoted judgment: tribunal compared other platforms (Epic Games Store, Microsoft Store, Steam’s lower tier), concluded a competitive range of 15–20%, and simply took the midpoint (17.5%) via “informed guesswork.”
  • Some call this “made up”; others reply that this is literally what courts do when reconstructing a counterfactual in a monopolized market.
  • Disagreement whether Google Play should be a benchmark, given it is itself alleged to have similar competition issues.

What Is a “Fair” Commission?

  • Answers range from 0% (“already paid via hardware margins”) to 5–10% (anchored to payment processors) to “anything the market will bear if there is real competition.”
  • Strong view that the core problem isn’t the percentage but exclusivity: Apple both owns the OS and is the only allowed store and payment channel.
  • Many argue the only principled answer is: let third‑party stores and payments compete; then the market discovers the fair rate.

Sideloading, Third‑Party Stores, and Security

  • One camp: iOS’s gatekeeping is essential to prevent mass malware, spyware, and app‑driven fraud; opening to sideloading would re‑create the Windows XP “adware hell.”
  • Opposing camp: security comes from sandboxing and OS design, not store monopoly; macOS and PCs allow arbitrary installs and are not overrun; App Store already hosts scams and spyware‑like apps.
  • Some argue for strong warnings and “dumb‑user” modes, not a universal ban on alternative stores or direct .ipa installs.
  • Concern that Google is now copying Apple’s lockdown (verified‑developers signing, attestation), creating a de facto duopoly.

Comparisons: Steam, Consoles, and Others

  • Steam also takes ~30% but:
    • Operates on open platforms (Windows/Linux).
    • Decreases its cut at high volumes and provides substantial discovery/promotion.
  • Console stores (Sony/Nintendo) similarly charge 30%, but their hardware doesn’t dominate general computing and critical services (banking, messaging) the way phones do.
  • Many say the Apple case is different because iOS devices are effectively essential infrastructure and Apple bars rival stores and toolchains.

Fines vs Structural Remedies

  • $2B is characterized as “a couple of days of global revenue” or “a few days of profit”; likely cheaper than giving up monopoly rents.
  • Some suggest fines should scale with global revenue or escalate daily until compliance, or be paired with personal liability for executives.
  • Others argue fines are inherently weak; real remedy should be structural (forcing separation of OS and store, or mandatory openness).

Who Ultimately Pays

  • Tribunal assumed developers pass ~50% of the overcharge to users; several commenters call that economically naive, arguing prices are set by willingness to pay and devs will mostly keep the windfall if fees drop.
  • Broader point: in most cases, end users ultimately bear much of the cost of monopolistic rents, though incidence depends on market elasticity.

How memory maps (mmap) deliver faster file access in Go

When mmap is faster (and when it isn’t)

  • mmap can eliminate copies (kernel → user buffer → app buffer), giving big wins for read‑heavy, random access workloads (e.g., CDB, custom DBs, large matrices, append‑only logs).
  • For already-cached data, mmap can be as fast as a memcpy from RAM, while read/pread adds syscall overhead plus copies.
  • However, for typical sequential reads into a buffer, conventional I/O (read/pread, or buffered fread) and modern async mechanisms (io_uring) are often as fast or faster, with better readahead behavior.
  • Multiple commenters stress mmap is “sometimes” faster, not a general replacement for read/write.
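
  For reference, a minimal Go sketch of the zero-copy pattern under discussion, using syscall.Mmap on a Unix-like OS (data.bin is a hypothetical non-empty input file):

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	f, err := os.Open("data.bin") // hypothetical non-empty input file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	st, err := f.Stat()
	if err != nil {
		panic(err)
	}

	// The returned slice aliases the kernel page cache: pages fault in on
	// first touch, with no extra kernel-to-user copy per read call.
	data, err := syscall.Mmap(int(f.Fd()), 0, int(st.Size()),
		syscall.PROT_READ, syscall.MAP_SHARED)
	if err != nil {
		panic(err)
	}
	defer syscall.Munmap(data)

	// Random access is plain slice indexing; no pread per lookup.
	fmt.Println(data[len(data)/2])
}
```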

Interaction with Go’s runtime and page faults

  • With mmap, a page fault blocks the OS thread running that goroutine; the Go scheduler can’t run another goroutine on that M/P while the fault is serviced.
  • With explicit file I/O, the goroutine blocks in a syscall and the runtime can schedule other goroutines, giving better utilization under I/O latency.
  • There is currently no cheap, ergonomic OS/API model for “async page faults”; proposals involving mincore, madvise, userfaultfd, signals, or rseq-style schemes are seen as complex and/or slow.
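
  A minimal sketch of the two blocking models, assuming a Unix-like OS and a hypothetical data.bin of at least 4 KiB:

```go
package main

import (
	"os"
	"syscall"
)

func main() {
	f, err := os.Open("data.bin") // hypothetical input file
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Explicit I/O: the goroutine parks in the read syscall, and the
	// runtime can schedule other goroutines in the meantime.
	buf := make([]byte, 4096)
	if _, err := f.ReadAt(buf, 0); err != nil {
		panic(err)
	}

	// mmap'd I/O: the first touch of a cold page is a hard page fault
	// that stalls the whole OS thread; the Go scheduler never sees it,
	// so nothing else runs on that thread until the fault completes.
	data, err := syscall.Mmap(int(f.Fd()), 0, 4096,
		syscall.PROT_READ, syscall.MAP_SHARED)
	if err != nil {
		panic(err)
	}
	defer syscall.Munmap(data)
	_ = data[0] // potential invisible stall here
}
```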

Benchmark design and the “25x faster” claim

  • Several commenters say the showcased benchmark is flawed: the mmap version often doesn’t actually touch data, only returns slices, so it measures “getting a pointer” vs “copying data”.
  • To be fair, both paths should actually read or copy the bytes, and the page cache should be controlled (e.g., dropping caches between runs, or pre-touching pages).
  • With realistic access (actually reading bytes), a 25x gap is considered unlikely; more like small constant-factor differences depending on call size and access pattern.
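
  A sketch of what a fairer Go benchmark might look like, with both paths forced to touch every byte (testdata.bin is a hypothetical pre-created file; for cold-cache numbers, drop the page cache between runs):

```go
package mmapbench

import (
	"io"
	"os"
	"syscall"
	"testing"
)

var sink byte // keeps the compiler from optimizing the work away

// touch reads every byte, so both benchmarks measure data access,
// not just the cost of obtaining a slice.
func touch(b []byte) {
	var s byte
	for _, c := range b {
		s += c
	}
	sink = s
}

func open(b *testing.B) (*os.File, int) {
	f, err := os.Open("testdata.bin") // hypothetical pre-created file
	if err != nil {
		b.Fatal(err)
	}
	st, err := f.Stat()
	if err != nil {
		b.Fatal(err)
	}
	return f, int(st.Size())
}

func BenchmarkReadAt(b *testing.B) {
	f, size := open(b)
	defer f.Close()
	buf := make([]byte, size)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := f.ReadAt(buf, 0); err != nil && err != io.EOF {
			b.Fatal(err)
		}
		touch(buf)
	}
}

func BenchmarkMmap(b *testing.B) {
	f, size := open(b)
	defer f.Close()
	data, err := syscall.Mmap(int(f.Fd()), 0, size,
		syscall.PROT_READ, syscall.MAP_SHARED)
	if err != nil {
		b.Fatal(err)
	}
	defer syscall.Munmap(data)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		touch(data) // faults pages in on the first pass, then reads from cache
	}
}
```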

APIs, OS design, and alternatives

  • Unix historically standardized on read/write to work uniformly across files, ttys, pipes, etc.; mmap arrived later and has different semantics (e.g., mapping can change under you, SIGBUS on shrink).
  • io_uring is repeatedly cited as a better modern primitive for high‑performance I/O: async, controllable readahead, zero copies with the right setup, and no hidden page-fault stalls.
  • Some argue OS-level abstractions like scheduler activations or newer proposals (UMCG) could better integrate user scheduling with page faults, but these are not widely available today.

Pitfalls and gotchas

  • Mappings outlive file descriptors; truncating or shrinking a mapped file can cause SIGBUS on access and unspecified behavior.
  • mmap allocations don’t show up clearly in Go’s pprof heap reports, making memory pressure/debugging harder.
  • Writes via mmap are tricky; in-place random writes can be problematic, though append-only patterns can work well.
  • Some filesystems or drivers can misbehave with mmap (e.g., reports of issues on macOS ExFAT with SQLite WAL), though the exact root causes are debated.
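
  The truncation pitfall is easy to reproduce; a minimal Linux sketch that deliberately crashes (victim.bin is a hypothetical scratch file):

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	f, err := os.Create("victim.bin") // hypothetical scratch file
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := f.Truncate(1 << 20); err != nil { // 1 MiB of backing store
		panic(err)
	}

	data, err := syscall.Mmap(int(f.Fd()), 0, 1<<20,
		syscall.PROT_READ, syscall.MAP_SHARED)
	if err != nil {
		panic(err)
	}
	defer syscall.Munmap(data)

	fmt.Println(data[4096]) // fine: the page is file-backed

	// Shrink the file while the mapping is still live. The mapping itself
	// survives, but pages past the new EOF lose their backing store.
	if err := f.Truncate(0); err != nil {
		panic(err)
	}
	fmt.Println(data[4096]) // SIGBUS: Go aborts with "unexpected fault address"
}
```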

Real-world usage patterns

  • Positive reports: custom DBs, key–value stores, and log-structured systems see large gains from mmap, especially for random reads and read‑mostly workloads that fit in RAM.
  • Negative/skeptical reports: for typical apps doing buffered sequential reads or needing robust concurrency semantics, mmap adds complexity and hidden latency points without dramatic wins.

Context around Varnish

  • There’s discussion clarifying that “Varnish Cache” today exists as both a corporate fork and a renamed open-source project (Vinyl Cache), and that the company behind the blog post has long funded and maintained the codebase; it was never a one‑person effort.

/dev/null is an ACID compliant database

Humorous framing of /dev/null as a database

  • Most comments lean into the joke: /dev/null as the perfect, infinitely scalable, ACID- and CAP-compliant “database” with zero bugs, no support issues, and massive cost savings once all data is piped into it.
  • People riff on real-sounding productization: “Null as a Service” (NaaS), devnull-as-a-service.com, enterprise policies with /dev/null0 and /dev/null1, audits, and DR runbooks.
  • Support and analytics jokes: routing tickets and logs to /dev/null “solves” reliability and throughput problems.

ACID, CAP, and pedantic pushback

  • Several point out that /dev/null is not actually a database but rather a data sink.
  • Durability is debated: some argue it trivially satisfies durability because nothing is stored; others insist durability refers to preserving user data, which clearly fails here.
  • Similar nitpicks about “state transitions” when there is effectively only one state, and that the joke relies on colloquial, not formal, definitions of ACID.

Real-world behavior and failure modes

  • /dev/null can fail in practice: /dev may be missing, the system can run out of file descriptors or memory, or /dev/null may be replaced by a regular file or a different device (allowing data capture or causing system instability).
  • Disaster recovery is tongue-in-cheek: you can recreate it with mknod and chmod, and “all the same data will be there” because it’s always empty.
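
  The “disaster recovery” step is real, for what it’s worth: on Linux, /dev/null is just the character device with major 1, minor 3. A minimal sketch using golang.org/x/sys/unix (requires root):

```go
package main

import "golang.org/x/sys/unix"

func main() {
	// /dev/null is the character device (major 1, minor 3) on Linux.
	// Mknod requires root (CAP_MKNOD).
	dev := int(unix.Mkdev(1, 3))
	if err := unix.Mknod("/dev/null", unix.S_IFCHR|0666, dev); err != nil {
		panic(err)
	}
	// mknod is subject to the umask, so force world read/write explicitly.
	if err := unix.Chmod("/dev/null", 0666); err != nil {
		panic(err)
	}
}
```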

Performance, scalability, and “web scale”

  • Benchmarks show tens of GB/s when streaming /dev/zero into /dev/null; others test pipes (yes | pv) and observe pipe or tool overheads.
  • Many jokes about horizontal sharding, global sharding “across the universe,” Kubernetes deployments, and “Heisen-throughput” that scales as high as you can measure.
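
  A rough Go sketch of such a measurement; the result mostly reflects the copy loop’s buffer size (io.Copy defaults to 32 KiB) rather than anything about /dev/null itself, which is part of why pipe-based tests report different numbers:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"time"
)

func main() {
	src, err := os.Open("/dev/zero")
	if err != nil {
		panic(err)
	}
	defer src.Close()

	dst, err := os.OpenFile("/dev/null", os.O_WRONLY, 0)
	if err != nil {
		panic(err)
	}
	defer dst.Close()

	const total int64 = 10 << 30 // move 10 GiB through userspace
	start := time.Now()
	n, err := io.CopyN(dst, src, total)
	if err != nil {
		panic(err)
	}
	secs := time.Since(start).Seconds()
	fmt.Printf("%.1f GB/s\n", float64(n)/secs/1e9)
}
```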

Related satire and analogies

  • References to S4 (Super Simple Storage Service), write-only memory, “mangodb,” nocode, supersimplestorageservice.com, and pipe-logic/circuit analogies.
  • A few criticize the joke as weak or sloppy; others defend it as classic nerd humor and a “trivial solution” that usefully highlights how ACID/CAP definitions can be vacuous.

Security / tampering angle

  • One concern: a privileged user can replace /dev/null with a normal file or different device, allowing interception of “discarded” data.

When is it better to think without words?

Experimental psychology and “insight”

  • Commenters connect the essay’s themes to “insight” problem solving: getting stuck due to fixation, then suddenly seeing a solution.
  • Verbalization is said to increase fixation, while visualization can help break it—consistent with some lab findings.
  • Some criticize the essay as unmoored from existing research and wish it cited the problem‑solving literature more directly.

Different inner modes of thought

  • Wide variation is reported:
    • Constant, sentence‑like inner monologue.
    • Primarily images, spatial/kinesthetic “feel”, or emotion with little or no words.
    • Aphantasia (no imagery) with purely linguistic or “silent text” thought.
    • Hyper‑vivid imagery, sometimes remembered decades later.
  • People describe reading either as “hearing every word” (slower, high comprehension) or as absorbing paragraph shapes and concepts (fast, lower comprehension).
  • Many note difficulties visualizing abstractions or, conversely, difficulty thinking in pure language.

When wordless thinking seems better

  • Visual art, music, athletics, mushroom hunting, and some forms of coding are described as almost entirely nonverbal.
  • Several programmers and mathematicians report “seeing” structures, architectures, or proofs as shapes, flows, or motions, then later translating to code or notation.
  • Insight “in the shower,” dreams that metaphorically encode technical problems, and naps/meditation are framed as high‑value nonverbal processing.

Costs and constraints of language

  • Language is seen as both helpful structure and restrictive abstraction: it compresses, constrains, and can freeze an idea prematurely.
  • Some feel a rich, multidimensional idea becomes a “crippled shadow” when verbalized, sometimes even breaking their connection to the original intuition.
  • Others feel wordless thinking risks confusing feelings with logic and makes external critique impossible until ideas are eventually expressed.

Writing, communication, and collaboration

  • Many insist that wrapping insights in words (or code, diagrams) is essential for checking rigor and sharing knowledge, even if discovery was nonverbal.
  • Writing is described as a crucial “translation layer” and trainable skill; speaking is often seen as a weaker, more lossy channel.
  • People report powerful “second‑processor” effects when collaborating with similarly wired thinkers, though some caution this may involve projection rather than true shared mental content.

Meta‑claims about thought itself

  • Some argue all real thinking is inherently nonverbal, with words as “sportscasting” layered on top; LLM‑style language manipulation is contrasted with this view.
  • Others push back on superiority claims, noting every style has blind spots: verbal thinkers ruminate; nonverbal thinkers struggle to explain.

FocusTube: A Chrome extension that hides YouTube Shorts

Tools and Techniques to Block Shorts

  • Many alternatives to FocusTube are shared: uBlock Origin filter lists, custom CSS/filters, Brave’s “YouTube Anti-Shorts” list, FreeTube, NewPipe, SmartTube, TizenTube, Unhook, UnTrap, Control Panel for YouTube, AppBlock, ScrollGuard, Leechblock, Tampermonkey/Greasyfork scripts, and router/hosts-level blocking.
  • Common strategies:
    • Hide Shorts sections and other “doomscroll” surfaces (home, sidebar, search).
    • Redirect youtube.com/shorts/VIDEOID to watch?v=VIDEOID to avoid the vertical, infinite-scroll UI (see the sketch after this list).
    • Disable watch history to kill recommendations and Shorts on web and app (though this also removes useful recs).
  • On Android, people discuss ReVanced and potential future limitations from Google’s signing requirements; some fall back to sideloading via ADB or custom ROMs.
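
  For the redirect strategy, a minimal Go sketch of the URL mapping those tools implement (the real extensions and userscripts do this in browser JavaScript; shortsToWatch is a hypothetical helper name):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// shortsToWatch rewrites a YouTube Shorts URL into the regular watch URL,
// e.g. https://www.youtube.com/shorts/abc123 -> https://www.youtube.com/watch?v=abc123.
func shortsToWatch(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	id, ok := strings.CutPrefix(u.Path, "/shorts/")
	if !ok || id == "" {
		return raw, nil // not a Shorts URL; leave it unchanged
	}
	u.Path = "/watch"
	u.RawQuery = url.Values{"v": {id}}.Encode()
	return u.String(), nil
}

func main() {
	out, _ := shortsToWatch("https://www.youtube.com/shorts/dQw4w9WgXcQ")
	fmt.Println(out) // https://www.youtube.com/watch?v=dQw4w9WgXcQ
}
```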

Sentiment on Shorts vs Long-Form Content

  • Strong hostility: “AI slop”, stolen clips, low-context, highly addictive, and “brainrot.” Many say they never intentionally click a Short.
  • Some praise: quick sketches, cooking/woodworking tips, previews that help decide if a full video is worth it, and relief from padded 10–20 minute “YouTube-optimized” videos.
  • Several note the Shorts algorithm often surfaces more relevant topics than the main feed, but they still prefer to find or watch the full-length versions.

Addiction, Time-Wasting, and Mental Health

  • Numerous comments describe Shorts/TikTok-style feeds as “digital drugs” or akin to gambling: infinite scroll plus variable reward loop leads to hours lost and post-use “hangover.”
  • Some users explicitly block Shorts because they’re too personally addictive; others say they feel no pull at all and see the panic as overblown.
  • Debate arises over whether any leisure that’s “just for fun” is hedonistic vs a legitimate use of time.

Critique of YouTube’s Design and Incentives

  • Widespread frustration that Premium users can’t opt out of Shorts or algorithmic feeds; “don’t show me this” is seen as a placebo.
  • Product decisions are framed as hostile: forced Shorts entry points, limited controls, “show fewer” instead of “never,” and AI/ads pushed everywhere.
  • Many see this as engagement-metric optimization by PMs and executives, likened to addictive industries (gambling, tobacco).

Kids, Regulation, and Broader Media Debate

  • Parents express difficulty blocking Shorts on kids’ devices; some advocate focusing on encouraging creation/other activities rather than pure technical blocking.
  • Multiple calls for legal requirements to let users opt out of algorithmic feeds or for broader regulation of attention-extractive design.
  • Others argue that panic over Shorts repeats historic moral panics about novels, TV, or writing itself; the real issue is structural incentives and modern work/life patterns, not the medium alone.