Hacker News, Distilled

AI powered summaries for selected HN discussions.

WebDAV isn't dead yet

Partial updates and upload model

  • Lack of random writes is viewed by some as a “nail in the coffin” for WebDAV.
  • Others point to existing, non-standard extensions: PATCH with custom range headers, PUT with Content-Range, and experimental drafts using PATCH+Content-Range.
  • rclone’s efforts show partial updates are possible but messy; participants want a formal, interoperable standard.
  • WebDAV’s default “single big POST/PUT blob” upload is criticized for large files and servers with request-size caps; chunked uploads are seen as an obvious missing piece.
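The PATCH+Content-Range idea mentioned above amounts to a byte-range overwrite. A minimal sketch of the server-side semantics such a draft extension implies, assuming a header of the form `Content-Range: bytes <start>-<end>/*` (the function name and handler shape are illustrative, not a standardized API):

```python
import re

def apply_byte_range_patch(path, content_range, body):
    """Overwrite bytes [start, end] of the file at `path` with `body`,
    as a PATCH + Content-Range handler for partial updates might do."""
    m = re.fullmatch(r"bytes (\d+)-(\d+)/(?:\d+|\*)", content_range)
    if not m:
        raise ValueError("unsupported Content-Range: %r" % content_range)
    start, end = int(m.group(1)), int(m.group(2))
    if end - start + 1 != len(body):
        raise ValueError("range length does not match body length")
    with open(path, "r+b") as f:
        f.seek(start)   # random write: no need to re-upload the rest of the file
        f.write(body)

# Example: patch 5 bytes at offset 6 of an 11-byte file.
with open("demo.bin", "wb") as f:
    f.write(b"hello world")
apply_byte_range_patch("demo.bin", "bytes 6-10/*", b"PATCH")
with open("demo.bin", "rb") as f:
    assert f.read() == b"hello PATCH"
```

This is exactly the capability a plain PUT lacks: replacing a few bytes of a large file without resending the whole blob.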

WebDAV vs S3, FTP, SFTP

  • Several comments argue the article conflates “S3” with the “S3 API.” Many products implement an S3-compatible API successfully; complaints about MinIO/AWS are seen as orthogonal.
  • Some dislike that the AWS S3 SDK has become a de facto web protocol and criticize S3’s authentication complexity.
  • There is strong pushback on “FTP is dead”: shared hosting, B2B file exchange, industrial systems, and healthcare workflows still rely heavily on FTP/SFTP/FTPS.
  • Security debate: some insist unencrypted FTP is unacceptable on today’s Internet; others argue that for low-sensitivity content it’s “good enough” in practice, provoking counterarguments about MITM, malware injection, and credential theft.
  • Multiple commenters feel SFTP (or SFTPGo) is usually a better fit than WebDAV for the article’s deployment scenarios.

Real-world WebDAV use cases

  • WebDAV underpins Tailscale’s Taildrive, Fastmail file storage, CopyParty shares, and various personal homelab setups.
  • It’s widely used for app sync: Devonthink, Joplin, Zotero, OmniFocus, Nextcloud/ownCloud clients, Android DAV sync, and media apps (e.g., Infuse).
  • Hardware integrations appear: document scanners uploading directly to Paperless-NGX, and NAS/phone sync where SMB/SFTP are impractical.

Performance and client quality

  • Multiple experiences of WebDAV being “painfully slow,” especially on Windows Explorer; users report confusion and random breakage.
  • Linux gio-based clients (Nautilus/Thunar) are praised as stable and responsive.
  • One implementer claims WebDAV is inherently fast—much faster than SFTP—and can outperform NFS at high throughput when properly parallelized.
  • Others wonder if newer HTTP versions (HTTP/3) would improve multi-file performance.

Spec gaps, ecosystem, and tooling

  • The spec leaves important behaviors underspecified (modification times, hashes), forcing per-server workarounds.
  • Compatibility notes (e.g., “works with Nextcloud clients”) are seen as evidence of rough edges in the standard.
  • Java library support is described as underwhelming, though long-lived projects like Sardine are cited positively.
  • Despite flaws, several implementers emphasize that adding WebDAV atop existing HTTP/TLS stacks is very low-complexity compared to other file protocols, making it an attractive “boring” choice.

OS and browser integration / alternatives

  • OS vendors are criticized for neglecting WebDAV clients since ~2010, limiting its potential as a universal network filesystem.
  • Android lacks native WebDAV mounts; third-party apps work but feel clunky.
  • Browsers’ inability to easily use PROPFIND and other methods is seen as an Achilles’ heel for WebDAV as a Google-Drive-style backend.
  • Alternatives mentioned include Syncthing for sync, SMB/NFS/SSHFS for LAN, 9p (locked to internal uses on Windows/macOS), and JMAP (with skepticism about its role for file transfer).
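The PROPFIND gap above is concrete: listing a WebDAV collection means sending a non-browser HTTP method and parsing a 207 Multi-Status XML body. A minimal stdlib sketch (the request helper is shown but not exercised here; host and paths are hypothetical):

```python
import http.client
import xml.etree.ElementTree as ET

DAV = "{DAV:}"  # WebDAV's XML namespace

def propfind(host, path, depth="1"):
    """Issue a Depth-limited PROPFIND; browsers cannot do this from fetch()."""
    conn = http.client.HTTPSConnection(host)
    conn.request("PROPFIND", path, headers={"Depth": depth})
    return conn.getresponse().read()

def parse_multistatus(xml_bytes):
    """Extract (href, is_collection) pairs from a 207 Multi-Status body."""
    root = ET.fromstring(xml_bytes)
    out = []
    for resp in root.iter(DAV + "response"):
        href = resp.find(DAV + "href").text
        is_dir = resp.find(".//" + DAV + "collection") is not None
        out.append((href, is_dir))
    return out

# Parse a canned Multi-Status response (no server needed for the demo):
sample = b"""<?xml version="1.0"?>
<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/docs/</D:href>
    <D:propstat><D:prop><D:resourcetype><D:collection/></D:resourcetype>
    </D:prop></D:propstat>
  </D:response>
  <D:response>
    <D:href>/docs/notes.txt</D:href>
    <D:propstat><D:prop><D:resourcetype/></D:prop></D:propstat>
  </D:response>
</D:multistatus>"""
assert parse_multistatus(sample) == [("/docs/", True), ("/docs/notes.txt", False)]
```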

FBI Agents Visit Anti-ICE Protester: "Your name was brought up."

Perceptions of the Administration and Authoritarian Drift

  • Several commenters frame the episode as part of a broader slide toward authoritarianism and “gleefully cruel fascism,” tying it to Trump’s rhetoric about paid protesters and “anarchists.”
  • Some argue the “turning” to soft dictatorship has already happened; others see it as still-contested but clearly accelerating.
  • Comparisons are made to Gestapo/SS, communist Poland, and Orwell’s 1984, with emphasis on how familiar these tactics feel to those who grew up under authoritarian regimes.

“Immigration Radical” and Antifa Terminology

  • Confusion and concern over the label “immigration radical”; some define it as open-borders advocacy, others say “radical” just means far from current norms.
  • Extended debate over whether “Antifa” exists:
    • One side: antifa is an adjective/ideology (anti‑fascism), not a formal organization.
    • Another side: right‑wing actors deliberately frame it as a terrorist “group” to justify broad investigations.

FBI Visit, Chilling Effect, and Rights

  • The visit is widely seen as an attempt to chill lawful protest; the fact the target skipped the protest is viewed as proof it worked.
  • Others argue a single visit can’t scale, but acknowledge the stories of such visits can produce a “Panopticon-lite” deterrent.
  • Strong, repeated advice: never talk to law enforcement (especially federal agents) without a lawyer; invoke the right to remain silent and request counsel.
  • Some push back that lawyers often just tell you to answer questions, calling the “never talk” meme overblown.

Effectiveness of Protest vs. Strikes and Other Actions

  • Sharp disagreement over peaceful protest:
    • Critics call weekend sign‑waving “morale theater” that changes nothing without strikes, leverage, or implied threat.
    • Defenders say visible, peaceful mass opposition encourages others, supports legal challenges, and absolutely matters.
  • Suggestions include strikes, contacting representatives, donating, local electoral work, and sustained organizing.

Skepticism and Verification

  • A minority questions the story’s verification (e.g., whether the agents were really FBI), calling it “hearsay” and demanding higher reporting standards.
  • Others respond that the pattern fits numerous recent events and that reflexive “fake news” claims function as bad‑faith deflection.

Meta: Hacker News Moderation and “Censorship”

  • Multiple comments note the thread being flagged, debating whether this is neutral off‑topic moderation or political suppression.
  • Some argue HN’s flagging is effectively centralized, systematically discouraging politically sensitive stories despite written guidelines.

Disable AI in Firefox

Reaction to Firefox’s AI Features

  • Many see the new AI panel and text‑selection popups as “enshittification”: clutter they don’t want, enabled by default and pushed via UI prompts.
  • Some long‑time users say this is “the last straw” and are moving to other browsers (Vivaldi, Waterfox, Mullvad Browser, qutebrowser, etc.).
  • Others think people are overreacting: the AI is local-first, you must explicitly connect it to a service, and it’s just another feature that can be ignored or turned off.

How to Disable & Limitations

  • The article’s about:config tip (browser.ml.enable = false) is welcomed; people also note needed extras:
    • browser.ml.chat.enabled = false
    • browser.ml.chat.menu = false
  • Some report context-menu AI options remain until they’re explicitly hidden via the UI.
  • Concern that Mozilla will later split/rename flags, making a single “off” switch fragile.
  • Users note additional AI-related defaults like an @perplexity search shortcut, which they manually remove.

Firefox Quality, Features & Customization

  • Mixed views: some say Firefox is still performant, standards‑compliant, with good ad‑blocking and vertical tabs; others complain about regressions and UI annoyances (sponsored tiles reappearing, limited keyboard shortcut customization).
  • A long AppleScript subthread: one camp says lack of AppleScript support is a blocker for serious automation on macOS, and Mozilla’s handling of related bugs is poor; the opposing camp argues AppleScript is niche/bad tech and not worth the maintenance burden.

Alternatives & Engine Monoculture

  • Suggestions include Brave, Waterfox, Librewolf, Mullvad Browser, Vivaldi, Orion, Ladybird.
  • Brave is controversial: some call it privacy‑centric “de‑Googled Chromium”; others see it as scammy or still dependent on Google’s engine.
  • Several emphasize that Chromium monoculture is dangerous; a truly independent engine (Gecko, future Ladybird) is important even if feature‑lagging.

Mozilla’s Strategy, Money & Management

  • Repeated criticism that Mozilla chases gimmicks (AI, Pocket, VPNs) instead of strengthening the core browser.
  • Financial context: heavy dependence on Google search deals; some argue this pressures Mozilla to boost engagement and accept questionable defaults.
  • CEO compensation is cited as disproportionate to Mozilla’s precarious state.
  • Some defend Mozilla: modern browser development is enormously complex and expensive; it’s “admirable” Firefox exists at all.
  • Debate over forking Firefox:
    • Skeptics note you can fork code but not funding, update channels, or engineers.
    • Supporters argue forks like Waterfox/Librewolf show Mozilla’s bad decisions can be undone; propose user‑driven bounty systems for features, though others doubt such models scale.

Attitudes Toward AI in Software Generally

  • Many are tired of AI being injected everywhere (e.g., Acrobat’s AI summarizing sheet music, OS/browser popups), and want simple, non‑AI tools.
  • Some are fine with small, on‑device models for tasks like translation, tab grouping, or accessibility (PDF alt‑text), provided nothing is sent to servers.
  • There’s skepticism that AI can reliably detect or correct bias; generating manipulation is seen as easier than detecting it.
  • A minority enjoys AI in Firefox, sees it as useful and unobtrusive, and views the backlash as anti‑AI hysteria.

Why I code as a CTO

Ambiguity of the CTO Role and Title

  • Many argue “CTO” has no consistent meaning: it can mean technical cofounder, VP of Engineering in disguise, product-facing “sales CTO,” or pure honorific.
  • Some see this case as really a “founding/staff engineer with a fancy title,” especially given zero direct reports.
  • Others note that in small companies it’s normal for CTOs to be hands-on and that titles at that stage are mostly signaling for the outside world.

Scale and Stage: When Should a CTO Stop Coding?

  • Several commenters frame it as a scale question: at 5–20 people, a coding CTO is normal; at ~100–250 it’s debatable; at 500+ it’s usually impossible and undesirable.
  • A recurring view: a good CTO must repeatedly change their role as the org grows, shifting from coding to hiring, direction, and cross‑functional leadership.

Critiques of a Coding, No-Reports CTO

  • Strong skepticism that a CTO who writes “substantial features” has time for core CTO responsibilities: strategy, org design, prioritization, and removing blockers.
  • Concern that if only a “handful” of people (including the CTO) can ship major features, that’s a structural problem the CTO should fix, not personally paper over.
  • Many call out process and culture issues: bypassing or outpacing normal product/legal/eng workflows, weekend/holiday coding as a bad cultural signal, and risk of hero/“cowboy” development.
  • Some note power dynamics: people won’t do honest code review or push back on a C‑level’s code, so contributions can be low‑quality or unmaintainable yet unchallenged.

Arguments in Favor of Hands-on CTOs

  • Supporters value a CTO who understands the codebase and current tools deeply, especially in small startups or hard‑tech contexts.
  • Coding is seen as a way to:
    • Prototype risky ideas, clarify architectural direction, and de‑risk bets.
    • Maintain technical credibility and avoid becoming a pure “slideware” executive.
    • Build internal tooling or refactors that no one else has time for.

Org Design, Empowerment, and Alternatives

  • A common “best of both” model: strong VP Eng (or similar) runs people and process; CTO focuses on technology vision, prototyping, and external evangelism.
  • Critics emphasize leverage: a CTO’s highest value is often enabling others—partnering with senior ICs, delegating ownership, and fixing constraints so teams can do the kind of work the CTO is currently doing alone.
  • Thread repeatedly highlights title inflation, misaligned expectations in hiring, and the risk of confusing “what you enjoy” with “what the role should be.”

"ChatGPT said this" Is Lazy

Expectations in conversation and advice

  • Many commenters dislike replies framed as “I asked ChatGPT and it says…”, especially to personal questions or code reviews.
  • The complaint: they already know LLMs exist; they’re asking for your judgment, context, and experience, not for you to act as a dumb terminal.
  • It’s compared to “let me Google that for you”: sometimes meant to shame laziness, but often just feels dismissive or spammy.

Disclosure, responsibility, and citations

  • Some view “I asked ChatGPT…” as a useful disclosure that lets readers discount or ignore the content.
  • Others see it as responsibility‑dodging: signaling “if it’s wrong, blame the AI.”
  • Fear: backlash against explicit disclosure will just drive people to hide AI use.
  • Another camp says tools need not be named if you’ve verified the content and take ownership, similar to using Google/Wikipedia but summarizing in your own words.

Quality, laziness, and epistemic issues

  • Strong view: LLMs are optimized for plausible language, not truth, so unfiltered outputs are “bullshit” — confident but unreliable.
  • Dumping long AI answers offloads cognitive work onto the reader and pollutes discussions with cheap, low-effort text.
  • Wikipedia/Google analogies split: some say it’s all just fallible sources requiring verification; others say LLM hallucinations make them categorically worse.
  • A minority sees value in LLMs as “median opinion polls” or brainstorming tools rather than authorities.

Impact on engineering and code review

  • Multiple stories of PRs, specs, and comments filled with obvious LLM text: generic summaries, irrelevant requirements, pointless changes with leftover AI remarks.
  • Reviewers resent being the first human to actually read and reason about “your” code.
  • Proposed norms: AI-assisted suggestions are fine, but reviewers must (a) filter them, (b) explain tradeoffs, and (c) stand behind concrete recommendations.
  • Some teams run automated AI code reviews and find them genuinely helpful for spotting issues in asymmetrical review situations.

Ethics, culture, and polarization

  • Hardline critics refuse AI for engineering at all, arguing it weakens thinking, resembles plagiarism, and is built on unethical data scraping; they liken widespread use to “mental obesity.”
  • Others see this as technophobic or dogmatic, emphasizing that critical, curious users can leverage LLMs to learn faster and tackle more ambitious work.
  • Broad agreement on one norm: using AI privately as a tool is fine; pasting unvetted output as your contribution is not.

Code like a surgeon

Surgeon Analogy & Professionalism

  • Several commenters reject “code like a surgeon” as grandiose, given surgeons’ long, structured training and strong professional regulation versus typical software careers.
  • Others note the analogy is meant to highlight focus and leverage, not literal equivalence, and remind that analogies are partial, not one-to-one comparisons.
  • Some argue the article misunderstands surgery: surgeons are managers of a complex team, anesthesiologists often hold ultimate go/no‑go responsibility, and all “support” tasks are critical, not mere grunt work.

Programmers vs Doctors/Engineers

  • One thread claims programmers are more inventive than most engineers and that deaths from medical error dwarf those directly caused by software, suggesting doctors shouldn’t be on a pedestal.
  • Others counter that IT’s lower casualty count mostly reflects lower direct coupling to life-and-death systems, not superior skill or rigor.
  • There’s interest in indirect harms from software (e.g., delays, inefficiencies) that are hard to measure.

AI Coding Tools and Agents

  • Experiences are sharply split.
    • Enthusiasts describe “coding agents” (e.g., Claude Code) as transformative, especially in auto-approve mode: they scaffold features, run tests, and iterate while the human focuses on design and decisions.
    • Skeptics report agents getting stuck, producing “cake rockets” that look plausible but fail under scrutiny, forcing exhaustive re‑validation and negating productivity gains.
  • A widely appreciated use case is analysis rather than generation: scanning large codebases for risky queries, debugging hints, or likely pain points.

Brooks, Chief Programmer, and Process Models

  • Multiple comments tie the article back to The Mythical Man‑Month and the “surgical team” / Chief Programmer model.
  • Some feel LLMs revive older, spec‑heavy, architect‑driven styles by making detailed implementation more delegable.
  • Others warn that in serious “skyscraper‑scale” systems, you can’t safely gloss over details; they’re foundational, just as bolt and steel choices are in real engineering.

Codebase Design for AI Assistance

  • Suggested enablers: rich automated tests, clear commands to run them, linters, type checkers, and concise agent-oriented docs (e.g., AGENTS.md).
  • Documentation can also mislead agents when it drifts out of sync with code; many argue agents read code faster than they read prose.

Roles, Status, and Juniors

  • Some readers are uneasy with talk of “lower-status” team members and “grunt work,” seeing it as status-laden and egocentric.
  • Others stress that tasks are experienced differently by seniors vs juniors; what’s grunt work for one can be valuable growth for the other, especially with mentoring.

Alternative Metaphors & Tone

  • Alternative metaphors include sous-chefs, painters with workshops, or surgeons working on legacy enterprise “patients” nobody fully understands.
  • The thread also contains substantial humor (sturgeon puns, song parodies), underlining both skepticism and anxiety about AI-assisted “surgery” on code.

Automatically Translating C to Rust

Autotranslating C to Rust: Value and Limitations

  • Many report that C→Rust tools (e.g., c2rust) produce Rust that’s “compiler output”: heavy on unsafe, hard to read, and semantically still “C that crashes the same way.”
  • Others counter that such tools are still useful as a bootstrap: get a whole C codebase building as Rust, then gradually refactor toward safe, idiomatic Rust.
  • There are real-world successes (e.g., translating bzip2), but even those often retain 100+ unsafe uses and are far from fully safe.

Fil-C, GC, and Rust’s Niche

  • Fil-C is highlighted as making C “memory safe” via a smart compiler + GC, sometimes outperforming naïve .clone()-heavy Rust-style code.
  • However, performance overhead (up to ~4× in some cases) and lack of data-race prevention mean it doesn’t solve Rust’s problem set, especially around concurrency.
  • Suggested division of labor: Fil-C for running legacy/unported C; automatic C→Rust for starting a port; hypothetical “Fil-Rust” to sandbox unsafe Rust during migration.

Incremental Migration and FFI

  • Some argue auto-translating to unsafe Rust is pointless; you still need deep understanding and major redesign to reach safe, idiomatic Rust.
  • Others say incremental, function-by-function migration is possible using unsafe wrappers or less idiomatic abstractions (Cell, RefCell, etc.), with most benefits arriving near the end.
  • A competing view: what’s really needed is “painless FFI” and tools that let Rust call C using slices and safe types rather than rewriting everything.

Hard Technical Problems: Arrays, Aliasing, and Provenance

  • A key unsolved challenge is inferring array sizes and bounds globally so C pointers can be turned into Rust slices/Vecs; this is tied to ongoing work (e.g., DARPA TRACTOR).
  • Discussion dives into strict aliasing in C vs Rust’s model:
    • Rust lacks C-style strict aliasing but has validity rules (trap representations) and evolving notions of pointer provenance.
    • Type punning that is UB in C due to effective types may be allowed in Rust at the aliasing level but can still be UB via invalid value representations (e.g., punning into bool).

Rust Popularity, LLMs, and Future Rewrites

  • Some speculate about a future where Rust falls out of favor and people want Rust→C translators; others see Rust as having reached “critical mass” for secure, performant systems code.
  • Debate over LLMs:
    • Claims that “Rust is the winner of the LLM era” clash with reports that current models struggle with lifetimes and complex Rust, requiring significant human correction.
    • Separate thread: GitHub’s vulnerability graph allegedly spikes post-LLMs, suggesting a growing class of simple, non-memory-safety bugs.

Idiomatic vs Safe Rust

  • Several commenters distinguish “idiomatic” from merely “safe”:
    • It may be feasible to auto-generate non-idiomatic but safe Rust (correct lifetimes, Box/slices) for simpler C code.
    • Truly idiomatic Rust requires recognizing higher-level patterns (data structures, ownership models) that C couldn’t abstract; this is seen as closer to a creative or AGI-level task.

Rust Coreutils Size Concern

  • A side discussion notes a seemingly huge /usr/bin/ls after switching to Rust coreutils; clarified that:
    • Rust coreutils are shipped as one ~12–13 MB binary hardlinked under many names.
    • Overall size increase over GNU coreutils is modest and dwarfed by the rest of /usr/bin.
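The hardlink point above is easy to verify: many names, one inode, so the multi-megabyte payload is stored once. A small stdlib sketch (file names and the 1 KiB payload are stand-ins):

```python
import os
import tempfile

d = tempfile.mkdtemp()
multicall = os.path.join(d, "coreutils")   # stand-in for the multi-call binary
with open(multicall, "wb") as f:
    f.write(b"\x00" * 1024)                # pretend this is the ~12 MB binary

# Ship `ls` and `cp` as hard links, the way Rust coreutils packages do.
for name in ("ls", "cp"):
    os.link(multicall, os.path.join(d, name))

st = os.stat(os.path.join(d, "ls"))
assert st.st_nlink == 3                                     # coreutils + ls + cp
assert st.st_ino == os.stat(os.path.join(d, "cp")).st_ino   # same inode
# `du` counts the payload once, even though each name "is" 1 KiB.
```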

Unlocking free WiFi on British Airways

Technical Approaches to Bypassing Paywalled WiFi

  • Discussion centers on exploiting “free messaging” tiers by:
    • Spoofing SNI to look like permitted apps (e.g., WhatsApp) while tunneling arbitrary HTTPS through a proxy.
    • Using domain fronting–style techniques, where the visible hostname differs from the true backend.
    • Running VPNs over unusual ports (notably UDP 53) and DNS-tunneling tools like iodine to smuggle traffic in TXT/subdomain payloads.
    • Using pluggable transports (e.g., Lyrebird, Xray) that hide proxy traffic behind seemingly legitimate TLS handshakes to allowed domains.
  • Several commenters report success with WireGuard/OpenVPN on nonstandard ports or over DNS, but also note that many modern captive portals now block everything except specific IPs/hosts.
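The iodine-style trick works because captive portals usually forward DNS. A toy sketch of the upstream encoding only, packing bytes into base32 subdomain labels under a tunnel zone (the zone name is hypothetical; real tools add sequencing, framing, and a TXT-based downstream channel):

```python
import base64

ZONE = "t.example.com"   # hypothetical tunnel domain whose NS you control

def encode_query(payload):
    """Pack payload bytes into DNS labels: base32, <= 63 chars per label."""
    b32 = base64.b32encode(payload).decode().rstrip("=").lower()
    labels = [b32[i:i + 63] for i in range(0, len(b32), 63)]
    name = ".".join(labels + [ZONE])
    assert len(name) <= 253, "DNS names are capped at 253 octets"
    return name

def decode_query(name):
    """Reverse the encoding on the authoritative server side."""
    b32 = "".join(name[: -len(ZONE) - 1].split(".")).upper()
    b32 += "=" * (-len(b32) % 8)           # restore base32 padding
    return base64.b32decode(b32)

q = encode_query(b"GET /index.html")
assert decode_query(q) == b"GET /index.html"
```

Each round trip moves only a few hundred bytes, which is why DNS tunnels are slow but hard to block without breaking DNS itself.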

How Airlines and Cruises Enforce Restrictions

  • Many providers inspect TLS ClientHello:
    • Basic setups only check SNI against a whitelist (e.g., airline site, messaging apps, visa sites).
    • More advanced firewalls (e.g., Fortinet-style) verify that the certificate CN/SAN and CA match the SNI.
  • Some systems allow a few initial packets of any TCP flow, then classify and reset connections if not whitelisted.
  • “Free messaging” often also whitelists push-notification services so onboard apps can receive messages.
  • There’s debate on whether IP whitelisting is feasible:
    • Hard in general due to CDNs and changing IPs.
    • Easier when platforms cooperate and publish ranges or provide zero-rating integrations.
  • Cruise lines and airlines sometimes block the websites of known circumvention tools and may ban travel routers or personal satellite gear.
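The SNI checks described above are possible because the server_name extension travels in plaintext in the ClientHello. A minimal extractor, enough for the "basic setups" case (real firewalls also parse the server certificate); the demo record is synthetic:

```python
import struct

def extract_sni(record):
    """Read the server_name extension out of a raw TLS ClientHello record."""
    if record[0] != 0x16 or record[5] != 0x01:   # handshake / ClientHello
        return None
    i = 9 + 2 + 32                # skip handshake header, client_version, random
    i += 1 + record[i]            # session_id
    (n,) = struct.unpack_from("!H", record, i); i += 2 + n   # cipher_suites
    i += 1 + record[i]            # compression_methods
    (ext_total,) = struct.unpack_from("!H", record, i); i += 2
    end = i + ext_total
    while i + 4 <= end:
        etype, elen = struct.unpack_from("!HH", record, i); i += 4
        if etype == 0:            # server_name extension
            # list length (2) + name_type (1) + name length (2) + hostname
            (nlen,) = struct.unpack_from("!H", record, i + 3)
            return record[i + 5 : i + 5 + nlen].decode("ascii")
        i += elen
    return None

# Build a synthetic ClientHello claiming to be a messaging app:
host = b"web.whatsapp.com"
sni_name = b"\x00" + struct.pack("!H", len(host)) + host
sni_list = struct.pack("!H", len(sni_name)) + sni_name
ext = struct.pack("!HH", 0, len(sni_list)) + sni_list
body = (b"\x03\x03" + b"\x00" * 32          # client_version + random
        + b"\x00"                           # empty session_id
        + b"\x00\x02\x13\x01"               # one cipher suite
        + b"\x01\x00"                       # null compression
        + struct.pack("!H", len(ext)) + ext)
hello = (b"\x16\x03\x01" + struct.pack("!H", len(body) + 4)
         + b"\x01" + len(body).to_bytes(3, "big") + body)
assert extract_sni(hello) == "web.whatsapp.com"
```

A portal that checks only this field will happily pass any tunnel whose ClientHello names a whitelisted host, which is exactly the spoofing the thread describes.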

Broader Protocol and Censorship Context

  • SNI is criticized for enabling easy traffic classification and censorship; its historical role in enabling HTTPS virtual hosting is noted.
  • Encrypted ClientHello (ECH) is mentioned as a future obstacle to SNI-based filtering and “free messaging” offers.
  • These techniques are also linked to evading national-level censorship (e.g., Tor transports, Great Firewall–style probing).

Ethics, Legality, and Risk

  • Ethical views split:
    • Some see this as theft of service and unnecessary for well-paid professionals.
    • Others view it as harmless use of spare capacity and praise the educational value.
  • Legal risk on aircraft is highlighted:
    • Concern about broad interpretations (e.g., “tampering with aircraft systems”) and possible severe consequences, even if actual safety impact is unclear.
  • A few commenters emphasize that the annoyance or danger of legal trouble far outweighs saving a modest WiFi fee.

User Experience, Capacity, and Business Models

  • Multiple anecdotes from flights and cruises:
    • Pricing (e.g., ~$50/day on cruises) seen as excessive, especially when performance can be poor.
    • Others report very usable Starlink-backed service, suggesting variability by ship/installation.
  • Some argue bandwidth is now sufficient (Starlink, specialized LTE backhaul), so strict gating is mainly revenue-driven.
  • Counterpoint: providers must still limit access to keep shared links workable.

Security Culture and Pen-Testing

  • BA’s overall security posture is critiqued, with references to past web compromises.
  • Pen-tests are described as useful for regression detection but insufficient as a sole security strategy; organizations often over-rely on them instead of listening to internal engineers.

Miscellaneous

  • Some readers enjoy being forced offline and worry about more ubiquitous inflight connectivity.
  • Accessibility point: this case is cited as exactly why proper alt attributes for images matter—when images can’t load, content should remain understandable.

First convex polyhedron found that can't pass through itself

Clarifying the Rupert Property and Problem Scope

  • Discussion centers on the “Rupert property”: one copy of a convex polyhedron can pass straight through a hole in another congruent copy, leaving nonzero material (“not cutting it in half”).
  • In practice, this is phrased as: does there exist an orientation where one 2D projection (“shadow”) of the shape fits strictly inside another projection of the same shape?
  • Commenters stress that equality of shadows is trivial and uninteresting; a strict margin is required.
  • The result concerns convex polyhedra only; several people note the article’s “shape” title is misleading without that qualifier.
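The "strictly inside" condition above can be made concrete: project the vertices to 2D, take convex hulls, and require every vertex of one shadow to lie strictly inside the other. A sketch using projection along z and the monotone-chain hull (the cube is just a convenient test shape, not a claim about any particular orientation):

```python
def cross(o, a, b):
    """2D cross product of OA x OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull(points):
    """Andrew's monotone chain convex hull, counter-clockwise."""
    pts = sorted(set(points))
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def strictly_inside(inner, outer):
    """Rupert-style test: every inner vertex strictly inside the outer hull."""
    h = hull(outer)
    return all(
        all(cross(h[i], h[(i + 1) % len(h)], p) > 0 for i in range(len(h)))
        for p in inner)

# Shadow of the unit cube along z is its four bottom-face corners:
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
shadow = [(x, y) for x, y, z in cube]

# Equal shadows never count (no strict margin)...
assert not strictly_inside(shadow, shadow)
# ...but a slightly shrunk copy fits, which is what "Rupert" demands of two
# *different* orientations of the same solid.
small = [(0.5 + 0.9 * (x - 0.5), 0.5 + 0.9 * (y - 0.5)) for x, y in shadow]
assert strictly_inside(small, shadow)
```

The hard part of the actual proof is the quantifier: showing that *no* pair of orientations passes this test, which is why the search has to prune whole regions of orientation space.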

Spheres, Limits, and Nonconvex Shapes

  • Many initially point to a sphere (and donuts, cylinders) as obvious shapes that can’t pass through themselves.
  • Others counter: spheres and tori are not convex polyhedra, so they were never part of the conjecture.
  • Attempts to treat a sphere as a “limit” of increasingly fine polyhedra are rejected: limiting behavior is subtle, the limit object is no longer a polyhedron, and properties like Rupertness need not carry over.
  • Nonconvex examples (donut, T-tetromino) are easy noperts, reinforcing why convexity is central.

Computation and Search Strategy

  • The core difficulty is ruling out all orientations for a candidate polyhedron; brute force is impossible.
  • The proof strategy uses projections and parameter-space pruning: if a protruding “shadow” requires large rotations to fix, whole regions of orientations can be discarded.
  • More faces and symmetry make checking Rupertness harder; earlier work (e.g., triakis tetrahedron) already revealed extremely tight fits.
  • The computational part is implemented in SageMath and shared openly; some plan to 3D-print the resulting Noperthedron from provided STL files.

Rotation and Motion

  • Several ask whether twisting or helical motion (sofa-around-a-corner style) could allow passage where straight motion cannot.
  • Replies note the standard Rupert problem assumes straight-line passage and, for convex shapes, rotation during transit likely doesn’t fundamentally change the feasibility condition defined via shadows.

Communication, Naming, and Audience

  • Multiple comments criticize the title’s looseness (“shape” vs “convex polyhedron”) but praise the article’s level of detail as accessible yet substantial.
  • Debate arises over whether Quanta targets laypeople or a technically inclined audience, and whether its headlines verge on clickbait.
  • The coined name “Noperthedron” triggers a deep side thread on how portmanteaus work in English (and even comparisons to Mandarin), illustrating the community’s fondness for linguistic as well as mathematical play.

Value and Funding of Pure Math

  • Some question why such problems are studied at all; others defend curiosity-driven math as legitimate and historically fruitful, with applications often emerging decades later.
  • There’s discussion about who pays for such work (sometimes hobbyists, sometimes institutions), and analogies to past “useless” mathematics that later underpinned computer graphics and logic.

Broader Context and Cultural Reactions

  • Commenters connect this result to a recent popular video exploring Rupert/nopert problems and attempts to show familiar solids (e.g., snub cube) are non-Rupert.
  • There’s enthusiasm for the aesthetics of the shape, suggestions to include it (and other recent mathematical curiosities) on future space probes, and general appreciation for the whimsy, history, and bet-driven origins of the problem.

Asahi Linux Still Working on Apple M3 Support, M1n1 Bootloader Going Rust

Asahi’s Pace vs Apple’s Chip Cadence

  • Some see supporting each new M‑series chip as a Sisyphean task; others (including a contributor) say most non‑GPU/NPU interfaces evolve incrementally, so once a base of drivers exists, a small team can keep up.
  • Many note that even an M1 remains very capable for years, so lagging behind the latest hardware is acceptable, especially for Linux users who often prefer older/used machines.

Openness, Secure Boot, and General‑Purpose Computing

  • Several worry that Macs are a last bastion of general‑purpose computing as platforms drift toward locked‑down, signed‑only ecosystems.
  • Apple deliberately allowed other OSes to boot on Apple Silicon Macs, unlike iOS/iPadOS, but people fear this could be revoked in future generations.

User Experience: What Works Well, What Doesn’t

  • Many report Asahi on M1/M2 as remarkably polished: smooth installation, good daily usability, strong performance, even working 3D gaming for some.
  • Key missing pieces remain: no Thunderbolt/DP Alt‑Mode on some models, no reliable suspend‑to‑RAM or hibernate, and notable sleep battery drain. These keep some users on macOS as primary OS, with Asahi only for specific tasks.

Bare‑Metal Linux vs Virtualization

  • Several insist VMs/containers (Docker, Orbstack, UTM, Apple’s container project) can’t replace bare‑metal Linux for things like Wi‑Fi promiscuous mode, low‑level debugging, or obscure kernel features.
  • Others argue macOS + a well‑integrated Linux VM is more pragmatic than fighting incomplete hardware support.

Mac Hardware vs Linux‑Native Laptops

  • Strong divide: some claim no PC vendor matches MacBook build quality, battery life, and reliability; others point to ThinkPads, Framework, and Linux‑preloaded OEMs as “good enough” or ethically preferable, despite worse battery life.
  • Cost, repairability, and upgradability (soldered RAM/SSD vs modular designs) are major axes of disagreement.

Project Health and Strategy

  • Commenters note key reverse‑engineering figures leaving and worry Asahi is “on life support.”
  • Others counter that the current focus is upstreaming and maintenance; GPU for M3+ is hard because Apple changed the instruction set, but core platform support continues.

Apple’s Incentives and Documentation

  • Many argue Apple has little financial reason to fund Linux drivers: profit comes from ecosystem lock‑in, not from selling Macs to Linux users.
  • Apple is seen as “hands‑off but not hostile”: they neither document nor actively block Linux, which forces Asahi to continue its reverse‑engineering approach.

A sharded DuckDB on 63 nodes runs 1T row aggregation challenge in 5 sec

Sharded / Distributed Query Engines

  • Question about open-source sharded planners over DuckDB/SQLite led to mentions of Apache DataFusion Ballista and DeepSeek’s “smallpond” as comparable approaches.
  • GizmoEdge itself is not open source; the author intends to mature it into a product. Smallpond is cited as an OSS alternative for similar distributed DuckDB-style workloads.
  • Other systems suggested as “already built for this”: Trino, ClickHouse, Spark, BigQuery, and Redshift; some see GizmoEdge as re-implementing a familiar MapReduce-style pattern (worker SQL + combiner SQL).

Hardware Scale, Cost, and Practicality

  • The cluster used 63 Azure E64pds v6 nodes (64 vCPUs, ~500 GiB RAM each), totaling ~4,000 vCPUs and ~30 TiB RAM.
  • Multiple commenters argue this is “overpowered” and question whether it’s cheaper than Snowflake/BigQuery.
  • Rough cost math in the thread: about $236/hour on-demand (~$0.33 for a 5-second query) vs a single Snowflake 4XL at ~$384/hour, but critics note this ignores cluster setup, engineering, and always-on costs.
  • A single-node DuckDB setup by the same author reportedly did the challenge in ~2 minutes for about $0.10, raising questions about at what point scale-out actually pays off.
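
The per-query figure follows directly from the hourly rate; a quick sanity check of the thread's cost math (all dollar figures are the thread's estimates, not verified pricing):

```python
# Back-of-envelope check of the cost math quoted in the thread.
hourly_rate = 236.0          # USD/hour on-demand for the 63-node cluster (thread estimate)
query_seconds = 5.0

cost_per_query = hourly_rate / 3600 * query_seconds
print(f"${cost_per_query:.2f} per 5-second query")   # ≈ $0.33, matching the thread

# Compare against the single-node DuckDB run cited above:
single_node_cost = 0.10      # USD for the ~2-minute single-node run
print(f"cluster premium per run: {cost_per_query / single_node_cost:.1f}x")
```

The comparison of course ignores the always-on cost of keeping the cluster warm, which is the critics' main point.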

Challenge Methodology & Fairness

  • Key caveat: the 5-second time excludes loading/materializing data. Workers spend 1–2 minutes downloading Parquet from cloud storage and converting to DuckDB files on local NVMe first.
  • Some argue this violates the spirit of the “One Trillion Row Challenge,” which they interpret as timing from raw files to result; pre-materializing and then measuring only query latency is called “cheating” or at least misleading.
  • Others request explicit cold-vs-hot-cache benchmarks and clearer disclosure; filesystem caching and lack of cache dropping may affect comparability.

Architecture & Implementation Choices

  • Each node ran ~16 worker pods (3.8 vCPU, 30 GiB RAM) due to Kubernetes overhead and cloud quota; the author admits shard sizing is heuristic, not fully optimized.
  • Workers execute DuckDB queries locally and stream Arrow IPC results back to a central server via WebSockets. The server merges partial results.
  • A long subthread debates WebSockets vs raw TCP/UDP:
    • Pro-WebSocket arguments: easy framing, TLS termination, existing libraries, multiplexing via HTTP routing.
    • Skeptical views: extra protocol complexity (HTTP upgrade parsing, the handshake’s SHA-1 key hashing) for minimal benefit in a non-browser context; alternatives like raw sockets, ZeroMQ, or Arrow Flight are mentioned.
  • Filesystem choice (ext4 vs XFS) and Linux page cache behavior are raised as potentially material to performance; reproducibility concerns are noted.
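
The worker/central-merge pattern described above can be sketched in a few lines. This is illustrative only, not GizmoEdge's actual code or wire format (which runs DuckDB per worker and streams Arrow IPC over WebSockets); the key property is that sums and counts combine associatively, so an exact AVG can be assembled from per-shard partials:

```python
# Illustrative sketch of the worker-SQL / combiner pattern; all names are made up.

def worker_partial(rows):
    """Per-shard partial aggregate: (sum, count) per key."""
    acc = {}
    for key, value in rows:
        s, c = acc.get(key, (0, 0))
        acc[key] = (s + value, c + 1)
    return acc

def merge_partials(partials):
    """Central merge: sums and counts add across shards, so AVG is
    computed exactly from the merged partials."""
    merged = {}
    for part in partials:
        for key, (s, c) in part.items():
            ms, mc = merged.get(key, (0, 0))
            merged[key] = (ms + s, mc + c)
    return {key: s / c for key, (s, c) in merged.items()}

# Two "shards" of the same logical table:
shard_a = [("x", 1.0), ("x", 3.0), ("y", 10.0)]
shard_b = [("x", 2.0), ("y", 20.0)]
print(merge_partials([worker_partial(shard_a), worker_partial(shard_b)]))
# {'x': 2.0, 'y': 15.0}
```

This associativity is also why commenters call the design a familiar MapReduce-style pattern rather than a new distributed-join engine.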

OLAP vs OLTP and Other Databases

  • Several comments contrast DuckDB (columnar, OLAP) with OLTP systems like MSSQL, explaining why analytical aggregations can be orders of magnitude faster on OLAP engines.
  • DuckDB’s “OLAP-ness” is briefly questioned due to writer blocking readers, but others clarify “online” refers to interactive analytics, not realtime streaming.
  • ClickHouse is cited as a market leader in real-time analytics, though some note it still favors throughput over ultra-low-latency ingestion.
  • DuckLake is described as solving upserts over data lakes; some confusion remains about what it adds beyond reading Parquet directly.

Use Cases, Robustness, and Skepticism

  • One commenter worries that DuckDB’s strength is single-node, one-off analytics and that bolting it into a persistent Kubernetes cluster sidesteps hard problems (fault tolerance, re-planning on failure, multi-query resource management, distributed joins).
  • Others see the experiment as a “fun demo” and a proof-of-possibility for edge/observability scenarios, but not yet production-grade.
  • A notable criticism is that sustaining this performance implies keeping 30 TiB of RAM and 4,000 vCPUs warm, which many organizations would balk at paying for continuously.

Miscellaneous Technical Points

  • COUNT DISTINCT at scale is discussed: approximate HLL-based sketches vs exact bitmap-based methods, with mention of a DuckDB extension.
  • Some joking asides: Tableau generating huge queries, quantum-computing hype, and sortbenchmark.org’s insistence on including I/O in benchmarks.
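
The COUNT DISTINCT point is worth unpacking: unlike sums, per-shard distinct counts don't add up, so workers must ship either full value sets (exact) or a mergeable sketch such as HyperLogLog (approximate). A minimal demonstration of why:

```python
# Why COUNT DISTINCT resists naive distribution: the same value can
# appear on multiple shards, so per-shard counts double-count.

shard_values = [
    [1, 2, 3, 3],      # shard A: 3 distinct values
    [3, 4, 4, 5],      # shard B: 3 distinct values
]

naive = sum(len(set(s)) for s in shard_values)        # 6 — wrong: 3 counted twice
exact = len(set().union(*map(set, shard_values)))     # 5 — correct, but ships full sets

print(naive, exact)
# HLL sketches merge the same way (element-wise max of registers),
# trading exactness for constant memory per shard.
```
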

Typst 0.14

Role of Typst vs Other Tools

  • Commenters stress that Typst is a typesetter and LaTeX competitor, not a converter like Pandoc.
  • Pandoc is framed as a powerful but different tool: it converts between markup formats and calls external typesetters.
  • Compared with LaTeX, Typst is praised for a cleaner language, single-pass compilation, easier styling, integrated scripting, and a self‑contained binary rather than gigabyte distributions.
  • Compared with Markdown/Asciidoc/Org, Typst is seen as better for complex documents (contracts, specs, books) while still feeling lightweight.

New 0.14 Features & PDF Handling

  • Native PDF-as-image support is widely celebrated as removing a major blocker to leaving LaTeX.
  • The new Rust PDF engine (hayro) impresses people with speed, portability, and standalone reuse; large PDFs render almost instantly.
  • Character‑level justification and early microtypography work are viewed as a big quality upgrade.
  • PDF/UA‑1 export and accessibility checks are praised; some note LaTeX now has tagging too but with more complexity and gaps in package support.

Ecosystem, Tooling, and Business Model

  • Core compiler/CLI is open source; the web editor is proprietary. Many use only the CLI plus TinyMist language server in VS Code and other IDEs.
  • The open‑core model and relatively generous pricing are generally viewed positively, with some caution that many OSS companies change later.
  • Typst’s built‑in package manager and growing ecosystem (slides packages like Touying/Slydst, drawing via cetz, indexing with in‑dexter, games, Tufte‑style templates) are highlighted.

Use Cases and Strengths

  • Users report successfully replacing LaTeX, PowerPoint/Marp, Markdown+Pandoc, and Asciidoc for: theses, books, lecture slides, posters, invoices, CVs, specs, and e‑reader article conversion.
  • Fast incremental compilation, clear diagnostics, Unicode support, and simpler layout/footers are recurring themes.
  • Single‑binary deployment makes it attractive for embedding in Rust/Go services to generate PDFs on the fly.

Limitations, Academic Adoption, and Missing Features

  • Major blockers: lack of official support from journals and arXiv, weaker collaborative web experience vs Overleaf, and incomplete parity with LaTeX’s Beamer and TikZ (though Touying/cetz narrow the gap).
  • Other issues mentioned: locale‑aware decimal formatting, citation-style glitches, video/animation in slides, indexing depth, and still‑incomplete accessibility (tables).
  • Backwards‑compatibility policy is seen as unclear; some expect breaking changes until 1.0.

LLMs and Learning Curve

  • Experiences with LLMs generating Typst are mixed: some find them very helpful for templates and snippets, others report constant syntax errors and hallucinations.
  • Regardless, documentation quality and language simplicity make Typst approachable compared with LaTeX.

Poker fraud used X-ray tables, high-tech glasses and NBA players

NBA, Gambling, and Fan Alienation

  • Several commenters say this story is “the last straw” in their relationship with the NBA, tying it to:
    • The league’s aggressive embrace of gambling and constant betting ads.
    • Long, foul-heavy games, load management, tanking, and a very long season.
    • Fragmented TV rights that require multiple subscriptions.
  • Some argue gambling promotion is like cigarette advertising: socially harmful and especially predatory toward kids, with language like “fun” and “play” normalizing addictive behavior.
  • Others note smoking-prevention campaigns and regulation worked only as part of a broader mix (taxes, public bans, de-normalization) and worry similar tools are being abandoned for gambling.

Should the State Police Cheating in Illegal Gambling?

  • One camp calls this a waste of resources: gambling is socially harmful by default, so “fair” vs “unfair” games shouldn’t matter, and policing cheaters could even lend illegal games unwarranted credibility.
  • Opponents counter that:
    • This was organized crime, not a kitchen-table game: fraud, extortion, and money laundering are squarely in law enforcement’s remit.
    • $7m in cash/crypto is far more valuable to crime families than equivalent taxed, traceable business revenue.
    • If police don’t intervene, victims may resort to violence.

Cheating Tech: “X-Ray” Tables, Shufflers, and Marked Cards

  • Multiple commenters doubt literal X-rays; the consensus is:
    • Likely IR or similar wavelengths through an IR-transparent tabletop, misbranded as “X-ray” by media or prosecutors.
    • Rigged shufflers that read deck order (often via barcode-like marks on edges) and either:
      • Re-stack decks algorithmically, or
      • Are swapped with pre-arranged decks.
  • Marked “reader” cards plus special glasses/contacts are described as relatively old-school; many note there are simpler, low-tech ways to cheat once you control the environment.
  • Broader point: there are so many cheating methods that playing in private games with strangers is inherently risky.

Economics and Purpose of the Scam

  • Some think $7m over years, split among ~30 people and multiple families, is barely worth the risk and effort.
  • Others suggest:
    • That figure is likely a floor, not the full take.
    • The real leverage may be blackmail and sports betting/fixing tied to indebted NBA figures.
    • The thrill, access to celebrities, and untraceable cash can matter as much as pure ROI.

Poker, Gambling, and Morality

  • Mixed attitudes toward poker:
    • Critics see it as paying to sit for hours, deceive people, and take their money.
    • Fans defend it as a deep skill game (math + psychology) and a structured social activity; low-stakes home games are framed as paying for entertainment, not “trying to get rich.”
  • Several emphasize that pros target wealthy “recreational” players and that variance makes “just play better poker” an unrealistic alternative to guaranteed cheating.

Twake Drive – An open-source alternative to Google Drive

Tech stack & architecture

  • Backend is TypeScript/Node.js with MongoDB, which triggers debate:
    • Some see Node/TS as reasonable for I/O‑heavy services and code-sharing with frontend.
    • Others argue a file‑sync system is also CPU‑heavy (hashing, crypto, concurrency) and that JS performance and the single‑threaded model will become a bottleneck, echoing long‑standing criticism of PHP‑based Nextcloud/ownCloud.
  • MongoDB choice is contentious:
    • Several report bad experiences and warn against using it for critical data; others say it’s been “rock solid” for years with WiredTiger.
    • Some note it’s at odds with a “fully open” mission; FerretDB is mentioned as an alternative.
  • Long back‑and‑forth on whether a database is needed at all:
    • One camp says filesystem/ACLs/snapshots/xattrs could store users, permissions, versions, and shareable links.
    • Others counter that complex metadata, joins, transactions, version history, and scalable sync essentially demand a DB.

Comparison with existing tools

  • Twake is compared heavily to Nextcloud/ownCloud:
    • Critics: Nextcloud seen as bloated, slow, and painful to install/maintain (especially outside their AIO stack).
    • Defenders: report years of stable use with Docker or Snap, good ecosystem, but admit rough edges and “2015‑era” web UI.
  • Seafile is praised as fast and reliable but upgrades can be painful.
  • Syncthing widely liked for peer‑to‑peer sync, but mobile and large‑file use cases are weaker.
  • Simple alternatives: filebrowser, Samba shares, rsync; plus other projects like CryptPad, Peergos, Seafile, Immich for photos.

UX, clients, and core features

  • Unclear whether Twake has polished native or mobile clients; screenshots exist but app‑store links are missing.
  • Many emphasize must‑haves for any Drive replacement:
    • Sync that is predictable and explainable to non‑technical users.
    • Simple conflict handling.
    • Zero‑drama upgrades and easy, testable backups.
    • Selective sync with placeholders (Dropbox/OneDrive‑style) is seen as a major gap in many OSS tools.
  • Integration with collaborative editors is crucial; Twake reportedly bundles OnlyOffice for realtime Docs/Sheets‑style editing.
  • Some users care strongly about advanced search (image/content understanding) where Google remains far ahead.

Security, deployment & sustainability

  • Strong warnings against exposing Samba to the internet; VPN (Tailscale/Wireguard) recommended.
  • Concerns about whether Twake can build a durable community and business model so it doesn’t disappear; corporate backing (Linagora, ex‑Cozy Cloud/Cozy Drive) is noted but not deeply analyzed.
  • Debate over name “Twake” and domain; some think it’s hard to say/spell and thus hurts adoption.

Debian Technical Committee overrides systemd change

Context: /run/lock permission change and Debian TC override

  • systemd upstream made /run/lock root‑writable only, citing security and robustness.
  • Debian’s systemd maintainer followed upstream, which broke older software assuming a world‑writable lock directory.
  • The Debian Technical Committee overrode this, restoring the previous behavior for now in the interest of stability and compatibility.
  • Some argue this is exactly Debian’s role; others see it as an unhealthy clash between upstream and a distro maintainer wearing both hats.

Legacy serial tools and lockfile behavior

  • A side thread debates serial console tools: cu vs minicom, picocom, screen. Some prefer cu for simplicity and ssh‑like escapes; others find it outdated.
  • The traditional UUCP‑style locking model (/var/lock, LCK..device) is still used by some tools; others use flock or newer mechanisms.

Security vs compatibility of world‑writable lock dirs

  • Pro‑change side:
    • World‑writable shared dirs are long known footguns: symlink attacks on root processes and DoS by exhausting tmpfs inodes/space.
    • Modern practice favors flock() and per‑user runtimes ($XDG_RUNTIME_DIR = /run/user/$uid) instead of global /var/lock.
    • Given increased threat models (untrusted code, supply‑chain issues, AI‑generated bugs), the old design is seen as indefensible long‑term.
  • Skeptical side:
    • The concrete risk from /var/lock is seen as theoretical or niche compared to other attack surfaces.
    • Many legacy or unmaintained tools cannot realistically be fixed; making /run/lock root‑only forces awkward workarounds or containers.
    • Some suggest separate mounts or quotas as less disruptive mitigations.
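
The flock()-style locking favored by the pro-change side can be sketched with Python's stdlib. The lock lives on an open file descriptor, so it disappears automatically if the process dies — no stale LCK..device files in a shared world-writable directory to clean up (the path below is illustrative; real tools would use a per-user runtime dir):

```python
# Sketch of flock()-based device locking, as an alternative to
# UUCP-style lockfiles in a world-writable /var/lock.
import fcntl
import os

lock_path = "/tmp/example-device.lock"   # illustrative; e.g. $XDG_RUNTIME_DIR in practice

fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o600)
try:
    # Non-blocking exclusive lock: fails immediately if another
    # process already holds the device.
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    print("lock acquired")
    # ... exclusive access to the serial device ...
except BlockingIOError:
    print("device busy")
finally:
    os.close(fd)   # closing the descriptor releases the lock
```

The trade-off debated in the thread is that legacy tools hard-code the lockfile convention and cannot be switched to this model without patching.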

FHS, UAPI, and filesystem layout politics

  • One camp says FHS 3.0 is effectively abandoned: it hasn’t tracked /run, /sys, /run/user, /usr‑merge, or container realities, and contains obsolete details (/var/games, /var/mail, UUCP locks).
  • Another argues a filesystem standard should be slow‑moving; “not updated” can mean “mature”, not “dead”.
  • systemd’s file‑hierarchy spec and the Linux UAPI Group are seen by some as a needed de‑facto successor; others view them as systemd/Fedora capturing standardization to legitimize their own layout choices.

Debian culture and pace vs “modernization”

  • Many commenters defend Debian’s “slow‑cooking” ethos: they value never having to reinstall and high upgrade stability, even if it delays changes like this.
  • Others criticize Debian for resisting long‑foreseen cleanups (global writable dirs, /usr merge) and making life hard for upstreams.

Views on systemd and its maintainers

  • Strongly mixed sentiment:
    • Supporters credit systemd with dramatically better service management, logging, and consistency across distros.
    • Critics see a pattern of arrogance, dismissing “niche” breakages, using warnings like “degraded/tainted” for unmerged /usr, and pushing the world to conform to systemd’s assumptions.
    • Some inject distrust over large‑vendor employment and speculate about motives; others push back, noting that upstream reasonably says “distros can patch behavior they want”.

Overall framing of the conflict

  • One reading: a straightforward distro‑vs‑upstream division of labor—systemd tightens defaults, Debian restores legacy behavior for its users.
  • Another reading: a recurring governance and culture clash where systemd unilaterally redefines long‑standing interfaces and Debian must either absorb the fallout or actively resist.

Interstellar Mission to a Black Hole

Primordial / Small Black Holes in the Solar System

  • Some imagine discovering an asteroid‑mass primordial black hole locally, avoiding interstellar travel.
  • Multiple comments stress that black holes are not “cosmic vacuums”: a Moon‑mass black hole would gravitationally behave like the Moon; tides and orbits would remain essentially unchanged.
  • The danger is from Hawking evaporation, not accretion: very small black holes could undergo runaway evaporation if their Hawking temperature exceeds the cosmic microwave background, potentially ending in intense gamma bursts.
  • Detection would be hard:
    • Gravitational effects or microlensing are primary options.
    • Hawking radiation might be detectable only in the final stages.
    • Some argue dust accretion should create faint but detectable X‑rays; others counter that matter densities are too low for significant accretion.
  • Ideas surface about black holes captured inside asteroids, making them anomalously dense.

Compact Objects as Megastructures / Sci‑Fi Concepts

  • Thought experiments: replacing the Moon with a black hole; building a mini‑Dyson shell around a black hole or neutron star to create a 1g “mini‑world”.
  • Limits noted: white dwarfs likely can’t be Moon‑sized; black holes/neutron stars make more sense.
  • Stability of Dyson‑like structures is highlighted as a major unsolved issue.

Light Sails, Steering, and Relativistic Hazards

  • Clarifications: Breakthrough Starshot–style designs are laser‑driven light sails, not solar‑wind sails; “light sail” is the generic term.
  • Stopping/steering:
    • You can tilt a sail to change direction; destination‑star light or a second reflector could in principle brake the craft.
    • Practically, deceleration forces at high speed and large distances are tiny, making orbital insertion extremely challenging; flyby missions seem more realistic.
  • Concerns raised about relativistic travel:
    • Interstellar medium impacts at ~0.5c could be catastrophic; “deflectors” à la Star Trek are invoked as a useful fiction.
    • Time dilation at 0.1–0.33c is acknowledged but calculated to be small (percent‑level), not millions of years.
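
The "percent-level" claim is easy to verify from the Lorentz factor, gamma = 1 / sqrt(1 - (v/c)^2):

```python
# Time dilation at the cruise speeds discussed in the thread.
import math

def gamma(beta):                      # beta = v/c
    return 1.0 / math.sqrt(1.0 - beta ** 2)

for beta in (0.1, 0.33, 0.5):
    print(f"v = {beta:.2f}c  ->  gamma = {gamma(beta):.4f}")
# 0.10c -> 1.0050 (~0.5%), 0.33c -> 1.0593 (~6%), 0.50c -> 1.1547 (~15%)
```

So at the 0.1–0.33c speeds under discussion, onboard clocks run only a few percent slow — far from science-fiction scales.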

Mission Feasibility: Trajectory Control and Communication

  • Several readers argue the key issue—how a ~1 g probe changes trajectory at ~0.3c—is largely hand‑waved in the referenced paper.
  • Proposed workarounds:
    • Fire large swarms of probes and rely on statistics (criticized as still inadequate in vast space).
    • Accept unbound flybys and use multiple daughter probes for local experiments and comparative trajectory measurements.
    • Use the sail itself for steering; more speculative ideas include paired probes with springs, which are dismissed as extremely inefficient “rockets” with terrible specific impulse.
  • Communication challenges:
    • Skepticism that a 1 g craft can transmit useful data over tens of light‑years; Voyager‑style high‑gain antennas and power sources are far too massive.
    • Suggestions include probe relays, return‑trajectory probes, or nuclear/betavoltaic power, but none are worked out in detail.
    • One commenter notes we also haven’t actually located a nearby black hole; relying purely on statistics is itself a “blocking” issue.

Scientific Payoff vs Alternatives and Priorities

  • Some see an interstellar black hole mission as inspirational but question the practical return: “nothing to see” versus strong counter‑claims about rich physics from accretion disks and lensing.
  • The Solar Gravitational Lens (SGL) mission and large orbital interferometric telescopes are proposed as more realistic, near‑term “aggressive” projects with clear payoff (e.g., imaging exoplanet surfaces).
  • Meta‑discussion laments funding going to AI‑pornbots and near‑term commerce rather than deep‑space infrastructure, though others note that profitable tech tends to get built, whereas pure exploration struggles.
  • A few broaden to long‑term human constraints: need to solve launch costs, longevity/aging, and perhaps FTL or cryosleep, or else missions become multi‑generation endeavors.

Apple will phase out Rosetta 2 in macOS 28

Timeline, Precedent, and Apple’s Philosophy

  • Rosetta 2 launched with M1 in 2020; removal in macOS 28 (2027) gives ~7 years of support.
  • Some argue this is generous and consistent with previous transitions (68k→PPC, PPC→Intel, Rosetta 1, 32‑bit drop); Apple has never prioritized long‑term backward compatibility.
  • Others say 6–7 years is short compared to Windows, where very old binaries often still run, and see this as planned obsolescence rather than necessity.

Impact on Existing Mac Apps and Plugins

  • Users rely on Intel‑only apps: scanner software, OCR tools, audio plugins, Photoshop plugins, DAW ecosystems, even current products that still tell users to run DAWs under Rosetta.
  • Many expect “long tail” software (older audio plugins, games, niche tools, studio setups) will simply die; some plan to freeze machines or keep old Macs offline for stability and compatibility.
  • There is concern about losing access to old creative projects that depend on discontinued plugins or formats.

Gaming, Wine/Crossover, and Apple’s “Subset for Games”

  • Apple says a subset of Rosetta will remain for “older unmaintained gaming titles” using Intel‑based frameworks.
  • Debate over what that actually covers: games touch large portions of Cocoa, Metal/OpenGL, AVFoundation, input, etc.; unclear how Apple will support games but not other apps using similar APIs.
  • Wine/Crossover and Game Porting Toolkit rely on the same translation tech; some fear newer Windows‑only games or Mac ports via Wine could be collateral damage despite the “games” carve‑out.

Containers, Docker, and Dev Workflows

  • Big concern from developers: x86‑only Docker images (e.g., SQL Server, corporate stacks) and desire to run exactly the same images as x86 production.
  • Confusion over whether Rosetta for Linux VMs and Apple’s containerization framework (which uses Rosetta) are affected; some read the notice as Mac‑app‑only, others note Apple’s language is vague.
  • Many report they already ship multi‑arch images; others say duplicating builds for ARM adds cost and isn’t always justified.

Virtualization and Technical Details

  • Rosetta never emulated whole VMs; Parallels/QEMU can emulate x86 independently but are much slower without Rosetta.
  • Apple Silicon includes hardware assists (TSO, flag instructions) for fast x86 translation; those will likely remain as long as any Rosetta subset exists, so chip die area isn’t saved by dropping Mac‑app support alone.

Reactions and Alternatives

  • Supporters see this as a necessary push to finish the ARM64 transition and reduce maintenance/QA burden.
  • Critics emphasize loss of user trust, broken workflows, and contrast with Linux/WINE or Windows’ longer compatibility horizons; some recommend not upgrading or switching platforms.
  • Several wish Apple would open‑source Rosetta so the community could maintain long‑term x86 support independently.

Alaska Airlines' statement on IT outage

Compensation policy and source confusion

  • One early subthread debates a quoted list of remedies (hotels, ground transport, meals, rebooking on other carriers).
  • Confusion arose because this text was on a linked “flexible travel policy” page, not the main statement page.
  • People argue over citation norms: whether quoting from a linked document without an explicit link is misleading, and whether linked pages should be treated as part of “the document.”

Passenger experiences during the outage

  • Multiple passengers report 4–8+ hour delays, tarmac waits, and arrivals at 3am.
  • Communication is described as poor: ground stops and repeated system failures weren’t clearly explained to passengers or gate staff.
  • Crew duty-time limits created extra uncertainty, with some flights ultimately canceled when crews “timed out.”
  • Offered compensation ranged from small meal vouchers ($12–$24) to potential discount codes, seen by some as inadequate given airport prices and lost time.

Legal and financial compensation debates (EU vs US)

  • Commenters contrast EU261-style compensation (250–600 EUR for long delays) with weaker or dismantled protections in the US.
  • Many recount European airlines resisting payouts, requiring escalation to regulators, small-claims court, or third‑party claim services.
  • There’s discussion of airlines exploiting technicalities (e.g., cancel vs delay, “extraordinary circumstances”) to avoid liability.

Operational impact and flight diversions

  • Some flights in the air were diverted or even returned to origin, possibly to avoid gate gridlock at Seattle.
  • Commenters note that once airborne, core operational IT needs are limited; the choke point is gates and ground operations.

Speculation about technical root cause

  • Some joke about expired certificates or DNS; others cite the airline’s wording about a “failure at a primary data center.”
  • One commenter claims many certificates are manually managed and prone to expiry; “autorotate everything” is the suggested best practice.
  • Others question the vague phrase “IT outage” and whether it masks internal mistakes vs external attacks.

Airline IT culture, infrastructure, and pay

  • Several threads describe Alaska’s infrastructure as old, fragmented, and dominated by internal “fiefdoms” resistant to modernization or best practices.
  • There are anecdotes of critical processes hinging on fragile components (e.g., SMTP), lack of cross‑team collaboration, and high turnover.
  • Reported compensation for engineers and SREs is considered low for mission-critical roles in the Seattle market.
  • Some defend older, mainframe-based cores (e.g., TPF) as stable, arguing that outages usually arise in newer middleware and integration layers.
  • Debate centers on culture and incentives more than raw technology: reliable systems could be built with 2015-era tech, but organizations don’t prioritize or staff that work.

Broader concerns about airline reliability and regulation

  • Commenters note that all major US carriers have had large IT failures recently, with repeated nationwide ground stops.
  • Perceived lack of consumer or regulatory pressure leads to minimal investment in resilience; many expect such disruptions to remain common.
  • Outsourcing to large IT vendors is blamed by some for systemic fragility.

Website / UX side-notes

  • The outage statement page is criticized for heavy weight due to a 2.4MB SVG logo that embeds an unoptimized PNG.
  • Commenters view this as emblematic of sloppy implementation and easy, low‑hanging performance fixes being ignored.

'Attention is all you need' coauthor says he's 'sick' of transformers

Dominance of Transformers and Research Monoculture

  • Several comments argue that transformers’ success has created an unhealthy monoculture: funding, conferences, and PhD work overwhelmingly chase incremental transformer gains instead of exploring other paradigms.
  • One analogy compares this to the entire food industry deciding to only improve hamburgers; another frames it as an imbalance between “exploration vs. exploitation.”
  • Others counter that this is just natural selection in research: the approach that works best (right now) wins attention and resources.

How Transformative Have Transformers Been?

  • Supporters say transformers have radically changed NLP, genomics, protein structure prediction (e.g., AlphaFold), drug discovery, computer vision, search/answer engines, and developer workflows.
  • Some practitioners describe LLM coding assistants as personally “transformative,” turning stressful workloads into mostly AI-assisted implementation.
  • Critics claim impacts in their own fields are “mostly negative,” with transformers driving distraction, noise, and shallow work rather than genuine scientific progress.

Slop, Spam, and Societal Harms

  • A recurring theme: transformers drastically lower the cost of producing plausible but wrong or low‑quality content (“slop”).
  • People highlight spam, scams, propaganda, astroturfing, robocalls, and degraded student learning as domains where LLMs currently excel.
  • Others argue models can also be used to filter and analyze such content, but acknowledge that incentives currently favor mass low-quality generation.

Architecture Debates and Alternatives

  • Some view transformers as an especially successful instance of a broader class (probabilistic models over sequences/graphs) and expect future gains from combining them with older ideas (PGMs, symbolic reasoning, causal inference).
  • Others emphasize architectural limits: softmax pathologies, attention “sinks,” positional encoding quirks, and scaling/energy costs. Various papers and ideas (e.g., alternative attention mechanisms, hyper-graph models, BDH) are mentioned as promising.
  • A minority is skeptical that a radically new architecture is the key; they see more upside in better training paradigms (e.g., reinforcement learning, data efficiency) than in replacing transformers.

AGI, Deduction, and Cognition

  • Some argue transformers are fundamentally inductive and can’t truly perform deduction without external tools; others respond that stochasticity doesn’t preclude deductive reasoning in principle.
  • A long subthread debates whether LLM capabilities imply “nothing special” about the human brain vs. the view that human cognition is grounded in desire, embodiment, and neurobiology in ways transformers do not capture.
  • There’s disagreement over whether LLM-generated work is genuinely “original” or just sophisticated plagiarism, and whether hallucination makes them categorically unlike human reasoning or just a noisier analogue.

Research Culture, Incentives, and Productization

  • Commenters note short project horizons (e.g., 3-month cycles) aimed at top conferences and benchmarks, favoring shoddy but fast incremental work.
  • Much of what the public sees as “AI” is described as 90% product engineering (RLHF, prompt design, UX) built on a small core of foundational research.
  • True non-transformer research is perceived as a small, underfunded fraction, overshadowed by the “tsunami of money” for transformer-based products.

Hardware, Energy, and Lock‑In

  • Transformers are praised for aligning extremely well with parallel GPU hardware, in contrast to RNNs; this hardware match is seen as a major reason they won.
  • Some worry that massive investment in GPU-style infra could trap the field on suboptimal architectures; others say good parallel algorithms are inherently superior, and hardware will evolve with any better approach.
  • Energy use and data center build‑out are flagged as looming constraints; some hope this will force more fundamental innovation.

Reactions to the Sakana CTO’s Anti‑Transformer Stance

  • Some dismiss the “sick of transformers” line as fundraising theater—positioning around “the next big thing” without specifying it.
  • Others see it as a normal researcher reaction: once a technique is “solved” and industrialized, curious people move on to more open problems.
  • A few compare this to artists abandoning a popular style, driven by boredom, stress, or ambition rather than purely by money.

Roc Camera

Purpose and Concept

  • The device is a Raspberry Pi–based camera that attaches a cryptographic proof (marketed as a ZK proof) to each image, claiming to attest that a given photo came from that camera and is unmodified.
  • Many commenters note it does not prove that the depicted scene is “real” or non‑AI, only that “this file was produced by this device with this firmware and metadata.”

Attacks and Limitations

  • “Analog hole” is repeatedly raised: you can photograph a screen, projection, or high‑quality print of an AI image and still get a valid proof.
  • Depth/LiDAR and extra sensors (IMU, GPS, ambient audio, etc.) are suggested to make such rebroadcasting harder, but others point out those signals can be spoofed (e.g., FPGA feeding CSI-2, HDMI‑to‑CSI adapters, fake sensor boards).
  • Even perfect attestation cannot address staging or selective framing; you can cryptographically prove a real photo of a misleading or manipulated scene.
  • Without a secure element on the sensor or SoC, several argue the current design cannot meaningfully prevent fully synthetic input.

Existing Standards and Alternatives

  • Multiple references to C2PA and camera-vendor schemes (Sony, Leica, Nikon, Canon). These sign images and/or edit histories; some earlier implementations were cracked.
  • Some say a simple per‑camera signing key is enough and ZK is just hype; others emphasize that richer, chained provenance (device + software edits) is the more mature direction.

Hardware and Product Design Reactions

  • Strong criticism of the $399 price for what appears to be a Pi 4, off‑the‑shelf IMX519 module, and visibly 3D‑printed case with cheap buttons.
  • Concerns about image quality (tiny sensor), Pi boot time and power draw, lack of current export function, and janky marketing site (scroll hijacking).
  • A minority defend it as a scrappy hardware experiment worth supporting even if rough; others call it a “toy” or “crypto gimmick.”

Open Source, Security Model, and ZK Debate

  • One side claims open-sourcing would break trust (users could sign AI images); others explain secure boot / HSM designs where user-modified firmware simply doesn’t get the vendor’s attestation key.
  • Several people ask what the ZK proof is actually proving beyond what a standard signature would, and note the site gives almost no technical detail.

Use Cases, Trust, and Social Implications

  • Suggested serious uses: journalism, courts, law enforcement, insurance, bodycams, real estate documentation.
  • Others argue that in practice authenticity will remain a matter of institutional and personal reputation, and that cryptographic “realness” may be overvalued, dystopian, or quickly undermined, much like DRM or NFTs.