Hacker News, Distilled

AI powered summaries for selected HN discussions.


Toll roads are spreading in America

Who Should Pay for Roads? Taxes vs Tolls

  • One camp argues “pay at point of use” is fairest: drivers, especially heavy users, should fund roads directly via tolls or per‑mile/per‑weight charges rather than broad income taxes.
  • Others say roads underpin all economic activity, so they should be funded from progressive general taxation, not user fees.

Equity and Regressive Impacts

  • Many call tolls a regressive tax: flat per‑trip fees bite much harder for low‑income commuters, who often have the least flexibility in where they live and work.
  • Supporters counter that regressiveness can be offset by rebates, UBI‑style dividends, or using toll revenue to improve transit and reduce other regressive taxes.
  • There’s concern that premium lanes produce “two‑tier” infrastructure: rich drivers buy speed while poorer drivers sit in worse traffic.

Economics of Road Wear and Vehicle Types

  • Multiple comments note that road damage scales roughly with the fourth power of axle load, so trucks and buses cause orders of magnitude more wear than cars or bikes.
  • Some advocate truck‑only tolls or strongly weight‑tiered fees; others worry about large pass‑through cost increases on goods.
  • Gas taxes are seen as inadequate and eroded by inflation, efficiency, and EVs; ideas include odometer/weight fees and tire‑based taxes, with debate over fairness to EV owners.
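The fourth-power relationship can be made concrete with a quick back-of-the-envelope calculation. This is a deliberate simplification (pavement engineering uses equivalent single-axle load formulas with more nuance), and the axle weights are round illustrative numbers:

```python
# Relative road wear under the rough "fourth-power" rule:
# damage per axle pass scales with (axle load)^4.
def relative_wear(axle_load_tonnes, reference_tonnes=1.0):
    return (axle_load_tonnes / reference_tonnes) ** 4

car_axle = relative_wear(1.0)     # ~2 t car spread over two axles
truck_axle = relative_wear(10.0)  # heavily loaded truck axle

print(truck_axle / car_axle)  # 10000.0: one truck axle ≈ 10,000 car axles
```

That four-orders-of-magnitude gap is why weight-tiered fees and truck-only tolls keep coming up in the thread.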

Design of HOT/HOV and Congestion Pricing

  • Economists and some drivers favor dynamically priced HOT lanes and congestion pricing to keep traffic flowing near the speed limit and internalize congestion costs.
  • Many Bay Area and Texas users describe HOV/HOT lanes as poorly enforced, widely cheated, and socially corrosive for rule‑followers.
  • Critics say using existing lanes for toll/HOT converts public capacity into paywalled capacity; others reply that total throughput can rise and emergency access improves.
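Dynamically priced lanes typically run a simple feedback loop: raise the toll when the lane fills past a target, lower it when there is slack. A toy sketch of that control loop (the target occupancy, step size, and dollar figures are invented for illustration, not any agency's real schedule):

```python
def next_toll(current_toll, occupancy, target=0.85,
              step=0.25, floor=0.50, cap=15.00):
    """Adjust the toll each interval to hold lane occupancy near target.

    occupancy: observed vehicles as a fraction of free-flow capacity.
    """
    if occupancy > target:
        current_toll += step  # lane filling up: price some drivers out
    elif occupancy < target - 0.10:
        current_toll -= step  # lane underused: attract more drivers
    return min(max(current_toll, floor), cap)

toll = 2.00
for observed in (0.90, 0.95, 0.80, 0.60):
    toll = next_toll(toll, observed)
    print(f"occupancy {observed:.2f} -> toll ${toll:.2f}")
```

The point of the mechanism is that price, not queueing, rations the scarce lane capacity, which is exactly what critics of "paywalled capacity" object to.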

Privatization, Governance, and “Temporary” Tolls

  • Strong distrust of public–private toll concessions: claims of guaranteed returns, rising tolls, weak maintenance, and state bailouts when projects fail.
  • Several note that many roads were sold or leased after being built with public money; “temporary” tolls rarely end.
  • Some argue even public toll agencies become growth‑oriented cash cows, and money is fungible so toll revenue often just backfills general budgets.

Urban Form, Transit, and Induced Demand

  • Urbanists emphasize induced demand and the Lewis–Mogridge effect: new lanes fill up; only pricing and better alternatives really cut congestion.
  • Others stress that adding lanes still increases freight/person throughput and that US land use and culture make high‑quality transit difficult outside a few metros.
  • Ongoing tension between visions: dense, transit‑oriented, walkable cities vs spacious, car‑dependent suburbs; each side accuses the other of ignoring preferences or externalities.

Technology, Surveillance, and Evasion

  • The shift to transponders and ALPR makes tolling far cheaper and more scalable but raises surveillance concerns about ubiquitous tracking of movement.
  • Some discuss widespread plate‑obscuring (including by police), fake plates, and hitch balls or trailers blocking plates; enforcement is uneven.

International and Regional Examples

  • Commenters cite Norway, Germany, the UK, and especially Sydney as cautionary tales or models: heavy reliance on tolls, private operators, and sophisticated electronic systems.
  • Within the US, experiences vary: Florida, Texas, Chicago, and the Northeast are cited as heavily tolled; much of the country still has few toll roads.

Proposed Compromises

  • Ideas include: dynamic congestion pricing earmarked for transit, truck‑focused tolling, income‑scaled fines and rebates, per‑mile/weight fees, or even universal tolls with equal per‑resident dividends.
  • Underlying divide remains: whether roads should behave like priced, scarcity‑managed infrastructure or like universal, tax‑funded public services.

Nvidia's $20B antitrust loophole

Deal structure & regulatory arbitrage

  • Nvidia didn’t buy Groq the company; it licensed Groq’s IP and hired key leadership/engineering talent.
  • Commenters argue this structure likely avoids:
    • CFIUS review, given Groq’s large Saudi government contracts.
    • Formal antitrust merger review and its delays.
  • Several see the ~$20B price (far above recent valuation) as paying for speed and certainty by dodging regulatory processes.
  • Others note it’s not a “loophole” if regulators simply choose not to treat such acquihires as de facto mergers.

What Nvidia wants

  • Groq is viewed as a serious inference competitor: an LPU architecture and chip reputedly better than GPUs for low-latency, high-throughput inference.
  • Nvidia gains: IP, compiler stack knowledge, and the people who built and understand it.
  • Some note that “non-exclusive” licensing may be largely cosmetic if Nvidia has all the top talent; others counter that IP can still be licensed to new implementers.

Fate of Groq, GroqCloud, and Saudi assets

  • Many assume GroqCloud will be wound down over 12–18 months and the remaining company will wither.
  • Others push back: Groq still has data centers, major Saudi commitments, and could survive as a cloud/infrastructure/IP-licensing business.
  • Unclear from the thread whether Saudi investors are being cashed out and how much value remains in the “stub” company.

Employee equity, cap tables, and fairness

  • A major thread: this structure may leave non-executive employees with worthless common stock while investors and founders capture most of the upside via secondary share sales or bespoke arrangements.
  • Some argue employees in late-stage startups rarely hold more than ~0.01% and often don’t participate in such transactions at all.
  • Others believe the headline price is high enough that common shareholders likely get something, but concede deal engineering could route most value to preferred holders and leadership.
  • General advice trend: treat startup equity as having near-zero expected value; negotiate cash, and assume you may be excluded from liquidity events unless explicitly protected.

Antitrust, IP law, and state power

  • Many see this as a case study in weak US antitrust: a dominant player can effectively absorb a serious rival’s brain trust and tech without merger review.
  • Some contend that robust antitrust would treat such “IP + talent” deals like acquisitions when they have similar competitive effects.
  • Others argue regulating where people can work would be unacceptable; that points to a mismatch between traditional antitrust tools (focused on corporate control) and modern competition centered on talent and IP.
  • A few blame broader IP and corporate law for enabling consolidation; others propose tax/UBI schemes to disincentivize value extraction from labor.

Shifting startup and AI acquisition norms

  • Commenters link this to a pattern in AI: “non-acquisitions” (licensing + key hires) instead of traditional M&A, seen as:
    • Easier on regulators.
    • More targeted: big companies buy only elite researchers/architects, not whole orgs.
  • This is viewed as corrosive to the startup bargain: rank-and-file workers take risk and lower pay but can be cut out of big outcomes.
  • Some predict labor will adapt by demanding higher salary, bonuses, and severance instead of banking on options.

Technical side notes

  • Debate over whether Groq’s LPU is practically inference-only due to limited on-chip SRAM vs GPU HBM needs for training.
  • Some challenge specific technical claims in the article (e.g., model sizes and throughput numbers; energy savings from reduced data movement), and a few suspect parts were AI-written or outdated.

Janet Jackson had the power to crash laptop computers (2022)

Storage media reliability and “spinning rust”

  • Discussion branches into HDD vs SSD reliability, especially for long‑term offline storage.
  • Several comments claim HDDs, if stored well and not used, can retain data for decades and even be partially recoverable after mechanical issues; SSDs and flash are said to risk silent data loss if left unpowered for long periods.
  • Others clarify that SSD “unpowered retention” is usually ≥1 year after the full write endurance is exhausted; cited torture tests found only tiny corruption after far exceeding spec, so the risk is framed as real but “extremely minor” for normal users.
  • HDD magnetic domains weaken over time, but error correction adds a large safety margin, so practical corruption appears much later than raw decay suggests.
  • Optical media are discussed: CD‑ROM vs CD‑R/RW, M‑Disc marketing claims, and the reality of media degradation over years.
  • Overall message: no medium is perfect; periodic refresh and multiple backups are more important than HDD vs SSD tribalism.

Resonance, vibration, and the Rhythm Nation bug

  • Many see the Janet Jackson story as an example of mechanical resonance in 2.5" laptop HDDs driven by specific audio frequencies, analogous to known issues where sound or vibration degrades disk I/O.
  • Links are shared to demos of shouting at storage servers and Apple’s similar anecdotes; some note a video that matches a note in the song to measured resonant frequencies of certain laptop drives.
  • There is curiosity about mechanism (audio vs video, EMI vs vibration), but consensus that audio‑induced head/disk resonance is at least plausible.
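As a rough plausibility check on the resonance explanation: the original story implicates 5400 RPM laptop drives, and converting spindle speed to rotational frequency puts the fundamental squarely in a song's bass range. The actual resonant mode of a head/platter assembly depends on the mechanics, not just spindle speed, so this is only an order-of-magnitude sanity check:

```python
def spindle_hz(rpm):
    """Rotational frequency of a drive spindle in Hz."""
    return rpm / 60

# Common laptop-drive speeds of the era.
for rpm in (4200, 5400, 7200):
    print(f"{rpm} RPM -> {spindle_hz(rpm):.0f} Hz")
```

A 5400 RPM drive spins at 90 Hz, so sustained audio energy near that frequency (or its harmonics) coupling into the chassis is at least physically credible.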

Authenticity, folklore, and naming

  • Some question whether the story is accurately remembered or partly apocryphal, given it’s second‑hand and decades old.
  • Others defend it as credible engineering lore and cite similar “weird bug” experiences where the exact cause was never fully understood.
  • The blog’s deliberate choice not to name the OEM is seen by some as principled (teaching, not shaming) and by others as limiting verifiability and deeper investigation.

Sound myths, infrasound, and physics tangents

  • A famous “7 Hz kills chickens” anecdote is brought up and then dismissed via skeptical analysis: 7 Hz wavelengths are far too large to couple meaningfully to a chicken’s skull.
  • Extended discussion covers how hard it is to generate clean sub‑10 Hz tones with conventional speakers, marketing vs real capabilities, and comparison to pipe organ low notes.
  • Related corrections include the Tacoma Narrows bridge collapsing from aeroelastic flutter, not simple resonance, and broader musings on mechanical resonance in engineering (e.g., F1 axles, vinyl cutting).

USD share as global reserve currency drops to lowest since 1994

Political leadership, the Fed, and trust

  • Several commenters tie reserve erosion to perceived chaos and tariffs from the current US administration, plus “weaponized” use of the financial system.
  • Others argue the Fed has tried to offset presidential policies, but its credibility is being eroded by politicization of appointments.
  • Trust is framed as central: countries, investors, and allies don’t “invest in chaos,” and some firms are reportedly shelving US expansion plans.

How reserve status erodes

  • Multiple comments clarify that central banks don’t need to “dump” Treasuries; simply not rolling over maturing bonds is equivalent to gradual selling.
  • Higher US deficits mean more debt must be sold; if external demand weakens, yields rise and debt service becomes more expensive, potentially feeding back into more borrowing.
  • Debate over whether this is a slow “whimper” (multipolar reserves, higher rates, stagflation risk) or could trigger crisis dynamics akin to a bank run; skeptics note that openly dumping Treasuries is self-destructive.
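The "not rolling over" mechanism can be illustrated with a toy run-off: if a central bank holds a ladder of Treasuries with, say, 10% maturing each year and reinvests nothing, the position decays steadily without a single outright sale. The 10% maturing share is invented for the example:

```python
holdings = 100.0       # index of an initial Treasury position
maturing_share = 0.10  # fraction maturing each year (illustrative)

for year in range(7):
    # Nothing is "dumped": maturing bonds are simply not replaced.
    holdings *= 1 - maturing_share

print(f"after 7 years: {holdings:.1f}% of the original position remains")
```

Under these assumptions the holder sheds more than half its position in seven years while never once selling into the market, which is why erosion can be both real and invisible as a "dump."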

Alternatives: multipolarity, BRICS, and the euro

  • Common view: no single successor exists; most likely outcome is baskets of currencies and a more multipolar reserve system.
  • The euro is seen by some as the only serious rival but hampered by incomplete fiscal integration and security dependence on the US.
  • BRICS payment systems are described by critics as mostly symbolic so far; proponents see them as early infrastructure for a future alternative.
  • The yuan and rupee are viewed as constrained by capital controls and institutional trust issues.

Gold, crypto, and backing debates

  • Long, contentious sub-thread on gold- or commodity-backed multinational currencies versus fiat:
    • Gold supporters emphasize scarcity and discipline; opponents stress rigid supply, vulnerability to miners, and historical instability under gold standards.
    • Some argue any hard backing just reassigns power to “mine and vault operators.”
    • Crypto is mostly dismissed as impractical for reserves, though a few suggest a central-bank digital dollar as a defensive move.

Sanctions, “petrodollar,” and diversification

  • Many link diversification away from USD reserves to the freezing of Russian assets and broader US sanctions policy.
  • Some say this has pushed countries, especially autocracies, to add gold and explore non-dollar payment rails (e.g., yuan oil deals, CIPS, BRICS).
  • Others downplay “end of the petrodollar” narratives, arguing oil trade is a smaller share of global trade and “petrodollar” was always mainly geopolitical.

Implications for the US and world

  • Being reserve currency is described as a mixed blessing: it enables “exorbitant privilege” and export of inflation but also forces the US to supply safe assets and tolerate large deficits.
  • Several argue reduced reserve share could be healthy long term—forcing US fiscal discipline and limiting its ability to fund wars—though others fear inflation, loss of living standards, and reduced geopolitical leverage.
  • Broad consensus: the dollar still dominates by trade and reserves, but inertia is being gradually replaced by hedging and parallel systems, with the endgame and timing seen as highly uncertain.

Gpg.fail

Scope of the Vulnerabilities and Talk

  • Thread centers on the 39c3 “to sign or not to sign” talk and gpg.fail, which documents ~14 practical vulnerabilities, many in GnuPG, some in other tools (Sequoia, minisign, age).
  • Key themes: signature type confusion (cleartext vs detached), malleability leading to plaintext recovery, odd parsing behaviors (e.g., formfeed allowing unsigned data injection), and unsafe handling of ANSI escape sequences in terminal output.
  • Several commenters stress these are mostly local/interaction attacks, not “remote worm” style bugs, but still serious because PGP tools are expected to safely handle untrusted input.
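The ANSI-escape issue is general: any tool that prints attacker-controlled bytes to a terminal can be made to hide, restyle, or spoof output. A defensive sketch (not GnuPG's actual code, and a simplified regex: real terminals accept more sequence families such as OSC and DCS) is to strip escape sequences before display:

```python
import re

# Matches CSI sequences (ESC [ ... final byte) plus any stray ESC bytes.
ANSI_RE = re.compile(r"\x1b\[[0-9;?]*[ -/]*[@-~]|\x1b")

def sanitize_for_terminal(untrusted: str) -> str:
    """Remove escape sequences so untrusted text can't restyle,
    overwrite, or conceal other terminal output."""
    return ANSI_RE.sub("", untrusted)

# \x1b[8m is "conceal": the spoofed text would be invisible if printed raw.
evil = "Good signature from \x1b[8mnot-really\x1b[0m <alice@example.com>"
print(sanitize_for_terminal(evil))
```

This is the class of bug the talk flags: signature-verification tools are exactly the programs expected to print untrusted input safely.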

GnuPG vs PGP vs Protocol Design

  • Strong criticism that PGP’s packet system and state machine are “fundamentally broken” and too complex, making such bugs almost inevitable.
  • Others argue the current OpenPGP standard itself isn’t the problem; gpg.fail mostly hits legacy parts and implementation bugs in GnuPG.
  • Debate whether the opening ISO verification attack affects Sequoia as well; some say yes, others say Sequoia’s behavior is less confusing.

Maintainer Responses and WONTFIX

  • Significant frustration that some GnuPG issues were marked WONTFIX, including attacks that allow plaintext exfiltration while only emitting a generic error.
  • A recent GnuPG blog post on cleartext signatures is seen as unsatisfying: if something has been “considered harmful” for decades, it should be deprecated and removed, not left in by policy.

Impact on Existing Workflows

  • Questions about whether git tag/commit signing, Linux distro package verification, and enterprise email encryption are at risk.
  • Consensus: most distro package-signing use is narrowly constrained and often layered on HTTPS, so not immediately in flames, but the ecosystem is brittle.
  • Some still use GPG heavily for backups, password stores, SSH keys, and smartcards, arguing the ecosystem and hardware support are hard to replace.

Alternatives and “Use the Right Tool”

  • Many recommend replacing PGP with task‑specific tools:
    • age for file encryption, minisign or SSH signatures for signing, Signal/WhatsApp for messaging, Sigstore/SLSA-like systems for software supply chain.
  • Pushback: PGP still dominates for Maven, Linux releases, and cross‑org email; migrating ecosystems and solving key distribution is non-trivial.

Key Distribution, Web of Trust, and Standards Schism

  • Broad agreement that the traditional web of trust and keyservers effectively failed; most modern use relies on pre-established or HTTPS-delivered keys.
  • Discussion of the OpenPGP schism: LibrePGP (GnuPG-aligned, minimalist) vs RFC 9580 (more changes). Some see both sides as heading toward an interoperability mess.

Licensing and Ecosystem Concerns

  • Separate thread worries that Rust-era rewrites default to MIT, enabling corporate “embrace, extend, extinguish,” unlike GPLv3-licensed GnuPG.
  • Others counter that users evidently prefer permissive-licensed replacements, and that forking remains possible even if corporations create proprietary variants.

Nvidia just paid $20B for a company that missed its revenue target by 75%

Regulators, Monopolies, and Political Corruption

  • Many see the deal as an example of weak or captured market regulators, likening the US government to being “for sale” and comparing Nvidia’s position to other big-tech monopolies.
  • Others push back, arguing corruption and regulatory capture have worsened recently, not remained constant, with debate over whether the degree of corruption matters.
  • Some cite FTC underfunding and staff cuts as concrete evidence regulators are being weakened.
  • There’s mention of DOJ merger guidelines that could allow future administrations to revisit serial anti‑competitive behavior, though several doubt this will actually happen.

What Nvidia Really Bought

  • Multiple comments argue this isn’t a classic acquisition but effectively an expensive “acqui-hire”: Nvidia paying to secure Groq’s key technical leaders and IP without absorbing the whole org.
  • Others note Nvidia structured this as IP licensing plus hiring, potentially to dodge CFIUS/antitrust scrutiny while still neutralizing a competitor.

Impact on Groq and Innovation

  • One side claims Nvidia is stifling independent innovation by removing a differentiated hardware competitor and consolidating AI compute under one giant.
  • Another side counters that:
    • Groq the company still exists, retains its hardware, gets ~$20B, and may expand GroqCloud or build “version 2.0.”
    • The IP license is non‑exclusive, so in theory other companies could still use Groq’s tech.
  • Several doubt the optimistic view, predicting most of the cash will go to investors, with employees and long‑term R&D underfunded, but this is speculative and currently unclear.

Bubble vs Strategic Logic

  • Some think the transaction is pure AI‑bubble behavior: huge multiples on shaky revenue projections and hype over fundamentals.
  • Others see a clear strategic fit:
    • Nvidia shoring up its weak spot in low‑latency inference.
    • Removing a future rival while they’re still “cheap.”
    • Accessing non‑TSMC fabrication and specialized architecture.
  • A few stress that both can be true: real technology plus bubble-level pricing.

Revenue Projections, Valuation, and Misrepresentation

  • Several commenters highlight the article’s confusion between valuation and forecast revenue (e.g., treating the $2B figure, a revenue projection, as a valuation alongside the $500M revised 2025 revenue).
  • Clarifications:
    • Groq reportedly cut 2025 revenue projections from $2B to $500M.
    • Its valuation later increased to ~$6.9B in a subsequent round.
  • Some float the idea that overly rosy projections could be fraud; others note missed forecasts are common and only fraudulent if knowingly false—something that is currently unclear.

Startups as Big-Tech R&D and Equity Concerns

  • There’s broad agreement that this fits a long‑standing pattern: startups act as external, high‑risk R&D labs for giants (similar to biotech/pharma and prior Cisco/Google playbooks).
  • Multiple comments worry about a growing trend where leadership and IP are bought out, but common employees with equity are left with little in “acqui-hire” style deals.

Geopolitics and China

  • One thread warns that selling advanced chip tech and allowing certain foreign ties risks enabling cheaper Chinese clones via reverse engineering.
  • Others agree this could weaken long‑term US competitiveness, though details here are sparse and mostly alarmist, not deeply substantiated in the discussion.

Article Style and Reliability

  • Some readers liked the technical explanations and visualizations (e.g., money stacks); others found them condescending or irrelevant.
  • The author engaged in the thread, clarified using transcription (not LLM generation), and corrected factual errors, but several still view the piece as biased toward an “AI bubble” narrative and mixing up key financial concepts.

Floor796

Overall reception

  • Widely praised as “incredible,” “stunning maximalism,” and one of the best things people have seen online; many call it a true labor of love.
  • Seen as the kind of whimsical, personal project that “makes the internet a better place” and recalls the early-2000s web/StumbleUpon era.
  • Some mention it’s a time sink that could productively replace doomscrolling.

Interactivity and Easter eggs

  • Many comments trade coordinates and hints for hidden interactions: Naruto, Jaws, Fight Club poster, Chuck Norris, black hole, Ninja Turtle, Goofy, Duke Nukem cocoon, Steven Seagal, Monkey Island, Half-Life tentacle, Lexx, Walt & Jesse, and more.
  • FAQ-listed features are highlighted: quests, payphone with discoverable numbers, Racer796 arcade game, project stats screen, Change My Mind sign, 10‑second melodies, pixel animations, Free Ads Board, Wally/Waldo, and 20+ special actions.
  • Sound effects exist on some elements; users enjoy hunting for them.
  • A neon “Hacker News” ad and other user-created ads show off community participation.

Art style, references, and themes

  • Strong reactions to the dense, hand-crafted pixel-art vignettes; compared to Moebius, Flashback, eBoy, Theme Hospital, XCOM, Alien Syndrome, Habbo Hotel, MAD magazine, xkcd’s “Click & Drag,” and even Bosch’s “Garden of Earthly Delights.”
  • Viewers note how iconic characters are recognizable with minimal pixels, while unfamiliar (often Soviet-era) ones read as generic extras.
  • Some pick up on thematic scenes, e.g., a sequence echoing “Another Brick in the Wall” as a critique of schooling and conformity.

Technical design and performance

  • The “mega‑gif” concept is explained: 796 refers to G‑I‑F (the 7th, 9th, and 6th letters of the alphabet); the whole floor, one level of a space station, is effectively a single huge animated GIF.
  • Custom video/animation format: the floor is split into sections, each compressed and rendered in a worker thread to a shared canvas.
  • Creator built the site, engine, and editor from scratch (no big JS frameworks; minimal dependencies like Less). Several admire how fast it loads versus modern web apps.
  • One person reports a severe crash on Win10/Firefox; others on similar setups report smooth performance, suggesting a machine- or driver-specific issue.
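The naming scheme is easy to verify in one line, mapping the digits 7, 9, 6 to positions in the alphabet:

```python
# Letters of the alphabet at positions 7, 9, and 6.
letters = "".join(chr(ord("A") + n - 1) for n in (7, 9, 6))
print(letters)  # GIF
```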

Creator and background

  • FAQ notes the project began in 2018, with a year spent on tooling; first block took ~8 months, now ~1–1.5 months per block.
  • Clues (Russian-language article, UI languages, Soviet-era characters) indicate a Belarus-based, Russian‑speaking web programmer doing nearly everything solo.

AI, games, and future possibilities

  • Some fantasize about giving NPCs AI, turning it into an explorable, first-person world or XR experience.
  • Others argue that adding AI would betray the hand-made artistic intent, while a few stress AI is broader than just LLMs.
  • One commenter uses “recreating this world in 3D with full interaction” as a personal benchmark for future AGI/world‑model capabilities.

Apple releases open-source model that instantly turns 2D photos into 3D views

Model capabilities & applications

  • Converts a single 2D image into a 3D-like “spatial” scene using Gaussian splats; demos are widely seen as impressive, especially for photos of people and rooms.
  • Expected uses: Apple Vision Pro content, iOS “spatial scenes,” lock-screen effects, real estate imaging, VR experiences, and personal memory enhancement (e.g., deceased relatives, historical footage).
  • Some foresee major impact on creative workflows (film, graphics, VR worldbuilding) by dramatically lowering content-creation time.

Technical characteristics & limitations

  • Uses Gaussian splats rather than meshes; some users have hacked in mesh export via other projects, but note artifacts and holes when moving the camera off-axis.
  • Limited viewpoint freedom: good for small viewpoint shifts / stereoscopy, not full 6DOF; glitches appear with larger movements.
  • Resolution and layers are capped, and it doesn’t handle multi-image fusion or robust inpainting of unseen regions.
  • Others mention alternative or related projects (StereoCrafter, GeometryCrafter, HunyuanWorld, Marble) that may be better for level design or video/temporal consistency.

Relation to Apple products

  • Several commenters believe this (or a close variant) underpins iOS / visionOS “Spatial Scenes” and Photos 3D effects.
  • Users of those features report strong emotional impact but also note they are more constrained than the research demos, likely to preserve illusion quality.

Tooling and accessibility

  • Some frustration over Conda-based setup; others report the repo’s instructions work and suggest alternatives like uv or pixi, or just plain virtualenv.
  • A Hugging Face space makes it usable in the browser; one commenter says Apple isn’t “serious” without an official frontend, others counter it’s a research release.

Licensing and “open source” debate

  • Weights are licensed for “research purposes” only; many argue this is not open source under OSI’s “no field-of-endeavor restrictions.”
  • Distinction is drawn between open source, source-available, and “open weights”; some say the HN title is misleading, as the repo itself doesn’t claim to be open source.
  • Broader frustration that big tech (Apple, Meta, others) market such releases as “open” while retaining commercial control, and that AI threads constantly devolve into definitions of “open source.”

Copyright status and ethics of weights

  • Heated discussion on whether neural network weights are copyrightable “tables of numbers,” with conflicting claims and references to database protections.
  • Some argue the license mainly signals litigation risk rather than clear legal boundaries.
  • Moral questions raised about enforcing proprietary rights on models trained on unlicensed copyrighted data; replies note that if training is legal (fair use), that doesn’t automatically justify ignoring the model license.

Researchers’ origins & US STEM pipeline

  • Side thread on many authors having non-US educational backgrounds.
  • Points raised: you can’t infer birthplace from names; the US is a small share of world population; foreign-born researchers form a large fraction of CS PhDs and often stay.
  • One long analysis frames the situation as universities backfilling funding with international students, creating a pipeline industry later tapped for AI research; scaling a purely domestic PhD pipeline to similar levels is seen as unrealistic.

Broader attitudes toward Apple

  • Mixed views: some praise Apple for impactful, user-facing applications of AI; others distrust its closed ecosystem and see this release as another example of pseudo-openness.
  • Historical grievances aired (OpenCL, lack of Linux on Apple Silicon, App Store control), contrasted with Apple’s substantial genuine open-source contributions in other areas.

OrangePi 6 Plus Review

Performance and Power Use

  • Idle power draw (~15 W) is widely criticized as excessive for an SBC, especially compared to:
    • Raspberry Pi 5 (~3 W idle),
    • Intel N100/N150 mini-PCs (typically 5–8 W idle),
    • Older x86 thin clients (~5–10 W total).
  • Single-core performance appears roughly comparable to Intel N150; multi-core performance of the 12-core ARM SoC is substantially higher.
  • Some note that many home workloads (streaming, routing, file serving, retro emulation) are more sensitive to single-thread performance than to core count.
  • Others give counterexamples where heavy multithreaded tasks (e.g., Plex analysis) fully use many cores.
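To put the idle-power criticism in perspective, the gap compounds over a year of always-on operation. The electricity price here ($0.30/kWh) and the per-device idle figures are the thread's rough numbers, not measurements:

```python
def annual_cost(idle_watts, price_per_kwh=0.30, hours=24 * 365):
    """Yearly electricity cost of a device left idling continuously."""
    kwh = idle_watts * hours / 1000
    return kwh * price_per_kwh

for name, watts in [("OrangePi 6 Plus", 15), ("Raspberry Pi 5", 3),
                    ("N100 mini-PC", 6)]:
    print(f"{name}: ${annual_cost(watts):.2f}/year at idle")
```

At these assumptions the OrangePi costs roughly $30 more per year than a Pi 5 just to sit idle, which is why the 15 W figure draws so much criticism for an SBC.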

ARM SBCs vs x86 Mini PCs

  • Many argue that at ~$200, x86 mini-PCs (N100/N150/N300, small Ryzen systems) are a better deal:
    • Include case, PSU, storage, cooling.
    • Much better OS and driver ecosystem; “just install a mainline distro.”
  • Repeated N150/N300 recommendations are defended as pragmatic, not shilling: performance-per-watt is strong and GPU/drivers are mature.
  • Some ARM users report smooth experiences on Rockchip-based boards and ARM VPSs, claiming they “just work” for common server workloads.

Software Support, Standards, and E-Waste

  • Core complaint: non-mainlined kernels and board-specific hacks mean:
    • Stale kernels, abandoned vendor images, fragile Google Drive downloads.
    • Per-board device trees instead of standardized ACPI/UEFI.
  • Several call such boards “e-waste” unless drivers are upstreamed; some propose avoiding any SBC without mainline support.
  • ARM SystemReady/UEFI is seen as the path forward; a few boards (Pine64, Radxa, some Pi 4 setups) are cited as partial successes.
  • CIX is reported to be working on kernel patches, but GPU/NPU support is still missing in mainline.

NPU and AI Acceleration

  • 30 TOPS NPU is viewed skeptically:
    • Often requires proprietary SDKs (e.g., NeuralONE) and custom stacks.
    • RAM bandwidth/size limitations make such NPUs marginal for LLMs; CPU often beats them in practice.
    • Common complaint that NPUs are “decorative” without robust, open drivers.
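The bandwidth objection can be quantified with the standard back-of-the-envelope bound: for single-stream LLM decoding, every generated token must stream essentially all model weights through memory, so tokens/s is capped by bandwidth divided by model size regardless of TOPS. The model size and bandwidth below are assumed round numbers, not measurements of this board:

```python
def decode_tokens_per_sec(model_gb, bandwidth_gb_s):
    """Upper bound on single-stream LLM decode speed: each token
    reads every weight once, so memory bandwidth is the ceiling."""
    return bandwidth_gb_s / model_gb

# e.g. an 8B-parameter model quantized to 8 bits (~8 GB of weights)
# on LPDDR with ~50 GB/s of usable bandwidth (assumed figures):
print(decode_tokens_per_sec(8, 50))  # 6.25 tokens/s, whatever the TOPS
```

Under those assumptions a 30 TOPS NPU and a CPU hit the same ~6 tokens/s wall, which is the commenters' point about the NPU being marginal for LLM work.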

Use Cases and Positioning of ARM SBCs

  • Some see the “sweet spot” of ARM SBCs as cheap, low-power, headless IoT/edge devices with GPIO, not near-desktop machines.
  • Others appreciate ARM diversity (and dislike x86 dominance) and use ARM boards successfully in home labs.
  • Multiple commenters say they’ve been burned by OrangePi (and similar) before and won’t buy again without clear, long-term, mainline support.

Show HN: Ez FFmpeg – Video editing in plain English

LLMs as ffmpeg frontends

  • Many use chatbots specifically to generate ffmpeg commands (including complex effects like “bounce” forward/backward loops), iterating until it works and then saving the command.
  • People emphasize LLMs as “text → formal directive” translators (CLI, SQL, legalese), with the user still inspecting and learning from the output.
  • Others report LLM hallucinations on ffmpeg edge cases and stress that understanding codecs/containers is still often necessary.
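For the "bounce" loop mentioned above, the usual ffmpeg approach is to split the stream, reverse one copy, and concatenate. A sketch that assembles the invocation (the filter graph is the standard split/reverse/concat pattern; filenames are placeholders):

```python
def bounce_command(src, dst):
    """Assemble ffmpeg args for a forward-then-backward "bounce" loop.

    Note: the reverse filter buffers frames in memory, so keep clips short.
    """
    graph = ("[0:v]split[fwd][tmp];[tmp]reverse[bwd];"
             "[fwd][bwd]concat=n=2:v=1[out]")
    return ["ffmpeg", "-i", src, "-filter_complex", graph,
            "-map", "[out]", "-an", dst]

print(" ".join(bounce_command("clip.mp4", "bounce.mp4")))
# pass the list to subprocess.run(..., check=True) to execute
```

This is also the kind of command people report saving after an LLM gets it right once: inspect, verify, then keep the known-good version.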

Learning vs offloading complexity

  • Some argue it’s not worth learning ffmpeg if used once or twice a year; they just want a one-off solution.
  • Others counter that you don’t memorize options, you learn concepts and where to look in the docs; LLMs can’t yet be trusted like well‑vetted libraries or distro packages.
  • Debate over “trust”: library code is implicitly tested in real-world use; LLM output must be individually verified.

Value and limits of plain-English wrappers

  • Supporters like ezff for covering common patterns (“90% use cases”) and avoiding manual syntax.
  • Critics say oversimplified wrappers that always re‑encode (e.g., mkv→mp4, extracting audio as MP3) hide important distinctions like remux vs transcode and quality settings, reinforcing misconceptions about video.
  • Concern about the “cliff”: once a need falls outside the 20 patterns, users are stuck without having learned ffmpeg itself; some suggest falling back to an LLM in those cases.
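
The remux-vs-transcode distinction the critics raise can be shown in miniature. This helper is illustrative only (it is not ezff's API), and the codec choices are placeholder assumptions:

```python
def build_ffmpeg_args(src: str, dst: str, remux: bool = True) -> list[str]:
    """Illustrative only, not ezff's API.

    A remux copies the existing streams into a new container losslessly
    and near-instantly (assuming the codecs fit the target container);
    a transcode re-encodes them, which is slow and lossy. Wrappers that
    always re-encode (e.g., mkv -> mp4) hide exactly this choice.
    """
    codec = ["-c", "copy"] if remux else ["-c:v", "libx264", "-c:a", "aac"]
    return ["ffmpeg", "-i", src, *codec, dst]

# mkv -> mp4 without quality loss: just change the container.
remux_args = build_ffmpeg_args("in.mkv", "out.mp4", remux=True)
```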

Implementation concerns (Node/npm)

  • Multiple commenters dislike a CLI that depends on Node/npm due to bloat and supply‑chain risk, preferring a single static binary (Rust/Go/C) or distro packages.
  • The GitHub link on npm returns 404, which some find off‑putting; people resort to the “code” tab on npm.

Alternative UX ideas and tools

  • Suggestions: tab completion, TUI/interactive builders that explain each flag, “typed” CLIs with dropdown-like choices, or generalized helpers like helpme and llmwrap.
  • Some rely on GUI tools (e.g., Lossless Cut, ScreenFlow) or their own wrapper scripts/aliases instead.

Video/GIF and codec tangents

  • Discussion on macOS failing to open some MP4s due to codecs; advice to re‑encode to H.264+AAC or use players like VLC/mpv/IINA.
  • Multiple detailed ffmpeg snippets for high‑quality GIFs (palettegen, paletteuse, dithering), mentions of gifski and palette‑per‑frame.
  • One nitpick about ambiguous phrasing like “slow down by 2x” in the English interface.
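
The two-pass palette workflow those snippets revolve around looks roughly like the following; the fps, scale, and dither values are placeholder choices, not the exact snippets from the thread:

```python
# Pass 1 generates an optimized 256-color palette from the source video;
# pass 2 applies it with dithering. Values here are illustrative.
vf = "fps=12,scale=480:-1:flags=lanczos"
pass1 = ["ffmpeg", "-i", "in.mp4", "-vf", vf + ",palettegen", "palette.png"]
pass2 = [
    "ffmpeg", "-i", "in.mp4", "-i", "palette.png",
    "-filter_complex", vf + "[x];[x][1:v]paletteuse=dither=sierra2_4a",
    "out.gif",
]
```

The per-frame-palette variant mentioned in the thread trades file size for quality by passing `stats_mode` options to `palettegen`/`paletteuse`.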

Pre-commit hooks are broken

Role of hooks vs CI

  • Strong consensus that hooks are not enforceable or reliable for policy; CI must be the source of truth.
  • Hooks are compared to client-side validation; CI to server-side validation.
  • Many argue hooks should be opt‑in productivity tools to reduce CI churn, not gatekeepers for correctness.

Impact on developer workflows

  • Mandatory or auto-installed hooks are widely reported as frustrating, especially when slow, flaky, or stateful.
  • Hooks often break workflows involving:
    • WIP commits, frequent small “checkpoint” commits.
    • Interactive rebases, merges, and partial staging (git add -p).
    • Non-standard or power-user workflows, or less-technical users (e.g., game studio artists/designers).
  • Because --no-verify and local hook directories are trivial workarounds, strict hooks tend to be bypassed, undermining their purpose.

What hooks are appropriate

  • Many recommend only fast, deterministic hooks that don’t depend on network, external services, or complex state.
  • Several argue hooks should be pure: no mutating the working tree, ideally just reporting issues or rejecting commits.
  • Secret detection and large-file checks are cited as rare cases where pre-commit/pre-push is valuable because CI is “too late.”
  • Formatting and lint enforcement are seen as better suited to:
    • Editors/IDEs and local scripts.
    • Pre-push hooks or CI pipelines, not pre-commit.
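
A hook meeting those criteria — fast, deterministic, offline, and purely reporting — might look like this sketch; the size threshold and git plumbing are illustrative assumptions, not a recommended implementation:

```python
# Sketch of a fast, deterministic pre-commit check: flag accidentally
# staged large files. No network, no external services, and it never
# mutates the working tree -- it only reports.
import subprocess

MAX_BYTES = 5 * 1024 * 1024  # illustrative threshold

def oversized(files_with_sizes, limit=MAX_BYTES):
    """Pure check: return the paths whose staged size exceeds the limit."""
    return [path for path, size in files_with_sizes if size > limit]

def staged_files_with_sizes():
    # Newly added staged paths; sizes come from the index blobs, so the
    # check sees exactly what would be committed.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=A"],
        capture_output=True, text=True,
    )
    sizes = []
    for path in out.stdout.split():
        blob = subprocess.run(["git", "cat-file", "-s", f":{path}"],
                              capture_output=True, text=True)
        sizes.append((path, int(blob.stdout or 0)))
    return sizes
```

Wired up as `.git/hooks/pre-commit`, the script would print offending paths and exit non-zero. As the thread notes, anyone can still bypass it with `--no-verify`, so CI remains the source of truth.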

History rewriting and rebasing

  • Debate over whether branches are personal and mutable until PRs vs. becoming “shared” once a PR is open.
  • Some teams treat working/PR branches as mutable and squash to a single commit; others value atomic commits that always build, for bisecting and comprehension.
  • Strong disagreements over heavy rebasing vs. merge-based workflows and linear main branches vs. preserving merge history.

Tooling and alternative approaches

  • Mentioned tools/workflows: pre-commit framework, lefthook, git filters, git-absorb, git rebase -x, JJ/jujutsu (jj fix, planned jj run), git worktrees.
  • Some say frameworks mitigate common hook pitfalls; others note they can’t fix bad hook design or fundamental UX issues.
  • Several prefer explicit local scripts or editor integration over automatic hooks for formatting/linting.

Philosophy of commit history

  • Split between:
    • Treating commits as disposable local checkpoints, with squash at merge.
    • Treating commit history as a structured narrative aiding review, debugging, and long-term maintenance.
  • Tension between individual productivity and shared-team readability/maintainability is a recurring theme.

CEO of health care software company sentenced for $1B fraud conspiracy

Overall reaction to the case

  • Many see the conviction and 15-year sentence as rare but welcome accountability in a system where white-collar and healthcare fraud often feel under-punished.
  • Others are immediately cynical: they focus on the fact that restitution is less than half the $1B figure, and speculate he still comes out net-positive or could “buy” leniency via political donations or pardons.

Doctors’ role and telemedicine fraud

  • Commenters are especially dismayed that doctors signed off on false orders with no real exams, viewing this as a total betrayal of professional ethics.
  • Some argue the core problem is billing government insurance; if patients pay cash for questionable but harmless “gratifications,” it’s less of a societal issue.
  • Others note similar patterns in online pill mills and TV-driven Medicare scams targeting the elderly.

Deterrence, restitution, and how to punish fraud

  • Strong sentiment that current penalties don’t deter billion‑dollar frauds: paying back a fraction if caught is seen as a bad gamble, not a deterrent.
  • Proposals include: forcing fraudsters to work off the debt at prison wages; seizing generational wealth; aggressively clawing back funds from family unless they can prove legitimate origin.
  • Critics say lifelong compulsory prison labor is effectively slavery, morally unacceptable, and would create perverse incentives to imprison more people.
  • Debate over whether punishment deters crime: many argue severity matters less than certainty of being caught.

Prison labor and the 13th Amendment

  • Several note that forced or near-unpaid prison labor already exists, permitted under the Thirteenth Amendment’s explicit exception for punishment of crime.
  • Others counter that legality ≠ morality, comparing this to modern slavery and pointing to racially skewed impacts and private prison profiteering.
  • Alternative models like Scandinavian rehabilitation-focused systems are raised as preferable.

Corporate accountability vs. recovery for victims

  • Some want “death penalty” for corporations: dissolve entities, wipe out investors, seize IP and assets, and bar executives/boards.
  • Others caution that maximizing victim restitution often requires keeping operating units intact (e.g., factories, staff), effectively recreating the company under a new name.
  • Tension highlighted between using corporations as examples for deterrence versus prioritizing compensation for victims and societal continuity.

Systemic Medicare/Medicaid fraud and politics

  • Commenters see this as a tiny piece of a much larger, long-running Medicare/Medicaid fraud ecosystem.
  • Criticism of Medicare’s weak upfront controls and reliance on catching only the biggest scams after the fact.
  • Political cynicism runs throughout: references to past massive healthcare fraud cases where leaders went on to successful political careers; speculation about future pardons for well-connected fraudsters; and a broader view of U.S. healthcare as “deathcare” optimized for shareholder value.

QNX Self-Hosted Developer Desktop

Overall Reaction to the New QNX Desktop

  • Many are surprised to see a full QNX desktop again, especially with Wayland and XFCE.
  • Enthusiasm from people nostalgic for QNX/BB10 and those interested in modern tooling on a microkernel RTOS.
  • Some see it mainly as a demo / dev environment rather than a serious desktop OS competitor.

Bare Metal vs Virtualized Environment

  • Current release runs under QEMU on Ubuntu; several commenters want a native, bare‑metal image.
  • Roadmap mentions a Raspberry Pi desktop image and bare-metal support in the short term.
  • Experienced users note that QNX has long supported bare-metal deployment via board support packages; the VM image is aimed at people who think of the UI as “the OS.”

Licensing History and Trust Concerns

  • Strong skepticism due to repeated past “bait and switch”: hobbyist / non-commercial licenses and partial source access were granted and then revoked, sometimes abruptly.
  • View that this destroyed the community, stalled open‑source ports, and pushed developers away.
  • Suggestions that QNX needs a stable, Unreal‑style revenue-sharing license and a contractual commitment not to yank access.
  • Some argue it’s wiser now to invest in open alternatives (Linux RT, Redox) given this track record.

Automotive Use and Linux Competition

  • QNX widely used in cars, both bare metal and virtualized; some OEMs reportedly moving to Linux, especially in China.
  • Reasons cited for switching: QNX commercial cost, Linux now having mainline real-time patches, existence of open RTOSes for smaller/critical systems.
  • Counterpoint: PREEMPT_RT is still only soft real-time and AGL hasn’t yet produced certifiable safety systems; OEMs sometimes “come back” to QNX after failed Linux experiments.

Technical Merits of QNX

  • Praised as a “true” microkernel with hard real-time guarantees, O(1) messaging/scheduling (historically), and very small footprint.
  • Kernel services run as processes; drivers are sandboxed so failures don’t take down the system.
  • Reported decade‑long uptimes on deployed devices without reboots.
  • Network‑transparent IPC (Qnet) was a standout feature, but its removal in 8.0 is widely criticized.

UI Stack and Tooling

  • Strong nostalgia for the Photon microGUI and its robustness (e.g., restarting the GUI without killing apps).
  • Disappointment that the new desktop uses Wayland/Weston + XFCE/GTK instead of Photon or full KDE Plasma.
  • Qt is supported and upstream; much KDE software can run on QNX.
  • Weston is confirmed as the compositor in this release.

Ecosystem, Languages, and Phones

  • Rust and Swift are supported as languages, but commenters stress this does not imply iOS app compatibility.
  • QNX’s role in BlackBerry 10 and older QNX demo disks (single‑floppy GUI + browser) inspire considerable nostalgia.
  • Some are hopeful QNX could again power secure mobile or embedded devices; others think mismanagement and closed development have limited that potential.

The battle to stop clever people betting

Smart People, Edges, and Prediction Markets

  • Some argue genuinely “smart” people avoid gambling, seeing belief in one’s immunity to addiction as delusional.
  • Others counter that many clever people do live by betting (stocks, derivatives, sports, prediction markets) and that learning to bet with an edge is adaptive.
  • Several posters say they enjoy betting as a way to quantify probabilities and test forecasts, especially on platforms like Polymarket, but still frame it as gambling.
  • There’s consensus that “smart” gamblers only play when they have an edge; most real‑world bets are negative expected value.
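
The negative-expected-value point is easy to make concrete; American roulette is a standard textbook illustration (not an example from the thread):

```python
# Single-number bet on American roulette: 38 slots, a winner pays 35:1.
p_win = 1 / 38
ev_per_dollar = p_win * 35 - (1 - p_win) * 1
# ev_per_dollar is about -0.0526: the house keeps ~5.26% of every dollar
# wagered in the long run, so no betting strategy on this game has an edge.
```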

Fairness, Banning Winners, and the House Model

  • Many see sportsbooks as explicitly designed to part gamblers from their money, not to be fair games.
  • Discussion highlights two models:
    • Traditional bookies/casinos: you often bet against the house; they limit or ban winning players.
    • Exchanges/prediction markets/financial markets: operators earn a fee or spread and don’t directly lose to winners.
  • Some think banning sharp players is rational business (to preserve casual demand); others see it as proof of an exploitative industry.

Gambling vs Financial Markets

  • Multiple comments compare gambling with stock/derivatives markets: both involve risk, but casinos are clearly negative‑sum after the rake, while financial markets are closer to zero‑sum (or positive‑sum in utility).
  • “Neo‑brokers” with leveraged products and casino‑like UX are criticized as turning securities trading into de facto gambling.
  • Some recommend using regulated financial markets instead of sports betting, noting you can’t be barred simply for being good.

Addiction, Harm, and Regulation

  • One commenter cites DSM‑5’s reclassification of gambling disorder as an addiction, placing it alongside substance dependencies.
  • Disagreement over whether “everyone is addicted to something” vs proper clinical definitions of addiction.
  • Some see gambling as mostly harmless entertainment with a small minority of severe cases; others stress high rates of ruined lives and suicide, especially with easy phone access.
  • Debate over how “heavily regulated” gambling actually is; online sports betting is contrasted with prediction markets regulated more like financial firms, which often lack responsible‑gaming controls.

Moral, Religious, and Social Perspectives

  • Several view the industry as predatory, “legalized crime,” or net negative to society; comparisons are made to fast food and alcohol.
  • Extended subthread on the “Christian right”: claims they underreact to gambling compared to other moral issues; counterclaims emphasize church charity work and finite attention.
  • Others argue you don’t need religion to oppose gambling; purely humanist arguments about exploitation and externalities suffice.

Economic Context and “Financial Nihilism”

  • Some link rising gambling to a shrinking middle class: younger people feel normal life goals (like homeownership) are unattainable without a “lottery win.”
  • This feeds a “financial nihilism” mindset: if you’ll never afford a house by saving, you might as well gamble.
  • Critics push back, arguing significant gambling spend is itself a major reason some can’t accumulate savings, and suggest moving to cheaper areas instead of gambling for a miracle.
  • Others note that beyond the dream of winning, gambling can express self‑harm or depressive tendencies.

Prediction Markets and Crypto Platforms

  • Polymarket/Kalshi are described as “insider trading festivals” where knowledgeable players can profit from mispriced outcomes (“insane thing does not happen” bets).
  • Some see them as closer to fair exchanges that welcome informed traders; others emphasize platform fees, low liquidity, ambiguous resolution criteria, and the same zero‑sum dynamics.
  • One commenter claims decentralized venues “even the playing field”; another replies that the core problem is overspending, not fairness of odds.

Pharmacological Angle (Ozempic/GLP‑1s)

  • Brief side thread on whether GLP‑1 drugs like Ozempic reduce gambling or other compulsive behaviors.
  • Some say yes broadly; another notes mixed reports and emphasizes that while there may be an effect across behaviors (gambling, shopping, nail‑biting), more data is needed and the magnitude is unclear.

Publishing your work increases your luck

Positive experiences with publishing and “luck”

  • Many commenters report concrete benefits from blogging and open source: job offers, referrals, consulting work, and easier interviews because their public work acts as a “pre-screen.”
  • Publishing is framed as increasing the “surface area” for serendipity: more chances for others to find you, especially if you explain complex topics clearly and consistently over years.
  • Several note that results are slow and uneven: most posts disappear into the void, a few unexpectedly “hit,” and persistence over 3–5+ years matters more than any single piece.

Skepticism: corporate incentives, AI training, and free labor

  • A strong thread criticizes the article’s origin on GitHub as self-serving: encouraging more public code and writing is seen as feeding unpaid training data into LLMs and megacorps.
  • Some argue the message ignores how corporations extract value from OSS without giving back, which has already pushed maintainers toward more restrictive licenses or burnout.
  • Others accept this asymmetry as the “rules on the field” and focus on maximizing personal gain (social capital, jobs) despite LLM scraping.

Open source “success” as a burden

  • Multiple maintainers explicitly hope their projects never “take off” because popularity brings unpaid support, entitlement, and long-term maintenance pressure.
  • There’s discussion of:
    • Being harassed to fix old code you no longer understand.
    • Ossifying “ownership” around whoever last touched a messy component.
    • Difficulty balancing a demanding job with serious OSS maintenance.
  • Proposals include clearer maintenance-status badges, normalizing forks, self-hosted git, and deliberately releasing code as archives with no implied support.

Toxic feedback, platforms, and mental health

  • Several recount early experiences where a single kind comment outweighed multiple cruel ones and kept them publishing.
  • Others left Reddit/Twitter due to hostility and low-quality engagement, and now avoid comments entirely or publish where blocking/banning is easy.
  • Some advocate blogging without comments, or interacting only in smaller, vetted communities.

Broader reflections on luck, content, and barriers

  • Luck is seen as probability, not guarantee: publishing raises odds but doesn’t ensure payoff; many never recoup time invested.
  • Attention fragmentation and “content slop” make it harder to be heard, but also increase the value of clear, grounded work.
  • Regional laws (e.g., mandatory personal imprints) can make publishing risky by forcing doxxing.

Exe.dev

What exe.dev Is Supposed to Be

  • SSH-first subscription service that gives you Linux VMs with persistent disks, sudo, and no per-VM marginal cost.
  • Multiple VMs share a fixed pool of CPU/RAM per account (e.g., 2 CPUs / 8GB across up to 25 VMs on the individual plan).
  • Intended for quick experiments that can seamlessly become long-lived, internet-facing services.

UX and Developer Experience

  • Many users praise the “ssh exe.dev → you’re in” flow as unusually smooth and “magical,” especially the built‑in coding agent (Shelley) with screenshot support and a simple web UI.
  • Ability to instantly share HTTP services via managed TLS and link-based access is seen as a major convenience for demos and “perfect software for an audience of one.”
  • Some confusion coming from the initial shell being an exe.dev control REPL, not a VM shell; real shell requires connecting to the specific VM.

Architecture and Technical Details

  • Backed by KVM VMs using a crosvm‑derived VMM; earlier docs mentioning Kata/Cloud Hypervisor are acknowledged as outdated.
  • VMs can do “real VM things” like TUN devices; no custom kernels yet.
  • No per‑VM public IPv4; HTTP is proxied via exe.xyz with optional public exposure and CNAME support. Public IPs and IPv6 are planned but nontrivial.
  • SSH routing to vmname.exe.xyz is done via an SSH multiplexing layer; commenters infer sshpiper-style machinery.

Auth, Sharing, and Security Concerns

  • First SSH with any key prompts for email verification; that key becomes your identity.
  • HTTP access can be: fully public, email‑gated, or via share links that require registration; links don’t auto‑revoke existing users.
  • Some worry about it being a “honeypot” tying SSH keys to identities; others note you can use dedicated keys and that it’s a normal paid service model.

Pricing, Value, and Comparisons

  • Confusion over whether resource limits are per VM or per account; clarified as per account, shared by all VMs.
  • Some see $20/month as expensive versus Hetzner/OVH/DigitalOcean-style VPSes (more disk, often unmetered bandwidth); others think the UX and integrated agent/HTTPS/auth justify it.
  • Requests for cheaper, smaller individual tiers and/or usage-based pricing; 100GB bandwidth/month is viewed as tight for public sites.

Website, Docs, and Onboarding Feedback

  • Strong complaints that the landing page is cryptic (“ssh exe.dev” plus faint text, poor contrast), with pricing/docs buried and mobile UX buggy for some.
  • Others like the minimalism and think “ssh exe.dev” is self‑explanatory for the target audience.
  • Docs and blog are incomplete and in some places inconsistent with current implementation; founders say the launch was earlier than planned and docs are being updated.

Reliability, Limits, and Future Work

  • Persistence: disks are replicated to a disk cluster; exact durability model and frequency of replication still being tuned and not fully documented.
  • Occasional early network issues observed (e.g., DNS/Go module timeouts), reportedly fixed.
  • Feature roadmap includes public IPv4, IPv6, better cloning/base images, more docs, and posts detailing the SSH proxying and VM internals.

Always bet on text (2014)

Text vs. Audio/Video for Information

  • Many commenters strongly prefer text for learning and reference: faster skimming, higher information density, easier revisiting, and better fit with personal study habits (e.g., reading on commutes).
  • Audio (podcasts) is often described as poor for serious information transfer but good for entertainment or use when reading isn’t possible (driving, walking).
  • Video is praised for concrete, spatial tasks (car repair, hidden fasteners, climbing, CAD demos, cooking) where visual intuition matters.

Text Maximalism, Tools, and Plain Formats

  • Several participants embrace “text maximalism”: plain text as the natural interface between humans and machines, easy to search, version, and transform.
  • UNIX-style tooling, Emacs/Vim/shell, markdown, and text-based config are cited as powerful, durable, and LLM‑friendly.
  • Concerns are raised about proprietary or GUI‑only tools becoming opaque “walled gardens” for both humans and AI.

Text vs. Binary Protocols (JSON, base64, Protobuf, etc.)

  • One camp argues that text‑first protocols (JSON, base64-encoded blobs) offer transparency, flexibility, and easier debugging; bandwidth/CPU savings from binary are often negligible in typical business software.
  • The other side stresses:
    • CPU and memory costs on constrained devices (phones, large‑scale systems).
    • Streaming and performance issues with text+base64.
    • Value of schemas, strong compatibility, and efficiency in binary formats like Protobuf.
  • There’s debate over whether readability truly dominates once tooling is in place, and whether “30% more bandwidth” is trivial or huge.
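
The disputed “~30% more bandwidth” figure comes directly from base64's encoding ratio, which is easy to verify:

```python
import base64

# base64 maps every 3 raw bytes to 4 ASCII characters, so encoded output
# is ~4/3 the size of the input -- the "~33% more" both camps argue over
# (before any transport compression, which narrows the gap in practice).
raw = bytes(range(256)) * 4          # 1024 bytes of arbitrary binary data
encoded = base64.b64encode(raw)
overhead = len(encoded) / len(raw)   # ~1.336 here; padding rounds up slightly
```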

Limits of Text & Need for Other Modalities

  • Many highlight domains where text is weak: motor skills (riding a bike, throwing, rock climbing), physical intuition (untangling cords), emotional impact, taste/smell, and rich spatial understanding.
  • Graphs, visualizations, CAD, and staff notation (sheet music) are given as irreducibly powerful non‑text representations; text can’t fully substitute for them, though it can describe or generate them.
  • Some reframe the issue as “always bet on symbolics” or “always bet on language” rather than text alone.

Durability, History, and Accessibility

  • Text (especially Unicode/plain formats) is seen as highly archival and portable, critical in fields like endangered language documentation.
  • Others counter that images/PDFs have also proven robust in practice.
  • Several note that speech, visual art, and possibly genetic code predate writing, challenging claims that text is the “oldest” medium.
  • Literacy limits and the rise of short-form video raise questions about how far a text‑centric worldview can reach the broader population.

Toys with the highest play-time and lowest clean-up-time

Overall take on the article and metrics

  • Many readers agree “play-time vs clean-up-time” is a useful lens, especially for exhausted parents.
  • Some find the article too short and narrow (mostly magnet tiles), suspecting light affiliate marketing.
  • Others note the scoring ignores developmental value: by the criteria used, phones/tablets would “win,” which feels wrong to many.

Magnetic tiles: big winner, with caveats

  • Widespread consensus that Magna-Tiles (and similar) are exceptional: years of use across ages, high replayability, very fast cleanup, fun even for adults.
  • Knockoff brands are mixed: some compatible and fine, others weaker or dimensionally off, causing collapsed builds and frustration.
  • Safety concerns: cheap magnets breaking out and being swallowed are flagged as genuinely dangerous.
  • Oversized “fort-building” tiles get enthusiasm but less direct experience; a few report toddlers loving them in libraries.

Lego and other construction toys

  • Strong nostalgia for older Lego: fewer ultra-tiny/specialized pieces, more general bricks, more “play” and less display.
  • Complaints:
    • Modern sets too intricate to disassemble; kids treat them as models.
    • High perceived cost, though some argue inflation-adjusted price per brick has dropped.
  • Duplo is praised as more age-appropriate and easier to clean than Lego.
  • Alternatives: K’NEX, wooden blocks, cardboard bricks, Tubelox/Quadro-style tube systems, Matador, Kapla planks, marble runs, Snap Circuits.

Simple, open-ended classics

  • Plain wooden blocks are repeatedly singled out as near-perfect: durable, versatile, multigenerational, easy to toss back in a bag.
  • Balls, cardboard boxes, paper+crayons, plasticine/Play-Doh, train tracks, wire bead mazes, and matchbox cars all get strong endorsements.

Screens and “is an iPad a toy?”

  • Multiple commenters note that by the article’s metrics, phones/tablets/consoles are clearly top-ranked (huge playtime, zero cleanup).
  • This is seen as either a bug in the metric or a reason to exclude screens from “toys”; some call tablet‑as‑toy outright harmful, others say it depends on constrained, offline use.

Cleanup, “doneness,” and parenting angles

  • Some ask “why not just teach kids to clean up?”; responses emphasize age, months of training, and toy design that makes fast cleanup easier.
  • The idea of game “doneness” matters: finite games (board games) are easier to put away than open-ended builds that kids claim are “still in use.”

“Opposite list” for uncles

  • For maximum chaos (high mess, low value), suggestions include glitter, kinetic sand, slime, noisy instruments (drums, vuvuzelas), complex gluey craft kits, and small-piece games like Perfection.

T-Ruby is Ruby with syntax for types

Motivations for Types in Ruby and Other Dynamic Languages

  • Large, long-lived Ruby/Python/Rails codebases become hard to reason about; types help document function boundaries, clarify “what this argument/return value actually is,” and reduce “pinballing” through call sites.
  • Progressive typing lets teams add safety without rewriting into another language or retraining on a new stack.
  • Types improve editor/IDE/LSP features (autocompletion, navigation, inline docs) and help static analysis/LLMs understand APIs.
  • For some, static typing in other languages (Go, C#, TypeScript) proved to speed teams up on larger systems, especially by eliminating whole classes of tests for “what if this is the wrong type?”
  • Type info can also benefit Ruby JITs (YJIT/ZJIT) by enabling better specialization and optimization.

Critiques of Gradual Typing and Typed Ruby

  • Several argue gradual typing “combines the worst of both worlds”: added complexity and verbosity, but without the performance/lowering-of-abstraction benefits of a natively static language.
  • Dynamic Ruby with good tests, naming, and REPL use is seen as sufficient by some; type-related bugs are described as rare compared with the overhead and friction of annotations.
  • Aesthetic objections: inline annotations are viewed as making Ruby “objectively uglier,” undermining one of the language’s main appeals.
  • There’s concern that complex type definitions (unions, nested structures) slow development and encourage rewriting code to satisfy the checker rather than the domain.
  • Some see the push for types as cultural carryover from statically-typed backgrounds rather than an intrinsic Ruby need.

T-Ruby Itself: Design, Issues, and Comparisons

  • Positively received for:
    • Translating to standard Ruby plus RBS, integrating with existing tooling.
    • Clear documentation and a more unified, syntax-level approach compared with Sorbet/RBS separate files and DSLs.
  • Criticisms/questions:
    • Handling of keyword arguments is confusing/buggy in the playground; docs and behavior appear inconsistent.
    • Limited or unclear support for metaprogramming patterns (define_method, dynamic instance variables).
    • Playground currently accepts syntactically invalid input, suggesting tooling immaturity.
  • Compared frequently with Crystal (“Ruby-like with types”), Sorbet, RBS-inline, and low_type; consensus is that Crystal is not a drop-in “Ruby with types” but a different language with similar syntax.

NYC phone ban reveals some students can't read clocks

Prevalence and role of analog clocks

  • Several comments note digital clocks have been common since before smartphones, yet analog wall clocks and watches remain widespread in homes, schools, public places, and as luxury/status items.
  • Some see analog clocks as “objectively inferior” and expect them to disappear; others argue they’re still common enough that reading them is a practical skill.

Education system, testing, and missing basics

  • Teachers reportedly focus heavily on test content because of truancy and accountability pressures; one commenter cites RAND estimates of very high unexplained absences.
  • There’s debate over whether schools should teach every basic life skill versus parents handling some (e.g., clock reading, tying shoes).
  • Some see this as another symptom of “teaching to the test” and warped incentives tied to funding and metrics.

Skill decay vs never learning

  • Multiple commenters stress that many NYC students were taught analog clocks in early grades but didn’t use the skill for years, so it atrophied.
  • Others doubt that such a simple concept can be truly forgotten and blame poor instruction or lack of reinforcement.

Is analog clock reading worth teaching?

  • One side: analog reading is near-obsolete, learnable in under an hour if ever needed, and time is better spent on more relevant topics.
  • Other side: analog faces are still common; reading them exercises spatial reasoning, fractions, approximation, and has broader educational value.

Analog vs digital interfaces

  • Analog is praised for at‑a‑glance comprehension and conveying trends/rate of change (similar to aircraft instruments and “tape” displays).
  • Critics counter that digital is clearer, needs no special skill, and analog’s supposed speed is overstated.

Obsolete and niche skills

  • Analog clocks are compared to rotary dials, abaci, cursive, Morse code, shorthand, and other fading notations.
  • Some argue we can’t (and shouldn’t) preserve every old system; others lament the quiet loss of information-transfer methods and symbolic systems.

Curiosity and culture

  • There’s disagreement over whether failing to self‑learn clock reading reflects a lack of curiosity or just rational prioritization amid information overload.
  • International comments (India, Europe, Canada, Chile) suggest analog clocks and clock-reading instruction are still common elsewhere, though practical use is declining.