Hacker News, Distilled

AI-powered summaries for selected HN discussions.


Bosch's brake-by-wire system may be the next big leap in automotive tech

Safety and Reliability Concerns

  • Many commenters reject brake‑by‑wire outright if there’s no mechanical backup. Brakes are seen as the one system that must still work with the engine off, battery dead, or electronics failed.
  • People worry about “cliff” failure modes: chips, wiring, or CAN bus faults can fail instantly and completely, unlike hydraulic systems that often degrade gradually and remain partly functional.
  • Edge cases discussed: alternator failure draining the battery while driving, shorts near the battery, crashes that cut power, or total electrical failure while still moving. In those scenarios, a purely electronic brake is viewed as unacceptable unless redundancy is extremely robust.

Mechanical vs Electronic Tradeoffs

  • Supporters argue modern brakes are already complex: ABS, stability control, auto‑braking, radar/lane systems all add valves, pumps, sensors, and software on top of the “simple” hydraulic core. A clean-sheet brake‑by‑wire design might actually reduce overall complexity and enable better automated control.
  • Critics counter that existing electronics assist but don’t break the direct hydraulic path from pedal to caliper; you can still stop if electronics die. Removing that last mechanical link is seen as a step too far.

Weight, Space, and Cost

  • Skepticism that deleting a few millimeters of brake line meaningfully reduces weight or volume, especially on heavy EVs.
  • Others note that even small per‑vehicle savings multiplied across millions of cars matter, and that assembly, plumbing, and variant reduction (one module instead of many left/right-specific parts) lower factory cost and complexity.
  • Several comments assert the real driver is cost and ease of assembly, not consumer benefit.

Brake Feel and Driving Experience

  • Loss of tactile feedback is a major complaint. With hydraulics, drivers can feel tire grip through the pedal; that’s both confidence- and safety-relevant.
  • Reported experiences with existing brake‑by‑wire systems describe vague, “tennis ball” pedal feel.
  • Some suggest feedback can be emulated with sensors and actuators, even personalized per driver, but others expect this to be cost‑cut over time.

Right-to-Repair, Lock-In, and Service

  • Strong suspicion that integrated electronic brake modules will be dealer‑only, expensive, and model‑specific, worsening long-term repairability and parts availability.
  • Bosch and German OEMs are criticized for proprietary tools, anti‑repair policies, and high service pricing; commenters expect similar behavior here.
  • More electronics also shifts failures from cheap, generic parts to costly modules and harder diagnostics.

Regulation, Precedent, and Security

  • Some believe brake‑by‑wire is currently illegal or tightly restricted in certain jurisdictions and would require rule changes.
  • Past systems like Mercedes’ Sensotronic Brake Control are cited as cautionary tales: complex, failure‑prone, and ultimately abandoned despite backups.
  • Another thread worries about cybersecurity: as connectivity and by‑wire controls spread, a compromise of the telematics unit plus CAN bus access could permit remote interference with brakes.

Rust Is Eating JavaScript (2023)

Article scope, title, and dating

  • Many note the piece is really about Rust in the JavaScript toolchain (bundlers, linters, formatters), not Rust replacing JS in app code. Suggested title: “Rust is Eating JavaScript Tooling.”
  • Some think the article is misleading because it omits its original 2021 date and discusses now-archived projects like Rome as if they were current; others point out there is a short 2023 update.
  • A few tools referenced as “dead” now have successors (e.g., Biome), and new Rust tooling companies have since appeared, so the ecosystem is in flux.

Rust’s ecosystem and funding

  • Concern that Rust tooling sees many ambitious projects abandoned due to lack of deep, sustained funding compared to C/C++ (hardware vendors) or Java/JS (big tech frameworks).
  • Others counter that major companies have funded Rust and interop work, but agree we haven’t yet seen Rust equivalents of React‑scale ecosystem projects.

WebAssembly’s current limitations

  • Multiple comments stress that WASM is not yet a first‑class browser language because:
    • It cannot directly access the DOM; everything must go through JS glue, with significant marshalling overhead.
    • The standard ABI is low‑level (integers, pointers into linear memory only), making rich bindings hard.
  • Work on the WebAssembly Component Model and WIT is cited as a future path to safer, higher‑level host–WASM interaction, but it’s still “in the works.”

Memory management, GC, and performance

  • Long subthread on “garbage” in Rust vs GC’d languages:
    • One side: Rust’s RAII and deterministic frees mean no runtime GC; heap allocations aren’t “garbage” in the technical sense.
    • Others argue “automatic memory management” (including RAII and reference counting) shares many downsides with GC, and that in some workloads GC can be competitive or faster.
  • For short‑lived tools like Babel/Prettier, several argue GC vs non‑GC is mostly irrelevant; speed and safety (null‑pointer, type) matter more.

Rust vs Go vs JavaScript/Python for tooling and CLIs

  • Broad agreement that compiled languages dramatically outperform JS tooling; esbuild (Go) is often cited as an example.
  • Disagreement on Rust vs Go:
    • Some prefer Rust for safety and maximum performance; others find Rust’s borrow checker overkill for CLIs and favor Go’s simplicity and faster development.
    • A view emerges that Rust shines where low overhead, startup time, and static binaries are crucial (developer tools, search utilities, etc.).
  • Java is debated:
    • Pro‑Java side highlights decades‑old strengths: shared types across client/server, hot reload, rich tooling and libraries.
    • Critics cite verbosity, startup/memory overhead, ecosystem “enterprise” complexity, and build tools (Maven/Gradle) as reasons many moved to Rails, Node, Go, .NET, etc.

Rust hype, fashion, and backlash

  • One camp says Rust is being adopted because it’s appropriate for systems‑style tasks and WASM, not just “cool kids” fashion.
  • Another compares Rust hype to past C++/Rails/Scala/Go cycles and complains about “X but in Rust” projects where GC languages would suffice.
  • A counter‑camp pushes back on dismissive attitudes toward experimentation, seeing these projects as healthy exploration even when not strictly necessary.

Impact beyond JavaScript: Python, WASM, and “fast cores”

  • Discussion notes Rust “eating” Python too via tooling like Ruff and uv; concern that strong type tools (mypy/pyright, future alternatives) are under‑adopted in industry.
  • Several foresee a future where JS/TS (like Python) remains the ergonomic “glue,” while performance‑critical library internals are written in Rust and compiled to WASM (e.g., DOM diffing engines, crypto libs).

Type sharing, API schemas, and DX

  • Some praise full‑stack TypeScript: shared types between React frontends and TS backends make API changes safer and faster.
  • Others argue you can get similar DX cross‑language via OpenAPI/Swagger or GraphQL schemas, with generators producing typed TS clients from Rust, C#, or other backends.
  • Multiple real‑world setups are described that generate TS clients from C# or Rust servers using OpenAPI and various tooling.

JavaScript on the backend and job market perceptions

  • Strong anti‑Node/JS‑backend sentiment appears: critics call JS poorly designed and ill‑suited for mission‑critical backends, preferring Go, Rust, C#, Python, etc.
  • Others defend TS backends as “more than fine,” noting expressive type systems and strong ecosystems.
  • Perceptions of job trends diverge by region: some report fewer Node backend roles and more Go/Rust/C#, others see Node growing alongside Java.

Meta: thread quality and language tribalism

  • A few participants lament that the thread is dominated by subjective language-bashing and recurring “X vs Y” flamewars with little new evidence, but acknowledge that these debates resurface whenever Rust and JS intersect.

“A calculator app? Anyone could make that”

Floating point, real numbers, and computability

  • Much discussion revolves around IEEE‑754’s limits: many real numbers are not representable; operations like 1e100 + 1 - 1e100 or e^-1000 illustrate catastrophic cancellation and underflow.
  • Commenters distinguish:
    • Countably many computable numbers vs uncountably many reals; “almost all” reals are uncomputable.
    • Proof sketches via Gödel numbering and Cantor’s diagonal argument; Chaitin’s constant appears as a canonical uncomputable number.
  • Some point out that calculators only ever deal with computable, finitely describable numbers (button sequences), but you still need to manage precision and equality (e.g., knowing something is exactly 40 vs seeing 40.000… with unknown tail).
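The failure modes cited in the thread are easy to reproduce; a minimal Python sketch (any IEEE-754 double-precision implementation behaves the same), contrasting floats with exact rational arithmetic:

```python
import math
from fractions import Fraction

# IEEE-754 doubles carry ~15-16 significant decimal digits, so the +1
# is absorbed entirely before the subtraction ever happens.
print(1e100 + 1 - 1e100)   # 0.0, not the correct 1.0

# e^-1000 underflows to zero even though the true value is positive.
print(math.exp(-1000))     # 0.0 (underflow)

# Exact rational arithmetic sidesteps both problems for +, -, *, /:
big = Fraction(10) ** 100
print(big + 1 - big)       # 1, exactly
```

Rationals only cover the four basic operations, which is why the thread turns to recursive reals and CAS techniques for sqrt, sin, and friends.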

What users actually need from calculators

  • One camp argues everyday users don’t care about 10⁻⁴⁰ errors; they just want “napkin math” for budgets, DIY, homework, etc.
  • Others insist that even “seriously curious” users deserve correct symbolic behavior (e.g., sin(k·π) = 0, avoiding treating tiny nonzero values as zero) and that educational use cases justify exactness.
  • Several wish for calculators that explicitly track units and measurement uncertainty, propagating error rather than chasing more digits of π.
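What “propagating error” would mean in practice can be sketched with the standard quadrature rule for independent uncertainties (the function names here are illustrative, not from any commenter's tool):

```python
import math

def add(a, da, b, db):
    """Sum of two independent measurements: absolute uncertainties add in quadrature."""
    return a + b, math.sqrt(da**2 + db**2)

def mul(a, da, b, db):
    """Product: relative uncertainties add in quadrature."""
    val = a * b
    rel = math.sqrt((da / a) ** 2 + (db / b) ** 2)
    return val, abs(val) * rel

# A room measured as 2.0 m ± 0.1 m by 3.0 m ± 0.1 m:
area, darea = mul(2.0, 0.1, 3.0, 0.1)
print(f"area = {area:.2f} ± {darea:.2f} m^2")  # area = 6.00 ± 0.36 m^2
```

A calculator tracking this alongside units would tell the user how many displayed digits are actually meaningful, instead of printing ten confident-looking decimals.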

Implementation strategies: RRA, CAS, and alternatives

  • The Android work (recursive real arithmetic + rationals) is praised as a practical middle ground between plain floats and full CAS: exact where possible, approximate with proofs of correctness otherwise.
  • Some note similar ideas: continued fractions with Gosper-style exact arithmetic, constructive/recursive reals, interval arithmetic, and libraries like Calcium/Flint or crcalc.
  • Others question why not just embed an existing CAS, arguing that complexity and speed constraints for a phone calculator might be overestimated.

State of real-world calculator apps

  • Many report that popular physical and software calculators (TI, HP, Windows, Google search, various mobile apps) still fail tests like 10^100 + 1 - 10^100 or sqrt(10)*sqrt(10).
  • There’s extensive sharing of preferred tools: Qalculate!, bc, Maxima, RealCalc, HiPER Calc, TechniCalc, TI/HP emulators, Wolfram Alpha, etc., with debates over RPN vs algebraic and the importance of keypress‑optimized workflows.

Reactions to article style and site

  • Mixed response to the article’s “LinkedIn/broetry” tone and cat images: some find it fun and accessible; others find it distracting or “unserious.”
  • A small subthread complains about scroll behavior and UI elements on the hosting site, while the author explains they are using Obsidian defaults and tries to adjust them.

50 Years of Travel Tips

Traveling Light & Luggage Strategy

  • Strong consensus that lighter is better; many say 40–45L backpack + small personal item is enough for indefinite city travel.
  • Mixed business/hiking/formal trips are the hardest to keep carry-on only; shoes and dress wear are the main constraint.
  • Debate on wheeled vs non‑wheeled: backpacks give more freedom, but small checked bags can make boarding easier if you tolerate lost‑luggage risk.
  • General rule: one main bag plus at most one small extra; more than three items becomes unmanageable.

Planning, Packing & Checklists

  • Many advocate detailed checklists (prep, packing, last‑minute house checks) refined over multiple trips.
  • Others intentionally underpack, relying on buying missing items on arrival, but several push back that this wastes precious vacation time.
  • Packing cubes and small pre‑assembled “kits” (first aid, chargers, toiletries) are popular to reduce decision overhead.

Money, Phones & Digital Backup

  • Core minimum: passport + at least one credit/debit card; several add “some cash” and a backup card from a different bank.
  • Disagreement on “you can always buy a phone locally”: concerns around 2FA, password managers, eSIMs, and account recovery.
  • Suggested mitigations: hardware keys (Yubikey), printed recovery codes/secret keys, spare old phone, paper itinerary and key contacts.

Costs & Styles of Travel

  • Big gap between “$2,000/night luxury” and “$1,000/month backpacking”; some call the former “vacationeering,” not “travel.”
  • Multiple reports of transformative, ultra‑budget trips (India, Nepal, Latin America) versus more comfortable mid‑range month‑long trips (~$10k).

Safety, Risk & Transport

  • Strong disagreement with “trust everyone and smile” as primary safety advice, especially from women and big‑city residents; scams and theft are common.
  • Heavier caution for solo women: always have an exit, avoid isolated situations, don’t follow the “just smile” rule.
  • Moto‑taxis: some embrace them as essential in traffic‑choked cities; others refuse due to lack of helmets, poor road safety, and weak healthcare.

Food & Health

  • Many agree sickness doesn’t neatly correlate with “fancy vs street,” but: avoid raw/uncooked items and suspect water/ice.
  • Emphasis on hand hygiene and tiny first‑aid kits (painkillers, anti‑diarrheals, blister care) as high‑value, low‑weight items.

Interaction with Locals & “Main Character” Behavior

  • Gentle version endorsed: be curious, talk to drivers/locals, accept genuine invitations, join when asked.
  • Harsh criticism of tips like “visit your driver’s mother” or “crash a wedding”: seen as rude, exploitative, “white tourist in poor country” behavior that ignores the mother’s perspective and local norms.
  • Several note that advice clearly reflects a privileged white male experience; can be unsafe or impossible for women and non‑white travelers.

Tools, Apps & Logistics

  • Frequently mentioned: FlightAware/FR24, Google Maps (with offline maps), OSM-based apps, airline/hotel/taxi apps, Seat61, TripIt, Wikivoyage, local taxi apps (Grab, G7, FreeNow).
  • Airport lounges viewed as formerly great but increasingly overcrowded; some prefer spending that money on better experiences instead.

Other Perspectives

  • Some don’t enjoy travel at all and feel pressured by its status‑symbol role.
  • Travel with kids is acknowledged as a fundamentally different category requiring its own strategies.

670nm red light exposure improved aged mitochondrial function, colour vision

Evidence and study quality

  • Commenters note “thousands” of red/NIR (near‑infrared) photobiomodulation papers showing mitochondrial effects, but many describe the evidence as “large but mostly inconclusive.”
  • Complaints: weak or absent control groups, few well‑designed double‑blind human trials, unclear optimal wavelength, intensity, and dosage.
  • Some point out UV phototherapy has standardized wavelengths/dosages (e.g., 311 nm), whereas red/NIR work is more heterogeneous and commercially driven.
  • Several call for controls with other wavelengths to rule out diurnal or non‑specific effects.

Proposed mechanisms

  • Common mechanistic story: red/NIR photons interact with cytochrome c oxidase in mitochondria, displacing nitric oxide, restoring electron transport and ATP, reducing ROS, and releasing NO for vasodilation.
  • Others mention links to melatonin production in mitochondria and vision protection from blue/UV oxidative stress.
  • Some warn that interfering with oxidative‑stress regulation could have unknown long‑term downsides and that individual factors (genetics, nutrition, magnesium, etc.) may strongly modulate effects.

Anecdotes and perceived benefits

  • Users report mixed but often positive experiences: reduced joint pain, faster healing of sprains, better mood, improved sleep, less fatigue, and subjective cognitive benefits.
  • Specifics include: red/NIR helmets/panels for brain injury and TBI, red lamps for minor burns and cuts, red glasses for vision, red headlamp reading leading to rapid sleep onset.
  • Others say they notice no effect, or can’t distinguish from natural healing or placebo.

Hardware, dosing, and DIY

  • Strong interest in building cheap systems from commodity LEDs (Digikey, Mouser, AliExpress) versus expensive branded panels and masks.
  • Key parameters discussed: wavelength (around 650–850 nm), irradiance (~mW/cm²), duty cycle/PWM to balance penetration vs heating, beam shaping (lenses), and distance from tissue.
  • Several worry that consumer specs (wavelength, power) are often inaccurate or outright fraudulent; spectrometers and datasheets are suggested to verify.
  • Debate over how deeply NIR actually penetrates skin and skull; multi‑watt systems can deliver measurable energy several cm deep, but low‑power LEDs may not.
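The dosing parameters discussed above combine in simple ways; a hedged sketch of the usual fluence arithmetic (the numbers are illustrative, not recommendations):

```python
# Fluence (dose) in J/cm^2 = irradiance (W/cm^2) x exposure time (s).

def fluence_j_per_cm2(irradiance_mw_cm2: float, seconds: float) -> float:
    """Convert irradiance in mW/cm^2 and time in seconds to dose in J/cm^2."""
    return irradiance_mw_cm2 / 1000.0 * seconds

# Example: a panel delivering 25 mW/cm^2 at the skin, for 3 minutes:
print(fluence_j_per_cm2(25, 3 * 60))        # 4.5 J/cm^2

# PWM at a 50% duty cycle halves the average irradiance, and so halves
# the dose for the same wall-clock time:
print(fluence_j_per_cm2(25 * 0.5, 3 * 60))  # 2.25 J/cm^2
```

This is also why inaccurate vendor specs matter so much: an irradiance figure off by 5x shifts the delivered dose by the same factor.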

Sunlight, fire, and everyday light

  • Many ask whether simple sunlight exposure (especially morning, eyes closed) is “good enough,” given the strong red/NIR component.
  • Others note limitations of sun: UV damage, difficulty targeting specific bands, no control over intensity/PWM, latitude/season constraints.
  • Fire and IR heaters are proposed analogs; some report pain relief, others say the spectrum/heat makes dosing hard and may mainly just warm tissue.
  • Screen red‑shifting (f.lux, SunsetScreen) changes visible balance but does not increase total red/NIR power meaningfully.

Safety, skepticism, and commercialization

  • Concerns raised about eye safety (avoid staring into intense IR without protection), localized overheating, and theoretical risk of stimulating existing cancers.
  • Several emphasize “more is not better” and that over‑irradiance mainly heats superficial tissue.
  • Commenters highlight rampant grift: overpriced panels, $8k “red light guns” with disease dials, and anti‑aging masks sold with dubious claims.
  • Overall tone: cautious interest—red/NIR is promising and biologically plausible, but current commercial hype far outpaces rigorous, standardized clinical evidence.

US government struggles to rehire nuclear safety staff it laid off days ago

Twitter-style “scream test” in a nuclear context

  • Many see the layoffs/rehire scramble as Musk’s Twitter playbook reused: cut first, see what breaks, then add back “essential” people.
  • Supporters argue this is a fast way to expose bloat and identify critical staff, akin to zero‑based budgeting or a “scream test.”
  • Critics stress that nuclear safety is not a web startup: failure modes are delayed, opaque, and potentially catastrophic; you can’t simply “roll back” a meltdown or deterrent failure.

Human capital, incentives, and institutional damage

  • Commenters emphasize that federal nuclear staff are scarce specialists who accept lower pay for mission, stability, and pensions.
  • “Fire and rehire” is seen as destroying trust: people now know loyalty and performance don’t protect them; many will exit or demand higher pay elsewhere.
  • Government pay scales and rigid processes make quick counter‑offers difficult; some think the best people (those able to move fastest) are least likely to return.

DOGE, budgets, and regulation

  • One camp believes the agenda is about shrinking an overgrown, inefficient state and cutting long‑run spending, even at the cost of “temporary hardship.”
  • Others argue cost savings are tiny versus total spending and that the real aim is:
    • defunding regulators (CFPB, OSHA, USAID, etc.),
    • freeing room for corporate tax cuts,
    • weakening oversight before launching new private financial and tech ventures.
  • There is discussion of impoundment (refusing to spend appropriated funds) as a constitutional test likely headed to the courts.

Nuclear and wider safety risks

  • People familiar with NNSA describe it as central to maintaining an aging arsenal, nuclear propulsion, and emergency response.
  • Firing such staff en masse is seen as:
    • weakening deterrence and safety culture,
    • creating potential recruitment opportunities for hostile states,
    • imposing multi‑year setbacks that can’t be fixed by rehiring for a few days.

Authoritarian drift and rule of law

  • A widely cited Trump statement that “saving the country” puts one beyond the law is viewed as explicit justification for ignoring legal constraints.
  • Combined with SCOTUS immunity rulings, interference in prosecutions, and mass politicized firings, many see a shift toward “rule by law” (laws as tools of the leader) rather than rule of law.

Global and geopolitical fallout

  • Non‑US commenters expect accelerated efforts, especially in Europe, to decouple from US tech, payments, defense guarantees, and the dollar.
  • The episode is framed as part of a broader US self‑inflicted decline, beneficial to rivals like Russia and China, and likely irreversible even under future administrations.

Softmax forever, or why I like softmax

Critique of the post’s treatment of the “Distance Logits” paper

  • Some object that dismissing a paper after noticing a single hyperparameter setting is understandable as a decision not to engage, but not a license for sloppy critique.
  • A key technical objection: the post assumes the distances |a_k| ≈ 0 initially, but in the referenced paper the a_k are distances between vectors and unlikely to be near zero; thus the gradient issues near zero may be overstated.

Naming and mathematical framing of softmax

  • Several argue that log-sum-exp is the true “soft maximum” and should have been called softmax; the current “softmax” is really the gradient of log-sum-exp and might better be called “softargmax” or “grad softmax.”
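The relationship can be checked numerically: log-sum-exp is a smooth upper bound on max, and its gradient is exactly the softmax vector. A small sketch verifying the gradient claim by finite differences:

```python
import math

def logsumexp(xs):
    m = max(xs)  # subtract the max for numerical stability
    return m + math.log(sum(math.exp(x - m) for x in xs))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

xs = [1.0, 2.0, 5.0]
# LSE is a "soft maximum": slightly above the true max of 5.
print(logsumexp(xs))            # ~5.066

# dLSE/dx_k equals softmax(x)_k; check the k=0 component numerically.
h = 1e-6
grad0 = (logsumexp([1.0 + h, 2.0, 5.0]) - logsumexp(xs)) / h
print(grad0, softmax(xs)[0])    # both ~0.0171
```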

Statistical mechanics, maximum entropy, and softmax

  • One line of discussion defends softmax via its Boltzmann-distribution roots: exponentials arise from counting microstates and maximizing entropy under constraints.
  • Others note that in ML, the interpretation of “energy,” fixed average energy, and temperature is often loosely applied or ignored, so the physical analogy is more motivational than fundamental.
  • There’s skepticism about the maximum entropy principle itself and whether it is uniquely justified or “natural.”

Alternative parameterizations of categorical distributions

  • Commenters stress that softmax is just one parametrization; other mappings from reals to categorical distributions can work.
  • A Bayesian/Dirichlet example is given: “add-one” (or more generally add-α) updating yields normalized probabilities with all outcomes nonzero, differing qualitatively from softmax’s tendency to push probabilities close to 0 or 1.
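The qualitative difference is easy to make concrete; a sketch comparing add-one (Laplace) updating with softmax applied to the same raw counts:

```python
import math

counts = [8, 1, 0]  # observed outcomes for three categories

# Dirichlet "add-one": every outcome keeps nonzero probability, and the
# estimate shrinks smoothly toward uniform when data is scarce.
total = sum(c + 1 for c in counts)
add_one = [(c + 1) / total for c in counts]
print(add_one)  # [0.75, 0.1667, 0.0833] -- the unseen category keeps mass

# Softmax over the same numbers treats them as logits, so additive gaps
# become exponential ratios: mass concentrates near 0 or 1.
exps = [math.exp(c) for c in counts]
s = sum(exps)
print([e / s for e in exps])  # ~[0.9988, 0.0009, 0.0003]
```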

Reception of the explanation and usefulness of softmax

  • Some find the post’s explanation intuitive and helpful for understanding why naive mappings from logits to probabilities are hard for networks to learn.
  • Others feel the author overcomplicates things or is trying to show off.
  • Practitioners note softmax’s practical utility for turning arbitrary real-valued scores (including negatives) into a clean probability distribution, and point to related work on classifiers as energy-based models.

Strong reaction to all-lowercase style and writing norms

  • A large subthread criticizes the author’s refusal to capitalize as distracting, “cognitively disruptive,” and unprofessional for a technical essay.
  • Defenders frame lowercase as a generational/medium-specific norm originating in IM/SMS/Twitter, or as a legitimate stylistic choice on a personal blog.
  • Broader debate ensues about language evolution, reader vs writer optimization, autocapitalization on phones, and whether lowercase in serious writing is a passing fad or the future of informal text.

Jellyfin: The Free Software Media System

Emby/Plex/Kodi Comparison & Licensing

  • Jellyfin is a fork of Emby from before Emby went closed‑source; it keeps all core features free (including hardware transcoding) with no paid tiers or ads.
  • Compared to Plex, Jellyfin is praised for avoiding “enshittification” (no injected streaming content, no cloud login requirement) but widely viewed as less polished in UX and client support.
  • Several users say Jellyfin now matches or surpasses Plex for their needs; others feel it’s where Plex was ~5 years ago and still not worth switching if you already have Plex Pass.
  • Kodi is seen as great for purely local playback; Jellyfin complements it by adding multi-user, remote access, transcoding, and watch history.

Clients, UI & UX Quality

  • Big spread of experiences: some find the web UI and Android/TV apps fast and smooth; others report laggy scrolling, visible placeholder tiles, and general “web page” feel even on fast LANs.
  • Apple TV / iOS: native Jellyfin clients (notably Swiftfin) are criticized as unstable or janky; many people solve this by using Infuse or other third‑party players against Jellyfin.
  • Console and TV platforms: lack of official PS5 app and patchy support for Tizen/Samsung, LG, etc., are recurring pain points. Some rely on DLNA or browser workarounds.

Performance, Transcoding & Codec Support

  • Jellyfin uses ffmpeg (with optional hardware accel) and supports AV1 “if the device does,” with server‑side transcoding as needed.
  • Some users get rock‑solid 4K HDR/Dolby Vision + Atmos/DTS passthrough on Shield or similar; others report stutter, audio desync, or fallback to stereo where Plex works fine.
  • There is debate over auto HDR→SDR tone mapping: some value it highly for mixed-display setups; others argue SDR masters are usually preferable.

Library Management & Features

  • Strengths: on‑the‑fly bitrate downgrades for poor networks, multi-user logins, SyncPlay/watch‑together, rich metadata plugins, trailers (if enabled), trick‑play thumbnails, live TV via plugins (ErsatzTV/Tunarr, HDHomeRun, etc.).
  • Weaknesses:
    • Strict expectations for folder/filename layouts, especially for TV, music, and disc structures; some users give up after failed attempts and find Plex more forgiving.
    • Movies/TV separation annoys people who want unified franchises (e.g., Star Trek across shows + films).
    • No strong Netflix‑style recommendation engine; some don’t miss it, others do.
    • Blu‑ray folder (BDMV) support via concatenation/transcode is seen as fragile and frequently broken.

Scale, Reliability & Large Libraries

  • One commenter claims Jellyfin “simply does not work” beyond ~1000 items; multiple others contradict this, reporting smooth operation with multi‑TB libraries of tens of thousands of files, given decent hardware and an SSD‑backed database.
  • Music libraries: workable but weaker than dedicated music servers; issues include slow rescans, poor CUE support, and simplistic queue UI.

Remote Access, Networking & Security

  • Common patterns: Jellyfin behind nginx/Caddy/Traefik/SWAG, with local DNS for pretty hostnames and HTTPS; or fully private via Tailscale/WireGuard/ZeroTier.
  • There’s concern about exposing Jellyfin directly to the internet; many recommend VPN-based access or at least reverse proxies with tight rules.
  • Cloudflare Tunnel is used by some, though ToS and “no streaming” concerns are raised; others prefer Tailscale for app compatibility.

Ecosystem & Use Cases

  • Heavily used with Radarr/Sonarr/*arr, Filebot, MakeMKV, HandBrake, yt‑dlp, etc.
  • Use cases include ripped DVDs/BDs, home videos, OTA live TV+DVR, curated YouTube for kids, and sharing with friends over VPN.
  • Some users prefer the simplicity of SMB/NFS + VLC, but others emphasize Jellyfin’s advantage for guests, watch tracking, transcoding, and multi-device convenience.

Community, FOSS Expectations & Criticism

  • Strong appreciation for a volunteer‑run, ad‑free, fully FOSS media server; several people donate.
  • Debate over “don’t complain, send PRs”: some argue criticism and UX pain reports are valuable even if users can’t contribute code; others stress maintainers’ need to avoid hacks and platform-specific one-offs for long-term sustainability.

The European VAT Is Not a Discriminatory Tax Against US Exports

Whether VAT Is Discriminatory or a Tariff

  • Most commenters argue VAT is not discriminatory: it’s a consumption tax applied at the point of sale, at the same rate for domestic and imported goods in the destination country.
  • Mechanically: VAT is charged at each stage on the full price, but businesses deduct the VAT they paid on inputs, so only the “value added” is taxed; the final consumer bears the full tax.
  • On exports, VAT is refunded or zero-rated; on imports, VAT is charged just like on domestic products. This is framed as border-neutral, unlike a tariff that applies only to imports.
  • A minority insists that, from a US viewpoint, VAT “feels” like a tariff because US goods pay home-country taxes and face VAT abroad, while EU exporters get VAT refunds. Others counter that both sides still pay their own corporate taxes and that US sales tax design, not EU VAT, creates any extra burden.
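The crediting mechanics in the second bullet can be traced with a toy supply chain; a sketch at a 20% rate with made-up prices:

```python
RATE = 0.20

def vat_due(sale_price_net, input_vat_paid):
    """VAT remitted at one stage: tax on the full sale, minus credit for input VAT."""
    return sale_price_net * RATE - input_vat_paid

# A miller sells flour for 100 net; a baker sells the bread for 250 net.
miller_vat = vat_due(100, 0)            # 20.0: tax on 100 of value added
baker_vat  = vat_due(250, miller_vat)   # 50 - 20 = 30.0: tax on 150 added

print(miller_vat + baker_vat)   # 50.0
print(250 * RATE)               # 50.0 -- exactly the tax on final consumption

# An imported loaf sold for 250 net owes the same 50; an exported one is
# zero-rated, with the chain's VAT refunded. Border-neutral by construction.
```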

US Sales Tax vs. VAT and “Tax Pyramiding”

  • Several comments highlight that US state sales taxes often apply to business inputs and are not systematically creditable, causing “tax pyramiding” as goods move through the supply chain.
  • VAT, by contrast, lets businesses reclaim input tax, so only final consumption is taxed once.
  • Some argue this makes the US system effectively an “export tax” compared to VAT countries; others note many US inputs are exempt and say the problem is domestic policy, not foreign discrimination.
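The pyramiding effect is easy to quantify; a sketch contrasting a 5% sales tax charged at every stage (business inputs not creditable) with a VAT that taxes only final consumption:

```python
SALES = 0.05  # nominal rate applied at each taxed transaction

# Three-stage chain: each stage buys the prior stage's taxed output and
# adds 100 of value, with no credit for tax already paid on inputs.
price = 0.0
for _ in range(3):
    price = (price + 100) * (1 + SALES)  # tax compounds into the next stage

total_value_added = 300
embedded_tax = price - total_value_added
print(round(price, 2))         # 331.01
print(round(embedded_tax, 2))  # 31.01 -> effective ~10.3%, not 5%

# Under a VAT with input credits, the tax on 300 of final consumption at
# 5% is collected once, at the final sale:
print(300 * 0.05)              # 15.0
```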

Trade Policy, Trump, and Tariffs

  • Many see the VAT argument as a political pretext to justify new US tariffs on the EU, fitting a broader protectionist or isolationist agenda.
  • Others focus on genuine tariff asymmetries (e.g., higher EU auto tariffs vs. US rates) and say that, while VAT isn’t a tariff, there is still a broader competitiveness issue.
  • Some predict that technical correctness about VAT won’t matter: tariffs are likely regardless, and the VAT story is mainly for domestic messaging.

Complexity, Burden, and Fairness Debates

  • Practitioners describe VAT as conceptually simple but administratively heavy: registration, invoices, cross-border rules, and reverse-charge mechanisms create significant compliance cost, especially for small or non-EU firms.
  • VAT is widely criticized as regressive, though many countries mitigate this with lower rates on essentials.
  • A few commenters claim VAT systems encourage fraud and bureaucracy; others defend VAT as relatively efficient and neutral compared to US-style sales taxes.

Is the ArXiv safe from the current US Government attacks?

Perceived risks to arXiv and related archives

  • Several commenters note arXiv is primarily supported by Cornell, foundations, and member institutions, with a sizable but time-limited NSF grant; they see Cornell’s stability as a buffer against direct political shutdown.
  • Others emphasize a broader “funding purge,” especially targeting NSF projects framed as DEI-related, creating uncertainty for research infrastructure in general.
  • Multiple people argue that US government–run open-access repositories (e.g., PubMed, NASA, DoE/DoD archives) are more vulnerable than arXiv because they are directly under federal control and could be altered, defunded, or removed.

US politics, “weaponization,” and fear vs. skepticism

  • One side claims the current administration is willing to punish disfavored speech and actors, citing examples involving the press, political corruption cases, social-media pressure, and recent mass firings at nuclear agencies, and extrapolating to potential pressure on private hosts (Apple, Microsoft, GitHub).
  • Opponents dispute these examples (or attribute similar behavior to prior administrations), argue there is little concrete evidence arXiv is a target, and characterize the thread as partisan or “unsubstantiated fearmongering.”

Data control, safety, and decentralization

  • Several participants argue “nothing is safe”: self-hosting improves protection against quiet legal seizure but not against coercion; cloud services lower the bar for state access.
  • LOCKSS-style redundancy is raised: personal control best protects confidentiality, but distributed mirroring best protects against destruction.
  • There are calls for organized efforts to decentralize important scientific repositories with open protocols and many independent mirrors; arXiv’s older mirror network is mentioned as having been cut.

Shadow libraries and legitimacy

  • Suggestions that Sci‑Hub or Anna’s Archive could “take arXiv under their wing” draw criticism: integrating a legal, mainstream preprint server with large illegal collections is seen as likely to destroy arXiv’s institutional credibility, even though Sci‑Hub is acknowledged as widely used for convenience.

Broader geopolitical and technical fallout

  • Some predict that political instability and perceived unreliability of the US will push Europe off US clouds, onto Linux, and away from US weapons systems; others call this unrealistic, pointing to massive Windows/Active Directory lock-in and the cost of migration.

Governance, disputes, and moderation

  • A detailed anecdote about a plagiarism dispute on arXiv highlights that repositories are poor venues for adjudicating copyright and authorship conflicts.
  • Meta-discussion centers on HN moderation: whether flagging this topic illustrates the very vulnerability to non-technical pressures being discussed, or simply reflects HN’s long-standing policy to de-emphasize political threads.

NASA has a list of 10 rules for software development

Context and intent of the rules

  • Rules originate from JPL’s “Power of 10” paper, targeted mainly at C for deeply embedded, safety‑critical systems.
  • Goal: maximize static analyzability, predictability, and the ability to debug from millions of miles away with minimal observability.
  • They’re better viewed as strong guidelines or proposed practices, not NASA‑wide mandatory NPR requirements.
  • Several commenters think the linked critique ignores this context and plays “gotcha” games instead of engaging with the rationale.

Recursion, loops, and stack/timing bounds

  • Ban on recursion is defended as enabling an acyclic call graph, which allows static stack‑usage analysis and worst‑case execution time (WCET) proofs.
  • Recursion and function pointers make formal guarantees about stack and timing either impossible or far more complex.
  • “All loops must have a fixed upper bound” is about making WCET provable, not about merely ensuring eventual termination.
  • Contrived examples with absurdly large bounds are seen as missing the point; in practice, reviewers would flag unrealistic bounds.
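The fixed-bound rule can be made concrete with a small sketch (Python here for brevity, though the Power of 10 rules target C; the pattern is language-independent). The point is that the loop's worst case is a named constant a reviewer or static analyzer can reason about:

```python
# Sketch of the "every loop has a fixed upper bound" rule.
# MAX_SLOTS is an explicit, statically known worst case, so worst-case
# execution time of the scan is provably bounded.

MAX_SLOTS = 64

def find_slot(table, key):
    """Linear scan with a hard iteration bound and a bounded failure path."""
    for i in range(MAX_SLOTS):       # bound is a compile-time-style constant
        if i >= len(table):          # defensive: never read past real size
            break
        if table[i] == key:
            return i
    return -1                        # no unbounded searching on failure

print(find_slot(["a", "b", "c"], "b"))  # 1
print(find_slot(["a"], "z"))            # -1
```

An "absurdly large bound" technically satisfies the letter of the rule, which is why reviewers treating the bound as a claim about realistic worst cases is part of the practice, not an afterthought.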

Hardware, radiation, and reliability constraints

  • Spacecraft hardware is typically old, rad‑hardened, and extremely resource‑constrained: minimal RAM, often no heap, static allocation preferred.
  • Radiation induces bit flips in registers and logic even with ECC‑protected memory; software must assume occasional random corruption.
  • Techniques mentioned: watchdog timers, NOP‑filled memory regions to trap wild jumps, dual redundant computers, sometimes lockstep CPUs.

Language choices and coding subsets

  • Original context assumes C; some argue for “Rule 0: avoid writing critical code in C” in favor of Ada or tightly controlled C subsets (MISRA‑style).
  • Others note newer missions using C++, and even JavaScript for higher‑level scripting, but not for the most timing‑critical control loops.
  • Garbage‑collected or soft real‑time languages (Go, Erlang, etc.) are seen as unsuitable for hard real‑time safety‑critical control, though fine elsewhere.
  • Debate over Rust/Zig: potentially safer than C, but toolchains, standards, and certification ecosystems are still maturing.

Specific rules and criticisms

  • Ban on function pointers and recursion is seen by some as overly restrictive, especially for expressing state machines, but justified for analyzability.
  • “Two assertions per function” is widely viewed as arbitrary; some reinterpret it as encouraging explicit pre/postconditions and invariants.
  • Defense of setjmp/longjmp as “exception handling” is called out as unrealistic and error‑prone in real C systems.
  • Several commenters emphasize that while these rules are overkill for typical web/business software, the underlying principles—clarity, determinism, and verifiability—remain broadly valuable.

Perplexity Deep Research

Performance vs Other Deep Research Tools

  • Mixed comparisons to OpenAI’s Deep Research: some find Perplexity’s outputs shorter, shallower, or “worse,” others report equally strong results and better UX (e.g., CSV export working properly).
  • Users note Perplexity struggles with tasks needing exhaustive coverage and data joining (e.g., “college majors of all Fortune 100 CEOs,” full 50-state tables), where OpenAI/Gemini sometimes do better.
  • Several say Perplexity’s free Deep Research is “good enough” that they’ll drop paid OpenAI tiers; others find it clearly inferior or underwhelming on first try.

Use Cases, Strengths, and Failures

  • Works well for:
    • Summarizing single topics (e.g., Amiga 500 sound chip, niche political issues, GDP PPP ratios).
    • Rapid initial exploration, surfacing relevant papers and references.
  • Fails or under-delivers for:
    • Complex recommender-system design, where it regurgitates boilerplate from common blogs.
    • Domain-heavy or activist-skewed fields, where it uncritically amplifies utopian or unrealistic proposals.
    • Trending topics or “how to combine X and Y” queries, where it rephrases the question without real implementation depth.

Speed, Thoroughness, and “Expertise”

  • Perplexity’s research completes in seconds to ~1 minute; OpenAI’s often runs for several minutes.
  • Some suspect OpenAI’s longer duration is partly artificial (traffic smoothing / marketing signal: “it took a long time, so it must be deep”).
  • Multiple experts describe all current “deep research” systems as shallow: authoritative tone, neat structure, but weak insight and limited contextual understanding.

Commoditization, Moats, and Product-Market Fit

  • Commenters see rapid feature copying: Gemini → OpenAI → Perplexity → open-source clones. “Deep Research” is widely seen as a generic term and now a “term of art.”
  • Debate over Perplexity’s moat:
    • Pro: early, polished LLM+search experience; multi-model access; credible Google replacement for some.
    • Con: seen as a thin wrapper over foundation models, with no clear defensible edge, and “desperate” recent moves.
  • Broader worry that foundation-model vendors are commoditizing entire app categories (RAG, vision, agents, research tools), threatening vertical AI startups.

Evaluation, Hallucination, and Ecosystem Effects

  • No strong, agreed-on benchmark; GAIA and “test it on my expertise” are mentioned, with frequent hallucinations and mis-weighted sources.
  • Some describe Perplexity+DeepSeek R1 as notably better at targeted sourcing, but still not academically rigorous.
  • Concerns that AI search/deep research will drain traffic from publishers, likely accelerating paywalls.
  • Ethical reservations surface around leadership behavior (e.g., strike-breaking offer) and lack of visible dogfooding inside AI companies.

My Life in Weeks

Emotional impact and perspective

  • Many find the visualization powerful, “terrifying,” or “horrifying” in how starkly it shows life’s finiteness.
  • Others experience it as motivating or hopeful: seeing how much time may still remain and how much the author has already done.
  • A subset say they avoid such visuals entirely because they strongly trigger depression or anxiety.
  • Several note that visualizing time left can either inspire better use of weeks or simply increase dread with little practical gain.

Memory, journaling, and meaning

  • The sparse early timeline prompts reflection on how little of life we remember in detail; people are disturbed by not recalling basic dates.
  • Suggestions include using cues (smells, seasons, historical events), constraint reasoning, or “life on a page” timelines to reconstruct the past.
  • Some propose simple practices: weekly or yearly lists of “one memorable thing,” short-text logs, pocket notebooks, or daily photos of mundane life.
  • One long comment quantifies remaining “project hours” over decades; replies argue this is both sobering and narrow, and that “productive” shouldn’t mean only side projects.

Work, money, and how to use limited time

  • Seeing how many weeks are consumed by work evokes a tension: optimize productivity vs. prioritize relationships and experiences.
  • Several argue that expecting work to provide both income and deep purpose is unrealistic; purpose can and often should be found outside jobs.
  • Others defend work-as-central-meaning, especially when it’s enjoyable and impactful.
  • There’s a long subthread on high tech salaries, FIRE, “FU money,” and whether chasing extreme income in youth is wise or self-defeating.
  • People who’ve faced serious illness describe a sharp shift away from career optimization toward relationships and lived experience.

Weeks as a unit & perception of time

  • Weeks are seen as a uniquely “terrifying” unit: concrete enough to feel, yet few enough (~4,000 in a life) to count.
  • Nordic and some European commenters note that week numbers are widely used in calendars and planning.
  • Others discuss time-perception research and media (books, podcasts, Radiolab) about why years feel short and how novelty can make life feel longer.

Tools, clones, and implementation

  • Multiple commenters share similar “life in weeks” or “life in months/days” projects, apps, and LaTeX generators, often inspired by the same earlier blog post.
  • People praise the UX and suggest the interface could generalize to other domains (e.g., organizational timelines) and wish for open-sourcing.

Privacy and exposure

  • A brief thread questions whether publishing such a detailed life timeline is safe.
  • Consensus: it adds some risk, but for many public figures it’s not much more revealing than LinkedIn or long-term blogging; threat model and personal tolerance matter.

Basketball has evolved into a game of calculated decision-making

Three-Point Revolution and “Basic Math”

  • Many comments center on the rise of 3s: teams favor threes and layups over mid-range shots because of higher expected points per attempt.
  • Some argue this really is “basic math,” citing league-average percentages where 3s clearly outperform long 2s on expected value.
  • Others push back:
    • Percentages changed over time (3P% used to be lower, few players could shoot 36–40% from deep).
    • Shot quality isn’t independent; if you increase volume, you’re forced into tougher attempts.
    • Defensive adaptation, rebounding, shot-clock context, and spacing make the optimization problem nontrivial.
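The "basic math" side of the argument reduces to expected points per attempt. A quick calculation with rough, illustrative shooting percentages (not exact NBA figures) shows why long twos lose out:

```python
# Expected points per attempt = make probability x point value.
# Percentages below are illustrative ballpark figures, not league data.
shots = {
    "three-pointer": (0.36, 3),   # ~36% from deep
    "long two":      (0.40, 2),   # ~40% on long mid-range
    "layup/dunk":    (0.65, 2),   # ~65% at the rim
}
for name, (pct, pts) in shots.items():
    print(f"{name}: {pct * pts:.2f} expected points")
# three-pointer: 1.08, long two: 0.80, layup/dunk: 1.30
```

The pushback summarized above is that these numbers aren't fixed inputs: percentages shift with volume, defense, and era, so the expected values move too.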

Is Basketball “Solved”? Strategy Cycles and Counterplay

  • Many see the current meta as “3s and dunks,” claiming mid-range is “dead” and games feel like shooting contests.
  • Others insist the sport isn’t solved: defenses have evolved (switching, scrambling, help), opening counter-opportunities like mid-range pull-ups and post play again.
  • Examples cited: teams winning with relatively fewer threes (e.g., Jokic-led offenses), or small, fast lineups with heavy defensive innovation.
  • Analogies are drawn to Go/AI and NFL offenses: once a strategy is widely adopted, counter-strategies emerge and the equilibrium shifts.

Specialization vs Versatility

  • The article’s claim that “do-it-all” players are gone is widely disputed.
  • Commenters note that modern bigs are more versatile (handling, passing, shooting 3s, switching on defense), and many wings are true all-around players.
  • The rise of “3-and-D” specialists is acknowledged, but several argue this reduced old-school one-dimensional roles (e.g., pure rebounders/shot-blockers).

Rule and Format Change Proposals

  • Frequent suggestions to rebalance incentives:
    • Eliminate or move back the corner three; make the arc a true semicircle or widen the court.
    • Push the 3-point line significantly back, or radically change scoring (2→3, 3→4; or even remove 3s).
    • Tweak free throws (automatic points plus one shot) to speed games and encourage interior play.
    • Restore more defensive physicality; crack down on flopping and intentional fouling.
    • Shorten the regular season or adjust playoffs to reduce injuries and make early games matter.

Spectator Experience and State of the NBA

  • A sizable group finds the modern NBA less watchable: too many threes, stoppages, load management, foul-baiting, and long end-games.
  • Others see today’s game as the best ever: more skill, athleticism, spacing, and tactical sophistication—“high-speed chess” that mainstream commentary fails to explain.
  • There’s concern that media and league incentives (TV money, gambling, ad breaks) drive formats that devalue regular-season games and distort fan engagement.

Comparisons to Other Sports and Analytics

  • Parallels drawn to:
    • NFL 4th-down decisions and evolving offensive schemes.
    • Baseball’s “three true outcomes” era and subsequent rule tweaks (pitch clock, shift limits, base size) to restore excitement.
    • Soccer’s data revolution, tactical cycles, and debates about lost “flair.”
  • General theme: analytics push toward efficiency, leagues respond with rule changes when the product becomes dull, and invasion sports rarely reach a fixed strategic optimum.

Critique of the Article Itself

  • Multiple commenters find the article shallow, absolutist, or written by someone who doesn’t “speak basketball” (odd phrasing, limited history awareness).
  • Several argue it overstates specialization, underrates current diversity of playstyles, and conflates one offensive trend with the entire richness of the modern game.

Why aren't we losing our minds over the plastic in our brains?

Role of Capitalism and Incentives

  • Many argue plastics are ubiquitous because they’re cheap, profitable, and supported by lobbying against stricter environmental rules.
  • Counterargument: plastics are cheap mainly due to physics and scalable production, not capitalism per se; abolishing capitalism wouldn’t change material properties or the need for energy.
  • Ongoing debate over whether profit motives uniquely drove fossil fuel/plastic dependence, or whether any industrial system would converge on plastics for their functionality.
  • Broader political-philosophical tangent: critiques of private property vs concerns that alternatives just re-centralize power and recreate hierarchies.

Why We’re Not “Losing Our Minds” About It

  • People heavily discount long-term, uncertain risks relative to immediate convenience and low cost.
  • Lack of clear causal evidence of harm makes it easy for the public (and regulators) to delay action: “do more research first.”
  • Some see “health-fatigue”: with so many warnings (foods, chemicals, climate, etc.), many tune out or accept plastics as unavoidable.
  • Others note limited consumer choice; corporations optimize for profit, and structural alternatives are scarce.

Comparisons to Lead, Asbestos, and Other Hazards

  • Leaded gasoline and asbestos are cited as historical examples of widespread exposure where harms were recognized only after decades and heavy resistance from industry.
  • Some see microplastics as a similar multigenerational problem; others note plastic is not inherently toxic like lead and warn against vague “X might be killing us” panics.
  • One commenter challenges the “7 grams of plastic in the brain” figure, noting possible measurement artifacts (fat mistaken for polyethylene).

Benefits and Tradeoffs of Plastic

  • Plastics are described as a “miracle material”: light, durable, moldable, chemically inert for many uses, and critical for food safety, sterility, and shelf life.
  • Alternatives (glass, metal, natural fibers) have weight, fragility, reactivity, cost, or environmental tradeoffs of their own.

Personal Mitigation Strategies

  • Common suggestions:
    • Avoid heating food in plastic; use glass/ceramic; avoid hot drinks in lined takeaway cups.
    • Prefer metal/glass bottles; avoid soda in plastic; consider RO filtration.
    • Use wooden (or some bamboo/wood) cutting boards; avoid plastic utensils and cutting boards.
    • Choose more natural fibers (cotton, wool) and reduce fast fashion; vacuum and filter indoor air.
    • Reduce seafood and large-animal consumption to limit bioaccumulated microplastics.
    • Add washing-machine filters; consider blood/plasma donation as a possible (but not yet proven) way to lower body load.
  • Several stress choosing a few high-impact changes rather than obsessively eliminating all plastics, which is seen as impossible.

Systemic Solutions and Limits

  • Ideas raised: banning or heavily taxing single-use plastics, removing tariffs from alternatives, reducing car use and tire wear via transit, bikes, and denser cities.
  • Pessimistic voices doubt political will given petrochemical lobbying and car dependence; some conclude significant reversal is unlikely.

More Solar and Battery Storage Added to TX Grid Than Any Other Power Source Last Year

Market Dynamics and Texas’s Energy Mix

  • Commenters highlight Texas’s relatively open power market: generation competes on profitability while transmission remains regulated, which they see as enabling rapid solar and battery buildout.
  • Despite cheap natural gas, investors are choosing large-scale batteries and solar because they are increasingly cheaper and faster to deploy than new gas plants, especially for peaking and frequency regulation.
  • Several argue the real story is “renewables and fossil fuels”: both capacities are growing to meet rising demand, so renewables aren’t yet replacing fossil fuels, just slowing their growth.
  • Texas uses far more electricity per capita than California; CA’s slower capacity additions are partly attributed to efficiency standards and more rooftop/distributed solar.

Subsidies, Costs, and the IRA

  • Multiple comments note federal subsidies are significant in Texas’s solar and storage economics, but emphasize that:
    • Solar is already the cheapest new generation in many places.
    • Fossil fuels also benefit from large (often hidden) subsidies and favorable tax treatment.
  • Batteries are said to be within 1–2 years of being competitive without subsidies; building more accelerates cost declines.
  • The Inflation Reduction Act is framed as an “electrification is anti‑inflationary” policy: shifting from fuel costs to financed capital costs stabilizes long-term prices.

Reliability, Technology, and Grid Operations

  • The 2021 Texas freeze is widely discussed:
    • Gas and some wind failed due to lack of weatherization; wind actually outperformed ERCOT’s forecasts for extreme winter conditions.
    • Nuclear units also had weather-related issues; no technology is inherently immune.
  • Technical clarifications:
    • Storage projects are rated in MW (power) and hours (duration); 1–4 hours is typical.
    • Batteries already provide frequency regulation and can replace the need for inertial generators; grid-forming inverters are maturing.
  • Proposals for flywheel storage are dismissed by several as impractical and uneconomic compared with batteries and inverters.
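The power/duration rating convention works out as a simple product (numbers below are illustrative, not from any specific project):

```python
# Storage projects are quoted as power (MW) plus duration (hours);
# energy capacity in MWh is the product. Illustrative figures.
power_mw = 100       # how fast the battery can charge/discharge
duration_h = 4       # within the typical 1-4 hour range noted above
energy_mwh = power_mw * duration_h
print(f"{energy_mwh} MWh of storage")  # 400 MWh
```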

Safety and Environmental Tradeoffs

  • There is concern over tightly packed lithium BESS installations and fire risk, with the Moss Landing incident debated:
    • Some describe major damage and toxic releases; others counter that impacts are limited and far smaller than routine fossil pollution.
    • One commenter asserts large BESS fires have not escaped container-level containment; others dispute details but agree risk is manageable relative to fossil impacts.

Politics, Regulation, and Future Trajectory

  • Tension is noted between:
    • Republican rhetoric attacking “green energy” and culture-war framing.
    • Strong Republican-aligned business interests making money from renewables, which may constrain anti-renewable regulation.
  • Some blame “environmentalist” regulation (especially in California and Nevada) for slowing renewables; others stress these laws were bipartisan and are now being reconsidered.
  • Nuclear is widely portrayed as too slow, capital-intensive, and risky for profit-driven U.S. markets compared with rapidly deployable wind/solar plus storage.
  • Several argue the deeper unsolved issue is demand and consumption: long-term climate goals likely require using less energy and driving less, which remains politically taboo.

Deepseek R1 Distill 8B Q40 on 4 x Raspberry Pi 5

Raspberry Pi clusters vs. alternative hardware

  • Many see the 4×RPi5 setup as a “modern Beowulf cluster” demo, but argue that for similar money a used 1U Epyc server or a few Ryzen/mini‑PCs deliver far more performance, PCIe, and normal firmware.
  • Others point out Pis are quiet, low‑TDP, small, and easier to power/cool in a home than 180W+ servers or GPU rigs; noise from 1U servers is a real blocker for apartments.
  • Several note that the same distributed-llama approach works on x86 boxes running Debian, so Pis are more about accessibility and fun than optimal perf/$.

Power, hosting, and cost debates

  • Disagreement over how hard it is to “just run a 1U server at home”: some say residential power and bills are limiting, others say a few hundred watts is normal for gaming PCs.
  • Confusion about kW vs kWh and colocation pricing is called out; many emphasize that idle draw of small servers/mini‑PCs can be only a few watts.
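The kW-vs-kWh point comes down to power (instantaneous draw) versus energy (power integrated over time). A hedged comparison with assumed draws and an assumed electricity rate:

```python
# kW is power; kWh is energy (power x time). Monthly cost comparison
# with illustrative draws and an assumed $0.15/kWh residential rate.
rate_per_kwh = 0.15
hours_per_month = 24 * 30  # 720 h

for name, watts in [("idle mini-PC", 8), ("busy 1U server", 200)]:
    kwh = watts / 1000 * hours_per_month
    print(f"{name}: {kwh:.1f} kWh, ~${kwh * rate_per_kwh:.2f}/month")
# idle mini-PC: 5.8 kWh, ~$0.86/month
# busy 1U server: 144.0 kWh, ~$21.60/month
```

This is why idle draw of a few watts barely registers on a bill while a loaded server does.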

Distilled R1 vs “real” R1 and naming

  • Thread stresses these are DeepSeek‑R1 distill models (e.g., Distill‑Llama‑8B), not the full 671B‑parameter reasoning model.
  • Pushback that calling them “DeepSeek R1” is misleading; they’re Llama/Qwen finetunes with R1‑style chain‑of‑thought, not from‑scratch distilled replicas.
  • Distillation is described as training a smaller model on prompts + outputs (and ideally token probabilities) from a larger “teacher”; quantization is compressing weights to fewer bits.
  • Some note reports that DeepSeek itself may be (partly) distilled from OpenAI models, but that remains contested.
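The distillation description above — training a student to match the teacher's token probabilities, not just its sampled outputs — can be sketched with a toy loss term. All numbers are made up for illustration; real pipelines operate on full vocabularies and batches:

```python
import math

# Toy sketch of distillation "soft targets": the student minimizes the
# KL divergence between teacher and student token distributions.
# Logits over a 4-token vocabulary; values are illustrative only.

def softmax(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): zero when the student exactly matches the teacher."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = softmax([2.0, 1.0, 0.5, -1.0], temperature=2.0)  # soft targets
student = softmax([1.5, 1.2, 0.3, -0.5], temperature=2.0)

print(f"distillation loss: {kl_divergence(teacher, student):.4f}")
```

When only sampled text (not probabilities) is available, the same setup degrades to ordinary fine-tuning on the teacher's outputs, which is part of why "distill" labeling in this thread is contested.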

Performance, tokens/sec, and use cases

  • Skepticism about reported tok/s, since short demos hide slowdown at long context lengths; even Epyc drops to a few tok/s at 8–16k tokens.
  • Critics ask who wants a reasoning model at very low speed; others say many tasks are non‑interactive: background agents, CI, nightly jobs, home automation watchers.
  • Key interest is the distributed inference itself—sharding a model across CPUs over Ethernet—even though scaling is non‑linear and quickly bottlenecked by interconnect.

Bias, censorship, and responsible AI

  • Multiple comments report strongly pro‑China behavior and censorship around topics like Tiananmen in DeepSeek’s ecosystem, with reasoning traces that explicitly mention needing to follow Chinese guidelines.
  • Some see this as intentional propaganda; others frame it as dataset/policy bias. There’s disagreement over how much this should dominate discussion versus pure technical merit.

Memory, quantization, and hardware limits

  • Back‑of‑envelope: 8B @ Q4 ≈ 4 GB plus overhead; 8 GB Pis can fit such a model, 16 GB helps only for larger contexts/models.
  • Discussion of tradeoffs: more parameters at lower precision vs fewer at full precision; memory bandwidth vs compute‑bound regimes, especially for small models.
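The back-of-envelope above is just parameters times bits per weight:

```python
# Model weight memory = parameter count x (bits per weight / 8).
params = 8e9             # 8B-parameter model
bits_per_weight = 4      # Q4 quantization
weight_gb = params * bits_per_weight / 8 / 1e9
print(f"{weight_gb:.1f} GB of weights")  # 4.0 GB, before KV cache and runtime overhead
```

The overhead (KV cache, activations, runtime buffers) is what makes an 8 GB Pi a tight fit rather than a comfortable one.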

Tooling and packaging

  • People highlight Ollama, llama.cpp, MLX, Home Assistant, etc., and wish for an “apt‑get install” LLM stack and an Alexa‑like local smart speaker product.
  • Packaging work is noted as volunteer‑heavy; complaints without contributions won’t fix missing Debian packages.

Carbon capture more costly than switching to renewables, researchers find

Role of Trees and Ecosystems vs Technological Capture

  • Many argue “plant trees” is a form of carbon capture, but point out trees are good at uptake and often poor at long-term storage: they burn, rot, or are logged, returning CO₂.
  • Counterpoints note significant carbon can be stored in soils, humus, peat, mature forest ecosystems, and even buildings/wood products; some say mature biodiverse forests may outperform plantations.
  • Proposals include biomass burial (in mines, deep pits, swamps), biochar added to soils, kelp or algae farming, and enhanced rock weathering; all face scale, cost, and logistics challenges.
  • Forest loss, desertification, and fire risk under warming climate are cited as reasons trees alone cannot offset ongoing fossil emissions.

Economics, Thermodynamics, and Feasibility of CCS/DAC

  • A recurring “napkin math” argument: burning carbon releases energy, and any process to recapture CO₂ must at least pay that energy back plus losses, so capture is inherently expensive.
  • Direct air capture (DAC) is criticized as thermodynamically and economically punishing because it must separate a dilute gas; current costs (~$1000/ton CO₂) are seen as politically impossible at scale.
  • Point-source capture (from smokestacks, gas processing, cement, etc.) is viewed as more plausible, but still costly and only justified for “hard-to-abate” sectors.
  • Some engineers push back on absolute impossibility claims: CO₂ can be stored as supercritical fluid in deep saline aquifers or mineralized in basalts; these formations have held fluids geologically. Others highlight leakage risks and survivor bias.
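The napkin math can be made slightly more precise with standard textbook figures. Fully reversing combustion (CO₂ back to carbon plus oxygen, i.e., making fuel again) must repay the full combustion enthalpy; merely separating CO₂ from air for storage has a much lower thermodynamic floor — but real DAC processes run several times above that floor:

```python
import math

# Napkin thermodynamics with standard figures (illustrative, idealized).
R = 8.314                       # gas constant, J/(mol K)
T = 298.0                       # ambient temperature, K
combustion_kj_per_mol = 393.5   # enthalpy released burning carbon to CO2
x_co2 = 400e-6                  # atmospheric CO2 mole fraction (~400 ppm)

# Approximate minimum (ideal) work to extract a dilute gas: -RT ln(x)
separation_kj_per_mol = -R * T * math.log(x_co2) / 1000

print(f"fuel-reversal floor:  {combustion_kj_per_mol:.0f} kJ/mol CO2")
print(f"DAC separation floor: {separation_kj_per_mol:.1f} kJ/mol CO2")
# Real DAC plants need several times the separation minimum, which is
# part of why current costs land near $1000/ton.
```

This distinction is why point-source capture (concentrated streams, higher x) fares better in the thread than DAC, and why "capture is inherently as expensive as combustion" only strictly applies to making fuel from CO₂.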

Motives, Greenwashing, and Who Pays

  • Strong skepticism that CCS is being pushed mainly to prolong fossil fuel extraction (e.g., for enhanced oil recovery), capture subsidies, and avoid structural change.
  • Long list of fossil externalities is discussed: subsidies, health impacts, climate damage, military costs, pollution, and land use (e.g., corn ethanol), arguing CCS adds another revenue stream to incumbents.
  • Debate over whether emitters or consumers should pay, and whether carbon taxes pegged to actual removal costs could drive cleaner alternatives without banning fossil fuels outright.

Renewables, Nuclear, and System Costs

  • Broad agreement that, for power generation, renewables plus storage are already cheaper or close to cheaper than fossil with CCS; CCS for electricity is thus seen as an “opportunity cost.”
  • Disputes over nuclear: some claim it is essential for reliable, low-carbon baseload; others cite system-cost studies showing nuclear must get dramatically cheaper to compete with wind/solar plus flexibility.
  • Grid issues (intermittency, storage, transmission) and end uses like aviation, shipping, fertilizer, and cement are flagged as domains where some form of carbon capture or synthetic fuels may still be needed.

Beyond “Either/Or”: Study Critique and Future Role of Removal

  • Several commenters criticize the paper’s framing as a forced choice between 100% WWS (renewables) and heavy CCS, calling it unrealistic and policy-weak; they argue what matters is marginal cost and best mix over time.
  • Widespread view: even with rapid decarbonization, existing atmospheric CO₂ likely requires some net-negative strategies (natural or engineered) later in the century; CCS should be treated as a niche, long-term cleanup tool, not a primary climate plan.

Dust from car brakes more harmful than exhaust, study finds

Study validity and media framing

  • Several commenters note a mismatch: Yale’s summary describes lab-grown lung cells exposed to brake and diesel particulates, while the linked paper appears to be an epidemiological cohort study on metals (copper, zinc, iron) and childhood asthma/allergies.
  • Some argue the headline “brake dust more harmful than exhaust” overstates what is actually shown and ignores exposure quantities and real‑world context.
  • Others stress that even without a neat ranking versus exhaust, the takeaway is that modern brake dust is clearly harmful and warrants regulation (e.g., removing copper).

Health effects and particulate behavior

  • Discussion around particle size: exhaust tends to produce finer PM2.5 that penetrates deeper into lungs; brake dust may be coarser and more effectively filtered by airways, but lab-cell results suggest toxicity from metal content.
  • People highlight “resuspended” road dust and overall PM2.5 near traffic as major, sometimes underestimated, health risks (asthma, cardiovascular disease, etc.).
  • Personal anecdotes from people living near busy roads or working in garages describe pervasive black dust on surfaces and concern about long-term health impacts.

EVs, regenerative braking, and brake dust

  • Broad agreement that EVs and hybrids with strong regenerative braking massively reduce friction‑brake use; some EVs even have underused, rusting discs or rear drums.
  • This is widely considered a real advantage for EVs on brake‑dust emissions, regardless of their higher mass. Debate continues on the exact magnitude.

Tire wear, vehicle weight, and road wear

  • Strong disagreement on whether EVs “eat tires”: some cite studies and experiences showing faster wear due to weight and high torque; others point to newer EV‑specific tires and modest weight differences vs comparable ICE cars.
  • Several mention the “fourth power law” for road wear and argue heavier vehicles (EVs, SUVs, trucks, buses) disproportionately damage roads and generate particulates, though real‑world fleet mix complicates attribution.
  • Tire pollution (microplastics, toxic additives) is flagged by many as at least as important as brake dust, but the relative health impact remains unclear.
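The "fourth power law" mentioned above scales road wear with roughly the fourth power of axle load, which makes modest weight differences matter and large ones dominate (axle loads below are illustrative, not measured fleet data):

```python
# Road wear ~ (axle load)^4, relative to a reference vehicle.
baseline_axle_t = 1.0  # reference axle load in tonnes (illustrative)

for name, axle_t in [("compact ICE car", 1.0),
                     ("EV ~20% heavier", 1.2),
                     ("loaded bus", 8.0)]:
    relative_wear = (axle_t / baseline_axle_t) ** 4
    print(f"{name}: {relative_wear:.1f}x road wear")
# compact ICE car: 1.0x, EV ~20% heavier: 2.1x, loaded bus: 4096.0x
```

This is also why buses and trucks, not passenger EVs, dominate pavement damage in the fleet-mix caveat above.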

Technical and mitigation ideas

  • Proposals include: mandatory regenerative braking, ceramic or low‑metal pads, stricter standards on total non‑exhaust emissions (brakes, tires, road wear), natural‑latex or less toxic tire compounds, and improved water treatment for runoff.
  • Others suggest indoor air purifiers, mechanical ventilation with heat recovery, and even city street‑washing to reduce dust and urban heat.

Cars vs. broader transport choices

  • A major subthread argues that focusing on tailpipe vs brake vs tire dust misses the “root problem”: high car dependence.
  • Many advocate lighter EVs, e‑bikes, cargo bikes, scooters, and better transit and cycling infrastructure; others push back, citing safety, climate, disability, weather, family logistics, and US land‑use patterns.
  • There is tension between “reduce car use” perspectives and those who see anti‑car arguments or tire/brake concerns as being used to attack EVs or sustain fossil fuels.

Amazon's killing a feature that let you download and backup Kindle books

Reaction to Removal of USB Download Feature

  • Many power users say this was a “last straw,” since USB download was central to backing up, de-DRMing, and format-converting their purchases (especially via Calibre).
  • Some plan to cancel Prime or stop buying Kindle books entirely; others note they never used or cared about offline backups and will just keep reading as before.

Ownership, DRM, and Trust

  • Strong sentiment that Kindle buyers never really “owned” their books, only revocable licenses, citing past remote deletions and silent updates.
  • The change is viewed as “welding shut a fire exit” that enabled interoperability and long‑term access, especially for non-Kindle devices.

Piracy, Ethics, and Libraries

  • One camp advocates outright pirating ebooks, then optionally buying physical copies or paying afterward if they liked the work.
  • Others argue this mainly harms authors, not Amazon, and insist on either buying from non-Amazon stores or using public libraries.
  • Counterpoint: some see little moral difference between “pirate then buy if liked” and reading from a library then buying; critics reply that libraries pay and operate under agreed terms, whereas piracy does not.

Impact on Libraries and Workflows

  • For some US library systems using OverDrive/Libby with Kindle, the USB download path runs through Amazon’s “Download & Transfer via USB,” so the change may break long‑standing workflows, including “offline forever” loans via airplane mode.
  • There is disagreement and confusion about exactly which library setups are affected.

Alternatives and Technical Workarounds

  • Kobo is frequently recommended: more open devices, easy sideloading, ACSM/Adobe DRM that can be stripped, and good Calibre integration.
  • Users describe elaborate self‑hosted setups (Calibre-Web-Automated, Kavita, KoReader, Tailscale/OPDS) that treat their own libraries as a “store.”
  • Other sources mentioned: Ebooks.com, some small/specialty presses, Humble Bundles, certain DRM‑free light novel publishers, and Bookshop.org (criticized for app‑only DRM).

Device Longevity and E‑Waste

  • Older Kindles that lost network support or can no longer authenticate are cited as examples of planned obsolescence; some use this to justify piracy.
  • A side debate emerges: whether accepting frequent device replacement is reasonable progress or wasteful and exploitative.