Hacker News, Distilled

AI-powered summaries of selected HN discussions.

FCC bars providers for non-compliance with robocall protections

Impact of Robocalls and Who Gets Hurt

  • Commenters stress that scams disproportionately target seniors and cognitively vulnerable people, with anecdotes of six‑figure losses and emotional manipulation (e.g., fake celebrity, “grandchild in trouble,” romance, fake jobs).
  • Several argue it’s wrong to blame victims as “idiots,” framing this instead as systemic exploitation similar to other predatory business models.

FCC Action: Welcome but Inadequate

  • Many welcome the FCC cutting off non‑compliant providers and some report an immediate drop in spam calls or texts.
  • Others dismiss it as a “drop in the ocean,” expecting scammers to quickly re-route traffic via new intermediaries or legacy infrastructure.
  • Skeptical posters argue the FCC is not “doing everything in its power,” citing recurring waves of spam after past interventions (e.g., STIR/SHAKEN, Do Not Call).

Enforcement, Jurisdiction, and Deterrence

  • Strong calls to imprison executives of enabling telcos/VoIP providers, or hold them personally liable when they knowingly carry spam.
  • Some propose KYC‑style rules: if a carrier can’t identify the human behind spam, the carrier should face penalties.
  • Debate over foreign scammers: suggestions range from trade leverage and extraterritorial prosecution to hyperbolic calls for kinetic or clandestine action.

Technical and Structural Problems

  • Root causes identified: caller ID spoofing, legacy SS7, cheap VoIP access, and a network never designed for authentication.
  • PSTN is repeatedly described as anachronistic and structurally untrustworthy; others defend it as critical interoperable infrastructure that regulators must repair rather than replace.

Proposed Solutions

  • Network-level ideas:
    • Default‑deny or surcharge international or “unattested” calls.
    • Cryptographic attestation of caller identity (STIR/SHAKEN‑like, but stricter).
    • Blocking SS7 spoofing for numbers not proven owned; tighter US identity binding to numbers.
  • Economic ideas:
    • Per‑call fees/deposits or receiver‑pays/earns models to make spam uneconomical.
    • Fines on major carriers per robocall they deliver.
  • User-level ideas:
    • Contact‑only ringing, aggressive voicemail use, call screening, regex/area‑code filters, third‑party blockers, or switching to Pixel/Google Voice for strong spam controls.

International Comparisons

  • Multiple commenters say spam is far less prevalent in parts of Europe and Australia, attributing it to stricter regulators and carrier controls.
  • This is used as evidence that the US problem is political and incentive‑driven, not technically inevitable.

Building the mouse Logitech won't make

DIY tools, costs, and “economically irrational” hacking

  • Many relate to buying an expensive tool (e.g., hot-air station) to avoid a smaller service fee, then justifying it as an investment for “next time.”
  • Some suggest tool libraries/makerspaces or rentals to reduce underused purchases; others say repeat projects and renovations eventually justify “premature tool purchase addiction.”
  • Several explicitly say DIY is rarely about saving money; fun, learning, and independence matter more than opportunity cost.

Soldering and electronics techniques

  • People share cheap SMT assembly approaches: frying pans, toaster ovens, PTC hot plates, basic hot‑air clones (e.g., 858D), stencils, and IR thermometers.
  • Debate over best technique: hot plate vs hot air vs fine‑tip iron under magnification. Tips include Kapton tape and aluminum foil to shield nearby parts.
  • Consensus that hot‑air rework is learnable and very useful for fixing boards and swapping mouse switches.

Logitech MX Ergo, MX line, and alternatives

  • Many love the MX Ergo’s shape and workflow, but criticize micro‑USB, loud switches, lack of free‑spin scroll, lack of wired option, and rubber coating that degrades.
  • Several note Logitech now sells the MX Ergo S with USB‑C and quieter switches, partially undercutting the article’s premise.
  • Others move to alternatives (ProtoArc, Elecom, Kensington, Ploopy, CST/L‑Trac, various vertical mice) citing comfort, reliability, or price.

Hardware quality, failure modes, and repair

  • Repeated complaints about Logitech microswitches (double‑clicks, missed clicks) and rubberized surfaces disintegrating; some see this as deliberate disposability.
  • Others repair by desoldering switches, bending contacts, or injecting contact cleaner; many swap in higher‑quality Kailh/Huano parts and new skates.
  • Opinions differ on whether the issues come down to design and build quality or are a side effect of the very low‑voltage circuits these switches carry.

Batteries, charging, and receivers

  • Strong split: some prefer built‑in rechargeable batteries and USB‑C; others insist AA/AAA (often NiMH) are superior for longevity, instant swap, and repairability.
  • Discussion of AA‑form‑factor rechargeables and hybrid designs that accept both packs and standard cells.
  • Frustration at Logitech’s fragmented receiver ecosystem (Unifying, Bolt, gaming) and slow rollout of USB‑C dongles; some praise multi‑device receiver setups.

Desire for modular, DIY, and niche mice

  • Multiple commenters want a “mechanical keyboard” ecosystem for mice: modular shells, hot‑swap switches, configurable button layouts, open firmware.
  • References to open‑hardware options (e.g., Ploopy) and custom QMK‑based mice; suggestions to crowdfund specialty designs (e.g., improved Ergo‑like trackballs).
  • OS‑level limits on mouse button events are seen as a bottleneck for high‑button “numpad” and MMO mice.

Ergonomics and personal fit

  • Experiences vary: some eliminate RSI with trackballs (thumb and finger types), others with vertical mice; some develop pain from exactly those devices.
  • Many lament the scarcity of wired trackballs, left‑handed or giant‑hand models, and ambidextrous 5‑button layouts.

Software, drivers, and search friction

  • Logitech’s configuration software is widely disliked (bloat, platform gaps). Third‑party tools like SteerMouse and similar utilities on macOS are praised.
  • Some note wasting time reinventing mods or solutions because search (especially Google) no longer reliably reveals existing products or prior art.

Hundreds lose water source in Colorado's poorest county with no notice

Payment, “paying into” vs just buying water

  • Strong back‑and‑forth over whether rural cistern users “didn’t pay.”
  • One side argues they only bought water by the gallon (no long‑term right or system investment), unlike in‑town ratepayers who fund infrastructure via taxes/fees.
  • Others counter they did pay—often more per gallon than metered users—and in this case used ~1% of the water while contributing ~15% of the water system’s revenue, effectively subsidizing town users.

Governance, process, and abrupt cutoff

  • Many see the 3–2 vote, not on the agenda and with no warning in peak summer, as procedurally illegitimate and ethically harsh.
  • Defenders say the board represents town residents, not out‑of‑town users, and has a duty to protect a finite supply for its own voters.
  • Several commenters think this was less about actual shortage (a pump failure that was being fixed) and more about officials mishandling public anger and “outsiders.”

Off‑grid living, risk, and responsibility

  • Some criticize off‑grid buyers for moving to an arid, water‑stressed area without securing water rights or realistic backups, comparing their expectations to “cosplaying” self‑reliance.
  • Others note many moved there because it was the only affordable option and relied on reassurances that trucked water was normal and stable.
  • There’s wider reflection that rural living is expensive and risky (wells can cost $25k+ and still fail), and Americans often misjudge that risk.

Water law and Western scarcity

  • Multiple comments describe Colorado and Western water law as arcane, based on prior appropriation and historical flows that no longer match reality.
  • Discussion touches on exported alfalfa, municipal vs rural users, and longstanding conflicts over water rights; fiction like The Water Knife and The Tamarisk Hunter is cited as eerily relevant.

Markets, rights, and ethics

  • One camp says utilities should price water to balance supply and demand, so waste (e.g., lawns) disappears before drinking water does.
  • Others reject pure market allocation for a survival resource, invoking the human right to water and noting that the richest country is producing situations more familiar from the global south.

Suggested alternatives

  • Ideas raised: price hikes instead of bans, dual potable/non‑potable systems, composting toilets, private tanker deliveries, well‑sharing (currently illegal in some places), and better advance planning and notice by local governments.

Show HN: Base, an SQLite database editor for macOS

Overall Reception

  • Many commenters are long-time users (up to ~15 years) praising Base as their go-to SQLite GUI on macOS.
  • New discoverers are surprised it has existed so long and often say they would have used/bought it earlier if they’d known.
  • Several people immediately buy or plan to buy, especially those wanting a minimal, native Mac SQLite tool.

Why Use Base vs Alternatives

  • Compared to generic multi-DB tools (DBeaver, DataGrip, TablePlus, etc.), Base is praised for:
    • Being purpose-built for SQLite (no irrelevant features like user management, stored procedures, remote connection UX).
    • Good table creation/alter support and SQLite-specific conveniences.
    • Native AppKit/SwiftUI behavior and polish (keyboard/mouse affordances, Finder-like interactions).
  • Versus DB Browser for SQLite:
    • Some prefer DB Browser’s being free, OSS, and cross-platform.
    • Others report instability, locking behavior, or “ugly”/less refined UI there and see Base’s UX polish as a key differentiator.

Platform & Native vs Web

  • Base is macOS-only because it uses AppKit/SwiftUI; cross-compiling to Linux/Windows is seen as non-trivial.
  • Some are disappointed by the lack of cross-platform support and reliance on macOS 15+ for v3, though old versions exist.
  • Thread detours into a heated native-vs-web debate:
    • One side: native desktop tools are faster, more efficient, integrate better, and the web is a poor app platform.
    • Other side: web apps can be good enough or better in practice; on some platforms native ecosystems have atrophied.

Target Audience & Use Cases

  • Use cases mentioned:
    • Exploring and tweaking SQLite DBs used by apps (including Apple system apps).
    • Prototyping schemas and exporting SQL for codebases.
    • Scientific research data storage and analysis; ad‑hoc querying of CSV/exports.
    • Non‑programmers using SQLite as a step up from Excel/Access-like workflows.
  • Some question who needs visual table editors; multiple replies argue GUIs are great for prototyping, learning SQL, and quick schema evolution.

Features, Requests, and Limitations

  • Requests: UUID blob display/editing, auto-enabling foreign keys, extension auto-loading, JSON view, triggers visibility, tabs for multiple queries, better font control, more “diagram”/ERD-like features.
  • Missing features: no SQLCipher support currently; no multi-user collaboration (seen as more of an SQLite limitation).
  • Some see it as “Postico for SQLite”; others wish it matched Postico’s UX in areas like copy/duplicate rows.

Pricing, Licensing, and Business Model

  • Price (~$40 one-time) sparks debate:
    • Some call it expensive for a small utility and lament lack of source code.
    • Others argue the price is trivial relative to developer productivity and indie sustainability.
  • Discussion of closed vs OSS:
    • A few would pay more if it were OSS or source-available.
    • The author cites piracy and legal hassle as reasons not to open the code.
  • Revenue mix: historically App Store-dominant, with recent shift toward more direct sales; product alone does not fully support the developer.

The air is hissing out of the overinflated AI balloon

Fast Food, Kiosks, and “Easy” Jobs

  • Debate centers on whether AI’s failure at McDonald’s-style drive‑thrus means it can’t do most jobs.
  • Multiple commenters note the cited system is pre‑LLM (2019 IBM, decision trees), so not representative of current tech.
  • Others stress drive‑thru work is much harder than it looks: noise, multiple speakers, regional names, coupons, makeup orders, and real‑time coordination with kitchen shortages.
  • Kiosk/tablet ordering already replaces many order‑takers; some argue apps + QR codes will further reduce need for AI.
  • User experience is polarizing: some hate kiosks (lag, breakage, dark‑pattern upsells); others strongly prefer them for clarity, speed, language issues, and reliable customization.

Ambiguity, Customization, and Human Judgment

  • Long subthread on what “plain” means in fast‑food orders shows how much context and clarification humans handle implicitly.
  • Views differ on whether staff should always clarify ambiguous terms vs. prioritize speed and accept a small error rate.
  • Humans also handle messy edge cases (wrong items on the grill, “ice cream machine down,” partial shortages) that current systems struggle to encode.

LLMs vs Traditional Automation

  • Some argue massive LLMs are overused “gold-rush shovels” where simpler, older automation (rule systems, vision, CNC, self‑checkout) already works fine.
  • Others think LLMs would be strong at constrained menu ordering, especially as chains ruthlessly optimize efficiency.

AI as Tool, Not Magic

  • Several developers say AI is now a standard tool: great for debugging and boilerplate, often wrong but far faster than web search.
  • Concerns raised about unknown true costs: energy, water, pollution, and especially “information pollution” as AI‑generated slop degrades search results.

Bubble, Plateau, and Long‑Term Impact

  • Many see a classic bubble: tech is real and transformative, but companies and hardware build‑out are overhyped, echoing dot‑com.
  • Frequent reference to Amara’s law: short‑term overestimation, long‑term underestimation.
  • Strong disagreement with the article’s claim that AI is “as good as it’s going to get”: most expect continued, but slower, improvement; some think current text‑model gains already look incremental.
  • Consensus: even if the financial bubble pops, LLMs and other AI (vision, specialized models like AlphaFold, self‑driving) will persist and keep reshaping work.

Areal, Are.na's new typeface

How Different Is Areal From Arial?

  • Many commenters say Areal looks almost identical to Arial; side‑by‑side comparisons suggest the main visible change is spacing, with minor stroke-width tweaks.
  • A few people do notice subtler refinements (e.g., stroke consistency, specific letters like “S” and “e”, tabular numeral styling).
  • Some question why anyone would “revive” Arial, itself long criticized as a derivative of Helvetica, instead of starting from a more respected source or designing something more distinct.

Motivation: Licensing, Control, and Craft

  • Several comments argue the move makes sense practically: Arial is owned by Monotype, web licensing can be expensive, and Are.na may want full control, including monospace, variable weights, and modern OpenType features.
  • Others see it as a typical designer move: investing lots of energy to tweak something almost no one will consciously notice, but satisfying for those who care about details.
  • Defenders frame it as analogous to rewriting a frontend: mostly invisible but enabling future flexibility.

Technical Details and Coverage

  • Reported to have ~475 glyphs, mostly Latin; lacks Greek and Cyrillic and doesn’t even reach WGL4 coverage, which some font‑aware users criticize.
  • Includes a new monospace, some dark‑mode optimization, and advanced features like tabular numerals.
  • One user notes the horizontal bar on “t” is so short it may hurt legibility.

Tone, Presentation, and “Is This a Joke?”

  • The article’s framing—“revival” language, Windows 2000 archivist anecdote, dense diagrams—reads to many as self‑parodic or “Pepsi design deck”‑like.
  • Some find this charming, fun, and very on‑brand for a design‑heavy community; others call it cringe, navel‑gazing, or wasteful use of company resources.
  • There’s debate over whether this is serious craft with a playful tone, or a mostly marketing‑driven exercise dressed up as deep design work.

Are.na, Audience Fit, and Ecosystem

  • Are.na is described as a long‑running, designer‑centric, ad‑free bookmarking/moodboard space; for that audience, a bespoke Arial variant can be seen as both functional and culturally resonant.
  • Some praise the consistency of their taste and point to an ecosystem of apps using Are.na’s (very open) APIs. Others remain unconvinced that “another Arial” is something their users wanted.

Standard Thermal: Energy Storage 500x Cheaper Than Batteries

Concept and Intended Use

  • System: PV powers resistive heaters embedded in a large dirt/sand mound, heated to ~600 °C; pipes extract heat months later for space/process heat or steam.
  • Primary target: slow, seasonal storage (summer → winter) and steady industrial/district heat, not fast daily cycling or full electricity replacement.

Thermal Physics: Dirt as Storage Medium

  • Multiple comments note dirt/soil has modest R‑value per inch but becomes a strong insulator when used in large thicknesses (e.g., ~10 ft → R‑24–96).
  • Heat diffusion in large masses is very slow; seasonal ground temperature behavior is cited as an analogy.
  • The key design lever is scale: the thermal time constant grows with the square of system size, so large piles can retain heat for months (a rough scaling sketch follows this list).
  • Some push back that dirt isn’t “very insulative” per unit thickness; viability depends on making the pile big enough and dry.
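
A rough way to see the scaling claim: for pure conduction, the time constant of a thermal mass grows roughly as L²/α, where α is the soil's thermal diffusivity. A minimal sketch, assuming a dry-soil diffusivity of about 3×10⁻⁷ m²/s (a typical handbook value, not a figure from the article):

    # Back-of-the-envelope check of how the thermal time constant scales with pile size.
    # Assumption (not from the article): dry-ish soil diffusivity alpha ~ 3e-7 m^2/s;
    # L is a characteristic length, roughly the pile's half-thickness.

    ALPHA = 3e-7  # m^2/s, assumed thermal diffusivity of dry soil

    def time_constant_days(length_m: float) -> float:
        """Conductive time constant tau ~ L^2 / alpha, converted to days."""
        return length_m ** 2 / ALPHA / 86_400

    for L in (0.5, 2.0, 10.0):  # half-thickness in metres
        print(f"L = {L:5.1f} m  ->  tau ~ {time_constant_days(L):8.0f} days")

    # Doubling the size quadruples the time constant, which is why a very large,
    # dry pile can hold useful heat across seasons while a small one cannot.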

Round-Trip Efficiency and Economics

  • Article figure of 40–45% is clarified as electricity→heat→electricity, not end‑to‑end from PV; some commenters think 30% is more realistic.
  • Consensus: for seasonal storage, low capex can outweigh poor efficiency because there are few cycles per year and input energy can be very cheap or curtailed (rough numbers are sketched after this list).
  • Comparisons are drawn to batteries: far higher efficiency but ~100–500× more expensive per kWh of capacity and unsuitable for seasonal storage.
  • Others compare to power‑to‑gas/methanol and hydrogen storage, arguing those may have similar efficiency but higher capex and more complex fuel cycles.
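
To see why one cycle per year flips the economics, here is a rough cost per delivered kWh over a 20-year life. Every capex and price figure below is an illustrative assumption, not a number from the article or the company:

    # Rough cost per delivered kWh for seasonal storage: low capex can beat high
    # efficiency when there is only ~1 cycle per year. All inputs are assumptions.

    def cost_per_kwh_delivered(capex_per_kwh, cycles_per_year, years,
                               round_trip_eff, input_cost_per_kwh):
        delivered = cycles_per_year * years * round_trip_eff  # kWh out per kWh of capacity
        return capex_per_kwh / delivered + input_cost_per_kwh / round_trip_eff

    # Assumed: thermal pile ~$2/kWh of capacity, 1 cycle/yr, 30% round trip, cheap curtailed input
    thermal = cost_per_kwh_delivered(2.0, 1, 20, 0.30, 0.01)
    # Assumed: Li-ion ~$300/kWh of capacity, also cycled once per year, 90% round trip
    battery = cost_per_kwh_delivered(300.0, 1, 20, 0.90, 0.01)

    print(f"thermal pile, seasonal use:  ~${thermal:.2f} per delivered kWh")
    print(f"battery, seasonal use:       ~${battery:.2f} per delivered kWh")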

Use Cases and Scale

  • Strongest use cases discussed:
    • Industrial low/medium‑temperature process heat (~200–600 °C).
    • District or community heating with constant winter demand.
    • Repowering existing coal plants using stored heat instead of coal (reusing turbines and grid tie).
  • Poor fit for:
    • Individual homes lacking hydronic/district heating.
    • Fast daily arbitrage where batteries already pencil out economically.

Comparisons to Alternatives

  • Ground‑source heat pumps: excellent for heating/cooling, but can’t reach steam temperatures and involve substantial drilling costs; also fundamentally use the ground as a reservoir, not as high‑temperature storage.
  • Solar thermal / heliostats:
    • Direct solar‑to‑heat avoids PV conversion loss, but high‑temperature solar needs tracking concentrators, high radiative losses, and hot‑fluid transport.
    • PV + resistive heating is simpler, modular, and works with diffuse light.
  • Mechanical/gravity storage (blocks, pumped water): widely derided as having terrible energy density and poor economics vs batteries and thermal storage.

Engineering and Maintenance Concerns

  • Major open worry: how to inspect and repair embedded heat‑exchange pipes surrounded by 600 °C dirt; leaky tubes and fouling are known boiler issues.
  • Some expect systems to be designed for long life with minimal intervention, accepting gradual degradation or building new modules rather than repairing hot cores.
  • Corrosion and oxidation of resistive elements at high temperature are flagged as serious design challenges; material choice and atmosphere control matter.

Environmental and Practical Questions

  • Potential ground/wildlife impacts of large, hot mounds are raised; suggestions include placing them under parking lots or other already disturbed land.
  • Risk of unwanted snow melt/icing if heat leaks upward; ideally, well‑designed piles should have negligible surface temperature change.
  • Several commenters note similar concepts (sand batteries, seasonal borehole storage, PAHS, historic ice houses) already exist, reinforcing feasibility but also raising “if it’s so good, why isn’t it everywhere?” skepticism.

Skepticism and Open Questions

  • Calls for full thermodynamic and economic models: capacity, leakage, extraction rate, and full system capex (including turbines) are not rigorously quantified.
  • Some doubt the “500× cheaper than batteries” claim without detailed cost breakdown and field data.
  • Unclear at what minimum scale the approach remains viable and how close loads must be to storage to keep transport losses acceptable.

The Size of Adobe Reader Installers Through the Years

Graph design: log vs linear

  • Many readers found the log-scale chart confusing or misleading, especially since the “point” was perceived as showing how bloated Adobe Reader is vs Sumatra.
  • Several argued a linear chart (shared in comments) better communicates at a glance: “Adobe huge and exploding; Sumatra tiny and mostly flat.”
  • Others defended the log scale as more accurate for exponential growth and for conveying that early size jumps (e.g., from 2.5 MB to 5 MB in the 90s) were a big deal relative to the era; both framings are sketched side by side after this list.
  • There was confusion over point labels (version numbers vs MB) and criticism that the chart lacked clear labeling, units, proper time axis, and basic data‑viz best practices.
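
The framing dispute is easy to reproduce: the same roughly exponential series reads very differently on linear and log axes. A minimal matplotlib sketch with placeholder sizes (not the actual installer data from the article):

    # Same (made-up) installer-size series on a linear and a log y-axis.
    import matplotlib.pyplot as plt

    years_adobe = [1995, 2000, 2005, 2010, 2015, 2020, 2025]
    adobe_mb    = [2.5, 5, 20, 50, 150, 300, 600]   # placeholder values
    years_sum   = [2006, 2010, 2015, 2020, 2025]
    sumatra_mb  = [1, 2, 4, 6, 8]                   # placeholder values

    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    for ax, scale in zip(axes, ("linear", "log")):
        ax.plot(years_adobe, adobe_mb, marker="o", label="Adobe Reader")
        ax.plot(years_sum, sumatra_mb, marker="s", label="SumatraPDF")
        ax.set_yscale(scale)
        ax.set_title(f"{scale} y-axis")
        ax.set_ylabel("installer size (MB)")
        ax.legend()
    plt.tight_layout()
    plt.show()

On the linear panel Adobe looks explosive and Sumatra flat; on the log panel the early doublings are visible but the absolute gap is easy to underestimate, which is essentially the disagreement in the thread.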

Perceived bloat and UX of Adobe Reader

  • Strong complaints about Adobe Reader being slow, heavy, bundled with extra software (e.g., McAfee), riddled with popups, ads, and AI upsell, even in paid Acrobat.
  • Some report it crashing on corporate machines or being painful to use for simple tasks.
  • Several say it’s the first app they don’t install on new systems.

Why some still rely on Adobe

  • Despite dislike, many keep Reader/Acrobat because:
    • Some government and business PDFs only work or display correctly in Adobe, sometimes intentionally.
    • Adobe uniquely supports certain advanced features: JavaScript-heavy forms, clickable metadata in engineering PDFs, robust commenting/signing workflows, and a powerful print dialog.
  • A note that historical additions like Flash and embedded JavaScript increased size and attack surface, while lighter readers avoid these.

Popular alternatives and trade-offs

  • Windows: SumatraPDF (tiny, fast, but limited compatibility and features), PDF‑XChange, Okular, Xodo (feature-rich but popups), browser viewers (Chrome/Firefox/Edge) for many use cases.
  • Linux/BSD: zathura (+ mupdf plugin), MuPDF, xpdf, evince; discussion that zathura’s real footprint includes dependencies.
  • macOS: Preview widely praised for speed, signatures, simple editing, PDF merging; but it breaks some forms or renderings. PDF Expert mentioned as a polished but non‑Windows option.
  • Command‑line tools (pdfjam, qpdf, pdftk) used for page reordering/merging.

Pop‑ups, notifications, and UX

  • Debate over whether popups are ever acceptable, with examples where interruption is desired (save reminders, impossible actions, 2FA prompts, “close 146 tabs?”).
  • Distinction and overlap between blocking modal dialogs, old‑school pop‑up ads, and non‑blocking “toast” notifications.

Package managers, platform choices, and bloat

  • Pushback against calling non‑Scoop users “insane”; most Windows users don’t use package managers, and some prefer choco or winget, while Scoop suits non‑admin environments.
  • Tangential concern about general desktop/mobile app bloat (e.g., k8s tools, social apps) eroding user trust.

Scamlexity: When agentic AI browsers get scammed

Terminology and Hype (“Scamlexity”, “Agentic”, etc.)

  • Many dismiss “Scamlexity” and “agentic” as awkward or overused buzzwords, seeing them as a pivot by AI vendors once plain “AI” showed cracks.
  • “VibeScamming” is noted as another emerging term for exploiting LLMs’ pattern‑matching and social cues.

Core Vulnerability: Content vs Commands & Actuators

  • Central flaw highlighted: LLMs don’t distinguish “content to read” from “commands to execute,” making prompt injection and indirect instructions from web pages or emails dangerous.
  • Once connected to actuators (browsers, payments, smart devices, drones, military systems), bad completions can become bad real‑world actions.
  • Some argue the real problem is requirements: users want an agent that executes instructions from text they haven’t vetted, and don’t want to confirm every tiny action.

Sandboxing vs Alignment & AGI Worries

  • One camp: treat this as a standard security problem—sandbox, least privilege, no external levers, and the risks largely vanish.
  • Others argue this is naïve: widely deployed systems are already networked and tool‑connected, and future more‑agentic models may resist shutdown or resort to coercion once given enough power.
  • There’s skepticism toward current “safety” work that focuses on model narratives (“blackmail to avoid shutdown”) instead of hard security boundaries.

Should Agents Buy Things for Us?

  • Many don’t see the appeal of fully delegating purchases; they want AI for search, comparison, and list‑building, but insist on final selection and payment.
  • Others argue that wealthier people already delegate purchases to human assistants; if AI reached similar reliability, there would be demand for routine buys (groceries, refills, minor items).
  • Concerns:
    • LLM fallibility (hallucinations, being fooled by fake sites or knockoff products).
    • Corporate incentives to bias recommendations, dynamic pricing, and kickbacks (“AI shopper as the retailer’s agent, not yours”), leading to enshittification.

Scams, Deterrence, and Singapore Digression

  • One commenter claims scams are “solvable” via extreme punishments (citing Singapore); others refute this with stats and examples and reject draconian penalties as immoral and ineffective.
  • Broader consensus: scams are persistent, adaptive, and unlikely to be “eliminated”; scammers will evolve prompt‑injection and agent‑targeting techniques at scale.

Article and Product Critiques

  • Some argue the demo scenarios are stacked: the user explicitly steers the agent to scam pages or tells it to follow email instructions, so the “no clicks, no typing” framing is misleading.
  • Others see the article as a valid warning: if an AI browser happily executes curl | bash‑style flows on arbitrary content, large‑scale exploitation is inevitable.
  • A few report useful, non‑financial agent uses (scraping, long‑running web tasks) but say payments and sensitive operations are a step too far for now.

Outlook for Agentic Browsers

  • Suggestions: enforce strict capabilities (scoping by domain, mandatory human approval for any spend, allowlists, and new “core primitives” for safe actions); a gating sketch follows below.
  • Some say browser‑driven agents are fundamentally brittle compared to dedicated APIs; others note the web is de‑facto the only API many sites expose, so agents will keep using it despite risk.
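
A minimal sketch of the capability-gating idea (domain allowlist plus mandatory human approval for any spend). The policy, names, and thresholds are hypothetical and not taken from any shipping agent browser:

    # Hypothetical capability gate for an agent's browser actions:
    # domain allowlist + mandatory human approval for anything that spends money.
    from urllib.parse import urlparse

    ALLOWED_DOMAINS = {"example-grocer.com", "example-pharmacy.com"}  # assumed allowlist
    SPEND_ACTIONS = {"checkout", "add_payment_method", "transfer_funds"}

    class ActionBlocked(Exception):
        pass

    def gate_action(action: str, url: str, amount: float = 0.0, approve=input) -> None:
        """Raise ActionBlocked unless the proposed action passes the policy."""
        domain = urlparse(url).hostname or ""
        if not any(domain == d or domain.endswith("." + d) for d in ALLOWED_DOMAINS):
            raise ActionBlocked(f"domain not on allowlist: {domain}")
        if action in SPEND_ACTIONS or amount > 0:
            answer = approve(f"Agent wants to '{action}' on {domain} for ${amount:.2f}. Allow? [y/N] ")
            if answer.strip().lower() != "y":
                raise ActionBlocked("human declined spend action")

    # Reading an allowlisted page passes; a checkout on an unknown site is refused
    # before the approval prompt is ever reached.
    gate_action("read_page", "https://example-grocer.com/list")
    try:
        gate_action("checkout", "https://totally-legit-deals.example", amount=49.99)
    except ActionBlocked as e:
        print("blocked:", e)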

What are OKLCH colors?

Relationship between OKLCH and OKLab / other spaces

  • OKLCH is OKLab in polar coordinates (L = lightness, C = chroma, H = hue): mathematically the same space expressed in a different coordinate system (see the conversion sketch after this list).
  • Several comments compare it with CIELAB/CIELCh: OKLCH is described as a “bugfix” for CIELAB, with better perceptual behavior (e.g., desaturating pure blue no longer drifts visibly through purple).
  • Some people prefer HSLuv or hope for OKHSL/OKHSV as more intuitive, hue-stable frontends that hide gamut boundaries.
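
The polar relationship is just a change of coordinates; a minimal sketch of converting between OKLab and OKLCH (plain trigonometry, nothing color-specific beyond the axis names):

    # OKLCH is OKLab in polar form: L is shared, chroma is the radius in the
    # (a, b) plane, and hue is the angle around the neutral axis.
    import math

    def oklab_to_oklch(L: float, a: float, b: float) -> tuple[float, float, float]:
        C = math.hypot(a, b)                         # chroma
        H = math.degrees(math.atan2(b, a)) % 360.0   # hue angle in degrees
        return L, C, H

    def oklch_to_oklab(L: float, C: float, H: float) -> tuple[float, float, float]:
        h = math.radians(H)
        return L, C * math.cos(h), C * math.sin(h)

    # Round trip on an arbitrary OKLab color
    print(oklab_to_oklch(0.7, 0.1, -0.05))
    print(oklch_to_oklab(*oklab_to_oklch(0.7, 0.1, -0.05)))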

Gradients and gamut problems

  • A major thread critiques the article’s “better gradients” claim.
  • In OKLCH, hue is an angle, so interpolating hue can take a long path around the circle, passing through unexpected colors (e.g., magenta→lime going via red–yellow or via blue–aqua); both paths are compared in the sketch after this list.
  • Much of that circular path lies outside sRGB and P3, sometimes even beyond Rec.2020 or the limits of human vision. Gamut‑mapping algorithms pull colors back in, breaking perceptual uniformity (e.g., darkened reds, banding).
  • Some argue OKLab is better for gradients: it interpolates as a straight line, often passing through gray but avoiding wild hue detours and extreme out‑of‑gamut segments. Tailwind reportedly tried OKLCH for gradients then switched to OKLab as a safer default.
  • Others note there is no single “correct” gradient; going through gray can be undesirable depending on the use case, and hand‑placed stops are often best.
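
The "long path around the circle" issue is ordinary angle interpolation. A small sketch comparing naive numeric hue interpolation with shorter-arc interpolation, using approximate OKLCH hues for magenta and lime (illustrative values only):

    # Interpolating hue as a plain number can take the long way around the circle;
    # CSS-style interpolation defaults to the shorter arc instead.

    def lerp(a, b, t):
        return a + (b - a) * t

    def lerp_hue_shorter(h1, h2, t):
        """Interpolate along the shorter arc of the hue circle."""
        delta = ((h2 - h1 + 180) % 360) - 180
        return (h1 + delta * t) % 360

    h_magenta, h_lime = 330.0, 135.0   # approximate OKLCH hues, for illustration
    for t in (0.0, 0.25, 0.5, 0.75, 1.0):
        naive = lerp(h_magenta, h_lime, t) % 360        # 195° path down through blue/aqua
        short = lerp_hue_shorter(h_magenta, h_lime, t)  # 165° arc up through red/yellow
        print(f"t={t:4.2f}  naive={naive:6.1f}  shorter-arc={short:6.1f}")

Neither path is inherently "right", and much of either route can leave the target gamut, which is the crux of the gradient complaints above.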

Perceptual uniformity vs hue drift

  • Supporters like that OKLCH lets you change lightness while mostly preserving perceived “colorfulness,” making palette building and relative CSS colors easier.
  • Critics call out examples where increasing lightness shifts blue visibly towards cyan, contradicting the article’s claim of “no hue drift.” This is attributed to hitting gamut boundaries: OKLCH tends to preserve chroma and sacrifice hue when clipping.
  • Debate ensues over whether sacrificing hue is acceptable; some see it as a fundamental flaw, others as a design choice in an inherently compromised problem.

Hardware limits and gamuts

  • Wider‑gamut monitors (P3, Rec.2020) reduce but don’t eliminate issues; many problematic gradients request colors “way” outside any practical gamut.
  • There’s discussion on why gamut edges are non‑linear, why some colors (e.g., very bright pure red) don’t exist, and why mapping out‑of‑gamut colors is intrinsically messy.

Accessibility and contrast

  • Commenters remind that choosing colors also needs contrast checking (APCA vs WCAG 2).
  • APCA is seen as perceptually better, especially for dark themes, but WCAG 2 remains a legal requirement in many contexts; tools exist to test both, and the WCAG 2 formula itself is sketched after this list.
  • There is a side debate about contrast formula design (ratio vs difference, edge cases, approximations).
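
For reference, the WCAG 2 ratio mentioned above is a simple formula over relative luminance; a minimal sketch:

    # WCAG 2 contrast ratio between two sRGB colors:
    # linearize the channels, compute relative luminance,
    # then ratio = (L_lighter + 0.05) / (L_darker + 0.05).

    def _linearize(c8: int) -> float:
        c = c8 / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def relative_luminance(rgb: tuple[int, int, int]) -> float:
        r, g, b = (_linearize(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def wcag_contrast(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
        lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    print(round(wcag_contrast((0, 0, 0), (255, 255, 255)), 2))        # 21.0, black on white
    print(round(wcag_contrast((119, 119, 119), (255, 255, 255)), 2))  # ~4.48, mid gray on white

APCA uses a different perceptual model rather than a simple ratio, which is part of the side debate above.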

Practical usage and tooling

  • Several people report good experiences using OKLCH plus relative CSS colors to derive entire themes from a few base colors.
  • Pain points: learning non‑HSL hue numbers, difficulty adding “warmth,” handling out‑of‑gamut issues, and lack of polished gradient pickers that expose clipping options.
  • Consensus: OKLCH is a powerful, more perceptual base, but designers still need taste, hand‑tuning, and awareness of its failure modes.

Ban me at the IP level if you don't like me

IP Blocking, Geoblocking, and ASNs

  • Many operators increasingly block entire countries (often China, Russia) or cloud ASNs (Tencent, Alibaba, etc.) to cut 80–95% of malicious traffic with minimal effort.
  • Some small/local businesses whitelist only domestic ISPs or regions; others block non‑US or non‑local IPs if they see no business upside abroad.
  • Tools like MaxMind, IP2Location, IPinfo, Team Cymru, BGP/ASN lookup sites, and Cloudflare geo features are widely used to derive country/ASN-based rules (a minimal CIDR membership check is sketched below).
  • Critics note this can become “AS death penalty” and may be hard to maintain as providers constantly reshuffle prefixes.
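
A minimal sketch of the CIDR-matching step behind such rules, using only the Python standard library; the prefixes below are documentation ranges standing in for real country or ASN lists, which would need regular refreshing as providers reshuffle prefixes:

    # Check client IPs against a set of blocked prefixes (country/ASN lists in practice).
    import ipaddress

    BLOCKED_PREFIXES = [ipaddress.ip_network(p) for p in (
        "203.0.113.0/24",    # placeholder for a hosting/ASN range (TEST-NET-3)
        "198.51.100.0/24",   # placeholder for a country allocation (TEST-NET-2)
        "2001:db8::/32",     # placeholder IPv6 range (documentation prefix)
    )]

    def is_blocked(client_ip: str) -> bool:
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in BLOCKED_PREFIXES)

    for ip in ("203.0.113.7", "192.0.2.10", "2001:db8::1"):
        print(ip, "->", "blocked" if is_blocked(ip) else "allowed")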

Residential Proxies, CGNAT, and VPNs

  • A core objection: bad actors increasingly use residential proxies, CGNAT and mobile carriers, so IP blocking risks harming legitimate users while abusers rotate IPs.
  • Others argue that short‑lived bans, IPv6 blocks at /64–/48 granularity, and auto‑expiring rules keep collateral damage acceptable, especially for non‑critical or local services.
  • There’s debate on whether blocking residential proxies is desirable “pressure” on ISPs/users or an unfair externality on innocent customers and travelers.

Bot Behavior and Impact

  • Reports of aggressive crawlers: thousands to hundreds of thousands of requests per day, hammering code-hosting for diffs/snapshots, forums, wikis and blogs.
  • Many disregard robots.txt, use vast IP pools (cloud + residential), spoof User‑Agents, or even hit non‑HTTP ports; some large AI/LLM and social/big‑tech bots are seen as especially noisy.
  • For simple static sites this is often just log noise; for dynamic or diff-heavy endpoints it can overload CPUs, caches and disks.

Mitigations and Defense Strategies

  • Common tactics:
    • Firewall/WAF rules by IP/ASN/country.
    • Fail2ban, tarpits, rate limiting, slow responses, port knocking, moving SSH, blocking unused ports.
    • Special handling of suspicious URLs (e.g., query params, certain paths) or “notbot” query flags.
    • Serving junk or zip bombs to bad UAs; others argue this still wastes bandwidth and complexity.
  • Some advocate allowlist-style architectures (identity‑aware proxies, tokenized gatekeepers, VPN-like access) for personal or small sites.
  • Others see this as an endless whack‑a‑mole and argue the long‑term answer is more efficient applications and infrastructure.

Whitelists, Shared Lists, and Central Repos

  • There are attempts to curate “good bot” lists and country/ASN CIDR sets; some run personal good/bad/data‑center lists feeding nginx or rate limiters.
  • Calls for a neutral, industry‑run bad-IP registry are tempered by concerns over staleness, overlap with legitimate traffic, and xkcd‑style proliferation of competing standards.

Collateral Damage, Ethics, and Law

  • Travelers and VPN users describe severe friction: blocked from buying tickets, cancelling subscriptions, or accessing support when abroad or behind foreign IPs.
  • There’s an extended side debate on card‑network chargebacks and whether geoblocking or blocking cancellation paths is compliant or ethical.
  • Philosophically, many assert “my server, my rules”: HTTP is merely a request, responses are optional; others worry about a fragmented, hostile “intranet” where over‑blocking and pseudo‑security dominate.

Japan has opened its first osmotic power plant

How the plant actually works

  • Commenters clarify the key point: the osmotic plant is colocated with a desalination plant.
  • It uses:
    • High-salinity brine from desalination as the “salty” side.
    • Partially treated wastewater (low salinity) as the “fresh” side.
  • As the two streams mix across membranes, the system:
    • Recovers some energy from the salinity gradient.
    • Dilutes the brine before discharge, lowering environmental impact.
  • This is likened to a recuperator or regenerative braking: harvesting energy from a process that would otherwise waste it.

Efficiency, scale, and economics

  • From the reported 880,000 kWh/year, commenters infer an average output of ~100 kW; rough estimates suggest this offsets ~5% of the desalination plant’s power use (see the arithmetic after this list).
  • Many see it as “making desalination ~5% more efficient” rather than a standalone power source.
  • Several question whether it’s better to:
    • Fully treat wastewater to drinking quality and desalinate less seawater instead, or
    • Simply dilute brine with wastewater without bothering to extract energy.
  • Others argue pilot projects are needed to get real-world data on costs (CAPEX/OPEX) and performance before judging.
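
The ~100 kW figure is direct arithmetic on the reported annual output; the ~5% offset additionally depends on the desalination plant's average load, which is assumed below purely for illustration:

    # Average output implied by the reported annual generation.
    annual_kwh = 880_000
    hours_per_year = 24 * 365
    avg_kw = annual_kwh / hours_per_year
    print(f"average output ~ {avg_kw:.0f} kW")   # ~100 kW

    # Illustrative only: if the colocated desalination plant averaged ~2 MW
    # (an assumed figure, not from the article), the osmotic plant offsets ~5%.
    assumed_desal_kw = 2_000
    print(f"offset ~ {avg_kw / assumed_desal_kw:.1%} of an assumed {assumed_desal_kw} kW load")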

Water reuse, psychology, and alternatives

  • Thread notes strong public resistance to “drinking treated sewage” even when technically safe, leading cities to dump treated water to rivers/oceans and then build desal plants.
  • Osmotic power is seen as a way to get extra value before that discharge.
  • Some worry it “wastes” low-salinity water for a small power gain; replies say local hydrology and excess wastewater can make this acceptable but site-dependent.

Environmental and broader energy context

  • Better brine management (less-saline discharge) is viewed as a significant co-benefit.
  • One subthread contrasts osmotic power (energy recovery) with solar/wind (new energy input).
  • Another spirals into whether nuclear is “renewable,” debating fuel exhaustion, breeder reactors, costs, safety, and regulation, but this remains tangential to the osmotic project itself.

The Unix-Haters Handbook (1994) [pdf]

Thread meta and sense of time

  • Commenters note this PDF has recurred on HN for over 15 years; the 2008 thread is now closer to the book’s 1994 publication than to today.
  • Several people reflect on aging via comparisons like “birthdate to WWII vs birthdate to now”, finding it unsettling.
  • Europeans describe vivid personal/family memories of WWII, the Iron Curtain, and lingering physical scars (bullet holes, bunkers, ruined city centers), contrasting that “history-book distance” with how close it actually is.

Ritchie anti-foreword and changing language

  • The anti-foreword is praised for its brutal but witty metaphor, interpreted as an extremely polite way of telling the authors off.
  • A dated phrase quoted from the book sparks a side discussion about likely racist origins and the difficulty of judging historical language by present norms.
  • Some argue “the past is a foreign country” and note that future generations will likely judge today’s norms just as harshly.

Systemd, PulseAudio, and modern Unix complexity

  • One commenter proposes a “systemd-haters handbook” listing inconsistencies: separate tools for logs (journalctl vs systemctl), obscure top-level options, and uneven feature coverage across unit types (e.g., retries for services but not mounts).
  • Others defend systemd: it unifies many previously scattered tools, its docs are deep as reference material, and features like cgroups, service readiness, and event-based activation improved reliability.
  • Critics call it overgrown, ergonomically hostile, and driven by niche enterprise needs; they liken its evolution to accreting hacks it originally set out to replace.
  • PulseAudio vs PipeWire is used as analogy: PA was initially painful but enabled new abstractions; PipeWire later cleaned things up. Some expect or hope for a “systemd successor” of similar nature.
  • One person notes many early PA problems were actually broken audio drivers; defenders of systemd similarly emphasize underlying complexity.
  • There’s disagreement over what “reliability” in an init system should mean: merely starting/stopping services vs keeping them alive and supervised.

“Everything is a file” and alternative designs

  • New Linux users say AI help makes Unix’s text/file orientation more approachable than Windows.
  • Others point out “everything is a file” was never fully true on Unix (notably for networking), and that Plan 9 and Inferno extend the model more consistently (e.g., /net with file-like connections).
  • Long subthread compares Unix file descriptors vs Windows kernel handles, and what it would mean if network interfaces were actual character devices (raw frames, broadcast, restrictions).
  • Plan 9’s 9P and “connections as files” are praised as a cleaner, more object-like evolution of the Unix idea; some connect this to OO and Smalltalk-style late binding.

Unix strengths, weaknesses, and historical context

  • Several note that the book’s attacks were aimed at commercial Unix circa 1993; many specific targets (csh, sendmail, some reliability issues) are either gone or much improved.
  • Others stress that the fundamental pain points—C’s design, the shell’s quirks, rough edges in tooling—remain, even as replacements proliferate without universal adoption.
  • One long defense of Unix emphasizes its process model, cheap composable processes, stdin/stdout, pipes, backgrounding, and inetd as uniquely empowering compared to systems like VMS.
  • This “Lego of processes” is contrasted with more closed or static environments; despite arcane syntax, Unix is described as unusually pliable in the hands of non-admin users.

Lisp machines and alternative OS ideas

  • The book’s Lisp-machine nostalgia is seen as historically fascinating but mostly irrelevant to mainstream practice; still, commenters link to modern FPGA and software Lisp-machine projects.
  • Others argue many Lisp-machine hardware advantages have been absorbed into modern CPUs (caches, multicore), leaving less reason for dedicated hardware, though the environments remain intellectually influential.

Cutler, NT, and Unix lore

  • A side discussion revisits the story that a well-known NT architect “hated Unix.” Commenters argue this is mostly myth built on a few anecdotes and jokes.
  • Some of NT’s non-Unix design choices (HAL, object manager, async I/O, user-mode “personalities”) are portrayed as legitimate architectural alternatives rather than pure anti-Unix sentiment.

The book as artifact and cultural touchstone

  • Several people own the physical edition with the “barf bag” and recall it as simultaneously unfair, hilarious, and educational.
  • It’s described as a mix of still-relevant criticism, outdated gripes, and great one-liners; for many, it was an early, accessible gateway to understanding Unix’s culture and flaws.

Nvidia DGX Spark

Performance Claims & FP4 Marketing

  • Debate over Nvidia’s “1 PFLOP” headline: several note this is FP4 with structured sparsity, not FP16/FP8, and call that framing misleading; others argue the page does state “FP4 with sparsity” in multiple places so it’s not being hidden, just over‑hyped.
  • Concern that users will see far lower real‑world FLOPs (tens of TFLOPs) on typical BLAS/FP16 workloads, echoing earlier H200 marketing vs practical numbers.

Memory, Bandwidth & Model Size

  • 128 GB of unified LPDDR5x is seen as the main selling point: it can fit roughly 4× larger models than a 32 GB 5090, enabling 200B‑parameter models at low precision (FP4/quantized).
  • However, the ~270 GB/s memory bandwidth is widely criticized as “gimped” and a fundamental bottleneck for LLM token generation and training; comparisons note that the M4 Max, M3 Ultra, and 5090 all have 2–7× more bandwidth (a rough decode‑speed bound is sketched after this list).
  • Some argue this is fine for non‑LLM AI or fine‑tuning, others say it makes the “AI supercomputer on your desk” positioning questionable.
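
The bandwidth complaint follows from a simple bound: a memory-bound decoder streams its weights roughly once per generated token, so single-stream tokens per second are capped near bandwidth divided by weight bytes. A sketch with assumed model sizes, precisions, and bandwidths (the 5090 figure is likewise approximate):

    # Rough upper bound on single-stream decode speed for a bandwidth-bound LLM:
    #   tokens/s <= memory bandwidth / bytes of weights read per token.
    # Model sizes, precisions, and bandwidths below are illustrative assumptions.

    def max_tokens_per_s(bandwidth_gb_s: float, params_b: float, bytes_per_param: float) -> float:
        weight_gb = params_b * bytes_per_param
        return bandwidth_gb_s / weight_gb

    configs = [
        ("Spark-class, 270 GB/s, 120B params @ ~0.5 B/param (FP4)", 270, 120, 0.5),
        ("Spark-class, 270 GB/s, 200B params @ ~0.5 B/param (FP4)", 270, 200, 0.5),
        ("5090-class, ~1800 GB/s, 30B params @ ~1 B/param (FP8)",  1800, 30, 1.0),
    ]
    for name, bw, params, bpp in configs:
        print(f"{name}: <= {max_tokens_per_s(bw, params, bpp):5.1f} tok/s")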

Comparisons to Other Hardware

  • Versus RTX 5090: consensus that 5090 is far faster for small/medium models, while Spark wins only on maximum model size. Several say “small models fast: 5090; large models slow: Spark.”
  • Versus Jetson Thor: Thor appears to have more FP4 compute at lower cost but is inference‑oriented; Spark is pitched as training/fine‑tuning‑oriented with better cache and NVLink/ConnectX options.
  • Versus AMD Strix Halo and Apple M‑series: Spark’s bandwidth is similar to Strix Halo (considered overrated by some) and much lower than high‑end Macs. Opinions split on whether future Mac Studios will make Spark irrelevant for single‑box inference.

Networking & Form Factor

  • Initial confusion over only “10 GbE” is corrected: Spark includes a ConnectX‑7 NIC with 2×200 Gbps plus USB4/Thunderbolt‑class ports. Some see this as a key advantage for clustering vs Macs.
  • Bundled two‑unit configurations leverage the 200 Gbps link to run ~400B‑parameter models.

Software, OS & Longevity

  • Jetson history (old Ubuntu bases, slow upgrades) makes some wary; others note DGX OS 7 is based on Ubuntu 24.04 and expect better support.
  • Concern that kernels and CUDA stacks may be relatively locked‑down, limiting hobbyist flexibility.

Value, Segmentation & Availability

  • Price points around $3–4K (ASUS/MSI variants, DGX Spark bundles) lead many to judge it poor $/TFLOP and $/bandwidth compared with 5090 or RTX Pro 6000.
  • Several see it as heavily segmented (LPDDR5x, no HBM, limited bandwidth, no NVLink on nearby products) to protect Nvidia’s datacenter GPUs.
  • Reports of delays, HDMI issues, and very few units in the wild fuel “paper launch” skepticism.

A bubble that knows it's a bubble

Is AI a bubble and how big?

  • Many commenters think we are in a financial bubble around LLMs, not a “capabilities bubble”: the tech is real, but valuations and spending are ahead of sustainable economics.
  • Some argue even a modest, durable 2× software dev productivity gain would justify large valuations; others say such gains are unproven and mostly appear in toy/prototype work.
  • There’s concern that markets have already priced in near‑ubiquitous AI automation; if progress plateaus, a sharp correction is expected.

Compute, GPUs, and “infrastructure”

  • Strong disagreement on whether current GPU build‑out is analogous to railroads or fiber:
    • One side: GPUs become obsolete in ~3–5 years; this is not durable infrastructure.
    • Other side: data center buildings, power, cooling, fiber, improved logistics and fab capacity are long‑lived and will enable future uses, even if current GPUs are scrapped.
  • Some note Moore’s Law is slowing, so current compute might stay “good enough” longer than past clusters. Others see this optimism as grasping for silver linings.

Robotics and humanoid hype

  • Several expect real long‑term value in robotics, but view the current humanoid craze as over‑engineered, expensive, and bubble‑like.
  • Wheels vs legs: wheels are cheaper, more reliable, and adequate in many ADA‑compliant environments; bipedal robots only make sense where humanlike mobility is essential.
  • Discussion branches into disability tech (wheelchairs vs exoskeletons), with cost, simplicity, and safety cited as reasons wheelchairs still dominate.
  • Privacy and cloud dependence are flagged as major barriers to household robots; luxury and business markets may appear first, deepening inequality.

Economics of AI companies

  • Example: huge raises at multibillion valuations with large losses are seen as classic bubble markers; defenders invoke “grow at all costs” playbooks and VC power‑law returns.
  • Skeptics stress that most such companies historically fail; high margins are uncertain given intense competition and compute costs.
  • There is debate over whether regulatory action (especially in Europe) will constrain dominant AI platforms and their hoped‑for winner‑take‑most economics.

Historical analogies and creative destruction

  • Comparisons span railways, fiber, dot‑com, housing, VR, 3D printing, crypto, and Japan’s long stagnation.
  • Some endorse the “victims unknowingly funded the future” view (fiber after dot‑com); others note many bubbles (VR headsets, some hardware) leave little reusable infrastructure.
  • A subthread clarifies that capital, money, and real productive capacity are distinct: bubbles can destroy useful time and misallocate resources even if money is later “recreated.”

Altman, incentives, and regulation

  • Altman’s simultaneous “bubbly” rhetoric and talk of “trillions” in AI investment are seen by some as price‑talk to suppress startup valuations and intensify regulatory moat‑building.
  • Others see self‑contradictory messaging on AI existential risk and regulation as evidence of self‑interested attempts at regulatory capture: “AI is dangerous, so only we should build it.”

Systemic risk and leverage

  • Several note that unlike past manias, retail investors have limited access to early AI equity, which may reduce broad household devastation if/when it pops.
  • Others worry less about immediate financial collapse and more about a lost decade of misdirected engineering talent and underfunded “real” public research.

Who survives / what remains?

  • Speculation ranges from “big cloud and chip vendors will be fine” to extreme scenarios where a crash plus geopolitics and climate undermine major Western tech firms.
  • More moderate voices expect:
    • Data centers and power/fiber build‑out to persist.
    • AI tools to remain as niche but valuable productivity aids (similar to 3D printing).
    • FOSS and local, non‑cloud software to be relatively resilient.

Uncle Sam shouldn't own Intel stock

Role of Government Equity in Intel

  • Many see a 10% federal stake as “state capitalism” or “lemon socialism”: socializing downside while leaving control and much of the upside private.
  • Others argue equity is more honest than pure grants: if public money saves or strengthens a firm, taxpayers should share profits rather than merely backstop losses.
  • Several note the CHIPS Act already included profit‑sharing conditions; turning that into equity is a material change, not a new principle.

Legal / Process Concerns

  • A major objection is that the equity demand appears retroactive: CHIPS grants were already authorized by Congress, and adding stock conditions now may require amending the law.
  • Some call it “strong‑arming” or a “Darth Vader: altering the deal” move; they question legality and anticipate court challenges.
  • Others reply that corporations exist at the pleasure of the legal system; Congress could in theory mandate equity stakes broadly.

Intel, Fabs, and National Security

  • Broad agreement that advanced fabs on US soil are strategically vital, especially given Taiwan/China risks and TSMC’s dominance.
  • Disagreement on Intel’s condition: some see it as not a classic bailout case (unlike 2008 autos/banks), others say Intel’s manufacturing arm is close to worthless without massive support.
  • Several stress that fab know‑how, tooling, and process leadership cannot be “spun up” quickly or easily transferred via bankruptcy; letting Intel fail risks losing irreplaceable capability.
  • Counterpoint: Intel’s long mismanagement, missed mobile/AI/foundry opportunities, and inferior nodes make it a poor vehicle; some would have preferred splitting design and fab or backing alternative fabs.

Equity vs. Taxes and Grants

  • One line of argument: governments already “own a slice” of every business via taxes; grants are not “money burned” because returns come through corporate, payroll, and income tax. Equity is redundant and deepens politicization/insider‑trading risk.
  • Others insist that if risk is socialized through bailouts and industrial policy, equity (or other explicit upside sharing) is the only way to avoid one‑way transfers to shareholders.

Socialism, State Capitalism, and Ideology

  • Extensive semantic debate: some say this is “socialism,” others insist socialism requires worker control, not state shareholding.
  • Many note that terms like capitalism/socialism/communism have become so overloaded that they now hinder more than help policy discussion.
  • Some highlight the irony of anti‑“socialist” rhetoric coexisting with repeated US precedents for equity stakes and bailouts (autos, banks, rail, airlines).

Alternative Policy Ideas

  • Proposals include: clear exit conditions and timelines for divesting the stake; using profit‑sharing instead of stock; demanding governance concessions; or forcing broader on‑shoring across defense suppliers rather than picking Intel alone.
  • A minority argue for letting Intel fail and redistributing fabs via bankruptcy, but are met with skepticism over feasibility and timeline.

Starship's Tenth Flight Test

Musk, politics, and Tesla/SpaceX

  • Several comments argue Musk’s visible politics alienated early, mostly liberal Tesla buyers and turned Tesla from an “eco virtue signal” into a MAGA-aligned status symbol, possibly opening a new market of previously anti-EV buyers.
  • Others counter that most car buyers don’t care about CEO politics, and that most CEOs avoid politics precisely to prevent this problem.
  • Some see Musk’s political distractions as harmful to his companies; others say operations are largely driven by professional leadership and organizations that can function without his day‑to‑day involvement.

Musk’s engineering role

  • One side claims SpaceX/Tesla engineers try to keep Musk away from technical decisions, citing Cybertruck compromises and alleged internal stories.
  • The other side insists Musk was central to Falcon 9 reusability and key Starship concepts (e.g., tower “chopstick” catch), arguing he has strong engineering intuition despite lacking formal credentials.

Starship vs. Space Shuttle and Falcon 9

  • Debate over whether Starship is more impressive than the Shuttle:
    • Shuttle praised as a 1970s–80s engineering marvel with reusable orbiters and boosters but criticized as unsafe, extraordinarily costly, and a long‑term programmatic failure.
    • Falcon 9 cited as having surpassed Shuttle in reliability and cost/kg, with rapid turnaround and profitable reuse.
    • Starship is framed as aiming for a harder target: fully reusable super‑heavy lift with dramatically lower costs and fast reuse, but it’s still in a risky test phase.

Test philosophy, failures, and simulations

  • Multiple comments explain why “bulletproof” simulations aren’t possible: complex coupled physics (turbulence, combustion, slosh, structural flex), approximations, manufacturing tolerances, and computational limits.
  • Starship tests deliberately push hardware to failure to gather real data (e.g., aggressive reentry, simulated engine‑out landing burns, tile removals, experimental heat‑shield materials).
  • SpaceX is seen as favoring build–fly–iterate over exhaustive pre‑flight analysis, trading more test losses for faster learning.

Economics and use‑cases for Starship

  • Skeptics question whether Starship will be economically justified given Falcon 9’s success, limited heavy‑lift demand, and unproven reuse of the upper stage.
  • Supporters argue:
    • Starlink alone could use Starship’s capacity, and lower $/kg will change what payloads are worthwhile.
    • Reusing the second stage could meaningfully reduce costs.
    • Landing legs and human access/egress are solvable engineering problems once the vehicle itself is reliable.

Private power, taxpayer funding, and security

  • Some are uneasy that a privately controlled, partially taxpayer‑funded system of unprecedented capability is effectively under a single individual’s influence, especially given prior controversies (e.g., Starlink coverage decisions in Ukraine).
  • Others argue weaponizing Starship is impractical (liquid fuel, domestic launch sites, need for co‑conspirators, inevitable military response) and note that most advanced weapons systems are already built by private contractors.
  • Broader debate touches on neoliberal patterns: public funding without public ownership, and whether that creates moral hazards.

Human vs. robotic exploration and colonization

  • One camp calls crewed exploration and interplanetary colonization bad investments compared to robotics, emphasizing extreme cost, risk, and hostile environments.
  • Counterarguments:
    • Human presence drives political support and budgets that also fund robotic missions.
    • Humans on site remain far more capable than robots for complex, improvisational work.
    • Long‑term goals include exploiting off‑Earth resources, moving industry off‑planet, and building large human habitats; crewed infrastructure is seen as a prerequisite.

Inspiration vs. criticism

  • Many express deep awe at Starship and Falcon launches, seeing them as humanity’s most inspiring current technological efforts.
  • Others see Starship as also reflecting humanity’s flaws: concentration of power, political toxicity, and opportunity costs relative to social needs.
  • There’s recurring tension between admiration for SpaceX’s engineers and discomfort with Musk’s behavior; some choose to disengage entirely, others separate the work from the individual.

Launch status and infrastructure

  • The specific Flight 10 attempt discussed was scrubbed due to ground-system issues and later weather (“anvil cloud”).
  • Commenters note ongoing iterative changes to launch infrastructure (e.g., constant rebuilds of the Starship pad at KSC informed by Texas experience) and share personal observations of the sheer scale of modern launch facilities.

Looking back at my transition from Windows to Linux

Microsoft Office & File-Format Lock-In

  • Many see Office, not Windows, as the real barrier to Linux: users mainly need Word/Excel/PowerPoint and must interoperate perfectly with partners.
  • Proprietary formats are described as a “roach motel”: data goes in but can’t leave with full fidelity; XML standardization hasn’t fixed this in practice.
  • Google Docs is sufficient for some, but many orgs block it for policy/security reasons. Compatibility between Office and G Suite (or others) is seen as a hard non‑negotiable.
  • LibreOffice is viewed as “good enough” for 80–90% of everyday tasks, but inadequate for advanced Excel usage and fragile on complex .docx/.xlsx; UI quality and annoyances are recurring complaints.
  • OnlyOffice and WPS are cited as having better MS Office compatibility; some consider the browser version of Office 365 usable but feature‑limited and buggy.

Gaming on Linux

  • Big progress via Steam/Proton; many single‑player titles “just work,” and some users now game almost entirely on Linux.
  • Online games with invasive anti‑cheat (Battlefield, Call of Duty, Fortnite, some racing sims) remain major holdouts; this is framed as a chicken‑and‑egg problem with anti‑cheat vendors and Linux support.
  • Debate over kernel‑level anti‑cheat: some refuse it on security/privacy grounds; others prioritize stopping cheaters and are willing to trust game vendors.
  • Workarounds include dual‑booting, Windows VMs with GPU passthrough, or keeping a separate Windows box; opinions vary on whether this is acceptable overhead.

Linux Desktop Usability & Reliability

  • Several commenters report that Linux desktops have become significantly smoother in the last 5–10 years, often feeling more stable and fixable than Windows/macOS once set up.
  • Pain points: Bluetooth quirks, trackpad behavior and gestures, battery management, swap and out‑of‑memory (OOM) behavior, occasional crashes or hangs under memory pressure, and hardware acceleration issues (e.g., YouTube playback).
  • Some argue these can be mitigated with tuning (swap settings, earlyoom) or careful hardware choices (ThinkPads, AMD GPUs, OEMs like System76/Lenovo with Linux preinstalled). Others see this need for tuning as disqualifying for average users.
  • ChromeOS/Chromebooks and ChromeOS Flex are highlighted as the only truly mass‑market “Linux desktop” that hides complexity, especially for older or nontechnical users.

Freedom, Surveillance, and Corporate Control

  • A strong thread links migration to Linux with concerns about privacy, telemetry, cloud lock‑in, and dark patterns in Windows 10/11 and mainstream software.
  • Some see “RMS‑style freedom” as increasingly vindicated; others consider the freedom rhetoric overwrought but agree that vendors are growing more hostile to user control.
  • GitHub and VS Code are mentioned as extending similar surveillance/lock‑in dynamics into the open‑source ecosystem.

Adoption Barriers & Ecosystem Issues

  • OEM “Windows tax” and difficulty buying machines without Windows remain structural barriers.
  • For many, work constraints (Citrix, Adobe Creative Suite, advanced Office workflows) force retention of at least one Windows machine or VM.
  • Cross‑distro app distribution, proprietary software support, and economic incentives for maintaining unglamorous parts of the stack are seen as unresolved systemic problems.

Everything I know about good API design

Authentication & Credentials

  • Long‑lived API keys vs tokens sparked heavy debate. Several argue a refresh‑token + short‑lived access‑token model is strictly more secure: smaller blast radius, leak containment (especially in logs and backups), forced rotation, better auditing, and natural rate‑limiting.
  • Others note that a refresh token is just another long‑lived secret unless you build supporting flows and monitoring; the simplest model is still a static header key or personal access token (PAT), especially for scripts and non‑engineers.
  • Some propose treating the “API key” itself as a refresh token with a tiny extra auth endpoint, avoiding full OAuth/OIDC complexity (a sketch of this idea follows the list).
  • Mutual TLS is floated as an ideal replacement for shared secrets, but deployment and operational friction make it unpopular for many consumers.
  • One contrarian view: for one‑off or classroom usage, allowing username/password auth directly to the API can dramatically simplify onboarding, with tight rate limits as mitigation.
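  • As a rough illustration of the “API key as refresh token” idea above, here is a minimal sketch (the function names and in‑memory stores are hypothetical, not from the thread): the long‑lived key is only ever presented to one small exchange endpoint, which mints short‑lived access tokens used for every other call.

      import secrets
      import time

      # Hypothetical in-memory stores; a real service would back these with a database.
      API_KEYS = {"key_live_example": "account_42"}  # long-lived secrets, used only at the exchange step
      ACCESS_TOKENS = {}                             # short-lived token -> (account, expiry timestamp)

      ACCESS_TOKEN_TTL = 15 * 60  # a short lifetime keeps the blast radius of a leaked token small


      def exchange_key(api_key: str) -> str:
          """Trade a long-lived API key for a short-lived access token."""
          account = API_KEYS.get(api_key)
          if account is None:
              raise PermissionError("unknown API key")
          token = secrets.token_urlsafe(32)
          ACCESS_TOKENS[token] = (account, time.time() + ACCESS_TOKEN_TTL)
          return token


      def authenticate(access_token: str) -> str:
          """Resolve an access token to an account, rejecting expired or unknown tokens."""
          entry = ACCESS_TOKENS.get(access_token)
          if entry is None:
              raise PermissionError("unknown token")
          account, expires_at = entry
          if time.time() > expires_at:
              del ACCESS_TOKENS[access_token]
              raise PermissionError("token expired; exchange the API key again")
          return account


      token = exchange_key("key_live_example")
      print(authenticate(token))  # -> account_42

    Because clients send only the short‑lived token on ordinary requests, that is what tends to end up in logs and backups, which is the “smaller blast radius” argument in the first bullet.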

API Versioning Strategies

  • Strong split: some want versioning (“/v1”) baked in from day one for future‑proofing, discoverability, and the ability to deprecate old semantics cleanly.
  • Others say most APIs never actually reach “/v2”; real‑world evolution tends to add fields/options, introduce new endpoints with better names, or replace whole services, making URL versioning mostly ceremony.
  • Concern: multiple versions multiply maintenance and bug‑fix surfaces; better to treat new versions as a last resort and, if needed, implement old versions as shims atop new ones.
  • Alternatives discussed:
    • Version in headers or media types (e.g., custom “Version” header or vendor MIME types), keeping URLs as stable resource identifiers.
    • Version in client headers with servers rejecting too‑old clients (useful for apps you control); a sketch of this approach follows the list.
  • Some object philosophically to “/v1/” in URLs because it versions the implementation, not the resource; others prioritize practical migration over purity.
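  • A minimal sketch of the header‑based alternative, combined with rejecting too‑old clients (the header name, status codes, and handler names are illustrative assumptions, not from the article): the URL stays a stable resource identifier and the server dispatches on a client‑supplied version header.

      # Hypothetical dispatch for header-based versioning: the resource URL never
      # changes; the representation is negotiated via a "Version" request header.
      MIN_SUPPORTED = 2
      CURRENT = 3


      def get_widget_v2(widget_id: str) -> dict:
          return {"id": widget_id, "name": "Widget"}


      def get_widget_v3(widget_id: str) -> dict:
          # v3 renamed "name" to "display_name"; old clients keep the old shape.
          return {"id": widget_id, "display_name": "Widget"}


      HANDLERS = {2: get_widget_v2, 3: get_widget_v3}


      def handle_get_widget(headers: dict, widget_id: str):
          version = int(headers.get("Version", CURRENT))
          if version < MIN_SUPPORTED:
              # Mostly practical for clients you control: force an upgrade
              # instead of maintaining every historical shape forever.
              return 400, {"error": f"client too old; minimum supported version is {MIN_SUPPORTED}"}
          handler = HANDLERS.get(version)
          if handler is None:
              return 400, {"error": f"unknown version {version}"}
          return 200, handler(widget_id)


      print(handle_get_widget({"Version": "2"}, "w1"))  # (200, {'id': 'w1', 'name': 'Widget'})
      print(handle_get_widget({"Version": "1"}, "w1"))  # (400, client too old)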

Idempotency & Data Consistency

  • Many insist idempotency support is essential, not optional; Stripe‑style idempotency keys are cited positively.
  • Storing idempotency keys in Redis as a separate store is criticized: unless the key is written atomically with the underlying mutation, failure modes such as “key written but DB change failed” break the guarantee.
  • Multiple commenters prefer storing idempotency identifiers alongside the domain data in the same transaction (a minimal sketch follows this list).
  • Semantics of DELETE: some APIs always return 204 if the resource ends up absent (stronger idempotent feel), others prefer 404 for better client information.
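  • To make the “same transaction” preference concrete, here is a minimal sketch using SQLite (the table and function names are illustrative): the idempotency key is inserted in the same transaction as the domain row, so either both commit or neither does, and a retry with the same key simply returns the original result.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE payments (id INTEGER PRIMARY KEY, amount_cents INTEGER NOT NULL);
          -- The primary-key constraint on the key is what enforces "at most once".
          CREATE TABLE idempotency_keys (
              key TEXT PRIMARY KEY,
              payment_id INTEGER NOT NULL REFERENCES payments(id)
          );
      """)


      def create_payment(idempotency_key: str, amount_cents: int) -> int:
          """Insert the domain row and the idempotency key atomically."""
          try:
              with conn:  # one transaction: commits on success, rolls back on error
                  cur = conn.execute(
                      "INSERT INTO payments (amount_cents) VALUES (?)", (amount_cents,)
                  )
                  payment_id = cur.lastrowid
                  conn.execute(
                      "INSERT INTO idempotency_keys (key, payment_id) VALUES (?, ?)",
                      (idempotency_key, payment_id),
                  )
                  return payment_id
          except sqlite3.IntegrityError:
              # Retried request: return the payment created by the original attempt.
              row = conn.execute(
                  "SELECT payment_id FROM idempotency_keys WHERE key = ?",
                  (idempotency_key,),
              ).fetchone()
              return row[0]


      first = create_payment("req-123", 5000)
      retry = create_payment("req-123", 5000)
      assert first == retry  # the retry did not create a second payment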

Pagination & Cursors

  • Cursor‑based pagination is praised for stability with concurrent inserts and for infinite scrolling, especially when cursors are opaque and can embed query state or routing hints.
  • Downsides: hard to “jump to page N,” and more complexity versus simple page/offset.
  • There’s frustration with tiny page sizes (or mandatory pagination) that force many sequential round‑trips; recommendation is generous defaults and pagination as a tunable option.
  • On performance: OFFSET requires scanning/counting preceding rows, while “WHERE id > cursor LIMIT N” can leverage indexes more efficiently; some DB implementations might optimize, but not all do.
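  • A minimal sketch of the keyset/cursor approach described in the last point (the schema and column names are illustrative): the “cursor” handed back to the client is just the last id seen, so fetching the next page is an index seek rather than a scan past all preceding rows.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, body TEXT)")
      conn.executemany("INSERT INTO items (id, body) VALUES (?, ?)",
                       [(i, f"item {i}") for i in range(1, 101)])

      PAGE_SIZE = 25


      def page_by_offset(page: int) -> list:
          # OFFSET has to walk past every preceding row before returning anything,
          # and results shift if rows are inserted or deleted between requests.
          return conn.execute(
              "SELECT id, body FROM items ORDER BY id LIMIT ? OFFSET ?",
              (PAGE_SIZE, page * PAGE_SIZE),
          ).fetchall()


      def page_by_cursor(after_id):
          # Keyset pagination: seek the index to the cursor, then read PAGE_SIZE rows.
          rows = conn.execute(
              "SELECT id, body FROM items WHERE id > ? ORDER BY id LIMIT ?",
              (after_id or 0, PAGE_SIZE),
          ).fetchall()
          next_cursor = rows[-1][0] if len(rows) == PAGE_SIZE else None
          return rows, next_cursor


      # Usage: walk every page with the cursor variant.
      cursor = None
      while True:
          rows, cursor = page_by_cursor(cursor)
          if cursor is None:
              break

    In practice the cursor would be encoded opaquely (e.g., wrapping the last key plus query state), matching the earlier point about opaque cursors.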

Stability, Users, and “Never Break Userspace”

  • The “never break userspace” principle is widely endorsed but clarified: you must clearly declare what’s stable and can be relied on, versus internal/kernel‑like interfaces you reserve the right to change.
  • Internal consumers are “real users” too; even if you can force them to update, churn still creates real cost, so dogfooding and upfront spec collaboration are encouraged.
  • With internal APIs, instrumentation enables targeting specific consumers during migrations and sunsetting old versions more aggressively than with external customers.

Broader Meaning of “API”

  • Several participants lament that “API” is now often used as shorthand for “HTTP+JSON web API,” whereas historically it referred to any application programming interface: libraries, syscalls, ABIs, in‑process interfaces, etc.
  • Distinctions are drawn between API vs protocol, API vs ABI, and function signatures vs the higher‑level contract and usage rules.
  • Some younger developers explicitly say they still primarily think of APIs/ABIs rather than web endpoints, while others find the web‑only usage imprecise and confusing.

GraphQL’s Role

  • One viewpoint: omitting GraphQL from a 2025 API design discussion is a significant gap; it’s described as a paradigm shift that gives clients flexible querying, typed schemas, and reduces over/under‑fetching. GraphQL‑specific caches (Apollo/Relay) and CDN‑level tooling are cited as evidence that caching concerns are overstated.
  • Counterpoints:
    • Implementing a secure, performant GraphQL backend is seen as more complex than conventional REST/OpenAPI, with hazards like recursive queries, denial‑of‑service risk via introspection, and subtle performance issues (a toy depth‑limit sketch follows this list).
    • HTTP status codes often no longer map cleanly to success/error semantics, making logging/monitoring trickier.
    • Named REST endpoints are considered easier to talk about and optimize, and GraphQL is characterized as powerful but easy to “hold wrong.”
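  • To illustrate the recursive‑query hazard, one common mitigation is to cap query depth before executing anything. The toy check below is not a real GraphQL parser (it only counts brace nesting and ignores strings, fragments, and directives); it is meant purely to show the shape of the idea.

      MAX_DEPTH = 8  # arbitrary cut-off; real servers make this configurable


      def naive_query_depth(query: str) -> int:
          """Approximate selection-set depth by tracking brace nesting."""
          depth = max_depth = 0
          for ch in query:
              if ch == "{":
                  depth += 1
                  max_depth = max(max_depth, depth)
              elif ch == "}":
                  depth -= 1
          return max_depth


      def reject_if_too_deep(query: str) -> None:
          if naive_query_depth(query) > MAX_DEPTH:
              raise ValueError("query too deeply nested; refusing to execute")


      # A self-referential schema (friends -> friends -> friends ...) lets a client
      # multiply server-side work; a depth limit cuts that off before execution.
      deep_query = "{ user { friends { friends { friends { friends { name } } } } } }"
      print(naive_query_depth(deep_query))  # 6
      reject_if_too_deep(deep_query)        # passes with MAX_DEPTH = 8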

Other Design Observations

  • API shape tends to mirror underlying product resources; awkward internal models or over‑abstracted resources lead directly to confusing APIs and painful debugging.
  • Idempotency tokens, deadlines, backpressure behavior, and “static stability” (read‑only operations working even when writes fail) are called out as important but under‑discussed concerns.
  • Good documentation access is taken as a proxy for API quality; requiring contracts or sending password‑protected spreadsheets for docs is treated as a red flag.
  • Standardized error payloads (e.g., RFC‑style problem details) and clear, descriptive error responses are recommended for better client experience.
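  • As an illustration of the standardized error payload recommendation above, a body following the RFC 7807 / RFC 9457 “problem details” format looks roughly like this (the URLs and values are invented for the example):

      import json

      # Hypothetical problem-details error body; "type", "title", "status",
      # "detail", and "instance" are the standard members, and extra members
      # (here, "balance") may carry machine-readable context.
      problem = {
          "type": "https://api.example.com/problems/insufficient-credit",
          "title": "Insufficient credit",
          "status": 403,
          "detail": "Your balance is 30, but the request costs 50.",
          "instance": "/accounts/12345/charges/abc",
          "balance": 30,
      }

      # Served with the Content-Type header set to application/problem+json.
      print(json.dumps(problem, indent=2))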

Paracetamol disrupts early embryogenesis by cell cycle inhibition

Paracetamol’s Mechanism and Embryo Effects

  • Commenters summarize the paper as showing paracetamol (acetaminophen/APAP) can inhibit early embryonic cell division, potentially acting as a mild contraceptive by impairing implantation.
  • Some extrapolate to broader concerns about cell division–dependent processes (wound healing, bone formation, gonadal/brain development), but emphasize this is speculative and needs more research.
  • One person jokes this logic might imply anticancer potential, but no evidence is discussed.

Use in Pregnancy, Autism/ADHD, and Risk Framing

  • Several note APAP is currently the only widely recommended OTC painkiller in pregnancy in many countries, largely because NSAIDs and opioids have clearer known risks.
  • Links are shared for studies and meta-analyses suggesting associations between prenatal APAP and autism/ADHD, and for a large sibling-controlled JAMA study finding no causal signal once confounding is accounted for.
  • Debate centers on genetics vs environment/epigenetics and whether observed associations are causal or confounded; no consensus emerges.
  • Some doctors/patients advocate “total drug avoidance” in early pregnancy; others stress that untreated infection or severe pain is also harmful.

Toxicity, Overdose, and Regulation

  • Strong thread on paracetamol’s narrow margin between therapeutic and toxic dose and its status as a leading cause of acute liver failure.
  • Contrast with NSAIDs: those hit kidneys/stomach more; APAP hits the liver and interacts badly with alcohol.
  • Discussion of how easy accidental overdose is (multiple products, small text, different pill strengths, liquid dosing errors in children).
  • UK data is cited showing that blister-pack limits and pack-size restrictions significantly reduced fatal overdoses and liver transplants.
  • Some argue “any medicine can be poison,” others counter that paracetamol is unusually unforgiving at modest overdoses.

Pain, Inflammation, and Other Analgesics

  • Tangent on NSAIDs and muscle growth: anti-inflammatories may blunt adaptation by suppressing prostaglandin-mediated repair; effect thought to be small for occasional dosing.
  • Clarifications that paracetamol is not a classic NSAID but overlaps partially in COX-related pathways.
  • Comparisons with metamizole (dipyrone) and ibuprofen: different organ toxicities, different national regulatory stances.

Fever, Parenting, and Pseudoscience Drift

  • Heated argument about whether routinely lowering fever (especially in children) is helpful or harmful, with some advocating minimal intervention and others calling that irresponsible.
  • Thread drifts into vaccine skepticism, COVID treatment claims, and accusations of overprotective vs negligent parenting, which other commenters flag as pseudoscientific and off-topic.

General Trust in Drugs and Oversight

  • Several express unease that a ubiquitous “safe” drug is now implicated in subtle developmental risks, raising worries about regulatory oversight and long-term side effects of widely used medications.