Hacker News, Distilled

AI-powered summaries for selected HN discussions.


A new chapter begins for EV batteries with the expiry of key LFP patents

Role of LFP Patents

  • Some argue core LFP patents were already a “non-issue”: royalty flows are tiny relative to the battery market, and real differentiation now comes from newer, still-patented advances (additives, coatings, manufacturing).
  • Others counter that patents did suppress competition outside China, via blocking licenses, high fees, and “mutual assured destruction” patent thickets that deter new entrants and invite trolls.
  • Suggestions include patent pools or “GPL-like” cross-licensing schemes to neutralize trolls, though concerns remain about fees and incentives for low‑quality patents.

Battery Chemistries: LFP, Solid-State, Sodium, Others

  • LFP is seen as practical, cheap, and safe, with sufficient energy density for most cars but less ideal for large trucks/SUVs and cold climates.
  • Cold charging is a recurring theme: heaters and thermal management are viewed by some as an adequate workaround; others call this a “hack” that adds complexity, energy cost, and edge‑case failure modes.
  • Sodium-ion is widely discussed as a likely successor for many uses: better cold performance and charge rates but lower volumetric density and trickier power electronics; estimated timelines to cost parity with LFP range from a few years to 10–15.
  • Solid-state batteries are compared to fusion: impressive lab results and one‑off demos (cars, bikes, drones), but still extremely expensive and not yet mass‑produced.
  • Niche chemistries like lithium‑titanate are praised for ultra‑fast charging but criticized for low energy density.

IP History and China’s Lead

  • Foundational LFP work came from a Canadian/Quebec lab, with patents licensed via Hydro‑Québec and a major US company allegedly infringing and triggering long legal battles.
  • Commenters say this litigation chilled Western investment while China received favorable domestic licensing and built a huge LFP ecosystem.
  • China is now seen as dominant in batteries and EVs, with recent moves to restrict export of advanced LFP tech and equipment.

Recycling and Regulation

  • EU recycled‑lithium quotas are criticized as potentially constraining growth when total deployed capacity is still ramping; others see them as necessary to capture waste (e.g., vape cells).
  • Concern that “recycled content” rules might incentivize scrapping still‑usable large packs rather than repurposing; practitioners note second‑life use of car packs is logistically and economically limited versus modular stationary batteries.

EV Markets, Tariffs, and Chinese Cars

  • Forecasts discussed: EV adoption slowing in the US but accelerating in countries like Vietnam, driven by cheap Chinese models.
  • Europe and the US are using tariffs to slow Chinese EV imports; some see this as delaying the inevitable, given China’s cost and tech advantages.
  • Several participants want access to $20–30k Chinese EVs in the US but worry about both Chinese and domestic car software collecting data.

Energy Prices and Renewables

  • One side claims an “ideological push” for renewables is driving up European electricity prices via storage, grid, and capex needs.
  • Others counter that wind and solar are now the cheapest new generation, pointing to:
    • Very low off‑peak EV tariffs in the UK tied to renewables.
    • The South Australia experience, where high renewable penetration plus batteries is pushing prices down after an investment phase.
  • There is debate over whether apparent inefficiencies (e.g., old wind turbines removed when subsidies end) reflect bad policy design or rational asset replacement.

Cars vs Public Transit

  • Some see EV focus as perpetuating car dependency; they argue for dramatically expanded, cheap public transit and denser land use.
  • Counterarguments:
    • Many regions are too low‑density or poorly planned for efficient mass transit; personal vehicles remain more practical.
    • Experiences with unreliable or inconvenient transit (especially in US cities) push people toward cars.
    • Others provide examples (Europe, parts of Australia, some US suburbs) where buses and trains work well even for families with small children, with “last mile” handled by walking, bikes, or small vehicles.
  • Historical notes point out that US cities once had extensive transit networks and that auto industry lobbying contributed to their dismantling.

Article as Law-Firm Marketing

  • Multiple comments note the linked piece is effectively an advertisement for legal services (freedom‑to‑operate analyses), likely to emphasize ongoing patent risks even after key expiries.
  • Some think this doesn’t necessarily undermine factual accuracy; others stress the need to view its framing through the lens of attracting clients.

Goldman Sachs asks in biotech report: Is curing patients a sustainable business? (2018)

Profitability of Cures and Patent Dynamics

  • Multiple comments argue curing patients can be extremely profitable, citing blockbuster examples (e.g., Hep C cure, major cancer drugs) generating tens of billions in revenue.
  • “Unsustainable” is seen as a bad framing: all patented drugs (cures and chronic treatments) are inherently time-limited due to patent expiry and generics, so firms must continually find new products anyway.
  • Some note pricing for certain cures was “scandalous” and triggered investigations, but others counter that high prices are what enabled those profits and further R&D.
  • Analogy is made to oil/mining: one finite “deposit” can still be a fantastic investment even if it eventually runs out.

Market Incentives, Competition, and Cartels

  • A recurring theme: if a market is full of chronic treatments, any firm that launches a cure gains huge competitive advantage; coordinated withholding of cures is described as an unstable equilibrium.
  • Patents and time limits incentivize bringing cures to market rather than hiding them; sitting on a patented cure would forfeit revenue and eventually allow competitors free use.
  • Some push back, raising concerns about cartels, regulatory capture, and buyouts where firms purchase cures to protect existing treatment franchises.
  • Conspiracy-style ideas about systematically suppressing cures are generally rejected as implausible given the number of independent labs and the dynamics of competition.

Capitalism, Ethics, and Role of Government

  • Several comments argue that for-profit medicine misaligns incentives: recurring treatment revenue can be more attractive than one-shot cures (“health as subscription”).
  • Others emphasize higher-order societal benefits of cures (longer, healthier lives, more economic contribution) that aren’t fully captured by private firms—classic externality problem.
  • This leads to advocacy for socialized healthcare or stronger government role in funding R&D, especially for antibiotics, rare diseases, and unprofitable cures.
  • There is debate over whether “economic value” should be the justification at all; some insist healthcare should be provided on moral grounds, not just ROI.

Industry Structure and R&D Risk

  • One detailed thread explains the biotech pipeline: university research → VC-backed biotech → pre‑revenue IPO → eventual acquisition by big pharma that handles late-stage trials, manufacturing, and global roll‑out.
  • Scientific and clinical risk is largely borne by startups and public investors; big pharma competes to acquire successful assets, including cures.
  • Because of this structure, as long as the market isn’t a tight oligopoly with captured regulators, there are strong incentives to develop and commercialize cures.

Alternative Business Models and Policy Ideas

  • The Goldman report’s own “solutions” are noted: focus on large markets, high-incidence disorders, and continuous portfolio expansion; genetic and personalized medicine can generate a continuing stream of new cures.
  • Suggestions include “post-scription” models (small lifelong payments after a cure), insurer structures where cures can be priced against lifetime treatment costs, and tax/market designs that reward social outcomes (e.g., reduced disease incidence).
  • Some analogies to public transit highlight that activities can be socially valuable but privately unattractive, implying need for mixed public–private or redesigned incentive systems.

Peter Thiel sells off all Nvidia stock, stirring bubble fears

How to Interpret the Nvidia Sale

  • Some view the move as a “signal” or part of a larger, opaque strategic game among billionaire “whales,” not just an economic act.
  • Others argue Occam’s razor applies: he bought low, the position was up massively, so he’s locking in gains and reducing concentration risk.
  • Several note this Nvidia stake (~$100M) is tiny relative to his reported net worth (tens of billions), likening it to a small retail investor selling a few thousand dollars of stock.
  • Filing after market close raised eyebrows, but Nvidia traded up after hours, undercutting immediate “bad omen” narratives.

Bubble Fears, Macro Risk, and Timing

  • Many commenters are convinced AI/Nvidia is a bubble; some say “at this point if you don’t think AI is a bubble, I don’t know what to tell you.”
  • A minority warn of an extreme systemic crash (USD collapse, whole financial system at risk); others respond this doesn’t match past bubbles and that central banks exist to prevent total meltdowns.
  • There’s concern that private credit and liquidity issues, combined with AI capex excess, could trigger broader sell-offs.
  • Some push back: bubbles are hard to time, prior “insider” exits (e.g. big funds selling Nvidia in 2019) missed huge upside.

Rotation into Microsoft and Apple

  • The disclosed move was largely from Nvidia into Microsoft and Apple.
  • Some see this as a modest hedge: shifting from a pure AI hardware play into mega-cap tech with diversified cash flows that would likely survive any AI correction.
  • Others say these are still AI-exposed, so this is a half-hearted bubble hedge at best; Apple is seen as somewhat less AI-dependent than hyperscalers.
  • A few think the sale would have made more sense redirected into fabs/equipment (e.g., chip manufacturers), though others note those usually fall alongside chipmakers in downturns.

AI Fundamentals and Nvidia’s Moat

  • Bears: Nvidia is priced as if it will permanently dominate AI; yet AMD, big-cloud in-house chips, and other accelerators are gaining. Efficiency improvements (e.g. GPU pooling, model advances) could drastically cut demand versus the most optimistic projections.
  • Bulls: Even huge efficiency gains still leave enormous GPU demand; hyperscalers openly report AI infrastructure as a growth bottleneck with 20–40% revenue growth.
  • There’s active debate over hardware depreciation and whether current spending is sustainable or a classic capex overshoot.

Is This a Reliable Signal?

  • Some distrust his judgment due to highly controversial religious and political statements (e.g., “antichrist” rhetoric), arguing this undercuts his perceived rationality.
  • Others insist political or theological beliefs don’t negate decades of strong investing performance and insider knowledge of the sector.
  • Several note he’s not alone: large Nvidia sales by other major players (SoftBank, prominent fund managers, tech insiders) make this feel like a stronger sell signal—though still not conclusive.

I have recordings proving Coinbase knew about breach months before disclosure

Legal and Regulatory Issues

  • Commenters discuss whether the described timeline violates SEC cyber-incident rules that require disclosure within four business days once an incident is deemed “material.”
  • Speculated potential violations: late disclosure, misleading omissions to investors, inadequate internal controls, and broken disclosure processes—though nothing in the thread proves regulators’ view.
  • On suing, several note the need to show concrete, quantifiable harm; user agreements and mandatory arbitration may further constrain options.

Did This Prove Coinbase “Knew”?

  • Some readers think the January report plus Coinbase’s acknowledgment (“robust report, investigating”) indicates early awareness of a systemic breach.
  • Others argue it only proves Coinbase knew of a sophisticated attack against one user, not that they had concluded a company-wide compromise.
  • Skeptics emphasize that customers are frequently compromised via malware, OSINT, or prior breaches, so initial suspicion naturally falls on the user.
  • A few other organizations/users report similar targeted scams in early 2025, suggesting a broader pattern but not conclusively tying it to Coinbase’s internal systems.

Email, DKIM, and Technical Confusion

  • Multiple commenters are puzzled how a phishing email could have a valid DKIM signature for coinbase.com.
  • Confusion centers on a claim that both amazonses.com and coinbase.com DKIM checks passed; several note SES should not be able to sign as a domain without control of its DNS, implying either misinterpretation or a more serious compromise.
  • This part of the story is seen as unclear and under-documented in the blog post.
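The alignment question commenters are debating can be sketched: a DKIM-Signature header names its signing domain in the `d=` tag, and a signature for coinbase.com should only be producible by whoever controls that domain's published key. The snippet below only parses headers from a hypothetical message; an actual pass/fail requires cryptographic verification against the key in DNS (e.g. with the `dkimpy` library's `dkim.verify`).

```python
from email import message_from_string

def dkim_signing_domains(raw_message: str) -> list[str]:
    """Extract the d= (signing domain) tag from every DKIM-Signature header."""
    msg = message_from_string(raw_message)
    domains = []
    for sig in msg.get_all("DKIM-Signature", []):
        # A DKIM-Signature value is tag=value pairs separated by semicolons.
        for tag in sig.split(";"):
            name, _, value = tag.strip().partition("=")
            if name == "d":
                domains.append(value.strip())
    return domains

def claims_signature_from(raw_message: str, domain: str) -> bool:
    """True if some DKIM signature *claims* to be signed by `domain`.

    This is only header inspection; a real check must fetch the selector's
    public key from that domain's DNS and verify the signature.
    """
    return domain in dkim_signing_domains(raw_message)

# Hypothetical message mirroring the claim in the thread: both an
# amazonses.com and a coinbase.com signature present.
raw = (
    "DKIM-Signature: v=1; a=rsa-sha256; d=amazonses.com; s=sel1;\r\n"
    "DKIM-Signature: v=1; a=rsa-sha256; d=coinbase.com; s=sel2;\r\n"
    "From: security@coinbase.com\r\n"
    "Subject: test\r\n"
    "\r\n"
    "body\r\n"
)
print(dkim_signing_domains(raw))  # ['amazonses.com', 'coinbase.com']
```

If the coinbase.com signature genuinely verified, either the sender controlled a key published in coinbase.com's DNS (the "more serious compromise" reading) or the check was misread, which is exactly the ambiguity the thread could not resolve.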

AI-Written Article and Style Backlash

  • A large subthread criticizes the article’s style as stereotypical “LLM slop”: overlong, heavy on bullets, dramatic section titles, neutral-but-grandiose tone.
  • The author confirms extensive AI assistance (transcription, structuring, drafting, editing) and defends it as a time-saver compared to not writing at all.
  • Many readers object that AI makes it too easy to generate thousands of words of marginal value, wasting reader time; some ask for explicit AI disclaimers so they can auto-summarize instead.
  • A minority defend the practice, arguing content should be judged on substance, not its production method.

Security, Outsourcing, and Crypto Context

  • The breach being linked to bribed overseas contractors at an outsourcing firm prompts calls to ban offshoring of sensitive financial data, with doubts about enforceability.
  • One commenter with Coinbase experience says the whiteboard-password anecdote refers to a building vendor, not Coinbase, and asserts Coinbase had a strong internal security culture.
  • Others broaden this to fintech/crypto generally, describing unreliable APIs, operational chaos, and frequent hacks, while noting that “Bitcoin” the protocol is distinct from exchanges like Coinbase.

User Experiences and Mitigations

  • Several users report Coinbase-themed scams (calls, emails, “security alerts”) in the same general period.
  • One highlights using unique, per-service email aliases so any mail to the leaked alias can be treated as hostile post-breach.
  • There is brief debate over self-custody vs custodial exchanges: “not your keys, not your coins” versus the high rate of lost wallets and keys.

Disclosure Practices and Trust

  • Some users say they only learned of the breach via social media, not direct notice, and question whether Coinbase’s customer communication met legal or ethical expectations.
  • Reports of failed account deletion, unresponsive privacy channels, and under-rewarded or buried vulnerability reports contribute to a perception that Coinbase’s handling of user data and security disclosures is opaque and self-protective.

Open-source Zig book

AI authorship controversy

  • The site prominently claims “zero AI” and “hand-written” content. Many readers find the prose and structure highly reminiscent of LLM output: repeated “not just X, but Y” constructions, breathless marketing tone, generic “transformation” language, and odd flowcharts and headings.
  • Others argue style alone is not evidence; those rhetorical patterns predate LLMs and are also taught in writing classes. Overuse might indicate mediocre human writing rather than AI.

Trust, ethics, and AI detection

  • Several commenters run the intro (or whole chapters) through Pangram, which flags it as AI-generated with high confidence. Some treat this as strong evidence; others cite Pangram’s reported false positives and consider it unreliable proof.
  • The ethical concern is not AI use per se but the explicit “no AI” claim. Many say that, if LLMs were used (even for drafting or rewriting), the statement is misleading and undermines trust in the technical content.
  • Repo signals heighten suspicion: anonymous author, entire book pushed at once, odd Git history, deleted issues, and even issue labels like “AI ALLEGATION.”

Perceived quality and pedagogy

  • Some readers praise the breadth, detail, and apparent correctness of chapters they understand, and see it as the best Zig resource they’ve found.
  • Others find the pacing and ordering chaotic: chapter 1 dives into symbol exporting and platform details before basic control flow; “how it works”/“key insights” sections feel like generic summaries; flowcharts and headings are seen as clutter with little conceptual payoff.

Technical accuracy and possible hallucinations

  • Commenters point out concrete errors typical of LLMs: references to non-existent, renamed, or internal Zig std APIs, and misleading details about the compiler and build system.
  • This reinforces concern that hallucinations may be scattered throughout, making the book risky as a primary learning source.

Zig’s value proposition and comparisons

  • Side discussions debate whether Zig really “fundamentally changes how you think about software” versus being “modern C with a good stdlib.”
  • Supporters highlight explicit memory management, allocator-passing, comptime, strong C interop, and cross-compilation; skeptics feel the intro overpromises compared to truly paradigm-shifting languages (Lisp, APL, Prolog, Erlang).

Site design and format

  • Several usability complaints: tiny fonts, slow site, hard-to-find table of contents, distracting animated progress bar, no official PDF. One commenter shares a script/command to generate a PDF from the AsciiDoc sources.

Meta: AI accusations on HN

  • Some want an HN guideline against casually accusing content of being AI-written, saying it derails discussion.
  • Others argue public scrutiny of AI-authorship claims is a necessary defense against fraudulent “human-only” branding.

Supercookie: Browser Fingerprinting via Favicon (2021)

Favicon behavior and bugs across browsers

  • Many commenters report long‑standing favicon glitches: wrong icons shown for specific sites, icons “stuck” for months or years, persisting across profiles, private mode, OS updates, and possibly iCloud sync.
  • Bugs appear across Safari, Firefox, Chrome, iOS Safari, and other WebKit-based browsers, suggesting deep or shared caching issues.
  • Safari’s favicon cache is described as extremely persistent; some mention only extreme measures (e.g., deleting cache files or changing system time) fully resetting it.

Live demo and whether the attack still works

  • Several users can’t get the demo working (infinite 1–18 redirect loops, especially on iOS Safari and Firefox private mode). Others report seeing a unique ID after the first cycle.
  • Some note the GitHub repo is old (Edge 87 mentioned) and conclude the specific exploit is largely patched; a linked issue states major browsers fixed this years ago.
  • However, another link suggests Chrome briefly patched and then regressed, with a more recent note that favicon tracking should now reset on cache deletion and incognito entry.

Effectiveness, limitations, and mitigations

  • Users observe different IDs between normal and private windows, and even between separate incognito sessions, implying at least some mitigation in Firefox and elsewhere.
  • Deleting cookies and site data in Firefox is reported to remove the identifier.
  • One commenter questions practicality: 32 redirects to construct an ID seems heavy; others reply that ad networks value any extra bits of identity, even if costly.
  • Disabling favicons is discussed: some argue that being “favicon-less” could itself be a distinctive fingerprint; others say it would just look like a fully cached state, depending on implementation (details remain unclear).

Favicons vs usability and privacy

  • Some users happily run favicon‑free browsers and question why they’re needed.
  • Others defend favicons as essential for tab‑heavy workflows, where icons are easier to scan than truncated titles.

Ethics, regulation, and business models

  • Strong criticism of hidden tracking: some want it criminalized, likening it to stalking or malware.
  • Debate over GDPR: some say it already covers such tracking; others highlight weak enforcement or “legitimate interest” loopholes.
  • One long subthread argues:
    • Tracking within a single site to improve services is acceptable; reselling data and third‑party brokers are the core problem.
    • GDPR and similar rules may inadvertently entrench large incumbents and hurt small, data‑driven businesses.
    • Opponents push back, emphasizing user consent, the difficulty of opting out when most sites require JS, and the need for regulation because users can’t realistically audit code.

Hardened browsing setups and practical issues

  • Some describe extreme isolation: running browsers in disposable VMs with qemu and sandboxing, deleting state on exit.
  • Others note that such setups can themselves become fingerprints (e.g., odd GPU/rendering behavior, missing fonts), triggering CAPTCHAs and suspicion.

Broader tracking landscape and related techniques

  • Commenters expect similar attacks on other long‑lived browser artifacts and caches.
  • A GPU-based fingerprinting technique (“DrawnApart”) using WebGL timing is mentioned as another example of increasingly sophisticated tracking.

Reception of the research

  • Several find the favicon “supercookie” technically clever or “lovely” as an attack vector.
  • Others are more interested in using it (or similar tools) for non-ad-tech purposes like detecting banned users who try to evade bans.

Dark Pattern Games

Scoring System and Taxonomy Concerns

  • Many see the numerical ratings as “dubious”: games perceived as benign (e.g., traditional roguelikes, HyperRogue) score poorly because any checked pattern counts negatively.
  • The implementation treats patterns like “competition,” “grind,” or “collecting items” as uniformly bad, despite the textual descriptions saying they are only dark in certain contexts.
  • Several commenters say the site is more useful as a pattern database than as a comparative scoring tool.

What Is a Dark Pattern? Mechanics vs Monetization

  • One camp argues: the overlap between “fun mechanics” and the site’s “psychological dark patterns” is huge; what matters is when those mechanics are tied to monetization (loot boxes, microtransactions, wait-to-play with paid skips).
  • Others reply that some items (daily rewards, friend spam, social pyramid schemes) are intrinsically manipulative and not fun.
  • There is debate over whether competition, grinding, reciprocity, power creep, and “wait to play” are inherently dark or only when used to drive spending, guilt, or habitual daily logins.
  • Some define a true dark pattern as any design that serves the business at the expense of the player’s own goals (e.g., obscured subscriptions, tracking, loot boxes with paid keys).

Monetization Models and Live Service Debate

  • Many praise pay-once, no-IAP games as healthiest; others defend trials, shareware-style unlocks, or modest F2P models.
  • Debate over live-service / card games: controlled power creep can keep a meta fresh, but is criticized when tied to paid card acquisition and devaluing prior purchases.
  • Examples discussed include PoE stash tabs, Fortnite’s cosmetics and battle pass, and War Thunder’s “pay to progress” grind.
  • A proposal to fund games via background crypto-mining is widely viewed as parasitic or untrustworthy.

Addiction, Players, and Children

  • Multiple commenters describe personal or observed harm: lost time, depression masked by grind loops, kids normalizing exploitative designs.
  • Others note that “addictive” is not automatically bad if aligned with beneficial goals (e.g., language learning apps), though some argue gamified education still uses the same hooks.

Usefulness and Author’s Clarifications

  • The site creator explains the project arose from their own game addiction; learning the patterns helped them quit.
  • They emphasize the written pattern descriptions as the core value; the crowdsourced game ratings are outdated, likely noisy, and may be removed.
  • Several people report the site (and similar “no BS games” lists) as valuable for finding healthier games and understanding manipulative design, even if the scoring is imperfect.

The fate of "small" open source

AI slop, spam, and the degraded web

  • Many see AI as “industrializing” existing bad behaviors: spammy tutorials, phishing, scraped blogs, low‑effort PRs, SEO garbage.
  • Others argue this era already existed pre‑LLM; AI just changes the flavor, not the underlying problem.
  • YouTube and web search are described as increasingly overrun by AI‑generated content; some imagine “human‑only” or paid, curated services as an escape, but doubt this can scale or be reliably enforced.

Search engines, SEO, and AI summaries

  • Some praise AI search summaries for bypassing clickbait and SEO slop for simple queries.
  • Others say summaries are often subtly wrong, strip context, and make it harder to judge source quality; “slop vs condensed slop.”
  • Strong disagreement over whether Google “nerfed” search intentionally for ads versus just losing the fight to SEO. Internal-ad-economics stories are cited as evidence of deliberate degradation.

Fate of small / micro open source libraries

  • Many distinguish between genuinely useful small tools and trivial micro‑dependencies (e.g., “left-pad”–style utilities) that arguably never made sense.
  • Several see LLMs as the final nail in the coffin for these micro-libs: developers can just generate a 10‑line helper instead of adding a dependency.
  • Others counter that mature utilities (e.g., Apache‑style commons) encode years of bugfixes and edge cases; LLM‑generated code is “instant legacy” with unknown behavior.
  • Vendoring tiny snippets or header‑only style libs is praised as a middle ground, though critics worry about updates, security, and licensing.

Open source maintenance, spam, and gatekeeping

  • Maintainers report a surge of AI‑generated PRs/issues from contributors who don’t understand the project, treating maintainers as free QA.
  • This is framed as a “care” problem amplified by AI: low‑effort code at much higher volume.
  • Proposed responses: stricter vetting, filters (possibly AI‑based), closed or “cathedral” contribution models—at the cost of making FOSS more gatekept and less welcoming.

Motivations, licensing, and training data backlash

  • Some creators now refuse to open source new work (or release binaries only) to avoid it being used as free AI training data, seeing current AI as one‑way extraction with privatized gains.
  • Suggestions include copyleft/AGPL to deter corporate use, or “source‑available” licenses, though many doubt this will meaningfully stop scraping.
  • Others argue that broad reuse—including via models—is aligned with the original spirit of free software and that obsession over attribution misses the larger benefits.

Education, documentation, and learning

  • Concern: LLMs shift culture toward “instant answers” over deep understanding; small OSS and blog posts once served as educational material.
  • Counterpoint: LLMs can be superb tutors—patient, interactive, and able to explain code or docs at arbitrary depth. Some projects now ship LLM‑friendly documentation (e.g., llms.txt‑style outputs).
  • There’s skepticism about whether people will study LLM‑generated inlined code more than they ever read code buried in dependencies; careless developers may read neither.

Broader outlook

  • Some think open source will remain central and even get stronger as motivated developers use AI to tackle more ambitious projects.
  • Others foresee burnout: rising spam, corporate control of ecosystems (package hosts, search), and a sense that anything open will just be harvested into proprietary models.
  • A shared theme: the real scarcity is care and high‑quality human attention; AI can either free that up for harder problems—or flood it with even more low‑value noise.

The government has no plan for America’s 300 billion pennies

Tone of the article

  • Some readers found the opening line needlessly spiteful toward the US for a topic as mundane as coins.
  • Others read it as light humor and were surprised others didn’t see it that way.

“No plan” vs natural phase-out

  • Several argue no elaborate federal plan is needed: stop minting, let pennies trickle into banks, be recycled/scrapped, and disappear like other obsolete denominations (e.g., half-cent, Canadian and Australian low-value coins).
  • Others think regulatory “warts” should be addressed beforehand (e.g., how rounding interacts with laws and benefits programs) rather than ad‑hoc after the fact.

Regulation, SNAP, and rounding

  • A recurring concern: SNAP rules requiring equal pricing for SNAP and non‑SNAP customers can conflict with cash-rounding schemes if card/SNAP buyers pay exact amounts and cash buyers are rounded.
  • Counterpoints:
    • Law is unlikely to be applied mechanically over trivial penny-level differences.
    • SNAP parity can be preserved simply by treating SNAP as a cash-equivalent for rounding.
    • A tiny statutory amendment could explicitly allow rounding, though some doubt US politics can handle even small fixes quickly.

Business and banking frictions

  • Marketplace reports businesses scrambling for pennies due to changes in Fed distribution; some commenters think this is overblown and will resolve via rounding and updated registers.
  • Experiences with banks vary: some demand rolled coins or refuse small deposits, others provide free coin-counting machines; Coinstar-style machines are common but often charge steep fees.
  • Many people dislike coins, report throwing pennies away, or immediately offloading them; others object that discarding metal is wasteful and suggest jars and occasional cashing-in.

Cost, materials, and disposal

  • Widely noted that pennies cost multiple cents to mint; zinc is already the “cheap” metal that replaced copper.
  • Suggestions include: scrapping/shredding pennies, using them in brass production, or selling large zinc quantities as scrap.
  • Melting coins remains legally murky; enforcement is seen as highly discretionary.

Pricing and denomination structure

  • Some propose rounding all physical prices to $0.05 or even $0.10; others note this would require touching essentially every printed price and might clash with entrenched $X.99 pricing and the dominant role of the 25¢ coin.
  • Examples from Europe and Asia:
    • Eurozone: 1–2 cent coins effectively unused in many places; cash rounded, card transactions exact.
    • Ireland: items still priced at .99 but rounded to whole euros in cash, often favoring merchants.
    • UK expectation that missing low coins should lead to consumer‑favorable rounding.
    • Hong Kong and Taiwan illustrate simplified decimal practices and integer-only pricing.

Cash vs digital money

  • Some see shrinking coin use as part of the broader decline of physical cash, already advanced in some countries.
  • Others raise civil-liberty concerns: fully digital money makes all spending traceable and gives governments or payment processors power to block categories of purchases.

Alternative uses and safety tangents

  • Novel uses mentioned: penny flooring, DIY heatsinks, and pie weights for blind-baking crusts.
  • A subthread debates whether heating post‑1982 zinc-core pennies for baking poses a health risk; one side warns about zinc fumes, others argue oven temperatures are far below zinc’s boiling point and that copper plating and common cooking practice make this effectively safe.

What if you don't need MCP at all?

What MCP Is and How Tool Calling Actually Works

  • Commenters clarify that MCP is a protocol and JSON-RPC–style wrapper for exposing tools to agents, not something “built into” models.
  • Major models already support function/tool calling; the client/agent enforces schemas and interprets “tool_use” vs “text” outputs, then invokes external functions.
  • MCP’s core value is described as a standard manifest + lifecycle so agents can auto-discover tools from arbitrary servers.
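
The client-side loop described above (schema enforcement, dispatching “tool_use” vs “text” outputs, invoking external functions) can be sketched roughly as follows. This is a hypothetical illustration: the `get_weather` tool, the `handle_model_output` helper, and the exact message shapes are assumptions loosely modeled on common tool-calling APIs, not MCP's actual wire format:

```python
# Hypothetical sketch of the agent-side dispatch loop that protocols
# like MCP standardize: the model only emits a structured request; the
# client looks the tool up, runs the real function, and feeds the
# result back as a new message.

TOOLS = {
    "get_weather": {
        "description": "Return the weather for a city.",
        "parameters": {"city": {"type": "string"}},
        "fn": lambda city: f"Sunny in {city}",
    }
}

def handle_model_output(message):
    """Dispatch one model message: pass text through, or run a tool call."""
    if message["type"] == "text":
        return message["text"]
    if message["type"] == "tool_use":
        tool = TOOLS[message["name"]]  # manifest lookup
        args = message["input"]
        # A real client would validate args against tool["parameters"] here.
        result = tool["fn"](**args)
        # This result goes back into the conversation for the next model turn.
        return {"type": "tool_result", "content": result}
    raise ValueError(f"unknown message type: {message['type']}")
```

Nothing here is “built into” the model: the model only ever sees the tool descriptions as context and emits JSON; everything else is ordinary client code, which is why the thread argues plain function calling can often substitute for MCP.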

Arguments That You Don’t Need MCP

  • Many argue you can achieve the same by:
    • Publishing good API docs or OpenAPI/Swagger and feeding them as context.
    • Letting the model call existing REST/GraphQL/CLI tools via function calling.
    • Having the model write code that uses normal libraries/APIs, then executing that code.
  • For bespoke, constrained agents, MCP is said to add complexity, fragility, latency, and security risk without much benefit.
  • Critics call it “just context pollution” and “a thin JSON layer,” often overused in enterprise because of hype.

Perceived Benefits and Defenses of MCP

  • Proponents see MCP as:
    • A plugin/distribution standard for tools across agents (like “OpenAPI for LLMs”).
    • A way to encapsulate authentication/credentials and constrain capabilities, especially for non-technical users.
    • Helpful for remote SaaS integrations (GitHub, Figma, Sentry, etc.) where users “add an MCP” instead of wiring APIs/CLIs.
    • A driver of better, higher-level, task-focused APIs because vendors had to design LLM-friendly interfaces.

Technical Limitations and Context/Token Concerns

  • Multiple comments highlight:
    • Large tool catalogs and verbose schemas overloading context windows and harming reliability.
    • Intermediate tool results flowing through the model, wasting tokens and introducing error opportunities.
    • Difficulty composing tools across multiple MCP servers; subagents and “project-specific” MCPs partially mitigate this.
    • Weak story for observability, awkward HTTP/SSE transports, and security issues with stdio-based local tools.

Alternatives and Hybrids

  • Suggested alternatives include:
    • Claude Skills (essentially markdown prompts + tools with progressive reveal).
    • Simple bash/CLI tools with good --help text, described briefly in a project README/CLAUDE.md.
    • Exposing MCP tools as libraries inside code interpreters so models can compose them in code rather than via chained tool calls.
  • Several see MCP’s role shrinking to a thin, secure gateway layer, with real work done by code and CLIs.

I finally understand Cloudflare Zero Trust tunnels

Perceived Value vs Tailscale / Headscale / VPS

  • Several commenters question the “win” over Tailscale + a cheap VPS/Headscale, arguing Cloudflare adds complexity and vendor lock‑in mainly to handle the minority of users stuck behind hard NAT.
  • Others counter that for homelab and family use, Cloudflare Tunnel’s free tier and no-VPS, no-port-forwarding setup are compelling, especially for sharing services with non-technical users who won’t install a VPN client.

Vendor Lock‑in and Trust

  • Cloudflare is criticized as “half-baked features + lock‑in,” but others note all these options are vendors; the real distinction is business model, behavior, and openness.
  • Tailscale is seen as “less lock‑in” because of WireGuard, open clients, and compatible self-hosted control servers like Headscale, though not everything is fully open.

TLS Termination, Privacy, and “Zero Trust”

  • A big privacy concern: Cloudflare often terminates TLS, sees traffic, and may re-encrypt to the origin, unlike Tailscale’s end‑to‑end model.
  • Some clarify this TLS termination is not mandatory in all Cloudflare products, but many tunnel/Access features effectively require it.
  • Several people argue that calling a centrally terminating, fully trusted middlebox “Zero Trust” is marketing more than reality.

Architecture & CNAME-Based Tunnels

  • The cfargotunnel.com CNAME mechanism is called out as opaque and “kludgy”: a DNS record that looks like a normal CNAME actually triggers Cloudflare’s private routing stack.
  • Confusion points: CNAMEs that don’t resolve publicly, multiple apps sharing one tunnel identity, strict TLS settings coexisting with cleartext to the origin, and unclear behavior when CNAMEs or routes don’t match.

Bandwidth Limits and Media Streaming

  • Cloudflare’s free tiers disallow heavy video/large-file use (e.g., Plex/Jellyfin) in their ToS, though many report using tunnels for personal media servers without enforcement so far.
  • Critics dislike content-type-based restrictions for an encrypted “zero trust” network and would prefer simple global bandwidth limits.

Performance, P2P vs Relay

  • Some prefer P2P with relay fallback (Tailscale, other tools) to reduce dependency on a single relay provider and preserve privacy.
  • Others report Cloudflare’s global network gives excellent performance, sometimes better than direct P2P, especially for distributed teams.

Use Cases and Alternatives

  • Enterprise: discussed as a ZTNA replacement for traditional VPNs (inside‑out tunnels, L4 proxying, fine-grained access policies).
  • Home/homelab: exposing self-hosted services under custom domains, clientless access, bypassing CGNAT/IPv4 limitations.
  • Alternatives mentioned: Netbird, Netmaker, Headscale, NetFoundry/OpenZiti, connet, tuns.sh, and plain IPv6 where ISPs allow inbound traffic.

Three kinds of AI products work

Scope of “AI products” seen as too narrow

  • Many commenters argue the article conflates “AI” with “LLMs you chat with,” ignoring large, profitable categories: fraud detection, recommendation, translation, speech recognition, TTS, driving, medical imaging, document parsing, supply-chain optimization, etc.
  • Successful tools like Grammarly, DeepL, and vision-based document processing are cited as counterexamples to the three categories.
  • Several stress that the most valuable AI is “invisible” infrastructure; users don’t even know AI is involved.

Media generation as a major category

  • Multiple replies object to dismissing image generation as a “toy”; they say image/video/music/voice/3D tools are already production-grade for design, marketing, film, and game assets.
  • Discussion on UX: graph/node-based systems (ComfyUI-like) are seen as too technical; “Adobe/Figma-like” interfaces that hide model complexity are considered the real opportunity.
  • Some note that modern media tools already behave agentically (iterative edits, inpainting, upscaling), just on pixels instead of code.

Alternative product taxonomies

  • One proposed breakdown:
    • Batch/pipeline processing (document parsing, moderation).
    • “AI features” inside apps (summaries, tags, autocomplete).
    • True agents (AI controls workflow).
  • Others say “three kinds of AI products” is as coarse as “three kinds of internet products,” too high-level to be predictive.
  • Several point out the article itself effectively lists more than three categories.

Agents and coding tools: strong disagreement

  • Some report big wins from coding agents: debugging, analyzing logs/stack traces, discovering index issues, and catching obscure build problems.
  • Others find agents unreliable or “lying,” producing useless code even with good APIs; they trust junior devs more and see agents as “slop factories” without a human in the loop.
  • Debate on productivity impact: claims range from negligible savings to “4–8x fewer developers needed,” with skeptics demanding harder evidence.

Non-chat, task-focused uses

  • Frequently mentioned working use cases: translation, transcription, summarization of contracts/logs/meetings, basic drafting, clarifying vague business requirements.
  • Some say AI summaries are often bad or misprioritized; others find them net-positive if treated like code review (LLM drafts, human verifies).

Future directions: agents, UX, and infrastructure

  • Many want agents for mundane life tasks (appointments, cancellations, travel, forms) but note current systems struggle with real-world details and costs.
  • Some argue these tasks should be solved with open APIs and classic algorithms, not LLMs everywhere.
  • There is concern about giving agents real powers (refunds, account changes) due to jailbreak and error externalities.
  • Several conclude the real long-term value is AI as an underlying capability or interface layer, not standalone chatbots.

Heretic: Automatic censorship removal for language models

How Heretic Works (Technical Core)

  • Built on recent work showing that refusals in many LLMs are largely mediated by a single direction in residual activation space.
  • Heretic finds this “refusal direction” and then incrementally ablates it (weight orthogonalization) while:
    • Minimizing refusal rate on a harmful‑prompt dataset.
    • Constraining KL divergence from the base model so general behavior stays similar.
  • Optuna is used for hyperparameter search over ablation strength and layer ranges, trading off “uncensoring power” vs degradation.
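
The two core steps described above can be sketched in a toy numpy example (not Heretic's actual code): estimate the refusal direction as the mean activation difference between harmful and harmless prompts, then orthogonalize a weight matrix against it so the layer can no longer write along that direction. The matrices here are random stand-ins; only the linear algebra is the point:

```python
import numpy as np

def refusal_direction(acts_harmful, acts_harmless):
    """Unit vector along the mean activation difference (difference-of-means)."""
    d = acts_harmful.mean(axis=0) - acts_harmless.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(W, d, alpha=1.0):
    """Project (a fraction alpha of) direction d out of W's output space.

    W writes to the residual stream via W @ x; after full ablation,
    W' = (I - d d^T) W, so d^T (W' @ x) = 0 for every input x.
    """
    return W - alpha * np.outer(d, d @ W)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))                       # toy weight matrix
acts_bad = rng.normal(1.0, 0.1, size=(16, 8))     # activations on "harmful" prompts
acts_ok = rng.normal(0.0, 0.1, size=(16, 8))      # activations on harmless prompts
d = refusal_direction(acts_bad, acts_ok)
W_ablated = ablate(W, d)
print(np.allclose(d @ W_ablated, 0.0))  # True: no output component along d
```

In the real tool, `alpha` and the layer range are what the Optuna search tunes, trading refusal reduction against KL divergence from the base model.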

Effectiveness Across Models and Limitations

  • Works well on many open models (e.g. GPT‑OSS 20B, Gemma, some Granite variants); users report near‑zero refusals with low KL on some.
  • Newer / “thinking” models (chain‑of‑thought refusal reasoning, e.g. GPT‑OSS‑120B, Qwen3, DeepSeek) are harder: many parameter settings barely move refusal rates, and internal monologues can confuse the refusal classifier.
  • Some users find decensored GPT‑OSS still “refusey” or unstable (oscillating between no effect and “lobotomy”).
  • Technique is likely specific to narrow, well‑detectable behaviors like refusals; commenters doubt that broad concepts like “correctness” are a single direction.

Safety, Harm, and Liability Debates

  • One camp sees this as critically important: restoring “full capability” and resisting corporate/State control over information.
  • Others argue that once you remove guardrails you personally own downstream harms; no serious production system will ship such models due to legal risk.
  • Real‑world harms cited: suicide encouragement, extremist content, fraud, and crime assistance. Others counter that information is already widely available and capabilities, not text, are the real constraint for WMDs.

Censorship, Free Expression, and Corporate Control

  • Strong disagreement over calling this “censorship”:
    • Some say model guardrails = corporate brand‑safety, not “AI rights,” but they do restrict what humans can conveniently learn.
    • Fear that LLMs will become the default interface to information, letting a few actors quietly shape history, politics, and morality.
  • Comparisons made to search engine drift from “grep the web” to tightly curated results; concern that LLMs repeat this pattern more strongly.

Datasets and “Harmful” Behavior Definition

  • Heretic’s optimization uses public “harmful behavior” datasets (e.g. how to hack banks, make drugs, self‑harm, CSAM, terrorism), which many find repulsive but technically useful as strong refusal triggers.
  • Some note the datasets are repetitive and unlicensed; worry this may overfit to narrow patterns and miss the broader refusal space.

Bias, Alignment, and Politics

  • Many examples of odd or extreme refusals (chemistry, insults, politics, LGBT, Taiwan, Tiananmen, race, song lyrics) are used to argue:
    • Alignment is shallow, brittle, and often politically skewed.
    • Corporate “safety” often encodes particular US‑liberal or Chinese state orthodoxies.
  • Others emphasize that all models are biased by data and post‑training; the issue is whose values dominate, not whether bias exists.

Potential Reverse Use and Extensions

  • Commenters note the same method could, in principle, strengthen or redirect safety by targeting other activation patterns, though harmful behaviors are likely more diverse than refusals.
  • Some speculate on extending similar techniques to diffusion/image‑edit models, but that would require new detectors and engineering effort.

Iran begins cloud seeding operations as drought bites

Effectiveness and mechanisms of cloud seeding

  • Strong disagreement on whether it “works” in a practically useful way.
  • Some argue it’s unproven or marginal, with highly conditional success rates (e.g., cited estimates like 0–20% increase in some tests).
  • Others say it’s clearly real but modest: a way to modulate existing weather, not create rain from nothing.
  • Mechanism discussed: turning supercooled droplets into ice crystals (via silver iodide, salt, etc.), which then grow and fall as precipitation. Recent experimental work was cited as confirming this microphysical chain, while not proving large-scale efficacy.

Use cases and limitations

  • Reported uses include:
    • Ski resorts and western US programs to enhance snowfall.
    • Hail suppression (breaking large hail into smaller, less damaging pellets).
    • Historical military use (Vietnam) and large programs in China and UAE.
  • Consensus: you can only redirect or trigger moisture already in the air, not solve structural water deficits.

Ethical, legal, and “stealing rain” concerns

  • One view: seeding is effectively “taking someone else’s rain.”
  • Counterpoint: under current international law, a country controls its own airspace and thus its clouds.
  • Some worry about large, under-scrutinized, commercially driven weather-modification programs and unknown externalities.

Iran’s drought and policy failures

  • Multiple comments frame Iran’s crisis as largely self‑inflicted through decades of mismanagement, over-extraction, and export‑oriented agriculture benefiting powerful interests.
  • Cloud seeding is widely seen as PR or a marginal tool that cannot fix depleted aquifers or lack of water treaties.
  • Broader fears about forced deportations, humanitarian disaster, and echoes of past climate-linked civilizational collapses.

Global parallels and future pressures

  • Comparisons drawn to Texas and the US Southwest: aquifer depletion, leaking infrastructure, political resistance to regulation, and debates over desalination and water rights.
  • Cloud seeding and other geoengineering efforts (like solar reflection) are seen by some as necessary stopgaps, by others as ominous signals in a slow slide toward a climate-damaged, possibly dystopian future.

Chemtrails, media, and presentation

  • Cloud seeding is contrasted with “chemtrail” conspiracy theories; overlap is mainly about method (spraying particulates) and secrecy.
  • Debate over reliability of Arab News vs. BBC and other outlets, and over imagery that overemphasizes religiosity in Iran.

Where do the children play?

Cross‑national contrasts in kids’ freedom

  • Many commenters stress the article mostly describes the US.
  • In Japan (especially Tokyo), Netherlands, Nordics, Germany, and parts of Central Europe, young kids routinely walk or bike to school, ride public transport, and roam neighborhoods with peers.
  • Ireland, UK, and some German regions are described as moving toward US-style sheltering, but still generally more permissive.
  • Several people say they would avoid raising kids in the US because of low independence and constant adult supervision.

Built environment, cars, and schools

  • Car-centric design, wide fast “stroads,” and large schools sited on town edges make independent mobility hard or dangerous.
  • Even in “walkable” areas, key barriers are multi-lane arterials, huge SUVs, and distracted drivers.
  • Some contrast dense European or Japanese layouts (short distances, crossing guards, bike infrastructure) with sprawling US suburbs and rural highways.

Fear, liability, and social control

  • Parents report police or child protective services being called for letting kids walk short distances, even on their own property.
  • Social disapproval (“Karens,” negligence charges) is a strong deterrent, even when the law allows independence.
  • Media‑driven panics about kidnapping, abuse, and school shootings amplify risk aversion, despite most harm to children coming from known adults.

Digital spaces: autonomy vs addiction

  • Many relate to the idea that games like Roblox or Fortnite provide the only unsupervised “peer society” kids can access.
  • Others argue the primary driver is screen addiction and parental laxity, not lack of physical options, and advocate strict limits.
  • Several note that online peer cultures (forums, games) have existed for at least a generation and can both nurture skills and stunt offline social growth.

Education systems and peer networks

  • The Bavarian/German tracked school system is debated: critics say frequent sorting and school moves undermine lasting friendships, pushing kids toward phones; defenders say most kids stay with stable cohorts and that smartphones are independently addictive.
  • Montessori and mixed‑ability models are suggested as ways to balance individual pace with social continuity.

Demographics and “critical mass” of kids

  • Fewer children per family and aging neighborhoods mean there often aren’t enough local kids to form organic play groups.
  • This “critical mass” problem makes even free‑range‑friendly parents ask: if my kid roams, who is actually out there to meet?

A new documentary about the history of forced psychiatric treatment in Spain

Parallels to Modern “Treatment” and Troubled-Teen Industry

  • Multiple commenters link US “troubled teen” programs and faith-based rehabs to the Spanish reformatories: abusive, under‑regulated, marketed as “tough love.”
  • The Elan School comic and documentary are cited as vivid depictions of systemic child abuse packaged as therapy.
  • Some reflect that vague slogans like “tough on crime” or “tough love” enable torture-like systems that outsiders emotionally endorse without seeing specifics.

Parenting, Neglect, and Surveillance Culture

  • A prior HN thread about kids seeing porn and “neglect” is referenced; commenters worry such norms push parents toward extreme control measures to avoid criminal liability.
  • Stories of CPS being called for kids briefly unsupervised highlight fears of a “tyranny of busybodies” empowered by smartphones.
  • Others note regional variation: some US areas still have visible child independence, often linked to walkable environments.

Religion, Ideology, and Abuse

  • One side argues Christian (and historically Catholic Francoist) institutions are central drivers of these abuses, including anti‑LGBTQ and misogynistic control.
  • Others counter that religion is a tool exploited by insecure or authoritarian people, not the root cause; similar abuses could manifest under other ideologies.
  • Meta‑discussion notes that blanket religious denunciations slide into hate and violate community norms.

Involuntary Commitment, Homelessness, and Care Systems

  • Some claim dismantling mental hospitals in the US led to today’s homelessness crisis; others challenge this and demand evidence.
  • A more nuanced view: US deinstitutionalization removed beds without building robust, funded outpatient and court‑ordered treatment systems, unlike many European countries.
  • Critics warn that in current political conditions, expanded involuntary commitment risks becoming a weapon, echoing Francoist practices.

Franco, Violence, and Victim-Blaming

  • A major thread disputes the BBC’s “free‑spirited girl” framing given her involvement with Molotov‑throwing protests. Some emphasize that’s serious violent crime; others stress context: a fascist dictatorship executing opponents.
  • Many argue violent resistance against such a regime is morally understandable or even heroic; equating that with ordinary crime is seen as naive or sympathetic to fascism.
  • Commenters condemn blaming the victim for her subsequent torture and forced psychiatric treatment, emphasizing she was a teenager and that the real agency lies with parents, Church, doctors, and state.

Spanish Civil War and Historical Context

  • One commenter portrays Spain’s 20th‑century history as a brutal Fascism‑vs‑Communism struggle with atrocities on both sides, suggesting rebels weren’t simply “good democrats.”
  • Others push back, restating that the war began with Franco’s coup against a democratic government; they view attempts to equalize both sides as apologetics for dictatorship.

HN Culture, Fascism, and Social Attitudes

  • Some express shock at how many commenters prioritize “law and order” or property over resistance to fascism, reading this as latent authoritarianism among startup/tech‑capitalist culture.
  • Discussion notes how fascism relies on misogyny and divide‑and‑conquer tactics, and sees victim‑blaming of rebellious young women as part of that pattern.

Medical Horror

  • A final note reacts with horror to historical use of insulin to induce comas as psychiatric treatment, especially from diabetics for whom insulin overdose is a real fear.

Where Educational Technology Fails: A seventh-grader's perspective

Engagement vs Boredom in Learning

  • Several commenters argue curriculum should be intrinsically engaging and connected to the real world (e.g., unpacking how phones, games, and modern tech actually work).
  • Others counter that even the most interesting fields are 90% “boring” work; school’s core job is partly to build tolerance for boredom and persistence.
  • There’s pushback against accepting boredom as inevitable or virtuous; some see this as defeatist and believe everyone can find motivating personal goals if given freedom and variety.

Fun, Motivation, and Types of Fun

  • Debate over whether learning should be “Type 1 fun” (immediately pleasurable) or “Type 2 fun” (hard but rewarding after the fact).
  • Some insist “learning is fun” by nature, and school damages that; others say most deep learning requires slog and delayed gratification.
  • Distinction raised between “learning is fun” and “all fun is learning”: passive entertainment can crowd out effortful learning even if the latter could be enjoyable.

Discipline, Development, and Autonomy

  • Tension between giving kids large freedom (even from age ~10) to discover their own goals vs. preparing most people for basic participation in society.
  • Many stress teaching self‑discipline, especially in teen years, often citing sports as a good training ground.
  • Others warn that “adult discipline” imposed too early can damage creativity, play, and mental health; adulthood and brain maturity are seen as gradual and culturally defined.

Technology in School: Tool vs Distraction

  • Strong skepticism that “ed tech” (Chromebooks, SaaS platforms, smartboards) improves learning; some see it mainly as a vendor-driven money sink.
  • Multiple teachers and parents report screens worsening focus, cheating, and reading issues; some moved children to low‑tech schools with better outcomes.
  • A minority emphasize genuine benefits: CS classes, easier submission/review, access to online explanations (e.g., videos) especially for poorer students.

Blocking, Censorship, and Games

  • DNS/site blocking is widely seen as futile; students routinely bypass it to play games like Roblox.
  • Some advocate strict removal of phones/games during class; others argue the real solution is to make learning more compelling than distractions.
  • Concern that calling internet censorship “educational technology” is itself revealing of misplaced priorities.

Curriculum, Testing, and Methods

  • Calls for explicitly teaching memorization and learning techniques (mnemonics, flashcards) in early grades.
  • References to classical “grammar-stage” education focused on facts, but augmented with modern learning-how-to-learn skills.
  • Critiques of online multiple‑choice testing: encourages cheating, reduces feedback quality, and replaces deeper written responses; others point to more project-based assessment as a countertrend.

The internet is no longer a safe haven

Perceived causes of rising abuse

  • Many see a clear recent increase in scrapers and attacks, largely attributed to:
    • Commercial demand for training data from AI companies.
    • LLMs making it trivial for non-experts to generate custom scrapers or “check X every second” tools.
    • Cheap, abundant cloud infrastructure and proxy networks (including residential/mobile IPs).
  • Others argue the internet has always been hostile; what changed is scale and automation, not the fundamental dynamic.

Legal vs technical governance

  • One camp sees this primarily as a legal problem:
    • Better international law enforcement and treaties (like those against software piracy) extended to DDoS and abuse.
    • Liability for hosts/ISPs and even negligent customers (e.g., outdated WordPress on a VPS).
  • Pushback:
    • Global enforcement also imports foreign censorship and speech laws.
    • Centralized control (governments, clouds) is ripe for abuse and could be worse than today’s Cloudflare-style gatekeepers.
    • Some advocate “tribes”/small communities with explicit gatekeeping instead of global regulation.

Identity, reputation, and proof-of-work ideas

  • Proposed defenses:
    • Request-signing standards plus reputation databases for crawlers.
    • Persistent, per-service pseudonymous identities that can be banned but don’t reveal real-world identity.
    • Reputation systems where capabilities grow with good behavior; resetting identity should be costly.
  • Concerns:
    • “Digital death penalty” (permanent exclusion) and abuse by authoritarian regimes.
    • Tension between reputation and privacy may be fundamentally hard to solve; zero-knowledge proofs suggested but unproven at scale.
  • Proof-of-work:
    • Suggested as a way to keep bots out; critics cite work showing global thresholds are unworkable and attackers can use cheap/botnet compute.
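
The proof-of-work idea above is essentially hashcash: the client burns CPU finding a nonce whose hash clears a difficulty target, while the server verifies with a single hash. A minimal sketch (function names are illustrative, not a real protocol):

```python
import hashlib
from itertools import count

def solve(challenge: bytes, bits: int) -> int:
    """Find a nonce so that sha256(challenge || nonce) has `bits` leading zero bits.

    Expected work is ~2**bits hashes, which is the crux of the criticism:
    a fixed threshold is trivial for botnet compute but slow on a phone.
    """
    target = 1 << (256 - bits)
    for nonce in count():
        h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce

def verify(challenge: bytes, nonce: int, bits: int) -> bool:
    """One hash for the server vs ~2**bits for the client."""
    h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - bits))

nonce = solve(b"example-challenge", 8)  # tiny difficulty for demonstration
print(verify(b"example-challenge", nonce, 8))  # True
```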

Defensive techniques in practice

  • Common approaches discussed:
    • Nginx rate limiting, iptables/ASN/geo blocking, SYN anti-spoofing, rp_filter.
    • Honeypots and traps: invisible links, fake admin paths, “bot-ban-me” hostnames, SSH user triggers.
    • Bot-wasting tactics like zipbombs or bogus content to poison AI scrapers.
    • mTLS, VPNs/WireGuard, and Cloudflare/Anubis-style frontends for private or small sites.
  • Mixed experience: some see these as sufficient; others say even hobby sites get overwhelmed without big-CDN protection.
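
The rate limiting mentioned above is commonly implemented as a token bucket: each client's bucket refills at a steady rate up to a burst cap, and a request is allowed only if a token is available. A minimal sketch, with the class name and the explicit `now` parameter as illustrative assumptions (real servers would key one bucket per client IP):

```python
import time

class TokenBucket:
    """Allow sustained `rate` requests/sec with bursts up to `burst`."""

    def __init__(self, rate: float, burst: float, now=None):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=1.0, burst=2.0, now=0.0)
print(bucket.allow(now=0.0), bucket.allow(now=0.0), bucket.allow(now=0.0))
# True True False -- the burst of 2 is spent, the third request is dropped
```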

Effects on hobbyists and self-hosting

  • Many recount constant automated probing since the 2000s; logs full of exploit attempts against software they don’t even run.
  • Some argue the real issue is unoptimized stacks (e.g., Gitea + Fail2ban) rather than traffic volume.
  • Others say requiring deep security expertise and endless hardening proves the environment is objectively hostile, discouraging casual self-hosting.

Debate over the internet’s past and future

  • Some think “the internet is over” as an open, welcoming space; any new platform will be swarmed the moment it gains traction.
  • Suggestions range from moving to niche protocols (Gemini, IPv6-only) to accepting centralization and signing/identity as inevitable.
  • There’s a broader philosophical split: internet as immense net-benefit vs. net-negative (consumerism, attention capture, AI “plastic content”), with no consensus on a realistic path to “safe haven” status.

Why use OpenBSD?

Documentation & System Coherence

  • Some see OpenBSD’s centralized man pages and FAQ as a major strength: consistent, complete, and tightly aligned with the system.
  • Others found gaps for common tasks (e.g., defining daemons, reverse-proxy setups with nginx) and felt the docs didn’t clearly explain the “right” way to create and supervise custom services.
  • Compared to Linux, BSD advocates emphasize that a single team designs kernel + userland as one coherent OS, vs. “cobbled-together” components.

Reliability, Upgrades & Ecosystem

  • Several report OpenBSD servers “just keep working,” with upgrades (syspatch/sysupgrade) being predictable and low-drama.
  • By contrast, Ubuntu is frequently criticized for fragile upgrades and regressions; Debian is praised as far more stable and easier to upgrade, with unattended-upgrades for security fixes.
  • On OpenBSD, there’s no full “unattended upgrades” story: base can be patched with syspatch, but package updates and the six‑month OS release cycle still need planning.

Security Model & Features

  • Security-by-default (minimal services enabled) is viewed as ideal for servers and firewalls, though some argue “disable everything” can be impractical for appliances and that good defaults matter more.
  • Pledge/unveil are highlighted as powerful, practical process sandboxing tools; Linux’s seccomp is described as complex and fragile at scale, though it’s widely used in browsers and Android. Landlock is noted as a newer, closer analogue.
  • pf firewall syntax is widely praised as much clearer than iptables; some counter that modern Linux nftables narrows this gap.

Performance, Hardware & Use Cases

  • A recurring criticism: OpenBSD is noticeably slower (web serving, firewall throughput), partly due to conservative design (e.g., hyperthreading disabled by default for side-channel concerns), though recent releases reportedly improved the TCP stack.
  • Linux is seen as superior for raw performance, hardware support, desktop usability, and containers; many would pick Debian or Alpine for general servers.
  • OpenBSD is favored for routers/firewalls, specialized servers, and as an “understandable”, buildable-from-source system; less so as a mainstream desktop or BigCorp platform.

Licensing & Philosophy

  • BSD license appeals to those wanting fewer copyleft obligations, though some argue this mainly matters to large vendors.
  • Some perceive BSD communities as niche or even “dying breed”; others value them precisely for being smaller, simpler, and more focused.

Brimstone: ES2025 JavaScript engine written in Rust

Comparisons to Other JavaScript Engines

  • Brimstone is seen as impressively compliant for a one-person project, close to Boa in features but generally faster in shared benchmarks, sometimes about 2× Boa’s speed.
  • Binary size comparison (release builds): Boa ~23 MB vs Brimstone ~6.3 MB, with people noting this is notable given Brimstone passes ~97% of the spec.
  • Hermes and QuickJS are highlighted as strong contenders in the broader benchmark suite, balancing performance and binary size.
  • Samsung’s Escargot draws curiosity due to high ES2016+ compliance and small size, but is viewed as:
    • Too slow vs V8 (3% on one benchmark set) for general use.
    • Filling a niche for Samsung TVs/appliances or non‑JIT environments; Hermes is seen as a better alternative for that niche.
    • More spec-compliant than Hermes, which some consider important when JITs or V8 are not allowed.

Binary Size, Unicode, and Intl

  • Major size gap with Boa is largely attributed to Unicode/ICU data:
    • Boa embeds large ICU tables and ECMA‑402 Intl/Temporal data.
    • Brimstone embeds a smaller, language‑minimal Unicode set.
  • Discussion notes Unicode’s inherent bulk (collation, locales, etc.), arguing that “correct” text handling has a real size floor.
  • Others feel ICU packs in rarely used locale/calendaring details and should be optional or system-provided, not bundled into every interpreter.

Rust, Unsafe, and Garbage Collection

  • Some are puzzled by explicitly “very unsafe Rust” for the compacting GC, given Rust’s memory-safety reputation.
  • Multiple replies argue:
    • Implementing high-performance GCs and custom memory regimes almost inevitably needs unsafe.
    • The Rust model is to confine unsafe to small, auditable layers and expose safe APIs.
    • Even core types like Vec use unsafe internally; zero unsafe is unrealistic.
  • There’s side discussion about atomics and memory fences, with the view that Rust’s standard atomics often suffice without inline assembly.

Rust Ecosystem, Syntax, and Dependencies

  • Mixed views on Rust:
    • Proponents praise its error-handling model, safety, and code quality.
    • Critics complain about syntax complexity, heavy dependency trees, and slow compile times, comparing Rust culture to npm-like bloat.
  • Some distinguish the Rust language from its OSS culture, noting you can write low‑dep Rust, but community norms favor many “battle‑hardened” crates.

“Written in Rust” and Use Cases

  • Several comments ask why “written in Rust” is always foregrounded.
  • Defenders say:
    • For libraries, language matters because it determines ease of integration (e.g., pure Rust dependency vs C/C++).
    • “Written in Rust” signals likely memory safety (if little unsafe), speed, better average quality, and easier multi-platform binaries.
  • Others see it as hype or a generational fad, similar to earlier “written in Lisp/Ruby/JS” eras.

Embedding and Practical Value

  • A key benefit noted: Brimstone can be embedded in Rust programs without C/C++ linking.
  • This enables small Rust servers or apps to be scriptable in JavaScript with a purely Rust toolchain, which several commenters find “awesome.”

Licensing and Project Status

  • Initial lack of a license raises concerns; this is later corrected to MIT.
  • The author notes it started as a hobby project and has evolved over three years, focused on completeness and performance.

Miscellaneous

  • Some lighthearted remarks about the executable name and “compacting GC, written in very unsafe Rust” line.
  • Brief nostalgic tangent about cracktros and boot-time intros.