Hacker News, Distilled

AI-powered summaries for selected HN discussions.


The Right to Repair Is Law in Washington State

Scope of the Washington law & major carve‑outs

  • Many welcome the shift in regulatory “winds” but note the law excludes many high-impact categories: game consoles, cars, agricultural/construction equipment, medical devices, ISP equipment, security systems, off‑road vehicles, large energy/solar systems, and LEO broadband gear.
  • Several commenters say these carve‑outs cover “most of what I actually want to repair,” especially tractors, road vehicles, and consoles.
  • Carve‑outs are widely attributed to lobbying and corporate pressure; some point out this is especially ironic given tractors and autos helped spark the movement.

What the law does change (electronics & parts pairing)

  • For covered consumer electronics and wheelchairs, manufacturers must provide parts, tools, and documentation, and can’t use parts‑pairing to:
    • Block repairs or degrade features (e.g., cameras, fingerprint sensors).
    • Show non‑dismissable, misleading “unknown part” warnings or reduce performance.
  • Debate over a secondary source’s claim that parts must be sold “at cost”: the statute actually says “at costs that are fair to both parties.”
  • Long thread on economics of spares (tooling, logistics, low volume pricing) vs the need to prevent gouging (e.g., tying parts prices to full device value).

Wheelchairs and disability impacts

  • Strong support for the wheelchair provisions from people citing extremely locked‑down, expensive chairs (e.g., batteries and simple components requiring factory service, five‑figure subframes).
  • Others worry manufacturers may exit the WA market or raise prices, reducing options for users whose devices are often funded by government programs.

Security, anti‑theft, and “genuine” parts

  • Examples: Apple serial‑paired MacBook components; TPMs and fingerprint sensors; vehicle ECUs.
  • One camp argues pairing is needed to deter theft and supply‑chain attacks; critics say users could hold the keys, and current designs mainly protect OEM repair revenue.
  • Ongoing debate over quality and safety of non‑OEM parts vs monopolistic pricing on “genuine” components.

Broader right‑to‑repair context and limits

  • Clarifications for newcomers: right‑to‑repair addresses active barriers—DRM, unavailable parts/docs, voided warranties, and bricking—rather than knowledge of how to fix things.
  • DMCA §1201 is cited as a major remaining obstacle (e.g., third‑party generator and vehicle tools becoming legally risky).
  • Some fear manufacturers will region‑limit compliance or use contract terms to claw back control; others argue once a law exists, it’s easier to tighten loopholes and attack exemptions later.

"AI Will Replace All the Jobs " Is Just Tech Execs Doing Marketing

Limits on “AI Will Do All Jobs”

  • Many argue current AI lacks reliability, embodiment (“no thumbs”), accountability, and true understanding, and so can’t fully replace complex or physical work (e.g., fixing toilets, construction, medicine).
  • Several note strong human preference for human-made things (live music, handmade food, human therapists), predicting markets and platforms will enforce “human-only” spaces.
  • Others stress AI is parasitic on human-created data; if humans stop producing, models stagnate.

Degradation of Work vs Replacement

  • A recurring fear: AI won’t remove jobs outright but will deskill them and make them worse—workers become “meat robots” monitored and directed by algorithms.
  • Existing examples cited: warehouse workers, call centers, and fast-food “management AIs,” where computers already dictate pace and behavior.
  • Senior engineers worry about becoming AI babysitters/code reviewers instead of creators or mentors; some find that dystopian, others enjoy delegating boilerplate.

Juniors, Deskilling, and Partial Job Loss

  • Many expect fewer entry-level roles: seniors + AI replace juniors; long-term this hollows out expertise pipelines.
  • Historical analogies: barcodes and scanners deskilled retail clerks; more jobs remained but became lower-paid “McJobs.”
  • Several see “some but not all” job loss as the worst outcome: widening inequality without a clear crisis to force systemic reform.

Economic and Political Context

  • Strong thread framing AI as capital vs labor: a tool to replace labor with capital, weaken bargaining power, and justify layoffs.
  • Skepticism that displaced workers will easily “reskill,” citing past automation where many never recovered comparable livelihoods.
  • Concerns that without stronger safety nets (UBI, healthcare), AI-driven displacement will fuel social unrest.

Hype, Exec Marketing, and Uncertainty

  • Many view “AI will end all jobs” as stock-pumping and layoff cover rather than current reality; examples of firms loudly touting AI while cutting staff.
  • Others argue exponential compute and recent LLM gains make human-level or superhuman AI plausible, but timelines are contested and often likened to a “religious” debate.
  • Consensus: AI is already powerful enough to affect most businesses, but claims of near-term total replacement are unproven.

AI as Tool Today: Benefits and Risks

  • Working developers describe real productivity gains (boilerplate, queries, refactors) but still needing domain expertise, design judgment, and security review.
  • Worries about security: hallucinated libraries, subtle vulnerabilities, and supply-chain attacks seeded via AI-generated code.
  • Some non-experts already ship AI-written scripts into sensitive workflows, raising hidden risk even where no jobs are formally cut.

The time bomb in the tax code that's fueling mass tech layoffs

Scope of the problem: does Section 174 “explain” mass layoffs?

  • Many argue Section 174 cannot fully explain mass tech layoffs:
    • Most laid‑off staff weren’t doing what people normally call “R&D” (e.g., games, line‑of‑business apps, ops).
    • Other major drivers cited: end of zero‑interest‑rate era (ZIRP), pandemic over‑hiring, slower growth, Twitter’s high‑profile cuts, and classic wage‑suppression tactics.
  • Others counter that, for tax purposes, a huge share of tech headcount is classified as R&D, especially since the code now explicitly treats software development as R&D. So the rule change does affect a large fraction of engineers, PMs, data scientists, etc.

What actually changed in Section 174?

  • Pre‑2022: qualified R&D (including software dev if elected) could be 100% expensed in the year incurred.
  • Post‑change: “specified research or experimental expenditures” must be amortized:
    • 5 years for US R&D, 15 years for foreign R&D.
    • Software development is explicitly deemed R&D; most dev labor now becomes capitalized, not immediately deductible.
  • Transition effect: companies went from deducting 100% to deducting only ~10–20% in year one, with the rest spread over later years (see the sketch below).
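
A minimal sketch of that year‑one arithmetic, assuming the 5‑year domestic schedule with amortization starting at mid‑year (so year one gets half of one year's share); the figures are illustrative, not tax advice:

```go
package main

import "fmt"

func main() {
	devPayroll := 1_000_000.0 // annual US software-dev payroll (illustrative)

	// Pre-2022: the full amount was deductible in the year incurred.
	oldYearOne := devPayroll

	// Post-change: amortize over 5 years, with only a half year's share
	// allowed in year one (1/5 x 1/2 = 10% of the spend).
	newYearOne := devPayroll / 5 / 2

	fmt.Printf("old year-one deduction: $%.0f\n", oldYearOne) // $1,000,000
	fmt.Printf("new year-one deduction: $%.0f (%.0f%%)\n",
		newYearOne, 100*newYearOne/devPayroll) // $100,000 (10%)

	// Cash-flow spike: a firm with $1.1M revenue and $1M of dev payroll
	// has $100k of cash profit but ~$1M of taxable income in year one.
	revenue := 1_100_000.0
	fmt.Printf("taxable income: $%.0f vs cash profit: $%.0f\n",
		revenue-newYearOne, revenue-devPayroll)
}
```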

Cash‑flow mechanics and who gets hurt

  • Several concrete examples show how a company can have zero (or negative) cash profit but be taxed as if it were strongly profitable, because only a fraction of dev salaries is deductible in the current year.
  • In steady state with flat headcount, total deductions over time are similar; the real costs are:
    • Time value of money (you’re effectively lending the IRS cash at 0%).
    • A multi‑year “tax spike” during the transition and during rapid growth.
  • Consensus in the thread:
    • Big, stable, cash‑rich firms (FAANG‑type) are affected but can absorb it; many were already capitalizing dev work.
    • Small, thin‑margin or modestly profitable software firms, and fast‑growing startups, are hit hardest; some report tax bills quadrupling, hiring freezes, or shutdowns.

Is this good policy? Competing views

  • One side: this simply closes a generous R&D loophole; software is a capital asset like a factory, so its creation costs should be amortized; subsidies distort markets and shift burden to taxpayers and non‑R&D firms.
  • Other side: historically, immediate expensing was a deliberate pro‑innovation policy; removing it:
    • Discourages R&D and early revenue, especially in software where code ages quickly and many firms never reach year 5.
    • Entrenches incumbents and weakens the startup ecosystem and open‑source work.

Politics and “time bomb” framing

  • The change came from the 2017 tax act, effective 2022, and is widely viewed as a budget‑scoring gimmick to offset corporate rate cuts.
  • There is broad bipartisan interest in restoring immediate expensing, but fixes have repeatedly stalled and are now bundled as a temporary rollback inside a larger, controversial tax bill.
  • Some see Section 174 as a deliberate political time bomb and a bargaining chip; others attribute the mess to routine reconciliation games and legislative dysfunction rather than targeted malice.

Go is a good fit for agents

Go’s perceived strengths for agents

  • Many see agents as orchestration-heavy: coordinating HTTP calls, streams, timeouts, retries, backpressure, and cancellation. This is described as “bread and butter” for Go’s goroutines, channels, context.Context, and strong standard library (see the sketch after this list).
  • Static binaries and simple deployment are cited as advantages over Python’s virtualenv/pip complexity for production agents.
  • Some report good experience building real agentic tools/CLIs in Go, and say LLMs generate Go code reliably because the language is small, consistent, and has high‑quality training data.
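
A minimal sketch of that orchestration style, using only the standard library; the endpoint URL and wire format are hypothetical placeholders:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// callLLM posts a prompt to a model endpoint with a per-attempt
// deadline, retrying transient failures with crude linear backoff.
// The endpoint and wire format are hypothetical placeholders.
func callLLM(ctx context.Context, prompt string) (string, error) {
	var lastErr error
	for attempt := 1; attempt <= 3; attempt++ {
		reqCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
		req, err := http.NewRequestWithContext(reqCtx, http.MethodPost,
			"http://localhost:8080/v1/complete", strings.NewReader(prompt))
		if err != nil {
			cancel()
			return "", err
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			lastErr = err
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				cancel()
				return string(body), nil
			}
			lastErr = fmt.Errorf("status %d: %s", resp.StatusCode, body)
		}
		cancel()
		time.Sleep(time.Duration(attempt) * time.Second)
	}
	return "", lastErr
}

func main() {
	// One context bounds the whole run; cancellation propagates to
	// every in-flight request, which is the pattern the thread praises.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	prompts := []string{"summarize thread A", "summarize thread B"}
	results := make(chan string, len(prompts))
	for _, p := range prompts {
		go func(p string) {
			out, err := callLLM(ctx, p)
			if err != nil {
				out = "error: " + err.Error()
			}
			results <- out
		}(p)
	}
	for range prompts {
		fmt.Println(<-results)
	}
}
```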

Elixir/Erlang and the BEAM alternative

  • Several argue that, by the same concurrency logic, Elixir/Erlang (BEAM) are even better for agents: lightweight isolated processes, built‑in distribution, fault tolerance, hot code upgrades.
  • BEAM + durable storage (SQLite/Postgres/S3/DuckDB or mnesia) is proposed as an ideal combo for stateful agents and orchestrators, with examples of job/workflow systems and IoT‑style networks.
  • Some note the BEAM VM’s overhead and performance ceiling as drawbacks versus a compiled language like Go.

JavaScript/TypeScript and Python ecosystems

  • There’s strong defense of JS/TS for agents: native JSON handling, event-driven style, npm ecosystem, browser + Node deployment, sandboxing in the browser, and mature MCP tooling.
  • Python is defended as the de facto ML/AI language with unmatched libraries, tooling, and community; many agent frameworks, observability tools, and SDKs appear there first.
  • Others dislike JS/TS or Python on language-design grounds but concede that ecosystem gravity matters in AI more than runtime performance.

Durable execution, task queues, and state

  • Long‑running agents raise concerns about losing in‑memory state on process death. Multiple comments argue you must break work into resumable chunks and persist state in a DB or durable log.
  • Temporal, Hatchet, task queues, and similar durable-execution systems are discussed as language-agnostic solutions that replay workflows from event history.
  • Some Go users are hand‑rolling checkpointed agents using logs, context trees, and databases, but note the complexity overhead (a minimal sketch follows).
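
A minimal sketch of the hand‑rolled checkpointing pattern in Go, assuming a JSON file as the durable store (a real system would use Postgres/SQLite or a durable log, as discussed above):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Checkpoint is the agent's resumable state: the next step to run,
// plus everything produced so far.
type Checkpoint struct {
	NextStep int      `json:"next_step"`
	Outputs  []string `json:"outputs"`
}

func load(path string) Checkpoint {
	var cp Checkpoint
	if data, err := os.ReadFile(path); err == nil {
		json.Unmarshal(data, &cp)
	}
	return cp // zero value = start from the beginning
}

func save(path string, cp Checkpoint) error {
	data, err := json.Marshal(cp)
	if err != nil {
		return err
	}
	// Write-then-rename so a crash mid-write can't corrupt the checkpoint.
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	const path = "agent.checkpoint.json"
	steps := []string{"plan", "research", "draft", "review"}

	cp := load(path)
	for cp.NextStep < len(steps) {
		// Each step stands in for an LLM/tool call. Persist after every
		// step so a killed process resumes here, not from scratch.
		cp.Outputs = append(cp.Outputs, "did: "+steps[cp.NextStep])
		cp.NextStep++
		if err := save(path, cp); err != nil {
			panic(err)
		}
	}
	fmt.Println("completed steps:", cp.Outputs)
}
```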

Performance and bottlenecks

  • Many point out agents spend most of their time waiting on LLM calls and external I/O; runtime speed rarely dominates.
  • JSON (de)serialization and conflict‑resolution (diff/patch/merge on shared state) are mentioned as the main non‑LLM hotspots; language choice only marginally affects these.

Go’s language and ecosystem criticisms

  • Critics argue Go’s type system (enums, generics ergonomics), error handling verbosity, and channel footguns hurt general productivity, not just for agents.
  • Others counter that the simplicity, interfaces, and tooling outweigh these issues and reduce long‑term maintenance risk.
  • Several note that Go’s ML and agent‑orchestration library ecosystem lags far behind Python and TS; you often end up writing your own middleware (logging, tracing, retries, DSLs, SDKs).

“Agents are just software” / language neutrality

  • A recurring view: everything described as “an agent” is standard async/distributed programming—loops, branching, queues, workflows.
  • From that perspective, “best language for agents” largely reflects developer preference; any language with decent async and libraries can work, and language choice rarely determines success.

Porn sites go dark in France over new age verification rules

Perceived harms of youth porn exposure

  • Broad agreement that very young children shouldn’t access porn; disagreement about teenagers.
  • Some think porn is a relatively minor issue vs social media or other risks; others see links (cited in the thread) to earlier/riskier sex, body shame, aggression, and coercion.
  • Several argue “we didn’t turn out fine,” pointing to rising mental health issues and sexual dysfunction, though causality is disputed.
  • Others say teens have always found porn; exposure alone doesn’t necessarily produce extreme behavior or misogyny.

Effectiveness and unintended consequences

  • Many expect teens to bypass blocks via VPNs, mirrors, smaller or offshore sites, or sneakernet.
  • Concern that burdens will fall mainly on mainstream, moderated platforms, pushing minors toward more extreme, unregulated content.
  • Counter‑view: laws don’t need to be airtight; raising friction and delay can significantly cut average exposure.

Privacy, anonymity, and technical models

  • Strong worry about loss of anonymity and creation of databases linking identity and porn use.
  • The “double anonymity” / third‑party verifier model is seen by some as a good compromise; others worry about what the verifier and government can infer in practice.
  • Discussion of privacy-preserving tools: anonymous credentials, zero‑knowledge proofs, eID apps, Privacy Pass‑style tokens (a minimal sketch follows this list). Doubts about maturity, inclusivity, and attack surfaces (e.g., relay/KYC abuse).
  • Some suggest age-flag HTTP headers or RTA-style labelling so client devices/parents can filter, avoiding central ID checks.
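
A minimal sketch of the attestation idea in Go, assuming a third‑party verifier that signs an opaque token carrying only the claim “18+”; real double‑anonymity schemes add blind signatures or zero‑knowledge proofs so that verifier and site can’t collude to link the token to a person:

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

func main() {
	// The third-party verifier's long-lived keypair; sites only know the
	// public key and never learn the user's identity from the token.
	verifierPub, verifierPriv, _ := ed25519.GenerateKey(rand.Reader)

	// Verifier side: after checking the user's age out-of-band, sign a
	// random nonce plus the single claim "18+". No name, no document ID.
	nonce := make([]byte, 16)
	rand.Read(nonce)
	claim := append([]byte("age>=18:"), nonce...)
	sig := ed25519.Sign(verifierPriv, claim)

	// Site side: accept the token if the signature checks out. The site
	// learns only that *some* verified adult presented it.
	if ed25519.Verify(verifierPub, claim, sig) {
		fmt.Println("token accepted: holder attested as 18+")
	}
	// Caveat (raised in the thread): without blinding, the verifier could
	// recognize the nonce if the site colluded; production schemes use
	// blind signatures, anonymous credentials, or Privacy Pass-style tokens.
}
```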

Parental responsibility vs societal regulation

  • One camp: it’s primarily on parents; ISPs already age‑gate access, similar to alcohol sales.
  • Others argue parents are outgunned; society routinely uses legal guardrails (tobacco, gambling, driving) and should do so here too.

Nature of porn and changing landscape

  • Claims that modern porn is more violent, incest‑themed, and ubiquitous (phones, streaming) than past eras, making comparisons (“we watched at 13 and were fine”) questionable.
  • Suggestions for “teen‑safe” porn with strict content rules, analogous to softer vs harder drugs/alcohol.

Regulatory design, enforcement, and culture

  • Debate over standards vs detailed rules, risk of selective enforcement, and costs for small sites.
  • Some favor outright online bans or physical “porn rooms”; others see this as overreach and moral panic.
  • A few question why porn is targeted more heavily than graphic violence, and note differing European vs US attitudes toward sex.

Just how bad are we at treating age-related diseases?

Progress vs. stagnation in age‑related disease

  • Commenters agree we’ve dramatically reduced many historical killers (TB in rich countries, hookworm, black lung, infectious diseases, some cancers), but have done poorly on classic age‑related degeneration (Alzheimer’s, Parkinson’s, IPF, macular degeneration, neurodegenerative diseases).
  • Disagreement over terms like “conquered”: some see big mortality drops and early‑stage survival gains (e.g., breast cancer, prostate cancer) as success; others argue that anything still common, lethal, or disabling is not “conquered.”

Treatment vs. prevention and behavior

  • Many argue we’re far better at chronic management than curing, and “successes” often come from prevention and environment (better sanitation, nutrition, shoes, fewer miners) rather than drugs.
  • Strong thread claiming many “age‑related” diseases are largely lifestyle‑driven (diet, exercise, alcohol, obesity, smoking); counter‑arguments say age and biology still dominate and behavior change is extremely hard in practice.
  • GLP‑1 drugs are cited as evidence that behavior is biologically constrained, not just willpower.

Metrics: clinical proxies vs quality of life

  • Frustration that trials focus on surrogate markers (lesion size, biomarkers) rather than quantity and quality of life.
  • Acknowledgment that QoL studies are expensive and slow; proxies are cheaper but often weakly linked to real‑world benefit.

Neurodegeneration and aging biology

  • Alzheimer’s singled out as emblematic failure; mention of past research fraud possibly delaying progress.
  • Neurodegenerative diseases seen as fundamentally unsolved; some view curing them as tantamount to achieving biological immortality.
  • Speculation about aging as a malleable process (e.g., cell reprogramming, long‑lived species), but current interventions like young‑blood transfusions are described as weakly supported and ethically fraught.

Public health, risk behaviors, and new threats

  • Tobacco control cited as a rare behavioral success requiring massive social and political pressure.
  • Debates over vaping and cannabis: how harmful vs cigarettes, dose differences, second‑hand risk, and whether society should treat them equivalently.
  • Concern that eradication strategies (vaccines, PrEP for HIV) run into compliance, education, and misinformation barriers.

End‑of‑life, ethics, and society

  • Several comments highlight the misery of long decline (Alzheimer’s, advanced cancer), aggressive life‑prolonging care with poor QoL, and the appeal of legal assisted suicide.
  • Some see our biggest failure not as medical but philosophical: inability to accept and plan for death.

Broader social and system factors

  • Loneliness, insomnia, polypharmacy, and loss of function are seen as “new” geriatric burdens now that people outlive earlier killers.
  • Observations that national systems differ: public systems may push lifestyle change more, private ones may profit from long‑term treatment dependence.

Why I wrote the BEAM book

BEAM performance, pauses, and mailboxes

  • Commenters debate how a 15ms pause could be “post‑mortem worthy”; those familiar with BEAM note that typical latencies are in microseconds, so a jump to milliseconds can cause massive backlogs.
  • A gen_server is described as effectively a big mutex guarding shared state; if it normally serves a request in ~20µs, a 15ms stall can queue hundreds of messages (at full load, 15 ms ÷ 20 µs ≈ 750 requests arrive during the stall).
  • Unsafe receive patterns that scan entire mailboxes become catastrophic under backlog, making processing time proportional to queue length.
  • Some systems resort to drastic recovery strategies: dropping entire mailboxes, age‑based request dropping, tuning GC around large mailboxes, and monitoring drain/accumulation rates.

Concurrency model, OTP, and failure handling

  • BEAM’s main concurrency tools noted: gen_server, ETS (in‑memory tables), and persistent_term, plus newer “aliases” to avoid message buildup from timeouts.
  • There’s discussion of where to block work (e.g., letting callers block rather than the gen_server) and how to apply backpressure instead of blindly queueing.
  • Some argue BEAM’s magic is really OTP’s abstractions (supervision trees, processes, fail‑fast semantics), which can be emulated conceptually in other languages, though often without BEAM’s preemptive, lightweight processes (see the sketch below).
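
A minimal sketch of that emulation in Go: a supervisor that restarts a panicking worker, one_for_one style, though without BEAM’s preemptive scheduling or per‑process heap isolation:

```go
package main

import (
	"fmt"
	"time"
)

// supervise runs worker in a goroutine and restarts it whenever it
// panics, mimicking a one_for_one supervisor's restart strategy.
func supervise(name string, worker func()) {
	go func() {
		for {
			func() {
				defer func() {
					if r := recover(); r != nil {
						fmt.Printf("%s crashed (%v); restarting\n", name, r)
					}
				}()
				worker()
			}()
			time.Sleep(100 * time.Millisecond) // crude restart backoff
		}
	}()
}

func main() {
	calls := 0
	supervise("flaky-worker", func() {
		calls++
		if calls%3 == 0 {
			panic("simulated fault") // fail fast; the supervisor restarts us
		}
		fmt.Println("worker did some work")
		time.Sleep(200 * time.Millisecond)
	})
	time.Sleep(2 * time.Second) // let the demo run briefly
}
```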

BEAM vs other runtimes and stacks

  • Historically BEAM was unique; now many problems it solves (message buses, clustering, serialization, orchestration, reliability) have a “zoo” of alternatives: Kafka/SQS/NATS, gRPC/Cap’n Proto, Kubernetes, lambdas, micro‑VMs, etc.
  • Several comments emphasize that BEAM’s 2025 advantage is integration: one coherent runtime with built‑in messaging, supervision, clustering patterns, and a consistent data model, rather than wiring many disparate systems.
  • Others counter that Kubernetes‑style stacks and language‑agnostic infra give more flexibility, and BEAM’s built‑ins (queues, DB, clustering) can be weaker than best‑of‑breed external tools.

Adoption, marketing, and ecosystem

  • Many see Erlang/Elixir/BEAM as highly underrated but note weak corporate backing and marketing compared to Java, Go, Rust, etc.
  • Some say Erlang really shines at very large scales (millions of users), and its “all‑or‑nothing OS‑like stack” plus unusual deployment model (ERTS, epmd, clustering) raises the adoption bar.
  • Others argue modern “standard” languages now handle large concurrency loads on single machines, reducing the perceived need for BEAM.

Experiences with Erlang/Elixir

  • Multiple comments praise BEAM as “alien tech” for fault‑tolerant, concurrent, networked systems; Elixir especially is highlighted for web apps (Phoenix, LiveView) and small teams managing big problems.
  • Some report painful setup and environment drift (especially with LiveView) and prefer containers to stabilize runtime expectations. Others find Elixir deployment straightforward in recent years.
  • Erlang is credited with improving developers’ thinking about immutability, pattern matching, and concurrency, but can make other languages feel clumsy afterward.

The BEAM book and technical publishing

  • The book is open‑sourced on GitHub, with paid Kindle/print versions for those wanting to support the author. Readers welcome deep, implementation‑level documentation, saying official docs are too shallow.
  • Several note that writing a book is a powerful way to truly understand a complex system.
  • Broader discussion covers traditional publishers vs self‑publishing: publishers bring marketing, editing, and print logistics but push for broad, beginner‑friendly topics; niche, deeply technical works increasingly succeed via self‑publishing, LeanPub, and similar platforms.

Cockatoos have learned to operate drinking fountains in Australia

Why cockatoos use fountains

  • Multiple hypotheses discussed: better-tasting “pure” water, elevated vantage point to watch for predators, laziness/convenience, and sheer curiosity.
  • Several commenters argue taste alone is not obvious, noting many pets prefer muddy puddles over clean tap water, suggesting “interesting” or variable water sources may be attractive.
  • Running water is suggested as a general cue of freshness (as with cats), possibly relevant here.
  • Others propose that operating the mechanism itself is mentally stimulating “play” for a very smart, puzzle-loving bird.

Playfulness, mischief, and social behavior

  • Cockatoos are repeatedly described as pranksters and “little jerks”: destroying decks, hoses, and tools; running what amounts to a neighborhood “protection racket” by threatening property until fed.
  • Stories from Australia and New Zealand extend this to other birds (kea, weka, seagulls, ibis, magpies, crows), showing coordinated theft, distraction tactics, bin-opening and even traffic-cone manipulation.
  • Cockatoos are noted as very social, often in large flocks that split and recombine, with complex interaction and apparent enjoyment of games.

Pet birds: intelligence and welfare

  • Rescue and pet owners emphasize high intelligence, curiosity, destructiveness, long lifespans, and emotional complexity.
  • Birds are likened to “flying eternal toddlers with can‑opener mouths,” needing constant stimulation and often outliving owners.
  • Several commenters say this is why they now consider long-term caging of parrots/cockatoos unethical, despite how charming they are.

Animal intelligence and avian cognition

  • Discussion touches on convergent evolution of intelligence: bird pallium vs mammalian neocortex; neuron density in bird forebrains; parallels with cephalopods.
  • Debate over whether humans underestimate bird intelligence or overestimate human intelligence; some argue cockatoos are at a “3‑year‑old” level, others push back on dismissing their understanding.
  • Brief disagreement on bird language abilities: parrots can mimic, but some species (e.g., African greys) show evidence of deeper semantic understanding.

Handedness and learning claims

  • An anecdotal “fun fact” claims wild cockatoos are all left-footed when manipulating food, sparking broader discussion of handedness in animals.
  • One commenter calls “learned to operate” overstated because success was ~41%; others counter that twisting a spring-loaded handle with body weight clearly qualifies as operating the device.

Cloud Run GPUs, now GA, makes running AI workloads easier for everyone

Serverless GPU model and use cases

  • Commenters interpret Cloud Run GPUs as a way to run arbitrary models (e.g., Hugging Face) behind an API, paying only when used and scaling to zero between bursts.
  • Main value seen in small/custom or cutting-edge open-weight models where managed APIs don’t exist or are too restrictive.
  • Several note this is best for bursty or early-stage workloads (new apps without clear steady traffic), not for consistently busy services where VMs+GPUs are cheaper.

Cold starts and latency

  • Cold start is a major concern. Reports for CPU Cloud Run range from ~200ms to 30s+, heavily dependent on language and whether using gen1 vs gen2 runtime.
  • For GPUs, a cited example is ~19s time-to-first-token for a 4B model including container start and model load; some see this as unacceptable for interactive UX, others say it’s fine for first-request-only or batch/agent use.
  • Model weights download and GPU memory load can significantly add to startup time; several say you’ll likely keep at least one warm instance, so “scale to zero” is not always practical.

Pricing, billing, and cost controls

  • Pricing of Google’s GPUs (especially beyond L4) is widely viewed as uncompetitive versus specialized providers; L4 on other platforms is quoted at ~40¢/hr vs ~67–71¢/hr here.
  • Cloud Run GPUs bill per use but with a ~15-minute idle window; one request every 15 minutes (under 100 a day) keeps the instance billable around the clock, so you effectively pay 24/7, often several times the cost of a comparable VM.
  • Lack of hard spending caps on GCP is a major worry. Budgets and alerts exist but are delayed and can’t prevent “runaway” bills; some hack around this by auto-disabling billing, but fear breakage.
  • Limiting max instances and concurrency can cap Cloud Run service spend, but not other APIs (e.g., Gemini). Several argue real stop-loss billing is essential for individuals and small teams.

Comparisons to other providers

  • Runpod, vast.ai, Coreweave, Modal, Coiled, DataCrunch, Lambda Labs, Fly, and others are discussed as cheaper or more flexible GPU options, often with per-second billing and/or true caps or prepaid credit.
  • Modal, in particular, is praised for fast cold starts, good documentation, and scale-to-zero GPUs.

Cloud Run experience and architecture

  • Many praise Cloud Run’s developer experience and autoscaling, often preferring it to AWS Lambda/ECS/Fargate/AppRunner; some report large-scale, cost-effective production use.
  • Others report mysterious scale-outs, restarts, and outages that support couldn’t fully explain, prompting moves to self-managed VMs or Kubernetes.
  • Differences between Cloud Run gen1 (faster startup) and gen2 (microVM-based, slower startup) are noted; Cloud Run Jobs (non-HTTP batch) are highlighted.
  • Root access is not yet generally available but is being worked on; GPU types are currently limited (mainly L4), with more promised.

Local vs cloud AI and GPU market

  • Some wish for consumer-grade, local “AI appliance” hardware, arguing many LLMs can run locally if UX were better.
  • Others counter that large-scale training and heavy inference still demand cloud GPUs; GPU supply on major clouds is described as constrained and expensive, fueling the rise of “neo-cloud” GPU providers.

Why I Use a Dumbphone in 2025 (and Why You Should Too)

Practical Barriers to Using a Dumbphone

  • Many key services are increasingly app-only: banking (including PSD2-style SCA in Europe), national e‑ID systems (e.g., BankID), UPI in India, school communication, WhatsApp-only business/government support, and some car parks and retail payments.
  • Some hardware (printers, cameras, CCTV) requires a phone app for setup; web interfaces are often hidden or missing.
  • Dumbphone options are shrinking as 2G/3G are shut down; 4G-capable “feature phones” can be buggy and often ship with unwanted Android + bloatware.
  • Group messaging (WhatsApp/Signal/Threema) and maps/navigation are cited as the biggest blockers to going fully “dumb”.

Workarounds and “Dumbified Smartphone” Strategies

  • Keep a smartphone but strip it down: uninstall/disable browser and social apps, turn off notifications, use only authenticator/banking/maps.
  • Stronger self-restraint setups:
    • iOS Screen Time / Focus modes with whitelists, disabled App Store, allowed sites only, and PINs controlled by another person or timelocked.
    • Supervised MDM profiles that the user cannot override on impulse.
  • Use friction: grayscale display, huge fonts, minimalist launchers with delays, long passwords, cheap/slow hardware, or an e‑paper phone with keyboard.
  • Hybrid models: dumbphone for daily carry + smartphone in drawer for OTP/banking; tablet or laptop + Wi‑Fi instead of a pocket computer.

Addiction, Attention, and Behavior

  • Many treat smartphone use as genuine addiction; “just don’t install TikTok” is compared to “just eat less” for obesity.
  • Several recount reinstalling browsers or apps after removing them; external controls are seen as more reliable than willpower.
  • Some criticize sensational “declining attention span” charts as uncited and misleading, linking to counter-arguments that debunk the goldfish comparison.

Accessibility, Rights, and Regulation

  • Strong criticism of making essential services app‑only (parking, charging, transit tickets, payments); argued this should be banned under accessibility law.
  • Concern over being effectively forced to accept Google/Apple EULAs to be a “functioning citizen”.
  • Others push back that smartphones per se aren’t the problem; addictive UX patterns and unnecessary high-tech replacements for robust low-tech systems are.

Privacy and Communication Tradeoffs

  • One side: fewer apps and no app ecosystems significantly reduce corporate tracking.
  • Other side: phones (smart or dumb) remain heavily logged by carriers and may run poorly secured OSes; privacy gains are limited against state-level actors.
  • Messaging norms complicate dumbphone use: some insist chats are essential and more respectful/asynchronous; others advocate more phone calls or alternative platforms (e.g., Matrix) and argue social graphs can shift if users take a stand.

What if you could do it all over? (2020)

Replaying Life vs. Living in the Future

  • Some find the idea of “redoing” life haunting; others are more intrigued by jumping 1,000 years forward.
  • Optimists expect dramatic reductions in disease and misery; skeptics argue human satisfaction quickly re‑normalizes to a baseline, regardless of tech.
  • There’s debate over whether medieval or very poor people were “truly” happy, or just adapted to normalized misery; others counter that many traditional/indigenous lifestyles may have supported greater contentment and that modern depression is, in part, a new disease.

Irreversibility, Meaning, and the Allure of “What If”

  • Several argue that if you could infinitely redo life, choices would lose meaning, likening it to cheat codes in games or Groundhog Day.
  • Fantasy replays usually omit real risks (e.g., military service without injury) and over-romanticize the road not taken.
  • Some point to stories/films where the “lesson” is eventually to accept the current life as the meaningful one.

Family, Nihilism, and Sources of Meaning

  • For some, children and family erase interest in unlived lives; the thought of kids not existing in an alternate timeline is intolerable.
  • Others adopt a relaxed nihilism: nothing matters cosmically, which they find freeing rather than despairing.
  • Counterarguments stress that “giving to others,” especially parenting, is empirically tied to well‑being and can ground a sense that our actions matter.
  • Large subthreads debate:
    • whether having kids is altruistic, selfish, or simply a biological imperative;
    • whether continuing the human species is meaningful or just ego;
    • whether freedom comes from dropping the need for any ultimate meaning.

Regret, Agency, and Gratitude

  • Many would change little beyond being kinder, braver, or avoiding specific relationships/jobs; they see painful experiences as necessary for growth.
  • Others emphasize that new “lives” (new careers, cities, identities) are always available in the present, making time‑travel fantasies less useful than asking “How can I change now?”
  • Some note strong effects of starting conditions and luck; without altering family background or social class, a do‑over might not radically change outcomes.
  • Several express deep specific regrets (e.g., choosing work over a partner) but also highlight gratitude for current families and hard‑won contentment.

Machine Code Isn't Scary

Demystifying Machine Code & Early Experiences

  • Many recall early 8‑bit days (BBC Micro, ZX81, Spectrum, TRS‑80, CoCo, Amiga) where BASIC was obvious, but machine code initially felt like an unreachable “secret decoder ring.”
  • The “click” often came from better explanations (books like CODE, advanced OS guides, SICP‑style thinking): realizing complex behavior is “just” data in registers/memory plus OS/hardware calls.
  • Several describe hand‑assembling hex and POKEing it into memory or using DOS debug.com; once you see bytes ↔ instructions, machine code stops being mystical and becomes “just tedious.”

Hardware, ISAs, and Instruction Encodings

  • Long subthread on how hardware implements branches: disagreement over whether “hardware executes both sides,” eventually clarified as a confusion between specific circuit techniques (muxes, ALUs, speculative execution) and the abstract ISA model.
  • Discussion of immediate encodings on AArch64, RISC‑V, x86; big constants require multiple instructions or special patterns. Variable‑length x86 vs fixed‑length RISC designs are contrasted.
  • Some argue instruction encodings are mostly niche knowledge (assembler/disassembler writers); understanding the ISA and memory/branch behavior matters more than bit‑level formats.

JITs, Emulators, and Low‑Level Projects

  • One poster describes building a Forth that directly emits machine code for x86‑64, AArch64, and RISC‑V; finds a simple non‑optimizing JIT surprisingly approachable.
  • Others mention mapping Lisp or toy languages directly to WebAssembly or assembly, and using tools like Capstone for disassembly.
  • Emulator authors highlight opcode‑decoding recipes (e.g., Z80) and note that for actual programming, a macro assembler is far more practical than typing hex.

Should Assembly Be a First Language? (Big Disagreement)

  • Pro side: instructions are simple, map cleanly to “sequences, conditions, loops,” and expose why higher‑level languages exist. Toy ISAs, microcontrollers, and simulators give immediate, tangible feedback (LEDs, simple games).
  • Contra side: assembly is unstructured, verbose, and brittle; it doesn’t resemble the control structures students will actually use, and it distracts from problem‑solving with hardware details few will need.
  • Critics warn it can build wrong mental models for modern CPUs/compilers, and kill motivation in beginners whose goals are “make games/sites,” not “understand bits.” Many advocate starting high‑level, then introducing assembly later for context.

Assembly in Practice: Production vs Hobby

  • Embedded and OS developers still use small, targeted assembly for special instructions, calling conventions, or extreme performance, and read disassembly frequently for debugging and perf work.
  • Others with heavy production ASM experience report it’s slow and painful for general development; C (or higher) is almost always more productive, with compilers usually generating better overall code, especially on complex modern CPUs.
  • For obscure MCUs, compilers can be poor, making hand‑written assembly dramatically faster and smaller; this keeps low‑level skills relevant in some niches.

Abstraction, Mental Models, and Education

  • Several posters frame computing as “Input → Computation → Output,” or “Programs = Data + Instructions,” and say adopting this model made all levels—from machine code to OSs—much less intimidating.
  • There’s tension between two educational philosophies:
    • Start from hardware/assembly to ground abstractions.
    • Start from pure problem‑domain languages and only later show what runs underneath.
  • Consensus points: machine code itself isn’t inherently scary; good explanations, tooling (monitors, simulators, debuggers), and clear goals determine whether it feels empowering or pointless.

Merlin Bird ID

Overall Reception and Impact

  • Widespread enthusiasm; many call Merlin a “magic” or exemplar app that makes phones augment real-world perception rather than distract from it.
  • Strong adoption by casual users, parents, photographers, and serious birders; often described as “Pokémon Go for birds,” motivating people to go outside more.
  • Especially valued on hikes, in foreign countries, and by people with limited prior bird knowledge.

Sound ID Performance

  • Sound ID is the star feature: users report rapid, real-time identification of many species at once, often matching later visual confirmation.
  • Handles complex soundscapes and some mimicry (mockingbirds, thrashers, jays imitating hawks) surprisingly well, though mimicry can still fool it.
  • Struggles with: very high or very low frequencies (phone mic limits), strong background noise (AC, footsteps, roads), and differentiating very similar species (finches, crows, some warblers).
  • False positives and strict updates are noted; most users treat Merlin’s IDs as strong hints requiring human judgment and, ideally, visual confirmation.

Photo ID and Data Ecosystem

  • Photo ID is considered “good but not as magical” as sound, partly due to low-quality zoomed phone shots.
  • Users want:
    • Ability to keep their own photos in checklists instead of stock images.
    • Web-based upload/ID for DSLR workflows.
  • Integration with eBird is praised for long-term checklists, expert vetting, and contribution to research.
  • Related tools mentioned: BirdNET (and BirdNET-Go / BirdNET-Pi / WhoBird), iNaturalist and Seek, PlantNet, Birder, and various DIY acoustic stations.

UX, Coverage, and Technical Issues

  • Mixed reports on stability: some see a smooth experience; others report crashes, long-recording failures, region-pack bugs (especially on iOS), and occasional lost detections.
  • Coverage is praised in North America and Europe but described as weaker in parts of East Asia, New Zealand, and some developing regions. One comment speculates intentional limits to deter poaching; this remains unclear.
  • An Android tracking report raises privacy concerns; others argue that included SDKs don’t necessarily imply meaningful data sharing.

Ethics, Features, and Wish List

  • Playback of songs can disturb territorial birds; some users warn that Merlin should caution more strongly against “calling back.”
  • Requested features: editing clips, casting audio, long-duration logging, individual-bird tracking, non-bird sound ID (frogs, insects, mammals, cars), directional localization with multiple mics, APIs, better gamification, and an iNaturalist-style bridge.

Binary Wordle

Solving strategy and game triviality

  • Core insight: with binary digits, any puzzle is solvable in at most 2 guesses.
    • Common “optimal” strategy: guess 00000 (or 11111), then flip every non‑green cell on the second guess (sketched after this list).
    • Others point out you can start with any pattern; any non‑green cell must flip, so it’s still always solvable in 2.
  • Minor nitpicking over wording: “in 2 steps” vs “within 2 steps” (since you might solve it in 1).
  • Some players initially felt proud of solving in 2–3 guesses, then realized 2 is guaranteed.
  • Question about yellow cells: they appear only if you mix 0s and 1s on the first guess; in practice they’re redundant because “yellow = flip it” just like grey.
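
A minimal sketch of the two‑guess strategy, assuming standard Wordle feedback where green marks an exactly‑matching cell:

```go
package main

import "fmt"

func main() {
	secret := "10110" // unknown to the player in the real game

	// Guess 1: all zeros. Every green cell is a 0 in the secret;
	// every other cell must therefore be a 1.
	guess1 := "00000"
	guess2 := []byte(guess1)
	for i := range guess2 {
		if guess1[i] != secret[i] { // non-green feedback: flip it
			guess2[i] = '1'
		}
	}
	fmt.Println("guess 1:", guess1)
	fmt.Println("guess 2:", string(guess2)) // always equals the secret
	fmt.Println("solved:", string(guess2) == secret)
}
```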

Humor and binary jokes

  • The game is widely read as an absurdist joke rather than a serious puzzle.
  • Many classic binary jokes appear:
    • “There are 10 types of people…” and variants.
    • Puns on “two attempts” vs “10 attempts”.
    • People claiming it took “10 tries” and riffing on that.
  • Some enjoy the fact that both inputs and the game’s logic are “binary” in every way.

Comparisons and variants

  • Compared to Mastermind/Wordle; consensus is this is much simpler.
  • One thread discusses “easy” vs “hard” Mastermind/Wordle feedback (whether positions of correct/wrong letters are revealed).
  • Several related games are shared:
    • Hex or 8‑digit hex “Wordle” variants.
    • Number‑based Wordles (rationals, factors, linear equations).
    • Other joke Wordles (e.g., horse anagrams).

Design suggestions and difficulty tweaks

  • Some suggest:
    • Fewer guesses or longer bitstrings.
    • Matching based on longer substrings to make it nontrivial.
    • A share button and showing guess counts in binary.
    • Using only two rows, since only two guesses are ever needed.

Implementation and UX

  • Minor keyboard‑focus bug reported (especially after “play again”).
  • Positive comments on the UI, animation, and the commitment to the gag.

Technical tangents

  • Brief digression into whether 0s vs 1s use different energy, Landauer’s principle, and word sizes (why 5‑bit “wordle” is not a real computer word).

DiffX – Next-Generation Extensible Diff Format

Existing Tools vs. “New Standard”

  • Many commenters argue the problems DiffX claims to solve are already covered by:
    • git format-patch/git am and mbox for multi-commit patch sets.
    • Git-style unified diffs with rich headers.
    • RFC822/email-style headers above diffs for metadata.
  • Several see DiffX as “standard n+1” (invoking xkcd 927), especially since Git’s format is de facto canonical for many workflows.
  • Others point out these Git-centric solutions don’t help tools that must integrate with many different SCMs (SVN, Perforce, ClearCase, in-house systems) that lack consistent or rich diff formats.

Who Actually Has the Problem?

  • Proponents (notably from the Review Board side) say the real pain is on tool authors:
    • Every SCM has its own diff quirks, often undocumented, requiring bespoke parsers.
    • Some SCMs have no diff format, or omit crucial info: revisions, modes, symlinks, deletions, encodings, binary changes.
    • Large diffs (hundreds of MB) are expensive to parse without clear sectioning and lengths.
  • Skeptics respond that:
    • Most users stick to a single SCM per project and never see these issues.
    • Better SCMs or documented Git-style formats would be preferable to inventing a new one.
    • Claims about massive binary/versioning setups are viewed by some as edge cases or “imaginary problems.”

Design of DiffX Format

  • Structure:
    • Hierarchical section headers (#..meta, #...diff, etc.) plus explicit length= fields.
    • Metadata in JSON blobs, with a simple key/value header syntax indicating format and length.
  • Critiques:
    • Dot-based hierarchy is hard to read and error-prone; different levels all called “meta.”
    • Mixing custom header syntax and JSON means two parsers, less friendly to grep/awk-style tooling.
    • Length fields are seen as fragile when humans edit patches.
    • JSON is criticized as noisy and awkward for hand-editing; some argue JSON5 would be nicer, others insist on baseline JSON for maximal compatibility.
  • Defenses:
    • DiffX is intended to be machine-generated/consumed; human editing is not the main use case.
    • Lengths and hierarchy allow efficient partial parsing and mutation in large diffs (see the sketch below).
    • JSON was chosen after trying other grammars; widely supported, unambiguous types.
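
A minimal sketch of why explicit lengths matter: a reader can hop over sections it doesn’t need without parsing their contents. The header syntax below is a loose stand‑in for DiffX’s, not the exact grammar:

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"regexp"
	"strconv"
	"strings"
)

var lengthRe = regexp.MustCompile(`length=(\d+)`)

func main() {
	// Two sections, each announcing its body length up front; lengths
	// here count the body including its trailing newline.
	input := "#..meta: format=json, length=17\n" +
		"{\"op\": \"change\"}\n" +
		"#..diff: length=20\n" +
		"-old line\n+new line\n"

	r := bufio.NewReader(strings.NewReader(input))
	for {
		header, err := r.ReadString('\n')
		if err == io.EOF {
			break
		}
		m := lengthRe.FindStringSubmatch(header)
		if m == nil {
			continue
		}
		n, _ := strconv.Atoi(m[1])
		if strings.HasPrefix(header, "#..diff:") {
			body := make([]byte, n)
			io.ReadFull(r, body)
			fmt.Printf("diff section (%d bytes):\n%s", n, body)
		} else {
			// Skip sections we don't care about without parsing them:
			// with lengths this is a cheap seek, not a full scan.
			io.CopyN(io.Discard, r, int64(n))
		}
	}
}
```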

Scope: Diff vs Patch, Metadata, Commits, Binary

  • Some say DiffX conflates concepts:
    • A “diff” should just be line changes; commit lists and metadata belong in the VCS/transport (or in patch sets).
    • Encoding and metadata problems should be solved by standardizing on UTF‑8 and one SCM.
  • Others argue:
    • In practice, many VCSs expose only textified content with local encoding, mixed newlines, or incomplete metadata.
    • Tools need a portable representation of “delta state” including commit ranges, per-file revisions, symlinks, modes, and binary deltas to reconstruct or analyze changes across diverse backends.
    • Multi-commit-in-one-file is valuable to avoid ordering/missing-patch issues for downstream tools.

Alternatives and Broader Perspectives

  • Suggestions include:
    • Formalizing Git’s diff header grammar and/or email-style headers instead of creating DiffX.
    • Using more semantic/AST-based diffs (e.g., difftastic) or structured formats for JSON/AST changes.
    • In some scenarios, just shipping both full file versions (or compressed pairs) may be simpler.
  • Some note diffs are still important for:
    • Code review pipelines.
    • Tools interacting with LLMs where diffs can dramatically reduce token usage and latency.
  • Adoption concerns:
    • Currently appears mostly used inside the Review Board ecosystem.
    • Without buy-in from major VCSs, some doubt it will gain wide traction, though others see it as a useful documented format that others may adopt if they share similar pain points.

Ask HN: Has anybody built search on top of Anna's Archive?

Scope and Feasibility of Full‑Text Search on Anna’s Archive

  • Several commenters note that AA already has rich metadata search; the question is about full-text and possibly page-level search.
  • Rough estimates: AA’s ~1 PB could become 10–20 TB of plaintext; indexing would further multiply storage but is still feasible on commodity hardware.
  • Main technical bottlenecks:
    • Reliably extracting clean text from heterogeneous formats (especially scanned PDFs).
    • Handling OCR artifacts, hyphenation, footnotes, layout quirks.
    • Choosing a search backend that can handle the scale (Lucene/Tantivy seen as more realistic than Meilisearch; SQLite+WASM for client-side experiments).
  • Ideas include partial indexing (e.g., top 100k books first), static-hosted indexes fetched by hash, and TF‑IDF–style per-term shards (see the sketch below).
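
A minimal sketch of the per‑term shard idea: build a small inverted index, then address each term’s posting list by a hash‑derived file name so a static host can serve lookups with no query server. Names and layout are illustrative:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
	"strings"
)

// shardName maps a term to a stable shard path, so a client can fetch
// postings for "whale" directly by hashing it (no central query server).
func shardName(term string) string {
	sum := sha256.Sum256([]byte(term))
	return fmt.Sprintf("shards/%x.json", sum[:4])
}

func main() {
	books := map[string]string{
		"book-001": "the whale surfaced near the ship",
		"book-002": "the ship sailed on without the whale",
	}

	// Build term -> posting list (book IDs). A real index would add
	// term frequencies for TF-IDF ranking and positions for phrases.
	postings := map[string][]string{}
	for id, text := range books {
		seen := map[string]bool{}
		for _, term := range strings.Fields(text) {
			if !seen[term] {
				postings[term] = append(postings[term], id)
				seen[term] = true
			}
		}
	}

	// Each term's postings would be written to its own static file.
	for _, term := range []string{"whale", "sailed"} {
		blob, _ := json.Marshal(postings[term])
		fmt.Printf("%s -> %s\n", shardName(term), blob)
	}
}
```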

Deduplication, Editions, and Result Quality

  • Simple ISBN-based dedup is inadequate: many editions per ISBN family, multiple ISBNs per work, retitled collections, etc.
  • Alternatives suggested: Library of Congress or Dewey classifications plus author/title/edition; or content-based dedup.
  • Users want one canonical result per work, with optional edition drill‑down and weighting by quality; also the possibility of indexing a “plain” version but serving a nicer EPUB/PDF.

Use Cases and Value

  • Proposed beneficiaries:
    • Researchers in fields heavily dependent on older books and paywalled PDFs.
    • People wanting direct access to exact passages instead of LLM paraphrases.
    • Niche projects like curated “canons” (e.g., frequently cited HN books) optimized for semantic/LLM search.
  • Some see it as “game‑changing” for scholarship and knowledge access; others question who would use it versus Google Books, Amazon, or Goodreads and how it would be funded.

Legal and Policy Risks

  • Core concern: indexing and exposing full text of largely pirated material may be treated like facilitating infringement (Pirate Bay analogy), even without hosting files.
  • Distinction drawn between:
    • LLM training as a possibly transformative, in‑house use.
    • A public engine that enables verbatim retrieval and points users to shadow libraries.
  • Some cite fair‑use precedent around Google Books; others note that AA’s sources are outright unauthorized, which makes the situation riskier.
  • Several commenters conclude the project is non‑monetizable, high‑risk legally, and thus unlikely to be publicly deployed, though individuals could build private indexes.

Existing Partial Solutions

  • Z‑Library reportedly offers limited full-text search but at smaller scale.
  • Various book search tools and AA competitors exist, mostly covering metadata/ISBN rather than the full text of all books.
  • Android apps and external search tricks (e.g., site:annas-archive.org) provide practical but shallow search.

LLMs and Double Standards

  • Widely shared belief that major LLMs (Meta, others) have already ingested AA/related datasets; Meta’s torrenting of AA is cited.
  • Several comments highlight perceived double standards: individuals and small sites face harsh copyright enforcement, while large corporations push legal boundaries with relative impunity.

Illegal Non‑Copyright Content

  • Some worry that bulk-downloading AA might incidentally pull in non‑copyright criminal content (e.g., sexual exploitation material or bomb manuals).
  • Opinions differ on how much such material is present and how laws in different jurisdictions treat text vs. images, or instructional content.
  • This contributes to hesitancy about mirroring or seeding large chunks of the archive.

Ask HN: Startup getting spammed with PayPal disputes, what should we do?

Nature of the attacks and likely motive

  • Most commenters think this is card/credential testing using stolen PayPal accounts or cards: attackers run many low-value transactions to see which accounts still work, then use or sell the “validated” ones.
  • Some suggest it might be automated chargeback abuse to harm the marketplace or its PayPal standing.
  • A minority proposes money-laundering or competitor sabotage, but others (including people with payments experience) argue the pattern fits testing/fraud, not laundering.

Perceived weaknesses of PayPal

  • PayPal is seen as offloading fraud risk to merchants and being slow or unhelpful on non-standard issues; multiple stories of arbitrary freezes, bans, and locked funds.
  • In a marketplace setup, platform-wide controls (e.g., rejecting unverified buyers) often must be configured per seller, limiting defense options.
  • However, several note PayPal remains popular with buyers for trust, convenience, and micropayment pricing; removing it can hurt conversion.

Mitigation strategies proposed

  • Account / buyer controls:
    • Reject or temporarily block unverified PayPal buyers; treat small/micro transactions with extra suspicion.
    • Add email/phone/SMS verification, or hold “risky” orders for manual review.
  • Traffic and identity controls:
    • Use browser/device fingerprinting, header/TLS fingerprints, IP reputation/proxy/VPN checks, and ASN/geo blocking (especially Tor, datacenters, cheap VPS ranges).
    • Rate limiting and velocity rules per IP/fingerprint/email; threat levels that automatically tighten rules on spikes or low approval rates (see the sketch after this list).
    • CAPTCHAs/Turnstile/hCaptcha and JS challenges; some note solvers and AI make these weaker, so they should be adaptive, not the only line of defense.
    • Shadowbanning or returning “success” while not charging, to waste attacker time.
  • Operational responses:
    • “Under attack” modes that disable or heavily gate checkout, even at the cost of lost sales.
    • Extensive logging and monitoring to detect new attacks early.
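
A minimal sketch of a per‑key velocity rule, assuming a fixed one‑minute window keyed by IP (or fingerprint/email); a production system would back this with Redis and layer it with the other controls above:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// limiter allows at most maxHits per key per window; a checkout
// endpoint would key this on IP, device fingerprint, or email.
type limiter struct {
	mu      sync.Mutex
	hits    map[string]int
	resetAt time.Time
	window  time.Duration
	maxHits int
}

func newLimiter(window time.Duration, maxHits int) *limiter {
	return &limiter{hits: map[string]int{}, window: window, maxHits: maxHits}
}

func (l *limiter) Allow(key string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	now := time.Now()
	if now.After(l.resetAt) { // start a fresh fixed window
		l.hits = map[string]int{}
		l.resetAt = now.Add(l.window)
	}
	l.hits[key]++
	return l.hits[key] <= l.maxHits
}

func main() {
	lim := newLimiter(time.Minute, 3) // tighten this under attack
	for i := 0; i < 5; i++ {
		fmt.Println("payment attempt allowed:", lim.Allow("198.51.100.7"))
	}
	// Output: true, true, true, false, false — the 4th+ attempts from the
	// same key inside the window are rejected (or routed to manual review).
}
```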

Alternatives and ecosystem discussion

  • Many advise planning to migrate away from PayPal (Stripe, Adyen, local gateways, 3DS flows, open banking), but others note similar risk-averse behavior from card processors and the difficulty of replacing PayPal’s reach and user trust.
  • A long subthread debates crypto and stablecoins as an alternative; some report good results and lower fraud, while others argue volatility, regulatory risk, and unsafe adoption by unsophisticated users make them unsuitable as a general solution.

A manager is not your best friend

Role of Manager: Not Friend, Not Enemy

  • Many agree managers shouldn’t be “best friends” or “bros,” but also not faceless corporate enforcers.
  • Good managers are described as diplomatic, accessible, fair, and willing to “manage up/sideways” for their team without trash‑talking others.
  • Several argue the article overcorrects: you can be close, even friends, with reports or managers while still enforcing boundaries and sometimes firing people.

Commiseration, Complaining, and Negativity

  • Strong focus on “commiseration”: many interpret it as “complaining together” or “co‑misery,” i.e., validating negative feelings about a shared bad situation.
  • Multiple commenters report that venting with reports about other teams or leadership reliably poisons cross‑team collaboration and creates “us vs them” dynamics.
  • Others defend limited, guided commiseration: acknowledge feelings, then redirect toward what can be controlled or improved, often best in 1:1s.
  • Several note that many people can’t easily switch back from venting to constructive action; complaining becomes an identity and a productivity sink.

Empathy, Truth, and Psychological Safety

  • The article’s line that “empathy must be highly conditional” is heavily debated.
  • One camp: a manager’s main duty is performance and truth; too much focus on making people feel good leads to avoidance of hard conversations and manipulation.
  • Opposing camp: happy, psychologically safe teams deliver better work; framing empathy as conditional or secondary is seen as dehumanizing.
  • Some distinguish empathy (understanding/recognizing feelings) from sympathy or agreement; empathy should be constant, responses conditional.

Work Relationships, Friendship, and Competition

  • Many recount drawing a line between “coworkers I like” and real friends, often only becoming friends after one leaves the company.
  • Others describe deep, lasting friendships with teammates and even managers, sometimes vacationing together and staying close for years.
  • A cynical strain argues that in stack‑ranking, layoff‑prone environments, everyone is effectively a competitor; colleagues and managers will protect themselves first.
  • Others push back, saying this attitude is toxic in itself and that some organizations deliberately build high‑trust, non‑cutthroat cultures.

Language, Culture, and Context Limits

  • Several non‑native and native speakers question the article’s use of “commiserate,” feeling it’s misused or at least confusing without added context.
  • Some criticize the piece as culturally narrow and absolutist, ignoring variations in hierarchy, national work culture, and individual personalities.
  • Others still see it as a useful reminder for new managers to avoid over‑sharing, over‑validation, and seeking to be liked by reports.

Brain aging shows nonlinear transitions, suggesting a midlife "critical window"

Understanding the study and its claims

  • Several commenters provide lay summaries: brain aging accelerates nonlinearly from ~40–60 as glucose utilization worsens; ketones can temporarily restore function during this “critical window,” but seem ineffective in older cohorts.
  • One user points out the paper’s own abstract already serves as a reasonable TL;DR.
  • There’s debate over the appropriateness of posting LLM-generated summaries, with some arguing it conflicts with forum norms, even if disclosed.

Ketones, keto diet, and feasibility

  • Discussion centers on using ketogenic diets, fasting, MCT oil, or commercial ketone esters to raise ketone levels.
  • Some highlight that strict keto isn’t necessary; moderate low-carb or modified Atkins can still induce ketosis.
  • Exogenous ketone products used in the study are noted to be extremely expensive if taken daily; effectiveness window appears to be a couple of hours post-dose.
  • A few people report subjective mental clarity on keto and attribute it partly to avoiding “carb comas.”

Exercise performance and carbohydrate needs

  • Lifters report that strict keto compromises performance with heavy weights; many adopt targeted or cyclical low-carb (carbs around workouts or on training days).
  • Others argue you can maintain strength on <100g carbs/day if timed well, though experiences vary with body fat levels and activity.

Fasting vs calorie restriction

  • Several practice intermittent or extended fasting, sometimes with added salt and black coffee, and discuss what “breaks” a fast (milk, supplements, small amounts of fat).
  • One thread cites a meta-analysis suggesting fasting isn’t superior to continuous calorie restriction for long‑term outcomes, but may improve insulin sensitivity and be useful as a metabolic intervention.
  • Autophagy timing and magnitude under fasting are described as unclear and contested.

Health risks and safety of keto

  • Concerns are raised about potential kidney and heart risks with long-term keto or very high protein, supported by individual anecdotes.
  • Others call this “myth,” arguing there’s no solid evidence that high protein harms healthy kidneys and that the bigger issue is sedentary lifestyle plus excess calories.
  • One link is shared about rapid plaque progression in certain “lean mass hyper‑responder” low‑carb individuals, but no consensus is reached.

Broader diet debates (carbs, rice, policy)

  • Strong anti–high-carb sentiment appears, with repeated criticism of white rice, high-GI foods, and sugar for energy crashes and insulin resistance; some tie this to high diabetes prevalence in certain cultures.
  • Others counter that many high-carb cultures are historically lean and that activity levels and food processing matter as much as macros.
  • Long subthreads debate government guidelines (e.g., calorie and protein recommendations, old low‑fat pyramids) and agricultural subsidies incentivizing sugar and corn, versus personal responsibility and total calorie intake.
  • There is disagreement on whether “processed food,” carbs, seed oils, or simple overconsumption are the primary drivers of obesity.

Conflicts of interest and criticism of the study

  • Commenters note that a key researcher has commercial interests in ketone products and receives royalties, prompting concern about bias while acknowledging that disclosure is standard.
  • Some see the work as promising but preliminary; others dismiss it as “terrible science,” accusing the authors of overinterpreting mechanistic data and pushing a keto-friendly narrative.

Precious Plastic is in trouble

Practicality and Scale of Precious Plastic Machines

  • Many see the machines as too small, power‑hungry, expensive and fiddly to be more than hobby tools (e.g., 15 kW for a single sheet, “several sheets per day”).
  • Skepticism that plastics processing can be safely and efficiently miniaturized to “cottage industry” scale; industrial plants use continuous processes with heat recovery that are hard to replicate.
  • Others report successful educational labs and small workshops using PP designs, arguing the compromises (manual, small‑batch, simple) fit prototyping, education, and small series production.
  • Debate over whether buying small industrial machines from Alibaba or used industrial gear is cheaper/more practical than building PP’s open‑source designs.

Organization, Finances, and Governance

  • Strong criticism that key problems are self‑inflicted: no insurance, weak budgeting, lack of financial transparency, deletion/migration of old forums, and giving away a €100k donation to the “community” instead of shoring up the core organization.
  • Some see this as evidence of incompetence or “performative” activism and are wary of donating again without clear changes in leadership or structure.
  • Others defend PP’s low burn rate, non‑profit ethos, and willingness to let the project die if it can’t find a sustainable path, framing it as a public good rather than a failed startup.

Community, Education, and Open Hardware Value

  • Supporters argue PP’s main contribution is building a global community, sharing open‑source machine plans, and making plastics, materials, and circular economy concepts tangible.
  • Even critics concede PP inspired more practical spin‑offs and cottage industries, especially in developing countries, which used the ideas to build more robust, locally adapted systems.
  • PP is contrasted with industrial suppliers by its open hardware focus and “microfactory” vision, not pure throughput.

Wider Debate: What Should We Do with Plastic?

  • Multiple comments argue small‑scale recycling is ecologically marginal or harmful: recycled plastics are lower quality, shed microplastics, and are often non‑recyclable again.
  • Proposed alternatives include:
    • High‑temperature incineration / waste‑to‑energy.
    • Plasma gasification.
    • Landfilling as de‑facto carbon sequestration in well‑engineered sites.
    • Chemical depolymerization and advanced recycling, if economics and scale can work.
  • Several argue the real lever is upstream: taxing plastics, extended producer responsibility, bans on single‑use items, and systemic reduction and reuse rather than consumer “recycling theater.”

Safety, Liability, and Legal Exposure

  • Discussion of a New York lawsuit over an accident with PP machinery; US legal costs (e.g., $600/h lawyers) are seen as a major drag.
  • Some suspect the shredder design is inherently risky (amputation/entanglement hazard) and insufficiently guarded, making liability hard to deflect.

Communication, Roadmap, and Trust

  • Many readers found PP’s appeal confusing: unclear description of what they actually do, what “Version 5” means, and how new funds would be used.
  • Calls for a concrete, directional roadmap (technical goals, organizational reforms) before further fundraising, and concern that “we’re so close, just one more version” sounds like chasing sunk costs.