Hacker News, Distilled

AI-powered summaries for selected HN discussions.

Page 111 of 521

Thousands of U.S. farmers have Parkinson's. They blame a deadly pesticide

Regulatory status and bans

  • Many commenters note paraquat is banned in the EU and >70 countries, often initially approved then withdrawn (e.g., EU decision after toxicity concerns and suspected Parkinson’s link; French poisoning cases mentioned).
  • Others stress most countries banned it primarily for acute toxicity (suicides, accidental ingestion, lung damage), not Parkinson’s.
  • China bans domestic use but manufactures and exports it; some see this as outsourcing health risks.

Evidence and uncertainty around Parkinson’s

  • Several epidemiological studies are linked showing:
    • ~2–2.5× higher Parkinson’s odds for people using or living/working near paraquat and similar pesticides.
    • Elevated risk for those near agricultural applications in California’s Central Valley.
  • California’s pesticide regulator acknowledges major ecological risks, but (aligning with US EPA) says current human data do not yet prove a causal link to Parkinson’s.
  • Some highlight other data: higher Parkinson’s risk for farmers generally, near golf courses, and possibly from other pollutants like TCE and copper salts, suggesting multiple environmental triggers.

Acute toxicity vs chronic exposure

  • Paraquat is described as extremely acutely toxic; small ingestion can be lethal.
  • A dramatic case is cited of a nurse suffering serious skin injuries from contact with the urine of a suicide patient; some accuse the article of using this acute-poisoning case to imply risk from routine farm exposure.

Risk assessment and regulation models

  • Thread contrasts:
    • EU-style “precautionary principle” (assume unsafe until proven reasonably safe).
    • US “risk-based” model (allow use until harm is demonstrated, often via industry-supplied data).
  • Multiple comments emphasize how hard long-term, low-dose safety studies are in humans and how often pesticides are later revoked.

Chevron doctrine and regulatory power

  • Large subthread on the (now-overturned) Chevron deference:
    • One side: deference to technical agencies is necessary; courts and Congress lack expertise and bandwidth; ending Chevron weakens health/environmental protection.
    • Other side: Chevron allowed unelected regulators to effectively make law, enabled regulatory capture, and sometimes diluted protections (examples given from EPA, FCC, ATF).

Corporations, capture, and trust

  • Strong distrust of agrochemical firms and “big business”: references to Monsanto/Roundup PR campaigns, ghostwritten papers, revolving-door regulators, and astroturfing.
  • Some argue corporations are amoral profit machines and must be tightly policed; others caution against assuming every corporate claim is false but still advocate strong scrutiny.

Skepticism about the article

  • A detailed critique calls the piece litigation-driven and misleading:
    • Says it ignores baseline Parkinson’s prevalence among older farmers.
    • Faults it for emotional anecdotes, conflating acute and chronic exposure, and not seriously engaging with alternative explanations or falsification.
  • Others reply that widespread bans, toxicology data, and converging epidemiology justify serious concern even if causality isn’t fully nailed down.

Personal experiences and broader chemical worries

  • Multiple anecdotes: farmers, crop-duster pilots, rural residents, and relatives with Parkinson’s or related dementias; many suspect pesticide exposure.
  • Broader worries about cumulative effects of many “safe at low dose” chemicals, contaminated groundwater, PFAS pesticides, and the difficulty of avoiding exposures as a consumer.

Carrier Landing in Top Gun for the NES

Nostalgia, Difficulty, and “Trauma”

  • Many recall the carrier landing as brutally hard or “next to impossible” as kids, often never seeing past the first level or even wasting rentals entirely on failed landings.
  • Others insist it was manageable or even easy once you learned the trick: know the target numbers and/or avoid touching the throttle too much.
  • The game is frequently grouped with other notoriously punishing 8/16-bit moments (TMNT dam, Battletoads speeder bikes, Decathlon, etc.), evoking strong nostalgia and frustration.

Carrier Landing Logic and Game Design

  • Commenters appreciate the article’s reverse engineering of the simple landing “skill check” and even rewrite it in Python, noting a small bug in one such translation.
  • There’s debate over whether the landing truly “failed” the mission: the article says you always get “Mission Accomplished,” but several people remember losing a life and potentially hitting game over; the exact behavior across versions is unclear.
  • The sequence is cited as a classic “you didn’t read the manual” meme: with the manual’s numbers, it’s straightforward; without, it feels random and unfair.
  • Some argue that needing a manual is bad design; others counter that in the 8‑bit era manuals were expected, often essential, and considered part of the game.
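The "skill check" the article reverse-engineers can be sketched as a handful of window comparisons. This is a hypothetical reconstruction, not the game's actual logic: the target windows, parameter names, and units below are all illustrative (the real numbers came from the manual).

```python
def landing_check(speed, altitude, alignment):
    """Pass only if every approach parameter sits inside its target window.
    All windows here are made-up stand-ins for the manual's numbers."""
    windows = {
        "speed": (250, 300),      # hypothetical HUD speed window
        "altitude": (200, 400),   # hypothetical altitude window
        "alignment": (-5, 5),     # hypothetical lateral offset from centerline
    }
    readings = {"speed": speed, "altitude": altitude, "alignment": alignment}
    return all(lo <= readings[k] <= hi for k, (lo, hi) in windows.items())

print(landing_check(275, 300, 0))    # True: inside every window
print(landing_check(275, 300, 12))   # False: drifted off the centerline
```

Without the manual, a player is effectively guessing three numbers inside invisible windows, which is why the sequence feels random and unfair.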

Semi-Realistic Physics and Technique

  • Multiple comments stress that the game models basic flight behavior: pitch and throttle interact, speed and altitude feed back into each other, and you can get into underpowered/low‑speed situations.
  • Players mention real-world landing heuristics (“throttle for altitude, pitch for speed”) and note that misunderstanding this contributes to the difficulty.

Mid-Air Refueling and Other Systems

  • Several say the inflight refueling segment was even harder than carrier landings; missing it typically meant you’d continue briefly, then crash from fuel starvation.
  • People reminisce about the refueling music and regional/version differences in soundtrack usage.

Wider Retro Context and Culture

  • Comparisons are made to other flight and space sims, vector-era aesthetics, and console generation leaps (NES→SNES, early 3D, etc.).
  • Manual culture, renting without manuals, hint hotlines, VHS guides, and anti-piracy text references all come up as defining features of that era.
  • A side thread notes the blog’s near-hidden nature (no index, no RSS) and corporate filters blocking the URL due to “gun” in the path.

It seems that OpenAI is scraping [certificate transparency] logs

OpenAI bot behavior and identification

  • Commenters verify that the IP in the blog post is inside OpenAI’s published searchbot IP range and that the User-Agent is consistent with their declared crawler.
  • Some note the UA string is messy/malformed but still clearly self-identifying; others consider blocking malformed UAs entirely.
  • Header spoofing is mentioned as common among scrapers, but in this case the IP check confirms it really is OpenAI.
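The verification commenters describe boils down to one check: the User-Agent header is trivially spoofable, but the source IP can be tested against the operator's published CIDR ranges. A minimal sketch with the standard `ipaddress` module, using an illustrative range (not OpenAI's actual published block):

```python
import ipaddress

# Illustrative published range for a declared crawler; a real check would
# load the operator's current list (OpenAI publishes theirs for its bots).
published_ranges = [ipaddress.ip_network("203.0.113.0/24")]

def is_declared_crawler(ip):
    """True if the request IP falls inside any published crawler range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in published_ranges)

print(is_declared_crawler("203.0.113.42"))  # True: inside the published range
print(is_declared_crawler("198.51.100.7"))  # False: UA claim is unverified
```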

Certificate Transparency (CT) logs as a public feed

  • Multiple people stress that CT logs are explicitly designed as public, third‑party–consumable data (“transparency” is the point).
  • Many systems already monitor CT: search engines, security firms, archives, bots, “script kiddies,” etc. For some, this makes the story unremarkable.
  • One view: this is equivalent to using a phone book; anyone can read it and act on it.

Use cases: scrapers, security, and discovery

  • CT logs provide an almost real-time feed of new hostnames, useful for:
    • Discovering new websites to crawl/index.
    • Detecting rogue certificates issued for your domains.
    • Security scanning (e.g., finding fresh WordPress installs).
  • Some see OpenAI’s use as standard practice: if your job is to crawl the web, CT is a natural starting point.
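Turning CT data into a hostname feed is mostly deduplication. crt.sh, for example, can return entries as JSON, where each record's `name_value` field holds newline-separated names; a crawler only has to normalize and dedupe them. A sketch against a sample payload (the field name matches crt.sh's JSON output; the records are fabricated):

```python
def extract_hostnames(records):
    """Collect unique, lowercased hostnames from CT-log-style records."""
    seen = set()
    for rec in records:
        for name in rec["name_value"].splitlines():
            seen.add(name.strip().lower())
    return sorted(seen)

sample = [
    {"name_value": "blog.example.com\nexample.com"},
    {"name_value": "Example.com"},  # case-folds to the same host
]
print(extract_hostnames(sample))  # ['blog.example.com', 'example.com']
```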

Privacy, surprise, and mitigation

  • Several commenters admit they hadn’t realized that issuing a public TLS cert effectively announces a hostname to the entire world.
  • Concern: sites not linked anywhere but using public certs are still “found” immediately via CT.
  • Suggested mitigations:
    • Use wildcard certs (so subdomains aren’t individually exposed in logs), ideally terminated at a shared load balancer.
    • Use private CAs for internal/non-public services.
  • Tradeoffs are noted: wildcard certs increase blast radius if compromised.

Scraping ethics and “stolen” content

  • One side argues that any publicly served content is, by design, available to be read by anyone, including AI and search companies; calling it “stolen” is inaccurate.
  • Others worry about CT being used to shortcut organic discovery and accelerate scraping of brand‑new, possibly unready sites.
  • Some report OpenAI appears to respect robots.txt and published IP/UA conventions, unlike many other scrapers.

Tools, infrastructure, and experimentation

  • crt.sh and merklemap are discussed as CT search tools; merklemap’s scaling and ZeroFS backend come up briefly.
  • Ideas mentioned: honeypot domains discovered only via CT to study bot behavior; feeds that normalize or deduplicate CT data (e.g., names-only APIs).

I'm Kenyan. I don't write like ChatGPT, ChatGPT writes like me

Accusations of “AI Writing” and the Curse of Being Polished

  • Many describe being accused of using ChatGPT simply for writing clearly, formally, or at length—especially students, non‑native speakers, support staff, and professionals used to structured prose.
  • Readers increasingly treat typos, grammatical quirks, and informal tone as proof of “realness”; polished language triggers suspicion. Some now deliberately insert mistakes or flatten their style.
  • Commenters argue it’s rude and intellectually lazy to dismiss a message by yelling “AI” instead of engaging with its content.

Kenyan / Colonial English and LLM Training

  • Several Kenyans say their schooling explicitly rewarded “big” vocabulary, proverbs, metaphors, and rigid essay structures, descended from British “Queen’s English” norms.
  • That style functioned as a class and “civilisation” signal, not just exam technique.
  • People note the irony that Kenyan (and other African) workers helped train OpenAI systems, and now Kenyans are penalized for sounding like the models they helped refine.
  • Others push back that modern LLM voice is closer to US LinkedIn / content‑mill English than to classic colonial or academic prose.

What ChatGPT Actually Sounds Like

  • Described patterns:
    • Overly “punched‑up” paragraphs, constant mini‑mic‑drops, clickbaity subheads.
    • Verbose, hyperbolic formulations (“not just X, but…”), corporate/marketing vibe, and “word salad” that uses many words to say little.
    • Technically decent grammar and rhythm, but often empty of real insight.
  • Some see this as identical to business‑school and big‑tech review writing; others insist truly good prose (including the article) feels more grounded, purposeful, and information‑dense.

The Em Dash, Heuristics, and AI Detectors

  • The em dash has become a meme “tell” for AI, even though:
    • Many humans used it heavily long before LLMs.
    • OSes often auto‑convert “--” into an em dash.
    • Style guides prescribe different spacing around dashes.
  • Several argue single features (dashes, connectors like “furthermore”) are weak signals; more reliable cues are overall rhythm, fluff, and vacuousness.
  • AI detectors frequently misclassify human text (including this essay), and people uncritically asking one chatbot to judge another’s output are widely ridiculed.

Cultural and Educational Fallout

  • AI‑generated “slop” raises the cost of reading: everyone now runs personal, often faulty, heuristics just to decide what’s worth attention.
  • Artists, writers, and even YouTubers report similar suspicions about AI voices or visuals.
  • Some embrace LLMs as tools to mass‑produce required bland prose (academic papers, corporate comms), arguing English was already “slop” in those domains.
  • Others worry about a “post‑truth” environment where genuine evidence and authentic voices are easily dismissed as synthetic.

Avoid UUID Version 4 Primary Keys in Postgres

Scope and database specifics

  • Most arguments are explicitly about Postgres with B-tree indexes and single-node OLTP workloads.
  • Several commenters stress this is not universal: distributed databases (Spanner, Cockroach, Dynamo-like systems) often prefer randomized keys to avoid hot shards.

Performance, indexes, and fragmentation

  • Core concern: UUIDv4’s randomness destroys locality in B-tree indexes.
    • Inserts land all over the index, causing frequent page splits, higher write amplification, WAL bloat, and very large, cache-unfriendly indexes.
    • This can force indexes out of RAM and lead to more disk I/O and sequential scans.
  • Sequential or mostly-monotonic keys (bigint sequences, Snowflake-style, UUIDv7/ULID) keep recent rows clustered, improving insert cost and range scans.
  • Some report real wins migrating large UUIDv4 PK tables to bigint; others running 60M+ to billions of UUIDv4 rows say it’s a non-issue relative to other bottlenecks.
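The locality argument can be demonstrated without a database. The sketch below uses the position of each insert in a sorted list as a crude proxy for where it would land in a B-tree: time-ordered keys always append at the right edge (dense pages, hot cache), while random UUIDv4 keys scatter across the whole index (page splits everywhere). The ordered keys are a stand-in for UUIDv7/ULID, not a real implementation.

```python
import bisect
import uuid

def append_fraction(keys):
    """Fraction of inserts that land at the end of the sorted sequence —
    a rough proxy for B-tree locality (end-appends avoid page splits)."""
    sorted_keys, appends = [], 0
    for k in keys:
        pos = bisect.bisect(sorted_keys, k)
        if pos == len(sorted_keys):
            appends += 1
        sorted_keys.insert(pos, k)
    return appends / len(keys)

random_keys = [uuid.uuid4().bytes for _ in range(2000)]
# Crude UUIDv7/ULID stand-in: monotonic 48-bit prefix + random tail.
ordered_keys = [i.to_bytes(6, "big") + uuid.uuid4().bytes[:10] for i in range(2000)]

print(append_fraction(random_keys))   # near 0: inserts scatter across the index
print(append_fraction(ordered_keys))  # 1.0: every insert appends at the edge
```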

UUIDv4 vs UUIDv7 and other schemes

  • Many agree: if you need UUIDs in Postgres, v7 (or ULID/KSUID) is better than v4 because of temporal ordering.
  • Counterpoint: UUIDv7 embeds a timestamp, which can leak creation time and enable timing or statistical inferences; some prefer v4 for privacy.
  • Alternatives mentioned:
    • Bigint sequences as default PKs, sometimes with a single global sequence.
    • Snowflake/sonyflake IDs, Firebase-style push IDs.
    • ULID / CUID2 / custom time+random hybrids.
    • Composite keys like (parent_id, local_int) for locality.
  • Some think the article’s integer “obfuscation” (simple XOR) is weak; recommend proper ciphers or format-preserving encryption if you go that route.
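The timestamp-leak counterpoint follows directly from UUIDv7's layout: per RFC 9562, the first 48 bits are a Unix-epoch millisecond timestamp, so anyone holding the ID can read the creation time back out. A minimal sketch (not a full spec-compliant implementation):

```python
import datetime
import os
import time

def uuid7_bytes(ms=None):
    """Minimal UUIDv7 sketch: 48-bit Unix-ms timestamp, version/variant
    bits, random filler (per the RFC 9562 layout)."""
    ms = int(time.time() * 1000) if ms is None else ms
    b = bytearray(ms.to_bytes(6, "big") + os.urandom(10))
    b[6] = (b[6] & 0x0F) | 0x70  # version 7
    b[8] = (b[8] & 0x3F) | 0x80  # RFC variant
    return bytes(b)

def uuid7_timestamp(b):
    """The privacy concern in one line: creation time is readable by anyone."""
    ms = int.from_bytes(b[:6], "big")
    return datetime.datetime.fromtimestamp(ms / 1000, datetime.timezone.utc)

uid = uuid7_bytes(1_700_000_000_000)
print(uuid7_timestamp(uid))  # 2023-11-14 ... UTC: the embedded creation time
```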

Security, privacy, and enumeration

  • Sequential ints leak counts and relative age; can reveal business volume, enable IDOR-style enumeration, and support “German tank problem” estimates.
  • UUIDs (or obfuscated IDs) mitigate this, but:
    • RFC warns not all UUIDs are security capabilities; debate over whether well-generated v4s are nonetheless “unguessable enough” for capability URLs.
    • UUIDv7/ULID timestamp bits can leak user or activity timing, admin status, early adopter status, etc., in some domains (voting, sensitive accounts, business metrics).
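The "German tank problem" estimate mentioned above is concrete: given a sample of sequential IDs, the minimum-variance unbiased estimator for the highest ID issued is N̂ = m + m/k − 1, where m is the largest ID observed and k the sample size.

```python
def german_tank_estimate(observed_ids):
    """Estimate the size of a sequential ID space from a random sample:
    N_hat = m + m/k - 1 (the classic MVUE from the German tank problem)."""
    k = len(observed_ids)
    m = max(observed_ids)
    return m + m / k - 1

# Five order IDs seen in public URLs suggest roughly 71 orders exist.
print(german_tank_estimate([14, 19, 33, 47, 60]))  # 71.0
```

This is exactly the business-volume leak that random or obfuscated IDs are meant to prevent.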

Public vs internal IDs

  • Common compromise: bigint PK for internal relations + separate UUID or hashed “public_id” in APIs/URLs.
  • This retains performance for joins while avoiding predictable external IDs, at the cost of another index and more complexity.
  • Others argue PKs should simply never be trusted for authz; “unguessable IDs” are defense-in-depth, not a primary security mechanism.
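One way to get an unpredictable public ID without storing a second column is a keyed, reversible permutation of the bigint PK, in the spirit of the "proper cipher" suggestion above. The sketch below is a 4-round Feistel network over 64 bits using stdlib HMAC as the round function; it is illustrative only, and a vetted format-preserving scheme (e.g. FF1) would be the production choice.

```python
import hashlib
import hmac

def _round(i, half, key):
    """Keyed 32-bit round function (HMAC-SHA256 truncated to 4 bytes)."""
    d = hmac.new(key, f"{i}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(d[:4], "big")

def encrypt_id(n, key):
    """Map an internal 64-bit bigint PK to an opaque public ID (reversible)."""
    left, right = n >> 32, n & 0xFFFFFFFF
    for i in range(4):
        left, right = right, left ^ _round(i, right, key)
    return (left << 32) | right

def decrypt_id(n, key):
    """Invert encrypt_id: run the rounds in reverse."""
    left, right = n >> 32, n & 0xFFFFFFFF
    for i in reversed(range(4)):
        left, right = right ^ _round(i, left, key), left
    return (left << 32) | right

key = b"server-side-secret"
public = encrypt_id(12345, key)           # opaque, non-sequential value
assert decrypt_id(public, key) == 12345   # internal joins still use the bigint
```

Consecutive internal IDs map to wildly different public values, so sequences and counts stay hidden while the database keeps its compact integer keys.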

Distributed systems and sharding

  • In sharded/distributed DBs, monotonically increasing global keys can create hot partitions; randomized keys (v4, reversed ints, hashed sequences) distribute load better.
  • Commenters note you can also encode shard IDs into keys or shard by other attributes, but that adds design complexity. Preemptively using UUIDs is seen by some as a “get out of jail free” for future sharding.

Premature optimization and trade-offs

  • One camp: PK choice is foundational and hard to change; start with integers (or UUIDv7) in Postgres to avoid predictable performance problems.
  • Other camp: for many apps, UUIDv4 performance cost is negligible; data volumes rarely reach problematic scale, and simplicity/operational benefits (client-generated IDs, easy merging, idempotency) outweigh the overhead.
  • Overall sentiment: “avoid blanket rules”; understand workload (write-heavy vs read-heavy, range scans vs point lookups, single-node vs distributed) and privacy requirements before standardizing on UUIDv4 PKs in Postgres.

Rob Reiner has died

Legacy and Emotional Impact

  • Many commenters describe deep shock and sadness, emphasizing not just his death but the horrific manner of it.
  • His body of work is repeatedly called out as unusually strong and culturally formative: “This Is Spinal Tap,” “The Princess Bride,” “When Harry Met Sally,” “Stand By Me,” “Misery,” “A Few Good Men,” “Sleepless in Seattle,” and “All in the Family” are cited over and over.
  • Several note how often they’ve revisited “The Princess Bride” with family, how quotable it is, and how his films shaped their sense of humor and taste.
  • Some recall first seeing him as “Meathead” and later being surprised at the scope of his directing career.

Circumstances of Death and Family Tragedy

  • Commenters discuss reports that he and his wife were killed, apparently stabbed, with early stories suggesting no sign of forced entry.
  • A major point of discussion is reporting that their son, who had spoken publicly about past drug addiction and homelessness, was involved or suspected; some initially question the sourcing, others point to additional outlets confirming it, and later posts mention his booking on suspicion of murder.
  • Several people reflect on how unimaginably cruel it is to survive a child’s addiction crisis, reconcile, and then face this outcome. One commenter connects it to a similar murder in their own life and the long-lasting trauma for survivors.

Addiction, Homelessness, and Mental Health

  • One commenter uses the case to argue that simply giving housing or money won’t solve homelessness when addiction and severe mental illness are involved.
  • Others push back, saying individual anecdotes are not representative and citing survey data that many homeless people primarily face economic barriers; debates follow on housing costs vs. employment as root causes.
  • Some note that, in earlier eras, someone like the son might have been institutionalized, for better or worse.

Media Coverage, Sourcing, and Anonymity

  • There is extended discussion about the reliability of outlets (People, Rolling Stone vs. wire services) and how quickly to trust anonymously sourced crime reporting.
  • Commenters criticize police and media for effectively identifying the victims via age and residence before official confirmation, seeing it as a technical workaround of notification rules.
  • Broader arguments emerge about anonymous sources, past high-profile reporting failures, and the tension between speed and accuracy.

Journalism, Economics, and Public Expectations

  • A long tangent explores how audiences demand fast, perfectly accurate, neutral, and free news, while distrusting anonymous sources and retractable errors.
  • Some argue journalism is held to impossible standards; others counter that declining trust stems from real failures, corporate ownership, and click-driven incentives.
  • Comparisons are made to other professions (teachers, doctors, referees) that face similar unrealistic public expectations.
  • There is debate over whether earlier eras of journalism were better, or just had different economics (local monopolies, strong print revenue) that insulated newsrooms.

Political Reaction

  • Several posts condemn a social-media statement from the former president blaming the killing on the director’s opposition to him, calling it sociopathic or deranged.
  • A few note that even some politically opposed communities reacted negatively to that statement, seeing it as beyond normal “politicizing a tragedy.”

SoundCloud has banned VPN access

User impact and reactions

  • Long‑time paying users report suddenly getting 403s and say they’ll cancel if it continues.
  • Some explicitly say this pushes them back to piracy or local downloads, arguing streaming had nearly killed piracy until platforms became more hostile.
  • A few describe SoundCloud more broadly as degraded (spam, bots, poor support, shadowbans) and see this as the last straw.

How broad is the block?

  • Multiple commenters note that some VPN endpoints still work; changing locations/providers can restore access.
  • Others see blocks mainly on data‑center/VPS IPs (Linode, EC2, etc.) and suspect SoundCloud is using AWS CloudFront/WAF or commercial VPN/proxy lists.
  • Tailscale‑style “VPN to your own home” and other residential exits generally aren’t affected.

Technical approaches to VPN blocking

  • Discussion of GEOIP/VPN databases, ASN and hosting‑range blocking, and “shoot first” practices that catch many legitimate IPs.
  • Comments describe enumerating commercial VPN exit nodes by mass‑subscribing to VPNs and scanning for VPN handshakes; IPv6 is seen as manageable by blocking larger prefixes.
  • Some mention MTU‑based detection, residential vs hosting IP heuristics, and blocking entire hosting providers via their published IP ranges.

Motivations speculated

  • Country‑level licensing and geoblocking for music are seen as a likely driver.
  • Others point to legislation (age/identity verification, local content rules) and abuse prevention (spam, credential stuffing, hostile scraping).
  • AI dataset protection is raised: blocking non‑residential IPs to make scraping harder and preserve the value of their catalog.

Arms race and collateral damage

  • Several note that even governments struggle to fully block VPNs; SoundCloud will hurt real users more than determined bots or scrapers.
  • Residential proxy networks and “free VPN” / mobile SDK botnets mean ordinary users can be blocked without realizing they were part of a proxy network.
  • Some argue broad IP/ASN blocking is now the only practical way to cut abuse, even though it harms privacy‑minded users.

Broader web and privacy themes

  • Many see SoundCloud as part of a wider trend: Reddit, YouTube, Patreon, some news and streaming sites also blocking VPNs or forcing logins.
  • There’s debate over whether this is “active hostility” or just amoral optimization around tracking, ads, and licenses.
  • Philosophical split: some say pervasive tracking is inevitable and not worth worrying about; others argue normalization of surveillance has serious long‑term risks.

Alternative tools and responses

  • Users discuss routing through friends/home via Tailscale or similar, Apple’s Private Relay (limited to Safari), Cloudflare Warp, and self‑hosted tunnels.
  • Others suggest just leaving SoundCloud, downloading content (e.g., via scdl‑like tools), or moving to piracy and local libraries.

Reported security incident

  • Late in the thread someone links a report that SoundCloud recently suffered a breach and, in response, applied configuration changes that disrupted VPN access.
  • According to that report, SoundCloud has not yet given a timeline for restoring full VPN compatibility; whether the current blocking is temporary or a permanent policy remains unclear.

Roomba maker goes bankrupt, Chinese owner emerges

Perceived Causes of iRobot’s Decline

  • Many see iRobot as having coasted on the Roomba brand, outsourcing manufacturing to China while cutting real innovation and adding artificial feature segmentation (pay more so it “doesn’t run into things,” etc.).
  • Technically, commenters blame a long bet on camera‑based vision (VSLAM) instead of cheap 2D lidar. Their camera robots were pricier, worse at navigation, and needed lights on; cheaper Chinese lidar models quickly outclassed them.
  • Others argue the whole robovac space became a commodity: once “good enough” was reached, low‑cost Chinese makers undercut on price, much like GoPro’s story.
  • US tariffs and supply‑chain issues were mentioned as additional headwinds; some say Roomba never adapted its manufacturing strategy.

Competition and Chinese Innovation

  • Roborock, Dreame, Eufy and others are repeatedly cited as dramatically better: quieter, more capable mapping, easy zone cleaning, mop+vacuum combos, self‑emptying docks, furniture‑integrated bases.
  • Debate runs over whether Chinese firms merely “replicate and polish” Western ideas or now lead genuine innovation. Several argue the real advantage is execution speed, dense supply chains, and a culture of constant iteration.
  • Broader discussion compares this to Japanese cars in the 1980s and Bambu vs. Western 3D printers: Western companies prove concepts, then Chinese firms industrialize and out‑iterate.

Product Experience and Limitations

  • Many found older Roombas high‑maintenance: constant babysitting for cords, toys, thresholds, and notorious “poopocalypse” incidents.
  • Fans counter that, in the right layout, daily autonomous vacuuming is a huge quality‑of‑life gain, especially with pets; others say a cordless stick vac plus occasional housecleaner is simpler and more effective.
  • Some feel real value would be robots that tidy, handle laundry, or cook, not just vacuum.

Cloud Dependence, Privacy, and Chinese Ownership

  • Strong anxiety about internet‑dependent vacuums: several report Roombas becoming unusable when cloud services or apps changed.
  • Widespread concern that maps, images, and telemetry may now end up under Chinese corporate or state control, though others note US tech firms already run vast surveillance and are tightly linked to US agencies.
  • Projects like Valetudo and dorita980 are praised for “liberating” vacuums to operate fully locally, though flashing them can be difficult.

Amazon Merger, Antitrust, and Policy

  • Many criticize US/EU regulators for blocking Amazon’s acquisition, arguing it hastened bankruptcy and made a Chinese takeover inevitable.
  • Others defend the block on big‑tech consolidation grounds and even prefer Chinese ownership to further Amazon data integration.
  • The thread broadens into industrial policy: outsourcing manufacturing to China is seen as a strategic mistake that hollowed out Western hardware capability; some call for serious reshoring, others doubt it’s still feasible.

Repairability and Long-Term Support

  • iRobot earns praise for modular, easily replaceable parts and long parts availability; some users keep decade‑old units running with cheap third‑party spares.
  • Competing Chinese models are said to be similarly or even more repairable thanks to a huge gray‑market parts ecosystem—but with more uncertainty about long‑term software support and cloud dependence.

Microsoft Copilot AI Comes to LG TVs, and Can't Be Deleted

Reaction to Copilot on LG TVs

  • Many see bundling undeletable Copilot as strongly anti-consumer and brand‑damaging for both Microsoft and LG.
  • Commenters expect this mainly exists to pad “AI adoption” metrics for investors, not to help users.
  • Some argue large companies won’t feel much brand damage and can offset it with marketing; others think reputational harm will accumulate over time.

Smart TVs, spying, and ads

  • Widespread frustration that TVs have become “spy TVs”: tracking, upsells, nag screens, unremovable apps, and worsening performance after updates.
  • LG’s “Live Plus” is highlighted as a long‑standing feature that analyzes on‑screen content for recommendations and ads; several advise turning it off and note it can re‑enable after updates.
  • People worry about a progression: optional features → degraded experience if disabled → full lock‑in requiring network accounts and always‑on connectivity.

Workarounds and alternatives

  • Common strategy: never connect the TV to the internet; use it purely as a display with Apple TV, HTPC (Linux/Jellyfin/Kodi), Chromecast, Nvidia Shield, or similar.
  • Apple TV is repeatedly praised for relatively ad‑free, polished UX, though there’s debate about Apple’s data practices and lock‑in.
  • Others propose using projectors, computer monitors, or commercial signage displays to avoid consumer “smart” stacks, despite trade‑offs (price, HDR, brightness, inputs).
  • Some say the only long‑term answer may be not owning a TV at all.

Updates, control, and rooting

  • Many view firmware updates as a vector for “enshittification”: slower UIs, more ads, lost features, and now Copilot.
  • A minority notes that some updates genuinely improve picture quality, compatibility, or panel longevity; they temporarily connect for specific updates then re‑isolate.
  • Jailbreaking/rooting WebOS to install alternative software is discussed, but it’s a cat‑and‑mouse game that can be blocked by updates.

Corporate incentives and regulation

  • Several blame misaligned metrics and “data‑driven” management: employees are rewarded for increasing AI/engagement numbers regardless of user harm.
  • There are calls for regulation (often looking to the EU) to: require explicit, granular consent for feature updates, separate security fixes, guarantee OS replaceability, and potentially ban mandatory connectivity or embedded cellular modems.

Views on AI’s value on TVs

  • Most see TV‑integrated AI as primarily a surveillance and ad‑targeting tool plus “slop generator,” not a user benefit.
  • A minority is optimistic: AI could improve content discovery and answer questions about what’s on screen—if it weren’t tied to advertising priorities.

If AI replaces workers, should it also pay taxes?

AI as Worker vs. Tool

  • Many commenters reject the premise that “AI should pay taxes,” calling it anthropomorphizing.
  • AI is likened to tractors, wheelbarrows, dishwashers, Photoshop, or automated looms: tools that boost productivity, not independent tax subjects.
  • The more coherent version of the idea: don’t tax “AI itself,” tax the owners and profits from AI more effectively.

Jobs, Automation, and What’s Different This Time

  • One camp argues automation has always “replaced jobs” (agriculture, manufacturing, retail) without long‑term mass unemployment; new sectors and roles emerged, living standards rose.
  • Others say AI is qualitatively different: it targets cognitive/knowledge work, could move faster than past transitions, and may not leave enough “good jobs” behind.
  • Some report already seeing white‑collar displacement (SaaS sales, designers, junior developers), while skeptics note current layoffs also track macro factors (end of cheap money, tax changes).
  • A recurring worry: if AI eats both old jobs and the new high‑skill ones, average people lose almost all bargaining power.

Inequality, Capital, and “Who Owns the Machines”

  • Large parts of the thread shift from AI to inequality: extreme wealth concentration, corporate and billionaire tax avoidance, and a tax base overly dependent on labor income.
  • Core claim: the real problem isn’t that machines aren’t taxed; it’s that capital owners avoid paying for the states and social systems they rely on.
  • Some propose wealth taxes, land taxes, higher or more effectively enforced corporate and capital‑gains taxes, or even hard caps/confiscation on extreme fortunes. Others stress practical difficulties, capital flight, and complexity of valuing assets.

Concrete Tax Ideas in an Automated Economy

  • Suggestions include:
    • Higher corporate tax on profits boosted by automation; or formulas that increase tax when profit‑per‑employee or revenue‑per‑employee gets too high.
    • Disallowing or penalizing tax deductions for robots/AI that replace labor; adding heavy VAT or registration fees on commercial robots.
    • Taxing AI indirectly via energy, water, or compute (kWh, tokens processed), possibly with allowances for individuals and exceptions for favored uses.
    • Shifting overall burden from labor income to consumption, capital, land, and resource usage.
  • Critics warn this “tax the tool” approach is arbitrary, hard to define (what counts as AI?), easy to game, and likely to push activity to low‑tax jurisdictions.
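The profit-per-employee idea can be made concrete with a toy formula. Everything here is hypothetical (threshold, rate, and the attribution of "excess" profit to automation are exactly the arbitrary choices the critics object to):

```python
def automation_surtax(profit, employees, threshold=500_000, rate=0.10):
    """Surtax on profit attributable to per-employee productivity above a
    threshold — a crude proxy for automation-driven margins. Illustrative only."""
    per_head = profit / max(employees, 1)
    excess_profit = max(per_head - threshold, 0) * employees
    return excess_profit * rate

# A firm earning $10M with 10 staff: $1M/head, $500k/head over threshold,
# so $5M of "excess" profit is surtaxed at 10%.
print(automation_surtax(10_000_000, 10))  # 500000.0
```

The worked example also shows the gaming problem: splitting the same firm into contractors or shell entities changes the head count and erases the surtax.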

UBI, Social Safety Nets, and Meaning of Work

  • Many see some form of UBI or guaranteed income as the logical response if AI really wipes out large swathes of employment; AI profits or broader capital taxes would fund it.
  • Others doubt UBI’s effectiveness, affordability, or political viability, and note existing welfare systems already struggle.
  • Several comments emphasize non‑economic dimensions: work structures society and identity; replacing it without offering meaningful alternatives risks psychological and social collapse.

Timing, Politics, and the “End Game”

  • One side says debating AI‑specific taxes now is a distraction from present crises (housing, healthcare, existing inequality).
  • Another insists early debate is essential to avoid an AI‑enabled plutocracy and to set expectations that gains from automation must be broadly shared.
  • Underneath the tax talk is a deeper question: in a world where machines can meet most material needs with little human labor, do we redesign distribution—or let a tiny class that owns the machines effectively own everyone else?

AI agents are starting to eat SaaS

Role of AI agents vs SaaS: build vs buy

  • Many argue writing code is now “easy” with agents, but the software lifecycle is hard: upgrades, bugs, onboarding, security, and changing requirements still dominate cost.
  • Several commenters stress corporations buy SaaS primarily to mitigate risk and get accountability, SLAs, compliance, support, and a legal entity to sue—none of which agents provide.
  • Internal “vibe-coded” tools are compared to spreadsheets: fast and personal, but fragile, undocumented, and hated by everyone except their author.

Concrete uses of AI to replace / extend tools

  • One detailed story: using an AI assistant to discover a diff algorithm, wire up an open-source library, and build a custom HTML diff viewer with watch mode in an evening; contrasted with failing to get existing diff tools to behave as desired.
  • Some report canceling small, narrow SaaS (e.g. retrospectives, internal dashboards, Retool-like tools) after quickly rebuilding minimal equivalents with LLM help.
  • Others use AI to extend or customize open-source SaaS-alikes rather than adopt new commercial products.

Skepticism: economics and scale

  • Recurrent theme: AI-generated code still needs engineering, ops, security review, monitoring, and on-call. For most orgs, it remains cheaper to pay per-seat SaaS than to own 100% of maintenance.
  • Economies of scale: with SaaS, maintenance cost is shared across N customers so each pays roughly 1/N of it; with in-house software you bear the full cost yourself.
  • Several anecdotes of companies abandoning in-house systems for Jira/SaaS even when the internal code was “free,” because maintenance and feature demands overwhelmed small teams.

Where SaaS is likely resilient

  • Systems of record, high-uptime / high-volume systems, products with strong network effects, and offerings based on proprietary datasets or heavy regulation are widely seen as safe for now.
  • Vertical/“boutique” SaaS built on deep domain expertise and tight customer feedback is seen as hard to replicate by an internal dev + agent in a weekend.
  • Some expect AI to increase demand for SaaS-like integration, middleware, and niche vertical tools, not reduce it.

Data usage and trust in AI providers

  • Long sub-thread debates whether Copilot/Gemini/Claude train on enterprise or consumer data; some cite ToS and enterprise contracts as safeguards, others cite lawsuits, opt‑out policies, and “paraphrased data” as loopholes.
  • Consensus: enterprises must carefully read contracts and assume vendors will follow the letter, not the spirit, of data promises.

Long-term outlook

  • Optimists predict agents will eventually clone most software cheaply, commoditizing many generic SaaS features.
  • Skeptics note current agents are brittle, can’t reliably handle complex infra or business logic, and are more like very good IDEs than autonomous systems.
  • Many expect a split: large orgs and non-technical industries will keep buying SaaS; technical teams and indie builders will increasingly assemble bespoke tools with agents, raising the bar for flimsy, single-feature SaaS.

Claude CLI deleted my home directory and wiped my Mac

Credibility of the “wiped Mac” incident

  • Several commenters doubt the story, noting limited evidence and the user’s apparent use of --dangerously-skip-permissions (“yolo mode”).
  • Others point out that similar incidents have been reported (including blog posts and prior HN threads), so even if this one were embellished, the failure mode is real.
  • Some observe confusion in the Reddit thread itself (e.g., people thinking the working directory or ~ behaves more safely than it does), which weakens some of the “user error is impossible” defenses.

Inherent risks of agentic AI on your machine

  • If an agent can run arbitrary shell commands with your user’s rights, it can wipe your disk or exfiltrate data; no CLI harness can fully guarantee safety.
  • Denylisting commands like rm is easily bypassed (shell scripts, Python os.unlink, mv tricks, dd, etc.).
  • Some report Claude Code escaping its nominal project directory (e.g., accessing ../../etc/passwd) or working around its own restrictions via scripts.
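The denylist-bypass point is easy to demonstrate: any language runtime the agent can invoke has its own deletion primitives, so filtering the literal `rm` command achieves nothing. A minimal sketch:

```python
import os

def delete_without_rm(path: str) -> None:
    """Remove a file using only the Python runtime -- the same effect as
    `rm path`, but invisible to any harness that merely denylists the rm
    command (shutil.rmtree, mv, and dd offer similar escape hatches)."""
    os.unlink(path)
```

This is why commenters argue that only OS-level isolation (containers, VMs, separate users), not command filtering, can actually bound the damage.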

Responsibility and blame

  • A strong faction says the disaster is entirely on the user: the flag is clearly labeled dangerous, overrides the built‑in “ask for approval” harness, and should never be used on a host with important data.
  • Others argue vendor UX/docs underplay how illusory “sandbox” guarantees are on a non‑sandboxed host, and that tools should make dangerous modes harder or contingent on a real sandbox.

Sandboxing and mitigation strategies

  • Widely recommended: always run agentic tools in Docker/containers, VMs, or at least as a separate non‑sudo user with carefully set permissions.
  • Some use devcontainers, Proxmox VMs, K8s-based dev environments, macOS sandbox-exec, firejail/bubblewrap, or custom wrappers like safeexec, sometimes with read‑only host mounts.
  • Additional patterns: allowlisting commands/tools, pre-tool hooks that block rm -rf or remap rm to a trash utility, blocking git push/push --force, or removing remotes.
  • Commenters note container setups and per-directory permissions are still inconvenient, especially on macOS.
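A pre-tool hook of the kind mentioned above can be sketched as a simple pattern check. This is a hypothetical wrapper, not any vendor’s actual hook API, and as the thread notes it is a speed bump rather than a sandbox:

```python
import re

# Hypothetical pre-tool hook: inspect a proposed shell command and veto
# obviously destructive patterns before the agent runs it.
DANGEROUS = [
    re.compile(r"\brm\s+(-[a-zA-Z]*[rf][a-zA-Z]*\s+)+"),  # rm -r / rm -rf
    re.compile(r"\bgit\s+push\s+.*--force\b"),
    re.compile(r"\bdd\s+.*of=/dev/"),
]

def allow_command(cmd: str) -> bool:
    """Return False if the command matches a known-destructive pattern."""
    return not any(p.search(cmd) for p in DANGEROUS)
```

Remapping `rm` to a trash utility, as some commenters do, is the complementary approach: instead of blocking, make the destructive default recoverable.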

Usability vs. safety

  • Some claim AI agents are “unusable” without yolo mode because manual approvals every few seconds destroy flow.
  • Others say reviewing each mutating command is still far faster than doing all the work yourself and is the only sane default.
  • Cleanup/deletion tasks and “reset/rebuild the repo” operations are repeatedly cited as the highest-risk use cases.

Broader implications

  • Concerns extend beyond personal machines to production systems and supply-chain/prompt-injection attacks.
  • Many expect the end state to resemble browsers: heavily sandboxed, constrained agents, possibly driving wider adoption of OS-level sandboxing (SELinux, desktop sandboxes, etc.).

Elevated errors across many models

Outage experience and impact

  • Some users saw elevated errors (e.g., repeated 529s) while others reported sessions still working, possibly via cached models or unaffected variants.
  • The outage manifested inside tools like Claude Code and IDEs, sometimes looking like normal timeouts or unrelated HTTP 5xx issues.
  • A few people hit what looked like quota messages right as the outage began, creating confusion over whether they’d actually exceeded limits.

Model choices and behavior

  • Discussion focused on Opus 4.5, Sonnet 4.x, and Haiku 4.5.
  • Haiku 4.5 is praised as fast, “small-ish,” and good for style-constrained text cleanup and simple tasks; several users decided to mix it in more after losing access to larger models.
  • Some noticed Opus giving unusually long, overstuffed responses shortly before the incident.

Pricing, quotas, and usage patterns

  • Strong enthusiasm for the value of higher-tier plans, but concern that per-token pricing can burn through hundreds of dollars very quickly.
  • Comparison of tiers framed as “pay-per-grain vs bag vs truckload of rice,” with warnings that casual per-token use can easily reach ~$1,000/month.
  • Some companies deliberately use API-only/per-token as a soft on-ramp before granting full seats.

Dependence on LLMs and “intelligence brownouts”

  • Several comments note feeling effectively blocked from coding or slowed by an order of magnitude when tools like Claude Code are unavailable—even from very experienced engineers.
  • People joke about “intelligence brownouts,” future dystopias where production halts when LLM hosting fails, and “vibe coders” being helpless without AI.
  • Others express concern about a generation that may lose basic problem-solving skills if everything routes through LLMs.

Local vs centralized AI and open models

  • Some argue that good models can already be run locally on high-end consumer hardware, and expect state-of-the-art to become much more efficient and self-hostable.
  • Others counter that frontier models keep leaping ahead; by the time you can run today’s best locally, centralized systems may be 10–100× better.
  • Debate over whether narrow, language-specific coding models are realistic; several claim most compute is in general reasoning and world knowledge, so domain-specific models wouldn’t be dramatically smaller.
  • Concern that big providers may eventually stop releasing strong open models, with hope pinned on at least one research group continuing to do so.

Incident response, root cause, and transparency

  • Users generally praise the status page being updated within minutes, seeing that as rare compared to many SaaS providers.
  • Engineers involved in the incident describe it as a network routing misconfiguration: an overlapping route advertisement blackholed traffic to some inference backends.
  • Detection took ~75 minutes; some mitigation paths didn’t work as expected. They removed the bad route and plan to improve synthetic monitoring and visibility into high-impact infra changes.
  • Multiple commenters encourage detailed public postmortems, citing Cloudflare-style write-ups as an industry gold standard and trust-builder.
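The routing failure described above can be illustrated with longest-prefix matching: an accidentally advertised more-specific route “wins” over the legitimate one, so traffic for its range disappears. The prefixes and labels below are made up for illustration; the real incident’s details are not in the thread.

```python
import ipaddress

routes = {
    ipaddress.ip_network("10.0.0.0/8"): "inference-backends",  # intended
    ipaddress.ip_network("10.1.0.0/16"): "blackhole",          # bad overlap
}

def next_hop(addr: str) -> str:
    """Pick the route by longest-prefix match, as routers do."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routes if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)  # most specific wins
    return routes[best]
```

Only addresses inside the overlapping /16 are affected, which matches the partial-outage symptoms users reported (some sessions fine, others erroring).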

Error handling, UX, and reliability

  • Misleading quota messages during an outage draw criticism; users argue that two years into the LLM boom, major providers still haven’t nailed robust, accurate error handling.
  • This is used as evidence against claims that these systems can replace large swaths of software engineering when their own basic reliability and observability are lacking.
  • Some compare Anthropic’s reliability unfavorably to other developer platforms, while others say timely communication meaningfully mitigates frustration.

Cultural and humorous reactions

  • Many lighthearted comments: “time to go outside,” “Claude being down is the new ‘compiling’,” and various “vibe coding” jokes.
  • People riff on steampunk/LLM dystopias, Congress managing BGP via AI, and SREs “turning it off and on again three times.”
  • Several note they “got lucky” and were in cooldown/timeout windows or working in Figma when the outage hit.

2002: Last.fm and Audioscrobbler Herald the Social Web

Long-term Scrobbling & Nostalgia

  • Many commenters are still actively scrobbling, some continuously since 2003–2008, sharing join dates and six‑figure play counts.
  • Last.fm is remembered as a first “real” social network for many, with strong emotional attachment and memories of dial‑up era syncing, Rockbox/iPod workflows, and custom profile pages.
  • Several people say their current taste was shaped by Last.fm’s compatibility scores and “similar artists” features.

Ecosystem, Tools & Open Alternatives

  • Users highlight ListenBrainz, libre.fm, Koito, and self-hosted multi-scrobblers to duplicate or decentralize their listening data.
  • Various client tools are discussed: Marvis, Neptunes, Finale, Pano Scrobbler, cmus/MPRIS scrobblers, Jellyfin/Plex plugins.
  • Discord bots that read Last.fm data are now a major social surface; Last.fm’s very stable API is seen as a key reason the ecosystem persists.

Streaming Integration & Platform Choices

  • Spotify is praised for “set and forget” native scrobbling across devices; others argue Tidal, Deezer, Qobuz, and Plex also integrate well, though sometimes less seamlessly.
  • Lack of good scrobbling support is a major reason some won’t switch from Spotify to Apple Music.
  • Some move off commercial streaming to Jellyfin/Plex plus self-hosted scrobblers.
  • Google Music’s shutdown is resented, especially by those who lost uploaded libraries or saw messy migrations to YouTube Music.

Music Discovery: Social, Human, and P2P

  • Many say the best discovery came from Last.fm’s old social features: browsing compatible profiles, forums, and user-made visualizations.
  • Private trackers (Oink, what.cd, successors) and Soulseek are fondly recalled as unparalleled for discovery and curation.
  • Human DJs, radio shows, venue lineups, Bandcamp, RateYourMusic, and newer social tools (e.g., volt.fm) are preferred by some over algorithmic feeds.
  • Pandora- and Spotify-style similarity-by-audio-feature recommendations are often described as bland or repetitive.

Data, Quantified Self & Critiques

  • Scrobbling is framed as part of the “quantified self”; some love long-term listening histories, others feel Spotify’s yearly Wrapped is enough.
  • There’s annoyance that platforms “withhold” rich data while hyping Wrapped, though others note Spotify’s full export feature.
  • Specific Last.fm issues include artist-name conflation, post-acquisition product changes (loss of built-in radio/player and customization), spammy or hateful user tags, and stalled API evolution.

The Problem of Teaching Physics in Latin America (1963)

Feynman’s diagnosis and its generalization

  • Commenters see Feynman’s Brazil experience as a special case of a universal issue: students learning to recite definitions and pass exams, not to understand or apply concepts.
  • The focus on credentials and “productive workers” is contrasted with genuine learning; credentials are seen as gatekeepers to jobs rather than markers of competence.

Rote learning, credentials, and assessment

  • Many recall exams that rewarded recall rather than reasoning, and only “learned physics” when building or breaking real things.
  • Others report the opposite: open‑book, problem‑solving exams where most students still failed, suggesting assessment design strongly shapes what students optimize for.
  • Goodhart’s law is invoked: once grades and diplomas become the target, systems optimize for test performance, not understanding.

AI/LLMs and the same old problem

  • Some argue LLMs worsen Feynman’s problem: teachers can auto‑generate content they don’t understand; students can auto‑generate homework, further divorcing credentials from knowledge.
  • Others say banning AI is unrealistic; better to treat outputs as hypotheses or drafts and design exams (oral, in‑person, problem‑solving) that require independent thinking.
  • There is disagreement on whether AI will “wreck” the current education system in a good or bad way.

Teaching for understanding

  • Suggested practices: non‑copyable exam questions, reduced curriculum breadth in favor of depth, frequent problem‑solving in class, and emphasizing intuition and geometric/conceptual models over symbol‑pushing.
  • Several educators stress that students must ultimately “do the snowboarding” themselves, but institutions can strongly incentivize understanding instead of memorization.

Mass education, inequality, and institutions

  • One line of argument: as education scales to the whole population, quality and teacher expertise inevitably drop; elite models don’t transfer directly to mass systems.
  • Class size, funding, corruption, and rigid bureaucracy (e.g., difficult course transfers) are cited as structural barriers.
  • Another thread asks how to sort students into appropriate levels and allow mobility as their performance or interests change.

Latin America, economics, and geopolitics

  • Some see low salaries and weak science institutions as simple consequences of poverty; others argue there is “enough money” but misallocation and corruption.
  • The “international division of labour” is blamed for trapping some countries in primary-goods extraction while manufacturing nations capture most of the gains and improve education.
  • A heated subthread debates whether US/Western intervention and coups are central to Latin America’s underdevelopment, versus internal responsibility and local governance.
  • There is also pushback that Feynman’s 1960s snapshot no longer fits all countries; examples are given of modern Latin American systems (e.g., Uruguay) with strong problem‑solving cultures and global‑level graduates.

Attitudes toward physics and career incentives

  • In some regions, physics is high‑prestige and chosen for love of the subject; in others it is what you study if you “couldn’t get” engineering in rank‑based systems.
  • Rank and prestige can push bright students away from their interests (e.g., physics) into more lucrative or status‑heavy fields, potentially harming both learning and long‑term fulfillment.

Everyday intuition and real‑world physics

  • Multiple anecdotes highlight the gap between knowing formulas and seeing mechanisms in daily life (e.g., hot water lag in pipes, component tolerances in circuits).
  • These are used to illustrate Feynman’s key point: real understanding is the ability to connect abstract knowledge to concrete phenomena, not just to recite laws.

JSDoc is TypeScript

What “JSDoc is TypeScript” Means

  • Pro side: In modern tooling, JSDoc comments are parsed by the TypeScript language service.
    • The same engine provides squiggles, IntelliSense, and can be run via tsc --checkJs.
    • Many TS features work in JSDoc: generics (@template), utility types (ReturnType, etc.), conditional/mapped/template literal types, @satisfies, @overload, intersections, @extends, etc.
    • You can often copy–paste TS type declarations into @typedef blocks.
  • Counter side: Conceptually JSDoc is “just comments”; using TS tools on it doesn’t make it the same language.
    • Analogy: running on Windows doesn’t mean you are “using C++”; JSDoc is a format, TS is a language and type system.
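A small example of the “pro” position: a plain .js file where all type information lives in comments, so it runs as-is in Node or the browser, yet `tsc --checkJs` (or the editor’s TS language service) type-checks it like TypeScript.

```javascript
/**
 * @template T
 * @param {T[]} items
 * @param {(item: T) => boolean} predicate
 * @returns {T | undefined}
 */
function findFirst(items, predicate) {
  for (const item of items) {
    if (predicate(item)) return item;
  }
  return undefined;
}

/** @typedef {{ id: number, name: string }} User */

/** @type {User[]} */
const users = [{ id: 1, name: "Ada" }, { id: 2, name: "Grace" }];

// TS infers findFirst<User> here; passing e.g. (u) => u.age would squiggle.
const grace = findFirst(users, (u) => u.id === 2);
console.log(grace?.name); // "Grace"
```

Stripping the comments leaves behavior unchanged, which is the counter side’s point too: the annotations are metadata consumed by a TypeScript toolchain, not part of the executing language.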

Limitations and Rough Edges of JSDoc

  • Some capabilities behave differently or are missing in practice:
    • @typedef types are always exported and can’t be scoped, which pollutes libraries’ public surface and IntelliSense.
    • Certain combinations of tags (@callback, generics, arrow functions) are fragile or require non‑obvious workarounds.
    • Some TS constructs (type‑only imports, satisfies, complex overload sets, some “extends” patterns, private/internal types) either don’t map cleanly or need .d.ts sidecars.
    • The official jsdoc documentation generator does not understand newer TS-style annotations that the TS server accepts.
    • Several people report that large JSDoc‑typed codebases expose edge cases and poorer DX vs equivalent TS.

Build Step vs “Just JavaScript”

  • JSDoc advocates:
    • No transpile step for browsers or Node; files run as-is.
    • Useful for small or “buildless” stacks (native HTML/CSS, web components, lit‑html, etc.).
    • Clearer separation of runtime behavior from static documentation/types.
  • TS advocates:
    • Once you already bundle/minify/HMR, a TS erase step is trivial cost.
    • TS syntax is less verbose and clearer for complex types; better tooling and documentation; easier to manage large projects.
    • Node now supports native TS type-stripping; for libraries you often need .d.ts anyway, which reintroduces a build step for JSDoc users.

Type Safety, Interop, and Philosophy

  • Consensus that static typing (via either JSDoc+TS or TS files) is valuable documentation and prevents many classes of bugs.
  • Multiple comments stress that types don’t replace runtime validation, especially with external input or JS→TS boundaries.
  • Some argue TS and JS feel like different languages in practice; others see JSDoc and TS annotations as two front ends to the same TypeScript type system, chosen based on project size and build‑pipeline tolerance.

Stop crawling my HTML – use the API

HTML as Canonical Interface

  • Several argue that HTML/CSS/JS is the true canonical form because it is what humans consume; if APIs drift or die, the site still “works” in HTML.
  • From a scraper’s perspective, HTML is universal: every site has it, whereas APIs are inconsistent, undiscoverable, or absent.
  • Some push the view that “HTML is the API” and that good semantic markup already serves both humans and machines.

APIs: Promise vs. Reality

  • Critics of “use my API” note APIs are often:
    • Rate-limited, paywalled, or require keys/KYC.
    • Missing key data that is visible in HTML.
    • Prone to rug-pulls, deprecations, and policy changes (e.g., social sites tightening API access).
  • Others counter that many sites (especially WordPress, plus RSS/Atom/JSON Feed, ActivityPub, oEmbed, sitemaps, GraphQL) already expose richer, cleaner machine endpoints and that big crawlers should exploit these, especially given WordPress’s huge share.
  • There’s disagreement over how common usable APIs/feeds really are.
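The “exploit the machine endpoints first” suggestion amounts to probing a handful of conventional URLs before falling back to HTML. A hypothetical helper (the paths are the common defaults; real sites may move or disable any of them):

```python
from urllib.parse import urljoin

# Conventional machine-readable endpoints a crawler could try first.
COMMON_ENDPOINTS = [
    "wp-json/wp/v2/posts",  # WordPress REST API
    "feed/",                # RSS (WordPress default)
    "sitemap.xml",
    "robots.txt",
]

def candidate_endpoints(base_url: str) -> list:
    """Return the URLs worth probing before parsing a site's HTML."""
    if not base_url.endswith("/"):
        base_url += "/"
    return [urljoin(base_url, path) for path in COMMON_ENDPOINTS]
```

Given WordPress’s market share, even this short list covers a large fraction of the web, which is the argument for why big crawlers “should know better.”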

Scraper and Crawler Practicalities

  • Large-scale scrapers value generic logic: one HTML parser works “everywhere,” whereas each API needs bespoke client code and semantics.
  • Some implement special handling for major CMSes (WordPress, MediaWiki) because their APIs are easy wins.
  • Others say that if you’re scraping a specific site, it’s reasonable to learn and use its API, especially when it’s standardised.

LLMs and Parsing

  • Debate over using LLMs to interpret HTML:
    • Pro: they reduce the need to handcraft selectors; can quickly infer structure.
    • Con: massive compute vs. simple parsing, probabilistic errors, and no clear audit trail; structured data remains essential where accuracy matters.

Robots.txt, Blocking, and Legal/Ethical Aspects

  • Many note that robots.txt is widely ignored, especially by AI crawlers.
  • Ideas raised: honeypot links, IP blocklists, user-agent rules, Cloudflare routing, browser fingerprinting; but participants see this as an arms race with collateral damage (e.g., cloud desktops, residential proxies).
  • EU law and “content signals” headers/robots extensions may provide some legal leverage, but there’s skepticism big AI companies will respect voluntary schemes.
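The honeypot idea can be sketched with the standard library: disallow a trap path in robots.txt and link to it invisibly; any client that requests it has, by definition, ignored robots.txt and can be blocklisted. The path and rules below are illustrative.

```python
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: *
Disallow: /trap/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

def is_honeypot_hit(path: str) -> bool:
    """True if a well-behaved crawler should never have requested this path."""
    return not rp.can_fetch("*", path)
```

The arms-race caveat applies: crawlers behind residential proxies make the resulting IP blocklist both leaky and prone to collateral damage.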

Prompt Poisoning and Anti-scraping Gimmicks

  • Hiding adversarial text in HTML to poison AI outputs is discussed but seen as fragile:
    • Sophisticated crawlers can render pages, detect hidden content, and filter it.
    • Risk of breaking accessibility or legitimate hidden/interactive content.

Human vs AI Interfaces & Formats

  • Some fear that AI-specific APIs will eventually degrade human UIs, forcing users to go through agents.
  • Others point to lost opportunities like browser-side XSLT/XML+templates or standardized OpenAPI-style descriptions that could have unified human and machine consumption.

Adafruit: Arduino’s Rules Are ‘Incompatible With Open Source’

Status of Arduino’s New Terms

  • Several commenters note the “no reverse engineering” and proprietary SaaS clauses predate the Qualcomm acquisition; they see the article’s framing as misleading or alarmist.
  • Others argue the acquisition simply made long‑running “enshittification” visible: closed “pro” boards since 2021, growing SaaS emphasis, more complex licensing.

SaaS, Cloud Lock‑In, and Reverse Engineering

  • Main practical worry: Arduino could gradually push development into a proprietary cloud IDE/toolchain, restricting local workflows via licensing, libraries, or support decisions.
  • Some consider this unlikely or easily avoided by switching platforms if it happens. Others, especially those doing commercial or long‑lived deployments, see it as a serious risk.
  • The “no reverse engineering of the platform” clause is widely seen as standard boilerplate for hosted services, with limited practical effect on board hacking.

Adafruit’s Critique and Motives

  • Some participants think Adafruit’s public criticism overstates the issues and functions as marketing or FUD, noting that an EFF spokesperson found the terms mostly reasonable.
  • Others argue that, competitive tension aside, it is important to call out any erosion of hacking‑friendliness in a flagship educational platform.

Open Source Compatibility and Licensing of User Code

  • Several insist the hardware designs, classic toolchains, and many libraries remain open source; the conflict is about hosted services and terms, not the core ecosystem.
  • The perpetual license over user‑uploaded content in the cloud IDE is a red line for some users, who compare it unfavorably to traditional tools that make no claim on user work.
  • There is discussion of why hosted tools tend toward expansive licenses (liability, compilation, hosting), but also skepticism that this justifies broad rights grabs.

Educational Impact and Chromebooks

  • A concrete concern: for students on locked‑down school Chromebooks, the cloud IDE is effectively the only option, so any restrictive shift there disproportionately affects education.
  • Some argue Chromebooks/iPads are fundamentally poor platforms for “real” computing education; others note they can work but require tradeoffs and workarounds.

Alternatives and Future of Arduino

  • Many hobbyists report already having moved to ESP8266/ESP32, RP2040/Pico, STM32, or Nordic chips, often using PlatformIO or vendor SDKs instead of the Arduino IDE.
  • Several emphasize that Arduino’s key legacy is lowering the barrier to entry; rivals still struggle to match its plug‑and‑play ecosystem and educational materials.
  • Opinions diverge on Qualcomm’s intent: some think Arduino is too small to matter; others stress that developer ecosystems shape downstream chip sales and deserve protection.

GraphQL: The enterprise honeymoon is over

Long-term experience & where GraphQL works

  • Some teams report nearly a decade of success with GraphQL across many backends and frontends; they see it as past the “honeymoon” and into a stable, productive phase.
  • Others tried it (often via Apollo or Hasura) and ultimately went back to REST/OpenAPI or RPC, feeling they gained little beyond extra complexity.

REST/OpenAPI, OData, tRPC, gRPC and other alternatives

  • Several argue that OpenAPI with generated types and clients provides similar contract guarantees without GraphQL’s resolver and query complexity.
  • Counterpoint: OpenAPI specs often drift or are too verbose unless generated or managed with higher-level tools (Typespec, codegen libraries).
  • OData attracts some as a “RESTy” alternative, but others criticize its verbosity, overpowered filtering, and weak tooling.
  • In TypeScript-first stacks, tRPC and gRPC are cited as nicer contract solutions when you don’t need the “graph” aspects.

What people see as GraphQL’s real benefits

  • Many reject “overfetching” as the main value; they highlight instead:
    • Strong, enforced schema contracts and type-safe evolution (add/deprecate fields, deprecate endpoints).
    • API composition for M:N client–service relationships and hiding microservice/REST chaos behind a single graph.
    • Federation/supergraph for large enterprises as a coordination and governance tool, especially across many teams.
    • UI composition via fragments, colocation, and data masking (especially with Relay-style tooling).

Complexity, auth, and operational pain

  • Critics emphasize resolver composition, nested permission checks, and schema sprawl as major cognitive and maintenance burdens.
  • AuthZ through nested resolvers is seen as particularly hard to reason about and coordinate across teams.
  • Some note that many production setups end up locking queries down (persisted queries, max depth/complexity), effectively turning dynamic GraphQL into a set of fixed RPC calls.
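The persisted-query lockdown works roughly like this: clients send only the hash of a query registered at build time, so the server never executes arbitrary query text. A minimal sketch (registry contents and error handling are illustrative):

```python
import hashlib

REGISTERED = {}  # digest -> query text, populated at build/deploy time

def register(query: str) -> str:
    """Build step: persist a query and return its digest for the client."""
    digest = hashlib.sha256(query.encode("utf-8")).hexdigest()
    REGISTERED[digest] = query
    return digest

def resolve(digest: str) -> str:
    """Server side: reject anything that wasn't registered."""
    if digest not in REGISTERED:
        raise PermissionError("unknown persisted query")
    return REGISTERED[digest]
```

This is why critics say such deployments are “GraphQL in name only”: the set of executable operations is fixed, exactly as with RPC endpoints.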

Client tooling: Relay, Apollo, URQL, Isograph

  • Several insist GraphQL “only pays off” with advanced clients (Relay/Isograph), citing:
    • Normalized caching and fine-grained re-renders.
    • Fragment colocation and auto-generated queries.
    • Pagination, refetch, and entrypoint patterns.
  • Others find Apollo/URQL plus codegen or gql.tada “good enough” and see Relay as too complex or poorly documented.

Performance, overfetching, and database concerns

  • Some maintain overfetching is a real web performance problem; others say modern “enshittification” has other bigger causes.
  • N+1 queries, hot shards, and adversarial queries are recognized risks; typical mitigations are query-cost heuristics, depth limits, rate limiting, and dataloader patterns.
  • There’s disagreement on how much GraphQL really helps vs simply moving complexity from REST endpoints into the graph layer.
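The dataloader pattern mentioned above batches lookups so N resolvers trigger one fetch instead of N. A minimal sketch, with `batch_fn` standing in for a real “SELECT ... WHERE id IN (...)” call:

```python
class DataLoader:
    """Collect keys during resolver execution, then fetch them all at once."""

    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # maps a list of keys -> {key: value}
        self.pending = set()
        self.cache = {}

    def want(self, key):
        """Called by each resolver; just records the key."""
        if key not in self.cache:
            self.pending.add(key)

    def dispatch(self):
        """One batched fetch for everything requested so far."""
        if self.pending:
            self.cache.update(self.batch_fn(sorted(self.pending)))
            self.pending.clear()

    def get(self, key):
        return self.cache[key]
```

Deduplication falls out for free: resolving the same author across many posts costs one key in the batch, not one query per post.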

Public APIs, data warehousing, and reporting

  • For SaaS/public APIs (e.g., Shopify-style), GraphQL is praised for discoverability and rich, typed access; others say schemas can become so verbose that basic operations feel harder than REST.
  • Data engineers complain GraphQL is painful for bulk extraction/warehousing: they must reverse‑engineer schemas via queries, hit rate limits, and “overfetch anyway” just to get everything into a warehouse.

When people think GraphQL is a good fit

  • Common “good fit” scenarios mentioned:
    • Large UI codebases with many teams/components needing independent data contracts.
    • Many microservices needing a unified access layer and federation.
    • Internal-first APIs where auth, tooling, and discipline can be tightly controlled.
  • Outside those niches, many commenters feel REST/OpenAPI (or equivalent) is simpler, cheaper, and easier to secure and operate.

Hashcards: A plain-text spaced repetition system

Plain Text, Markdown, and Recutils

  • Many commenters like the core idea: cards as plain text, editable with any editor and managed with git and Unix tools.
  • Markdown is praised as a “final form” for text systems: readable, extensible, easy to render on GitHub, and supports images, math, and cross-links.
  • Some wish the project had used GNU recutils/recfiles (plain-text structured data) instead of inventing a new format; others note that tooling and editor support for recutils is still weak.

Relationship to Anki and Other SRS Tools

  • Hashcards is seen as a simpler, more transparent alternative to Anki, especially for terminal-focused users.
  • Several people defend Anki strongly: flexible note/model system, templates, CSS/JS customization, plugin ecosystem, and deck hierarchies.
  • Others find Anki powerful but UX-heavy, confusing for beginners, and “good enough but painful.”
  • A recurring wish: robust “import from Anki” in new tools; developers note that Anki’s data model is complex and often underestimated.

Design Choices: Hash IDs, SQLite, Media

  • Content-addressed cards (ID = hash of text) raise concerns: any edit—even a typo fix—creates a new card and discards history. Opinions split between “major drawback” and “actually good; corrected facts should be relearned.”
  • Some disappointment that the article touts “no database” but still uses SQLite for review history; defenders argue only card content must be plain text.
  • Images and audio are already supported via standard Markdown syntax.
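The edit-discards-history concern follows directly from content addressing: if the ID is a hash of the text, any change produces a new ID. A sketch of the idea (Hashcards’ exact hashing scheme may differ):

```python
import hashlib

def card_id(text: str) -> str:
    """Content-addressed card ID: a truncated SHA-256 of the card text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]
```

Fixing even a one-character typo yields a different ID, so the review history keyed on the old ID is orphaned, which is exactly what the two camps in the thread disagree about.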

Card Creation and AI Assistance

  • Many agree that card entry is the main bottleneck.
  • LLMs are being used to mass-generate cards from PDFs, websites, or news, with the learner later pruning or editing; especially useful for language learning.

How People Use SRS and Pitfalls

  • Use cases mentioned: languages, music intervals, chess openings, mathematics, bar exam prep, technical knowledge, and integrating cards into markdown/org notes.
  • Several commenters emphasize selectivity: don’t flood the system with trivial facts or you end up in “review hell.”
  • Suggested practice: multiple cards per important concept, move quickly from basic facts to higher-order or “second-order” cards that compare and apply concepts.

Beyond Facts: Behavior and Life Decisions

  • One long subthread explores using SRS to reshape behavior and relationships (e.g., prompts about past interpersonal mistakes, spouse interactions, or key life judgments).
  • Cards can encode situations and desired reactions; scheduling reviews on simple patterns (e.g., Fibonacci) is suggested instead of fine-grained grading.
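The “simple pattern instead of fine-grained grading” suggestion can be sketched as a Fibonacci interval generator. The starting values in days are an assumption for illustration:

```python
def fibonacci_intervals(n: int) -> list:
    """First n review intervals in days: 1, 2, 3, 5, 8, 13, ..."""
    a, b = 1, 2
    out = []
    for _ in range(n):
        out.append(a)
        a, b = b, a + b
    return out
```

The appeal is predictability: no per-card ease factors to tune, just a fixed expanding schedule you can compute in your head.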

Algorithms, Discipline, and Ecosystem

  • FSRS is mentioned positively; people ask about its real-world benefits versus SM‑2.
  • Several note that any SRS works only with near-daily use; long breaks lead to heavy forgetting even for “solid” cards.
  • Numerous alternative tools are cited: org-drill/org-srs (Emacs), Obsidian’s spaced repetition plugin, CLI tools, GoCard, Rails and web apps, and phone-based workflows (e.g., Termux).
  • Ideas extend to “spaced repetition social networks” and even scheduling calls with friends on a spaced repetition schedule.
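For reference, SM-2 (the baseline FSRS is compared against) is small enough to sketch in full. Grades run 0–5; the ease factor EF and interval I evolve per review. This follows the published SuperMemo-2 formula; real implementations add their own tweaks.

```python
def sm2_review(q: int, reps: int, ef: float, interval: int):
    """One SM-2 review with grade q; returns (reps, ef, interval_in_days)."""
    if q < 3:  # failed: restart the repetition sequence, keep EF
        return 0, ef, 1
    ef = max(1.3, ef + (0.1 - (5 - q) * (0.08 + (5 - q) * 0.02)))
    reps += 1
    if reps == 1:
        interval = 1
    elif reps == 2:
        interval = 6
    else:
        interval = round(interval * ef)
    return reps, ef, interval
```

FSRS replaces these fixed constants with parameters fitted to the user’s actual review history, which is where its claimed accuracy gains come from.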