Hacker News, Distilled

AI-powered summaries of selected HN discussions.

Page 202 of 355

The dead need right to delete their data so they can't be AI-ified, lawyer says

Being remembered vs being deleted

  • Some find posthumous “AI-ification” of themselves disturbing, but consider total erasure worse, seeing persistent data as a small claim on history.
  • Others argue future remembrance is pointless because everyone is eventually forgotten and the living should focus on present life, not legacy.
  • A counterpoint: if you want to be remembered, you must accept you won’t control how future technologies use your traces.

Posthumous rights and autonomy

  • Comparisons are drawn between the right to delete one’s data after death and the contested right to die; both pit individual autonomy against state/collective interests.
  • Some argue the dead already have rights (wills, treatment of remains, protection of likeness/defamation), so extending this to digital data is consistent.
  • Others think posthumous rights are harmful “dead hand” control that should yield to benefits for the living (e.g., organ donation).

Consent and AI replicas

  • Several commenters would actively opt in to being AI-ified, especially for comforting or advising loved ones (e.g., a parent leaving an AI “self” for their child).
  • Others stress this should be strictly opt-in, not automatic, given abuse potential (e.g., AI interviews with deceased victims presented as “news”).
  • There’s concern about respect: auto-deleting everything may be disrespectful, but so is commercial reuse of someone’s likeness against their wishes.

Commercial exploitation and ad dystopias

  • Strong fear that ad-tech will weaponize “digital ghosts” for profit: AI versions of grandparents or dead children pushing products or keeping users engaged in grief.
  • A fictional vignette about an ad network that resurrects deceased loved ones for hyper-targeted ads resonates as disturbingly plausible.
  • Commenters expect AI-built personas of all users for ad optimization and microtargeting, with some joking about “grandma endorsing grooming products.”

Legal frameworks and likeness

  • Discussion covers estate law, RUFADAA, and “postmortem right of likeness,” plus moves like Denmark’s proposal to give people copyright over their features.
  • Questions arise about conflicts (e.g., lookalikes, identical twins, background subjects in photos) and whether likeness laws end up mainly protecting the famous.
  • Some propose simply folding likeness/data rights into estates; others see this as mostly benefiting those with money to enforce it.

Technical and practical issues

  • One commenter is actively collecting exhaustive personal data (video, audio, sensors) to train a future “self-model,” acknowledging it as a long shot.
  • Others doubt how accurate a persona can be from typical online traces, though many assume current profiling is already sophisticated enough for persuasive fakes.
  • Practical experience with Facebook memorialization shows platforms’ processes for handling the dead can be slow, inconsistent, and seemingly low-priority.

Skepticism and edge cases

  • Some believe the dead should have fewer rights, not more; once you’re not a legal person, your data should be governed by estate and general law, not new rights.
  • Concerns include: could deletion rights destroy evidence of crimes? Could heirs erase embarrassing or historically important records?
  • Cultural and ethical objections liken AI resurrection to necromancy or “wizard portraits,” arguing the living shouldn’t speak with the mouths of the dead.

OpenFreeMap survived 100k requests per second

Cloudflare vs origin load

  • Several commenters note that Cloudflare served ~99% of traffic, implying the origin only handled ~1,000 rps while the CDN absorbed ~99,000 rps.
  • Others push back on dismissing this as “just Cloudflare surviving”: designing URL paths and cache headers to achieve a 99% hit rate is seen as real engineering work, not an accident.

Were the requests “real users” or bots?

  • The blog’s claim that usage was largely scripted is questioned: people say map-art fans often “nolife” exploration for hours, which can generate thousands of tile requests.
  • One commenter measured 500 tile requests in 2–3 minutes of casual scrolling, arguing the author’s “10–20 requests per user” baseline fits embedded, non-interactive maps, not active exploration.
  • Others counter with math: 3B requests / 2M users (1,500 requests/user) and /r/place‑style dynamics strongly suggest significant automation, even if not exclusively bots.
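
A back-of-envelope pass over the numbers quoted in the thread (a rough sketch; the inputs are the figures cited above, not independently verified):

```python
# Rough check of the figures cited in the thread (not independently verified).
total_requests = 3_000_000_000   # ~3B tile requests, as quoted
users = 2_000_000                # ~2M users, as quoted

per_user = total_requests / users
print(f"requests per user: {per_user:,.0f}")   # 1,500

# One commenter measured ~500 tile requests in 2-3 minutes of casual scrolling;
# at that rate, averaging 1,500 requests takes ~7-8 minutes of active panning per user...
minutes_active = per_user / (500 / 2.5)
print(f"minutes of active scrolling to reach that average: {minutes_active:.1f}")

# ...but against the author's 10-20 requests/user baseline for embedded maps,
# 1,500 is a 75-150x multiple, which is why automation is suspected.
print(f"multiple of the 10-20 req/user baseline: {per_user / 20:.0f}x to {per_user / 10:.0f}x")
```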

Blame, entitlement, and expectations of a free API

  • There’s a sharp split on whether it was fair to criticize wplace:
    • One side: if you publicly advertise “no limits,” “no registration,” and “commercial use allowed,” you shouldn’t blame users for heavy usage; that’s akin to honoring a bulk hamburger order at the posted price.
    • The other side: hammering a volunteer, no‑SLA service at 100k rps is effectively stress‑testing it; expecting the operator to scale “to infinity” on their own dime is seen as entitled.
  • Some argue the operator handled it well by blocking via referrer, reaching out, and suggesting self‑hosting while keeping the free public instance available.

Rate limiting and controls

  • Suggestions include per‑IP rate limits (e.g., 100 req/min) or JA3/JA4 fingerprinting, but the maintainer prefers referrer‑based controls so they can talk to site owners and steer heavy users to self‑hosting.
  • Others note referrer‑based rate limits match the real control point (the embedding site) better than per‑user limits for distributed clients.
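
For reference, the per-IP option floated above usually amounts to a token bucket keyed by client IP (or by Referer host, to match the maintainer's preferred control point). A minimal in-memory sketch with illustrative limits, not the actual OpenFreeMap setup:

```python
import time
from collections import defaultdict

# Minimal token-bucket rate limiter keyed by an arbitrary string (client IP or
# Referer host). Illustrative only; real deployments usually enforce this at the
# proxy/CDN layer (e.g., nginx limit_req or Cloudflare rules), not in app code.
RATE = 100 / 60.0    # refill rate: ~100 requests per minute (the limit floated in the thread)
BURST = 100          # bucket capacity

_buckets = defaultdict(lambda: (float(BURST), time.monotonic()))  # key -> (tokens, last seen)

def allow(key: str) -> bool:
    tokens, last = _buckets[key]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)  # refill since last request
    if tokens >= 1.0:
        _buckets[key] = (tokens - 1.0, now)
        return True
    _buckets[key] = (tokens, now)
    return False

# Key by IP for per-client limits, or by Referer host to rate-limit (and talk to)
# the embedding site instead of its distributed visitors.
print(allow("203.0.113.7"))     # True until this key's bucket drains
print(allow("wplace.example"))  # separate bucket per key
```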

Infrastructure, caching, and costs

  • Debate over why wplace didn’t cache tiles themselves: some call it “laziness,” others cite priorities and the reality of a fun side‑project that suddenly went viral.
  • 56 Gbit/s is viewed by some as “insane” and by others as feasible on a few well‑provisioned servers; consensus is that bandwidth cost, not raw server capability, is the main constraint for a free service.
  • Long subthread on nginx tuning: file‑descriptor limits, open_file_cache size, multi_accept, and whether FD caching is even necessary with modern NVMe and OS caches.

Alternative architectures

  • Multiple people suggest PMTiles + CDN as a simpler model (single large static file, range requests), noting comparable performance in small benchmarks.
  • Others ask why not run entirely on Cloudflare (Workers, R2, Cache Reserve); responses highlight migration effort and the risk of variable, usage‑based bills vs predictable Hetzner dedicated servers.
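
The PMTiles idea mentioned above boils down to serving byte ranges out of one large static archive that a CDN can cache. A minimal sketch of the transport side only; the URL and offsets are hypothetical, and a real PMTiles reader first resolves each tile's offset and length from the archive's internal header/directory:

```python
import requests

# Hypothetical archive URL; a real PMTiles client derives per-tile byte ranges
# from the archive's internal directory before issuing requests like this one.
ARCHIVE_URL = "https://tiles.example.com/planet.pmtiles"

def fetch_range(url: str, offset: int, length: int) -> bytes:
    """Fetch a byte range from a single large static file via an HTTP Range request."""
    resp = requests.get(url, headers={"Range": f"bytes={offset}-{offset + length - 1}"})
    resp.raise_for_status()
    if resp.status_code != 206:
        raise RuntimeError("server or CDN does not support partial content")
    return resp.content

# Example: the first 16 KiB (in a real archive this region holds the header).
header = fetch_range(ARCHIVE_URL, 0, 16_384)
print(len(header))
```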

Show HN: The current sky at your approximate location, as a CSS gradient

Perceived accuracy and physical intuition

  • Many commenters report the gradient matches their actual clear sky “shockingly” well, including color shift toward the horizon and wildfire-smoke haze.
  • Others note mismatches when local conditions deviate from the ideal clear atmosphere: cloudy, gray, or smoky skies often appear as clear blue in the app.
  • Some high-latitude users and people on daylight-saving time report that twilight and night colors can be off by about an hour.
  • Several people newly notice why the horizon isn’t blue (longer optical path, more scattering/particles), and appreciate seeing that captured.

Night sky and realism limits

  • At night, the page is often just black; multiple users initially think the site is broken.
  • Suggestions include adding stars, night gradients, clouds, or light-pollution effects, but others argue that even in other apps, a stylized, simple sky is often preferable to realism for usability.

Weather, smoke, and measurement

  • Repeated suggestion: incorporate real-time weather, haze, or satellite data so the gradient reflects actual cloud/smoke conditions.
  • One commenter describes commercial work using physical sensors at windows to measure true sky color temperature and reproduce it indoors, arguing that modeling alone can’t capture clouds/smoke accurately enough.

Implementation details & web tech discussion

  • People are impressed that the page renders via a simple HTML gradient with essentially no client JS or DOM complexity.
  • The stack: Astro on Cloudflare Pages, using Cloudflare’s IP geolocation headers (surfaced via Astro.locals.runtime.cf) plus a sun-position library and an atmospheric-scattering model.
  • There’s lively side discussion about old-school meta http-equiv="refresh" vs HTTP headers, .htaccess, nginx behavior, and limitations of early shared hosting, which explains why client-side workarounds like meta-refresh were attractive.
  • Some ask for a pure client-side version; others propose using timezone as a rough location proxy for privacy.
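
As a rough illustration of the approach described above (not the site's actual code, which uses a proper sun-position library and scattering model), a toy mapping from sun elevation to a two-stop CSS gradient; the anchor colors are invented for illustration:

```python
def lerp(a, b, t):
    """Linearly interpolate between two RGB triples."""
    return tuple(round(a[i] + (b[i] - a[i]) * t) for i in range(3))

def sky_gradient(sun_elevation_deg: float) -> str:
    """Toy model: choose zenith/horizon colors from the sun's elevation and emit
    a CSS linear-gradient string. The real site derives colors from an
    atmospheric-scattering model; these anchor colors are made up."""
    night    = ((5, 5, 20),    (15, 15, 40))      # (zenith, horizon)
    twilight = ((40, 50, 110), (250, 140, 80))
    day      = ((60, 130, 220), (190, 215, 235))  # paler horizon: longer optical path

    e = sun_elevation_deg
    if e <= -12:                                  # deep night
        zen, hor = night
    elif e <= 0:                                  # twilight: blend night -> twilight
        t = (e + 12) / 12
        zen, hor = lerp(night[0], twilight[0], t), lerp(night[1], twilight[1], t)
    else:                                         # day: blend twilight -> full day by ~30 deg
        t = min(e / 30, 1.0)
        zen, hor = lerp(twilight[0], day[0], t), lerp(twilight[1], day[1], t)

    return f"linear-gradient(to bottom, rgb{zen}, rgb{hor})"

print(sky_gradient(45))   # linear-gradient(to bottom, rgb(60, 130, 220), rgb(190, 215, 235))
```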

Feature ideas and applications

  • Popular ideas: live desktop/phone wallpaper, smart-home dashboards, “fake windows” or skylights, backgrounds for other sites, and a UI to tweak or copy gradients.
  • Requests also include manual location/time override for when IP geolocation is wrong.

Broader discussion: realism vs product needs

  • A long sub-thread debates a story about implementing a highly realistic sky in navigation software, then being told to revert to a simple blue rectangle.
  • Themes include: overengineering vs scope, delight vs clarity, corporate aversion to “micro-innovations,” maintainability costs, and what professional craftsmanship should prioritize.

Intermittent fasting strategies and their effects on body weight

Overall interpretation of the study

  • Many see the “longer trials needed” language as partly signaling a funding need, not just scientific caution.
  • Core takeaway: alternate-day fasting (ADF) leads to weight loss and may slightly outperform equivalent continuous calorie restriction, though the basic “eat less → weigh less” result is unsurprising to many.

Intermittent fasting vs simple calorie restriction

  • Several comments emphasize that if ADF averages the same weekly calories as daily restriction, weight loss is similar; the paper’s nuance is that ADF may yield marginally more loss.
  • Supporters argue that fasting may activate distinct pathways (mTOR, autophagy) beyond linear calorie math and could affect longevity.
  • Others stress that the main benefit of IF/ADF is making restriction easier to adhere to, not magic timing.

Body composition, lean mass, and protein

  • Concern: weight loss metrics ignore whether loss is fat vs lean mass; losing muscle is especially risky for older adults.
  • Counterview: for obese people, total weight/fat loss dominates other concerns.
  • Consensus that any deficit risks lean mass loss; adequate protein and resistance training are needed.
  • Debate on how to calculate protein needs (actual vs ideal/adjusted body weight), with citations for using ideal or adjusted weight in obesity.
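
To make the actual-vs-adjusted debate concrete, a sketch using two conventions that appear in that literature (the Devine ideal-body-weight formula and a 0.4 adjustment factor); the 1.2–1.6 g/kg band is one commonly cited range, not a recommendation from the thread:

```python
def ideal_body_weight_kg(height_cm: float, male: bool = True) -> float:
    """Devine formula: 50 kg (men) / 45.5 kg (women) + 2.3 kg per inch over 5 ft."""
    inches_over_5ft = max(0.0, height_cm / 2.54 - 60)
    return (50.0 if male else 45.5) + 2.3 * inches_over_5ft

def adjusted_body_weight_kg(actual_kg: float, ideal_kg: float) -> float:
    """A common clinical convention for obesity: IBW + 0.4 * (actual - IBW)."""
    return ideal_kg + 0.4 * (actual_kg - ideal_kg)

actual, height = 120.0, 178.0               # hypothetical person: 120 kg, 178 cm, male
ibw = ideal_body_weight_kg(height)          # ~73 kg
abw = adjusted_body_weight_kg(actual, ibw)  # ~92 kg

for label, weight in [("actual", actual), ("ideal", ibw), ("adjusted", abw)]:
    lo, hi = 1.2 * weight, 1.6 * weight     # 1.2-1.6 g/kg/day, one commonly cited band
    print(f"{label:8s} {weight:5.1f} kg -> {lo:3.0f}-{hi:3.0f} g protein/day")
```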

Calories in / calories out and metabolic complexity

  • One camp insists thermodynamics is decisive: sustained deficit always leads to weight loss; timing or diet type mainly help psychology.
  • Others argue this framing is “true but useless” because:
    • Energy expenditure adapts to intake (NEAT, body temperature, hormonal changes).
    • People misestimate calories in and can’t measure calories out precisely.
    • Crash deficits can provoke long-lasting metabolic compensation and regain.
  • Disagreement over how large adaptation effects are and how actionable CICO is in practice.

Psychology, habits, and practicality

  • Many see IF as a habit-reset tool:
    • Breaking automatic breakfast/snacking.
    • Learning that hunger signals are tolerable and not emergencies.
  • Coffee-with-milk during “fasts” sparks minor purity debates; some prioritize strict rules, others prioritize sustainable behavior change.

Exercise vs diet

  • Strong theme: you “can’t outrun a bad diet”; exercise often increases appetite and burns fewer calories than people think.
  • Others report large endurance volumes (running/cycling) allowing very liberal eating, but this is framed as niche and time-intensive.
  • For modest weight loss, some argue exercise helps mood and adherence; others emphasize portion control as primary.

Fasting, health, and longevity

  • Some clinicians and commenters use IF within broader longevity strategies and note macaque and other animal data on fasting benefits.
  • Discussion of fasting around chemotherapy, plus anecdotes about long fasts (including extreme multi-day ones), suggests possible autophagy and immune effects, but commenters flag that much of the evidence is animal-based and human relevance is still unclear.

Anecdotal outcomes and alternative strategies

  • Multiple N=1 stories:
    • Long-term 16:8 IF with stable weight but perceived mental clarity.
    • Extreme IF patterns (23:1, alternate-day) with substantial weight and fat loss.
    • IF combined with keto producing dramatic losses and appetite suppression; carb restriction cited as making IF “trivially easy.”
    • Others lose large amounts via skipping specific meals plus tracking macros, then later drop IF due to blood sugar issues.
  • Some frame IF mainly as one structured system among many (keto, low-carb, etc.) that can help people who struggle with unstructured “just eat less” advice.

Long-term exposure to outdoor air pollution linked to increased risk of dementia

Possible biological mechanisms

  • Some wonder if air pollution could interact with hypotheses about lithium depletion and Alzheimer’s.
  • Ideas raised: other metals from pollution might displace lithium in biological processes; air pollution induces oxidative stress, while lithium may modulate oxidative-stress pathways.
  • Others note lithium itself is not an antioxidant; current dementia–pollution work emphasizes inflammation, oxidative stress, vascular damage, and blood–brain barrier disruption rather than direct nutrient depletion. Mechanistic links remain unclear.

Air quality policy and urban pollution

  • Several comments treat the findings as support for low-emission zones (e.g., ULEZ), citing clear downward trends in city pollution over time.
  • Some complain about diesel exhaust (especially for motorcyclists) and advocate stricter bans.
  • Others highlight non-vehicle sources: coal trains, warehouses, ports, and roads generally as major PM2.5 contributors.
  • Historical discussion notes a brief period when indoor and outdoor air may both have been relatively clean, with gas and electric replacing wood/coal fires before mass motorization; gas cooking is called out as an indoor pollutant.

Caregiving, planning, and end-of-life choices

  • Multiple caregivers describe dementia care as emotionally and financially crushing, stressing:
    • Plan early for late-life and memory care.
    • Don’t attempt 24/7 home care alone; burnout is common.
    • Facilities with enough residents can afford continuous staff but are often ruinously expensive, depleting estates before public support kicks in.
  • Some commenters discuss pre-planned assisted suicide or self-directed death to avoid prolonged cognitive decline, recognizing legal and practical uncertainties and moral ambiguity.

Environmental justice and socioeconomic disparities

  • Air pollution is framed as a “third factor” behind poverty and marginalization; marginalized groups disproportionately live near major roads and industrial areas.
  • A claim that unequal air pollution exposure reduces life expectancy for Black residents by 15 years in one county is met with skepticism:
    • Commenters question causal identification versus confounding by poverty.
    • Others argue such large effects are plausible in heavily polluted, segregated settings, but 15 years solely from air is hard to accept without very strong methods.

Prevalence, aging, and numbers

  • Commenters convert the article’s global dementia counts to proportions (~0.7% of total population; ~7% of people 65+), noting this is already large.
  • The projected tripling of cases by 2050 is attributed mainly to population aging; age-adjusted prevalence may stay nearly flat.
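
The conversion itself is straightforward; a sketch using round global figures consistent with the thread (on the order of 55–60 million people living with dementia, ~8 billion people worldwide, roughly 10% of them aged 65+; these are approximations, not numbers taken from the article):

```python
# Rough proportions from round global figures (approximate; not from the article).
dementia_cases = 57e6     # on the order of 55-60 million worldwide
world_pop      = 8e9
share_65_plus  = 0.10     # roughly 10% of people are aged 65+

print(f"share of total population: {dementia_cases / world_pop:.1%}")                    # ~0.7%
print(f"share of people aged 65+:  {dementia_cases / (world_pop * share_65_plus):.1%}")  # ~7.1%
```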

PM2.5 composition and toxicity

  • PM2.5 is emphasized as a size class, not a single substance. Particles can be salts, organics, metals, etc., with very different toxicities.
  • Some note evidence that toxicity per microgram differs widely by source (traffic vs coal vs biomass), though which is worst is not settled in the thread.
  • Understanding composition-specific health impacts is described as a major open research area.

Mitigation strategies for individuals

  • Suggested actions for people in polluted cities:
    • Use HEPA (or “HEPA-like”) air purifiers indoors; add substantial activated carbon if gases (NO₂, VOCs) are a concern.
    • DIY options: box fan plus HVAC filter; cheap consumer purifiers from big-box stores.
    • Wear respirators or masks outdoors on bad days.
    • Be cautious about living (or placing schools and daycares) within a few dozen meters of major roads or freeways; pollution is believed to drop off with distance, though data cited in the thread are sparse.
    • Recognize that car cabin filters are limited; cars draw outdoor air too.
  • A cited study suggests that, despite higher outdoor exposure, cyclists and transit users may gain net life expectancy from everyday physical activity versus car commuters.

Other notes and disagreements

  • Some argue that in extremely polluted environments, people may simply die earlier from other causes and never reach typical dementia ages, which others criticize as a misuse of statistics.
  • Genetics (e.g., early-onset Alzheimer’s mutations, plus potentially protective variants) are mentioned as powerful determinants, showing that environment is only one factor.
  • There is scattered frustration about slow political action despite long-standing knowledge on how to cut particulate and NO₂ emissions, and concern about regulatory rollbacks.
  • One commenter attributes multiple chronic diseases primarily to insulin/metabolic issues, but this is presented without supporting evidence and not taken up by others.

Stanford to continue legacy admissions and withdraw from Cal Grants

Elite Admissions as Legalized Bribery & Class Reproduction

  • Many see Stanford’s stance as continuous with a long tradition: big “legal” donations buy access, while only small, direct bribes send people to jail.
  • Several argue this is structurally indistinguishable from a country club: power, money and status circulate among the same strata; universities are “finishing schools for the elite.”
  • Legacy and donor admissions are framed as modern hereditary nobility that preserves past discrimination (who was allowed in generations ago).

Defenses of Legacy & Donor Preference

  • Defenders claim legacies at top schools are usually academically strong and often better-prepared than average applicants.
  • Legacy preference is portrayed as a tie‑breaker among many near-identical high achievers, helping build multigenerational alumni networks and boosting donations.
  • Donor admits are justified by some as “whales” whose full freight and gifts subsidize financial aid for many non‑rich students and maintain institutional strength.
  • Others argue mixing wealthy, connected students with “smart but less privileged” peers can create powerful networks and opportunities.

Merit, Tests, and “Holistic” Admissions

  • There is broad skepticism about holistic review: historically used to restrict Jews, now seen by some as a tool to cap Asians and engineer a desired class and racial mix.
  • Debate over metrics:
    • Standardized tests praised as the least gameable, especially in a fragmented US K‑12 system.
    • Others note top scores cluster so tightly that tests alone can’t rank within the elite pool; GPA and tests both imperfect but predictive.
    • High school grades are seen as heavily gameable and distorted by grade inflation and unequal school quality.
  • Some propose much harder national exams or university‑run entrance exams; others suggest admitting widely and “weeding out” via very difficult intro courses.

Public Funding, Cal Grants, and Fairness

  • Many support California tying aid to nondiscriminatory admissions and applaud the state for not subsidizing an elite, status‑conferring institution.
  • Others note the immediate losers are low‑income Stanford admits losing Cal Grants; Stanford’s applicant pool and finances will barely notice.
  • A recurring principle: as long as a university takes public money or tax advantages, it should not use legacy, donor, or opaque “holistic” preferences.

Purpose and Value of Elite Universities

  • Several commenters say undergrad academics at “T10” schools aren’t dramatically better than good state schools; the real value is status and connections.
  • Others counter with experiences of much higher rigor and better teaching at top programs, especially in specific departments.
  • Some argue truly egalitarian policy would be to abolish Harvard‑style elite universities altogether; others accept an elite tier but want it genuinely merit‑based.

Broader Class, Politics, and Hypocrisy

  • Thread repeatedly notes that DEI and legacy are both challenges to “pure” academic merit, but one is framed as social justice and the other as naked class privilege.
  • Long side debates question whether US academics are actually “left,” pointing to their tolerance of legacy admissions as evidence of liberal, not egalitarian, politics.
  • Several see the entire system—legacy, DEI fights, Cal Grants tug‑of‑war—as surface struggle over a deeper reality: wealth and connections dominate outcomes regardless.

Yet Another LLM Rant

Model knowledge, synthetic data, and “model collapse”

  • Some speculate newer GPT versions feel “better at coding” mainly because they’re trained on newer docs/blog posts, not because of deeper reasoning.
  • Several comments worry about future models training on their own outputs (synthetic data), calling this unsustainable and linking it to “model collapse” or a “software Habsburg jaw.”
  • Others note synthetic data can be used to reinforce patterns or coverage, but can’t magically create genuinely new knowledge.

Hallucinations, truth, and what it means to “know”

  • One camp says hallucinations are inherent: LLMs always generate something, have no concept of reality, and you can’t define a clean probability cutoff without a ground-truth model of the world.
  • Another camp argues that for narrow domains with labeled data (e.g., the Iris dataset, math benchmarks), probability cutoffs and calibration are feasible (see the sketch after this list); general world-knowledge is the hard part.
  • Long subthreads debate whether humans are “statistical models,” whether LLMs can be said to “know/think/deduce,” and whether humans are actually much better at knowing what they don’t know.
  • Some bring in philosophy (Kant, Dennett, qualia) and neuroscience; others stress we don’t yet have agreed theoretical criteria for AGI.
  • There’s mention of newer reasoning models that more often answer “I’m not sure” in math, as a partial mitigation of hallucinations.
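
To make the narrow-domain cutoff point concrete, a minimal sketch on the Iris dataset: a classifier that abstains whenever its top class probability falls below a threshold. This only illustrates the general idea of calibration-plus-abstention; the 0.9 cutoff is arbitrary, and nothing here reflects how any LLM vendor actually does it:

```python
# Abstention via a probability cutoff on a narrow, fully labeled domain (Iris).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

THRESHOLD = 0.9                      # arbitrary cutoff for this sketch
probs = clf.predict_proba(X_test)    # per-class probability estimates

answered = abstained = correct = 0
for p, true_label in zip(probs, y_test):
    if p.max() < THRESHOLD:
        abstained += 1               # the model effectively says "I'm not sure"
    else:
        answered += 1
        correct += int(p.argmax() == true_label)

print(f"answered {answered}, abstained {abstained}, "
      f"accuracy when answering: {correct / answered:.0%}")
```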

LLMs as tools: usefulness vs unreliability

  • Many argue LLMs are valuable but must be treated like powerful, error-prone tools—more like overconfident interns than compilers. You must always verify.
  • Others counter that good tools are reliable and transparent; LLMs feel more like capricious bureaucracies and so are poor tools, especially when marketed as near-oracles.
  • Some liken the trust problem to Wikipedia or journals—imperfect but still highly useful if you understand their limits; others insist LLMs’ opaque failures and overconfidence are a qualitatively worse issue.
  • A recurring point: checking LLM output can be easier than producing it from scratch for boilerplate or tedious tasks, but becomes dangerous with subtle or high‑stakes work.

Coding workflows, agents, and the zstd/iOS case

  • The original rant centers on GPT fabricating an iOS zstd API; commenters confirm both failure and success cases:
    • Some runs of GPT‑5 (especially with “thinking” mode or web search) correctly say “you can’t; iOS has no zstd, use LZFSE/LZ4 or vendor libzstd.”
    • Other runs confidently hallucinate a built‑in zstd option, illustrating non‑determinism and routing issues.
  • Several advocate “reality checks”: coding agents or IDE integrations that compile/run code, run tests, or query official docs (via tools/MCP), catching hallucinations automatically; a toy version of this loop is sketched after this list.
  • Others report agents can loop uselessly when stuck on a false assumption, burning tokens, so human oversight is still essential.
  • Some share experiences where LLMs handle boilerplate and standard patterns very well, but fail badly on niche algorithms (e.g., Byzantine Generals) or poorly documented edge cases.
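
A toy version of the reality-check loop referenced above. The model call is an explicitly hypothetical placeholder (propose_fix is not a real API); the concrete part is mechanically checking the candidate and feeding the real error back instead of trusting the first answer, with a bounded number of rounds so the loop cannot burn tokens forever:

```python
def propose_fix(prompt: str) -> str:
    """Hypothetical placeholder for an LLM call; not a real API."""
    raise NotImplementedError

def check_syntax(source: str) -> str | None:
    """Cheap mechanical check: does the candidate Python even parse?
    Real agents go further (build, run tests, query docs via tools/MCP)."""
    try:
        compile(source, "<candidate>", "exec")
        return None
    except SyntaxError as exc:
        return str(exc)

def generate_with_reality_check(task: str, max_rounds: int = 3) -> str | None:
    prompt = task
    for _ in range(max_rounds):
        candidate = propose_fix(prompt)
        error = check_syntax(candidate)
        if error is None:
            return candidate          # passed the mechanical check
        # Feed the actual error back rather than accepting the model's claims.
        prompt = f"{task}\n\nYour last attempt failed to parse:\n{error}\nPlease fix it."
    return None   # still failing after max_rounds: hand back to a human, per the thread
```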

Evaluation practices and expectations

  • Multiple comments criticize judging GPT‑5 from a single prompt, likening it to discarding a language or type system after one failure.
  • Others defend that if a tool can confidently send you down a dead end on a straightforward factual constraint (“this API doesn’t exist”), it’s disqualifying for their personal workflow.
  • There’s a meta‑discussion about prompt style (short vs detailed, use of reasoning/search) and “holding it wrong” accusations.

Social and professional impacts

  • Some worry hype is devaluing software engineering, encouraging management to over‑rely on LLMs, and eroding pathways for junior developers who may become dependent on expensive tools.
  • Concerns extend to artists and other professions whose work is used for training, and to a future where AI’s main clear winners are large corporations cutting labor costs.
  • One commenter notes many treat AGI as “Artificial God Intelligence,” criticizing unrealistic expectations and marketing.

Meta: anthropomorphizing and rhetoric

  • Several point out contradictions in calling LLMs “chronic liars” while insisting we shouldn’t anthropomorphize them.
  • The “stochastic parrot” line is seen by some as insightful, by others as an outdated meme that ignores recent empirical progress in reasoning and internal structure.
  • Overall, the thread splits between skeptics emphasizing unreliability and epistemic limits, and practitioners emphasizing pragmatic gains when LLMs are used cautiously within robust feedback loops.

Multimodal WFH setup: flight sim, EE lab, and music studio in 60 sq ft / 5.5 m²

Overall reaction to the setup

  • Many find the tiny, equipment-dense room visually striking and inspiring, especially the ability to combine flight sim, music, and EE lab in ~60 sq ft.
  • Others see it as just “a closet with industrial shelving,” underwhelming once framed as a design-studio case study rather than a personal DIY post.
  • Several note they’d quickly clutter such a space or feel mentally “squeezed” by the cramped environment.

Design firm, cost, and copywriting

  • Some are puzzled or dismissive about hiring a design studio for a home office, likening it to outsourcing your dotfiles.
  • Others defend the value of deliberate design and appreciate the detailed write-up of tradeoffs.
  • The marketing language (“layering shelves vertically…”) is mocked as overwrought; more than one person calls out design-industry “word salad.”
  • This triggers a side debate: is design/beauty objective or subjective? Commenters argue both sides, using logo redesigns and art movements as examples.

Furniture, materials, and ergonomics

  • The unfinished particleboard desk surface is widely criticized as uncomfortable, fragile, and prone to getting gross; some suggest birch or nicer plywood.
  • The chair and legroom are viewed as ergonomically poor by several commenters, who fear for the user’s back and wrists.
  • Keyboard discourse: some think the mechanical keyboard choice is underwhelming; others like it. There’s a long subthread on tenkeyless vs full-size layouts, desk space, and macro keys.

Shelving and hardware details

  • Multiple people try to name/identify the shelving: boltless/industrial shelving, “teardrop” uprights, Gorilla/Muscle/Edsal-style racks, wire rack systems, SuperStrut, and IKEA wooden variants.
  • Practical notes include stability on uneven floors, moisture sensitivity, and tips for cutting legs, adding custom tops, and using caster wheels.

Space, psychology, and WFH

  • Some love the “multimodal WFH” idea: quickly switching between work and hobbies without reconfiguration feels psychologically grounding.
  • Others insist they must physically separate work and play spaces to stay sane, sometimes preferring a simple laptop at the kitchen table.
  • Space constraints (Brooklyn real estate) are cited to explain the tiny room; others share their own small-office builds and large-monitor or multi-desk preferences.

Did California's fast food minimum wage reduce employment?

Effects on Fast-Food Employment

  • NBER paper finds a ~2.7–3.9% relative decline in California fast‑food employment vs “elsewhere in the US,” roughly 18–20k jobs.
  • Several commenters note this is small compared to the sector and may be overshadowed by COVID, delivery shift, and broader restaurant decline.
  • Others argue that even a few percent is “massive” given only part of the sector was directly affected and that the effect likely grows over a decade.

Tradeoffs: Higher Wages vs Fewer Jobs

  • Many frame it as a trade: modest job loss for roughly 20–25% higher wages at the bottom.
  • Supporters call that an overall success, especially if total wages paid rose and if displaced workers quickly found other jobs; skeptics stress those workers experience a 100% wage loss.
  • Utility arguments appear: a 25% raise for low‑income workers may bring more benefit than the loss borne by a smaller displaced group—unless that group is permanently sidelined.
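
The total-wages question above is easy to sanity-check with the thread's own rough numbers (a ~3% employment decline against a ~20–25% wage increase), under the simplifying assumption that hours per remaining worker are unchanged:

```python
# Rough trade-off arithmetic using the figures quoted in the thread.
employment_change = -0.03   # ~2.7-3.9% relative decline, taken as roughly -3%
wage_change = 0.225         # ~20-25% higher wages at the bottom, taken as +22.5%

# Total wage bill = workers x wage, assuming hours per remaining worker are unchanged.
wage_bill_factor = (1 + employment_change) * (1 + wage_change)
print(f"change in total wages paid to the sector's workers: {wage_bill_factor - 1:+.1%}")  # ~+18.8%

# The skeptics' distributional point: that average hides a 100% wage loss
# concentrated on the displaced ~3%, rather than a loss spread across everyone.
```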

Who Loses Jobs? Youth, Low‑Productivity, Disabled

  • Several argue high minimum wages mainly exclude:
    • Teens and first‑time workers (removing “stepping stone” jobs),
    • Older low‑skill workers,
    • People with disabilities or very low measured productivity.
  • Counterpoint: these groups can be supported via wage subsidies, EITC, or targeted programs rather than keeping wages low for everyone.

“Living Wage” vs Business Viability

  • One camp: if a business can’t pay a living wage, it shouldn’t exist; low wages are framed as exploitation and de facto corporate welfare (with taxpayers subsidizing workers via benefits).
  • Opposing camp: “living wage” is vague, differs by household, and minimum wage is not designed for that; forcing high floors kills marginal but socially useful businesses and self‑employment.

Automation, Hours, and Service Quality

  • Observations of:
    • Fewer staff, reduced hours, higher prices, more kiosks and app‑only or drive‑through‑only formats.
  • Some see this as accelerated automation that would have come anyway; others view it as policy‑induced substitution of machines for marginal human jobs, plus degraded service.

Broader Economic & Sector Spillovers

  • Unclear from the thread whether total employment in California fell: some data cited show overall job and fast‑food job counts still rising statewide.
  • Discussion of spillovers: higher low‑end wages could lift demand in other sectors, or simply bid up rents and benefit landlords in a constrained housing market.
  • Housing scarcity and property‑tax structures (e.g., Prop 13) are repeatedly blamed for making any wage inadequate.

Methodology, Data, and Conflicting Studies

  • Berkeley study (using chain‑specific data) is cited as finding stable fast‑food employment, higher wages (~18%), and modest price increases (3–7%).
  • Critics question Berkeley’s union funding; others point out NBER’s corporate funders. Both are seen as potentially biased.
  • There is disagreement about using broad BLS categories vs narrowly defining covered firms and about appropriate control regions (“elsewhere in US” vs adjacent areas, à la Card & Krueger).

International & Comparative Notes

  • Commenters point to Europe and Ontario: higher fast‑food wages, strong unions, and no tipping culture coexist with affordable food—but often slower growth and very high housing costs.
  • Nordic countries often rely on sectoral bargaining rather than statutory minimums, plus strong social safety nets.

Normative and Philosophical Fault Lines

  • Deep split between:
    • Market‑first views: wages = productivity; price floors inevitably price out some workers.
    • Justice‑first views: certain job/wage combinations are morally unacceptable, even if markets “clear.”
  • Alternatives floated include stronger unions, higher EITC, UBI, wage subsidies for disabled workers, and aggressive housing supply reform. No consensus emerges.

An engineer's perspective on hiring

Engineer fungibility, replaceability, and “bus factor”

  • Some argue many developers (CRUD, web apps) are largely fungible and readily replaceable; companies inevitably refill roles after attrition.
  • Others counter that domain knowledge and deep system understanding make certain people “difficult to replace” and that true innovation requires non‑interchangeable experts.
  • There’s disagreement on risk: one side prioritizes designing organizations where everyone can be swapped out; the other says the real risk is falling behind, and mitigations should be overlapping expertise rather than full fungibility.

Interviewing employers and cultural fit

  • Several comments stress “interviewing the employer”: scrutinizing future managers, benefits that seem too good, weird vibes, fearful or silent interviewers, or being the only person of a given demographic/attitude as red flags.
  • Others note that expecting demographic or lifestyle similarity is unrealistic in small companies and prefer to keep many personal attributes out of the workplace entirely.

Application volume, filtering, and AI

  • Hiring managers describe being overwhelmed by applicants (hundreds to thousands per role) and needing harsh or arbitrary filters to protect engineering time.
  • There’s concern AI will further flood pipelines with automated applications, escalating an “arms race” of filtering that hurts genuine candidates, especially those without strong networks.

Leetcode, live coding, and take‑homes

  • One camp defends live coding as the strongest signal of core CS skills, problem decomposition, and coachability; they distinguish it from extreme leetcode and set a low bar for correctness with probing follow‑ups.
  • Others criticize leetcode‑style rounds as high‑stress memory tests mismatched to actual work, rewarding grinders and disadvantaging people with anxiety or family obligations.
  • Take‑homes:
    • Proponents like their realism, low immediate pressure, and ability to show real work style.
    • Opponents see them as disrespectful unpaid labor, biased against busy candidates, easy to cheat on, and often poor predictors; some argue companies should pay honoraria.

Alternative interview structures and probation

  • Popular alternatives include: code review of real or sample code, debugging broken code (“uno reverse”), pair‑coding on small features, and in‑depth conversations about past projects and tradeoffs.
  • Some report success with very short interviews plus a probationary/contract‑to‑hire period, or multi‑day “work together and ship something” trials.

Exams, licensing, and professionalism

  • A long subthread frames current processes as serial “exams” unlike other professions with standardized licensing.
  • There’s debate over whether software should move toward formal engineering licensure; obstacles cited include broad, fast‑changing scope and the fact that most failures are merely inconvenient, not catastrophic.

Passion, tenure, and compensation

  • Some hiring philosophies look for “passion” and cultural/taste alignment; others explicitly treat the job as a professional, transactional exchange and warn against exploiting passion.
  • There’s skepticism about multi‑hour, multi‑stage processes when tenure is short and compensation is comparable to less onerous roles.

Partially Matching Zig Enums

Zig comptime enums and comptime unreachable

  • Discussion centers on using Zig’s inline and comptime unreachable to partially match enums and statically prove some branches impossible.
  • Multiple commenters stress this is not an optimizer trick: it relies on Zig’s defined lazy compile‑time evaluation, similar to C++ templates or if constexpr.
  • The compiler instantiates each inline enum case with a comptime-known value, so only the taken branch is compiled; unreachable branches can therefore be enforced at compile time.

Comparisons to Rust, C++, D, and others

  • Rust examples show roughly analogous behavior with const { ... }, but there’s debate over how close the correspondence really is, especially when mixing const and non‑const control flow.
  • D and C examples demonstrate that similar patterns are possible via templates/macros and static if, but often with more verbosity and less clarity.
  • Some praise Zig’s syntax and legibility for these tricks; others think such patterns are too “cute” and prefer straightforward exhaustive switches or simple runtime checks.

Memory safety, correctness, and tradeoffs

  • Large subthread debates Zig vs Rust vs C++ (and occasionally Go/Java/Ada):
    • Zig: bounds-checked slices by default; optional runtime safety modes; manual memory management with defer; no data‑race guarantees.
    • Rust: strong compile‑time guarantees against UAF and data races in safe code, but with higher complexity, “false positives,” and ergonomic pain.
    • C++: RAII and smart pointers can drastically reduce leaks/UB, but the language itself offers few hard guarantees; many pitfalls remain.
  • Some argue simplicity plus runtime checks (Zig) can yield very good real‑world safety; others counter that without strong static guarantees, humans and large codebases inevitably re‑introduce serious bugs.
  • Disagreement over whether features like RAII/destructors vs explicit defer are net safer or just different tradeoffs; no consensus, but strong stated preferences.

Zig’s role as a systems language

  • Supporters highlight: powerful yet simple metaprogramming (comptime, inline else), good C interop, clear generated code, and ergonomic zero‑cost abstractions as Zig’s real advantages.
  • Critics argue Zig sidesteps “hard problems” (ownership, lifetimes, concurrency models, formal invariants) and mostly reimplements C with nicer syntax and better tooling.
  • There’s ongoing tension between viewing Zig as a pragmatic, lower‑ceremony alternative and seeing it as underpowered for long‑term, large‑scale, safe systems work.

Tesla used car prices keep plummeting, dip below average used car

Battery life and degradation

  • One early claim is that used EVs are “dead batteries”; most replies call this exaggerated.
  • Multiple commenters cite real-world data: many EV packs retain ~80–90% capacity after 200–250k miles; some Teslas over 300k miles; studies show ~2% capacity loss per year on average.
  • First‑gen Nissan Leafs are noted as a poor benchmark (air-cooled packs, no thermal management, small range leading to frequent full cycles).
  • Batteries degrade via both age and cycling; there’s debate whether calendar age or cycles dominate.
  • Battery health is measurable on many EVs (including Teslas), unlike ICE engine health, which relies on indirect tests and history.
  • Still, the high cost of pack replacement ($10–20k quoted for some Teslas) creates “concentrated risk” that scares used buyers, even if most packs last longer than the car’s economic life.
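
The ~2% per year figure cited above compounds to numbers consistent with the 80–90% retention reports; a quick sketch using straight compounding and ignoring mileage, chemistry, and climate:

```python
# Straight compounding of the ~2% average annual capacity loss cited in the thread.
annual_loss = 0.02
for years in (5, 8, 10, 15):
    remaining = (1 - annual_loss) ** years
    print(f"after {years:2d} years: ~{remaining:.0%} of original capacity")
# after  5 years: ~90%    after  8 years: ~85%
# after 10 years: ~82%    after 15 years: ~74%
# (roughly in line with the 80-90% retention reported around 200-250k miles)
```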

Maintenance, reliability, and repair risk

  • Pro‑EV side: far fewer moving parts, no engine/gearbox/timing belt/spark plugs; regenerative braking drastically reduces brake wear; maintenance can be mostly tires and a 12V battery.
  • Skeptical side: Teslas and other EVs can suffer from motors, inverters, chargers, and electronics failures; some anecdotes of EVs spending far more time at dealers than old ICE cars.
  • Tires may wear faster due to EV weight and instant torque; unused friction brakes can rust or seize if not exercised.
  • ICE longevity: some keep ICE cars 20–25 years with routine maintenance; others note modern vehicles are engineered for ~12 years/120k miles but often last much longer.

Pricing, depreciation, and market dynamics

  • The headline drop figure in the article is questioned; commenters say the chart shows ~18% YoY for Teslas, with 4.6% being only the last 90 days.
  • Several people report that “cheap” used Teslas in the US tend to have high mileage or issues; clean, low‑mile cars are still close to new prices. In Germany, some say they still can’t find genuinely cheap Model 3s.
  • EVs in general may depreciate faster due to rapid tech improvements and an early‑adopter market that churns every few years, flooding the 3‑year‑old segment.
  • Some argue used EVs are underpriced because buyers overestimate the impact of moderate range loss.

Brand, reputation, and media coverage

  • One thread says Tesla faces unusually harsh press (e.g., OTA “recalls” trumpeted as major defects); others respond that regulators treat all brands equally and Tesla’s software isn’t especially good.
  • Several comments argue Tesla’s resale is hurt by its CEO’s politics, Nazi‑style salutes, broken FSD promises, and general controversy; others claim this matters mainly to “chronically online” people, not typical buyers.
  • Surveys and protests are cited by some as evidence of a real reputational hit, especially in Europe.

Other EVs and evolving tech

  • BYD and various EU/Asian EVs are mentioned as credible or better‑built alternatives, though long‑term reliability is still unknown.
  • Rapid advances (LFP packs, new fast‑charge chemistries) make older EVs feel obsolete more quickly, which may further depress used prices.

Let's properly analyze an AI article for once

Overall reaction to the blog post

  • Many commenters found the critique of the GitHub CEO’s article sharp, funny, and overdue, especially around inflated claims about AI and developers.
  • Others argued the author undercut their own credibility with weak statistics and at least one apparent misquote about Miyazaki, accusing the piece of its own form of “hallucination.”

CS fundamentals vs “Baby’s First LLM”

  • Strong defense of traditional CS fundamentals: data structures, algorithms, performance, security, reliability, systems understanding.
  • Several argue AI makes fundamentals more important, because weak engineers using LLMs can create huge volumes of bad code and become team liabilities.
  • Pushback: much day‑to‑day work is CRUD/JSON API plumbing; people question whether deep CS is necessary for the majority of jobs.

Whiteboard interviews and hiring

  • One view: whiteboard problems train and test general problem‑solving and communication, not literal job tasks, and correlate positively (if imperfectly) with job performance.
  • Counterview: they function as hazing, filter out anxious candidates, and lack transparent, scientific validation; proprietary internal data is distrusted.
  • Shared sentiment: whiteboard alone is insufficient; real hiring should also test actual programming and debugging.

Academia vs vocational skills, bootcamps, and “software as a trade”

  • Ongoing tension between CS as a science vs software development as a trade:
    • Some say universities should focus on theory; tooling (git, Docker, CI, SSH, IAM, CSS quirks, etc.) is vocational and belongs in on‑the‑job training or separate programs.
    • Others insist basic tooling and collaborative workflows are “day 1” essentials and academia systematically fails students by ignoring them.
  • Bootcamps: praised for producing teachable, lower‑cost “front‑end/trade” workers; criticized as creating “assembly line” coders with no path up.
  • Several propose clearer separation: CS (science) vs Software Engineering / trade‑school style programs.

Math and statistics disputes

  • Commenters challenge the blog’s sample‑size criticism: sample size depends mainly on desired confidence/margin of error and effect size, not population size; 22 can still be informative, just with a wide margin of error (quantified in the sketch after this list).
  • Some see the whole GitHub piece as pure marketing, not worth serious statistical analysis.
  • Side debate on calculus vs statistics in CS curricula: some argue stats is more practically useful; others reply stats itself rests on calculus and both are widely applicable.
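
For a sense of scale on the n = 22 point, the textbook worst-case margin of error for a proportion (95% confidence, p = 0.5); this is the generic formula, not a claim about the GitHub study's actual design:

```python
import math

# Worst-case 95% margin of error for a proportion: z * sqrt(p * (1 - p) / n), with p = 0.5.
z, p = 1.96, 0.5
for n in (22, 100, 1000):
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"n = {n:4d}: +/- {moe:.0%}")
# n =   22: +/- 21%   (informative, but only for fairly large effects)
# n =  100: +/- 10%
# n = 1000: +/- 3%
```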

AI’s “plausible nonsense,” metrics, and technical debt

  • Multiple comments resonate with the idea that LLM marketing prioritizes “somewhat plausible” output and quantity (e.g., “% of lines generated by AI”) over correctness or value.
  • People note that AI‑generated code can inflate line counts while providing zero or negative productivity if developers must debug broken generations.
  • Concern that “vibe coding” plus weak fundamentals will massively increase technical debt, creating future demand for consultants and debuggers.

AI art, misquotes, and honesty in representation

  • Several criticize AI food and marketing images as dishonest, contrasting them with traditional “enhanced but real” food photography; others note food imagery has always been heavily faked.
  • The Miyazaki quote is flagged as misused: commenters provide context that he criticized a specific grotesque AI demo as an “insult to life,” not AI art in general.

Broader skepticism about AI hype and propaganda

  • Strong sentiment that CEO/AI‑vendor content is sales copy aimed at executives and investors, not developers.
  • Some see continuity with prior hype waves (crypto, “FOMO capitalism”), where truth and rigor matter less than narratives that justify valuations and control.

Blender is Native on Windows 11 on Arm

Blender ARM Port and Tooling

  • People are pleased Blender is now native on Windows ARM, noting prior ARM support on macOS and Linux made this plausible.
  • Some note the hardest part in ARM ports is often dependencies that implicitly assume “Windows == x86,” not the main app code.
  • Discussion dives into build systems: Autotools often struggles with universal binaries; CMake and clang’s -arch support make multi-arch builds easier.
  • Clang is described as inherently cross-compiling, but LLVM IR itself is not fully architecture-independent, limiting “compile once, target everywhere” fantasies.

Blender on Other ARM Platforms

  • Commenters mention Blender has run on Linux ARM (even on phones) until GPU requirements rose.
  • iOS ARM builds existed for years but lacked a usable UI; recent talks and blog posts outline “beyond mouse/keyboard” directions and a more touch-friendly future UI.

Why Windows-on-ARM Lags Apple’s ARM Transition

  • Repeated points: Apple controls hardware SKUs, ships its own ARM chips, and fully committed by stopping Intel Mac sales.
  • Apple had ARM-ready dev tools, system apps, and a strong translation layer (Rosetta) plus universal binaries, so users and developers could transition gradually.
  • Microsoft, by contrast, supports both x86 and ARM indefinitely, reducing pressure to port; its earlier Windows RT effort lacked compatibility, confused APIs, and damaged trust.
  • The Windows ecosystem is far larger and more heterogeneous in hardware and legacy software, making a clean break much harder.

Commitment, Incentives, and Backward Compatibility

  • Many argue third‑party developers have little incentive to target Windows ARM while it’s a small share of sales and x86 remains default.
  • Some think ARM/RISC‑V will eventually erode x86 dominance; others expect side‑by‑side architectures for decades due to Windows’ heavy backward‑compatibility expectations.
  • Arm64EC and the emulation layer are criticized as complex “kludges” that risk undermining Windows’ core selling point of running old software seamlessly.

Experiences with Current Windows ARM Hardware

  • Several users report excellent experiences with recent Snapdragon-based Surface devices: long battery life, “real computer” feel in tablet form, and surprisingly good x86 emulation (including many games).
  • Others remain skeptical, having seen multiple “waves” of Windows ARM initiatives fade.

ARM vs RISC‑V and Future Architectures

  • A tangent debates RISC‑V and other ISAs (e.g., Loongson) as potential future desktop contenders.
  • Enthusiasts foresee rapid progress, while skeptics stress that matching top-tier x86/ARM performance will likely take a decade or more, and that real-world adoption lags hype (likened to the “Year of the Linux Desktop”).

Miscellaneous

  • Some ask about non‑Qualcomm Windows ARM laptops (none retail yet; other ARM vendors are rumored to be working on chips).
  • One commenter suggests using Blender’s official blog as the primary source instead of an ad-heavy news site.

Dial-up Internet to be discontinued

Modern Web on Dial-Up (and Other Slow Links)

  • Consensus: most modern sites are effectively unusable at 56k; page loads would take minutes to tens of minutes.
  • Simple, text-first sites (HN, personal blogs, “smolweb”/HTML+CSS only) are considered viable; once you add large images or heavy JS, it breaks down.
  • Several people point out that you can simulate dial-up via browser dev tools (Chrome, Firefox, Safari) or by experiencing weak mobile coverage, which can approximate or exceed dial-up pain.
  • “Lite” or text-only versions of news sites (e.g., lite.cnn.com, text.npr.org) are praised and sometimes preferred even on fast connections.
  • Adblocking (e.g., uBlock Origin) is suggested as a major performance win on slow links.
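
The minutes-to-tens-of-minutes claim above is easy to reproduce; a quick sketch with illustrative page weights (transfer time only, ignoring latency and the fact that real 56k connections often trained down to 33–50 kbps):

```python
# Transfer time at dial-up speed for a few illustrative page weights.
LINK_KBPS = 56

pages = {
    "text-first page, e.g. HN or a smolweb blog (~50 KB)": 50_000,
    "typical modern news article (~3 MB)":                 3_000_000,
    "JS-heavy web app bundle (~10 MB)":                     10_000_000,
}

for name, size_bytes in pages.items():
    seconds = size_bytes * 8 / (LINK_KBPS * 1000)
    label = f"~{seconds / 60:.0f} min" if seconds > 90 else f"~{seconds:.0f} s"
    print(f"{name}: {label}")
# ~7 s, ~7 min, and ~24 min respectively
```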

Who Still Uses Dial-Up and Why

  • Some rural or remote users still rely on dial-up because DSL, cable, or good cellular coverage aren’t available or are prohibitively expensive.
  • Where available, satellite (HughesNet, later Starlink) or new fiber builds have replaced dial-up, but coverage is patchy.
  • Dial-up is mostly seen as usable only for basic email (without images), text-heavy sites, or text-mode browsers like lynx.

Technical Limits and Infrastructure

  • Dial-up speed is still effectively capped around 56 kbps by the analog phone network: a ~3.1 kHz voice channel carried over a 64 kbps digital trunk, with signaling overhead eating the difference (a rough capacity check follows this list).
  • Many users historically never reached 56k; noisy lines often forced 33.6k or less.
  • Some areas never got DSL due to distance from central offices or old equipment; new deployments tend to leapfrog straight to fiber.
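
A rough sanity check of those limits, with illustrative numbers (~3.1 kHz of usable voiceband and an SNR in the mid-30s dB); the 56k figure itself comes from V.90's mostly digital downstream path, not from beating Shannon on an analog link:

```python
import math

# Shannon capacity of the analog voice channel: C = B * log2(1 + SNR).
bandwidth_hz = 3100                  # ~300-3400 Hz usable voiceband
snr_db = 35                          # a reasonably clean analog line (illustrative)
snr_linear = 10 ** (snr_db / 10)

capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(f"analog ceiling: ~{capacity_bps / 1000:.0f} kbps")   # ~36 kbps, hence 33.6k V.34 modems

# V.90's 56k depends on the ISP side being digital (a 64 kbps DS0 channel);
# quantization, robbed-bit signaling, and power limits cut the delivered rate to ~53-56 kbps,
# and noisy local loops push it lower still.
```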

SPA vs. Server-Rendered vs. Native for Low Bandwidth

  • One side argues SPAs can be good for low bandwidth: large bundle once, then minimal incremental traffic.
  • Others counter that typical SPAs ship massive JS bundles and fail to load at all on bad connections, while simple server-rendered HTML + forms remains robust and lightweight.
  • Some suggest offline-first or native apps as the ideal for very poor or intermittent connectivity.

AOL, Business Model, and Sunset

  • Surprise that AOL still existed as an ISP in 2025.
  • Dial-up was reportedly a major revenue source as late as 2010; by 2014 it had shrunk to a small share as AOL pivoted to ads/media.
  • Concern that customers may keep getting billed for “AOL plans” even after dial-up ends, with plans padded by marginal “benefits.”
  • Some users discover and stop paying for legacy AOL subscriptions while keeping free email.

Nostalgia and Cultural Memory

  • Strong nostalgia for the dial-up modem sound; links to visual/audio explanations, remixes, and jokes.
  • Remembered as a symbol of the idealistic early internet versus today’s more commercial, AI-driven landscape.
  • People recall AOL CDs, aggressive marketing, and call-center upsells, plus rival services (Prodigy, MSN).
  • Some mark this shutdown as a symbolic end of the dot-com era.

A spellchecker used to be a major feat of software engineering (2008)

Historical constraints and ingenuity

  • Several comments recall early PC and 8‑bit days where having any spellchecker felt miraculous. Separate programs, floppy swaps, and no suggestions were common.
  • The original article’s core challenge—200K words on ~256KB machines, sometimes floppy‑only—is emphasized as the real feat, not just lookup logic.
  • References are made to early Unix spell, classic Programming Pearls columns, and later writeups explaining how these systems fit into 64KB and similar limits.

Algorithms, data structures, and compression

  • People wish the article went deeper into techniques; they speculate and link to:
    • Tries, compressed DAGs, and Bloom filters as dictionary representations.
    • Levenshtein distance (often limited, e.g. edit distance 1) for suggestions.
    • Disk‑based lookup with custom indexing and caching hot words.
  • The need to store dictionaries at well under 1 byte/word pushes toward probabilistic or highly compressed schemes rather than simple tries.
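
To ground the checking-vs-correcting distinction drawn below, a compact sketch in the spirit of the classic Norvig corrector: set membership stands in for whatever compressed structure (trie, DAG, Bloom filter) a real system would use, and edit-distance-1 candidates drive suggestions. The tiny word list is illustrative, and nothing here touches the memory constraints that made the 1980s versions a feat:

```python
import string

# Illustrative word list; real checkers load ~100k-200k words. The 1980s challenge
# was fitting and searching such a list in a few hundred KB, not this logic.
DICTIONARY = {"from", "form", "public", "the", "quick", "brown", "fox", "word", "world"}

def is_correct(word: str) -> bool:
    """Spell *checking*: just set membership."""
    return word.lower() in DICTIONARY

def edits1(word: str) -> set[str]:
    """All strings at edit distance 1: deletes, transposes, replaces, inserts."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes    = {L + R[1:]               for L, R in splits if R}
    transposes = {L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1}
    replaces   = {L + c + R[1:]           for L, R in splits if R for c in letters}
    inserts    = {L + c + R               for L, R in splits for c in letters}
    return deletes | transposes | replaces | inserts

def suggest(word: str) -> set[str]:
    """Spell *correction* (the hard part): here only edit-distance-1 dictionary hits,
    with no frequency model and no context, so 'form'/'from'-style errors slip through."""
    return {w for w in edits1(word.lower()) if w in DICTIONARY}

print(is_correct("wrod"), suggest("wrod"))   # False {'word'}
print(is_correct("form"))                    # True: valid token, possibly wrong in context
```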

Checker vs. corrector, and why it’s still hard

  • Multiple commenters distinguish:
    • Spell checking: is this token in the word list? Conceptually trivial with basic data structures.
    • Spell correction: generate high‑quality suggestions, especially in context. This is much harder.
  • Examples show how valid but wrong words (“form” vs “from”, “pubic” vs “public”) defeat naive systems.
  • Some argue writing a basic checker is undergraduate‑level; others insist a truly good spellchecker is still non‑trivial without libraries and corpora.

Modern spellcheckers: widespread frustration

  • Many complain that today’s cloud‑scale systems (Gmail/Chrome, Android, iOS) often feel worse than older desktop tools.
  • Suspected causes include:
    • Backend shifts toward more generic ML/LLM systems hurting quality.
    • Poor or missing use of context.
    • Overweighting first letters or profanity filters.
  • iPhone and Android keyboards are singled out for erratic suggestions, over‑eager autocorrect, and annoying UI/UX decisions (e.g., period vs space behavior).

Social and educational angles

  • Historical parallels are drawn with anxiety over calculators and grammar checkers “dumbing down” users.
  • Some say spellcheck improved their spelling by constant feedback; others argue people still can’t spell, so benefits are limited.
  • There’s interest in tools that log personal errors, integrate full dictionaries/thesauri, and explicitly help users learn.

LLMs and new possibilities

  • Commenters note that weak LLMs can now do spell/grammar/style checking and more, via prompts rather than bespoke algorithms.
  • Ideas include using masked‑token models or LLM “surprise” scores to heat‑map awkward or likely‑wrong tokens, addressing the “valid but wrong word” problem.

Beyond English: CJK and input methods

  • Spellchecker‑like technology is noted as essential for Chinese, Japanese, and Korean input, mapping phonetic or partial codes to characters amid high ambiguity.
  • Historical Chinese input hardware and the challenge of large glyph sets are mentioned as related feats of engineering.

RISC-V single-board computer for less than 40 euros

Performance and Use Cases

  • Disagreement on speed: some say the board feels “unbearably slow,” others argue raw CPU is closer to a Pi 4 than Pi 3 for non-SIMD workloads.
  • SIMD (Neon) absence and immature software stack are noted as main performance drawbacks vs ARM.
  • Extra RAM (2–8 GB vs Pi 3’s 512 MB–1 GB) and NVMe via M.2 are highlighted as major practical advantages for builds, dev work, and general usability.
  • Consensus that neither current RISC‑V SBCs nor Raspberry Pis are great full‑time desktops; cheap used x86/Macs are often recommended instead.

Storage Choices and SD Endurance

  • NVMe is praised but some argue SD cards are sufficient for light CLI/dev use, especially with ample RAM caching.
  • Concerns about SD wear are countered with back‑of‑envelope calculations suggesting very high build counts before failure for large cards.
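
The back-of-envelope referenced above looks roughly like this; every input is an assumption (card size, rated program/erase cycles, write amplification, bytes written per build), so the output is an order-of-magnitude estimate only:

```python
# Order-of-magnitude SD card endurance estimate; all inputs are assumptions.
card_bytes      = 256e9    # 256 GB card
pe_cycles       = 500      # conservative rated program/erase cycles for TLC flash
write_amp       = 3        # pessimistic write amplification on a cheap controller
bytes_per_build = 2e9      # ~2 GB of writes per medium-sized build (a guess)

total_host_writes = card_bytes * pe_cycles / write_amp      # ~43 TB of host writes
builds_before_wearout = total_host_writes / bytes_per_build

print(f"host writes before wear-out: ~{total_host_writes / 1e12:.0f} TB")
print(f"builds before wear-out:      ~{builds_before_wearout:,.0f}")   # ~21,000 builds
```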

Software, ISA, and Distro Support

  • JH7110’s lack of RVA23 compliance raises future‑compatibility worries, especially with Ubuntu’s decision to target RVA23.
  • Debate over Ubuntu’s stance: some call it premature and exclusionary; others see it as necessary to push vendors toward robust, modern RISC‑V implementations.
  • Fedora and Debian are cited as better fits for current non‑RVA23 hardware.
  • One commenter states that “basically everything works” Linux‑wise, though others fear untested riscv64 paths and subtle bugs.

Ecosystem, Alternatives, and Positioning

  • Orange Pi RV2 is recommended as faster with better software support than this board.
  • Other cheap RISC‑V options (Milk‑V Duo, Banana Pi F3, Pico‑class MCUs) are mentioned, stressing differences between microcontrollers and full SBCs.
  • Discussion on why competitive high‑end RISC‑V cores are rare: integration effort, risk aversion, and the fact that “people buy solutions, not ISAs.”

Hardware Design and Security

  • M.2 (single PCIe lane) is appreciated; the lack of full PCIe slots is attributed to signal‑integrity concerns, PCB cost, and validation complexity.
  • Some want boards without onboard flash/wireless for offline security uses.
  • Concerns raised about hidden binary blobs, especially for GPU (IMG BXE) firmware, limiting hobby OS and security‑focused work.

GDPR, Cookie UI, and Ads

  • The thread extensively criticizes the site’s cookie banner: full‑screen, with a prominent “accept all” and no easy “reject all.”
  • Some argue this violates GDPR guidance requiring symmetric consent options; others say users can just close the tab.
  • Broader frustration with “malicious compliance” cookie UIs and surveillance‑driven ad models; several suggest privacy‑friendly defaults or contextual (non‑tracking) ads instead.

Undefined Behavior in C and C++ (2024)

Rust vs C/C++ in Real-World Use

  • Tooling: Several comments praise Rust’s tooling—especially Cargo and the compiler’s diagnostics—as far superior to typical C/C++ workflows, partly offsetting slower compile times by reducing “edit–compile–debug” cycles.
  • Dependency management: Cargo is seen as a strong advantage over ad‑hoc C/C++ build systems, though some argue package management should live at the project, not language, level.
  • Async and concurrency: Experiences differ. Some find async Rust and multithreading “empowering” and safer than C++, others report serious pain when mixing async with generics/traits or complex ownership, leading to “channel hell.”
  • Game development: Multiple anecdotes and references claim Rust is a poor fit for non‑trivial game engines: refactoring is costly, many common patterns fight the borrow checker, and developers often end up “working around” Rust’s safety and losing its benefits.
  • Embedded / tiny binaries: Some argue C/C++ still dominate where code size, heap‑free operation, and CPUs without mainstream toolchain support matter. Others counter that Rust can produce very small binaries and treat allocation as a library concern, but acknowledge configuration complexity.
  • Ecosystems: Graphics, drivers, major APIs (Khronos, POSIX, CUDA, DirectX, console SDKs) and mainstream IDE/debugger tooling still assume C/C++. This inertia is seen as a major practical reason to stick with C/C++.

Portability and Language Fit

  • Portability: One side claims C is only “portable” in the sense that compilers exist everywhere; real software still requires substantial per‑platform porting and #ifdefs (see the sketch after this list). Others argue C remains the most portable option, especially for obscure architectures, and that fully standard‑conforming code can be widely portable.
  • Style and complexity: Rust’s pattern matching, enums and traits are seen by some as over‑academic and verbose compared to C’s “terse, do‑what-I-say” style; others argue they encode invariants more safely than scattered booleans and null pointers.
  • “Rust forces you to code in the Rust way”: Some like C/C++’s freedom to choose paradigms, others point out that every language constrains you (e.g., missing tagged unions, pattern matching, or methods in C).
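
A small sketch of the per-platform #ifdef porting described above; the macros are the usual compiler-predefined ones, and the function itself is invented for illustration.

```c
/* Sketch of per-platform #ifdef porting. The platform test macros
 * (_WIN32, __APPLE__, __linux__) are standard predefined macros;
 * the function is a made-up example. */
#include <stdio.h>

static const char *platform_name(void) {
#if defined(_WIN32)
    return "Windows";
#elif defined(__APPLE__)
    return "macOS/iOS";
#elif defined(__linux__)
    return "Linux";
#else
    return "unknown (another port needed)";
#endif
}

int main(void) {
    printf("built for: %s\n", platform_name());
    return 0;
}
```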

Undefined Behavior, Safety, and Standards Work

  • C/C++ UB scope: A standards participant notes that many UB instances in C23 are being removed or clarified; many “UB entries” were effectively documentation bugs (“ghosts”) rather than real traps.
  • Pointer provenance & integer–pointer conversions: Long debate over proposals to formalize provenance, how much pointer forging should be allowed, and whether this blocks writing allocators, arena systems, or low‑level tricks (vector loads, Wasm linear memory, DMA). There’s disagreement over how much this is rooted in hardware reality vs optimization-driven modeling.
  • Optimization vs correctness: One camp insists most UB (signed overflow, strict aliasing, passing NULL to strlen, etc.) exists to enable significant optimizations (loop unrolling, vectorization, avoiding extra branches); a signed‑overflow sketch follows this list. Another camp argues the performance benefits are overstated, heavily benchmark‑driven (e.g., SPEC), and that UB more often leads to dangerous miscompilations than meaningful speedups.
  • Uninitialized variables: C and upcoming C++ revisions refine semantics (e.g., “erroneous behavior” instead of UB, well‑defined garbage values in some cases) to reduce catastrophic misoptimizations while retaining performance.
  • UB as extension surface: It’s emphasized that “undefined by ISO C” often just means “left to the implementation”: vendor extensions, POSIX APIs, and nonstandard headers live here. But this also means a compiler may legally treat UB as dead code and delete it.
  • Safety tools and variants: Mentions of Fil‑C (memory‑safe, panicking C/C++ implementation) and tools enforcing safe C++ subsets suggest an emerging space of “safer C/C++” that trades performance or flexibility for safety, without switching languages.
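
The signed-overflow case referenced above is the classic illustration of the first camp’s argument; the sketch below is a textbook-style reconstruction, not code from the thread.

```c
/* Signed overflow is undefined behavior, so a compiler may assume
 * `x + 1 > x` always holds for int and fold the check away entirely.
 * Unsigned wrap-around is defined, so the unsigned comparison must
 * actually be evaluated. */
#include <limits.h>
#include <stdio.h>

int never_overflows(int x) {
    /* Many compilers reduce this to an unconditional `return 1;` at -O2,
     * because the only way it could be false involves undefined behavior. */
    return x + 1 > x;
}

unsigned wraps(unsigned x) {
    /* Defined wrap-around: for x == UINT_MAX this genuinely returns 0. */
    return x + 1 > x;
}

int main(void) {
    printf("%d\n", never_overflows(42));  /* 1 */
    printf("%u\n", wraps(UINT_MAX));      /* 0: defined wrap-around */
    return 0;
}
```

The second camp’s objection is visible in the same snippet: if a check like never_overflows is written as an overflow guard, the “optimization” silently deletes the guard rather than speeding anything up.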

Language Culture and Alternatives

  • Some complain about “Rust everywhere” and aggressive advocacy, noting other options (Ada, Zig, Hare, Odin, V) and criticizing Rust community attitudes toward unsafe.
  • Others stress that despite all flaws, C and C++ still underpin virtually all major OSes, runtimes, and infrastructure; many theoretically superior languages never achieved similar adoption.

Our European search index goes live

Scope and rollout of the new index

  • Launch is currently France-only and covers only a fraction of French queries; target is 50% of French search traffic by year’s end, with later expansion to other countries.
  • Some commenters are impressed by how fast Ecosia/Qwant delivered something after earlier announcements; others question how much traffic is actually hitting the new index vs Bing results.
  • Qwant already maintains its own index and augments it with Bing where needed; this collaboration is seen as a logical extension.

Can you even build a new index today?

  • One view: the “crawlable web” is dying because sites barely link, SEO dominates, and many URLs are essentially fed directly to Google; therefore, independent discovery is impossible.
  • Counterpoints:
    • Link-based PageRank is not the only ranking method.
    • Domain enumeration via DNS zone files (ICANN CZDS), Certificate Transparency logs, Common Crawl seeds, and commercial DNS data can bootstrap a large index.
    • Ranking could emphasize page content rather than link referrals.

Privacy, ads, and utility vs business

  • Question raised whether ad-blocking users (e.g., uBlock) are “worth it” to search engines; answer: even without ad clicks, query and click patterns remain valuable for ranking and recrawl decisions.
  • Some argue search should be treated like a regulated utility; others say utilities tend to ossify, and search still changes too fast for that model.
  • Skepticism about “privacy-first European infrastructure”: concern it may just reallocate power and data from Google to EU media/telecom players, with Qwant’s investors cited as a warning sign.

UX, quality, and adoption

  • Several testers report surprisingly good results and are considering switching from Google; others find Ecosia’s results ad-heavy and weaker than DuckDuckGo.
  • The busy, non-minimalist start page is a recurring complaint.
  • Some distrust Ecosia’s older “plant trees with your searches” messaging, feeling that ad-click dependence was under-emphasized.

European vs US control and culture

  • Strong support for non-US search options as a way to reduce dependence on US tech and norms; equally strong fear that EU regulation (e.g., chat control, “protect the children”) could lead to censorship.
  • Long tangent on imported US culture wars (“woke,” identity politics, BLM) shaping European discourse, with disagreement over whether this is corrosive or a continuation of European egalitarian ideas.
  • Debate over whether EU structures are sufficiently democratic and whether “digital sovereignty” actually improves user freedom, versus simply swapping one set of political constraints for another.

Regulation, cookies, and GDPR

  • Some hope European infrastructure might eventually lead to a saner web (e.g., fewer cookie banners), with the observation that banners are largely an anti-pattern created by tracking-heavy business models.

The Framework Desktop is a beast

Soldered RAM, Physics, and Repairability

  • Big debate around soldered LPDDR5X: critics say it contradicts the brand’s DIY/repair ethos and makes the desktop less repairable than typical PCs (and even their own laptops).
  • Defenders argue it’s a hard technical constraint: at these frequencies, sockets hurt signal integrity (impedance changes, reflections, crosstalk), and this AMD Strix Halo platform only supports high-bandwidth LPDDR5X in soldered form.
  • CAMM/LPCAMM is mentioned as a possible future middle ground, but current attempts reportedly failed for this CPU.
  • Some see soldered RAM as making the system “throwaway” if RAM fails or needs upgrading; others note many users never upgrade RAM and have almost never seen RAM failures.

Framework’s Mission vs This Product

  • One camp feels this desktop undermines the company’s core promise of repairability and upgradability, suggesting a different sub-brand for less-modular products.
  • Another camp sees it as a pragmatic one-off: everything except RAM remains modular/standard (ITX-like board, flexATX PSU, storage, case), and future mainboard swaps can still extend life.

Strix Halo, Unified Memory, and AI/LLM Workloads

  • Core appeal is the Ryzen AI Max 395+ APU: 16 cores plus a large iGPU sharing up to 128GB of unified memory at ~256 GB/s, similar in concept to Apple’s unified memory.
  • This makes big local models possible (especially ~100B MoE) with GPU access to essentially “128GB VRAM,” but token speeds are much lower than on big Nvidia cards; some benchmarks show ~5 tok/s on 70B models and slow prompt processing for long contexts (see the rough estimate after this list).
  • NPU/“AI” block exists but is seen as weak, under-documented, or hard to use today; most real work lands on the iGPU via Vulkan/HIP.
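
A rough bandwidth-bound estimate behind those token-speed numbers; the model size, quantization, and bandwidth below are assumptions chosen for illustration. Each generated token has to stream the active weights from memory, so single-stream decode speed is roughly bandwidth divided by weight bytes.

```c
/* Rough bandwidth-bound decode estimate. Assumes a dense 70B-parameter model
 * quantized to ~8 bits (~70 GB of weights streamed per token) and ~256 GB/s
 * of unified-memory bandwidth; real results also depend on compute, caches,
 * KV-cache traffic, and batch size. */
#include <stdio.h>

int main(void) {
    double params_billion  = 70.0;   /* assumed dense model size */
    double bytes_per_param = 1.0;    /* assumed ~8-bit quantization */
    double bandwidth_gbps  = 256.0;  /* assumed LPDDR5X bandwidth in GB/s */

    double weights_gb        = params_billion * bytes_per_param;
    double tokens_per_second = bandwidth_gbps / weights_gb;

    printf("~%.1f tokens/s upper bound for a dense %.0fB model\n",
           tokens_per_second, params_billion);
    return 0;
}
```

MoE models activate only a subset of their parameters per token, which is part of why the ~100B MoE models mentioned above can decode noticeably faster than this dense-model ceiling would suggest.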

Comparisons to Macs and Traditional Desktops

  • Many compare directly to M4 Pro/Max and Mac Studio/Mini:
    • Apple has higher memory bandwidth on high-end parts (up to ~2×) and, in some tests, much better AI performance, but at much higher prices and with macOS lock-in and poor repairability.
    • Price comparisons are contested: depending on config, Framework is cheaper than a Studio but close to an M4 Pro Mini; Apple’s RAM/SSD pricing is widely called predatory.
  • Against classic PC builds: for pure gaming or maxed LLM inference, people still recommend 9800X3D/9950X + large Nvidia GPU or Threadripper/EPYC, at the cost of size, power, and noise.

Software Ecosystem and Alternatives

  • CUDA still dominates many AI workflows; AMD’s ROCm works on these chips but support and tooling are viewed as immature and fragmented. llama.cpp + Vulkan/HIP works but optimal backends differ per model.
  • SCALE and ZLUDA are cited as emerging bridges for CUDA code on AMD.
  • Several commenters opt instead for:
    • Used EPYC servers for huge but slower RAM,
    • Minisforum/Beelink/GMKtec Strix Halo boxes,
    • HP Z2 Mini (with limited “link ECC” only),
    • Or simply sticking with Mac or conventional SFF PCs.