Hacker News, Distilled

AI-powered summaries of selected HN discussions.


CAPTCHAs are over (in ticketing)

Bot Detection, CAPTCHAs, and PoW

  • Many argue local behavioral profiling (mouse, scroll, IP patterns) is attractive but runs into false positives and accessibility issues; sophisticated attackers can mimic or reverse‑engineer these signals.
  • reCAPTCHA v3 / Cloudflare Turnstile etc. are seen as privacy‑invasive and increasingly ineffective; bots farm out CAPTCHAs to humans or spoof telemetry directly.
  • Proof‑of‑work CAPTCHAs are criticized as bad rate‑limiting: SHA‑256 is already massively optimized by GPUs/ASICs, so attackers get orders‑of‑magnitude cost advantage over normal users.
  • Some propose proof‑of‑humanity/payment schemes (e.g. one‑time donations, email‑based “humanity providers”), but commenters note these mostly shift cost, don’t stop high‑margin scalpers, and risk excluding poorer users.
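The proof‑of‑work scheme being criticized can be made concrete with a minimal hashcash‑style sketch (the challenge string and difficulty are illustrative assumptions, not any vendor's implementation). The asymmetry the commenters point out is visible here: a browser grinds through this loop one hash at a time, while a scalper with GPUs or ASICs runs the same SHA‑256 search orders of magnitude faster.

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty_bits: int) -> int:
    """Find a nonce whose SHA-256 over challenge:nonce falls below a
    target with `difficulty_bits` leading zero bits (hashcash-style)."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    """Verification is a single hash -- cheap for the server."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Expected work: ~2**difficulty_bits hashes for the solver, 1 for the verifier.
nonce = solve_pow("ticket-queue-9f3a", 12)
assert verify_pow("ticket-queue-9f3a", nonce, 12)
```

Raising `difficulty_bits` enough to slow down ASIC-equipped attackers would make the puzzle take minutes on an ordinary phone, which is the rate-limiting failure the thread describes.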

Identity, KYC, and the BAP Trilemma

  • Repeated theme: you can’t have strong Bot‑resistance, strong Accessibility, and strong Privacy all at once (“BAP theorem”).
  • Hard‑KYC proposals: legal ID–backed accounts, lotteries, non‑transferable or name‑locked tickets, ID checks at the gate, government eID/OIDC, or zero‑knowledge proofs on top of eID.
  • Objections: legal constraints on ID use, privacy risks, tourist handling, operational burden of ID checks, and slower entry; some report that even strict ID+face recognition (e.g. in China) doesn’t fully stop scalpers.

Economics and Role of Scalpers

  • Many argue this is fundamentally an economics problem, not a bot problem: underpriced tickets create arbitrage; scalpers just capture the difference.
  • Counterpoint: organizers intentionally underprice to avoid looking greedy, maintain fan goodwill, and generate “sold out in minutes” hype.
  • Several suggest the industry—and especially vertically integrated giants—benefit from scalpers: they get instant, low‑risk sell‑through, secondary‑market fee revenue, and sometimes unused tickets that avoid venue costs.
  • Others contend scalpers still hurt artists (lost concessions/merch), venues, and fans, and are only tolerated because of monopoly power and misaligned incentives.

Proposed Distribution Schemes

  • Lotteries: pre‑registration windows, random allocation, sometimes with small card charges; widely used in Japan and some US sports/entertainment, but require strong identity to prevent multi‑entry.
  • Pricing ideas: Dutch auctions, second‑price style bids, regressive price over time with post‑hoc rebates, bonds refundable after attendance, or exponential pricing per additional ticket; criticized as elitist, complex, or group‑unfriendly.
  • Strict ticket tying: name on every ticket, mandatory ID at entry, official resale only at face value (or face+cap); some countries and artists already do this, reportedly diminishing secondary markets.
  • Offline/analog: sell only at local shops/box offices with human judgment. Opponents say this penalizes tourists and remote fans and scales poorly.
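One of the pricing ideas listed above, exponential pricing per additional ticket, can be sketched in a few lines (the base price and growth factor are hypothetical parameters chosen for illustration):

```python
def order_price(base: float, qty: int, growth: float = 1.5) -> float:
    """Ticket k in an order costs base * growth**(k-1), so small family
    orders stay near face value while bulk orders grow geometrically."""
    return sum(base * growth ** k for k in range(qty))

print(order_price(100, 2))  # 250.0  -- a pair costs a modest premium
print(order_price(100, 8))  # 4925.78125 -- bulk buying becomes uneconomic
```

This is also where the "group-unfriendly" objection bites: a family of six pays far more per seat than three couples buying separately, unless orders can be linked to verified identities.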

Regulatory and Normative Debates

  • One camp advocates regulation: break up ticket monopolies, ban above‑face‑value resale, mandate low‑ or no‑fee transfers, and enforce harsh penalties for systematic scalping.
  • Another camp objects that concerts are luxuries and governments shouldn’t fix self‑inflicted mispricing; others reply that citizens widely hate scalpers and antitrust is precisely for such power concentrations.
  • Broader tension appears between those prioritizing privacy and accessibility, and those willing to sacrifice both to curb bots and scalpers.

Scientific conferences are leaving the US amid border fears

Historical context and what’s “new”

  • Several commenters note visa/border barriers to scientific meetings are not new (e.g., HIV/AIDS conferences during earlier bans), but argue the scale and visibility have changed.
  • Others emphasize a qualitative shift: normalization of “ethno‑fascist” rhetoric, indefinite detention, and deportations without due process are framed as a break from past practice, not just a continuation.

Logistics of moving conferences

  • Organizers explain that large conferences are planned 1–3 years in advance; moving countries on short notice is often impossible without financial ruin.
  • Because of this lag, some argue that six conferences having already left the US is “a huge deal,” and that the real effects will only become visible over the next few years of site‑selection cycles.
  • Canadian cities (Vancouver, Toronto, Montreal, etc.) are frequently cited as practical alternatives for North American–adjacent events.

Border climate, risk, and personal decisions

  • Many scientists and organizers report colleagues skipping US events or moving conferences abroad due to fear of arbitrary detention, device searches, or being turned back—especially for non‑white, non‑citizen, or trans attendees.
  • Stories of students, researchers, and visitors detained, sent to third‑country facilities, or refused entry after reviewing phones/social media drive a perception of “qualitative” risk, even if absolute probabilities are low.
  • Some non‑US commenters say they now avoid the US entirely for tourism and conferences, preferring Canada or Europe.

Skepticism and accusations of fear‑mongering

  • Skeptical voices argue that:
    • Documented cases are rare relative to millions of entries.
    • Border agencies report that device searches affect <0.01% of travelers, and that search rates have been rising steadily since before the current administration.
    • Media and political opponents amplify isolated incidents into generalized fear.
  • Others counter that for high‑value invitees (senior scientists, students with limited funds), even a small chance of catastrophic outcome (detention, deportation, visa black marks) is a rational deterrent.

Data, media, and Nature’s role

  • Some criticize the Nature article as thinly sourced “political news” lacking baseline statistics (total conferences, percentage moved, longitudinal trends).
  • Others respond that Nature has a long‑standing news function, that it did list specific conferences (behind the paywall), and that comprehensive data do not yet exist this early in the cycle.

Broader political and cultural backdrop

  • Numerous comments tie conference flight to wider issues: anti‑immigrant and anti‑science policies, frozen grants, demonization of allies, and a sense of American instability from one administration to the next.
  • Some US-based scientists note that this is already part of day‑to‑day planning: they are shifting conferences away from the US and say trust—even if politics change later—will take years to rebuild.

Why old games never die, but new ones do

Survivorship Bias vs. What “Dying” Means

  • Many argue the premise is mostly survivorship bias: we only see the standout old games; thousands of contemporaries are forgotten.
  • Others counter that even bad or obscure old games are still playable if you have media/emulators, whereas many newer games become literally unplayable.
  • Distinction emerges between “culturally dead but technically playable” (forgotten ROMs) and “legally/technically dead” (server‑locked titles).

DRM, Online Requirements, and Planned Obsolescence

  • Core concern: newer titles often require central servers, DRM, or asset streaming; when servers shut down, games (even single‑player) stop working.
  • Older games could be run from a disc or ROM, with or without patches; modern equivalents like The Crew or Overwatch 1 are cited as deliberately killed.
  • Some see this as conscious planned obsolescence and compare it to streaming video platforms making catalogs transient.
  • There are calls for regulation: mandating offline modes, server code release, or open-sourcing at end of life.

Multiplayer, Matchmaking, and Fragmentation

  • Old LAN/direct‑IP games can still be played if you gather friends; modern competitive games depend on centralized matchmaking and huge player bases.
  • Once population dips below a threshold, ranked ladders and onboarding collapse, effectively killing the game.
  • The “everyone moves in a crowd” effect, driven by influencers and FOMO-fueled launch events, makes newer multiplayer titles feel like disposable social occasions.

Mods, Emulation, and Community Preservation

  • Community patches, private servers, and emulators (for DOS, consoles, MMOs, Thief/UT/PSO, etc.) are credited with keeping many older games alive.
  • Some games become “zombies”: fan-maintained but in legal limbo. Others (Factorio, Stardew, Minecraft, classic CRPGs) are seen as modern titles likely to endure thanks to offline play and mod-friendliness.

Quality, Monetization, and Design Trends

  • Strong split: some claim cultural/creative decline, enshittification, and monetization-first design (battle passes, loot boxes, daily quests, “gambling for kids”).
  • Others push back, listing many recent single-player and indie titles as equal or superior to classics; problem is AAA live-service economics, not games as a whole.
  • Complaints about modern complexity bloat, constant balance patches, and psychological engagement loops vs. the relative simplicity and clarity of older games.

Copyright, Ownership, and Law

  • Long copyright terms and DRM are criticized as blocking preservation and personal archiving (parallels drawn with ebooks and streaming).
  • Several propose shorter copyright, mandatory public-domain or free-play transitions for old games, or automatic open-sourcing of “abandonware.”
  • The “Stop Killing Games” EU initiative is repeatedly cited as a concrete push for legal change.

Reinvent the Wheel

What “Don’t Reinvent the Wheel” Is Supposed to Mean

  • Many commenters argue the phrase is business-context advice: optimize for time, reliability, and focus, not personal curiosity.
  • Others say it’s overused as dogma, often thrown around online without understanding the specific problem or context.
  • Several distinguish “reinventing a wheel” (starting from scratch) from “improving a wheel” or building a specialized variant.

Reinventing as a Learning Tool

  • Strong agreement that reimplementing existing systems is an excellent way to gain deep insight—“you don’t really understand it until you’ve built it.”
  • Debate over whether rewriting from scratch is the “best” way to learn:
    • One side: it’s expensive but uniquely effective.
    • Other side: you can learn progressively via reading, experimentation, and smaller exercises without full rewrites.
  • Personal stories: people building their own ML libraries, web servers, schedulers, etc., report major conceptual gains.

Production, Work, and Startups

  • At work, reinventing is usually constrained by deadlines, customer value, and runway.
  • Common view:
    • Reinvent for core differentiating tech.
    • Reuse for commodity pieces (auth, crypto, date handling, web frameworks).
  • Some note extreme NIH in enterprises and startups leading to fragile in-house “wheels” that never reach library quality or maintainability.

Dependencies, Complexity, and Bloat

  • A major pro‑reinvention argument is avoiding unnecessary dependencies, transitive bloat, and opaque behavior.
  • Examples: pulling huge libraries for trivial use, or frameworks that bring hundreds of packages for simple tasks.
  • Suggested middle grounds:
    • Vendor and trim existing libraries.
    • Write small, targeted implementations when the general solution is overkill.
  • Crypto is repeatedly cited as a domain where “don’t roll your own” still strongly applies.

When Reinvention Makes Sense

  • Niche or tightly scoped problems where general tools are misaligned or overengineered.
  • Cases where existing “wheels” encode bad assumptions, poor performance, or unfixable complexity.
  • Practice in invention/innovation itself: solving “old” problems builds skill for future novel ones.
  • One detailed example describes “delinking” binaries—essentially reversing linkers—to enable new forms of reverse engineering; offered as proof that challenging the standard approach can yield world‑class tools.

Risks and Nuance

  • Rewriting often underestimates hidden complexity; many “new wheels” fail on edge cases, security, or long‑term maintenance.
  • Chesterton’s Fence is invoked: understand why the old solution exists before tearing it down—but also recognize that some fences were built badly.
  • Consensus: reinventing is valuable, but context, stakes, and humility matter; balance curiosity with responsibility.

It is time to stop teaching frequentism to non-statisticians (2012)

Preprints, Blogs, and “Cargo Cult” Science

  • Some argue it’s odd that non–peer-reviewed work appears on arXiv and suggest blogs/Substack instead.
  • Others reply that:
    • Preprint servers are designed for unreviewed work; using them isn’t misuse.
    • They provide DOIs and stable archiving, unlike personal blogs or platforms that may vanish.
    • arXiv also has an endorsement system, so it isn’t a totally open “anyone can post” platform.
  • There’s concern that using a preprint server purely for the optics of credibility is “cargo cult science,” but others note formal journal publication is not required for work to be scientific.

Gatekeeping, Peer Review, and Scale

  • One side: “only what is said matters; why gatekeep?”
  • Counter: with billions of people publishing, credentials and peer review are crucial filters; peer review is supposed to check content and is often double‑blind.
  • Others note that current journal systems have serious issues (replication crisis, incentives) and don’t obviously “scale better” than open models.
  • Analogy: GitHub allows everyone to upload code; that doesn’t mean we treat all repos equally, but we also don’t block uploads.

Citing Unreviewed or Informal Work

  • Several commenters say you should cite important preprints if they’re relevant, even if unreviewed.
  • In some fields, people routinely cite key preprints that never made it to formal publication.
  • Extreme example: if a correct solution to a math problem appeared on an anonymous forum, you’d still need to acknowledge it somehow.

What to Teach Non‑Statisticians

  • Some think the article is dated and poorly argued; instead of “frequentism vs. Bayes” they would focus first on exploratory data analysis and on understanding the data and phenomena.
  • There’s frustration that many scientists/ML practitioners can run sophisticated methods but can’t properly inspect data, detect leakage, or match metrics to real goals.

Frequentist vs Bayesian: Competing Philosophies

  • Pro‑Bayes points:

    • Frequentism treats probability as long-run frequency; Bayes treats it as degree of belief and is more general (e.g., one-off events like Saturn’s mass or a digit of π).
    • Frequentist thinking about parallel universes or infinite repetitions is seen as metaphysically awkward and not matching the questions scientists actually ask.
    • NHST is heavily criticized as answering the wrong question (P(data|H), not P(H|data)), easy to game, and central to the reproducibility mess.
  • Pro‑Frequentist or skeptical‑of‑Bayes points:

    • Frequentist methods aren’t “wrong,” just less general; they can be mathematically clean, powerful, and often equivalent to Bayes with weak/flat priors.
    • Much “Bayesian” work in practice uses nearly uninformative priors, collapsing back to frequentist-like results.
    • Priors can be highly subjective and have large, poorly appreciated effects; pretending that frequentists “secretly” use priors is disputed.
    • NHST is misused more than inherently invalid; the real problem is poor understanding and design, not the entire frequentist paradigm.
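The P(data|H) vs. P(H|data) distinction at the heart of the NHST critique can be made concrete with a toy Bayes‑rule calculation (all the numbers below are illustrative assumptions, not figures from the thread):

```python
# A "significant" p-value, P(data | H0), need not imply H0 is improbable.
p_data_given_h0 = 0.04   # p < 0.05: data unlikely under the null
p_data_given_h1 = 0.10   # ...but only slightly likelier under H1
prior_h0 = 0.9           # and H0 was plausible to begin with

# Bayes' rule: P(H0 | data) = P(data|H0) P(H0) / P(data)
posterior_h0 = (p_data_given_h0 * prior_h0) / (
    p_data_given_h0 * prior_h0 + p_data_given_h1 * (1 - prior_h0)
)
print(round(posterior_h0, 3))  # 0.783 -- H0 remains probable despite p < 0.05
```

The same arithmetic also illustrates the pro‑frequentist caveat: move `prior_h0` around and the posterior moves with it, which is exactly the “priors are subjective and consequential” objection.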

Applied, Pragmatic View: Tools, Not Ideologies

  • Several self-described applied statisticians see the debate as overly ideological.
  • View: statistics is applied math plus a way to encode uncertainty; Bayesian and frequentist methods are just different tools with trade-offs in bias, variance, interpretability, and computation.
  • Choice should depend on the task (e.g., casino-like long-run guarantees vs one-off decisions, observational sciences with many confounders, or large-scale ML where full Bayes is computationally costly).

Live facial recognition cameras may become 'commonplace' as police use soars

Inevitability, Sousveillance, and Asymmetry of Power

  • Several see large-scale facial recognition as technologically inevitable, arguing the only realistic mitigation is “sousveillance” (civilians watching authorities) to counter asymmetry.
  • Others are skeptical that sousveillance will help: officials can openly take credit for surveillance systems, and the public lacks equivalent institutional power.
  • There’s concern over who gets anonymity and accountability: police vs public, moderators vs users, with complaints about opaque moderation and hidden decision-making.

Crime, Safety, and Social Consequences

  • Supporters note reported successes: hundreds of arrests and some serious offenders caught with help from facial recognition.
  • Critics ask for denominators: how many people were scanned and tracked, and in how many cases was the tech uniquely necessary?
  • Fears include:
    • Intensifying criminalization of already over-policed communities, forcing those with warrants or minor “lifestyle” offenses to avoid cameras and public space.
    • Chilling effects on protest, association, and everyday behavior, while determined criminals adapt or mask up.
    • “Two-tier” enforcement: automated, strict punishment for ordinary people vs weak enforcement against hardened offenders.

Abuse, Data Markets, and Function Creep

  • Many worry about databases being repurposed: lenders, marketers, stalkers, abusive insiders, or future regimes misusing movement and identity data.
  • Examples are raised of plate-recognition and other data already sold or accessed by private firms and law enforcement.
  • Some argue UK safeguards (logging, discipline, prosecutions) show such systems can be controlled; others counter with US examples of systemic misuse and mission creep (Patriot Act, ICE, ALPR vendors).

Legal, Ethical, and Constitutional Debates

  • Proposals include treating facial surveillance like wiretaps: bulk collection allowed, but query limited by warrant and narrow purpose.
  • Others call for outright bans or criminal penalties on building tracking databases, though skeptics say cheap hardware and hobbyists make bans impractical.
  • A US-focused subthread debates whether mass automated tracking in public violates the spirit of the Fourth Amendment, even if traditional doctrine says there’s “no expectation of privacy” in public.

Technical Escalation and Futuristic Scenarios

  • Commenters note expansion beyond faces: gait recognition, cross-building tracking, and long-term data retention via hashes and metadata.
  • Some think such systems could approach near-infallible tracking; others doubt the reliability of current AI and point to high false-positive rates as reason enough to prohibit deployment.

AI can't even fix a simple bug – but sure, let's fire engineers

AI as a Tool vs Overhyped “Replacement”

  • Many frame AI as just another tool: powerful when used well, useless or harmful when misused.
  • Others argue this analogy breaks because vendors aggressively market AI as an autonomous replacement, not a simple productivity aid.
  • Several suggest the real criticism should be aimed at companies and marketing, not at the raw capability of the models themselves.

Coding, Debugging, and Technical Limits

  • Experiences are mixed: some report strong gains for boilerplate, refactors with good tests, DSL transpilers, and documentation help.
  • Others describe frequent hallucinations (fake options, joins, APIs), brittle debugging help, and broken or unmaintainable code, especially in complex domains like runtimes, compilers, and native platforms.
  • AI often needs extremely detailed, carefully curated prompts and context; once it’s “off,” the overhead to recover can exceed any time saved.
  • Several note that “funny failures will be gone in months” has been said for years, while quality appears to plateau or even regress in places.

SaaS, Control, and Data Concerns

  • Strong disagreement over whether cloud LLMs are truly “tools” when users can’t inspect, repair, train, or fully constrain them.
  • Concerns include codebase stomping, data leakage, brittle dependence on connectivity, and opaque experimentation by providers.
  • Local models are suggested as an answer, but many note the hardware and ops costs are prohibitive for most.

Jobs, Layoffs, and Productivity

  • Debate over whether engineers are actually being fired because of AI: some see AI as cover for a tech recession, Section 174, and prior over-hiring; others report orgs explicitly cutting junior roles and “downsizing 50 to 5” with AI.
  • Comparisons to spreadsheets and accountants: tools changed the work mix, reduced some roles, but didn’t eliminate the profession—yet accounting’s trajectory is cited as a cautionary tale.
  • Some argue that firing engineers for AI is “natural selection” for bad companies; others stress the human cost and note that C-suites rarely bear the consequences.

Adoption Dynamics and Hype Pressure

  • AI use is often driven top‑down for PR, KPIs, and “we are an AI company” narratives, sometimes disconnected from actual usefulness.
  • There is pressure to “be the AI expert on the team,” but skepticism about investing heavily in workarounds for rapidly obsolete tools.

Where AI Works Well Today

  • Commenters highlight embeddings, semantic search, log/error analysis, scaffolding boilerplate, and assisted test-writing as genuinely high-value uses.
  • The consensus across the thread: AI is a real accelerator in the hands of skilled engineers, but nowhere close to safely replacing them.

Will the AI backlash spill into the streets?

AI Job Creation vs Destruction

  • Central question: if AI can perform “wholesale automation of intelligence,” what new jobs arise that AI itself can’t do?
  • Some argue most prior “new jobs” weren’t in maintaining machines but in entirely new sectors (services, commerce), so something similar may happen again.
  • Others counter that modern AI can occupy far more roles than past machines, so net job losses are plausible even if some new work appears.
  • Several commenters expect partial, not total, automation: 20–30% headcount cuts in white‑collar roles (software, support, sales development) are already visible, and that alone is economically significant.

Pace and Limits of AI Progress

  • Disagreement over whether current LLMs are on a path to AGI or a limited paradigm that will hit diminishing returns.
  • One side expects continued strong gains, citing immature techniques and past underestimation (e.g., solar, prior IT waves).
  • The other side stresses that not all tech follows exponential curves (unlike Moore’s law), so radical “end of scarcity” scenarios may require future paradigm shifts, not just bigger LLMs.

Who Benefits: Distribution, Class, and Politics

  • Thread is skeptical that cheaper production will automatically yield cheaper goods or better lives; recent productivity gains have mostly gone to capital, not wages.
  • Many foresee AI as primarily attacking white‑collar, higher‑paid work (software, back office, BDRs) after blue‑collar automation already hollowed out manufacturing.
  • Class fragmentation and weak unions are seen as key reasons why there may be little broad political resistance to white‑collar displacement.
  • Some imagine a future where social welfare plus cheap AI‑produced goods make non‑work viable; others respond that this depends entirely on political struggle, not technology.

Backlash, Protests, and Historical Parallels

  • Several commenters doubt there will be large‑scale “AI riots”: unemployment is currently low, and most people experience AI as incremental tooling, not existential threat.
  • Luddites are invoked both as a cautionary analogy and as people who were “right” that their own lives worsened even if later generations benefited.
  • One long critique argues that elites frame the issue as “helping the displaced” instead of asking who should own and control AI; if displacement becomes massive, the logical demand would be socializing AI’s gains.
  • Protests are seen as capable of influencing elections and, in some historical cases, larger policy, but many doubt they’ll overturn entrenched economic power around AI.

Concrete Automation Examples

  • Self‑driving: cited as a warning that “almost here” tech can remain limited for decades; others argue recent systems (e.g., Waymo, Tesla) show it is finally scaling.
  • Self‑checkout: widely deployed; seen as an example where automation won, but with caveats about theft, customer experience, and still‑needed staff.
  • Software work: viewed as unusually automatable due to testing and verification, but also as a field that has survived multiple “automated programming” waves.

Good Writing

Scope of “Good Writing”

  • Many readers argue the essay is really about essayistic, idea-developing prose, not fiction, poetry, or lyrics.
  • Others note that fiction and poetry still convey “truth” via analogy and emotional impact; examples cited include Moby Dick, Ted Chiang’s “Story of Your Life”, and Arrival.
  • Several distinguish between clear exposition vs. stylistic beauty or memorability: Douglas Adams, Tucholsky, and others are praised for lines that stick even when they’re not “frictionless”.

Style vs. Truth

  • Central claim debated: does writing that “sounds good” tend to be more correct?
  • One camp: iterative rewriting clarifies both prose and thought, so clumsy writing often signals muddled or wrong ideas. Bad structure in technical proposals is cited as a practical problem.
  • Counter-camp: eloquence and correctness are only loosely correlated; sophistry, propaganda, marketing, and political rhetoric show that beautiful writing can be deeply false.
  • Non‑native speakers and domain experts with poor prose are raised as counterexamples to “ugly ⇒ wrong”.

LLMs and the Post‑Truth Context

  • Several say large language models undermine the heuristic: they produce fluent, plausible, but often factually wrong text, at scale.
  • Others respond that the essay explicitly denies “beautiful ⇒ true” and only claims “clumsy ⇒ probably wrong”, so LLMs aren’t a clean refutation.

Nuance, Legibility, and Audience

  • Some argue that forcing ideas into highly legible, simplified forms can destroy nuance, invoking “legibility” in the Seeing Like a State sense.
  • Others stress audience: what “sounds good” or “reads clearly” depends on who is reading (layperson vs. expert, native vs. non‑native).

Evaluating the Essay and Its Author

  • Supporters praise the essay’s clarity and its emphasis on rewriting, likening writing refinement to code refactoring or culling photos.
  • Critics find it repetitive, imprecise, self‑regarding, or philosophically naive, and question the leap from “good flow” to “truer ideas”.
  • Broader skepticism appears about treating a successful tech investor as an authority on literary quality, though others note his essays helped shape startup culture.

The Hobby Computer Culture

Mail‑order culture, trust, and fraud

  • Commenters compare Altair-era “sight unseen” mail orders with 1990s e‑commerce skepticism, noting earlier postal orders were backed by mail-fraud laws.
  • Long traditions of catalog sales (e.g., kits, scientific gadgets, even houses) are cited to argue that sending money to unknown vendors wasn’t new, but others stress the difference between reputable brands and “fly‑by‑night” ads in hobby magazines.
  • Examples of 1970s mail-order scams in the S‑100/early micro market illustrate that fraud was real even then.
  • Some recall using money orders on early eBay and even successfully mailing cash abroad for niche items.

Did the personal computer era end?

  • One view: the “bicycles for the mind” era ended when PCs became networked, account‑gated thin clients; the web and cloud re‑centralized power.
  • Counterpoint: hobbyist empowerment continues via tools like OpenSCAD, CNC, and local software that reclaim autonomy from the browser/cloud model.
  • Another thread sees LLMs and local models as possibly reviving the personal-computer spirit, reversing a long plateau in perceived innovation.

From hobby toys to business tools

  • Several dispute the article’s implication that, by 1978, interest was mostly hobbyist: spreadsheets like VisiCalc and later Lotus 1‑2‑3 quickly pulled PCs into mainstream business use.
  • Stories describe non-hobbyist professionals buying full systems just to run a single killer app (e.g., spreadsheets for accounting and consulting).

Clubs, community, and career formation

  • Local computer and later Linux user groups are credited with teaching skills, providing mentorship, and directly leading to multiple job opportunities and entrepreneurial careers.
  • Vintage-computing and robotics clubs are described as spiritual successors to the Homebrew era, though some say today’s groups are driven more by pessimism about modern computing than by frontier optimism.

Media, physical culture, and nostalgia

  • Thick, ad-heavy magazines (BYTE, Computer Shopper, TRS‑80 titles) and specialty catalogs were crucial discovery channels before the internet.
  • Pop-up shops, gym “expos,” and informal clubs conveyed knowledge in an environment of nonstandard hardware and near-total DIY software.

Hobbyism, economics, and over‑commercialization

  • Several lament that 1970s hobbyists spent large sums on practically useless machines for pure exploration, whereas today’s projects are judged by cheap mass-produced alternatives and monetization potential.
  • Globalization, offshoring, and economic anxiety are seen as shrinking the time and psychological space for non-monetized tinkering, which commenters fear will dampen future innovation.

Semicolons bring the drama; that's why I love them

Debate over the title’s semicolon

  • Large sub-thread on whether “Semicolons bring the drama; that’s why I love them” misuses a semicolon.
  • Some argue a colon or em dash would be better, calling the second clause an explanation, not a parallel one; others say the semicolon is fine because both sides are independent clauses and closely related.
  • Disagreement over whether “if it works with a comma it works with a semicolon” is valid; opponents call that a misunderstanding that leads to comma splices.
  • Multiple style guides (Chicago, AP, Merriam–Webster) are invoked to argue semicolons should usually join independent clauses, though some note they’re often used more flexibly in practice and in literature.

Prescriptivism vs descriptivism

  • One side: written language needs relatively stable rules for clarity; style guides are mostly descriptive and conservative and help avoid ambiguity.
  • Other side: declaring language “wrong” often backfires because usage is broader than school rules; semicolon use is heavily stylistic, especially in poetry and casual writing.
  • Several note how many “rules” (no sentence-starting “And/But”, “fewer vs less”, etc.) are faddish rather than historically grounded.

How and why people use semicolons

  • Suggested heuristics: use a semicolon when a period feels too abrupt; the two resulting sentences should still make sense alone.
  • Others see semicolons as “silent conjunctions” or “soft periods,” often replaceable by “and,” “therefore,” or a period.
  • Some enjoy them for adding nuance, hierarchy, and rhythm—likening them to another outline level or to nested code; others complain about overuse and long, fatiguing sentences.
  • Teachers and tests sometimes encourage avoiding semicolons or using at most one, reinforcing their reputation as advanced or risky punctuation.

LLMs, dashes, and punctuation style

  • Noted that large language models frequently use em dashes, usually without surrounding spaces; some find this unusual relative to typical web writing.
  • Discussion touches on regional differences (US em dash vs spaced en dash), phone autocorrect habits, and speculation about stylistic or watermarking reasons.

Skepticism about semicolons

  • A critical view lists downsides: most people misuse them; subtle pause-length distinctions are lost on many readers; simpler symbol sets are preferable; and long sentences are undesirable.

I used o3 to find a remote zeroday in the Linux SMB implementation

Exploit, validation, and tooling limits

  • Commenters ask if the ksmbd bug is practically exploitable and whether syzkaller or classic fuzzing could have found it.
  • The vulnerability involves concurrency and shared objects, leading several to doubt that traditional static analysis would reliably catch it.
  • Some wonder if other SMB implementations share similar bugs; consensus is that codebases differ enough that this isn’t obvious.
  • A later subthread presses the author on whether a proof of concept (PoC) existed; the author clarifies they did build a crashing PoC under KASAN, though it wasn’t emphasized in the writeup.

Signal-to-noise, workflow, and “prompt engineering”

  • The reported ~1:50 useful-to-noisy finding ratio divides opinion: some think it’s excellent for “needle in a haystack” work; others say reading LLM slop is less efficient than a skilled human audit.
  • Several maintainers complain they already drown in AI-generated false-positive CVEs and fear this article will worsen the spam.
  • Others argue triage is exactly where real gains are needed—if models could generate harnesses/PoCs or use sanitizers as an oracle, S/N might rise dramatically, but that’s expensive.
  • There’s extended debate over whether prompt design is “engineering” or just vibes; many describe structured workflows (separate prompt files, XML tagging, scratchpads, multi-step “reasoning” agents) as useful, even if empirically tuned rather than rigorously benchmarked.
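The “sanitizers as an oracle” idea above can be sketched as a simple triage filter: run each model-suggested PoC against a sanitizer-instrumented build and keep only the findings whose run produced a recognizable sanitizer report. Everything here (the signature list, the function names) is hypothetical illustration, not the article’s actual tooling.

```python
# Hypothetical triage filter: treat sanitizer output as the oracle that
# separates confirmed crashes from LLM false positives. The signature list
# and function names are illustrative, not taken from the article.

SANITIZER_SIGNATURES = (
    "ERROR: AddressSanitizer",      # userspace ASan reports
    "BUG: KASAN",                   # kernel ASan, as used for the ksmbd PoC
    "UndefinedBehaviorSanitizer",   # UBSan reports
    "WARNING: ThreadSanitizer",     # TSan data-race reports
)

def is_confirmed_crash(report: str) -> bool:
    """Return True only if the run produced a recognizable sanitizer report."""
    return any(sig in report for sig in SANITIZER_SIGNATURES)

def triage(candidate_runs: dict[str, str]) -> list[str]:
    """Keep only the candidate findings whose PoC run tripped a sanitizer."""
    return [name for name, output in candidate_runs.items()
            if is_confirmed_crash(output)]
```

With an oracle like this, the ~1:50 signal-to-noise problem becomes a filtering problem: only findings that survive the sanitizer check reach a human reviewer.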

Model capabilities and comparisons

  • Multiple people see this as evidence that new “reasoning” models (o3, Gemini 2.5 Pro, etc.) have crossed a threshold for nontrivial bug-hunting, especially in concurrent code.
  • Others report similar experiments: ~1:10 success on custom code challenges and many iterations needed, suggesting that low raw accuracy with occasional high-value hits is the common pattern.
  • There’s disagreement on which frontier model is best; some claim Gemini 2.5 can find the same bug more reliably with a good prompt.

Security arms race and deployment

  • Many expect intelligence agencies and serious attackers are already or soon will be automating zero-day discovery this way, triggering an arms race.
  • Defenders can also integrate such scans into CI or periodic audits, but abandoned or unmaintained software remains a major weak point.
  • Several highlight the mismatch between the modest dollar cost (~$116 for 100 runs) and the potentially high market value of a working zero-day.

ksmbd adoption, performance, and risk

  • Discussion notes ksmbd is a kernel-space SMB server offering high performance and SMB Direct/RDMA support, attractive on fast (e.g., 25G) networks and mixed environments.
  • Others question why such a large, risky protocol lives in kernel space at all, citing past catastrophic kernel SMB bugs and Samba’s slower but safer user-space model.

Microsoft-backed UK tech unicorn Builder.ai collapses into insolvency

Scale of collapse & suspected fraud

  • Commenters are stunned that a “website/app builder” could consume ~$500m before imploding.
  • Multiple links trace a long-running pattern: early reporting that “AI” largely meant outsourced human labor, later criminal probes, related-party auditor issues, and ultimately restated revenues and insolvency.
  • Several liken it to Theranos/WeWork: inflated tech claims, book-cooking, lavish founder lifestyle, awards and analyst badges lending false legitimacy.
  • Precise extent of fraud vs. simple over-optimism remains unclear, but many assume serious misconduct given revenue overstatement and investigations.

AI hype, bubbles & startup economics

  • Many see this as an early domino in an AI bubble: lots of 9‑figure-funded startups with weak or non-existent products expected to run out of cash.
  • People note AI infra (GPU/LLM) costs make these businesses more capital-intensive than classic SaaS, and vendor credits plus VC money may mask unsustainable unit economics.
  • Some argue this is normal for emerging tech; others say the hype cycle now systematically rewards “wrappers” and vaporware.

Real value of AI vs. gimmicks

  • Some claim almost no AI startups are truly profitable; Nvidia is seen as the main winner.
  • Others provide concrete internal-use examples (scraping competitors, RAG for customer support, creative tools) that already save time and money.
  • Debate around consumer AI gadgets (AI pins, hardware assistants): some say “ahead of their time,” others say they reveal hard limits of the tech and demand.
  • Posters contrast high-cost cloud LLMs with a belief that sustainable value will come from smaller models on consumer hardware.

VC behavior, access to capital & inequality

  • Several argue big checks go to insiders from elite schools/clubs, not necessarily to the most capable founders.
  • There’s frustration that “serial entrepreneurs” can self-enrich, then walk away, while honest, profitable small businesses struggle to raise any capital.
  • Calls appear for tougher clawbacks and bans on repeat governance roles for executives involved in fraud.

Regulation, UK context & broader politics

  • Some see the UK as a soft-regulation environment (“land without Sarbanes‑Oxley”), conducive to repeated corporate fraud.
  • Broader political tangents connect such failures to deteriorating public services, perceived oligarchic politics, and media influence—though others push back that voters themselves chose these conditions.

Tariffs in American History

Source & Institutional Context

  • Several commenters focus on Hillsdale College’s political and religious positioning: described as a conservative, Christian, movement-aligned institution and a Project 2025 participant.
  • This leads many to question the neutrality of the lecture, calling it propaganda or “retcon” to justify current Trump-era tariffs, and noting lack of references or data.
  • Some push back, arguing that Christian or conservative affiliation doesn’t automatically imply bad scholarship and that the piece is “just” a historical overview.

Tariffs: History vs. Current Use

  • Many accept that historically, tariffs were central to US development (Hamilton, “American System”) and later to Germany, Japan, Korea, Taiwan.
  • Multiple commenters emphasize: tariffs can work when targeted, time-limited, and tied to performance metrics (exports, competitiveness).
  • The current US approach is widely characterized as broad, impulsive, and politically driven rather than technocratic industrial policy.

Implementation Quality & “Chaos vs. Stability”

  • Repeated theme: tools aren’t inherently good/bad; implementation, predictability, and strategy determine outcomes.
  • Criticism of Trump tariffs centers on:
    • Blanket, frequently changing measures that make planning and retooling risky.
    • Conflicting justifications (reshoring vs. “temporary leverage” vs. pure optics).
    • Economic damage (supply-chain disruptions, canceled investments) without clear gains.
  • Defenders focus more on breaking an “unfair” status quo and forcing adjustment, with some explicitly embracing shock and instability as desirable.

EU–US Trade, VAT, and Cars

  • Strong dispute over the article’s treatment of Germany/EU:
    • Multiple commenters state VAT is a destination-based consumption tax applied equally to domestic and imported goods, not an import tariff.
    • They argue the article’s framing of VAT as a trade weapon is misleading or outright false.
  • Explanations for more German cars in the US than US cars in Germany:
    • Product fit and consumer preferences (size, quality, fuel costs, road design), not primarily tariffs.
    • Historical European production by US brands (Ford, GM) and EU production by foreign brands.
  • Some nuanced points: higher VAT and fuel taxes shrink the European car market overall; US “chicken tax” on trucks distorted US vehicle mix.
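The VAT point above can be illustrated with a toy calculation (all rates and prices made up): a destination-based VAT charges the same rate regardless of origin, so it creates no price wedge between domestic and imported goods, whereas a tariff hits only the import.

```python
# Toy illustration (made-up numbers): destination-based VAT vs. a tariff.
# VAT applies the same rate to domestic and imported goods alike, so it
# favors neither; only a tariff adds an import-specific wedge.

VAT_RATE = 0.19      # e.g. German standard VAT
TARIFF_RATE = 0.10   # hypothetical import tariff

def consumer_price(net_price, imported, tariff=0.0):
    """Price paid by the consumer after tariff (imports only) and VAT (all)."""
    duty = net_price * tariff if imported else 0.0
    return (net_price + duty) * (1 + VAT_RATE)

domestic = consumer_price(100.0, imported=False)
import_no_tariff = consumer_price(100.0, imported=True)
import_with_tariff = consumer_price(100.0, imported=True, tariff=TARIFF_RATE)

# Under VAT alone the two goods cost the same; the tariff adds the wedge.
print(round(domestic, 2), round(import_no_tariff, 2),
      round(import_with_tariff, 2))  # 119.0 119.0 130.9
```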

Protectionism, IP, and Alternatives

  • Debate on whether robust IP protection underpins US wealth, with counterexamples of early US and Hollywood IP theft.
  • Several argue smarter industrial policy (CHIPS Act, targeted EV/tech measures, Norway-style agricultural tariffs) would outperform broad tariffs.
  • Others warn US tariffs are largely emotional, nationalist theater that ignore services trade, global poverty dynamics (especially China’s rise), and environmental/quality-of-jobs tradeoffs.

Ask HN: Go deep into AI/LLMs or just use them as tools?

Framing the Choice

  • Two main paths discussed:
    1. Go deep into ML/LLM internals (research, training, architectures).
    2. Treat LLMs as powerful but imperfect tools inside “normal” software engineering.
  • Several people add a “path 3”: build systems, infrastructure, or consulting around LLM integration in existing businesses.

Job Market & Career Risk

  • Multiple posters with ML/PhD backgrounds say core-ML/LLM research is extremely saturated: hundreds of papers/day, many more applicants than jobs, PhD often required for meaningful “internals” roles.
  • Others counter that if there were truly 100x more people than jobs, salaries would have crashed; high pay remains at top labs and big tech.
  • For most developers, there are still far more roles in full‑stack / application engineering than in LLM research.
  • Age and career stage matter: older engineers are nudged toward leadership and problem‑solving roles; early‑career people might justify a bigger pivot.

Using LLMs as Tools

  • Many suggest defaulting to option 2: become very good at leveraging LLMs for coding, documentation, search, and automation.
  • Experiences vary: some report dramatic productivity (e.g., dozens of PRs/day with Codex as an “intern”), others find LLM coding agents frustrating and error‑prone.
  • Consensus: treat LLMs like junior developers—verify everything with tests and reviews; never blindly trust outputs.

Going Deep / How Much to Learn

  • Common advice: understand one abstraction layer below how you use the tool (basic NN, backprop, transformers, tokens, sampling, limitations), but you don’t need to train frontier models.
  • Suggested learning path:
    • Build a simple NN from scratch.
    • Learn qualitatively how modern architectures work.
    • Learn how to run/open‑source models and use provider APIs.
    • Practice prompt engineering and AI‑assisted coding.
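The first step on that path (“build a simple NN from scratch”) can be as small as a single sigmoid neuron trained with hand-derived gradients. This toy sketch, with arbitrary illustrative choices for learning rate and epochs, learns logical AND using only the standard library:

```python
import math

# Toy single-neuron "network" trained by hand-derived gradient descent on
# logical AND: the kind of from-scratch exercise suggested above.
# Learning rate and epoch count are arbitrary illustrative choices.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1 = w2 = b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(5000):
    for (x1, x2), y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        err = p - y                 # d(loss)/d(z) for cross-entropy + sigmoid
        w1 -= lr * err * x1         # gradient step on each weight
        w2 -= lr * err * x2
        b  -= lr * err

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # [0, 0, 0, 1]
```

Swapping the hand-derived `err` terms for an automatic-differentiation framework is essentially the jump from this exercise to “qualitatively understanding how modern architectures work.”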

AI Engineering & Application Layer

  • Several foresee most work being “AI engineering”: building products and workflows on top of foundation models (RAG, tools, agents, evals, cost/latency constraints), not inventing the models themselves.
  • LLMs are compared to databases or 3D engines: complex, but most devs will use them as components rather than implement them.

Bubble, Hype, and Longevity

  • Some view current LLM excitement as a bubble similar to dot‑com or crypto; others argue even if a bubble pops, the underlying tech stays, like web or search.
  • Strong split between “it’s hype that will burst, don’t hitch your career solely to it” and “this is the biggest tech shift so far; ignoring it is irrational.”

Broader Career Philosophy

  • Repeated themes: follow genuine curiosity, favor skills that solve real problems, avoid chasing hype solely from fear of obsolescence.
  • Several stress that all tech niches are in flux; depth in fundamentals plus adaptability matters more than picking the “perfect” AI path.

Valve takes another step toward making SteamOS a true Windows competitor

SteamOS, Linux Maturity, and Fragmentation

  • Some see SteamOS as proof that “user-facing Linux” is finally ready: console-like UX, better than traditional consoles, open-source base.
  • Others counter that SteamOS sidesteps normal desktop Linux: custom window manager, curated hardware/kernel/drivers, immutable image, no standard package manager. This doesn’t demonstrate the maturity of KDE/GNOME etc.
  • Fragmentation is criticized: too many distros/DEs/libs complicate support and dilute effort. Counterarguments: choice isn’t a problem in practice, Flatpak mitigates differences, and each distro defines its own supported stack.
  • Repeated reminder that “Linux” is just a kernel; actual Windows competitors are full distros (Fedora, Ubuntu, SteamOS, etc.).

Windows Experience and Gamer Sentiment

  • Many gamers want to leave Windows: complaints include mandatory Microsoft accounts, ads/recommendations in the OS, Copilot nagging, bundled “bloat”, and dark patterns.
  • Others defend Windows: accounts are framed as security features, app suggestions aren’t seen as ads, and preinstalled apps help non‑expert users.
  • Some developers and users perceive Microsoft as focused on AI/Copilot and cost-cutting in Xbox, despite big gaming acquisitions; others say the acquisition spree proves gaming is still strategic.
  • Several participants say if you want “just works” and no tinkering, a console is still more reliable than a Windows HTPC.

Mac, Apple, and Valve

  • Strong sense that Valve has effectively written off macOS: Apple’s frequent deprecations, lack of Vulkan/OpenGL, 32‑bit removal, and small Steam share make it unattractive.
  • Some argue Apple cares deeply about iOS gaming revenue but has long neglected serious Mac gaming; others note Macs could be great gaming devices but both Apple and Valve seem unmotivated.
  • Volunteers reportedly got far running Steam/games on Apple Silicon (and via Asahi Linux), but efforts lack serious Valve backing.

Proton, Anti‑Cheat, and Compatibility

  • Proton is praised as a “trojan horse” making Linux gaming viable by running Windows titles; hope is that success will eventually incentivize native Linux ports.
  • Competitive multiplayer games with kernel‑level anti‑cheat remain a major blocker; these often refuse to run under Linux, keeping many gamers on Windows.
  • Experiences with ProtonDB are mixed: for some, everything they play “just works”; others report hardware‑specific breakage, crashes, and updates that regress compatibility.

SteamOS, Bazzite, and Distro Choices

  • Some users want an official, generic SteamOS installer or Steam‑powered “Steambooks”/living‑room consoles, with console‑like simplicity and guaranteed compatibility.
  • Others argue there’s “nothing special” about SteamOS for desktops: any mainstream distro plus Steam (often via Flatpak) gives essentially the same experience.
  • Bazzite (a Fedora Atomic spin with SteamOS‑style UX) gets both praise as the closest console‑like Linux for HTPCs and criticism as a niche layer that adds maintenance risk over just using Fedora.

Ads, Canonical, and Trust

  • Windows is widely criticized as “adware”; some contrast this with Linux distros that avoid such practices.
  • Counterpoint: Ubuntu previously shipped Amazon-linked ads and promotional MOTD content, so ad‑free behavior isn’t guaranteed by principle, only by current choices.
  • Canonical’s history (ads, Snap push, LXD issues) makes some wary; others say they’ve learned and that the Ubuntu community would reject Windows‑style advertising.

Market Outlook

  • Some believe web-centric workflows, plus Linux’s adequacy for browsing/communication, erode Windows’ lock-in, especially for non‑gamers.
  • Many think Valve can meaningfully erode Windows’ de facto dominance for gaming—but not fully dethrone it until anti‑cheat and multi‑launcher fragmentation are solved.

How to Make a Living as a Writer

Reaction to the Essay & Style

  • Many commenters found the piece “beautiful,” “entertaining,” and easy to read, praising its balance of light tone with underlying sadness and hope.
  • The horse/stable puns split readers: some loved them as clever and charming; others saw them as clichéd, “dad-joke” level, or movie-review-grade corniness.
  • Several people said the essay rekindled the feeling of reading “random stuff” online just for pleasure, without distraction.

Making a Living as a Writer

  • Multiple commenters stressed that writing alone rarely pays a living wage; most working writers either have another income stream, live very lean, or rely on some form of privilege (family wealth, high-earning partner, etc.).
  • Comparisons were made to “how I bought a house at 29” stories that quietly hinge on rich parents or other invisible advantages.
  • Some suggested more sustainable adjacent paths: editing, proofreading, content marketing, analyst roles, technical writing, and grant writing. These can pay rent but rarely lead to affluence.
  • A minority pushed back on fatalism, arguing you can self-fund a writing career by first getting a high‑paying job and saving, which others derided as unrealistic for most people.

AI, “Content,” and Creative Work

  • A large subthread debated why the essay didn’t mention AI, given its obvious relevance to paid writing.
  • One side claimed most of the described gigs are already replaceable “for a fraction of the cost,” predicting anonymous corporate writing (copy, product blurbs, headlines, summaries) will largely move to AI.
  • Others argued managers don’t actually want to prompt and shepherd models, that prompts themselves require writing skill, and that writers will still be needed—especially where error rates must be near zero or where lived experience and voice matter.
  • Several writers described AI use as equivalent to strikebreaking, given training on unlicensed work and the devaluation of human creativity.
  • There was broad agreement that personality, identity, and parasocial connection will matter more: writers who build a recognizable, human brand (often via video) may survive even as generic “content” is automated.

Disability, Ethics, and Compromise

  • Commenters discussed the author’s chronic condition: some empathized deeply; others argued about what “counts” as disability and how openly one should frame it.
  • The ethics of specific writing work (horse-racing coverage, “reputation management,” erotica, scammy marketing) were debated; several noted the tension between survival and moral discomfort, praising the author’s honesty about that tradeoff.

Why Algebraic Effects?

Motivation & “Why” Algebraic Effects?

  • Several commenters feel the article doesn’t clearly justify why effects are better than existing tools (DI frameworks, mocks, monads).
  • Proponents argue main benefits are ergonomics and explicit control over side effects: easier testing, sandboxing, and capability-style APIs without heavy frameworks or pervasive parameter threading.
  • Skeptics ask how this beats existing practices enough to justify major investment in mainstream languages.

Relation to Dependency Injection & the Color Problem

  • Effects are framed as “DI in the language”: call pure-looking code whose side effects are provided by handlers higher in the stack (e.g., production vs test handlers).
  • This can replace DI containers / global context for things like loggers, DB, etc.
  • On the “what color is your function” issue:
    • Supporters say effect polymorphism collapses many “colors” into one system, making functions compatible unless you explicitly restrict effects.
    • Others argue you still get two worlds—effectful vs pure—and possibly many different effects (DB, FS, network…), so colors multiply unless effect polymorphism and inference are very good.
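The “DI in the language” framing above can be mimicked in ordinary Python with a handler stack: pure-looking code performs a named effect, and whichever handler is installed nearest on the dynamic stack decides what it means. This is only a simulation sketch with invented names, not real algebraic effects (there is no continuation capture).

```python
# Sketch of "DI in the language": callers install handlers for named effects,
# and pure-looking code performs effects without knowing who handles them.
# Invented API; a real effect system would also capture continuations.

_handlers: list[dict] = []

class Handle:
    def __init__(self, **handlers):
        self.handlers = handlers
    def __enter__(self):
        _handlers.append(self.handlers)
    def __exit__(self, *exc):
        _handlers.pop()

def perform(effect, *args):
    """Dispatch to the nearest enclosing handler for `effect`."""
    for frame in reversed(_handlers):
        if effect in frame:
            return frame[effect](*args)
    raise RuntimeError(f"unhandled effect: {effect}")

def greet(name):                    # "pure-looking" business logic
    perform("log", f"greeting {name}")
    return f"hello {name}"

# Production vs. test handlers, chosen by the caller, not by greet():
with Handle(log=print):
    greet("alice")                  # logs to stdout

captured = []
with Handle(log=captured.append):   # test handler records instead
    greet("bob")
print(captured)  # ['greeting bob']
```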

Effects vs Monads / Error Typeclasses

  • Some note strong similarity to monadic error abstractions (MonadError, “free”/freer monads, mtl-style constraints) and claim languages already enjoy these “algebraic effects” today.
  • Counterpoints:
    • Algebraic effects + handlers give direct-style syntax, dynamic installation/overriding of handlers, and easier composition of many effects without transformers or n² typeclass boilerplate.
    • Effects operate on the actual stack (often via delimited continuations), enabling resumable exceptions, multi-shot continuations, backtracking, etc., which are awkward or costly to simulate monadically.
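The “resumable” part (the handler computes a value and the suspended computation continues with it) is what ordinary exceptions lack; Python generators give a one-shot approximation. All names below are invented for illustration, and multi-shot resumption would need real continuation capture:

```python
# One-shot approximation of a resumable effect using a generator: the
# computation yields an effect request and is resumed with the handler's
# answer via send(). Invented names; multi-shot resumption (re-running the
# rest of the computation) would need genuine delimited continuations.

def run(computation, handlers):
    gen = computation()
    try:
        request = next(gen)                 # run until the first effect
        while True:
            value = handlers[request[0]](*request[1:])
            request = gen.send(value)       # resume with the handler's answer
    except StopIteration as done:
        return done.value

def program():
    # Reads like direct-style code, but "ask" is decided by the handler.
    x = yield ("ask", "x")
    y = yield ("ask", "y")
    return x + y

print(run(program, {"ask": lambda name: {"x": 2, "y": 40}[name]}))  # 42
```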

Expressive Power & Use Cases

  • Cited capabilities: resumable exceptions, generators, coroutines, async/await, backtracking search, probabilistic programming, non-determinism, dependency injection, state, “dynamic variables”, sandboxing, and structured concurrency patterns (racing tasks, cancellation, cleanup).
  • Some see this unification (“one concept for many control flows”) as the main attraction.

Debuggability, Readability & Tooling Concerns

  • Major worries:
    • Harder to see that a call can fail or trigger an effect without inspecting types or tooling.
    • Hard to locate which handler actually runs at a given call site; resolution depends on the dynamic call stack, creating a potential “yo‑yo problem”.
    • Multi-shot continuations and non-local control transfers could be very hard to reason about and debug.
  • Advocates respond that:
    • This is similar to exceptions or high-level DI already in use; benefits and costs are two sides of the same coin.
    • Good IDE/LSP support (effect annotations, “find handlers”, call-graph queries) can mitigate these issues; some research and prototypes exist.

Implementation & Practical Adoption

  • Discussion notes multiple implementation strategies: delimited continuations, segmented stacks, exception-like translations, monadic transformations, capability passing, and aggressive effect specialization.
  • Some equate effects to longjmp-like control, but others clarify that multi-shot resumption and backtracking require more advanced mechanisms.
  • Comparisons are made to Lisp conditions, Smalltalk resumable exceptions, DI frameworks, React Hooks, and effect libraries in functional and TypeScript ecosystems.
  • Several are skeptical that full algebraic effects will become mainstream: perceived complexity, debugging difficulty, extra syntax, and limited visible ROI outside advanced concurrency or highly disciplined FP codebases.

Modification of acetaminophen to reduce liver toxicity and enhance drug efficacy

Perceptions of the Project & Student Achievement

  • Many commenters are amazed by the sophistication of the work for a 17‑year‑old and say it’s at least master’s-level chemistry.
  • Others temper this by noting she did not win the overall competition, and that many finalists are doing similarly advanced work in diverse fields.
  • Several people describe mixed emotions: inspiration, but also feelings of personal inadequacy or “falling short,” prompting a side discussion about whether one must “leave a mark” on the world.

Access, Mentorship, and Fairness in Science Fairs

  • Strong consensus that high-end science fairs are largely about access to labs, equipment, and expert mentorship.
  • Multiple anecdotes: projects done in university labs under close guidance from senior scientists or relatives; students often come from highly academic families.
  • Some argue this doesn’t diminish the students’ effort, but makes clear these are not solo “garage” projects.
  • There’s debate over whether such fairs genuinely advance science or mainly function as college-admissions theater and career-building for organizers.

Technical Discussion of the Chemistry

  • Chemists note the core is a four-step synthesis adding a protecting/functional group to acetaminophen, with an iridium-catalyzed key step.
  • The modified compound is computationally predicted to bind TRPV1 and reduce liver toxicity, but commenters see no in vitro or in vivo validation yet.
  • Questions raised:
    • Are these steps scalable and economical for mass production?
    • Why use silicon, given silicon-containing drugs are generally difficult and may violate Lipinski’s rule of five (too lipophilic)?
    • Whether the molecule’s properties could be tuned (e.g., by additional polar groups).

Patents and Commercial Prospects

  • Some ask if the sponsor has patented the molecule; responses say composition-of-matter patents don’t require efficacy data, but real value would hinge on biological results.
  • Commenters speculate that if expensive, such a drug would target high‑risk patients rather than replace cheap generic acetaminophen.

Acetaminophen, Toxicity, and Alternatives

  • Long subthread on how close therapeutic and toxic doses are, overdose frequency, and the grim nature of liver-failure deaths.
  • Debate over whether acetaminophen should remain OTC, especially given widespread unintentional overdoses from combination products.
  • Discussion of N‑acetylcysteine as an antidote and why it isn’t routinely co‑formulated (taste, side effects, and risk of encouraging higher dosing).
  • Many compare its modest analgesic effect (especially for strong pain) to NSAIDs and opioids, with varied personal responses.
  • Some mention emerging concerns about dementia risk and subtle psychological effects, versus known GI/cardiovascular risks of NSAIDs.

Pain Management, Morphine, and End‑of‑Life Care

  • Several comments pivot to opioids: morphine’s role in palliative care, overdose risks via respiratory depression, and side effects like constipation and cognitive dulling.
  • There’s brief discussion of fentanyl as a superior clinical analgesic but socially tainted by illicit use.
  • Some ethical tension: whether escalating morphine at end of life mainly relieves patient suffering or also hastens death for the “benefit” of caregivers.

Root for your friends

Title jokes & double meanings

  • Many expected a technical post about Unix “root,” device rooting, or SSH; others referenced the board game Root.
  • Australian and Kiwi readers noted “root” as sexual slang, finding the title unintentionally funny.

Resonance of the core message

  • Several commenters felt the essay articulated something they’d half‑formed: that consciously rooting for friends is both kind and personally beneficial.
  • A specific line about not trusting anyone with your wins struck a chord; some realized they rarely share successes, yet still expect support.

Jealousy, envy, and emotional work

  • Multiple threads discuss jealousy as common but manageable: you can feel it without acting from it, and practice shifting toward genuine happiness for others.
  • Some describe moving from bitterness to intentionally celebrating others, finding it improves their own well‑being.
  • Others admit to feeling competitive or zero‑sum, then consciously reframing success as non‑threatening.

Praise, humility, and sharing wins

  • A few are averse to praise and avoid sharing wins to dodge the discomfort of being “evaluated.”
  • There’s debate over bragging vs. healthy self‑disclosure: some insist visible pride makes people dislike you; others argue that dimming yourself breeds misery.

Negative bonding and toxic dynamics

  • Several warn against friendships built on shared resentment or gossip, which feel energizing but are corrosive.
  • Some recount “friends” who quietly root for their failure or even sabotage them, and advocate trimming such people.

Gender and cultural perspectives

  • Some perceive men as particularly prone to adversarial, grudge‑holding behavior compared to more overt “hype” among some women’s groups.
  • Others note cultural differences in the meaning of “friend,” contrasting instrumental, career‑oriented ties with deep, expectation‑light friendships.

Workplace implications

  • Many endorse celebrating coworkers’ wins, calling out invisible contributions, and “punching up” praise to managers as both kind and career‑enhancing.
  • Several refuse to write negative peer reviews, citing layoff trauma and HR systems that weaponize isolated criticism.

Skepticism and darker views

  • A minority argue that jealousy is inevitable, rising‑tide thinking is naïve, or even that “true friendship” doesn’t exist.
  • Others push back, stressing boundaries, selective pruning, and investing in people who reliably root for you.