Hacker News, Distilled

AI-powered summaries for selected HN discussions.

Page 331 of 363

How dairy robots are changing work for cows and farmers

Enthusiasm and robot design

  • Commenters are fascinated by the ecosystem of barn robots: milking arms, feed pushers, and “manure Roombas,” with lots of joking about naming (“Poopoombas,” “moombas”).
  • People like that designers explicitly treated cows as end users, tuning behaviors (e.g., poop robots had to be more assertive so cows wouldn’t bully them).

Cow behavior and welfare

  • Several note that robotic milking lets cows “self-schedule”: they go when udders are uncomfortably full, associating the robot with pain relief and treats.
  • Autonomy is widely assumed to improve welfare and milk yield; some share anecdotes that happier cows produce more and higher-quality milk.
  • Debate over whether cows prefer pasture or barns: some say they mostly care about herd size, feed, and comfort; others cite visual evidence of obvious joy when cows are released to pasture.
  • Question of whether cows care about human vs robot interaction gets a mixed, anecdotal answer: varies by individual cow; many are indifferent.

Manure and barn automation

  • The stat that a cow produces ~68 kg of manure per day surprises many; discussion clarifies most of that is water and is removed while still wet.
  • People describe the scale of manure infrastructure: scrapers, trenches, pits, and pumps—robots are seen as a natural fit here.

Economics, scale, and labor

  • Robots have existed for years and are sold by multiple companies; some very large farms (thousands of cows) use them, though rotary parlors with cheap labor are still common.
  • Robots can raise feed intake, reduce disease, and milk more often, boosting yield and convenience, but they’re expensive capital equipment competing with low-wage, unpleasant human jobs.

Ethical concerns and “dark dairies”

  • One line of discussion fears “dark dairies” with minimal welfare and no light.
  • Counterarguments: milk production is tightly linked to cow comfort; highly stressed cows underperform, so extreme neglect is likely unprofitable.
  • Others argue economics can still push toward “good enough” but miserable conditions, so consumer pressure and welfare laws matter.

Reliability, downsides, and article tone

  • Some think the article reads like a single-vendor ad, lacking discussion of failures, costs, or competing products.
  • Farmers report real-world issues: hardware breakdowns at all hours, constant alarms, and stress; early adopters sometimes reverted to conventional milking.
  • Newer systems are believed to be better, but automation is framed as changing, not eliminating, labor.

Future directions

  • Several speculate about skipping cows entirely via bioreactors or “plant-based milk,” and jokingly extend the automation metaphor to human care and “AI overlords.”

What the hell is a target triple?

Use of “anachronism” and compiler history

  • Commenters dispute the intro’s use of “anachronism”: some say it’s technically correct for toolchains that only do native builds; others object that single‑target toolchains still exist and aren’t obviously historical relics.
  • Several emphasize cross‑compilers have existed since mainframes; “compilers only for the host” was never universally true.

GCC vs LLVM and cross‑compiling

  • Big contrast drawn between LLVM/Clang’s “one binary, many targets” and GCC’s per‑target binaries; some agree GCC’s model feels archaic and user‑hostile.
  • Others argue GCC’s modularity is deliberate and scales better when you need many exotic targets without installing dozens of unused backends.
  • There’s extensive criticism of the article’s strong anti‑GCC tone; people stress GCC’s historical importance and continued dominance in many distros.
  • Historical tidbit: LLVM was once offered to the FSF, but the offer effectively went unread in Stallman’s inbox, a miss that might have altered GCC’s trajectory.

Target triples: design, origin, and messiness

  • Several comments clarify that “triples” originated in GNU config (config.guess/config.sub), not LLVM; LLVM’s scheme is one variant among many.
  • Detailed breakdowns list as many as seven logical components: arch, vendor, kernel/OS, libc/userland, ABI, float ABI, and object format; LLVM compresses several of these into its “environment” field.
  • Canonical vs non‑canonical triples, vendor field semantics, and Linux’s libc component all contribute to confusion.
  • Many agree the system is mostly accreted hacks: backwards‑compatibility, aliasing, and differing normalizations between projects (LLVM, GNU, Rust, etc.) make triples hard to reason about.
  • Some propose throwing triples away in favor of explicit structured parameters or per‑project fixed target lists (as Rust does).
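The accreted messiness is easy to demonstrate with a toy parser (a sketch only; the `KNOWN_VENDORS` set and the fallback rules are invented for illustration, and real implementations like GNU `config.sub` carry far larger alias tables):

```python
# Toy sketch: split a target triple into (arch, vendor, os, env).
# A three-part "triple" is ambiguous: it may omit the vendor
# (arm-linux-gnueabihf) or the environment (aarch64-apple-darwin),
# which is exactly why real parsers need heuristics and alias tables.
KNOWN_VENDORS = {"unknown", "pc", "apple"}  # illustrative, not exhaustive

def parse_triple(triple: str):
    parts = triple.split("-")
    if len(parts) == 4:
        arch, vendor, os_name, env = parts
    elif len(parts) == 3 and parts[1] in KNOWN_VENDORS:
        arch, vendor, os_name = parts        # e.g. aarch64-apple-darwin
        env = ""
    elif len(parts) == 3:
        arch, os_name, env = parts           # e.g. arm-linux-gnueabihf
        vendor = "unknown"
    else:
        raise ValueError(f"don't know how to split {triple!r}")
    return arch, vendor, os_name, env
```

For example, `parse_triple("x86_64-unknown-linux-gnu")` yields `("x86_64", "unknown", "linux", "gnu")`, while `parse_triple("arm-linux-gnueabihf")` has to guess that the missing field is the vendor.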

ELF sections/segments and linker behavior (tangent)

  • Discussion of an earlier linker‑script post critiques its GCC bias and omission of segment (program header) details.
  • Debate over whether sections and segments are “the same concept”: consensus is they’re related metadata but serve different roles (development vs runtime).
  • Practical issues around embedding data into ELF, ensuring it’s mapped by LOAD segments, and using linker scripts vs custom tools are explored in depth.
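The sections-vs-segments distinction shows up directly in the ELF header, which stores separate counts for each view. A minimal sketch (field offsets follow the 64-bit little-endian ELF layout; error handling is elided):

```python
import struct

def elf_header_counts(header: bytes):
    """Read (segment_count, section_count) from a 64-bit LE ELF header.

    Segments (program headers) tell the loader what to map at runtime;
    sections (section headers) describe the file for linkers and
    debuggers. They are related metadata, but distinct tables.
    """
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    e_phnum = struct.unpack_from("<H", header, 56)[0]  # program header count
    e_shnum = struct.unpack_from("<H", header, 60)[0]  # section header count
    return e_phnum, e_shnum
```

On a Linux machine, `elf_header_counts(open("/bin/ls", "rb").read(64))` typically shows around a dozen segments against several dozen sections, which is the ratio the debate above is about.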

Endianness

  • Observations that almost all modern mass‑market hardware is little‑endian; many developers happily static_assert(LE) and ignore BE.
  • Others warn this is risky for niche or legacy platforms (IBM Power/AIX, SPARC/Solaris, some ARM/MIPS modes, LEON in space, etc.), though such systems are rare.
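The `static_assert(LE)` habit has a direct analogue in other languages; this sketch uses Python's `struct` to show what actually differs between the two byte orders, and why code that reads raw little-endian bytes misbehaves on a big-endian host:

```python
import struct
import sys

value = 0x01020304
le = struct.pack("<I", value)   # little-endian: least significant byte first
be = struct.pack(">I", value)   # big-endian: most significant byte first

assert le == b"\x04\x03\x02\x01"
assert be == b"\x01\x02\x03\x04"
assert le == be[::-1]           # same value, reversed byte order

# Naively reinterpreting little-endian bytes as big-endian garbles the value:
assert struct.unpack(">I", le)[0] == 0x04030201  # not 0x01020304

# The runtime analogue of static_assert(LE):
assert sys.byteorder == "little", "this code assumes a little-endian host"
```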

Go, Zig, Rust and cross‑compilation ergonomics

  • Several Go developers defend Go’s GOOS/GOARCH scheme: it ships cross‑compilers out of the box, avoiding most of the “toolchain wiring” pain; mismatch with traditional triples is seen as a small price.
  • Zig is praised as the only toolchain that truly “just cross‑compiles,” including controlling glibc versions; commenters note that triples don’t encode glibc version, so they’re underspecified for serious Linux cross‑targeting.
  • Rust’s explicit target list and JSON target specs are cited as a saner, more structured layer over LLVM’s triples.

Linux, libc, and “no system libraries”

  • One thread argues Linux’s stable syscall ABI makes libc‑free static binaries viable; DNS and NSS issues are characterized as user‑space/glibc choices, not kernel constraints.
  • Others push back that, in practice, dropping glibc often breaks expectations (e.g., name resolution) and that alternative implementations sometimes violate standards.

Windows ARM64EC vs Rosetta

  • Multiple commenters strongly object to the article’s dismissal of Windows ARM64EC and praise it as a more flexible, incremental, and compatibility‑preserving design than Apple’s Rosetta approach.
  • They argue ARM64EC avoids fat‑binary bloat, allows fine‑grained porting, and better preserves UI and framework evolution.
  • There’s disagreement over how serious Rosetta 2’s real‑world drawbacks are; some maintain it’s effectively transparent for most users.

Naming bikesheds (x86_64 / amd64 / x64 / “quadruples”)

  • Debates over whether to prefer x86_64, amd64, or x64: the article’s prescriptive stance (“no one calls it x64”) is contradicted by several commenters.
  • Some defend amd64 as easier to type and widely used by distros; others prefer x86_64 as architecturally clearer.
  • The insistence on calling multi‑component identifiers “triples” is mocked; “tuple/moniker” is suggested as more accurate.

Perception of the article and blog

  • Many praise the technical depth, clarity on triples, and overall blog quality and design (especially the side preview), while noting performance quirks.
  • Others are put off by what they see as condescending language, factual slips (e.g., x32 status, protected mode history), and a harsh, dismissive stance toward GCC and some platform choices.

The last RadioShack in Maryland is closing

Nostalgia, but with mixed memories

  • Many recall RadioShack as their entry point into electronics and computing: TRS‑80 demos, Forrest Mims books, pegboard components, and helpful staff who let kids tinker.
  • Others stress it was often mocked as low‑quality and overpriced even then (“only game in town”), with things like $30 aux cables and Monster cables cited.
  • Several note that what people truly miss may be “being young” and having a local place to explore, more than the store’s actual quality.

From components to cell phones – and decline

  • Widely shared view: the store was worthwhile mainly for components, tools, kits, and repair gear. Once that was sacrificed for phones, toys, and impulse gadgets, the chain “went downhill fast.”
  • Former employees describe a heavy push toward cell phone contracts, batteries, upsells, and aggressive commission structures, making it unpleasant to work there and useless for hobbyists.
  • Some see leadership failure: RadioShack could have owned the “maker” era (Raspberry Pi, 3D printing, hobby robotics) but instead tried to be a mini–Best Buy.

Economic and technological forces

  • Online component pricing (AliExpress, Mouser, DigiKey) and bulk buying made $1.99 single transistors unsustainable.
  • Mall rents and corporate debt compounded the problem.
  • Fewer people repair electronics; devices use dense surface‑mount parts and are often disposable. Smartphones collapsed the demand for many categories RadioShack once sold (stereos, media players, landlines, etc.).

Online vs brick‑and‑mortar, and labor ethics

  • Several mourn the loss of physical spaces for browsing, learning, and social interaction (Fry’s, GameStop, telescope and camera shops).
  • Others emphasize consumer behavior: people say they value stores but overwhelmingly choose cheaper online options, especially when budgets are tight.
  • Debate arises over judging Amazon use:
    • One side criticizes Amazon’s labor practices and broader societal “higher‑order effects.”
    • Others point out similar issues across retail/logistics and in overseas manufacturing, and question why Amazon is singled out.

Surviving niches and alternatives

  • Micro Center is repeatedly cited as a successful niche: packed stores, knowledgeable staff, decent component aisles, and same‑day access to parts and PCs.
  • Bay Area and other regional holdouts (Anchor Electronics, Electronics Plus, Urban Ore, hobby shops) are mentioned, but many have closed.
  • Canadian commenters describe a similar arc: RadioShack → The Source → Best Buy Express, with the brand long associated with junk and high prices.

Culture, status, and recognition

  • Some see classism/credentialism in the article’s anecdote about a long‑time repair worker denied an official title for lack of college, emblematic of broader gatekeeping. Others say the sexism angle is unproven.
  • There’s broad agreement that retail jobs once provided accessible first‑job experience and that devaluing workers (and withholding recognition) contributes to apathy in the sector.

Odds and ends

  • Memories of the “Battery Club” and free monthly batteries surface repeatedly.
  • Several note privacy concerns about RadioShack’s long‑standing insistence on collecting phone numbers.
  • A few point out that the RS brand still persists in fragmented form: scattered U.S. independents and stores in Latin America and elsewhere.

I speak at Harvard as it faces its biggest crisis since 1636

Interest in the talk itself

  • A few commenters note that, amid the political crisis, the advertised lecture topic (limits of rational perception, computability vs. knowability) sounds especially compelling and will be streamed and recorded.

Harvard’s wealth, endowment, and “war chest”

  • Some argue Harvard is effectively a $50B fund with a university attached; losing federal research money won’t threaten its survival, only bloated administration.
  • Others push back: endowment money is less flexible than it looks (legal restrictions, donor intent), and assuming it can easily be redeployed for political battles is misleading.
  • There’s debate over whether endowments should ever be tapped for “non-specified” purposes in a systemic crisis.

Is the Trump letter normal conditionality or authoritarian overreach?

  • One camp calls the federal letter blatant overreach: government acting as Harvard’s HR department, demanding abolition of DEI while mandating “viewpoint diversity,” auditing admissions and hiring, and conditioning existing grants on new ideological terms.
  • They describe this as contract-breaking, executive blackmail, and part of a pattern of refusing to honor commitments.
  • Others insist taxpayers are not obligated to fund Harvard “no matter what,” and see the conditions as a legitimate response to perceived ideological capture or “communist/socialist rhetoric.”

Should public money fund private universities at all?

  • A substantial subthread argues that private universities should not receive federal research funding or enjoy tax exemption, especially given their real-estate wealth.
  • Counterarguments: US research has long depended on private universities; excluding them would harm scientific progress and is a poor, non-merit-based allocation of funds.
  • Some take this further, suggesting ending most public payments to private entities; critics call that unworkable and “hilariously silly.”

Antisemitism, Israel, and pretexts

  • Many agree the crackdown is not really about antisemitism but about seizing ideological control of elite institutions; antisemitism is seen as a convenient pretext.
  • Others contend elite universities historically have antisemitism problems and have failed to protect Jewish students; they see genuine issues but also an overcorrection.
  • A dissenting view argues that US campuses, especially Harvard, are among the least antisemitic places and that conflating criticism of Israel with antisemitism helped legitimize today’s assault on academia.

DEI, academic freedom, and hypocrisy

  • Some commenters emphasize prior illiberal trends within academia: DEI loyalty oaths, ideological hiring filters, suppression of disfavored research topics, and poor free-speech records; they view universities as reaping what they sowed.
  • Others maintain that whatever internal problems exist, government-compelled speech codes and hiring mandates in the opposite direction are worse, and the real principle should be keeping the state out of ideological governance entirely.

“Burn it down and rebuild” vs. reform

  • A long subthread explores the idea (popular in some tech circles) of cutting off federal loans, research funds, and tax exemptions to push existing universities into collapse and then “rebuild” new institutions.
  • Critics warn this would cause brain drain, damage US science, and likely yield more ideological, lower-quality schools.
  • Some favor more modest structural reforms instead (e.g., changing accreditation, spreading research funding beyond a few rich elites, limiting grant concentration).

It's easier than ever to de-censor videos

Line-scan imaging and everyday analogues

  • Several comments connect the demo to line-scan cameras and slit-scan techniques used in industrial vision systems and sport photo finishes.
  • People note you can approximate the “traveling slit” reconstruction with your own eyes by moving past gaps (e.g., bathroom stall doors), sparking a tangent about US vs European stall design and privacy.
  • Rolling shutters and old film shutters are cited as related “moving slit” exposure mechanisms.

Blur, pixelation, and information leakage

  • Multiple commenters stress: blur and naive pixelation rarely remove information; they mostly redistribute it. Deconvolution or search over candidate texts can often recover content, especially with known fonts and UI.
  • Blur is closer to an invertible convolution; pixelation is likened to a weak hash that can be brute-forced in small regions.
  • Larger block sizes and more noise make attacks harder but not always impossible, especially with priors (likely filenames, words, etc.).
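The "weak hash" framing can be made concrete with a deliberately tiny 1-D model (character codes stand in for pixel intensities; the filenames and candidate list are invented for illustration):

```python
def pixelate(text: str, block: int = 2):
    """Toy 1-D 'pixelation': average character codes over fixed blocks."""
    codes = [ord(c) for c in text]
    return tuple(
        sum(codes[i:i + block]) / len(codes[i:i + block])
        for i in range(0, len(codes), block)
    )

def brute_force(censored, candidates, block: int = 2):
    """Recover hidden text by pixelating every candidate and comparing.

    This works precisely because pixelation is deterministic and the
    candidate space (dictionary words, likely filenames, a known font
    and UI) is small enough to enumerate.
    """
    return min(candidates, key=lambda c: sum(
        (a - b) ** 2 for a, b in zip(pixelate(c, block), censored)))

secret = pixelate("secret_key.pem")
guesses = ["secret_key.pem", "public_key.pem", "hello_world.txt"]
```

Here `brute_force(secret, guesses)` returns `"secret_key.pem"`: the averages lose detail, but with a small prior they still uniquely identify the input.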

Practical redaction techniques and common failures

  • Strong consensus: if you really need to hide something, you must destroy the original information, not just cover it visually.
  • Recommended patterns: solid opaque shapes, then re-screenshot or print-scan; or rasterize PDFs and verify no text remains.
  • Many historical failures are mentioned:
    • Image formats keeping old data (aCropalypse, leftover buffers).
    • Embedded thumbnails or previews not updated after cropping or censoring.
    • PDFs where text is only “black-highlighted” but still selectable.
    • Font metrics and character positioning leaking names even under black boxes.
  • Some suggest replacing real content with fake/lorem ipsum, then applying blur/pixelation for aesthetics.

Video-specific issues and mitigations

  • The key vulnerability in the article’s example is movement of text under a fixed pixelation grid: multiple frames act like many measurements of the same underlying signal.
  • Suggested mitigations:
    • Pixelate once, then overlay a static censored screenshot on all frames.
    • Use pure-color masks or fake-looking but uncorrelated pixelation.
    • Add deliberate noise or scramble patterns, though practicality is debated.
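The multiple-measurements point can be shown with a minimal 1-D sketch (the four pixel values are made up): when the underlying content scrolls under a fixed pixelation grid, the shifted block averages from a second frame are enough to solve for every original pixel exactly.

```python
# Hidden 1-D "text" strip of 4 pixel values (illustrative numbers).
signal = [10, 20, 30, 40]

# Frame 1: pixelated on the fixed grid (blocks of 2).
f1 = [(signal[0] + signal[1]) / 2, (signal[2] + signal[3]) / 2]

# Frame 2: the content has scrolled one pixel, so the same grid now
# averages different pairs, and the edges land in partial blocks.
f2 = [signal[0], (signal[1] + signal[2]) / 2, signal[3]]

# The attacker never sees `signal`, only f1 and f2, yet the two
# sets of averages pin down every pixel:
s0 = f2[0]
s1 = 2 * f1[0] - s0
s2 = 2 * f2[1] - s1
s3 = f2[2]
assert [s0, s1, s2, s3] == signal
```

Pixelating once and pasting the same static censored patch over every frame removes exactly this extra information, which is why it is the recommended mitigation.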

Ethical and legal concerns

  • AI “decensoring” of Japanese porn is discussed: some see it as merely generative porn, others call it “deeply unethical” when it violates performers’ expectations or local law context.
  • Broader concern: advances in de-anonymization threaten blurred faces/voices in older investigative journalism; French public TV reportedly moved to actors and back-shot filming and has pulled some archival material.

Historical and technical context

  • Commenters argue that multi-frame deblurring, blind deconvolution, superresolution, and similar techniques have existed for decades (e.g., astronomy, biomedical imaging); what’s new is accessibility and tooling, not the core math.

Generate videos in Gemini and Whisk with Veo 2

Creative potential and “one‑person movie” debate

  • Many see Veo 2 as a big leap: 8‑second, high‑quality clips open the door to solo or tiny‑team films.
  • Some predict a single‑creator, AI‑assisted movie grossing $100M soon; others argue distribution, marketing, and IP barriers make that unlikely.
  • Existing near‑examples (like small‑team animated films and AI shorts such as “Kitsune”) are cited as proof of the trajectory, not full realizations.

Economics, distribution, and discovery

  • Even if production costs approach zero, attention remains scarce: users expect a YouTube/TikTok‑like world with vast slop and a few breakout hits.
  • Success is expected to remain Pareto‑distributed: story, branding, and marketing still dominate, not pure technical capability.
  • Platforms that best surface gems from massive AI output are seen as the real power centers.

IP, copyright, and style cloning

  • Discussion of US law: purely AI output isn’t copyrightable, but human editing/selection can create a protectable work.
  • Many expect laws to change as industry adopts AI.
  • Ghibli‑style marketing examples raise ethical concerns about training data and derivative “soul‑less” mimicry.

Art, taste, and authenticity

  • Some find early AI films exciting, rough, and more “human” than overly polished studio output; others dismiss them as amateurish, cliché, and depressing for real artists.
  • Debate over whether people care more about authenticity/authorial intent versus entertainment value.

Technical limitations and workflow pain

  • Major complaints: 8‑second cap, character inconsistency, low resolution, cost per minute (one user burned $48 on a dozen clips), and high rejection rate from content moderation.
  • Text‑to‑video is described as emotionally draining: endless prompt tweaks, slow feedback, results far from intent, little sense of authorship.
  • Users want more controllable pipelines (sketches, paths, keyframes; “…‑to‑3D‑scene”; integration with tools like Blender/DAWs).

Whisk, Imagen 3, and access issues

  • Whisk uses “prompt transmutation” (image → text description) rather than true latent image encoding; some speculate legal/safety, not technical, reasons.
  • Access is patchy: regional blocks (GDPR concerns), paid tiers, broken UI/rollouts, and confusing product overlap (Veo vs Google Vids).

Google’s role in the AI race

  • Some frame Google as an “embrace, extend, extinguish” giant; others note it pioneered the core transformer tech and now benefits from in‑house hardware (TPUs).
  • There’s praise for recent Gemini 2.5/Veo progress but frustration that product UX lags far behind the underlying models.

Benn Jordan's AI poison pill and the weird world of adversarial noise

Scope of “Learning” and IP Rights

  • One camp argues artists should not control how others “learn” from published works; any data use, including AI training, should be allowed once something is public.
  • Others see a clear distinction between humans learning and corporations training models for profit, and view unconsented AI training as a new form of exploitation.
  • Several participants stress that current law protects copying, performance, and distribution, not “learning” as such, and that analogies between human and machine learning are legally weak.

Radical Anti‑IP vs Reformist Positions

  • A vocal minority advocates abolishing IP entirely: no copyrights, no control over remixing or commercial reuse, and acceptance that corporations could freely profit from all published works.
  • Critics counter that this primarily benefits large platforms, further weakens already precarious creators, and would gut many knowledge‑ and R&D‑heavy industries.
  • A more common middle ground favors shorter copyright terms and narrower rights, while preserving some exclusivity to incentivize creation and prevent outright plagiarism or fraud.

How Should Artists Get Paid?

  • Multiple comments note that royalties and streaming payments are already negligible for most musicians; many rely on touring, merch, sponsorships, patronage, or basic income‑style support.
  • Some argue YouTube‑style models (free content, monetized via sponsorship and fan support) show that creators can earn without strong IP; others respond that this favors “personality” content and doesn’t scale to all art forms.

Adversarial Noise / “Poison Pills” Against AI

  • Technically minded posters are skeptical that adversarial perturbations will work long‑term as a protection strategy:
    • Attacks often don’t transfer well across models, architectures, and preprocessing pipelines.
    • Data cleaners can add noise, denoise, filter inaudible spectrum, or resynthesize audio, stripping many perturbations.
    • Once a defense is broken, all previously “protected” content becomes retroactively vulnerable.
  • Some see these methods as symbolic protest or a temporary cost‑imposer on model trainers; others warn that overselling weak defenses misleads artists into a false sense of security.
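Both halves of the skeptics' argument, that perturbations are easy to craft against a known model but fragile against a different one, fit in a toy FGSM-style sketch on a linear "classifier" (all weights and inputs are invented numbers):

```python
def score(weights, x, bias=0.0):
    """Linear 'classifier': positive score = class A, negative = class B."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

w_target = [0.5, -1.0, 0.8]          # the model the attacker knows
x = [1.0, 0.2, 0.1]                  # clean input, scores 0.38 (class A)

# FGSM-style step: nudge each coordinate against the sign of its
# weight, the direction that decreases the score fastest.
eps = 0.5
x_adv = [xi - eps * (1 if w > 0 else -1) for w, xi in zip(w_target, x)]

assert score(w_target, x) > 0        # clean input: class A
assert score(w_target, x_adv) < 0    # perturbed input: flipped to class B

# But the same perturbation need not transfer to a different model:
w_other = [-0.2, -1.0, 1.5]
assert (score(w_other, x) > 0) == (score(w_other, x_adv) > 0)
```

An imperceptible (small-`eps`) change flips the known model's decision, yet a retrained or differently weighted model, or a preprocessing step that denoises the input, can leave the "protection" inert.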

Ethics, Politics, and Centralization

  • There’s debate over whether opposing IP in this context aligns more with empowering creators or with the interests of large tech firms seeking cheap training data.
  • Several comments frame generative AI not as “liberating creativity” but as centralizing cultural production inside expensive, proprietary models owned by a few corporations.

Decreased CO2 during breathwork: emergence of altered states of consciousness

Mechanisms and Physiology

  • Several commenters affirm the OP’s “secular” summary: hyperventilation → lower CO₂ (respiratory alkalosis), plus rhythmic diaphragmatic movement and focused attention → euphoria, trance‑like states, cognitive shifts.
  • More detailed explanations: low CO₂ causes vasoconstriction and respiratory alkalosis; in alkalosis, albumin binds more calcium, lowering ionized calcium, altering neuronal excitability, and producing tingling and muscle cramps (tetany), especially in the hands and face.
  • Key distinction made between:
    • Hyperventilation‑driven low CO₂ (breathwork).
    • Apnea/freediving‑driven low O₂/high CO₂.
  • One thread notes that lower end‑tidal CO₂ in exhaled air isn’t a direct readout of tissue CO₂, and CO₂ has important physiological roles (Bohr effect), not just “waste gas.”
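The alkalosis step can be made concrete with the Henderson–Hasselbalch relation for the bicarbonate buffer (standard physiology, stated here as a sketch of the mechanism rather than a claim from the thread):

```latex
\mathrm{pH} \;=\; 6.1 \;+\; \log_{10}\frac{[\mathrm{HCO_3^-}]}{0.03 \times P_{a\mathrm{CO_2}}}
```

Hyperventilation blows off CO₂ and lowers $P_{a\mathrm{CO_2}}$; bicarbonate adjusts only slowly, so the ratio inside the logarithm grows and blood pH rises, which is the respiratory alkalosis the commenters describe.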

Safety and Risks

  • Tetany is described as uncomfortable but generally transient; explicit warning that people with epilepsy should avoid this kind of breathwork.
  • Freediving studies are cited suggesting possible mild, persistent cognitive impairments after years of extreme apnea training; commenters stress this is a different mechanism from hyperventilation‑based practices.
  • Strong warnings from divers and freedivers against pre‑dive hyperventilation: it delays the CO₂‑driven urge to breathe without increasing O₂ much, increasing risk of blackout.
  • Some worry about nitrous oxide use and hypoxia; others counter that nitrous’ primary effects are pharmacologic, not just from oxygen deprivation.

Subjective Effects and Use Cases

  • Many anecdotes of holotropic, Wim Hof, and yogic breathwork producing intense states: euphoria, trance, emotional release, “not far off psychedelics.”
  • Others report primarily unpleasant effects (lightheadedness, cramps, insomnia, “hard to think”) or find it difficult to sustain intense breathing outside guided group settings.
  • Surfers and freedivers describe training to remain calm and control breathing under extreme stress (hold‑downs, “washing machine” sets), framing it as almost “ego death.”

Meditation, Rationality, and Tradition

  • Debate on why breathwork/meditation are popular in atheist circles: some see tension with “rationality,” others emphasize strong empirical support (especially for meditation) for mood, attention, and executive function.
  • Clarifications that meditation/breathwork are not inherently religious; they can be treated as “exercise for the brain.”
  • Multiple comments connect breathwork to long‑standing practices: pranayama, tummo, Vipassana, and Buddhist psychology; several assert that modern science is “catching up.”

Indoor Air and Environment

  • A substantial subthread argues that chronic exposure to stale, high‑CO₂/poor‑quality indoor air subtly worsens mood and cognition.
  • CO₂ is framed mainly as a proxy for overall ventilation; commenters also stress VOCs, PM₂.₅, mold, and radon.
  • Discussion of consumer air‑quality monitors (CO₂, VOC, particulates), their cost, sensor quality, and calibration, with disagreement over how much CO₂ itself matters at typical household levels.

OpenAI is building a social network?

Motives and Strategy

  • Many see this as a data play: own a constant stream of fresh human interactions instead of paying or being locked out by X, Meta, Reddit, etc.
  • Others think it’s mainly about finding a new business model and ad inventory as model improvement plateaus and costs stay high.
  • Some frame it as rivalry or retaliation against existing social/AI players, not a carefully considered core strategy.
  • A few argue OpenAI needs more “distribution” and a daily destination, not just an API and chat box.

Proposed Value & Product Concepts

  • Optimistic ideas:
    • An AI-powered “most human” social network that filters bots/spam/AI junk and surfaces real people.
    • LLM-native group chat where humans and bots collaborate, brainstorm, or co-create.
    • A DeviantArt/Tumblr-like space where OpenAI pays for high‑quality training data.
    • Personal AI filters curating feeds to each user’s explicit notion of “quality.”
  • Skeptical replies question how to define “good” content, prevent gaming with AI outputs, and avoid simple “Grok but here” gimmicks.

Spam, Bots, and Identity

  • Strong interest in using AI to detect spam/bots, but doubt it will be better than current ML.
  • Debate over “proof of human” schemes (passports, Worldcoin-style) vs anonymity and privacy risks; concern about creating surveillance honeypots.

AI Slop, Information Quality, and Ethics

  • Worry about a future where feeds are near‑100% AI-generated “slop,” optimized only for engagement.
  • Split views on summaries/“tl;dr”:
    • Critics say it infantilizes users and spreads hallucinations.
    • Defenders say it enables non-experts to learn quickly, if models are accurate.
  • Fears that AI‑curated social networks will deepen echo chambers and become powerful tools for propaganda and emotional manipulation.

Market Saturation & Adoption Skepticism

  • Many question why anyone would join “yet another Twitter clone,” especially one openly designed to harvest training data.
  • Social-graph lock‑in and user fatigue with new platforms are seen as major obstacles.
  • Some think a bots‑only or bots‑heavy network could be weirdly popular; others say the appeal of social media depends on knowing real humans are on the other end.

Implications for OpenAI and AGI Hype

  • Several commenters read this as a sign OpenAI knows AGI isn’t imminent and is pivoting to more conventional Web 2.0‑style products.
  • Others argue diversification is rational given slowing model gains and fierce competition from other labs.
  • There is visible cynicism: this looks less like a path to “super-intelligence” and more like building an AI‑driven slop feed with ads.

JSX over the Wire

Comparisons to Existing Stacks

  • Many see strong parallels with Inertia.js (Laravel/Rails + React/Vue), HTMX, Hotwire/Turbo, Phoenix LiveView, Fresh, Astro, and Livewire/LiveView-style “HTML over the wire.”
  • Some argue RSC is “PHP/JSF/webforms again,” others see it as the next turn of the spiral: same idea (server-driven UI), but with modern React, streaming, and richer composition.
  • A few note prior art like Facebook’s XHP/Async XHP and KnockoutJS/GraphQL, saying RSC re-explores a well-trodden space with different tradeoffs.

Architecture, REST, and BFF Layer

  • One camp prefers clean JSON APIs and a strict client/server boundary; they see JSX on the server as needless abstraction and tight coupling.
  • Others argue “REST in practice” already degenerates into ad‑hoc view models; RSC + a Backend‑for‑Frontend (BFF) is framed as making that layer explicit and component-shaped.
  • Hypermedia/HATEOAS advocates say the whole mess exists because REST was watered down to “JSON endpoints”; they prefer HTML-as-API (HTMX-style) instead of JSX or JSON UI trees.

Interactivity and Client/Server Split

  • Multiple questions center on how stateful interactions (like buttons, likes, list items) work when part of the UI is server components:
    • Suggested patterns: optimistic updates with client components, selectively re-fetching server trees, or wrapping server components in client “shells.”
    • Critics worry this leads to awkward composition, “use client” as a crutch, and confusion over where event handlers and state should live.
  • Some note RSC doesn’t remove the need for client-side logic; it just moves data-fetching and static parts of the tree.

Complexity, DX, and Article Reception

  • Several commenters find RSC conceptually interesting but see high complexity, especially in Next.js (rendering modes, caching, mental overhead).
  • Others are persuaded by the article’s historical framing and checklist, feeling it’s the clearest justification for RSC so far.
  • Length is contentious: some want a TL;DR; others defend the depth and blame low-quality commentary on people not reading.

Performance, Deployment, and Alternatives

  • Concerns about N+1 data fetching in per-component await patterns; defenders say proper batching/data-loader layers are required regardless.
  • Version skew between client and server during deploys is flagged as a real operational risk; solutions mentioned include skew-protection infra, routing by version, or last-resort reloads.
  • GraphQL appears as an alternative that already lets clients declare data needs; opinions split on whether RSC is redundant, complementary, or a simpler middle ground.
  • Many conclude RSC/“JSX over the wire” is powerful but not universally appropriate; simpler SSR (Django/Rails/HTMX, Inertia) may be better for many apps.
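
The N+1 concern above is usually answered with a batching layer: rather than each component's await issuing its own query, requests made in the same tick are coalesced into one bulk fetch (the DataLoader pattern). A minimal single-file sketch in Python's asyncio, where `fetch_users_bulk` is a hypothetical stand-in for the backend call:

```python
import asyncio

# Hypothetical bulk backend call: one round trip serves many ids.
async def fetch_users_bulk(ids):
    fetch_users_bulk.calls += 1          # count round trips for illustration
    return {i: f"user-{i}" for i in ids}
fetch_users_bulk.calls = 0

class UserLoader:
    """Coalesce load(id) calls issued in the same event-loop tick
    into a single bulk fetch (DataLoader pattern)."""
    def __init__(self):
        self._pending = {}       # id -> Future awaiting a result
        self._scheduled = False

    def load(self, user_id):
        loop = asyncio.get_running_loop()
        if user_id not in self._pending:
            self._pending[user_id] = loop.create_future()
        if not self._scheduled:
            self._scheduled = True
            loop.create_task(self._dispatch())
        return self._pending[user_id]

    async def _dispatch(self):
        # Take the whole batch accumulated so far and resolve it at once.
        batch, self._pending, self._scheduled = self._pending, {}, False
        results = await fetch_users_bulk(list(batch))
        for user_id, fut in batch.items():
            fut.set_result(results[user_id])

async def main():
    loader = UserLoader()
    # Three "components" each await their own data; one bulk call is made.
    return await asyncio.gather(loader.load(1), loader.load(2), loader.load(3))

print(asyncio.run(main()), "round trips:", fetch_users_bulk.calls)
```

Per-component awaits stay ergonomic while the wire traffic collapses to one request per batch, which is the defenders' point: this layer is needed with or without RSC.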

Notion Mail is out

What Notion Mail Is (and Isn’t)

  • Works only as a Gmail client: no mail hosting of its own, no generic IMAP/“normal email” support.
  • Several people were disappointed it’s “just a Gmail wrapper,” not a real provider or standards-based client.
  • Comparisons to Superhuman: similar keyboard-first, focused UI vibe, but Notion Mail is (currently) free or bundled vs Superhuman’s ~$30/month.
  • Some early testers say it looks sleek and feels snappier than Notion’s main app, but still an Electron app and likely not as fast as Superhuman or native clients like Mimestream.
  • Confusion about pricing and AI limits; the AI inbox organization feature appears to require the Notion AI add‑on.

Gmail-Only Strategy and Ecosystem Positioning

  • Speculation that this is a step toward a full productivity suite (email, calendar, docs) to rival Google Workspace/O365, leveraging Notion’s positive brand.
  • Others think directly competing with Google/Microsoft would be nearly impossible due to lock‑in and compatibility expectations; better to integrate tightly instead.
  • Some wonder if the Gmail focus is about increasing acquisition appeal to Google, while others hope Notion instead becomes a true Workspace competitor.

Protocols, Standards, and Alternatives

  • Strong frustration that it doesn’t support IMAP; repeated mentions that IMAP is “nightmare fuel,” but still the standard.
  • JMAP is cited as ideal but with weak adoption. Gmail’s proprietary API is seen as the practical reason so many new clients are Gmail-only.
  • A few commenters highlight IMAP‑centric alternatives (e.g., Marco) as the “real” standards-based path.

AI Features and Hype Skepticism

  • Many dislike the vague “AI” branding; they want concrete descriptions like “summarize threads” or “draft replies,” not just “AI inbox.”
  • Notion’s AI assistant is widely described as underwhelming compared to going straight to GPT‑4.
  • Some want the ability to plug in their own OpenAI‑compatible API keys; others argue Notion is intentionally avoiding deep dependency on external models, even while depending on Gmail.

Performance, UX, and Core-Product Concerns

  • Multiple users complain Notion itself has become slow, memory‑heavy, and unreliable at scale, especially on large databases and on mobile.
  • A faction considers Notion a “toy” that breaks down past ~100 items, unsuited for business‑critical workflows; others say it’s vastly better than SharePoint/Docs/Word for many cases.
  • Complaints that Notion is drifting like Evernote did—adding collaboration/AI bloat while core speed, search, and usability stagnate.

Security, Compliance, and Privacy

  • Several commenters are uncomfortable granting a startup full read access to their email, especially for work accounts subject to compliance and audit.
  • SOC 2 messaging is called out as inconsistent: marketing claims Type 1, while FAQ originally said “not currently SOC 2 compliant,” later updated.
  • Discussion clarifies Type 1 vs Type 2 and notes that a new product often starts with Type 1, though some still view the gap as a maturity signal.
  • Users ask about E2EE and worry that Skiff’s encryption focus was sacrificed because it conflicted with AI-driven features.

Reaction to Skiff Shutdown and Broader SaaS Fatigue

  • Strong disappointment that Notion acquired and shut down Skiff Mail—seen as a superior, privacy‑oriented product—only to launch a Gmail‑only client.
  • Several people express general fatigue: every SaaS eventually adds an email client, task management, and “AI,” leading to overlapping, undifferentiated products.
  • Some former Notion fans report migrating to tools like Obsidian, Linear, and Whimsical due to performance issues, lock‑in, pricing changes, and intrusive banners.

TLS certificate lifetimes will officially reduce to 47 days

Overall reaction

  • Thread is sharply polarized.
  • Supporters see this as a natural continuation of the last decade’s move toward automation and shorter lifetimes.
  • Opponents call it “catastrophic”, unnecessary “security busywork”, and accuse the CA/B Forum and browsers of ignoring operational reality and concentrating power.

Security rationale and claimed benefits

  • Shorter lifetimes reduce exposure from:
    • Mis‑issued or compromised certificates that can’t be reliably revoked.
    • Stale validation data (domain control, organization info, old DCV methods).
    • Long‑lived certs obtained via BGP hijack or past CA incidents.
  • Revocation (OCSP/CRL) is widely seen as broken or inconsistently checked; short lifetimes are framed as a de‑facto revocation mechanism.
  • Some argue this also reduces CA “too big to fail” risk: misbehaving CAs’ impact is naturally time‑limited.

Operational and automation impacts

  • Pro‑change side:
    • “If a human can do it, a machine can” – treat this as a forcing function to adopt ACME, monitoring, and proper PKI hygiene.
    • Believe most sites can use free ACME clients, reverse proxies (Caddy, Traefik), or managed CDNs/ALBs to automate.
  • Anti‑change side:
    • Many products (IIS, F5, NAS, IPMI, older appliances) lack first‑class ACME; workarounds are brittle glue scripts and REST hacks.
    • Large shops already struggle with 1‑year certs; 47‑day cycles increase failure modes (especially across weekends, vacations, long chains of approvals).
    • Multi‑server deployments, non‑HTTP services, and segmented networks complicate automation.
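
The operational arithmetic behind both camps is simple. ACME clients conventionally renew when roughly two-thirds of a certificate's lifetime has elapsed; a back-of-envelope sketch of how cadence changes (the two-thirds fraction and lifetimes are illustrative conventions, not figures from the thread):

```python
# Rough renewal-cadence arithmetic. Renewing at ~2/3 of lifetime is a
# common ACME-client convention (illustrative assumption here).
def renewals_per_year(lifetime_days, renew_fraction=2/3):
    interval = lifetime_days * renew_fraction  # days between renewals
    return 365 / interval

for lifetime in (398, 90, 47, 7):
    print(f"{lifetime:>3}-day certs: renew every "
          f"{lifetime * 2/3:.0f} days "
          f"(~{renewals_per_year(lifetime):.1f} renewals/yr)")
```

Going from ~1.4 renewals a year to ~12 is negligible when automated and painful when each renewal involves a human, an approval chain, or a brittle script, which is exactly the split in the thread.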

Small orgs, legacy systems, and internal PKI

  • Concern that this disproportionately hurts small orgs and “off‑the‑shelf” users, pushing them toward big cloud vendors and managed platforms.
  • Many enterprises are already moving most internal services to private CAs with long‑lived certs; some see that as the desired outcome.
  • Others note running an internal CA and rolling it out across mixed fleets (desktops, phones, containers, IoT) is non‑trivial.

Self‑hosting and “grassroots internet”

  • Some fear this accelerates the decline of DIY hosting and independent sites, making “simple home server + static HTML” less viable and increasing dependence on third‑party platforms.
  • Counterargument: ACME + modern proxies actually make HTTPS easier than the pre‑Let’s‑Encrypt era of expensive, fax‑verified multi‑year certs.

Identity vs encryption and alternatives

  • Long subthread debates whether TLS’s real value is encryption alone or authenticated identity.
  • Several argue unauthenticated encryption (self‑signed, TOFU) is fine for many small/local use cases and browsers are too hostile to it.
  • Others reply that on hostile networks (coffee‑shop Wi‑Fi, ISPs) MITM is the primary threat; without identity, encryption is easily intercepted.
  • Alternatives like DANE/DNSSEC, SSH‑style TOFU, local CAs, and special “intranet” trust models are discussed but seen as poorly deployed or lacking browser support.

Revocation, short‑lived certs, and CT

  • Broad agreement that OCSP and CRLs don’t work well in practice; many clients don’t check them consistently.
  • Short‑lived certs (down to 7 days, even 6‑day options) are presented as a way to bypass revocation altogether at the cost of heavier infrastructure load (CT logs, CAs, HSMs).
  • Some worry about the “purity spiral”: effort poured into certificate hygiene instead of larger, more common security problems.

Governance, incentives, and “endgame” concerns

  • Multiple comments note this change was driven primarily by browsers (especially Apple/Google) with CAs concurring; browsers represent billions of relying parties.
  • Critics argue the decision externalizes costs onto operators while browsers/large CAs bear little downside. Some raise legal‑exposure and central‑control theories; defenders call such claims “silly” or conspiratorial.
  • Speculation on the “endgame”: very short lifetimes (hours or minutes) or even per‑connection online validation is raised and generally dismissed as operationally and audit‑wise untenable, though many expect further reductions after 2029.

How to win an argument with a toddler

Nature and purpose of arguments

  • Many commenters distinguish between “arguments” as cooperative exchanges aimed at insight vs. performative fights for status, validation, or spectacle.
  • Several argue that real mind‑change is rare and slow; arguments mostly refine one’s own views, expose hidden assumptions, or clarify what the disagreement is really about.
  • There’s pushback on the article’s claim that you should “lose” about half your arguments; some say that implies you formed views randomly rather than based on prior evidence or expertise.

Changing minds, identity, and rationality

  • Strongly held views often sit inside personal identity; changing them feels like changing who you are, which makes honest argument hard.
  • Some advocate probabilistic thinking: updating confidence levels instead of flipping from “right” to “wrong.” Others emphasize the need to separate ego from beliefs and treat being corrected as a win.
  • Others warn that extreme openness to changing beliefs can make people more vulnerable to cults and manipulative movements; rationalist circles are cited as an example with both benefits (self‑improvement) and risks (cult-like offshoots).

Talking across political divides

  • Several long subthreads explore how to talk with right‑leaning or MAGA relatives/friends. Tactics mentioned: framing issues in their terms (e.g., permanence of expanded powers), finding shared axioms, “steel‑manning” their position, and treating conversations as long‑term “seed planting.”
  • Others say large parts of the modern right (or left) are not fact‑responsive, rely on propaganda, or operate more like populist or quasi‑religious movements. That view is strongly contested by people who see this as dehumanizing generalization.
  • There’s meta‑critique that calling opponents “toddlers” or fascists can itself be toddler‑like and kills genuine dialogue.

Democracy, Trump, and danger assessment

  • One cluster debates whether US democracy is “teetering.”
    • Some list concrete actions (attempted overturning of an election, abuses of emergency powers, ignoring court rulings, rendition cases, politicized use of law enforcement) as clear danger signs.
    • Others argue similar conflicts between branches and norm violations have happened before, see much of the fear as media‑driven framing, and stress that many citizens interpret the same facts very differently.
  • A recurring theme: if you cannot even imagine how “the other half” reached a different conclusion from the same facts, you may be the “toddler” the article describes.

Online vs offline discourse

  • Several note they almost never change their mind in online arguments but frequently do in person, attributing this to lack of trust, low bandwidth of text, anonymity, and incentives for “slam dunks” rather than understanding.
  • Others counter that online debates can change minds indirectly: you research to rebut someone and end up discovering you were wrong.
  • There’s broad agreement that without good‑faith engagement, argument is pointless; detecting bad faith online is hard.

Actual toddlers and parenting analogies

  • A parallel thread discusses literal toddlers: validation of feelings, offering constrained choices, and focusing on underlying emotions rather than surface demands often “wins” conflicts more effectively than power struggles.
  • Many say this maps to adults: first acknowledge emotional reality and shared goals, then discuss alternate solutions.
  • Some warn that purely transactional “deals” with kids can backfire long‑term, and that children also must learn to accept genuine limits.

Labels and bureaucrats

  • Multiple commenters dislike the article’s lumping of “defensive bureaucrats, bullies, flat‑earthers, agenda‑driven people, and radio hosts” as “toddlers,” arguing it’s polarizing and self‑congratulatory.
  • Others defend criticism of “defensive bureaucrats” who hide behind rules against ethics, while another long comment defends bureaucrats as constrained implementers of messy, politically negotiated rules, not overgrown children.

Philosophy Major Snatched by ICE During Citizenship Interview

Duplicate links and why this story was posted here

  • Commenters note this story had already appeared via CBC/BBC links.
  • The submitter argues Daily Nous is significant because it is a professional-philosophy outlet breaking its usual focus to highlight an “extraordinary” rights issue.

Legal and constitutional concerns about the deportations

  • Several ask why courts have not broadly paused deportations given patterns of detention without charges and rapid removal preventing legal petitions.
  • Some say the “rule of law is compromised” because the executive is openly ignoring court orders, including a recent unanimous Supreme Court decision requiring people to see a courtroom before deportation.
  • Others explain limits of US courts: judges typically can only grant relief to parties before them; standing rules often require harm to have already occurred.
  • There is discussion of habeas corpus and whether quickly flying people out is a way of evading judicial review.
  • One commenter suggests the pattern could fit a RICO-style conspiracy theory targeting officials across states.
  • Another cites doctrine that courts cannot direct the executive’s conduct of foreign relations, raising questions about state responsibility toward people harmed by such policies.

Courts, partisanship, and enforcement

  • Long subthread on “court packing”: many argue the right has systematically filled courts with partisan judges; others respond that both parties do this and that current conservative moves are “restoring balance.”
  • Some point out that even many right-leaning judges appear disturbed by current executive actions, questioning the point of packing courts if their rulings are then ignored.
  • There is a historical tangent about FDR, the Warren Court, and whether current conservative efforts are retaliation for earlier “abuses.”

Immigration, “illegals,” and camp terminology

  • One side insists anyone who entered illegally is “guilty” and rejects use of “concentration camp” for El Salvador’s facility.
  • Others argue the issue is lack of due process, not mere illegality, and characterize secretive mass detention and transfer against court orders as closer to human trafficking than ordinary deportation.

Meta: HN as a venue for political discussion

  • Some say HN is a technical forum and not the place for this; others want “technical” levels of critical thinking applied to politics.
  • There is criticism of perceived anti-political moderation, with claims this itself is a political stance and may tilt right.
  • A counterpoint clarifies there is no formal ban on politics; instead, the community tends to suppress threads likely to devolve into low-quality flamewars.

“Philosophy major” as a narrative hook

  • Commenters question why the headline stresses the person’s major.
  • Replies: it signals an educated, non–working-class profile that many readers can relate to (“it could be me”), suggests a likely legitimate student-visa path toward citizenship, and fits the source (a philosophy-focused site).

Extended tangent: crime, homelessness, and “right-wing impulses”

  • A large branch of the thread shifts to San Francisco crime, homelessness, and statements by prominent tech figures.
  • One side frames calls for stricter enforcement as a “right-wing impulse” that moralizes about poor people’s failings while ignoring systemic and white-collar crime.
  • Others argue people across classes—especially the poor—are harmed by lax enforcement and simply want basic public order without adopting authoritarian models.
  • There is debate over causes of SF’s situation: urban form and climate vs. policing and prosecution changes; the effectiveness of “housing first”; and whether some severely ill people ultimately require inpatient care.

Moral reaction to this specific case

  • At least one commenter who had been skeptical of similar stories finds this case especially compelling after watching the subject’s interview, viewing him as clearly sympathetic and “faultless.”
  • Another suggests the apparent indifference and brutality may be intentional, signaling that authorities “don’t care” and normalizing cruelty as political realism.

You cannot have our user's data

Public data vs. control

  • Some argue that once data is on the public web, you effectively lose control over its propagation; treating public content as non-public is seen as unrealistic.
  • Others push back that “public but not like that” is still meaningful: users can object to large-scale scraping, mass replication, and attention-diverting derivatives even if individual copying is inevitable.
  • Analogies are made to a public square or a restaurant mint bowl: public access doesn’t imply unlimited industrial-scale extraction.

Copyright, law, and jurisdiction

  • Many point to copyright (and Berne) as the legal mechanism for “public but controlled.”
  • Counterpoints: copyright mostly constrains redistribution, not private use; enforcement is hard across borders and with botnets.
  • Some stress that simply adding “no AI training” clauses or licenses is useless without actually litigating; others note real limits on suing actors in places like Russia or China.

Resource abuse and crawlers

  • Broad agreement that blocking badly behaved crawlers (AI-related or not) is legitimate: they can generate huge bandwidth bills and denial-of-service conditions.
  • Some emphasize that LLM crawlers aren’t “the public” when they effectively crowd out human users by saturating bandwidth.
  • There’s frustration that scrapers repeatedly hit mostly static sites with no apparent benefit.

Host neutrality vs. anti-AI stance

  • One side welcomes SourceHut’s explicit ban on ML training use, seeing it as defending users and infrastructure from exploitative “Big Tech.”
  • Another side, including maintainers of permissively licensed projects, dislikes hosts imposing blanket anti-ML terms on code they don’t own; they want maximum visibility, including via LLMs.
  • Debate over whether hosts should strive for maximal neutrality vs. aligning with particular ethical/political positions.

Cloudflare and “racketeering” framing

  • Some suggest the “racketeer” label refers to Cloudflare both selling AI services and selling protection against AI scrapers, and similarly offering DDoS protection while fronting for DDoS-for-hire sites.
  • Others recall high pricing when SourceHut sought Cloudflare help and cite criticism that Cloudflare benefits from widespread attacks.

Licenses and LLMs

  • People speculate about licenses that would force trained models to be open and outputs to be open source; several think such clauses would be unenforceable or struck down as unfair.
  • Disagreement over whether LLM training is or will be legally “fair use,” whether models are derivative works, and whether model outputs are copyrightable remains unresolved and labeled as legally unclear.
  • Some argue GPL contamination might already apply to many models; others note courts seem to demand proof of concrete damages.

Anubis proof-of-work and browser issues

  • Anubis, used by SourceHut, employs a multi-threaded proof-of-work in browsers to distinguish humans from bots.
  • Critics say this becomes de facto gatekeeping against older or nonstandard browsers and contradicts SourceHut’s “no JavaScript required” positioning.
  • Defenders argue any modern browser can implement it and that UA checks are primarily an optimization; proof-of-work is the real barrier for large-scale scraping, not genuine users.
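
The thread doesn't detail Anubis's exact scheme, but hash-based proof-of-work generally works as sketched below: the server issues a challenge, and the client searches for a nonce whose hash has a required number of leading zero bits. A toy single-threaded Python version (the real thing runs as multi-threaded JavaScript in the browser; function names are illustrative):

```python
import hashlib

def solve(challenge: str, difficulty_bits: int) -> int:
    """Find a nonce so sha256(challenge + nonce) starts with
    `difficulty_bits` zero bits. Expected cost ~2**difficulty_bits hashes."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    """Verification is one hash: cheap for the server, costly to forge."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

nonce = solve("example-challenge", 12)  # a few thousand hashes on average
print(nonce, verify("example-challenge", nonce, 12))
```

The asymmetry is the point both sides argue over: a per-page cost is trivial for one human visitor but multiplies across millions of scraped pages, while also taxing users on old or slow devices.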

Miscellaneous ideas and concerns

  • Suggestions include scraper tarpits that feed infinite “poison” training data or redirect proof-of-work into mining for site owners.
  • Some wonder how sure anyone is that specific heavy traffic is from LLM scrapers vs. plain DDoS with plausible deniability.
  • A few promote more distributed, self-contained VCS systems (e.g., Fossil-like) as a structural response to centralized scraping and hosting constraints.

America underestimates the difficulty of bringing manufacturing back

Tariffs, Markets, and Industrial Policy

  • Many see broad, sudden tariffs as chaotic “policy by tweet” rather than a coherent industrial strategy. They raise uncertainty, deter long‑term factory investment, and may be reversed by the next administration.
  • Others argue tariffs are one of the few blunt tools available to counter decades of offshoring and unfair foreign subsidies, and that some level of protectionism is necessary for strategic industries.
  • There’s substantial support for targeted industrial policy (e.g. CHIPS Act, EV and battery incentives) and for distinguishing between critical sectors (chips, energy, defense, some materials) and low‑value goods (t‑shirts, toys).

Economics and Feasibility of “Bringing Manufacturing Back”

  • Rebuilding a competitive ecosystem is seen as a 10–20 year project requiring whole supply chains, not just final assembly. China’s strength is systemic: dense supplier networks, logistics, and cheap, increasingly low‑carbon energy.
  • Several commenters stress that the US is still the world’s #2 manufacturer; what’s missing are many mid‑ and low‑value segments and critical components, not “manufacturing” in general.
  • Skeptics argue that even if production returns, it will be heavily automated—creating relatively few jobs—and that high US wages and costs mean most products will remain uncompetitive globally without permanent protection or subsidies.

Labor, Jobs, and Social Outcomes

  • A recurring theme: voters are nostalgic for an era when a high‑school graduate could support a family on a factory wage. Many equate “bringing back manufacturing” with restoring middle‑class stability and dignity, not a love of factory work itself.
  • Others push back: not everyone can or wants to do knowledge work, but many Americans also don’t want repetitive or physically demanding factory jobs, especially at current pay levels and cost of living.
  • There’s broader criticism of US business culture: short‑termism, shareholder primacy, underinvestment in training, and reliance on offshoring and low‑wage or prison labor instead of building domestic skills.

China, Geopolitics, and Security

  • Commenters highlight China’s integrated industrial base, aggressive state planning, and rapidly expanding cheap electricity as enormous advantages that can’t be matched quickly.
  • National‑security‑oriented voices argue that dependence on an adversarial China for critical goods (chips, batteries, materials, munitions inputs) is untenable, especially in a Taiwan crisis. Others warn that trying to re‑create China’s model in the US is unrealistic and risks economic self‑harm.

Politics and Governance Constraints

  • A fundamental obstacle identified across the thread is political: US policy swings every 4–8 years, making long‑horizon industrial planning difficult.
  • Many see current tariffs as more about domestic politics, populist symbolism, and enriching insiders than about a serious, durable reindustrialization plan.

Launch HN: mrge.io (YC X25) – Cursor for code review

Perceived value & use cases

  • Many commenters like the direction: AI-focused code review to reduce rubber‑stamping and catch subtle bugs, especially as AI-generated code increases.
  • Solo developers and open source maintainers find value in having a “second pair of eyes” that can be strict without social friction.
  • Some teams report moving from other AI review tools to mrge and seeing better, more useful comments and encouragement of stacked PR workflows.

Comparison with existing tools

  • Mentioned alternatives include Graphite, CodeRabbit, Copilot for PRs, Aviator, and in-IDE agents (e.g., Claude via MCP).
  • Some feel existing AI reviewers are “too nice” and mostly wrong or trivial; mrge’s founders claim better context-awareness and less noise, but at least one user reports poor results on a sample PR (1/11 useful comments).
  • A few users don’t see the need for a separate tool when their AI editor already helps with review.

AI behavior & feature ideas

  • Features called out positively: PR summaries, conceptual grouping of diffs, diagram generation, custom rules inferred from comment history, and conservative one-click fixes.
  • Users want:
    • Rules learned from past reviewer discussions and coding standards docs.
    • Multiple models/personas (security, architecture) and model “promotion” based on accepted feedback.
    • Detection or highlighting of AI-generated PRs.
    • Awareness of previous commits and better handling of large monorepos.

Integrations & workflow

  • Current focus is GitHub; GitLab, Bitbucket, GitHub Enterprise, and local/IDE pre-PR review are frequently requested and said to be on the roadmap.
  • Some worry about having to leave GitHub; mrge clarifies all comments sync back and the web UI is optional.

Security, privacy, and compliance

  • Strong concern over required write/merge permissions; multiple commenters want a read-only mode or branch exclusions.
  • SOC 2 status is important for adoption; mrge states their own certification is in progress and subprocessors are already certified.
  • Some security-minded users still recommend against use until permissions and deployment models (self-hosted/hybrid) are more constrained.

Maturity, pricing, and polish

  • Service is currently free with plans for per-author pricing; trial length and “free” messaging are seen as unclear.
  • Website UX issues (slow fade-in, black screen) and marketing language (“AI era”) draw minor criticism.
  • Overall sentiment: promising and thoughtful, but with open questions about reliability, security model, and ecosystem coverage.

CT scans could cause 5% of cancers, study finds; experts note uncertainty

Risk–benefit and overuse of CT scans

  • Many argue that by the time a CT is ordered, the suspected condition is usually riskier than the incremental cancer risk.
  • Others counter that CTs are often ordered “just in case,” especially in chronic or ambiguous cases, suggesting overuse and poor justification.
  • Several anecdotes show both sides: unnecessary scans later rendered moot by simple treatments vs. delayed CT leading to late cancer diagnosis.
  • Commenters want EMRs to track cumulative radiation and more explicit risk discussions before ordering scans.

CT vs MRI vs X‑ray

  • Recurrent theme: “Why not MRI instead?”
  • Responses: CT is faster, higher resolution in many contexts, better for certain pathologies (e.g., lungs, acute stroke, some post‑cancer surveillance), and usable in patients with metal implants.
  • MRI is slower, more resource‑intensive, needs helium, can’t be used with some implants, and often needs contrast (whose long‑term risks are debated but currently lack compelling evidence of cancer causation).
  • Some say single X‑rays are preferable to CT for bones; others note many abdominal/chest questions genuinely require CT detail.

Radiation dose, models, and uncertainty

  • Multiple comments emphasize that the study is a modeling exercise, extrapolating from radiation dose to expected cancers, not directly counting cancers after CT.
  • Critiques: highly confounded population (people needing CT are already sicker), unclear handling of prior disease, and no direct answer to “does this scan improve lifespan/quality of life?”
  • Debate over Linear No‑Threshold (LNT) vs possible thresholds or even hormetic effects at low doses; some say evidence for low‑dose harm is weak, others insist ionizing radiation is necessarily carcinogenic in proportion to dose.
  • Strong skepticism about the “5% of cancers” estimate; some say if that were true, population‑level signals (e.g., by country CT usage) should be obvious.

Preventive screening and incidental findings

  • Discussion on whether broad preventive imaging is wise: risk of false positives, invasive follow‑ups, and overtreatment for very rare diseases.
  • Contrasting anecdotes: “full‑body” scans catching early, manageable issues vs. counterpoints that most such findings track age‑related changes and don’t require imaging to justify lifestyle advice.

Emotional impact and communication

  • Several commenters with multiple CTs express anxiety after reading such studies.
  • Others stress putting risk in context (e.g., flights, occupational exposure, improved low‑dose scanners) and argue media coverage of statistical risks is often misleading or sensational.

How the U.S. became a science superpower

China’s rise and U.S. retreat in key fields

  • Several researchers report that, in some areas (e.g., radar), Chinese papers have gone from weak imitations to being the cutting edge; “new ideas” are often pre-empted by Chinese teams.
  • China is actively recruiting with high salaries and large startup packages; some commenters describe direct offers to run labs there.
  • Many see current U.S. cuts (especially under DOGE / the current administration) as accelerating a loss of leadership just as China’s ecosystem matures.

Indirect cost reimbursement and university overhead

  • The article’s claim that indirect cost reimbursement was “secret sauce” is fiercely debated.
  • Defenders say overhead pays for facilities, shared equipment, compliance, and staff; without it, labs can’t function and top researchers will leave. Rates are described as negotiated and relatively small in macro budget terms.
  • Critics argue universities are bloated, can accept lower-overhead private grants, and need more transparency. They propose caps on overhead and public budgets; others respond that private grants are subsidized by full‑overhead federal grants and are a small fraction of total funding.

Brain drain, researcher mobility, and politics

  • Multiple comments note increasing moves from U.S. institutions to EU or Asian ones, driven by funding uncertainty and political hostility toward science and academia.
  • Some see current ideological attacks (e.g., on elite universities, DEI, “viewpoint diversity”) as analogous to historical purges of academics by authoritarian regimes.

Debt, taxes, and whether cuts are fiscally rational

  • One camp argues U.S. debt and interest costs are becoming unsustainable, so all programs—including research—must be on the table.
  • Others counter that R&D is a tiny share of spending, has high long‑run ROI, and cutting it is like “shaving your head to lose weight.” They point instead to tax cuts, military spending, and unwillingness to tax high wealth.
  • There is deep disagreement over whether “tax the rich” can meaningfully fix the deficit, and over how much blame belongs to social programs vs. wars and tax policy.

Models of science funding and broader history

  • Some think the U.S. advantage is less about one mechanism and more about: post‑WWII wealth and intact industry, decentralized funding through universities, integration with private industry, and massive talent inflows (including via Operation Paperclip).
  • Britain’s centralized lab model and postwar austerity are cited as cautionary, but commenters dispute how much current U.S. problems really resemble Britain in 1945.

4chan Sharty Hack And Janitor Email Leak

4chan’s Place in the “Old Internet”

  • Many see 4chan as one of the last large-scale remnants of pre-platform, pre-algorithm web culture: anonymous, no accounts, minimal tracking, niche but deep boards, ephemeral threads.
  • Others argue it’s not “old internet” at all (founded 2003, post–dot-com crash) and that earlier eras (Usenet, BBSes, Geocities) are the real “old web.”
  • Several posters say 4chan itself stopped feeling “countercultural” years ago, becoming more like the broader, politicized internet around 2010–2016.

Culture, Boards, and Politics

  • Strong distinction drawn between /pol/ (politics) and the rest: many boards (e.g. /g/, /fit/, /tg/, /ck/, /lit/, /mu/, /v/, /vr/, /po/, /diy/) are described as creative, hobbyist, or surprisingly high‑signal.
  • Others say /pol/ and its style of racism, conspiracy and “edgelord” posting eventually bled into much of the site, especially from the Gamergate/Trump era onward.
  • There’s disagreement over how many posters are “real” racists vs ironic edgelords; critics point to links between 4chan and several mass shooters and alt‑right memes, defenders emphasize trolling, containment‑board design and counter‑speech.
  • Multiple comments note 4chan’s huge memetic influence (slang like “based,” “zoomer,” Wojak, “slop,” incel/r9k culture, etc.) and occasional serious contributions (e.g. math/superpermutations).

Free Speech, Moderation, and “Jannies”

  • 4chan is characterized as less “free‑speech absolutist” than its reputation: it bans illegal content (especially CSAM), handles DMCA, and has global + board rules.
  • Users complain moderation is arbitrary and personal rather than ideological: off‑topic or “annoying” posts can draw unappealable 3‑day bans, while offensive speech often stands.
  • Some argue admins deliberately went “easy” on racism, helping steer the culture; others say racism and other extremes are still called out and flamed by users.
  • Ongoing debate whether platforms should host such speech at all versus pushing it into harder‑to‑see spaces.

The Hack: Technical Details and Neglect

  • A leaked shell screenshot shows the main server on FreeBSD 10.1 (EOL 2016) with a very old PHP, suggesting years of minimal maintenance after the 2015 ownership change.
  • An attacker described the entry point: some boards allowed PDF uploads, and the backend passed them to a 2012-era Ghostscript for thumbnailing without verifying they were actually PDFs.
  • A malicious PostScript file renamed to ".pdf" exploited Ghostscript to gain arbitrary code execution and a remote shell, from which configs and databases were exfiltrated.
  • Commenters call it a textbook failure: ancient dependencies, unsafe file handling, and a powerful parser running with high privileges on untrusted input.
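The core mistake described above can be sketched in a few lines. This is not 4chan's actual code (which hasn't been published), just a minimal illustration of the pattern commenters describe: trusting the ".pdf" extension is what let a renamed PostScript payload through, while even a cheap magic-byte check would have caught it, since PDFs start with "%PDF-" and PostScript files start with "%!PS". The function and file names are hypothetical.

```python
def looks_like_pdf(data: bytes) -> bool:
    """Cheap content check: real PDFs begin with the '%PDF-' magic bytes.

    A PostScript file renamed to '.pdf' begins with '%!PS' instead,
    so it fails this check even though its filename looks right.
    """
    return data.lstrip()[:5] == b"%PDF-"


def validate_upload(filename: str, data: bytes) -> None:
    """Reject uploads whose contents don't match their claimed type.

    The vulnerable pattern, per the discussion, was roughly:
        if filename.endswith(".pdf"):
            run_ghostscript(path)   # PostScript sneaks through here
    i.e. the extension alone gated access to the Ghostscript parser.
    """
    if not filename.lower().endswith(".pdf"):
        raise ValueError("unexpected extension")
    if not looks_like_pdf(data):
        raise ValueError("content is not a PDF despite the .pdf name")
    # Even after validating, a thumbnailer should run Ghostscript
    # sandboxed and unprivileged, e.g. with -dSAFER (which restricts
    # file access and is the default in modern Ghostscript releases),
    # rather than as a high-privilege process on the main server.
```

Defense in depth matters here because magic-byte checks are bypassable (a file can be valid PDF and still carry a Ghostscript exploit), which is why commenters emphasize sandboxing and patching the parser, not just input validation.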

Doxxing Moderators and Janitors

  • The leak reportedly includes staff emails (and possibly more) for janitors and moderators; some fear this will lead to extensive harassment of volunteers and low‑paid staff.
  • Reactions split:
    • One camp: “live by the sword, die by the sword” / “not victims” — arguing staff long enabled a harmful culture.
    • Another: regardless of 4chan’s sins, doxxing and real‑world retaliation are unethical and will cause needless suffering to people who often spent unpaid time removing illegal content.
  • Several note that the initial hacker and those performing deanonymization/harassment appear to be distinct groups.

Should 4chan Survive?

  • Some hope it never returns, seeing it as a net‑negative: a major engine for alt‑right radicalization, harassment, and bigoted memes that fed into modern politics.
  • Others argue even a toxic anonymous forum is preferable to everything being driven onto identity‑linked, algorithmic, advertiser‑shaped platforms; they see 4chan as “the devil you know.”
  • A large nostalgic contingent emphasizes what will be lost if it dies: unique anonymous discussion, weird creativity, niche technical and artistic communities, and a powerful meme‑incubator now largely displaced by TikTok/Twitter‑style feeds.