Hacker News, Distilled

AI-powered summaries of selected HN discussions.

A year of funded FreeBSD development

Funding and Corporate Support

  • Several comments dissect how FreeBSD development is funded, noting that Foundation donations are only a minority of total corporate-funded work.
  • Some criticize big users (e.g., large cloud and tech companies) for apparently minimal visible sponsorship via the FreeBSD Foundation, while others point out this is a partial picture: companies also fund individuals or in-house work that never passes through the Foundation.
  • Microsoft is cited as having recurring donations and in-house developers working on Hyper‑V and Azure FreeBSD support; reasons range from customer demand to internal FOSS funding programs.
  • Amazon is viewed by some as doing the least for FOSS among large tech firms, though it clearly funds some FreeBSD work directly.
  • One commenter worries that a part‑time release engineer funded for a limited period is not a sustainable model for an OS of this size.

FreeBSD on AWS and the “Magic” Disk Size Cliff

  • A widely discussed anecdote: changing the EC2 root disk size from 5→6 GB made FreeBSD boot 3× slower, while 8 GB restored performance.
  • Speculative explanations about EBS/S3 caching heuristics are floated, but the real cause remains opaque; even AWS veterans in the thread can’t definitively tie it to historic S3 object size limits.
  • The debugging process involved bisecting over weekly snapshots and building many AMIs; with a good starting window, it took only a few hours (a sketch of the bisection idea follows this list).
  • Some participants share broader curiosity about AWS’s internal tooling and layering of services on top of other AWS services.
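
The bisection itself is ordinary binary search over dated builds. A minimal sketch of the idea, where the snapshot list and the `boots_slowly` check are hypothetical stand-ins for the AMI-building and boot-timing steps described in the thread:

```python
def bisect_snapshots(snapshots, boots_slowly):
    """Find the first snapshot exhibiting the slow boot.

    Assumes snapshots are ordered oldest to newest, snapshots[0] is known
    good, and snapshots[-1] is known bad.
    """
    lo, hi = 0, len(snapshots) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if boots_slowly(snapshots[mid]):   # hypothetical: build the AMI, time the boot
            hi = mid                       # regression is at mid or earlier
        else:
            lo = mid + 1                   # regression is after mid
    return snapshots[lo]
```

With roughly a year of weekly snapshots (~52 candidates), this needs only about six builds, consistent with the "few hours" figure.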

FreeBSD’s Niche vs Linux and Other BSDs

  • Multiple users describe FreeBSD as:
    • Having a larger userbase and software catalog than OpenBSD/NetBSD.
    • Strong in throughput/networking, ZFS, jails, and as a cohesive “single OS” rather than a collection of projects.
    • A refuge from systemd and perceived Linux “churn,” with more stability in interfaces and configuration over time.
  • ZFS support is highlighted as cleaner than on Linux due to licensing, with real‑world wins (e.g., instant rollbacks after production mistakes).
  • FreeBSD’s integrated base system (kernel, libc, userland, tools like jails, DTrace, ZFS, bhyve, pf) is contrasted with Linux’s “zoo” of distros and independently developed components.
  • Some note FreeBSD’s smaller hardware support matrix (especially Wi‑Fi and certain 10 GbE NICs) and lag on big.LITTLE scheduling, though laptop/modern hardware work is underway and funded.

Corporate Influence and “Soft Power”

  • A long subthread debates whether Apple or other corporations exert “soft power” over FreeBSD.
  • Several experienced users and developers insist Apple has essentially no influence today: macOS has its own XNU kernel, rarely rebases from FreeBSD, and Apple’s historic LLVM work is now only part of a much broader ecosystem.
  • Netflix, NetApp, Juniper and others are cited as more impactful FreeBSD users; Netflix in particular both uses FreeBSD at scale (CDN) and contributes extensive performance/stability work.
  • Some commenters prefer FreeBSD specifically because, compared to Linux, it feels less steered by large corporate agendas, though others note the tradeoff: fewer resources and slower hardware support.

Tooling, Laptops, and Practical Experiences

  • Zig now ships FreeBSD master builds and supports it as a first-class cross‑compilation target, which commenters see as helpful for CI and broader app support.
  • The FreeBSD Foundation’s laptop initiative (e.g., S0ix sleep, hybrid CPU awareness) and a ~$750k investment are mentioned as signs of active desktop/laptop work.
  • Practitioners report success running dense multi‑service setups in jails on a single FreeBSD server, with very cost‑efficient throughput; hybrid FreeBSD/Linux cloud migrations raised costs but brought cloud benefits.
  • Others recount hardware pain points (NIC drivers, Wi‑Fi, ARM/big.LITTLE support) and the need to choose hardware carefully—often preferring Intel NICs for reliability.
  • Overall, many see FreeBSD as a well‑engineered, stable, and coherent system that rewards users who value control and long-term consistency over latest‑and‑greatest features.

Show HN: AI game animation sprite generator

Product concept and potential use cases

  • Tool generates animated game sprites from user-uploaded art; users see it as potentially useful for:
    • Rapidly prototyping 2D characters and animations.
    • Reducing tedious “in-between” animation work, especially for solo/indie devs.
    • Possibly supporting isometric/top‑down views, tilesets, and interchangeable equipment in future.
  • Some commenters envision AI as a helper for animators (keyframes by humans, tweens by AI), not a replacement.

Quality, style, and limitations

  • Many find the sample animations low quality:
    • “AI fuzziness,” background jitter, missing or changing details (e.g., gloves disappearing, anatomy glitches).
    • Inconsistent animations across frames; cycles (walk/run) don’t loop cleanly.
    • Strong resemblance to Street Fighter–style moves and timing, prompting concern about derivative copying.
  • Non‑humanoid characters (e.g., slimes) and highly stylized pixel art appear especially difficult.
  • Several users say outputs would still require frame‑by‑frame cleanup by an artist.

Reliability, UX, and early‑stage issues

  • Multiple reports of:
    • Jobs stuck in queue for 10–30+ minutes or lost on page reload.
    • Sample videos not loading; settings/profile pages broken.
    • Payment link not tied to login, credits disappearing after purchase.
  • Some appreciate the solo‑founder constraints; others argue it’s too early to charge given bugs and quality.

Transparency, models, and legal/privacy concerns

  • Users note missing or broken links for privacy/legal pages and GitHub; this makes them hesitant to upload original IP or create accounts.
  • Several ask what models are used, whether they’re open source, and whether custom training is involved; this remains unclear in the thread.
  • FAQ claim that users “own the rights” to generated content is questioned, given uncertainty over AI art copyright.

Ethics, impact on artists, and data usage

  • Strong divide:
    • Critics say tools like this devalue and displace struggling artists, produce “slop,” and rely on training data from artists who aren’t compensated or asked.
    • Supporters argue it solves real problems (cost, speed), enables more games that otherwise wouldn’t exist, and parallels past technological shifts (CGI, Photoshop, assembly lines).
  • Ongoing debate over whether training on public art is akin to human learning or fundamentally different due to scale and automation.

The Illusion of Thinking: Strengths and limitations of reasoning models [pdf]

What “reasoning” means and what LRMs really are

  • Many commenters argue that “large reasoning models” are just LLMs with extra steps: more context, chain-of-thought, self-refinement, RLHF on problem-solving traces.
  • Disagreement over definitions: some want formal “derive new facts from old ones” (modus ponens, generalizable algorithms), others stress that pattern matching plus heuristics may still look like reasoning in practice.
  • Several note that current “reasoning” is often just a branded version of long-known prompt-engineering tricks; the name oversells what’s actually happening.

Core experimental findings from the paper

  • Puzzles are used because they: avoid training-data contamination, allow controlled complexity, and force explicit logical structure.
  • Three regimes emerge:
    • Low complexity: vanilla LLMs often outperform LRMs; extra “thinking” leads to overcomplication and worse answers.
    • Medium complexity: LRMs do better, but only if allowed many tokens; gains are expensive.
    • High complexity: both LLMs and LRMs collapse to ~0% accuracy; LRMs even reduce their reasoning depth as complexity rises.
  • Even when given the exact algorithm in the prompt, LRMs still need many steps and often fail to follow it consistently.
  • Models appear to solve certain tasks (e.g., large Tower of Hanoi instances) likely via memorized patterns, while failing on similarly structured but unfamiliar puzzles (e.g., river crossing variants).
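
Tower of Hanoi makes the paper's complexity knob concrete: the optimal solution for n disks takes 2^n − 1 moves, so the required output grows exponentially even though the algorithm is trivial to state. A minimal solver, included only to illustrate that growth:

```python
def hanoi(n, src="A", dst="C", aux="B", moves=None):
    """Append the optimal n-disk move sequence to `moves` and return it."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, aux, dst, moves)   # park the top n-1 disks on the spare peg
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, aux, dst, src, moves)   # stack the n-1 disks back on top
    return moves

for n in (3, 10, 20):
    print(n, len(hanoi(n)))   # 7, 1023, 1048575, i.e. 2**n - 1
```

Even a perfect executor must emit on the order of 2^n steps, which is one reason puzzle size gives such a smooth difficulty dial.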

Implications for AGI and the hype cycle

  • Many see this as evidence of a “complexity wall” that more tokens and compute don’t simply overcome, weakening near-term AGI claims.
  • Comparisons are made to self‑driving cars and fusion: big progress, but generality and robustness stall in long-tail cases.
  • Others remain bullish, viewing this as mapping where current methods break, not a fundamental limit; they expect new architectures, tools, or agents to push the wall back.

Critiques and caveats about the study

  • Some say it mostly measures long-chain adherence, not whether models can invent algorithms; allowing code-writing or tools would trivialize many puzzles.
  • Others note missing or fuzzy definitions (“reasoning”, “generalizable”) and argue that humans also fail catastrophically beyond small N, yet we still say humans reason.

Observed behavior of today’s models & future directions

  • Anecdotes match the paper: “reasoning” models often excel on medium tasks but overthink simple ones and derail on complex ones (coding, Base64, strategy questions).
  • Suggested next steps include neurosymbolic hybrids, explicit logic/optimization backends, agents that decompose problems, more non-linguistic grounding, and better ways to manage or externalize long reasoning chains.

SaaS is just vendor lock-in with better branding

Title vs. Thesis / What the Article Is Really Arguing

  • Many find the title misleading: it frames SaaS as bad lock‑in, yet the article’s conclusion is “pick a platform (e.g., Cloudflare) and lean into it.”
  • Several commenters think the real argument isn’t “SaaS is bad” but “don’t juggle dozens of vendors; use one integrated platform early on to avoid overhead.”
  • Critics note a conflict of interest: the recommended platform is exactly where the author’s framework runs.

What “Vendor Lock‑in” Actually Means

  • One camp: “Everything is lock‑in” because any change (even open‑source/self‑hosted) requires substantial rewrite or migration.
  • Pushback: lock‑in is when switching is impossible or economically irrational vs. staying put. Good architecture (simple interfaces, loose coupling) can make components replaceable.
  • Practical guidance:
    • Use SaaS for non‑critical pieces or early stage; plan to migrate if you succeed.
    • Minimize dependencies and avoid building complex apps directly inside SaaS platforms.
    • Abstract external services where it’s likely you’ll want to swap them—but abstractions have their own cost and often force you to the lowest common denominator (a sketch follows this list).
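
As a deliberately simplified sketch of the "abstract what you might swap" advice (all names here are invented for illustration): application code depends on a small interface, and each vendor gets an adapter behind it.

```python
from typing import Protocol

class Mailer(Protocol):
    """The seam: application code depends on this, never on a vendor SDK."""
    def send(self, to: str, subject: str, body: str) -> None: ...

class VendorMailer:
    """Adapter for a hypothetical SaaS email API."""
    def __init__(self, client):
        self.client = client  # vendor SDK object, injected at startup

    def send(self, to: str, subject: str, body: str) -> None:
        self.client.deliver(recipient=to, subject=subject, text=body)

class SmtpMailer:
    """Self-hosted fallback implementing the same interface."""
    def send(self, to: str, subject: str, body: str) -> None:
        import smtplib
        from email.message import EmailMessage
        msg = EmailMessage()
        msg["To"], msg["Subject"] = to, subject
        msg.set_content(body)
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)
```

The cost mentioned above is visible here: the interface can only expose what every backend supports, so vendor-specific features (templates, analytics, suppression lists) are exactly what the seam gives up.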

Data Portability and Regulation

  • Some argue that open data and easy export/import mean no real lock‑in; if you can walk away with your data, you’re less trapped.
  • Others say this is rare in practice and almost never a selling point.
  • GDPR is mentioned: it always applies (profit or not) but only to personal data, and doesn’t guarantee portability for all business data.

SaaS Economics, Rent‑Seeking, and Pricing Power

  • Strong anti‑SaaS views: subscriptions and bundling of software + hosting are framed as modern rent‑seeking, especially when software could run on‑prem for a one‑time fee.
  • Counter‑arguments:
    • This stretches “rent‑seeking” beyond its economic meaning; SaaS has real ongoing costs (infra, support, R&D) and often improves reliability and productivity.
    • Customers care about value, not provider cost structure; profit above cost isn’t automatically rent‑seeking.
  • Concerns widely shared:
    • Long‑term pricing power once a tool is entrenched (e.g., big price hikes, add‑on fees, API changes and deprecations).
    • Loss of the “near‑zero marginal cost” upside of owning software as usage scales.

Open Source, Self‑Hosting, and Funding Models

  • Some read the piece as an implicit pitch for open source and self‑hostable SaaS, which can mitigate several “taxes” (discovery, integration, local dev).
  • Others emphasize the real cost of running OSS (“free if your time is worth nothing”) and the operational risk for non‑technical orgs.
  • There’s active debate on:
    • How to fund high‑quality OSS (support contracts, SaaS upsells, source‑available licenses, even public funding).
    • Whether FLOSS apps match proprietary quality; libraries are generally seen as stronger than end‑user apps.

Pragmatic Takeaways from the Thread

  • For early startups: using a single strong platform can reduce complexity and speed iteration, but increases concentration risk later.
  • For mature businesses: be wary of deep integration with any one SaaS in core workflows; consider architectural seams and exit paths.
  • Across the thread, recurring themes are:
    • Minimize dependencies where feasible.
    • Prefer services with good data export and/or self‑hosting options.
    • Expect future price and API changes as part of the long‑term SaaS trade‑off.

See how a dollar would have grown over the past 94 years [pdf]

Small-cap premium and chart presentation

  • Many were surprised by small-cap stocks’ outperformance; several noted it mostly stems from a big 1970s–80s divergence, with lines roughly parallel otherwise.
  • Some cited an ongoing debate about whether the small‑cap premium still exists (e.g., “SMB size factor”) and suggested recent underperformance may be cyclical.
  • There was disagreement on the semi‑log Y axis: some called it “misleading,” others argued log scale is exactly what you want for long‑run percentage returns and any linear chart would be worse.

Bonds vs stocks, “risk‑free,” and time horizons

  • A personal anecdote (30‑year government bonds vs equities) spurred debate about what “risk‑free” means.
  • One side: government bonds are minimal‑default‑risk and were a reasonable choice ex‑ante; hindsight comparisons to stocks are misleading.
  • Counterpoint: over multi‑decade horizons, diversified equities historically have a lower chance of real loss; bond “safety” is eroded by inflation and rate risk, especially when yields are low.
  • Others stressed that risk reduction via time only really applies to diversified portfolios, not single stocks.
  • There was detailed back‑and‑forth on whether 1990–2020 bonds actually “lost to inflation”; one commenter numerically argued they did not, even after tax, while another focused on taxes, inflation measurement issues, and bond‑ETF quirks.

International diversification and survivorship

  • One commenter claimed most non‑US exchanges have been flat/negative for decades; this was strongly disputed with examples of many countries having long‑run positive real equity returns.
  • Japan’s Nikkei was cited both as evidence of multi‑decade stagnation and as an example where total‑return (including dividends) and dollar‑cost averaging look much better than a price‑only, lump‑sum‑at‑the‑peak view.
  • General consensus: diversification across countries and assets is crucial; no guarantee the US outperformance continues.

Inflation, CPI, gold, and purchasing power

  • Some argued the chart understates inflation and that the dollar’s purchasing power collapse is the “real” story.
  • Gold was suggested as an alternative inflation proxy (~100x over the century), but others countered that gold is highly volatile, policy‑distorted, and not a good long‑run inflation measure.
  • CPI “manipulation” claims were met with references to its heavy external scrutiny and independent projects that broadly validate it, while acknowledging methodology debates (e.g., housing).
  • Several noted that modest, predictable inflation is by design: holding cash long‑term shouldn’t be a winning strategy; you’re supposed to invest.

Behavioral aspects and drawdowns

  • Some framed the chart not as returns but as a test of “nerves of steel”: most investors struggle to hold through deep drawdowns.
  • A few commenters claimed they were never tempted to sell in major crashes and even see crashes as buying opportunities; others argued timing such moves is extremely hard and that complacency about crashes is dangerous.

Future equity returns and structural tailwinds

  • One thread questioned whether past US equity growth is repeatable over the next 50 years, noting historic tailwinds:
    • Falling corporate tax rates
    • Falling interest rates and rising P/E multiples
    • Demographic expansion
    • Improved diversification lowering required risk premia
  • Environmental and other future headwinds may not be fully priced. The view here: stocks will likely still beat bonds over the long run, but expecting past US‑style returns may be optimistic.

Other investment issues raised

  • Concern that many 401(k)s default to cash or money‑market options, leading to dramatically lower long‑term outcomes for uninformed savers.
  • Repeated emphasis that dividends and their reinvestment are crucial; price‑only equity charts are misleading.
  • Historical notes: broad‑based index funds for retail investors have existed for decades, but minimums and access used to be harder.
  • Rule‑of‑72 math was used to sanity‑check the small‑cap line; side discussion on better approximations and mnemonics.
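
The rule-of-72 check is one line of arithmetic: money doubles roughly every 72 / r years at r percent annual return. Assuming the often-quoted ~12% long-run small-cap average (an illustrative assumption, not a figure from the chart):

```python
years = 94
r = 12.0                        # assumed annual return, percent (illustrative)

doublings = years / (72 / r)    # rule of 72: one doubling every 72/r years
approx = 2 ** doublings         # ~52,000: $1 grows to roughly $50k
exact = (1 + r / 100) ** years  # ~42,000 when compounded exactly
print(f"{approx:,.0f} vs {exact:,.0f}")
```

The gap (the rule of 72 overshoots somewhat at double-digit rates) is the sort of "better approximation" tangent the thread went down.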

How we decreased GitLab repo backup times from 48 hours to 41 minutes

Nature of the bug and the fix

  • Backup slowness was traced to a 15-year-old git bundle create function with an O(N²) duplicate-ref check implemented as a nested loop over refs.
  • The fix replaced this with a set/map-based deduplication, turning the critical section into O(N) (or O(N log N), depending on the implementation), yielding a ~6x improvement locally but a much larger end-to-end impact for GitLab (48 hours → 41 minutes).
  • Commenters note this is a textbook “use a hash/set instead of a quadratic scan” situation, and that in C it’s common to default to arrays/loops because container usage is more cumbersome.
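
A minimal illustration of the pattern (not GitLab's actual patch, which lives in git's C code): the duplicate check moves from a linear scan per ref to a hash-set lookup.

```python
def dedup_refs_quadratic(refs):
    """O(N^2): the shape of the original nested-loop duplicate check."""
    unique = []
    for ref in refs:
        if ref not in unique:   # linear scan of `unique` on every iteration
            unique.append(ref)
    return unique

def dedup_refs_linear(refs):
    """O(N): the shape of the fix; set membership is O(1) on average."""
    seen = set()
    unique = []
    for ref in refs:
        if ref not in seen:
            seen.add(ref)
            unique.append(ref)
    return unique
```

Both keep the first occurrence and preserve order; on a handful of refs they are indistinguishable, which is exactly how the quadratic version survived 15 years.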

Debate over “exponential” language

  • The article’s phrase “reducing backup times exponentially” in the same sentence as big-O notation drew heavy criticism.
  • Many argued that in a technical performance post, “exponential” should not be used colloquially; it created confusion and wasted readers’ time trying to locate an actual exponential-time algorithm.
  • Suggested alternatives: “from quadratic to linear,” “dramatically,” “by a factor of n,” or explicitly stating the new big-O.
  • A few defended colloquial use, but most saw it as sloppy or misleading in this context; the author later acknowledged the issue.

Quadratic complexity in practice

  • Multiple anecdotes support the idea that O(N²) is the “sweet spot of bad algorithms”: fast in tests, disastrous at scale (sometimes far beyond quadratic in the wild).
  • Discussion covers when N is small and bounded (e.g., hardware-limited cases) where quadratic or even cubic can be acceptable, versus unbounded N where it’s a latent production time bomb.
  • Some advocate systematically eliminating N² unless N has a hard, low upper bound, and adding explicit safeguards if assumptions about N might change.

C, data structures, and Git implementations

  • Several comments note this as an example that C alone doesn’t guarantee speed; algorithm and data structure choices dominate.
  • Others argue C makes these mistakes likelier because sets/maps aren’t standard and are harder to integrate than in languages with built-in containers.
  • Thread lists alternative Git implementations/libraries (Rust, Go, Java, Haskell, C libraries), with some using them successfully in production.

Backup strategies and alternatives

  • Many question why GitLab relies on git bundle instead of filesystem-level backups (e.g., ZFS/Btrfs snapshots plus send/receive).
  • Defenses: direct filesystem copying can bypass Git’s integrity checking and is tricky across diverse filesystems and self-hosted environments; bundles provide a portable, Git-aware backup format.
  • Offsite replication of ZFS snapshots and snapshot consistency issues (e.g., with Syncthing) are discussed; Git’s lack of a WAL-like mechanism makes naive snapshotting risky in some setups.

Profiling and tooling

  • The flame graph in the article was produced via perf plus FlameGraph tooling; alternatives recommended include pprof, gperftools, and Samply with Firefox’s profiler.
  • Several commenters emphasize that algorithmic changes uncovered via profiling dwarf typical micro-optimizations.

Reactions to GitLab and the write-up

  • Some praise the upstream contribution (landing in Git v2.50.0) and note GitLab now has a dedicated Git team.
  • Others complain about GitLab’s slow UI and the blog post’s style: too long, possibly LLM-like, and missing concrete code snippets, though still useful once skimmed to the core technical section.

4-7-8 Breathing

Perceived Benefits and Purpose

  • Many commenters frame breathwork as a tool to deliberately influence internal state: calm before sleep, down-regulate stress/anxiety, or gently up-regulate alertness.
  • Some report concrete benefits from structured patterns (4‑7‑8, box breathing, 3‑7, etc.) for anxiety, blood pressure, and chronic pain management.
  • Others emphasize that breath exercises are primarily “for your brain,” not to learn how to breathe in general.

Scientific Evidence and Skepticism

  • One meta-analysis is cited: breathwork appears to reduce stress, but many studies are biased or overhype broader health claims.
  • Breath is compared to exercise, yoga, dancing: likely helpful, but shouldn’t be sold as a miracle cure.
  • Debate over James Nestor’s Breath: some find it excellent and well-referenced; others see outlandish, weakly supported claims.
  • Discussion of the hypothalamic–pituitary–adrenal axis: chronic stress vs. brief “bear chase” emergencies; breathwork is presented as a way to downshift a chronically activated stress system, though the exact strength of evidence is unclear.

Techniques, Variants, and Personalization

  • Multiple patterns discussed: 4‑7‑8, box breathing (4‑4‑4‑4), 4‑7‑11, 3‑7, and “just longer exhale than inhale.”
  • Several users find prescribed intervals too long or panic-inducing and are encouraged to shorten timings or start unstructured and gradually lengthen.
  • Noted that “ratios, not literal seconds” matter and that people’s physiology and CO₂ sensitivity differ (a ratio‑scaled pacing sketch follows this list).
  • Some breath coaches avoid rigid timers for trauma-affected people, preferring body-led pacing.
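
A toy pacing script makes the ratio point concrete: keep the 4:7:8 shape but scale a base unit up or down (illustrative only, not medical guidance):

```python
import time

def breathe(pattern=(4, 7, 8), unit=1.0, cycles=4):
    """Pace inhale/hold/exhale counts; shrinking `unit` shortens every
    phase proportionally, preserving the ratio."""
    for _ in range(cycles):
        for phase, counts in zip(("inhale", "hold", "exhale"), pattern):
            print(f"{phase:7s} {counts * unit:.1f}s")
            time.sleep(counts * unit)

breathe(unit=0.5)   # same 4:7:8 ratio at half the duration, e.g. for beginners
```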

Risks and Safety Concerns

  • Strong warnings about hyperventilation and freediving: risk of shallow water blackout due to suppressed CO₂ drive and unnoticed hypoxia.
  • More general caution that advanced or extreme techniques can be harmful and may warrant supervision; others think ordinary breath practice is safe for most, if not pushed to extremes.

App UX Feedback (4‑7‑8 Site)

  • Multiple issues reported:
    • Timer sometimes freezing on “Hold,” confusing users.
    • No clear signal when a cycle ends; users want audible cues at every transition and sessions to end after a full exhale.
    • Difficulty planning exhale because the shrinking circle’s endpoint isn’t obvious; requests for a visible “target” size and a brief “catch-up” gap between cycles.
    • Initial center-circle color was too low-contrast.
  • Developer responds and iteratively fixes audio cues, color contrast, and cycle-ending behavior.

Tools, Apps, and Other Resources

  • Alternatives mentioned: Breathly, One Deep Breath, Prana Breath, Medito, Plum Village app, watch-based box-breathing apps, and custom web tools inspired by research.
  • Some prefer no app at all, advocating simple, quiet awareness of diaphragmatic (not superficial “belly”) breathing.

Origins and Attribution

  • Multiple comments note that similar techniques exist in traditional yoga/pranayama (e.g., Patanjali / Hatha Yoga), disputing the site’s claim that 4‑7‑8 was “developed” by a modern physician without citing older roots.

Meta: Shut down your invasive AI Discover feed

Lack of clarity in Mozilla’s campaign

  • Many commenters found Mozilla’s petition confusing and under-explained.
  • Complaints: no screenshots, flow diagrams, or concrete examples; assumes prior knowledge of Meta’s AI app; feels like engagement-bait more than an informative explainer.
  • Several people said the submission link should have been to the investigative articles that actually describe the feature.

What Meta’s AI Discover feed appears to do

  • Context from linked reporting: Meta’s standalone AI app has a “Discover” tab showing other users’ AI chats.
  • People testing the app describe the flow as:
    • Chat screen has a “Share” button.
    • Tapping “Share” opens a preview with a prominent “Post” button.
    • Tapping “Post” makes the chat public and surfaces it in Discover; a link can then be shared.
  • Some evidence from users: Discover shows clearly unintended content (e.g., “note to self” about canceling insurance, stylized baby photos with originals attached).

Dark patterns vs user responsibility

  • One side:
    • Chats are private by default; you must tap Share → Post.
    • UI clearly shows you’re “posting”; Mozilla’s language about “quietly turning private chats public” is misleading or false.
  • Other side:
    • “Share” usually means “let me choose where/who to share with,” not “publish to a public feed by default.”
    • Making public posting the only way to share, and calling it “Share,” is a dark pattern, especially for non-technical users.
    • Meta’s long history of nudging oversharing and testing many UI variants makes people distrust that this is merely user error.

Leaving Meta vs network lock-in

  • Some argue the only real solution is to stop using Meta entirely.
  • Others say that’s unrealistic due to network effects, especially where WhatsApp is the de facto communication channel (often zero-rated and used for everything from school to business).
  • A few report successfully quitting Facebook/Instagram with minimal impact; others describe real social or practical costs, particularly outside the US.

Views on Mozilla’s credibility and strategy

  • Mixed reactions:
    • Some are glad Mozilla is still pushing on privacy issues.
    • Others say this specific campaign is sensationalist, poorly communicated, and undermines trust.
  • Prior incidents (terms-of-use changes around data, bundled addons/telemetry, perceived Google dependency) are cited as reasons to doubt Mozilla’s moral high ground.
  • Several call for Mozilla to focus on making Firefox and its web tech stronger instead of vague activism.

Broader attitudes to privacy and platforms

  • Strong baseline distrust of Meta/Facebook: many say you should assume anything you give them can become public or be exploited.
  • Others push back against fatalism, arguing that “nothing is private anyway” is how privacy norms get destroyed.

Too Many Open Files

Debate: Are file descriptor limits still justified?

  • One camp argues per‑process FD caps are arbitrary, poor proxies for actual resources (memory, CPU) and distort program design. They suggest directly limiting kernel memory or other real resources instead.
  • Others insist limits are essential to contain buggy or runaway programs, especially on multi‑user systems, and prefer “too many open files” to a frozen machine.
  • There’s philosophical tension between “everything is a file” and “you can only have N files open,” with some seeing limits as legacy relics and others as necessary quotas.

Historical and kernel-level reasons

  • Early UNIX likely used fixed‑size FD tables; simple arrays are easier to implement and reason about.
  • Kernel memory for FD state isn’t swappable, so unconstrained growth can have nastier OOM behavior than userland leaks.
  • FD limits also act as a guardrail against FD leaks; hitting the cap can reveal bugs.

Real-world needs and bugs

  • Many modern workloads legitimately need tens or hundreds of thousands of FDs: high‑connection frontends, Postgres, nginx, Docker daemons, IDEs, recursive file watchers, big test suites.
  • People share war stories of FD leaks (e.g., missing fclose, leaking sockets) causing random failures, empty save files, or failures only on large inputs.
  • VSCode’s higher internal limit hid FD problems that showed up in normal shells.

APIs, select(), and FD_SETSIZE

  • A major practical constraint is the classic select() API and glibc’s fixed FD_SETSIZE (typically 1024) for fd_set; file descriptors at or above that value break code that still uses select().
  • Man pages now explicitly recommend poll, epoll, or platform‑specific multiplexers instead of select (a select‑vs‑poll contrast follows this list).
  • People describe hacks to avoid third‑party libraries’ select() limits (e.g., pre‑opening dummy FDs so “real” FDs stay below 1024).
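
The ceiling lives in the fd_set bitmap rather than the kernel, which is why poll() (array-based, with no FD_SETSIZE) is the recommended replacement. A minimal contrast in Python, where select.poll is Unix-only:

```python
import select
import socket

sock = socket.socket()
sock.bind(("127.0.0.1", 0))
sock.listen()

# select(): backed by a fixed-size fd_set; in C, fds >= FD_SETSIZE (often
# 1024) are undefined behavior, and CPython raises ValueError for them.
readable, _, _ = select.select([sock], [], [], 0)

# poll(): takes a list of descriptors, so there is no FD_SETSIZE-style cap.
poller = select.poll()
poller.register(sock, select.POLLIN)
events = poller.poll(0)   # timeout in milliseconds
```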

OS-specific behavior

  • macOS is criticized for very low defaults and undocumented extra limits for sandboxed apps; raising kernel sysctls has caused instability for some.
  • Linux defaults (e.g., 1024) are widely considered too low for modern machines; values like 128k or 1M are seen as reasonable on servers.
  • Windows handles are contrasted: more types, effectively limited by memory, not a small hard cap.

Proposed practices and tooling

  • Common advice: raise the soft limit to the hard limit at program startup (Go does this automatically; a similar Rust snippet was shared in the thread, and a Python analogue follows this list), and configure higher hard limits on servers.
  • Others caution this is a band‑aid unless you first understand why so many FDs are needed.
  • Tools like lsof, fstat, and htop help inspect FD usage, though lsof’s noisy output is criticized.
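
The "raise soft to hard at startup" advice is only a few lines in most languages; a Python analogue of the Rust snippet from the thread (Unix-only resource module):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if soft < hard:
    try:
        # An unprivileged process may raise its soft limit up to the hard
        # limit; raising the hard limit itself requires privileges.
        resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    except (ValueError, OSError):
        # e.g. macOS, where the hard limit can report "infinity" while the
        # kernel still enforces kern.maxfilesperproc.
        pass

print(resource.getrlimit(resource.RLIMIT_NOFILE))
```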

Self-reported race, ethnicity don't match genetic ancestry in the U.S.: study

Genetic Ancestry vs. Self-Reported Race

  • Commenters note the headline is overstated: self-identified groups like “African American” usually do reflect substantial African ancestry, but not cleanly or uniformly.
  • The study is read mainly as showing that U.S. racial labels are coarse, self-reported, and often misaligned with the fine-grained ancestry visible in genomes.
  • Some argue this just renames “race” as “African/European/Asian ancestry” rather than abolishing the concept; others stress that gradual geographic gradients and admixture make hard racial clusters scientifically weak.

High Diversity Within Africa and Limits of “African”

  • Repeated emphasis that Africa holds more human genetic diversity than the rest of the world combined; “African” is seen as a very poor biological category.
  • Discussion of population bottlenecks and founder effects when small groups left Africa vs. long, continuous diversification within Africa.
  • Some point to Khoi-San groups as especially diverse, though it’s noted that overall genetic variance doesn’t necessarily translate to visibly different appearance.

Race in Medicine: Crude Proxy vs Precision

  • Many want medical research to move from racial categories to direct genetic markers (e.g., for obesity, sickle cell, cystic fibrosis) and environment.
  • Others counter that race still has practical value:
    • It’s cheap and fast to ask in clinical settings.
    • It correlates both with some ancestry-linked risks and with social/environmental exposures (e.g., discrimination, diet, access to care).
  • There is disagreement over how strong a signal race provides, and whether it is “better than nothing” or dangerously misleading.

Social Construction, Culture, and Identity

  • Several emphasize race as a social construct with real consequences: people live their lives as “Black,” “white,” “Asian,” etc., independent of DNA.
  • Stories about Cajun/Creole identities, Italian/Irish, and Native American claims illustrate that “race/ethnicity” often track history, culture, and power more than genetics.
  • Some describe choosing a single race on forms despite mixed ancestry, based on culture, family ties, or perceived advantage/safety.

US Categories, “Hispanic,” and Administrative Uses

  • Many criticize U.S. race/ethnicity boxes as inconsistent and politicized (e.g., “Hispanic” as ethnicity, not race; Spain vs. Latin America; “Caucasian” vs official “White”).
  • Others respond that these categories are designed primarily to track social inequality and discrimination, not to cleanly map biology.

Scientific and Political Disputes

  • Debate over:
    • Whether humans have biologically meaningful “races” or subspecies (most argue no, some think genetically defined subgroups could be formalized).
    • Interpretations of Out-of-Africa vs archaic admixture.
    • Editorial pushes in major journals to treat race and ethnicity as sociopolitical constructs and to avoid using them as genetic proxies.
  • Some see “race science” as discredited; others view it as evolving toward a more complex, ancestry- and environment-based picture rather than simple racial essentialism.

What methylene blue can (and can’t) do for the brain

Appeal of methylene blue in “biohacking” circles

  • Seen as a long‑lived underground favorite in nootropics/alternative medicine communities.
  • Pattern described: people discover it, expect a miracle, then either feel nothing, get mild short‑lived benefits (possibly placebo), or side effects; most stop using it.
  • Compared to selegiline and other MAO inhibitors that attract “more dopamine = better” enthusiasts, with similar disappointment and confusion about fatigue/tolerance.

Access, legality, and “gatekeepers”

  • One reason for its popularity: easy to buy (even from lab or fish/aquarium‑supply stores), old enough to predate modern regulation, no prescription needed in many contexts.
  • Debate over whether ordering prescription meds (e.g., rasagiline, selegiline) from overseas is “easy” vs risky and gatekept; legality varies by jurisdiction and controlled‑substance status.
  • Some argue trust in foreign subsidiaries/European labs is often similar to domestic pharma; others warn about unregulated Chinese suppliers and contamination risks.

Efficacy and user experiences

  • One commenter took daily oral doses for months and noticed no cognitive change; main issues were intense staining of counters, teeth, and blue/green urine.
  • Others emphasize that homeostasis makes lasting performance boosts from any psychoactive drug hard to achieve without side effects and tolerance.
  • Skeptical view: supposed “promising results” are largely placebo, survivorship bias, and marketing by supplement grifters.

Risks and pharmacology

  • Acknowledged as a powerful MAO inhibitor at some doses; warnings about serotonin syndrome and one reported fatal outcome when self‑treating depression.
  • Classic MAOI dietary issues are raised; a user on another MAOI clarifies that the highest risk is with large amounts of aged/fermented foods, not casual chocolate/coffee.
  • Caution for people with G6PD deficiency, especially from Mediterranean, African, or SE Asian backgrounds, due to risk of hemolysis (similar to other antimalarials).
  • Discussion of high‑dose hospital use, tissue staining (brain/heart), and dose ranges considered “safe” vs clearly excessive.

Clinical use and research obstacles

  • Known medical roles mentioned: vasopressor in vasoplegia after cardiopulmonary bypass; treatment for fish parasites.
  • Some claim lack of double‑blind trials is due to no patent incentive; others counter that plenty of non‑patentable interventions are heavily studied and that repurposed, old drugs can still be highly profitable.
  • Practical challenge for blinding trials: blue/green urine; suggested workaround is a similarly colored placebo dye, though technically nontrivial.

Broader critique of nootropics and self‑medication

  • Several commenters argue that sleep, exercise, hydration, diet, and removing harmful exposures outperform nearly all “stacks.”
  • Cautionary parallels drawn to St. John’s Wort and ADHD stimulants: initial euphoria or relief is often misinterpreted, tolerance develops, and underlying neuropharmacology is more complex than “boost chemical X.”

Top researchers leave Intel to build startup with 'the biggest, baddest CPU'

CPU vs GPU and ML Hardware

  • Multiple comments argue it’s far easier for a startup to ship a CPU than a GPU: CPU interfaces (compilers, OS, tools) are standardized, while GPUs need massive, evolving software stacks (graphics APIs, custom compilers, CUDA-like ecosystems).
  • Several people want affordable ML-capable hardware more than a “GPU” per se, but others note ML accelerators are even harder: you must match NVIDIA’s rapid cadence and CUDA lock-in, which most software assumes.
  • Discussion of GPU memory:
    • Request for GPUs with user-upgradable large RAM; countered that GDDR close to the die is essential for bandwidth, and any move to socketed or system RAM is a huge performance hit.
    • Techniques like GPU access to system RAM/storage exist, but are seen as last-resort tools that “all suck to different degrees.”
  • Debate over whether discrete GPUs/AI coprocessors will disappear like FPUs. Consensus: integrated NPUs/GPUs will dominate low-power devices, but high-end and datacenter workloads will continue to need large discrete accelerators.

RISC‑V, Openness, and Ecosystem

  • Some are excited by a “biggest, baddest” RISC‑V CPU and see room for a high-performance implementation, analogous to Apple’s use of ARM.
  • Others note RISC‑V’s main advantage is open licensing; it doesn’t prevent ME/AMT-style management engines, which are ISA-agnostic.
  • Ecosystem concerns:
    • Toolchains exist and are improving, but high-end microarchitecture-specific tuning is immature because there are few truly high-performance RISC‑V cores to target.
    • LLVM/GCC can and do optimize for particular cores via scheduling models, but this requires complex per-CPU descriptions and detailed vendor docs.
  • Some see starting at supercomputing/high-end servers and working downward as an unusual but potentially disruptive path for an ISA.

Startup, Article, and Intel Context

  • Commenters find the article vague on technical details, reading more like a local-business or investor pitch emphasizing founder pedigree rather than architecture specifics.
  • The piece is framed as regional news: Intel is a major Oregon employer, so a spinoff is notable to a non-technical audience that may barely recall what a CPU is.
  • Some see this as a bad look for Intel—loss of senior talent and continued disinvestment in Oregon—rather than clear evidence the startup is special.
  • There’s skepticism of “ex‑BigCo” branding in general; prior high-profile failures are cited as evidence that résumés and combined “X years of experience” are weak predictors of startup success.
  • A few expect brutal competition in AI/compute and predict that, if the company succeeds, it’s likely to be acquired by a larger player.

Dystopian tales of that time when I sold out to Google

Generational disillusionment and “it’s all a scam”

  • Older commenters reflect that realizing capitalism is often extractive, not meritocratic, is a common late realization across generations.
  • Some note that Millennials are no longer young or naive; many have already gone through the “it’s not that bad / it would be illegal if it were” self-rationalization phase.

Corporate doublespeak and “radical transparency”

  • The line “radical transparency doesn’t mean you get to say negative things” is widely mocked.
  • Some argue it’s not cognitive dissonance but deliberate doublespeak: words mean one thing in PR and another inside the company.
  • Others say managers often really mean “this isn’t a license to be an asshole,” but admit it’s usually used to suppress criticism.

Crypto’s “true purpose” and systemic comparison

  • A major tangent debates whether crypto is inherently a “Captain Planet villain scheme” or a tool to escape state monetary control.
  • One side argues its purpose is censorship-resistant, non-confiscatable money; opponents counter that states can and do seize it, and physical coercion still works.
  • Critics say crypto’s real impact has been enabling fraud, ransomware, dark markets, and sanctions evasion; defenders reply that traditional banking also enables plenty of abuse.

Privilege, AI, and who pays the costs

  • Some agree with the post’s theme that tech workers’ comfort rests on others’ exploitation.
  • Others push back against caricatures of “rich white guys” dismissing AI harms, but multiple people say they’ve seen exactly that attitude on HN.

Co‑ops, ownership, and risk

  • The quoted line about hoarding profits sparks a question: why aren’t there more software co‑ops?
  • Answers: risk aversion; lack of sales skill among engineers; capital providers want returns; interpersonal conflict and ego; co‑ops can magnify people problems.

Reactions to the author and tone

  • Some find the piece powerful and relatable, especially the contrast between Google’s “don’t be evil / best place to work” branding and lived reality.
  • Others call it badly written, overdramatic, or self‑victimizing: “believed corporate propaganda, made trouble, then got laid off.”
  • There’s a heated subthread about the author’s identity (polyamorous anarchist, queer) and whether criticizing Google implies “people like me should run things,” with accusations of bias and straw‑manning on both sides.

“Bring your whole self to work” vs professionalism

  • One camp says this slogan was a mistake: work should be about skills and boundaries, not full personal identity and politics.
  • Another argues that “whole self” just normalizes what straight parents have always done—talk about their lives at work—and that authenticity can be healthy when goals align.
  • Many agree there must be limits: some aspects of identity and politics are best kept out of day‑to‑day collaboration.

Temps, inequality, and white‑collar norms

  • Commenters highlight how temps/contractors (TVCs) are structurally kept second‑class to avoid legal obligations and benefits, not just to inflate engineers’ egos.
  • Some note that invisible service staff are a longstanding feature of Brazilian class inequality; Google participated in, rather than invented, this dynamic.
  • A few frame this as part of a broader destruction of social mobility ladders (e.g., “mailroom to executive suite”).

Surveillance, spyware, and Gaza line

  • The closing claim that “every software is spyware” is disputed: some insist free software needn’t be, others point out many “free” projects still track users.
  • The line about Google “indexing which Gaza families to bomb” confuses some; others interpret it as a metaphor for cloud/military contracts and data‑driven targeting, though details are seen as unclear or hyperbolic.

Being fat is a trap

Addiction, Emotion, and the “Fat Trap”

  • Many commenters agree with the article’s framing that overeating often functions like an addiction: food regulates emotion, dampens stress, and fills psychological gaps.
  • Several note that if you just change shopping habits or “avoid bad aisles” without addressing underlying emotional needs, you tend to relapse into takeout, snacking, or binges.
  • Others push back, saying for them excess weight was mostly unexamined habit and sedentary life, not “food obsession” or addiction.

CICO vs Biology, GLP‑1s, and “Willpower”

  • One camp insists weight loss is fundamentally calories-in/calories-out (CICO): all diets are just disguised restriction; “stop drinking calories,” “eat less, mostly whole food.”
  • Another emphasizes that biology defends body weight: post‑diet hunger, metabolic slowdown, and lifelong “food noise” make maintenance extremely hard for many.
  • GLP‑1 drugs (Ozempic, semaglutide, etc.) are repeatedly cited as game‑changers because they quiet intrusive thoughts about food and ease compulsive behavior; multiple people report large, sustained losses after “a lifetime of being hungry.”
  • Some argue “willpower” is the wrong frame; success comes from restructuring environments and using pharmacology or therapy, not just trying harder. Others still see willpower and discipline as central.

Diet, Exercise, and Concrete Tactics

  • Broad agreement that diet matters far more than exercise for weight loss; exercise is framed as vital for health, identity, mood, and keeping weight off, not as the main calorie burner.
  • Strategies mentioned: no snacks / no late eating; high‑volume, low‑calorie foods; cutting liquid calories; fasting regimes; removing trigger foods from the house; cooking in bulk; simple home bodyweight routines.
  • There’s disagreement over “all‑or‑nothing” versus moderation. Some find abstaining from certain foods easier; others say absolutism backfires and resembles other addictions.

Environment, Time, and Capitalism

  • Many stress structural barriers: long commutes, shift work, food deserts, high prices for fresh food, and an environment saturated with ultra‑processed, heavily marketed products.
  • Others counter that these can become excuses: you can cook cheaply, do calisthenics at home, and walk early or late; blaming corporations is seen by some as surrendering agency.
  • US portion sizes, sugar, and fast‑food culture are contrasted with many Asian/European norms; some explicitly blame capitalism and food industry incentives.

Genetics, Inequality, and Variability

  • Multiple anecdotes highlight large differences in appetite, satiety, and response to exercise: some remain lean on junk food, others stay obese despite heavy training.
  • Commenters debate how much is genetics versus misreported intake, but there’s broad recognition that people vary widely in hunger signals and “default” weight.

Stigma, Body Positivity, and Mental Health

  • Many echo the article’s view that shame is counterproductive: most fat people already know they’re fat and feel bad about it.
  • Others criticize framing fatness itself as a “trap,” arguing health should be decoupled from size and focus on metabolic markers and mental well‑being.
  • Several tie obesity to broader “class” problems (like poverty or social media addiction), where individual responsibility is real but overwhelmed by systemic forces and cultural norms.

DOGE Developed Error-Prone AI Tool to "Munch" Veterans Affairs Contracts

Misuse of AI and VA Contract “Munching” Tool

  • Many see the AI contract‑scanning tool as fundamentally unfit for deciding which VA contracts to cut, especially medical ones affecting veterans’ care.
  • Strong criticism that its author openly admits he wouldn’t trust his own code, yet it was allowed to influence real decisions.
  • Several note the prompts assume LLMs have deep institutional knowledge (e.g., what can be insourced), which they clearly do not.
  • Some defend the concept of AI as a triage aid for human reviewers, but others argue that in practice it became a de‑facto decision tool without rigorous testing or metrics.

Ethics and Professional Responsibility

  • Many argue participation in DOGE, especially in building tools that affect benefits and healthcare, should be a serious black mark on a résumé.
  • Suggested interview questions: why they joined, why they stayed after seeing the risks, and whether they tried to understand how outputs were used.
  • Counterpoint: the job market is tough and many workers are “cogs” with limited choice, though this is challenged given reports of unpaid/volunteer roles.

DOGE Staffing, Culture, and Intent

  • Widespread view that DOGE was staffed with very young, inexperienced, ideologically aligned tech people who “axe first, ask questions later.”
  • Examples cited of recruiting college dropouts and self‑congratulatory blog posts about “saving government” after a few weeks.
  • Some see this as deliberate: people without domain knowledge or empathy are more willing to make drastic cuts.
  • Others suspect the real goals were political/ideological purges (e.g., using AI to flag DEI/WHO‑related content) and broader data access, not efficiency.

Government vs Startup Mentality

  • Strong pushback against applying “move fast and break things” to veterans’ healthcare and other critical services; this is “not Tinder.”
  • Commenters note reviewing 90k contracts is entirely possible with lawyers and analysts given realistic timelines; the 30‑day deadline is seen as artificial justification for reckless shortcuts.
  • Long subthread compares DOGE to Musk’s Twitter layoffs, debating whether aggressive cost‑cutting is sound business practice or destructive short‑termism.

Broader AI-in-Government Concerns

  • Some cautiously support AI for preliminary filtering if humans remain firmly in the loop and accuracy is continuously audited.
  • Others fear a predictable pattern: unproven AI adopted for scale and cost reasons, then gradually allowed to replace human judgment, with harms difficult to unwind.

What you need to know about EMP weapons

Perceived nuclear risk and Ukraine context

  • Several comments question the article’s framing that we are “on the verge” of nuclear conflict.
  • Others link current anxiety to: Russian nuclear saber‑rattling over Ukraine, Ukrainian drone strikes on Russian nuclear‑capable bombers, and recent India–Pakistan tensions.
  • Some argue damage to Russia’s bomber leg marginally destabilizes deterrence; others say ICBMs and SLBMs dominate, so bomber losses are more “symbolic” than strategically decisive.
  • Doomsday Clock references are dismissed by some as melodramatic or no longer specific to nuclear risk.

Mutually Assured Destruction and rationality

  • One side: any use of nukes between nuclear powers is inherently irrational because retaliation is guaranteed.
  • Other side: past uses (Hiroshima/Nagasaki) were “rational” in context, and limited use or coercive signaling remains thinkable.
  • Debate over whether an isolated tactical use (e.g., by Russia, or in hypothetical Turkey–Russia conflict) would trigger full exchange or stay limited; views range from “game over” for the initiator to “non‑nuclear retaliation is plausible.”

Survival vs. “better to die”

  • Some say in a large nuclear war you’d prefer instant death rather than suffering from burns, radiation, famine, and social collapse (influenced by films like Threads and The Day After).
  • Others strongly reject this fatalism, insisting people underestimate their own survival drive and that life after catastrophe, while grim, can still be worth living.
  • There’s pushback that fictional portrayals exaggerate post‑war social regression; Hiroshima/Germany’s post‑WWII recovery is used as a counterexample, though others respond that modern arsenals are vastly larger and dirtier.

EMP effects: physics, evidence, and uncertainty

  • Multiple commenters note the article gives almost no quantitative parameters (field strengths, distances, frequencies), making its warnings hard to evaluate scientifically.
  • Distinction is made between:
    • Nearby ground/low‑altitude bursts (destroy electronics but coincide with massive blast/radiation).
    • High‑altitude nuclear EMP, which can cover huge areas but is thought to couple mainly into long conductors (power lines, telecom).
  • Historical data: Starfish Prime is cited (Hawaii streetlights and telecom disrupted ~900 miles away). Some emphasize affected technologies were old and modern designs might differ in vulnerability.
  • Military EMP work is said to be largely classified; public documents suggest big concern for communications and grid infrastructure, less for small unconnected devices.
  • One view: modern electronics with better ESD and surge protection may be more robust than 1980s gear; another: solid‑state systems and dense, grid‑tied infrastructure may still be fragile. Overall risk level remains “unclear.”

Faraday cages and practical protection

  • Several users share practical experience: Faraday cages attenuate rather than fully block EM, with performance highly frequency‑dependent.
  • Simple aluminum‑foil wrapping often leaks (especially if seams are neat and uniform); crumpled, multilayer, random overlaps seem to perform better in ad‑hoc tests with Wi‑Fi and phones.
  • Microwaves, metal rooms, MR scanner cages and band‑limited meshes are discussed as real‑world examples, emphasizing:
    • Hole size must be much smaller than the wavelength you want to block (a quick wavelength calculation follows this list).
    • Even “good” cages leak; doors/gaps are weak points.
  • Consensus: for EMP, long external cables and antennas are the main problem; isolated small devices in metal enclosures may fare relatively well.
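
The hole-size rule of thumb falls out of λ = c/f: apertures need to be much smaller than the wavelength being blocked (a common heuristic is under roughly a tenth of it). A quick check with illustrative frequencies:

```python
C = 299_792_458  # speed of light, m/s

for name, f_hz in [("2.4 GHz Wi-Fi", 2.4e9),
                   ("microwave oven", 2.45e9),
                   ("5 GHz Wi-Fi", 5.0e9)]:
    wavelength_cm = C / f_hz * 100
    print(f"{name:15s} wavelength ≈ {wavelength_cm:5.1f} cm")
```

At 2.4 GHz the wavelength is about 12.5 cm, which is why the millimeter-scale mesh in an oven door attenuates well while a centimeters-long gap in a foil wrap may not.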

Critique of the article and sources

  • Some dismiss the piece as alarmist “disaster porn,” noting:
    • No cited experiments, models, or author credentials in EMP physics.
    • No clear distinction between realistic, tested EMP effects and speculative worst‑case scenarios.
  • Others counter that classified military work and historical tests justify taking EMP seriously for infrastructure, even if civilian small devices aren’t wiped out.
  • A few also nitpick SI misuse (units like “Km,” “Khz”) as reducing perceived technical credibility.

Other tangents

  • Side discussions touch on: prepping vs. wasting one’s life, nuclear‑war fiction (Warday, One Second After), NATO Article 5 edge cases, and jokes about YouTube “banning Faraday cage videos” or hiding gear in microwaves.

The X.Org Server just got forked (announcing XLibre)

Fork Motivation and Project Status

  • The fork (XLibre) comes after the author was effectively pushed out of X.Org; the README frames it as rescuing X from “toxic” corporate influence and DEI policies.
  • Many commenters note X.Org is effectively in maintenance/bugfix mode and treated as “abandonware” by most active graphics developers, who are focused on Wayland.
  • Some see the fork as the only way to pursue “making X11 great again” with larger refactors; others think trying to revive X is swimming against the tide.

Maintainer Dispute and Code Quality

  • Linked X.Org issue threads show serious friction between the forking developer and existing maintainers.
  • Maintainers complain his large refactor/cleanup patches:
    • Are mostly cosmetic (moving code, renaming, reflow) with little direct user benefit.
    • Have repeatedly broken basic functionality (e.g., xrandr), indicating fragile code plus insufficient testing.
    • Cause significant ABI churn that downstreams (including proprietary drivers) struggle to keep up with.
  • Supporters argue that:
    • Someone has to tackle technical debt and “random churn that makes the code better” is preferable to nobody touching it.
    • X lacks tests, so breakage isn’t solely his fault.
  • Skeptics see him as a “liability” and doubt a one‑person fork can maintain compatibility across drivers, kernels, and distros.

X vs Wayland: Stability, Features, and Hardware

  • Strong split in anecdotes:
    • Some say X has “just worked” for decades and Wayland quickly runs into missing features (screen recording, screensavers, network transparency, some apps like Jitsi/OBS/Emacs frameworks, Raspberry Pi issues).
    • Others report Wayland has been stable and problem‑free for years, with better smoothness, high‑DPI, multi‑monitor, hot‑plugging, HDR, and security.
  • Nvidia is a major fault line:
    • Several users say they “would use Wayland if they could” but proprietary Nvidia drivers remain problematic (e.g., Xwayland acceleration, multi‑monitor quirks).
    • Some argue this is Nvidia’s fault; opponents respond that if Wayland doesn’t work with widely used hardware, that’s still a practical blocker.

Politics, DEI, and Trust

  • The README’s anti-“Big Tech,” anti‑DEI, and “moles”/EEE conspiracy language triggers extensive pushback.
  • Commenters recall prior anti‑vaccine posts by the same developer and label him anything from cranky to extremist; others dismiss such labels as overreach.
  • Broader DEI debate ensues (meritocracy vs quotas, perceived discrimination), unrelated to graphics but souring some on the fork’s governance culture.

Prospects for the Fork

  • Some hope XLibre will:
    • Provide a haven for X users (especially with Nvidia or BSDs).
    • Put competitive pressure on Wayland.
  • Others predict:
    • It will remain a small, unstable, one‑person project (likened to past efforts like X12/Mir).
    • Distros and serious users will avoid it unless it demonstrates clear, stable improvements and broad hardware support.

Infomaniak comes out in support of controversial Swiss encryption law

User reactions to Infomaniak’s stance

  • Many commenters had just migrated domains, email, or cloud storage to Infomaniak based on its “Swiss privacy” marketing and now feel betrayed or regretful.
  • Some say Infomaniak is “dead to them” and will move domains and data elsewhere, even if it’s painful to migrate large datasets again.
  • Others aren’t surprised, noting it’s common for hosting providers to avoid “truly anonymous” services because such customers can be expensive and risky.

Privacy, AI, and “shared values”

  • One commenter argues that, with AI and pervasive tracking, anonymity is becoming obsolete and that some level of traceability might help protect democracies from disinformation and abuse.
  • This triggers a long philosophical debate about whether humans have any truly “shared values,” with examples like freedom, dignity, and “murder/stealing are bad” challenged as context‑dependent.
  • Several participants stress that it’s safer to define boundaries on actions, not beliefs, and warn that any claimed universal value tends to be used to suppress dissent.

Swiss context and law prospects

  • Some Swiss commenters emphasize Switzerland is not a police state and relies on citizen responsibility; others respond that every police state uses similar rhetoric.
  • One person claims the proposal is broadly opposed politically and “very unlikely” to pass; others link critical local coverage and describe it as Switzerland copying authoritarian surveillance states.
  • There’s mention that other Swiss services (e.g. VPN/email providers) already face or will face surveillance and data retention, undermining “Swiss privacy” as a safe haven.

Alternatives and jurisdiction shopping

  • Multiple domain registrar alternatives are suggested (mostly European), including options for .ch and .li, though users accept tradeoffs in UI or jurisdiction.
  • Some argue there is effectively “nowhere to go”: small privacy‑friendly states will fold under pressure from larger powers, and famous examples of secrecy (e.g. Swiss banking) have already eroded.
  • A strong contingent says true privacy increasingly requires self‑hosting rather than trusting any provider or jurisdiction.

Broader surveillance and authoritarianism concerns

  • Several commenters see a global shift toward authoritarianism (populist or technocratic) and view anti‑encryption/anti‑VPN measures as part of that trend.
  • Others counter that some level of surveillance is necessary for law enforcement and victim protection, leading to a heated exchange about police states, rule of law, and the cost of liberty.

Freight rail fueled a new luxury overnight train startup

Ride Quality, Equipment, and Infrastructure

  • Commenters compare smooth European sleepers with rougher experiences in Egypt, Morocco, and much of the US, attributing differences mainly to track quality and maintenance, not just train age.
  • US freight locomotives are almost all diesel-electric, but each powers only its own axles; unlike EMU‑style passenger trains, freight consists have essentially no distributed traction.
  • Track standards in the US follow return on investment: “glass-smooth” on long, sparse corridors that matter to high‑value freight; rougher and slower near cities, where curves, crossings, and congestion limit speeds anyway.
  • Western Europe’s electrified, multi-track, passenger‑oriented corridors are contrasted with the US freight‑first network and minimal electrification (some of which was removed for cost and clearance reasons).

Freight vs Passenger Priority

  • About 95% of US intercity passenger trains run on freight-owned track; freight’s operational needs (including very long trains) cause delays and make reliable passenger schedules difficult.
  • By law Amtrak should have dispatching preference, but commenters say this is often ignored in practice.
  • Some note ongoing incremental upgrades to 90–110 mph sections, but these are piecemeal and slow.

Economics and Externalities

  • Many argue long‑distance US passenger rail (especially sleepers) struggles to compete with cheap, fast flights; most long routes are seen as “cruise‑like” tourism, not practical transport.
  • Debate over whether pricing externalities (environment, congestion) would make rail relatively cheaper: one side cites economies of scale if more riders shift to rail; others expect overall travel demand to shrink or shift to cars.
  • Sleeper trains have inherently low seat density and high operating costs (staff, linens, food, complex cabins), so they usually require high fares or subsidies.

Experiences with Sleepers and “Moving Hotels”

  • Fans emphasize overnight trains as “moving hotels”: downtown‑to‑downtown, no airport hassle, and a night of lodging replaced by the sleeper.
  • Others report poor sleep, high prices, and limited savings versus flight + hotel, both in Europe and North America.
  • US examples mentioned include the California Zephyr, Coast Starlight, and Auto Train; some highlight spectacular scenery and enjoyable “train cruise” experiences, but not time efficiency.

Auto Trains and Driving Culture

  • The East Coast Auto Train (car + passenger) is cited as Amtrak’s only clearly successful long-distance train, often sold out; several wonder why there’s no West Coast equivalent.
  • Europeans in the thread find >200 km drives tiring; many Americans see 200–400 km as routine day trips, which reduces perceived need for short overnight services.

Viability of Luxury Overnight Startups

  • Several see a niche for high-end “train cruise” products aimed at wealthy tourists or business travelers who value comfort over speed.
  • Others are highly skeptical: US single‑track, freight congestion, and frequent multi‑hour delays are seen as incompatible with a premium, time‑sensitive product.
  • There is concern that low capacity (suites instead of seats) combined with custom rolling stock makes the business case very fragile; “affordable” sleeper startups are viewed as especially unrealistic.
  • Some note the LA–SF distance may be awkward for an overnight run (too short for a full night unless the train is artificially slowed or parked) and question the route choice; a rough timing sketch follows this list.
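
As a back-of-envelope check on the timing point above, here is a rough sketch in Python; the route distance and average speed are illustrative assumptions, not figures from the thread:

```python
# Rough LA-SF overnight timing sketch; both inputs are assumptions.
route_km = 750      # assumed rail routing, longer than the ~600 km drive
avg_kmh = 90        # assumed average speed including stops and delays

hours = route_km / avg_kmh
print(f"end to end: ~{hours:.1f} h")  # ~8.3 h

# A 21:00 departure would then arrive around 05:20, which is short of
# a full night's sleep plus a reasonable arrival hour; hence the
# suggestion that the train would need to be slowed or parked en route.
```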

Alternatives and Variations

  • Suggestions include day trains optimized for remote work (private desk pods, good connectivity) and combining daytime offices with nighttime cabins at higher density.
  • Commenters emphasize that rail works best downtown‑to‑downtown; where car rental or suburban origins/destinations dominate, trains lose appeal.

Self-hosting your own media considered harmful according to YouTube

YouTube’s dominance and (limited) alternatives

  • Many see no realistic one‑for‑one replacement for YouTube due to its scale, infra, search/recommendations, and ad payouts.
  • Alternatives mentioned: Vimeo, Rumble, Odysee, BitChute, Nebula, PeerTube, Dailymotion, Twitch, Kick, X, Substack, Internet Archive.
  • Rumble is praised for video quality and lax moderation but criticized for tolerating extremist content; some refuse to support it on that basis.
  • Nebula and Floatplane are cited as promising creator‑driven platforms, but their reach still depends heavily on YouTube for discovery.

Self‑hosting and federated options

  • Self‑hosting via Jellyfin/Plex/Kodi or PeerTube/ActivityPub is popular in principle but seen as too complex for most users; “four‑click containers” and turnkey images help but don’t solve UX or discovery.
  • Bandwidth and storage costs, CDN complexity, and the risk of viral traffic spikes are repeatedly cited as hard blockers versus “just put an MP4 on a web server” (a minimal sketch of that baseline follows this list).
  • Some argue federation plus “value‑for‑value” or direct patronage could eventually support creators, but monetization and discoverability remain unresolved.
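
For scale, the “just put an MP4 on a web server” baseline mentioned above can be as small as the following sketch using only Python’s standard library; the directory name and port are arbitrary illustration choices:

```python
# Minimal sketch: serve a directory of MP4 files over plain HTTP.
# The directory name and port are arbitrary illustration choices.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Note: SimpleHTTPRequestHandler does not implement HTTP Range
# requests, so in-browser seeking may be unreliable; bandwidth,
# storage, and viral traffic spikes are exactly the costs the
# thread says this approach does not address.
handler = partial(SimpleHTTPRequestHandler, directory="media")
HTTPServer(("", 8000), handler).serve_forever()
```

The gap between this one-liner and a YouTube-scale pipeline (transcoding, CDNs, search/recommendations, monetization) is the point commenters keep returning to.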

Ads, ad‑blocking, and user behavior

  • Aggressive anti‑adblock measures (warnings, playback limits) pushed several commenters to use yt‑dlp (sketched after this list) or simply watch less YouTube; others pay for Premium and consider that the “fair” solution.
  • Some predict escalating technical “ad wars”; others note that Premium itself is being slowly enshittified (price hikes, restrictions).
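
For reference, the yt‑dlp workflow mentioned above can be driven from Python as well as from the command line. A minimal sketch follows; the URL is a placeholder, and “format”/“outtmpl” are standard yt‑dlp options rather than anything specific to this thread:

```python
# Minimal yt-dlp sketch: download one video at best available quality.
# The URL below is a placeholder, not a reference to any real video.
from yt_dlp import YoutubeDL

opts = {
    "format": "bestvideo*+bestaudio/best",    # merge best video + audio
    "outtmpl": "%(title)s [%(id)s].%(ext)s",  # output filename template
}
with YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=..."])
```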

Copyright, piracy, and the “harmful” label

  • Many believe the strike is really about revenue protection and ad‑skipping (e.g., the Kodi YouTube plugin), not safety.
  • One thread argues the video plausibly violates YouTube rules against explaining how to get unpaid access to media, especially given US laws on DVD/Blu‑ray decryption; others counter that legality varies by jurisdiction and that region‑locking would be saner than global removal.
  • DMCA systems and Content ID are widely criticized: easy for bad actors to file fraudulent claims, hard and risky for small creators to fight back.

Moderation, censorship, and scope creep

  • Broad concern that once vague “dangerous or harmful” categories and automated enforcement are normalized (COVID, “safety,” copyright), they expand to cover competition, self‑hosting, and unpopular viewpoints.
  • Others push back: some level of moderation is unavoidable (CSAM, incitement, obvious medical quackery), and platforms face real legal and business pressure from advertisers and regulators.
  • Debate centers less on whether to moderate and more on who decides (platforms vs law vs courts), clarity of rules, and lack of meaningful appeal.

Economics and lock‑in

  • Several note creators are “golden‑handcuffed”: YouTube’s ad market, recommendation engine, and network effects make moving away economically irrational.
  • Self‑hosting or federated video is considered feasible for niche or hobby use, but not yet for those trying to earn a living.
  • Broader structural critiques target the ad‑funded platform model and call for antitrust action, regulation, or public/commons‑based infrastructure to rebalance power.