Hacker News, Distilled

AI-powered summaries for selected HN discussions.

Nvidia takes $5B stake in Intel under September agreement

Size and implications of Nvidia’s Intel stake

  • Original September piece said ~4% post-issuance; commenters ask if that’s still accurate but no clear answer emerges.
  • Some see it as a major symbolic shift: Intel’s key owners now include the U.S. government, Nvidia, and SoftBank (though others note large index-fund managers still hold more overall via funds).
  • One concern raised: this may dampen Intel’s role as a meaningful AI competitor to Nvidia.

Could Nvidia just buy Intel? Antitrust and policy views

  • Several argue a full acquisition would likely be blocked on global antitrust grounds, citing prior failure to buy ARM and the power of EU/UK regulators over global deals.
  • Others think the current U.S. administration might tolerate it, especially to create a “US foundry behemoth.”
  • There’s debate over how much de jure power foreign regulators really hold versus the huge de facto leverage they wield through market access.

Intel’s technical position and “wizard” talent

  • One thread stresses Intel’s problems are not primarily money but missing expertise: “wizards” in advanced manufacturing mostly sit at TSMC, with deep, tacit knowledge not externally published.
  • The path to becoming such a “wizard” is described as long: specialized PhD work plus years under experts, often for modest pay and difficult hours.
  • Some push back that Intel is closer to TSMC than portrayed: already running high‑volume EUV production since 2023–24 and advancing 18A.

Ownership structure, funds, and control

  • Debate over whether large asset managers (BlackRock, Vanguard, State Street) should “count” as owners versus being proxies for millions of individuals.
  • Clarifications: funds typically vote the shares (proxy voting guidelines), which raises governance concerns for some; others point out this is standard practice and not inherently conspiratorial.

Circular investments and risk to Nvidia

  • Multiple comments worry about Nvidia investing in its own customers (OpenAI, Intel, others) while also selling them hardware, calling it a “tight circle” beyond normal money velocity.
  • Critics argue this amplifies downside: if a customer fails, Nvidia loses both revenue and equity value.
  • Others frame it as medium‑risk, high‑reward: a few breakout successes could more than offset failures—but there is no consensus.

Corporate ownership and limited liability (broader tangent)

  • One subthread proposes banning companies from owning companies; only people would own companies.
  • Lawyers and others push back:
    • Would complicate subsidiaries, cross‑border operations, joint ventures, and M&A.
    • Would force immediate “IPOs” of subsidiaries to individuals and reduce saleability/value of businesses.
    • Limited liability is defended as necessary for risk‑taking; critics counter that it mainly protects capital owners and socializes some risks.

Five Years of Tinygrad

Project goals & status

  • Commenters ask what tinygrad has actually achieved in five years and what it can do now.
  • Cited goals from its site: run standard ML benchmarks/papers 2× faster than PyTorch on a single NVIDIA GPU and perform well on Apple M1; ETA mentioned as next year.
  • It already powers an automotive driver-assist stack and can run NVIDIA GPUs on Apple Silicon via external enclosures.
  • Mission is framed as “commoditizing the petaflop” and enabling efficient LLM training on non‑NVIDIA hardware.

Potential impact & competition

  • Some see tinygrad as a potential alternative backend to PyTorch/TensorFlow, especially for edge and non‑CUDA hardware.
  • Others argue PyTorch could neutralize it by adding an AMD backend to its own compiler stack, leaving tinygrad’s main work (AMD codegen) as a feature PyTorch could adopt.
  • tinygrad maintainers respond that they welcome being used as a backend and already provide PyTorch and ONNX frontends.

Code size, complexity, and style

  • The low line count is polarizing: some see ~19k SLOC with zero dependencies as evidence of low incidental complexity; others complain it feels like code‑golf and is hard to read.
  • A linked optimization file becomes a focal point: critics find it dense; defenders say GPU compilation is inherently complex and the code is readable, “2D”, and appropriate for a small, expert team.
  • There’s debate over whether fewer lines actually imply simplicity; several note that autoformatters trade away information density for consistency.

Language & ecosystem comparisons

  • Discussion branches into Mojo vs CPython, Julia’s suitability as a Python successor, 1‑based indexing, multiple dispatch, metaprogramming, and trust in Julia’s correctness.
  • Some argue Mojo’s divergence from Python semantics weakens its pitch; others say Mojo’s aim is acceleration of Python‑like code on specialized hardware, not replacing CPython.

Organization, hiring & funding

  • Hiring via paid bounties and contributions is praised as highly productive and more meaningful than LeetCode interviews, but also criticized as potentially underpaying skilled work.
  • The company is small, mostly remote, with periodic meetups in Hong Kong and some physical offices.
  • Funding comes from VC, AMD contracts, and a hardware division selling multi‑GPU boxes (~$2M/year revenue); commenters debate whether this can support a team of engineers.

“Elon process”, TRIZ, and attribution

  • The blog’s reference to an “Elon process” (remove dumb requirements, “the best part is no part”) triggers pushback.
  • Several note these ideas predate that figure (e.g., TRIZ, classic design aphorisms); some dislike marketing that centers a celebrity rather than original sources.
  • There’s broader meta‑discussion about separating technical achievements from controversial public personas, and about not derailing threads into personality politics.

NVIDIA, AMD & market dynamics

  • Many see real value in helping AMD and other vendors compete with CUDA, calling this potentially worth a lot of money and technologically important.
  • Some believe open‑source software and models, plus strong inference on commodity hardware, are the realistic path to “owning” NVIDIA’s current dominance.

Hiring bounties, AI, and the future of coding

  • The bounty‑as‑interview model is contrasted with multi‑stage corporate interviews; some find it fairer, others see it as exploitative if undercompensated.
  • There’s concern that AI coding agents will flood bounties with low‑quality patches, shifting value from coding to task specification and verification.
  • One commenter speculates that as LLMs make both writing and understanding large codebases easier, huge legacy projects (LLVM, Linux, Chrome) may be harder to justify vs. focused, smaller stacks like tinygrad.

Community sentiment

  • Enthusiasts praise the openness, clear technical mission, tiny stack, and hardware/software co‑design and express strong hope that tinygrad succeeds in pushing back against “rent‑everything” compute.
  • Skeptics question the marketing emphasis (celebrity references, line counts), code ergonomics vs. PyTorch, and the founder’s public political writings, with some saying they’ll stick with mainstream frameworks for now.

GOG is getting acquired by its original co-founder

Acquisition Rationale & Structure

  • Official line: CD PROJEKT wants to focus on RPG development; selling GOG lets each pursue its own mission.
  • Many commenters see GOG as a lower-margin, more volatile business being spun off from a stronger studio business.
  • Some speculate GOG is being ring‑fenced so that if CD PROJEKT is ever acquired, GOG can remain independent and mission‑driven.
  • People generally like that GOG will be privately held by a founder rather than under public‑market pressure.

Financial Health & Viability

  • FAQ says GOG is “stable” with an “encouraging year”; some readers find that phrasing evasive.
  • Analysis of CD PROJEKT filings shows tiny profits on relatively large revenue and very high cost of sales (~70%+), likely driven by revenue share with developers.
  • GOG’s contribution to group profit is small; commenters see the spin‑off as rational but worry about long‑term sustainability and growth, especially against Steam.

DRM‑Free, Ownership & Offline Installers

  • Strong support for GOG’s DRM‑free stance and downloadable installers; many consciously buy there over Steam despite worse UX or higher prices.
  • Long debate over whether Steam purchases are “leases”: concerns about revocable licenses, delistings, changing content, and store shutdown risk.
  • Counterpoint: Steam has a decades‑long track record, delisted games usually remain downloadable, and for many users practical convenience outweighs theoretical ownership.
  • Users acknowledge that even GOG licenses are not legally “ownership” and can’t be resold, but offline installers are seen as a meaningful safeguard.

Linux, Clients & Ecosystem

  • Many want an official Linux Galaxy client and/or formal support for Heroic, with cloud saves, achievements, multiplayer, and Linux builds wired into GOG’s backend.
  • Others argue GOG’s value is precisely that no client is required; community tools (Heroic, Lutris, lgogdownloader, minigalaxy) plus documented APIs already exist.
  • Experiences with Heroic/Galaxy‑compatible features (cloud saves, achievements) are mixed; some report they work, others find them flaky.

Preservation & Catalog

  • GOG is praised as one of the few commercial actors serious about game preservation and classic titles running on modern systems.
  • Concrete examples: Heroes of Might and Magic 3, Master of Magic; GOG often preferred over newer “HD” or Steam versions.
  • Some note GOG’s biggest weakness is not enough new releases; many would rebuy modern games DRM‑free.

Piracy & DRM Debate

  • One camp claims widespread piracy makes DRM‑free AAA releases non‑viable.
  • Others counter that most Steam DRM is trivial, piracy is largely a service/price problem, and DRM mainly harms paying customers and enables artificial expiry of games.

User Experience & Trust

  • GOG generally seen as ethical and user‑friendly, but there are blemishes:
    • Galaxy’s technical quality and security (old CVEs).
    • A notable refund denial due to mis‑logged playtime.
    • Past missteps like always‑online HITMAN and Gwent on GOG.
  • Overall sentiment: cautious optimism; people hope independence strengthens GOG’s preservation/DRM‑free mission but remain wary about its financial and competitive position.

Static Allocation with Zig

Static Allocation as “Old but New”

  • Many note that “allocate everything at init, no heap afterward” is decades-old practice in embedded, early home computing, some DBs, and game engines.
  • Others argue it’s still underused in mainstream backend/web work, so repackaging it (e.g. as a style guide) is useful, not hype.
  • Several point out TigerStyle explicitly builds on prior work like NASA’s safety rules, not as something novel but as a disciplined application.

Motivations and Claimed Benefits

  • Determinism: avoiding runtime allocation improves latency predictability and makes worst‑case behavior easier to reason about.
  • Safety: in Zig without a borrow checker, banning post‑init allocation is used as a strategy to avoid use‑after‑free and scattered resource management.
  • Simpler reasoning: centralized initialization and fixed limits encourage explicit thinking about resource bounds (connections, buffers, per-request memory) and reduce “soup of pointers.”
  • Design forcing function: static allocation pushes you to define application‑level limits and batch patterns (regions/pools), similar to Apache/Nginx memory pools.

Critiques and Tradeoffs

  • Static reservation can hoard memory and starve other processes, especially on multi‑tenant systems; dynamic allocation plus good design is often “good enough.”
  • With OS overcommit, large “static” reservations don’t guarantee you won’t OOM later, and touching all pages at startup just shifts when failure happens.
  • You still need internal allocators (pools, free-lists), so “no allocation” really means “no OS-level allocation after init,” not that memory management disappears (see the sketch after this list).
  • Fragmentation and exhaustion of fixed pools can be hard to debug (e.g. comparisons to LwIP), and you can still have logical use‑after‑free via index reuse.
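
A minimal sketch of the pattern under discussion (in TypeScript for illustration; the article’s context is Zig, and all names here are invented): one upfront allocation at init, then an internal free-list that recycles fixed slots, including the index-reuse hazard noted above.

```typescript
// Sketch only: "allocate once at init, recycle internally."
// A fixed-capacity pool with a free-list; after construction,
// the acquire/release path performs no further OS-level allocation.
class ConnectionPool {
  private readonly state: Int32Array;  // per-slot data, preallocated
  private readonly freeList: number[]; // indices of unused slots

  constructor(capacity: number) {
    this.state = new Int32Array(capacity); // the one upfront allocation
    this.freeList = Array.from({ length: capacity }, (_, i) => i);
  }

  acquire(): number {
    const slot = this.freeList.pop();
    if (slot === undefined) throw new Error("pool exhausted"); // hard limit hit
    return slot;
  }

  release(slot: number): void {
    this.state[slot] = 0;     // reset slot state
    this.freeList.push(slot); // the index will be handed out again...
    // ...so a stale handle to `slot` becomes a *logical* use-after-free:
    // the memory stays valid but now belongs to a different "connection".
  }
}

const pool = new ConnectionPool(1024); // capacity fixed at init time
const c = pool.acquire();
pool.release(c);
```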

OS, Databases, and Context

  • Discussion connects static DB buffers to Linux overcommit and OOM behavior; some see historical DB tuning as a driver for overcommit.
  • For a file/block‑backed database, static limits govern in‑memory concurrency rather than total data size, which many see as a good fit.
  • For an in‑memory KV store, commenters stress this implies a hard upper bound on stored pairs and paying allocation cost upfront.

Broader Reflections

  • Some see static allocation as aligning with safety‑critical and game/embedded practice; others note most modern apps favor GC and dynamic allocation for ease.
  • There’s debate over theoretical implications (Turing completeness, reasoning about programs), but consensus that real machines are finite anyway.
  • Several highlight the broader issue of how old techniques get lost and must be “re‑marketed” to new generations.

Swapping SIM cards used to be easy, and then came eSIM

Carrier Control vs. User Freedom

  • Many see eSIM as intentionally reducing user autonomy compared to physical SIMs.
  • Key complaint: moving an eSIM between phones usually requires carrier approval, online access, and often SMS-based verification to the old device.
  • This breaks the “pop SIM into new phone in 10 seconds” workflow, especially if the old phone is broken, lost, or abroad.
  • Carriers can block or complicate transfers, charge fees, or even lock eSIMs to a device, which users view as a power grab reminiscent of pre-SIM CDMA days.

Real-World Failure Modes

  • Numerous anecdotes:
    • Broken phone abroad, unable to receive SMS verification, stuck without number or 2FA for weeks.
    • Carriers requiring in-person store visits, postal QR codes, or bizarre ID checks to reissue or move eSIMs.
    • Travel eSIMs failing due to unsupported phone models, one-time QR codes, or poor customer support.
    • Horror stories of transfers getting stuck and numbers temporarily “lost.”
  • These are contrasted with physical SIMs that generally survive device damage and can be moved instantly.

Where eSIM Shines

  • Strong praise for travel:
    • Buy and provision data/voice before landing, often via apps; avoid language barriers, store visits, and local KYC hassles.
    • Easy to try new carriers or temporary plans; some MVNOs make eSIM swaps trivial via web portals with TOTP.
  • Useful for multiple lines (work/personal, multiple countries) on one device without juggling plastic SIMs.

Technology vs. Policy

  • Several argue the core eSIM tech is fine; problems stem from carrier and manufacturer choices and GSMA rules.
  • Spec allows carriers to block removal/transfer, supporting subsidized-lock business models.
  • Apple and other OEMs dropping SIM slots removes the physical “escape hatch” and amplifies bad carrier behavior.

Broader Pattern: Removed User-Friendly Hardware

  • Thread links eSIM to removal of headphone jacks, microSD, and other physical affordances.
  • Some users deliberately choose phones that retain physical SIM, SD, and 3.5mm jack, seeing these as last defenses against lock-in and hardware fragility.

Emerging Workarounds

  • “Physical eSIM” smartcards (eSIM-on-SIM) let users load eSIM profiles then move them like regular SIMs, but are seen as niche and pricey.
  • Consensus: best current setup is physical SIM for primary line, eSIM for travel/secondary use, and strong regulation to curb carrier abuses.

Show HN: Vibe coding a bookshelf with Claude Code

Vibe coding and ideal project size

  • Many see “vibe coding” as perfect for small, self-contained apps: one-off scripts, single-page tools, and “software for one” that would otherwise live in the “someday” pile.
  • Commenters note a clear size boundary: once a project has too many files or interdependencies, LLMs start to over-generate, introduce odd abstractions, and miss subtle bugs.

Workflows, architecture, and context management

  • Several describe a plan-first workflow: have the AI draft an implementation plan, refine it, save it as markdown, then have the model implement the plan.
  • Good results on larger projects reportedly depend on: well-defined tickets, clear module architecture, example code, and “agent.md” guidance.
  • Classic software design advice resurfaces: program to interfaces, keep modules decoupled, and limit how much code must be in context for any change.

Human intent, taste, and authorship

  • A recurring theme: the model handles execution, the human provides intent and taste. That’s framed as the main leverage.
  • Others argue “taste” itself can be modeled and tuned, as seen in image models; some see “AI taste” as already influencing UI and writing styles.
  • The article’s rhetorical style triggers debate about “AI-smelling” prose and whether polishing with LLMs flattens individual voice.

Usefulness, productivity, and learning trade‑offs

  • Supporters say LLMs collapse the cost of trying ideas: days of side-quest coding become minutes or hours, especially for non-full-time programmers.
  • Multiple people share similar vibe-coded projects (bookshelves, note libraries, movie trackers, learning tools), often built “for fun” rather than hard utility.
  • Critics worry that outsourcing implementation robs people of learning, craftsmanship, and the satisfaction of doing the “tedious” parts themselves.

Novelty, limitations, and skepticism

  • Skeptics observe that successful vibe-coded apps are almost always variants of things in the training data; they want to see genuinely novel algorithms or breakthroughs.
  • Others counter that most real-world coding is repetitive plumbing, not new compression algorithms, and automating that is already transformative.
  • There’s frustration that marketing implies 10–100× productivity, while open-source maintainers aren’t reporting decades’ worth of sudden progress, only more AI-generated “slop” to triage.

Personal software vs SaaS and safety concerns

  • Some argue LLMs make it finally practical to “roll your own” personal tools instead of adapting to bloated, enshittified SaaS.
  • Others stress that vibe coding is unsuitable for safety‑critical domains (planes, air traffic control), yet expect people will still try, raising reliability and oversight concerns.

UK accounting body to halt remote exams amid AI cheating

AI, Accounting, and Gatekeeping

  • Some argue accounting bodies are clinging to their gatekeeper role and revenue streams, even though AI will be used on the job anyway and may eventually erode the value of certificates.
  • Others counter that accounting exams are “good gatekeeping”: they protect clients and create accountability, and AI is not yet reliably correct for real-world technical work.
  • There’s disagreement on whether accounting is a true “value-adding” profession or mostly compliance theater, but several commenters note its deep historical and societal importance.

Effectiveness and Value of Certifications

  • Certifications are seen by some as a response to a low‑trust environment; in an AI era they may become more important, not less, to prove real competence.
  • Others view many certs (e.g., some tech certifications) as profit centers with weak correlation to real skill, easily gamed by memorizing leaked question banks.

Cheating: Before and After AI

  • Cheating in exams long predates AI: solution manuals, fraternity “bibles,” organized answer selling, and even elaborate in‑person schemes (e.g., hidden earpieces).
  • AI dramatically lowers the barrier: students can paste questions into LLMs and get worked solutions, or quickly generate flashcards and study guides.
  • Some say this “democratizes” cheating (no longer only for the rich/connected); others see it as a serious erosion of academic integrity.

Remote vs In‑Person Exams

  • Many believe remote proctoring is fundamentally weak: VMs, second devices, hidden cameras, and LLMs are hard to police.
  • Returning to in‑person exams (often on paper) is widely seen as a necessary response, even though in‑person cheating (bathroom phones, covert devices) still exists but at higher risk and cost.

AI as Study Tool vs Crutch

  • There’s a split between viewing AI as a legitimate study accelerator (e.g., generating Anki cards, clarifying bad lectures) versus a dangerous shortcut that bypasses real understanding.
  • Several note the “temptation slope”: from using AI for study materials to having it do all homework, leaving students unprepared for in‑person exams or real work.

Purpose of Education and Assessment Design

  • Debate over whether education should mirror “real-world work” and accept AI use if results are correct, versus preserving academia as a place for deep understanding, not just job training.
  • Many advocate for heavier weighting of in‑person exams, well‑designed open‑book or oral assessments, and tasks that test conceptual grasp rather than pure memorization.
  • There’s concern that a whole cohort may graduate with inflated grades but shallow skills before employers and institutions fully recalibrate.

Kidnapped by Deutsche Bahn

Overall view of Deutsche Bahn (DB) reliability

  • Many commenters report DB as one of their worst rail experiences in Europe: frequent long delays, skipped stops, sudden terminations in small towns, and missed connections.
  • Others say the system “mostly works” if you build in large buffers (often 1–3 hours) and accept delays as normal, especially on long‑distance ICE/IC; regional/local trains are widely seen as more reliable.
  • Several note DB’s punctuality statistics are flattering: “on time” means <6 minutes late, and cancelled trains don’t count (see the worked example after this list). Shared links put German on‑time performance around the bottom of major European systems.
  • A recurring pattern: people travel for hours only to be dumped in a village, with little guidance, or carried far past their stop because the train cannot or will not stop.
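
A worked example of the flattering-statistics point, with invented numbers (not DB’s actual figures): because cancelled trains drop out of the denominator, the reported rate overstates the share of scheduled trains that actually ran on time.

```latex
% Invented illustrative numbers, not DB's actual figures.
\text{reported punctuality}
  = \frac{\#\,\text{operated trains} \le 6\ \text{min late}}{\#\,\text{operated trains}} = 0.80
\qquad\text{but on a scheduled basis:}\quad 0.9 \times 0.8 = 0.72
```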

Comparisons with other countries

  • UK: widely criticized for cost, complex pricing, cancellations and crowding; some defend service quality and compensation (“Delay Repay”) and say it’s at least as good as DB, sometimes better.
  • Switzerland and the Netherlands are held up as models: very high punctuality, dense synchronized networks, but also high costs and capacity issues on popular routes.
  • Italy, France, Spain, Denmark, Sweden: mixed reports. High‑speed lines often good; regional services can be unreliable. Several say “if you’re worse than Italy, you have a real problem.”
  • Neighboring networks now actively limit DB’s impact: Switzerland reportedly blocks late German trains to protect its own timetable.

Ownership, funding, and structure

  • DB is legally a private corporation fully owned by the state. Critics describe it as “privatize profits, socialize losses”: management chases profit and bonuses while taxpayers cover deficits.
  • Multiple commenters trace decline to 1990s “market‑oriented” reforms: underinvestment in track, closed lines and yards, loss of redundancy, and a massively fragmented internal structure with hundreds of subsidiaries billing each other.
  • Broad agreement that rail quality correlates mainly with sustained infrastructure investment and capacity, not simply public vs private ownership.

Language, communication, and bureaucracy

  • Repeated complaints about opaque announcements (“technical problems,” “issues around X”) and German‑only communication during disruptions, leaving tourists and non‑fluent residents stranded or confused.
  • Strong cultural critique of German rule‑following and bureaucratic rigidity: staff often refuse obvious, humane fixes (“not allowed,” “not registered”), even when the result is absurd detours.
  • Others defend staff, pointing to safety constraints (wrong track, no platform, dense traffic) and note many employees are helpful within the rules.

“Kidnapped” framing and responses

  • Some find the “kidnapped” metaphor offensive and melodramatic; argue this is a rerouting inconvenience, comparable to diverted flights or buses that skip stops.
  • Others say being carried far past your destination with no option to leave or re‑route does feel like a loss of agency, especially when caused by avoidable procedural or infrastructural failures.
  • A minority suggest extreme responses (e.g. faking medical emergencies or pulling the emergency brake); most condemn this as immoral, dangerous, and an abuse of emergency services.

Cars, climate, and modal choice

  • Multiple commenters say these kinds of failures push them back to cars or planes despite wanting to travel by train for environmental or comfort reasons.
  • Some argue Germany’s car lobby, political neglect, and austerity‑style funding choices have deliberately or negligently “enshittified” the rail network, undermining climate goals.

Linux DAW: Help Linux musicians to quickly and easily find the tools they need

Raspberry Pi / ARM Compatibility

  • Some expect few DAWs/plugins to run on Raspberry Pi due to lack of ARM binaries; others report that “pretty much everything” in Linux audio works on ARM if it’s open source.
  • KXStudio and Zynthian are suggested as RPi-focused ecosystems listing many compatible engines and plugins.
  • Consensus: open-source tools usually run on RPi/ARM; proprietary plugins rarely target it except a few (e.g., Pianoteq, some u-he).

Plugin UI & Knob-Based Controls

  • Strong debate over skeuomorphic knobs controlled by mouse:
    • Critics find mouse-knob interaction unintuitive, inconsistent (drag direction, rotation semantics), and sometimes inaccessible (e.g., OS magnifier conflicts).
    • Defenders value knobs for dense layouts, familiarity, quick visual scanning, and good mapping to MIDI controllers.
  • Common “good knob” expectations (sketched in code after this list): linear drag, modifier keys for fine control, numerical readouts, double-click to type exact values.
  • Alternatives discussed: number boxes (less visual at-a-glance), XY pads (e.g., FabFilter-style EQ), mouse wheel, keybinds. No clear superior universal solution emerges.
  • Broader UI split: some prefer minimal, “lifeless” functional UIs; others find highly polished skeuomorphic designs creatively inspiring.
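
A hedged sketch of those “good knob” conventions in TypeScript (element and handler names are invented, not from any plugin discussed): linear vertical drag, Shift for fine control, double-click to type an exact value.

```typescript
// Illustrative only: the "good knob" conventions from the list above.
function attachKnob(el: HTMLElement, onChange: (v: number) => void): void {
  let value = 0.5; // normalized 0..1

  el.addEventListener("pointerdown", (down: PointerEvent) => {
    el.setPointerCapture(down.pointerId); // keep receiving moves off-element
    const startY = down.clientY;
    const startValue = value;
    const move = (ev: PointerEvent) => {
      const pixels = startY - ev.clientY;             // linear: drag up = increase
      const scale = ev.shiftKey ? 1 / 2000 : 1 / 200; // Shift = 10x finer
      value = Math.min(1, Math.max(0, startValue + pixels * scale));
      onChange(value); // caller updates the numerical readout
    };
    const up = () => {
      el.removeEventListener("pointermove", move);
      el.removeEventListener("pointerup", up);
    };
    el.addEventListener("pointermove", move);
    el.addEventListener("pointerup", up);
  });

  // double-click to type an exact value
  el.addEventListener("dblclick", () => {
    const typed = prompt("Value (0..1):", value.toFixed(3));
    if (typed !== null && typed.trim() !== "" && !Number.isNaN(Number(typed))) {
      value = Math.min(1, Math.max(0, Number(typed)));
      onChange(value);
    }
  });
}
```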

Telephony vs Music Audio

  • One view: real-time multichannel low-latency audio for musicians resembles telephony; surprising lack of shared tech.
  • Counterpoint (majority): constraints differ greatly—telephony is mono, heavily compressed, and tolerates far higher latency; music production demands many channels, full spectrum, and millisecond-level latency and sync.
  • Some note that standards like AES67 already lean on VoIP-era tech (RTP, SIP, PTP).

Linux Audio Stack & Distros

  • Several users report modern Linux audio is vastly improved, with PipeWire reducing JACK/ALSA pain and enabling low-latency workflows.
  • Others still encounter choppy UIs, resume-from-sleep issues, and the need for manual config tweaks; frustration with constant stack rewrites vs macOS’s long-stable CoreAudio.
  • Specialized distros (Ubuntu Studio, KXStudio) and PipeWire-with-JACK support are suggested over generic desktops like Mint for serious audio work.

FOSS vs Commercial Tools, UI Quality, Licensing

  • Site filters for “No charge” and “FOSS” are appreciated; some note a drop in visual polish among FOSS plugins and lament lack of designer involvement.
  • Explanation offered: many devs reserve their best-polished work for paid products; most FOSS plugins are “publish-and-move-on” with small communities, making designer collaboration hard.
  • Philosophical tension: some just want high-quality paid tools; others emphasize software freedom over convenience, criticizing proprietary DAWs/plugins for restricting user rights.
  • Wishlist for mainstream DAWs: containerized/reproducible environments, centralized license management, cloud/remote processing, and shareable projects that bypass per-user plugin licenses; several respondents doubt the economics and licensing politics.

Perceptions of LinuxDAW.org

  • Widely praised as a much-needed, well-organized, and surprisingly snappy catalog that greatly improves discoverability.
  • Filters by category (compression, saturation, etc.) and FOSS/commercial status are especially valued.
  • Criticisms: endless scrolling makes reaching the footer annoying; some initially misread it as an ad or confuse it with linuxmusicians.com.

Tool Suggestions & Gaps

  • Recommended plugins and synths include Surge XT, Vital, Dexed, ZynAddSubFX/Yoshimi, LSP plugins, and various commercial Linux ports (Toneboosters, Kazrog, u-he).
  • Some note omissions or low ranking of favorites (e.g., Helm).
  • Alternative workflows mentioned: terminal-based DAWs (e.g., ecasound frontends), live-coding tools (TidalCycles/Strudel), and non-traditional systems like Glicol.
  • Renoise is recommended as a Linux-friendly tracker-style DAW, particularly for electronic genres.

EU to build no-fee payments service like Visa/Mastercard and Apple/Google Pay

Motivation: Sovereignty and US Dependence

  • Many see an EU-run payment rail as overdue “public infrastructure,” like roads or power.
  • A key driver discussed is insulating EU citizens from US political sanctions and corporate decisions (e.g., ICC judges cut off from Visa/Mastercard).
  • Some frame it as shifting from a “US grip” to an “EU grip,” with concern that this further erodes national sovereignty inside the EU.

Architecture, Platforms, and Scope

  • The initiative is understood as both a central bank digital currency (CBDC) and a card/app payment scheme, not a cryptocurrency.
  • It will support physical cards (no phone required) and apps that likely rely on Apple/Google ecosystems and remote attestation, which worries people using de-Googled or FOSS devices.
  • Several note the irony of “built by Europeans only” while still depending on US mobile OSes and Chinese/Taiwanese hardware.

Privacy, Control, and CBDC Fears

  • Supporters argue GDPR and EU law provide stronger privacy protections than US tech firms, and prefer one secure state-level database to many leaky corporate ones.
  • Critics warn CBDCs enable fine-grained control: programmable money, spend restrictions, expiry, targeted account freezes for political dissent, and perfect transaction tracking.
  • Some would rather be tracked by EU institutions than US corporations; others reject both.

Banks, Fees, and Competition

  • The promise is zero interchange for merchants, undercutting Visa/Mastercard; banks would get standardized, lower fees.
  • Skeptics note the EU itself bans card surcharging, making it hard for merchants to steer customers to the cheaper option, contrary to official rhetoric.
  • Debate on whether this is meant to “get rid of banks” or simply provide a parallel public rail most people access via banks anyway.

EU Governance, Regulation, and Timing

  • Mixed views: some see this as necessary integration and a peace-preserving project; others see mission creep toward a federal superstate with heavy regulatory burdens (GDPR, AML, AI rules).
  • Frustration that smaller or less affluent countries (e.g., Brazil, India, Thailand) moved faster with national QR/instant‑payment systems, while the EU is seen as late and slow.

Existing Systems and Special Use Cases

  • Comparisons to SEPA Instant, Wero, Pix, Swish, and QR-based schemes; some argue the EU should just standardize and interconnect what exists.
  • Chargeback protections may be weaker than with current credit cards.
  • Some welcome relief from US payment morality filters (e.g., sex work, adult content) if EU rails don’t impose those restrictions.

You can't design software you don't work on

Limits of Generic Design Advice

  • Many agree high-level principles and shared terminology are helpful for framing problems, but “the map is not the territory.”
  • Several note the article itself becomes the kind of generic advice it criticizes and is tautological.
  • Some argue foundational CS concepts (types, invariants, error handling, time/space tradeoffs, concurrency, information theory) change slowly; what churns is surface-level tools and patterns. Others counter that the “body of knowledge” about how to structure systems is huge, evolving, and context-dependent.

Architecture vs Implementation

  • Strong support for the claim that you can’t effectively design features for a large, existing system without working in that system: real constraints are buried in code, data, and historical hacks.
  • The “Amazon free samples” example is used to show how “simple” requirements explode into many edge cases that are hard to foresee upfront.
  • Some push back: with enough prior experience building similar systems, you can design large applications without coding, especially for greenfield projects.

Is Software an Engineering Discipline or an Art?

  • One view: because practices and trade-offs shift rapidly, software is more creative art than stable engineering, albeit with engineering-adjacent disciplines.
  • Counterview: we do know how to build high-reliability systems (e.g., NASA style), but it’s too slow/expensive for most businesses, which optimize for “good enough” over “good.”

Role and Value of Architects / Analysts

  • Many criticize “architects” who don’t code or read the code: they give buzzword-laden, unusable guidance or rubber-stamp designs produced by senior devs.
  • Others defend a distinct architecture/analysis role focused on understanding business information flows and big-picture coherence, with programmers handling technical details.
  • There’s nostalgia for systems analysts and careful upfront analysis in older industries (e.g., banking); start-up cultures tend to favor rapid experimentation instead.

Consistency vs “Good Design”

  • The article’s line “consistency is more important than good design” is heavily debated.
  • Pro-consistency camp: heterogeneous stacks and “cowboy” tech choices (e.g., one-off Redux sections, random frameworks) impose cognitive and maintenance costs; consistency improves onboarding, velocity, and allows fixing bug classes once.
  • Skeptics argue that rigid consistency can entrench bad patterns, discourage incremental improvement (violating the Boy Scout rule), and push toward risky “boil the ocean” rewrites.

Domain Understanding and User Involvement

  • Several stress you can’t design good systems without deep understanding of the business and users.
  • Best results reportedly come when engineers are also active users, can directly see user issues, and have freedom to fix small problems opportunistically.
  • Discussion touches on XP’s “customer on the team,” with caveats that real customers often don’t want to be involved; effective “customer proxies” are needed.

Organizational and Incentive Problems

  • Commenters blame cheap, compliance-obsessed leadership for underinvesting in real architecture and refactoring time, leading to long-lived “Galactus”-style monstrosities.
  • Short average tenure (≈2 years) limits developers’ holistic understanding; “architects” often become meeting-bound approvers rather than real designers.
  • There’s also criticism of “Real Programmers” who optimize low-level details while resisting necessary business-driven change, creating complex but brittle systems.

Staying ahead of censors in 2025

China: Current Circumvention Tools and Performance

  • Commenters report widespread use of v2ray (VLESS/VMess), Trojan, and Xray-core, often chained via near-China VPS “first hops” (e.g., nyanpass setups) and then multiple Asian hops.
  • WireGuard to personal VPSs “just works” for some visitors; commercial VPNs like Mullvad work but slower.
  • Some use roaming SIMs plus self-hosted VLESS, or even custom proxies over Syncthing relays to reach residential IPs.
  • Bandwidth is described as limited but not hard-capped; congestion and poor peering dominate unless you pay for premium CN2 GIA transit.
  • One person notes extremely low speeds when using SSH as a proxy, while others see ~10 Mbps.

Legal/Sanctions Questions Around Tor and Similar Tools

  • A thread examines whether Tor needs OFAC or ITAR clearance to serve users in sanctioned countries.
  • Several argue Tor is “speech, not trade”: free, open-source software publication rather than a commercial export, so OFAC applies more clearly to paid deals (e.g., BrowserBox) than to Tor itself.
  • Encryption is claimed not to fall under ITAR anymore; others remain unsure and seek legal clarity.
  • Comparisons are made to GrapheneOS: open-source projects “supply” nobody directly; users fetch the code themselves.

Conjure, WebTunnel, and the Move From Obfuscation to Mimicry

  • Conjure (refraction networking using unused ISP address space) is praised as a major milestone: it undermines IP enumeration/blocklisting and forces censors to risk large-scale collateral damage.
  • Some doubt this would deter Russian censors, who already block huge IP ranges and tolerate heavy collateral damage.
  • WebTunnel’s SNI imitation and non-WebPKI certificate support are highlighted as useful not just against state censors but also for evading corporate tracking and ad infrastructure.

Why Focus on Russia/Iran, Not EU/UK/Australia?

  • Some object that the Tor post ignores rising censorship and speech-related arrests in the UK, EU, and other democracies (hate-speech laws, age verification, “online harms,” protest restrictions).
  • Others respond that Tor’s blog post is narrowly about technical blocking (DNS/IP/DPI) where Tor itself is targeted; in the UK/EU Tor isn’t generally blocked, so there’s less for Tor to do at the network layer.

Huge Subthread: UK Hate-Speech, “Malicious Communications,” and Arrest Statistics

  • A long argument centers on reported ~12,000 annual arrests in England & Wales for “online/offensive communications.”
  • Critics claim the UK increasingly criminalizes “insulting” or “abusive” speech, sometimes in private contexts (e.g., slurs in texts, memes, misgendering, slogans, silent prayer near abortion clinics), creating “thought crimes” and political policing.
  • Defenders counter that:
    • Laws target incitement to violence, serious harassment, and stirring up racial/religious/sexual-orientation hatred, not mere criticism.
    • Many cited cases involve explicit or contextual threats (e.g., “set fire to hotels full of asylum seekers”) or coordinated racist agitation (e.g., neo-Nazi sticker campaigns).
    • Arrest figures aggregate very different behaviours, including domestic abuse, stalking, and obscene messaging; raw totals are misleading without breakdowns.
  • Several cases (e.g., hate-speech prosecutions, abortion-clinic buffer zones, controversial tweets, the “Nazi dog” video) are hotly disputed; links and sources are debated, and some media outlets are dismissed as unreliable.
  • Comparisons to Russia/Iran are contested: some see Western trends as on the same spectrum; others insist equating UK practice with open dictatorships trivializes far worse repression.

Broader Free-Speech Philosophy and US–Europe Differences

  • One camp stresses that hate speech and troll farms are a direct attack on democratic consensus; they support attempts (even imperfect) to curb incitement and coordinated hatred.
  • Another camp argues that “hate speech” is inherently elastic and tends to become “whatever those in power dislike,” leading to selective enforcement and long-term danger for all sides.
  • US vs European approaches are contrasted:
    • US: strong constitutional protection against government censorship but heavy private and platform moderation, plus libel leveraged by the wealthy.
    • Europe/UK: more state restrictions on hate/insulting speech and protests, but less focus on private-platform absolutism; some see this as necessary, others as democratic backsliding.
  • Several note you cannot “solve” censorship purely with technology; political change and institutional integrity are also required.

Tor-Specific Technical Wishes and Questions

  • Users ask for:
    • Clearer, official instructions for setting up Snowflake on desktop (not only via Orbot on Android).
    • Easier GUI controls for selecting exit-node regions to bypass geo-blocking (Tor dev-side concern: must preserve fair load balancing between countries).
    • Native DNS tunneling support.
  • The article’s emphasis on “mimicry” (traffic that looks like normal HTTPS or legitimate SNI) is seen as the key 2025 shift: random-looking traffic is now itself a DPI signature.

Russia’s Current Network Censorship Situation

  • Recent reports describe:
    • Most off-the-shelf VPNs blocked.
    • Key Western vendor sites (Intel, Microsoft) self-blocking due to sanctions, complicating basic laptop setup.
    • Voice/video calls in most messengers and FaceTime blocked.
    • Outline VPN still functioning, but setting up servers is hard for residents lacking foreign payment options; iOS Outline app remains in the Russian store.

Show HN: Z80-μLM, a 'Conversational AI' That Fits in 40KB

Retro hardware & emulation

  • Many commenters love the “LLM on a Z80” angle and want:
    • A Z80 simulator bundled with the demos.
    • Ports to Game Boy, MSX, ZX Spectrum, Amstrad CPC, CP/M, and 48K Spectrum.
  • Existing CP/M/Z80 emulators were used to run the demos; they generally work, though one commenter struggled with the GUESS.COM game.
  • Discussion of Game Boy constraints:
    • 32KB ROM + 8KB RAM on original hardware.
    • Larger cartridges use switchable 16KB ROM banks; suggestion to keep each LM layer within a single bank to minimize switching.
    • Main expected bottleneck is user text input, not bank-switch overhead.
  • Some worry performance on 8‑bit systems with bank switching will be “gnarly,” but see it as a fun challenge.

Model design & technical limits

  • The model is ~150k parameters, heavily quantized, and more “micro-LM” than a typical “small” model.
  • Commenters clarify it’s essentially an MLP without attention, embedding the entire input and using a short trigram-based “context” (a toy sketch follows this list).
  • Questions raised:
    • Sensitivity of different layers/components to quantization; one reply reports first and last layers, and certain MLP blocks, degrade most under aggressive quantization.
    • Whether sparse weights were considered.
    • Token/s performance (no clear answer in thread).
  • Related exploration:
    • “Minimally viable LLM” that can have simple conversations.
    • Tiny models specialized for narrow tasks (e.g., regex generation).
    • Ideas like a “cognitive core” with minimal knowledge but good tool use.
    • RWKV and RNN-like architectures for efficient CPU inference.
    • Interest in what similar techniques could do on ESP32/RP2040 and smartphones.
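
A toy sketch of the architecture as described in the thread (TypeScript; all sizes, the hash, and the scale factors are invented, and this is not the project’s actual code): hashed byte trigrams feed a tiny int8-quantized MLP with no attention.

```typescript
// Toy sketch, not the project's code: hashed trigram features -> tiny
// int8-quantized MLP -> next-byte logits. All numbers are invented.
const VOCAB = 256, HIDDEN = 64, FEATS = 512;

const w1 = new Int8Array(FEATS * HIDDEN); // feature -> hidden weights
const w2 = new Int8Array(HIDDEN * VOCAB); // hidden -> logits weights
const s1 = 0.02, s2 = 0.02;               // per-matrix dequantization scales

// toy hash mapping a byte trigram to a feature index
function trigramFeature(a: number, b: number, c: number): number {
  return ((a * 31 + b) * 31 + c) % FEATS;
}

function forward(context: Uint8Array): Float32Array {
  const hidden = new Float32Array(HIDDEN);
  // "embed the entire input": accumulate every trigram's contribution
  for (let i = 0; i + 2 < context.length; i++) {
    const f = trigramFeature(context[i], context[i + 1], context[i + 2]);
    for (let h = 0; h < HIDDEN; h++) hidden[h] += w1[f * HIDDEN + h] * s1;
  }
  for (let h = 0; h < HIDDEN; h++) hidden[h] = Math.max(0, hidden[h]); // ReLU
  const logits = new Float32Array(VOCAB);
  for (let v = 0; v < VOCAB; v++)
    for (let h = 0; h < HIDDEN; h++)
      logits[v] += hidden[h] * w2[h * VOCAB + v] * s2;
  return logits; // argmax or sample to choose the next byte
}
```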

Security and hidden information

  • One commenter asks if a secret (e.g., passphrase) baked into the weights would be recoverable from the model.
  • Responses:
    • With a network this small, reverse engineering is likely feasible.
    • More generally, this ties into model interpretability and “backdoor” research; a cited paper claims some backdoors can be undetectable to bounded adversaries.

Historical what-ifs & human perception

  • Strong comparisons to ELIZA, PARRY, and simple bots:
    • Some think this would have felt magical on 80s/90s hardware; others argue ELIZA-style scripting might feel more impressive given the terseness of replies.
  • Commenters note:
    • Similar techniques might have been technically possible on 60s–90s machines, potentially changing the trajectory of AI in games and interfaces.
    • Constraints of specific old hardware (e.g., IBM 7094 word memory vs. a 40KB Z80 binary).
  • One thread emphasizes that part of the “magic” is human: people work hard to interpret sparse, noisy output as meaningful, so even crude bots can feel conversational.

Implications for devices and software bloat

  • Some see this as a “stress test” proving that:
    • Very limited hardware can host non-trivial conversational behavior.
    • Embedded and IoT devices will soon ship with onboard LLMs.
  • Others speculate we’re at a “home computer era” for LLMs: with enough RAM, local open models plus custom agents can rival proprietary systems.
  • A long subthread contrasts this tiny model with modern desktop apps:
    • One side argues it exposes waste in chat apps needing gigabytes of RAM.
    • The other side counters that apps like Slack/Teams provide far more features (integrations, app ecosystems, rich video/screen-share, etc.), and that hardware and resource budgets have grown, changing tradeoffs.
    • Ongoing disagreement about whether modern “bloat” is justified or just developer convenience.

General reactions & use cases

  • Overall tone is enthusiastic: lots of stars, “super cool,” “magical” and “WarGames” vibes.
  • People imagine:
    • NPCs in games each backed by a tiny model.
    • Fuzzy-input retro RPGs/adventures that accept natural-ish language.
    • A tiny on-device assistant with huge context via external lookup.
  • Plenty of humor: jokes about AGI “just around the corner,” Z80 shortages, RAM prices, and SCP-style stories about haunted 8‑bit AIs.

Huge Binaries

Binary sizes and where the bloat comes from

  • 25 GiB+ binaries are described, with people noting that most of that can be debug info rather than executable code.
  • C++ debug symbols are highlighted as a huge contributor: templates, type info, local variable locations, line mappings, and multiple specializations generate massive DWARF sections.
  • Some extreme cases: LLVM-dependent builds >30 GB, 25 GB stripped binaries, and games or applications embedding large assets or model weights inside the executable.

Static vs dynamic linking at large scale

  • Large shops favor static (or mostly static) binaries for:
    • Startup speed and reduced dynamic loader overhead (PLT/GOT, symbol interposition).
    • Easier profiling, crashdump analysis, and fleet-wide tooling that assumes a single monolithic binary.
    • Binary provenance and security guarantees: “what’s running is exactly what we built”.
  • Reasons given for avoiding dynamic libraries:
    • ABI instability and header-only templates make reusable .so’s hard in big C++ monorepos.
    • Different builds use different library versions, defeating sharing.
    • Historical ld.so performance issues with many shared objects.
    • Operational weirdness at scale (e.g., bit flips or corruption making a shared library “poisonous” for all processes on a node).
  • Skeptics point out that huge cloud providers successfully use dynamic linking and managed runtimes, questioning whether static linking is truly required for scale.

Debug info handling and tooling

  • Detached debug files, split DWARF (-gsplit-dwarf), and compressed debug sections are widely known and used, but tooling is seen as clumsy.
  • Several note that debuginfo sections don’t affect relocation distances or runtime memory (they’re non-allocated ELF sections).
  • Operational practice: ship stripped binaries, keep symbol files in a “symbol DB” for post-mortem debugging.

Code size, dead code, and optimizations

  • Many argue that hitting a 2 GiB .text limit signals missing dead-code elimination: use LTO, -ffunction-sections + --gc-sections, identical code folding, tree-shaking, or better partitioning.
  • Others counter that even with these, large monolithic C++ services can genuinely approach 2 GiB of code.

Code models, thunks, and relocation limits

  • Discussion dives into x86-64 code models and the 2 GiB relative jump/call limit (rel32 displacements are signed 32-bit).
  • Medium/large code models, thunks/trampolines, and post-link optimizers like BOLT are discussed as strategies, each with performance tradeoffs.
  • It’s noted that a proper range-extension thunk ABI for x86-64 would be preferable to pessimistically upgrading everything to the large code model.

John Simpson: 'I've reported on 40 wars but I've never seen a year like 2025'

Perceived Uniqueness of 2025

  • Several commenters say this period feels closer to a world war than anything since WW2: overlapping crises, nationalist governments in Europe, US “civil war” rhetoric, regional wars, and economic stagnation.
  • Others argue similar or greater risks existed before (Cold War, Cuban Missile Crisis, Gulf War), but memory and hindsight downplay them.

Role of Media and Visibility

  • One view: what’s “different” is not risk but visibility—real‑time global coverage of every skirmish, speech, and cyber incident.
  • Pushback: people were very aware during Vietnam and the Cuban Missile Crisis; mass media and drills made risk tangible then too.
  • New factor highlighted: unfiltered, on‑the‑ground footage from ordinary people, not just state or network gatekeepers.
  • Some argue media sensationalism and “performing for the cameras” can actually scale up conflicts that would otherwise stay local.

US, Europe, and Ukraine

  • Disagreement over whether the US would really fight for Europe under current leadership; some see deep Russian influence and isolationism.
  • Debate on whether the US “promised” to defend Ukraine: legalistic reading of the Budapest Memorandum (assurances, not guarantees) versus a moral/political understanding that disarmament created an obligation.
  • One camp says US and Europe have been timid, undermining deterrence and encouraging nuclear proliferation.
  • Others argue there are rational limits: avoiding nuclear war and economic ruin; some think Europe should take primary responsibility and use the war to deepen integration.

Assessing Russian Power

  • Some note Russia’s slow, attritional campaign against a smaller Ukraine as evidence they aren’t a conventional superpower.
  • Others warn against underestimation: Russia tolerates huge casualties, has advanced drone warfare skills, and still holds a large nuclear arsenal; even a single functioning weapon is catastrophic.
  • Ukraine is framed as a serious, well‑armed, prepared military power, not a weak proxy, which explains Russia’s difficulties.

Middle East Escalation Debate

  • One commenter uses Israel’s strikes across the region as an example of wider militarization; others object that “attacking all their neighbours” is an unfair exaggeration and rhetorically dangerous.
  • The argument devolves into whether precision of language matters when describing patterns of force projection, with concern that repeated one‑sided framings fuel polarization and violence.

Civilian Casualties and UN Figures

  • Discussion of UN‑reported civilian deaths in Ukraine: several say official numbers are known undercounts because occupied areas are essentially unmonitored.
  • Comparisons drawn with Gaza: some see asymmetrical standards, or political bias, in how deaths are counted and framed.
  • One line of argument: if Gaza‑style counting were applied to Ukraine (including military and unverified deaths), total casualties there would likely be far higher than reported.

Globalization vs Localism

  • One perspective: for many rural populations, which elite controls the capital matters little; Western responses are more about Western media than local needs.
  • Others counter with examples: rural Ukrainians under bombardment, persecuted rural Chinese communities, devastated agriculture in Gaza, and rising costs in rural Britain all show global politics directly harming local life.
  • Debate over whether a truly “local” economy could insulate people: critics argue modern infrastructure, healthcare, and tools are inseparable from national and global systems.

Autocracy, Democracy, and Public Opinion

  • Concern that “World War Three” may manifest more as creeping autocracy than open global battles.
  • Some note patterns: charismatic strongmen, short voter memory, and repeated susceptibility to new “cults of personality.”
  • Others suggest overreach in wars has historically destabilized Russian regimes and might eventually do so again.

Meta: Moderation and Bias

  • Some claim political threads like this get flagged by pro‑Russia users or those nominally “against politics.”
  • Noted asymmetry in how discussions on different conflicts (Ukraine vs Gaza) are tolerated or promoted on platforms.

CIA Star Gate Project: An Overview (1993) [pdf]

Reality of the program vs reality of psychic powers

  • Broad agreement that Star Gate and related projects were very real, funded for decades and even explicitly authorized by Congress.
  • Strong disagreement over whether this implies remote viewing or psychic powers exist; many stress that program existence ≠ phenomenon validity.

Why it was funded / strategic logic

  • Some see this as classic CIA/DoD weird-science: small budgets, huge potential upside, Cold War fear that Soviets/Chinese might gain an asymmetric edge.
  • Others call it money laundering, crackpot capture, or institutional susceptibility to conspiracy thinking and cognitive dissonance.
  • A few argue it’s rational for a defense chief to fund low-probability, high-payoff research, even if it sounds like “mumbo jumbo.”

Evidence, methodology, and the AIR/Star Gate reviews

  • One side cites official reviews suggesting remote viewing results were statistically above chance, though vague and not actionable in real intelligence work.
  • Critics counter that hits are expected over thousands of trials (birthday-paradox style; a worked probability follows this list), and that results don’t generalize or reproduce robustly.
  • The Jessica Utts vs. Ray Hyman panel is discussed: Utts seen by some as strong evidence; skeptics point to her parapsychology ties and alleged lack of independence.
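
A sketch of the critics’ arithmetic with invented numbers: even a tiny per-trial probability of a coincidental “striking hit” makes at least one hit near-certain across thousands of trials.

```latex
% Invented numbers for illustration only.
P(\text{at least one striking hit}) = 1 - (1 - p)^{N}
\qquad
p = 0.001,\ N = 5000:\quad 1 - 0.999^{5000} \approx 1 - e^{-5} \approx 0.993
```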

USS Stark incident and “prediction” document

  • A 1987 remote-viewing transcript resembling the USS Stark attack is seen by a few as “eerily similar” and suggestive of precognition.
  • Skeptics highlight vagueness, cherry-picking, and the possibility of backdating or disinformation; note there are many non-hits that get ignored.

Skepticism, standards of proof, and mundane explanations

  • Repeated emphasis on scientific method, extraordinary claims requiring strong evidence, and publication/selection bias.
  • Arguments that if remote viewing worked even slightly, militaries and hedge funds would exploit it systematically; they do not, which is taken as negative evidence.
  • Others push back that some knowledge might be inherently hard to industrialize, or deliberately obscured as “born secret.”

Views on the CIA, personalities, and broader hoaxes

  • CIA is portrayed both as dangerously credulous and as an easy political scapegoat.
  • Specific figures (e.g., remote-viewing promoters) are criticized as vague, non-falsifiable, or grifter-like.
  • Several comments connect Star Gate to contemporary pseudoscience, conspiracy theories, and the recurring human tendency toward elaborate hoaxes.

You can make up HTML tags

Browser behavior & standards

  • Unknown HTML tags are allowed; by default they behave like inline elements similar to <span>.
  • Tags without a dash become HTMLUnknownElement; names with a dash are treated as HTMLElement and reserved for custom elements, never future native tags.
  • This enables styling and selection via CSS and JS (querySelector, :not(:defined), etc.) even before any JavaScript-defined custom element is registered.
  • Older IE needed shims/HTML5 shiv to recognize new/custom tags, but modern browsers handle them natively.

Benefits cited

  • Readability: custom tags can reduce “div soup” and make nested structures easier to visually parse and search in editors.
  • Organization: some use them to represent components (<card>, <x-hero>), layout contexts, or domain concepts (<invoice>, <toolbar>).
  • Interop: when used as web components, a single custom element can be consumed from different frameworks (React, Vue, Angular, etc.).
  • Automatic hydration: custom elements’ lifecycle (connectedCallback) lets behavior reliably attach to elements added later to the DOM (see the sketch below).
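
A minimal sketch of that hydration point (the tag and its behavior are invented examples): once defined, connectedCallback runs for every instance entering the DOM, including ones inserted long after page load.

```typescript
// Sketch: lifecycle-based "automatic hydration" for a made-up <x-hero> tag.
class XHero extends HTMLElement {
  connectedCallback() {
    // runs on initial parse *and* for nodes inserted later
    this.setAttribute("role", "banner"); // ARIA must be added explicitly
    this.addEventListener("click", () => console.log("hero clicked"));
  }
}
// Names must contain a dash; dashless made-up tags remain
// HTMLUnknownElement and can never be upgraded to custom elements.
customElements.define("x-hero", XHero);

// Elements added later are hydrated by the browser automatically:
document.body.append(document.createElement("x-hero")); // callback fires here
```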

Concerns & drawbacks

  • Semantics: many argue classes on standard elements (<article>, <header>, <blockquote>, etc.) are preferable; custom tags add no built‑in meaning.
  • Accessibility: unless ARIA roles/attributes are added, custom tags act like generic divs/spans for screen readers and assistive tech.
  • Confusion: unfamiliar tags can blur the line between native and custom behavior; reliance on many custom names creates cognitive load.
  • Maintainability: overuse can hurt readability, especially deeply nested or over‑specific names; naming debates are a practical friction.

Web components, Lit, and frameworks

  • Several comments connect this to web components and libraries like Lit and Shoelace, praising interop but noting verbosity and Shadow DOM complexity.
  • Styling inside Shadow DOM, especially with utility frameworks like Tailwind, is seen as awkward; some switch components to light DOM or share styles via adopted stylesheets (see the sketch after this list).
  • There’s disagreement over whether SPAs and frameworks like React are still overused versus a resurgence of SSR plus light JS/custom elements.
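
The adopted-stylesheets approach mentioned above, sketched with invented names (assumes a browser supporting constructable stylesheets):

```ts
// One constructed stylesheet, shared by reference across shadow roots,
// so common styles are parsed once instead of duplicated per component.
const shared = new CSSStyleSheet();
shared.replaceSync(`
  :host { display: block; }
  .btn  { padding: 0.5rem 1rem; border-radius: 4px; }
`);

class FancyButton extends HTMLElement {
  constructor() {
    super();
    const root = this.attachShadow({ mode: "open" });
    root.adoptedStyleSheets = [shared]; // reuse, no per-instance copy
    root.innerHTML = `<button class="btn"><slot></slot></button>`;
  }
}
customElements.define("fancy-button", FancyButton);
```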

CSS techniques & use cases

  • Common patterns: giving undefined elements block layout via :where(:not(:defined)) { display: block }, selecting with :nth-child(... of .class), and branching on @media (scripting) for JS-on/JS-off behavior; these are collected in the sketch after this list.
  • Concrete uses include custom syntax highlighting tags, <yes-script> for JS-only content, pseudo-<blink> recreations, and design systems built entirely from custom elements.
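
Those patterns, gathered into one document-level constructed stylesheet (a sketch; <yes-script> and the class names follow the comments above, and selector support varies by browser):

```ts
const defaults = new CSSStyleSheet();
defaults.replaceSync(`
  /* Give not-yet-upgraded custom tags block layout, at zero specificity */
  :where(:not(:defined)) { display: block; }

  /* nth-child(... of <selector>): count only the matching siblings */
  li:nth-child(odd of .visible) { background: #eee; }

  /* <yes-script>: content shown only when JavaScript is enabled */
  yes-script { display: none; }
  @media (scripting: enabled) {
    yes-script { display: block; }
  }
`);
document.adoptedStyleSheets = [...document.adoptedStyleSheets, defaults];
```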

Self-hosting is being enshittified

Scope of “enshittification” and title mismatch

  • Several commenters say the article is mostly about DRAM pricing and feels scattered; they see little direct link between RAM prices and “enshittification” of self‑hosting.
  • Others argue specific projects (Plex, MinIO, Mattermost) may be getting worse, but “self‑hosting” as a whole is not.
  • Some note “enshittification” usually implies lock‑in and difficult switching; with self‑hosting you can often migrate to alternatives (e.g., Jellyfin, Zulip, Garage), so the term feels misapplied.

Open source, forks, and corporate control

  • One camp: if software is permissively licensed and the code is available, you can fork, pin a version, and are not really “enshittified.”
  • Counterpoint: forking and long‑term maintenance require sustained, coordinated effort; most users won’t do it, and corporations can out‑resource community forks.
  • This leads to a call for more “social safety nets” around important FOSS (foundations, community stewardship) rather than trusting vendor‑driven “open source.”

Plex and media self-hosting

  • Debate over what Plex changed: paywalls around remote/guest streaming, mobile apps, removal of features (e.g., auto photo upload, “watch together”), and heavy emphasis on their own streaming/social features.
  • Some long‑time users say their use case (LAN streaming of a personal library) is unchanged and not enshittified.
  • Others dislike dependence on Plex’s infrastructure for NAT traversal and object to data collection; they switch to Jellyfin + VPN/WireGuard, nginx, or Kodi.

Security, exposure, and control

  • One view: self‑hosting’s strength is choosing when to update; you can pin a “good” version.
  • Pushback: security fixes matter, especially for Internet‑exposed services; running old versions is risky, and you can’t realistically maintain your own fork.
  • Mitigations like VPNs, IP whitelists, and mTLS reduce pressure to update immediately.

Hardware requirements and DRAM prices

  • Many say the article overstates hardware needs; typical home/self‑host setups (few users, simple services) run fine on old NUCs, thin clients, or low‑RAM DDR3 boxes.
  • Some homelabbers do use 64–128GB and more complex stacks (Kubernetes, hypervisors, ZFS, ECC), but others call this unnecessary for most households.
  • DRAM price spikes and AI demand worry some, but others argue markets will adjust and used/refurb hardware remains cheap.

Homelab culture and future trends

  • Discussion around “homelab” ranging from a single box to quasi‑datacenters; some criticize pushing heavy tools (Proxmox, FreeNAS, k3s) on beginners.
  • A few see a broader squeeze on general‑purpose computing (TPM/Win11, cloud pushes) as more concerning than current self‑hosting software trends.
  • Others wish self‑hosting evolved toward more peer‑to‑peer, intermittently connected, identity‑based architectures instead of mimicking centralized SaaS.

CEOs are hugely expensive. Why not automate them? (2021)

Economic incentives & power dynamics

  • Some argue automating CEOs wouldn’t reduce rent extraction; it would just shift the surplus to AI vendors or whoever controls the system.
  • Principal–agent problems persist: whoever configures and oversees the AI would inherit the CEO’s leverage over shareholders and resources.
  • Others see CEO pay as rational: the role has huge leverage, very few people can do it well at scale, and even “obscene” compensation can be a bargain relative to impact.

Accountability, law, and liability

  • Legal and fiduciary duties (especially under Delaware law) are generally non-delegable and must be exercised by natural persons.
  • Commenters emphasize that accountability is the core blocker: who goes to jail or gets sued when AI-driven decisions cause harm or fraud?
  • Some jest about “rubber-stamp humans” or shells taking the fall, but others note this would violate numerous laws and be hard to sustain.

What CEOs actually do

  • One camp claims CEO work is mostly soft skills: setting tone and culture, aligning thousands of people, networking, salesmanship, and “people skills.”
  • Another camp is skeptical, seeing many CEOs as overpaid figureheads, PR machines, or cartel members justifying each other’s pay.
  • There is disagreement over how rare truly capable CEOs are, and how much luck, connections, and class background matter.

Feasibility of AI CEOs

  • Proponents see LLMs handling analytics-driven strategy, communications, PR, shareholder letters, and even much routine decision-making better than mediocre executives.
  • Skeptics stress current AI’s brittleness, lack of grounded judgment, inability to model complex human relationships, and dependence on clear reward functions that executive work lacks.
  • Some note similar “this job is too human to automate” arguments were made by artists, programmers, and others, and question why CEOs would be uniquely safe.

Networks, culture, and trust

  • Many highlight that CEO value often lies in personal networks, backchannel influence, and the ability to build or shift organizational culture—seen as hard to encode or replicate.
  • Trust from shareholders, customers, and employees is presented as inherently tied to a human figurehead, though a few challenge this assumption.

Power, inequality, and ideology

  • Several comments connect the debate to propaganda about meritocracy, extreme inequality, and why ordinary workers defend CEO pay.
  • A recurring theme: tech leaders eagerly discuss automating everyone else, but object when their own roles are questioned, revealing underlying class interests.

62 years in the making: NYC's newest water tunnel nears the finish line

Pop culture and public imagination

  • Multiple commenters connect the tunnel to its depiction in “Die Hard 3,” using it as a symbol of long-running mega-projects.
  • Jokes riff on future action/post‑apocalyptic movies featuring unfinished California HSR as the new “big infrastructure backdrop.”
  • Several people express excitement that this is the same tunnel they recall from the film.

Tours and public interest in megaprojects

  • Commenters hope for public tours before the final section is flooded and sealed for decades.
  • People reference other civil‑engineering tours, especially Tokyo’s Metropolitan Area Outer Underground Discharge Channel, and ask for similar lists for infrastructure tourism.

Why so deep? Engineering and geology

  • Users ask why parts of the tunnel are ~800 feet down and how depth affects drilling energy and rock pressure.
  • Responses:
    • Main reasons cited: maintaining gravity flow over the full 60 miles; staying in solid bedrock to avoid unstable soils; and avoiding conflicts with dense surface/near-surface infrastructure.
    • Some clarify the average depth is closer to ~400 feet and that local geology (bedrock vs clay/silt in Brooklyn/Queens) likely drove design.
    • Several note that drilling difficulty depends far more on rock type and water ingress than on absolute depth; “depth vs effort” has no simple formula and is highly site-specific.
    • Tunnel boring here is more “hammering” than classic drilling; depth per se doesn’t slow the machines much.

Purpose, lifespan, and redundancy

  • Beyond capacity, a key purpose is redundancy: Tunnel 3 enables shutting older tunnels for inspection and major repairs.
  • Commenters note targets of ~200–300 years of service life, comparing to Roman aqueducts and ancient tunnels that still function in some form.
  • Commenters speculate about how long an unmaintained tunnel would last, but no clear answer emerges.

Desalination vs gravity‑fed supply

  • One thread asks when desalination plus cheap clean energy might beat a 60‑mile gravity tunnel.
  • Most replies are skeptical (a rough annualized-cost sketch follows this list):
    • Tunnel: very high upfront capital but extremely low operating cost (gravity-fed, minimal energy, rare major maintenance).
    • Desalination: ongoing high operating and maintenance costs; economically more plausible where fresh water is scarce.
  • Some argue desalination is more relevant for the US West Coast; the East has abundant freshwater, though there’s mention of mismanagement and legacy water rights in California.
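
A back-of-the-envelope version of that cost argument, with entirely invented numbers (a sketch of the reasoning only, not figures from the thread or the article):

```ts
// Compare amortized annual cost: huge one-time capex with tiny opex
// (gravity tunnel) vs modest capex with large recurring opex (desalination).
const years = 100; // assumed planning horizon

const tunnel = { capex: 6_000_000_000, opexPerYear: 10_000_000 };  // assumed
const desal  = { capex: 1_000_000_000, opexPerYear: 400_000_000 }; // assumed: energy, membranes, upkeep

const annualized = (o: { capex: number; opexPerYear: number }) =>
  o.capex / years + o.opexPerYear;

console.log(annualized(tunnel)); // 70000000  (~$70M/yr)
console.log(annualized(desal));  // 410000000 (~$410M/yr)
// Under these assumptions the gravity-fed tunnel wins despite far higher
// capex, matching the view that desalination fits water-scarce regions.
```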

Timelines, cost, and corruption debates

  • Several participants see the ~62‑year timeline as evidence of political friction, funding pauses, and possibly broader US infrastructure dysfunction rather than technical limits.
  • Others question whether the duration is actually abnormal for such a massive urban project, pointing out:
    • Construction was intermittently funded and phased.
    • The tunnel must be extraordinarily reliable and long-lived; “patching after release” is difficult.
  • Debate over NYC corruption:
    • One side claims NYC infrastructure is uniquely expensive and graft‑ridden, citing investigative reporting on massively inflated labor and construction costs in transit projects.
    • Others counter that:
      • Corruption metrics aren’t clearly worse than in comparable US metros.
      • Federal prosecution data show a long‑term decline in corruption cases in Manhattan specifically.
    • There is no consensus on whether Tunnel 3’s schedule specifically reflects corruption vs. complex logistics and politics.

Comparisons to other projects and regions

  • Users compare the tunnel’s timeline and cost to European megaprojects: Alpine rail tunnels, London’s Elizabeth Line, and the Thames Tideway Tunnel, which had shorter build times once formally approved.
  • Some argue that drilling under a dense city is fundamentally harder than through mountains.
  • Others highlight a broader “Anglosphere cost disease,” with the US, UK, and Canada all paying more than countries like Spain or Japan.

Technology vs coordination problems

  • Several comments contrast rapid progress in software/AI with the difficulty of delivering physical infrastructure.
  • Views expressed include:
    • Political, legal, and coordination barriers are harder than the engineering itself.
    • It’s often easier to get high-tech projects (e.g., self-driving cars) moving than to secure consensus and permits for trains, tunnels, or subways.
  • Some suggest AI could help with data integration, planning, and design, but others warn current AI is prone to producing convincing but incorrect outputs.

Transit, cities, and social preferences

  • A tangent emerges about public transit vs cars:
    • One view: Americans broadly dislike trains and sharing space with strangers; outside the densest city, subways are “almost useless.”
    • Counterview: When high-quality transit exists (e.g., NYC subway), it is heavily used and seen as a major urban asset; sharing space is part of the appeal of dense city life.
    • Arguments reference population trends, migration, and differing cultural expectations, but no agreement is reached.