Hacker News, Distilled

AI-powered summaries of selected HN discussions.

Riding high in Germany on the world's oldest suspended railway

Why suspended rail is rare / technical tradeoffs

  • Commenters dispute the claim that suspended systems are inherently quieter; some report the Wuppertal line as “quite loud” due to sway and wheel–rail angles.
  • A major drawback is “lock‑in”: suspended stock can’t easily transition to ground-level or underground track, unlike elevated conventional rail.
  • Structures work mostly in tension, needing large steel box beams and complex joints, versus simpler concrete viaducts in compression.
  • Tight cornering is one reason to consider suspended or monorail systems, but normal rail already manages curves via wheel/axle geometry; benefits are debated.

Use cases, cost, and alternatives

  • Wuppertal’s steep, narrow valley and river corridor make an over‑river suspended line unusually suitable.
  • Elsewhere, commenters say a concrete viaduct with standard trains or a straddle-beam monorail is typically cheaper and more flexible.
  • Monorails can be cheaper than fully elevated conventional lines but more expensive than surface rail; junctions and tunnels are technically and financially painful.
  • Lack of standards and single‑vendor dependence for monorails/suspended systems raise long‑term maintenance and spare‑parts concerns.

Noise, safety, and maintenance

  • Safety record is seen as strong: only one major fatal accident in over a century, attributed to maintenance failures rather than design.
  • Discussion highlights the importance of night‑time maintenance, end‑of‑shift safety checks, and tracking “near misses” to prevent repeats.
  • The famous baby elephant fall is treated as a quirky historical footnote.

Aesthetics, shadows, and urban form

  • Strong split on appearance: some see the structure as grotesque over a river; others prefer it to burying waterways in culverts or freeways.
  • Debate over shadows: one side calls them a blight; others argue the slim profile is far less intrusive than full elevated roads or rail.
  • Old vs modern cityscapes trigger a broader argument about car-centric redesign, wartime destruction, architectural ornament, and costs.
  • Side thread compares historic horse‑manure problems with modern car pollution and noise.

History, longevity, and uniqueness

  • Commenters clarify that the Schwebebahn predates the unified city of Wuppertal; it was jointly planned by the earlier municipalities.
  • Its continuous use since 1901 is compared to other very old rail and bridge infrastructures that remain central today.
  • Suspended systems are extremely rare worldwide; currently only a handful operate, mostly in Germany, Japan, and China.

Cultural presence, tourism, and logistics

  • People share first encounters—from thinking it was a roller coaster to seeing it in comics and YouTube travel videos.
  • Some recommend combining visits with other rail/monorail trips in Japan or clubbing in Wuppertal.
  • One tangent describes difficulties using an Interrail pass on an international ICE, with expensive on‑train seat reservations.

I used AI-powered calorie counting apps, and they were even worse than expected

Scope & core reaction

  • Commenters generally agree the tested “AI calorie from photo” apps perform poorly and are oversold.
  • Many say they expected this: there simply isn’t enough visible information in a picture to estimate calories and macros reliably.

Why photo-based calorie estimation is fundamentally hard

  • Photos can’t reveal:
    • Cooking fats (oil, butter), sugar in sauces, or hidden ingredients.
    • Food variants (whole vs skim milk, lean vs fatty meat, low‑ vs high‑sugar yogurt, Coke vs Coke Zero).
  • Volume estimation is shaky: 2D images, inconsistent scale, and lack of depth data. Some note iPhones have depth/LiDAR, but say most apps either don’t use it or exaggerate their use of it.
  • Even in the best case (standard containers, homogeneous foods), commenters doubt accuracy is good enough for the ~200–300 kcal precision needed for meaningful weight change.
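
The "not enough visible information" point can be made concrete. A toy sketch (the kcal-per-ml figures are rough label values supplied here for illustration, not numbers from the thread or any app) of how two drinks that photograph identically diverge by more than the precision a weight goal requires:

```python
# Illustrative only: approximate label values, chosen to mirror the
# look-alike pairs above (Coke vs Coke Zero, whole vs skim milk).
KCAL_PER_ML = {
    "coke": 0.42,        # ~42 kcal per 100 ml
    "coke_zero": 0.0,
    "whole_milk": 0.64,  # ~64 kcal per 100 ml
    "skim_milk": 0.34,   # ~34 kcal per 100 ml
}

def kcal(drink: str, volume_ml: float) -> float:
    """Best-case photo pipeline: perfect volume, unknown variant."""
    return KCAL_PER_ML[drink] * volume_ml

gap_cola = kcal("coke", 330) - kcal("coke_zero", 330)        # ~139 kcal
gap_milk = kcal("whole_milk", 250) - kcal("skim_milk", 250)  # ~75 kcal
```

Even with the volume known exactly, variant ambiguity alone eats most of a ~200–300 kcal error budget.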

Manual and LLM-assisted tracking vs “AI camera”

  • Several people report success with:
    • Traditional apps (MyFitnessPal, Cronometer, Macrofactor, Lose It, FoodNoms).
    • Using ChatGPT directly with detailed text/voice descriptions, weights, and labels.
  • Consensus: AI is useful as an assistant (parsing text, reading labels, logging meals, suggesting macros), not as a magic one-shot from photos.
  • Some say the effort of manual logging is part of why calorie counting works: it increases awareness and introduces friction before eating.

Debate on accuracy and usefulness of calorie counting itself

  • One camp: calorie labels and expenditure estimates are noisy (±20% or more), digestion varies, and CICO is oversimplified.
  • Another camp: despite imprecision, systematic tracking clearly works for many; not tracking is worse, and it’s especially useful for education (e.g., learning oil, restaurant meals, and alcohol are calorie-dense).
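
The skeptics' arithmetic is easy to sketch (the 2000 kcal intake and 300 kcal target deficit are assumed round numbers, not figures from the thread):

```python
# Back-of-envelope: a ±20% measurement error on typical intake is wider
# than the daily deficit most slow-loss plans target.
daily_intake_kcal = 2000.0   # assumed typical intake
label_error = 0.20           # ±20%, per the skeptical camp
target_deficit_kcal = 300.0  # assumed modest daily deficit

uncertainty_kcal = daily_intake_kcal * label_error  # ±400 kcal
noise_exceeds_signal = uncertainty_kcal > target_deficit_kcal  # True
```

The pro-tracking camp's implicit reply is that much of this error is systematic rather than random, so consistent logging still reveals trends.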

Business models, ethics, and user impact

  • Strong suspicion that some apps are hype-driven “snake oil”:
    • Heavy marketing, questionable revenue claims, likely paid/fake reviews.
    • Paywalls, upsells, and poor UX suggest quick money grabs riding “AI” branding.
  • Concerns:
    • Users may blame “calorie counting doesn’t work” when the tool is wildly off.
    • Risk of disordered eating if apps systematically under/overestimate.
    • Data-mining potential from detailed food-photo logs.
  • Some note there are more careful apps (e.g., SnapCalorie, Macrofactor, text-first tools) that stress education, databases, and clear communication of estimates, but even these admit substantial limitations.

Ask HN: In 15 years, what will a gas station visit look like?

Overall Change vs Continuity

  • Many expect 2040 US gas stations to look much like today: same basic layout, still selling gasoline and especially diesel, with some EV chargers added.
  • Others think that’s too conservative, pointing to rapid EV adoption in some regions (Norway, China, California) and predicting a tipping point where many urban stations shut down or convert.
  • Several argue 15 years is too short for a full transition because cars last a long time and current EV market share is still modest; some think 40–50 years for ICE dominance to end.

Shift From “Fuel Stop” to “Service Hub”

  • Stations are already evolving into mini-marts and fast-food venues; commenters expect better food, more seating, and “destination” stops (Buc-ee’s–style) where charging time is spent eating, shopping, or working.
  • Longer EV dwell times could push stations toward lounges, offices, playgrounds, even “charging malls” and multi-story hubs, but there’s skepticism about whether charger turnover will support the business model.
  • Some foresee gas stations becoming primarily convenience/coffee shops with a few pumps or chargers; others think chargers will be more naturally integrated into supermarkets, big-box stores, malls, and airports.

EV Charging: Home vs Public, Centralized vs Distributed

  • One camp: most charging will happen at home or work; a large share of residents have single-family homes and can install chargers, making public “fuel stops” less central.
  • Another camp: many people lack safe/secure off-street parking; vandalism, cable theft, and urban density limit home charging, making public infrastructure crucial.
  • Ideas raised: VIN-based automated payments over the cable; battery swapping; democratized micro-stations at homes and small businesses; concerns about grid peak demand and complex load management.

Fossil Fuels, Trucks, and Alternatives

  • Broad agreement that gasoline demand shrinks but persists; diesel for heavy and medium trucks is seen by some as irreplaceable for decades, while others point to emerging electric freight and mining fleets as counterexamples.
  • Hydrogen gets mixed reviews: some see growth (e.g., in Japan), others consider it a dead end for personal transport due to cost, logistics, and safety.

Automation, Surveillance, and Payments

  • Expectation of more unattended or minimally staffed stations, heavy use of card/phone payments, potential biometric or membership systems, and reduced cash.
  • Several anticipate more cameras, facial recognition, hyper-targeted ads at the pump, and more product tie-ins (vapes, influencer goods, bubble tea).
  • Toilets are widely acknowledged as the one constant.

Self-hosted x86 back end is now default in debug mode

Debug backend, binary size, and debuggability

  • The new self‑hosted x86_64 backend is used only for -ODebug; release modes still use LLVM.
  • Debug binaries are huge (e.g. “hello world” ~9.3 MB), mostly due to debug info; stripping shrinks them dramatically.
  • Users compare -ODebug vs -OReleaseSmall sizes and ask if self‑hosted backends will ever match LLVM’s size/quality in release; answer: it’s a long‑term, not near‑term, goal.
  • Zig‑aware LLDB fork plus the new backend is reported to significantly improve the debugging experience.

Compile times, bootstrap, and build workflow

  • The self‑hosted backend plus threading cuts Zig’s self‑compile time from ~75s to ~20s, with a branch at ~15s; a minimal subset builds in ~9s.
  • Significant time is attributed to stdlib features brought in by the package manager (HTTP, TLS, compression) and comptime‑heavy formatting.
  • Comparisons with D, Go, Turbo Pascal, tinycc, and C++ modules: some see Zig as rediscovering “fast compilers” while others note ecosystem/tooling complexity.
  • For contributors, advised workflow is to download a prebuilt Zig and run zig build -Dno-lib (optionally with -Ddev=...) instead of full bootstrap, which is slow due to WASM→C translation plus LLVM.

Comptime performance and metaprogramming

  • comptime is widely praised but also criticized as slow (reports of JSON parsing at compile time being 20x slower than Python).
  • Core devs say improving comptime means large semantic-analysis refactors; it’s planned but competing with other priorities.
  • Heavy use of std.fmt and comptime formatting currently dominates some compile-time cost; typical projects that use comptime “like a spice” are less affected.
  • Some argue pushing work to compile time is worth it; others recommend moving large tasks (e.g. big JSON) into build.zig instead.

Backends, Legalize pass, and non-LLVM ambitions

  • AIR is a high‑level IR; backends lower AIR → MIR → machine code.
  • The new Legalize pass rewrites unsupported high‑level AIR ops into simpler sequences a backend can handle (e.g. wide integer emulation), making new backends easier at the cost of some optimality.
  • This is expected to accelerate an upcoming AArch64 backend.
  • There’s philosophical support for non‑LLVM backends to improve iteration time and reduce dependence, but also recognition that LLVM enabled many modern languages and consoles.
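
The wide-integer case can be sketched in a few lines. This toy (Python standing in for pseudocode; the names are invented and bear no relation to Zig's actual AIR/MIR representation) shows the shape of rewrite Legalize enables: a 128-bit add a backend cannot emit natively becomes two 64-bit adds plus carry handling:

```python
# Toy "legalization" of a wide-integer op: emulate a u128 add using only
# 64-bit-wide operations, as a backend without native u128 support might.
MASK64 = (1 << 64) - 1

def add_u128_via_u64(a: int, b: int) -> int:
    lo = (a & MASK64) + (b & MASK64)
    carry = lo >> 64                               # 0 or 1
    hi = ((a >> 64) + (b >> 64) + carry) & MASK64  # wraps mod 2**64
    return (hi << 64) | (lo & MASK64)

assert add_u128_via_u64(MASK64, 1) == 1 << 64      # carry propagates
assert add_u128_via_u64(2**127, 2**127) == 0       # wraps mod 2**128
```

This also illustrates the optimality cost mentioned above: a generic expansion is correct for every backend but may miss machine-specific add-with-carry instructions.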

Async/await, hot reloading, and game dev

  • Many are excited about hot code swapping and see it as a big win for game development; others note similar capabilities in MSVC/Live++ or C# hot reload.
  • Some devs already use Zig in shipped games; others still prefer C#/Rust for better existing hot‑reload and async ecosystems.
  • Async/await was removed; plan discussed is to reintroduce stackless coroutines as lower‑level primitives powering std.Io, giving flexibility between stackless/stackful designs.
  • Timeline is explicitly uncertain; some insist robust async is critical pre‑1.0, others want Zig to stay closer to C and process/threads.

Safety, segfaults, and language comparisons

  • A user is alarmed by many GitHub issues mentioning “segfault”; others argue sheer issue count is a poor maturity metric and common to all large compilers.
  • Consensus: Zig is not memory‑safe by design; it aims to make unsafe operations explicit and simple, not to prevent them, so user code can absolutely segfault.
  • Debate veers into C’s “legacy cruft,” memory‑safety history, hardware tagging, and formal methods; some argue C can be written safely with abstractions, others say practice and incentives suggest otherwise.
  • Compared to C, Zig is praised for: explicit optionals and errors, tagged unions/enums with exhaustive switches, slices with lengths, defer/errdefer, integrated build system, allocator‑driven APIs, and straightforward C interop.

Ecosystem, funding, and readiness

  • Zig’s foundation is said to spend most revenue directly paying contributors; compared favorably to some other foundations.
  • No concrete 1.0 date; many important tasks (incremental compilation, backends, async, comptime speed) compete for attention.
  • Recommendation from commenters: pin a stable Zig release for serious projects rather than track nightly, and treat Zig as evolving but not yet “finished.”

Building supercomputers for autocrats probably isn't good for democracy

AI as a Tool for Authoritarian Control

  • Many comments argue the main AI danger isn’t “rogue AGI” but that regimes can achieve near‑total social control by:
    • Correlating all online writings, stylometry, and other signals to infer identity, beliefs, and early-stage dissent.
    • Fusing existing data streams (purchases, communications metadata, social graphs, location, cameras, drones, phones) into a now‑processable surveillance panopticon.
  • Some say stylometry at mass scale is technically limited; others counter that:
    • Practical demos (e.g., unmasking alts on forums) already work.
    • Authoritarians don’t need high accuracy—only plausible signals and a chilling effect.
  • LLM-backed “always-on” household devices are seen as making Orwell’s telescreen finally feasible: constantly observing, inferring preferences and political leanings before people consciously form them.

Propaganda, Misinformation, and “Flooding the Zone”

  • One view: LLMs’ primary near-term use is as force-multipliers for messaging—cheap, tailored, high-volume BS that can drown out genuine discourse.
  • Examples show LLMs easily generating rhetorically strong but unverified arguments on any side of an issue, suitable for automated campaigns.
  • Others respond that:
    • The internet is already saturated with low-quality content; attention is maxed out.
    • People consume information via identity groups and curated channels; more junk may have diminishing marginal impact.

States, Billionaires, and Power Structures

  • Debate over whether future power lies more with nation-states or ultra-wealthy individuals:
    • One side: states retain decisive advantages (armies, legal control over finance, heavy weapons); billionaires are fragile without state infrastructure.
    • Other side: cheaper drones and scalable violence narrow the gap; “tech feudalism” and private fiefs are plausible.
  • Some argue it’s more accurate to talk about generic “power structures” than “nations” per se.

OpenAI–UAE Deal and Moral Responsibility

  • Key fault line: Should companies simply follow government sanctions lists, or independently refuse to empower autocrats?
    • One camp: if the US hasn’t sanctioned UAE, it’s legitimate business; private firms lack mandate or knowledge to be global moral arbiters.
    • Opposing camp: “not illegal” ≠ “ethical”; knowingly strengthening repressive regimes is itself wrong, regardless of State Department policy.
  • Realpolitik argument: better that US-aligned Gulf monarchies get advanced AI than China; critics reply this is “arming” deeply illiberal regimes with powerful control tech.

Democracies vs Autocracies and Hypocrisy

  • Several comments challenge the idea that “democracies good, autocracies bad” cleanly maps to real-world behavior:
    • Point to mass violence, invasions, and large prison systems in self-described democracies.
    • Note Western tech firms have long sold surveillance and computing tools to repressive states (IBM in Nazi Germany, Cisco/Oracle, Palantir, etc.).
  • Nonetheless, many still hold that giving more AI capacity to overtly authoritarian governments predictably worsens repression and is bad for democracy everywhere.

Why Android can't use CDC Ethernet (2023)

Bug status and Android versioning

  • Thread notes that the original issue (EthernetTracker only matching eth\d) was fixed in 2023 by broadening the regex, then reverted due to tethering regressions, then re-landed for newer Android releases (V+ / Android 15+).
  • Similar change/rollback pattern is seen in LineageOS; fix exists but is gated to newer versions and needs user testing.
  • Android’s internal “T/U/V” names are just version letters (13/14/15), a legacy of the old dessert naming scheme; some see this as confusing but not intentional obfuscation.
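
The original matching problem is easy to reproduce with a few lines of regex (the broadened pattern below is a stand-in for illustration, not AOSP's exact config_ethernet_iface_regex value):

```python
import re

old_pattern = r"eth\d"         # historical EthernetTracker behavior
new_pattern = r"(eth|usb)\d"   # stand-in for the broadened fix

assert re.fullmatch(old_pattern, "eth0")      # matched: managed as Ethernet
assert not re.fullmatch(old_pattern, "usb0")  # CDC dongle: ignored
assert re.fullmatch(new_pattern, "usb0")      # broadened: managed
```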

Interface naming, MAC tricks, and tethering

  • Core problem: Android’s Ethernet code historically keyed off interface names (ethX vs usbX) rather than capabilities.
  • Some devices use usbX for tethering; treating them as client Ethernet breaks those setups (phone tries to be both router and client).
  • A workaround discussed: flipping the “global” bit in the MAC address to make the kernel name a CDC Ethernet interface ethX instead of usbX, which users report works on several Android versions and devices.
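
The bit manipulation behind that workaround can be sketched briefly (bit 0x02 of the first octet is the locally-administered flag; kernel naming behavior and how a given adapter lets you change its address vary by device, so treat this as illustrative):

```python
def make_globally_administered(mac: str) -> str:
    """Clear the locally-administered bit (0x02 of the first octet)."""
    octets = [int(x, 16) for x in mac.split(":")]
    octets[0] &= ~0x02  # address now looks globally unique
    return ":".join(f"{o:02x}" for o in octets)

# A locally-administered address (usbX naming) becomes global (ethX naming):
assert make_globally_administered("02:1a:11:aa:bb:cc") == "00:1a:11:aa:bb:cc"
```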

Real-world device behavior and chipset quirks

  • Many commenters report USB Ethernet “just works” on Android—usually with ASIX or Realtek dongles that use supported vendor drivers or NCM, not CDC ECM.
  • Others see failures on specific boards or OEM builds (e.g., Samsung Android 15), showing that behavior can differ by vendor, firmware, and chipset.
  • iOS: article’s “no CDC Ethernet” claim is refined—iOS doesn’t support CDC ECM, but does ship drivers for common USB Ethernet chipsets and works with some Realtek NCM devices.

Networking limitations on mobile OSes

  • Several complain that Android (and iOS) can’t easily use multiple networks simultaneously (e.g., Wi‑Fi without Internet plus cellular), or aggressively disconnect from “no Internet” Wi‑Fi.
  • This makes debugging networks or using local-only devices (dashcams, embedded gear) painful and often forces app-specific workarounds.

Rooting vs security and user control

  • One camp sees rooting as essential to fix problems like this (e.g., changing config_ethernet_iface_regex).
  • Others argue root massively expands attack surface and undermines Android’s permission model.
  • Ongoing back-and-forth about whether restricting root is necessary safety or “corporate FUD,” and how to balance power-user needs with mainstream security.

USB serial and other oddities

  • Android kernels often include USB serial drivers, but user apps can’t access /dev/ttyACM*; instead, they must reimplement USB-serial protocols in userspace via raw USB.
  • Commenters recall other historically hacky Android USB decisions and note that some adapters also need firmware the Android UI can’t diagnose.

We’re secretly winning the war on cancer

Personal experiences with treatment

  • Multiple commenters describe dramatic responses to modern therapies: rapid tumor shrinkage within hours of targeted infusions, long remissions from CAR‑T and clinical trials, and relatively tolerable chemo regimens like R‑CHOP compared with older treatments.
  • MD Anderson is repeatedly cited as an example of cutting‑edge care, particularly for rare blood cancers and lymphomas.
  • Others share losses (parents, spouses, young relatives with glioblastoma or aggressive breast/ovarian cancers), stressing that “winning” doesn’t match their lived experience.

Therapeutic advances

  • Strong enthusiasm for immunotherapy: checkpoint inhibitors (e.g., PD‑1 drugs), CAR‑T, and related approaches are seen as genuine breakthroughs, especially for certain blood cancers and multiple myeloma.
  • Novel modalities like tumor‑treating fields (electric‑field helmets for glioblastoma) and ferroptosis‑based strategies are highlighted as promising, with evidence of improved survival in select settings.
  • Commenters note better molecular profiling of tumors and more precise subtyping as quiet but major progress.

Limits and hard cases

  • Several argue most patients still get “slash, burn, poison” (surgery, radiation, classic chemo), with only incremental gains for many solid tumors; prostate cancer is cited as an area with mostly marginal improvements.
  • Glioblastoma is repeatedly mentioned as an example where progress is slow and outcomes remain grim.
  • Pain, toxicity, and chronicity (e.g., lifelong oral drugs for some blood cancers) remain major burdens.

Prevention, environment, and risk factors

  • Many see the big mortality drop since ~1990 as largely driven by reduced smoking, regulation of carcinogens, and broader environmental/occupational protections.
  • Debate over “Cancer Alley” and refinery regions: some claim excess cancer is just socioeconomic confounding; others argue pollution and poverty are causally entangled and can’t be “adjusted away.”
  • Rising early‑onset colorectal cancer concerns drive strong calls for colonoscopy or stool tests starting by 40–45; obesity, diet, plastics, PFAS, and broader lifestyle changes are all proposed as contributors, with no consensus.

Screening and diagnostics

  • Commenters emphasize that early detection is as important as better drugs; colonoscopy can prevent cancer via polyp removal, while FOBT/FIT are low‑risk, accessible options.
  • CT/MRI/PET access is highly variable: some report same‑day imaging, others weeks to months of delays, often due to insurance pre‑approval rather than machine capacity.

Access, cost, and health systems

  • Many stories hinge on excellent employer insurance covering six‑figure drugs; chronic therapies can list at ~$180k/year.
  • High costs motivate talk of emigrating to countries with public healthcare, but others note such systems may restrict immigration of people with expensive conditions.
  • US insurance bureaucracy (pre‑auths, denials, “do you really want this scan?” letters) is widely criticized as delaying care and adding stress.
  • European commenters note that advanced immunotherapies are often provided at no out‑of‑pocket cost within public systems.

Politics and research climate

  • One subthread argues that specific US administrations have harmed cancer progress via cuts to NIH/FDA‑related efforts, hostility to higher education and immigration, and anti‑vaccine or anti‑mRNA rhetoric.
  • Others are fatigued by politicization but concede that funding and regulatory policy directly affect cancer research and access.

Alternative and fringe treatments

  • A few repeatedly promote fenbendazole/ivermectin as “overlooked miracle” cancer drugs, linking to papers and non‑mainstream sites.
  • Several push back hard, calling associated rhetoric conspiratorial and stressing that credible treatments should rest on strong, depoliticized clinical data; overall efficacy of these agents in humans remains unclear in the discussion.

Other diseases and comparisons

  • Some contrast cancer progress with slower movement on type 1 diabetes, though others note big gains in T1D life expectancy and emerging regenerative approaches.
  • One commenter points out that as deaths from other causes fall and populations age, absolute cancer cases can rise even while age‑adjusted mortality drops.

Role of AI and data

  • Machine learning (more than “gen AI” specifically) is cited as already useful in imaging—finding early cancers radiologists miss—and seen as crucial for sifting massive research datasets.

Are “we” winning?

  • Optimistic view: age‑adjusted death rates are falling sharply despite aging populations; specific cancers once nearly hopeless now have strong 5‑year survival; thousands of patients each month benefit from targeted and immunotherapies.
  • Skeptical view: incidence remains high or rising; many common cancers have only modest survival gains; access is uneven and often tied to wealth or geography; treatments remain brutal for many.
  • Several conclude that progress is real and accelerating, especially in some subtypes, but “winning the war on cancer” is premature and unevenly distributed.

What happens when people don't understand how AI works

Perceptions of AI Progress and Future Decline

  • Some see little practical coding difference between recent Claude versions and doubt rapid future gains; others report steady, noticeable improvements but far from perfection.
  • The cliché “the AI you use today is the worst you’ll ever use” is called vacuous; several argue LLM capability curves may already be flattening.
  • Many expect quality of service to degrade even if raw capability grows: enshittification via ads, paywalling, political/monetization bias, and lock-in, compared to Google Search and the wider web’s decline.
  • A minority believe current LLMs may already be the best we get in practice, before business incentives corrupt them.

Psychological and Spiritual Misuse

  • The “ChatGPT-induced psychosis” phenomenon alarms commenters: vulnerable, lonely, or psychotic users treating LLMs as gods, spiritual guides, or self-aware beings.
  • Others say psychosis will always latch onto something (religion, social media, conspiracies); LLMs are just a new “force multiplier.”
  • Some argue people have always worshiped man-made abstractions (state, leaders, texts); AI is just the latest idol.

LLMs as Tools vs Oracles

  • One camp uses LLMs as better search/summarization/coding tools: quick terminology lookup, domain overviews, SQLAlchemy snippets, law-like rules, etc., with external verification.
  • Another warns that many non-technical users assume factuality and don’t know about hallucinations, effectively treating chatbots as oracles.
  • This fuels a debate over calling LLMs “divinatory instruments”: critics say the analogy is overbroad and obscures differences from ordinary information retrieval; supporters say it captures how many people experience the interface.

What Counts as “Thinking” or “Understanding”?

  • Long arguments revolve around whether next-token prediction can be called “thinking.”
  • Some stress LLMs lack grounding, embodiment, goals, and rich world models; they see outputs as statistically fluent but ontologically empty.
  • Others lean functionalist: if behavior is indistinguishable from human answers in many domains (Turing-style), insisting it’s “not real understanding” is seen as semantics or human exceptionalism.
  • Related disputes touch on consciousness, free will, animal cognition, and whether all symbolic communication involves projection and interpretation.

How LLMs Actually Work

  • Several note that “trained on the internet” is incomplete: modern chat models crucially depend on supervised fine-tuning and RLHF from vast global workforces of labelers rating style, safety, and “emotional intelligence.”
  • This reframes chatbot niceness and apparent empathy as distilled human labor, not emergent soul.
  • Others push back that, despite human shaping, transformers still rely on large-scale pattern learning, not classical symbolic reasoning; there’s disagreement about how far beyond “pattern matching” current systems really go.

Impact on Work and Institutions

  • Many describe LLMs as force multipliers for already-competent people, not replacements for missing expertise.
  • There’s concern they’ll be misused by clueless management as a substitute for skilled staff, leading to layoffs, brittle systems, and an “idiocy multiplier.”
  • Skeptics emphasize that organizations still need deep human understanding; AI cannot rescue fundamentally bad leadership.

Language, Hype, and Public Understanding

  • Repeated concern that anthropomorphic marketing terms (“AI,” “reasoning,” “hallucination,” “agents,” “friends”) mislead the public and investors about capabilities and risks.
  • Some urge more precise language (LLM, pattern model, summarizer) and better education so people treat outputs as provisional, checkable suggestions rather than truths or revelations.

The wire that transforms much of Manhattan into one big, symbolic home (2017)

What an eruv is and why it exists

  • Commenters clarify that, in Jewish law, “work” on the Sabbath includes carrying objects between domains.
  • Urban space often falls into a gray “in-between” category, neither clearly public nor private.
  • An eruv reclassifies this ambiguous space as “private domain,” allowing carrying items like keys, canes, and babies without violating Sabbath rules.
  • Some frame it less as “tricking God” and more as a codified legal mechanism long debated in the Talmud, with substantial technical detail and constraints (e.g., limits on traffic, continuity).

Loophole vs. law: is this ‘cheating’?

  • Many non‑religious commenters see the wire as a loophole or “hack,” equating it to game cheats or semantic tricks.
  • Others argue that in Jewish thought, law is intentionally textual and legalistic: if a loophole exists, an omniscient God meant it to be there.
  • Some Jews reportedly see finding such workarounds as part of the religious “game,” comparable to common-law reasoning.
  • Critics counter that this ignores the “spirit of the law” and resembles Pharisaical legalism denounced in Christian scripture.

Who benefits: ‘vulnerable people’ and gender

  • Several comments explain the “vulnerable” as those especially constrained without an eruv: caregivers with young children, the elderly, disabled people who need mobility aids, and strict adherents who would otherwise be homebound.
  • One thread suggests this mainly eases burdens on women in conservative communities who handle childcare.

Maps, geography, and implementation

  • Commenters share updated Manhattan eruv maps and note that coverage has expanded over time.
  • Discussion touches on why areas like Times Square or Hell’s Kitchen were once excluded: high traffic, construction, or practical routing constraints.
  • Some report difficulty visually locating wires where maps claim they exist and mention rerouting during construction.

Technology, safety, and secular conflicts

  • Examples of Sabbath workarounds: automatic elevators, traffic lights that change without button presses, timers for electrical devices, “Sabbath goy” arrangements.
  • A contentious sidebar describes alleged opposition to installing automated fire safety systems in some buildings, with others insisting Jewish law allows—and even mandates—violating Sabbath rules to save life (pikuach nefesh).
  • There is light technical discussion of how one might electronically monitor eruv continuity, and whether energizing the wire would raise regulatory or utility concerns.

Comparisons to other religions and law

  • Numerous parallels are drawn to:
    • Christian canon law, fasting, and Lenten meat classifications (e.g., beaver or alligator as “fish”).
    • Islamic jurisprudence and debates over literal vs. “spirit of the law” readings.
    • Eastern Orthodox “oikonomia” and Catholic “dispensations” as mechanisms for relaxing strict rules.
  • Some commenters liken eruv reasoning to secular legal interpretation and common-law evolution; others warn that excessive “creative reinterpretation” can erode trust in any legal system.

Theology, absurdity, and meta‑debate

  • Threads debate whether any religious rules come from God at all, or are purely human constructs responding to ancient conditions (e.g., food safety).
  • Skeptics mock the idea of outwitting an omniscient deity; defenders reply that God cannot be “fooled,” only obeyed through the law as written.
  • Hypothetical extensions (a wire around the whole planet, or tiny loops in a drawer) are used both to satirize the concept and to probe its logical limits.

Administering immunotherapy in the morning seems to matter. Why?

Study design, randomization, and confounders

  • Initial skepticism focused on the idea that “morning patients are healthier,” but others pointed out this was a randomized clinical trial where infusion times were assigned, not self-scheduled.
  • Several commenters note that randomization addresses patient-side confounders (work status, support systems) but not all clinic-side ones (e.g., nurse alertness, staffing patterns).
  • There’s confusion and debate over how randomization “controls for confounders,” and recognition that some factors (shift quality, drug handling, clinic workflow) may not be fully randomized.
  • Some are uneasy about the very large reported effect size (hazard ratio ~0.45), calling it “implausibly high” and suggesting possibilities like data dredging, protocol drift, or hidden operational differences.
  • Others criticize the trial for being single-site, having evolving design/criteria, and lacking detailed reporting (e.g., CONSORT diagram), and see it as important but in need of replication.
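For intuition on what a hazard ratio of ~0.45 means, here is a crude incidence-rate comparison with invented counts. This is a sketch only: the numbers are not from the study, and the trial's real analysis would estimate the hazard ratio with a Cox proportional-hazards model, not a simple rate ratio.

```python
# Crude intuition for HR ~0.45 using an incidence-rate ratio.
# All counts below are invented for illustration; a real trial would
# estimate the hazard ratio with a Cox proportional-hazards model.

def event_rate(events: int, person_years: float) -> float:
    """Events per person-year of follow-up."""
    return events / person_years

morning = event_rate(events=27, person_years=120.0)    # 0.225 events/py
afternoon = event_rate(events=60, person_years=120.0)  # 0.500 events/py

hazard_ratio = morning / afternoon
print(f"approx HR = {hazard_ratio:.2f}")  # 0.45: ~55% lower event rate
```

An effect of that size from a timing change alone is exactly what some commenters find hard to believe without replication.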

Circadian biology and possible mechanisms

  • Many find a circadian explanation intuitively plausible: immune activity, cell replication, hormone levels, metabolism, and glucose dynamics all vary strongly by time of day.
  • Fasting overnight is raised as a factor (autophagy, lower glucose, different tissue environments), though commenters differ on how strong this effect is versus everyday eating patterns.
  • Some note existing “chronotherapy” work and even a Nobel-winning circadian clock literature as implicit support that timing can matter for treatment response.

Anecdotes on timing of treatment

  • Multiple personal stories describe major differences in side effects or outcomes when shifting medication from morning to afternoon/evening (chemo adjuncts, folic acid, allergy immunotherapy).
  • One striking case from the 1990s: shifting a DNA-analog chemo to evening based on progenitor-cell circadian data appeared to preserve immune function, but the protocol was not adopted because no trial existed.
  • Others mention oncology practices already preferring earlier-in-the-day infusions for some cancers (e.g., metastatic melanoma cutoffs).

Evidence-based medicine, bureaucracy, and ethics

  • Strong tension appears between:
    • “Don’t change treatment on N=1 hunches; that’s what trials and IRBs are for,” and
    • “The system is so slow and conservative that simple, low-risk ideas like time-of-day tweaks languish for decades.”
  • Commenters argue over whether small, low-risk pilot changes (e.g., shifting a few patients’ infusion times) should require heavy IRB oversight.
  • Some see current safeguards as essential against historical abuses; others see them as overgrown bureaucracy that blocks meaningful, low-cost optimization.

Generalization and open questions

  • People wonder whether similar timing effects apply to allergy immunotherapy, immunosuppressants for autoimmune disease, or other immune-modulating drugs; this remains unclear in the thread.
  • Practical questions arise about how to prioritize scarce morning slots, and whether circadian-shifted environments (e.g., artificial “morning” at noon) could substitute for actual morning dosing.


BYD's Five-Minute Charging Puts China in the Lead for EVs

BYD vs. Tesla and Perceived Quality

  • Some see BYD’s 5‑minute charging as “real innovation” compared to what they view as Tesla’s hype and poor build quality.
  • Others counter that Tesla’s early Superchargers were innovative for their time and argue BYD’s lower prices reflect worse fit/finish, software, and service, especially outside China.
  • Several commenters report recent BYDs (Seal, Sea Lion, Tang, Denza) as surprisingly high quality, saying older low-end models had a bad reputation but newer mid/high-end cars have improved.
  • UX/software is a recurring complaint for BYD (buggy Android skin), while Tesla is criticized for missing basics like a HUD and having low physical-build quality.

Tariffs, Industry Failure, and National Security

  • One camp argues Western auto industries have “failed” and tariffs just delay the inevitable while depriving consumers of cheap, good EVs.
  • Another defends protectionism: China used heavy subsidies and forced tech transfer; letting Chinese EVs flood markets could destroy remaining local manufacturing capacity, seen as essential for wartime industrial flexibility.
  • There is concern about mass unemployment of auto workers and lack of credible transition plans, hence political pressure for tariffs.
  • Ethical concerns about alleged forced labor in Chinese supply chains are raised as another reason for restrictions.

Public Transit, Zoning, and “Self-Sufficiency”

  • Some say if domestic car industries decline, national self‑sufficiency should come from strong public transit, not domestic cars.
  • Others argue public transit is the “opposite” of self‑sufficiency because it can fail under stress (strikes, evacuations, outages), while critics respond that car‑based systems also fail under stress (hurricane evacuations, congestion).
  • A long tangent debates US zoning: one side says low-density zoning blocks viable transit; the other says many people don’t want dense living and rural/suburban areas are cheaper and essential for food/production.

Battery Longevity and Fast-Charging Tech

  • Readers ask whether 1 MW charging will degrade batteries faster; replies note:
    • The article claims a new cooling system and chemistry improve high‑temperature lifespan.
    • General rule of thumb: batteries dislike extremes of temperature, state-of-charge, and very high C‑rates, but details are unclear without long‑term data.
    • Some speculate it could be marketing if cycle life tradeoffs aren’t disclosed.

Grid Load, Buffer Batteries, and Practicality of 1 MW

  • Skeptics argue 1.3 MW per stall can’t scale: replicating today’s fast-charger counts at that power would require enormous generation and grid upgrades.
  • Others point out total energy per vehicle doesn’t change—only peak power—so local buffering (battery storage at stations, “community batteries”) and load averaging can flatten demand.
  • Some note existing fast chargers often already use behind-the-scenes batteries or even diesel generators, effectively making “diesel-electric cars.”
  • Debate over efficiency: one detailed reply suggests diesel‑generator‑fed EV charging can be less efficient overall than directly driving diesel cars, once conversion losses are included.
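The peak-versus-average point above is easy to check with back-of-envelope numbers. All figures in this sketch except the 1.3 MW peak are assumptions chosen for illustration:

```python
# Back-of-envelope: energy per car is fixed, so a station battery can buffer
# short megawatt bursts against a much smaller grid connection.
# All numbers except the 1.3 MW peak are assumptions for illustration.

PEAK_KW = 1300        # per-stall peak power cited in the thread
KWH_PER_CAR = 80      # assumed energy delivered per charging session
CARS_PER_HOUR = 6     # assumed arrivals at one stall

avg_grid_kw = KWH_PER_CAR * CARS_PER_HOUR        # 480 kW steady draw
burst_minutes = KWH_PER_CAR / PEAK_KW * 60       # ~3.7 min at full power

# The local buffer battery supplies the difference during each burst.
buffer_kwh_per_car = (PEAK_KW - avg_grid_kw) * (burst_minutes / 60)

print(f"average grid draw: {avg_grid_kw} kW")
print(f"buffer drained per car: {buffer_kwh_per_car:.0f} kWh")
```

Under these assumptions a stall that bursts to 1.3 MW needs only a ~480 kW feed plus roughly 50 kWh of usable buffer per back-to-back session, which is the “flatten the demand” argument in miniature.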

How Transformative is 5‑Minute Charging?

  • Some say it’s not “groundbreaking,” since it’s mostly about pushing more power, still slower than filling a gas tank, and sensitive to temperature and charging curves.
  • Others argue 5 minutes is effectively as convenient as refueling when you include pull‑in/out and payment time; it crosses the threshold where charging stops being a planning burden.
  • Several emphasize that for daily driving, charging speed barely matters if home or destination charging is available; ultra‑fast charging mostly changes the long‑road‑trip experience.
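The 5-minute figure implies a specific energy transfer, worth making explicit. The efficiency figure below is an assumption, not from the article:

```python
# Rough arithmetic behind "5 minutes is close to a gas stop".
# The 3.5 mi/kWh efficiency is an assumed figure, not from the article.

power_mw = 1.0
minutes = 5
energy_kwh = power_mw * 1000 * minutes / 60   # ~83 kWh delivered

mi_per_kwh = 3.5                              # assumed EV efficiency
range_added_mi = energy_kwh * mi_per_kwh

print(f"{energy_kwh:.0f} kWh -> ~{range_added_mi:.0f} miles added in {minutes} min")
```

Roughly 290 miles in five minutes is why some argue the stop itself, not the charging, becomes the dominant time cost, though the caveats about temperature sensitivity and charging-curve taper still apply.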

Battery Size, PHEVs vs BEVs, and Real Usage

  • One commenter questions “huge batteries,” arguing a ~75‑mile pack plus gasoline (PHEV) is more practical and infrastructure‑light than large BEVs.
  • Responses highlight:
    • People want a single “do‑everything” car that covers the rare 400–600 mile trip, emergencies, and edge cases (“99.9th percentile trip”), not just average commutes.
    • Psychological “precautionary consumption” makes range a strong selling point even if rarely used.
    • Some argue PHEVs are often not cheaper than comparable BEVs and add complexity; others present price examples where full BEVs take many years of fuel savings to justify their higher upfront cost.
  • A few note that ultra‑fast charging helps justify smaller packs by making occasional long trips less painful, potentially reconciling both approaches over time.

A look at Cloudflare's AI-coded OAuth library

Meaning and Drift of “Vibe Coding”

  • Several commenters argue “vibe coding” originally meant AI‑ or copy‑pasted code that humans do not meaningfully review.
  • Others see it more broadly as focusing only on whether something “seems to work” and not inspecting the code itself.
  • Some think expanding the term to mean “not battle‑tested” or just “normal imperfect code” makes it useless marketing jargon.

Cloudflare OAuth Library: Bugs and Review Claims

  • The blog post highlights issues: overly permissive CORS, incorrect Basic auth, deprecated implicit grant, weak token randomness, and limited tests.
  • One side infers this shows humans offloaded responsibility to the LLM.
  • Others push back: Cloudflare’s own README claims thorough human review against RFCs; missing bugs shows fallible review, not total abdication.
  • A Cloudflare engineer joins to defend some design choices (e.g., CORS as safe for bearer‑token endpoints, token randomness as secure though not maximally efficient) and notes the LLM did not invent the higher‑level crypto design.
  • The discovery of a biased token generator still makes some lose confidence in the review quality, even if the bug isn’t practically exploitable.
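The “biased token generator” point maps onto a classic pitfall: reducing random bytes into an alphabet with modulo skews the distribution whenever the alphabet size doesn’t divide 256. The sketch below is illustrative only and is not Cloudflare’s actual code:

```python
# Illustrative modulo-bias pitfall (not Cloudflare's actual code).
import secrets

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"  # 62 chars

def token_biased(n: int) -> str:
    # 256 % 62 == 8, so bytes 248..255 wrap around: the first eight
    # alphabet characters occur with probability 5/256 instead of 4/256.
    return "".join(ALPHABET[b % len(ALPHABET)] for b in secrets.token_bytes(n))

def token_unbiased(n: int) -> str:
    # Rejection sampling: discard bytes >= the largest multiple of 62.
    limit = 256 - (256 % len(ALPHABET))  # 248
    chars = []
    while len(chars) < n:
        b = secrets.token_bytes(1)[0]
        if b < limit:
            chars.append(ALPHABET[b % len(ALPHABET)])
    return "".join(chars)
```

The bias (5/256 vs 4/256 per character) is tiny, which matches the thread’s conclusion: it slightly reduces entropy per character and reflects on review quality more than it enables any practical attack.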

OAuth and Security Complexity

  • Multiple commenters note OAuth is notoriously tricky; even heavily tested commercial implementations have had hundreds of security bugs.
  • The takeaway for some: this is exactly the kind of domain where deep expertise and exhaustive testing (including all spec MUST/MUST NOTs and abuse cases) are mandatory, regardless of LLM use.

LLMs as Coding Tools: Productivity vs. Subtle Bugs

  • Practitioners report ~2× speedups for short, throwaway tasks and ~10–20% on larger, long‑lived codebases.
  • They also report many subtle bugs—especially in concurrency, error handling, security, and “looks right” defaults.
  • LLMs are compared to power tools: great accelerators for experts, dangerous in unskilled hands.

Need for Domain Expertise and “Automation Bias”

  • Many stress that LLMs are most valuable when the user is already an expert who can specify and review output.
  • There’s concern that normalization of AI assistance will increase automation bias: reviewers will trust AI output too readily, especially under time pressure.
  • Worries extend to the career pipeline: if juniors lean on LLMs instead of learning fundamentals, where do future domain experts come from?

Learning and Information Quality

  • Some say LLMs are “rocket fuel” for learning when paired with high‑quality sources, references, and critical verification.
  • Others counter that LLMs frequently fabricate plausible‑sounding details and citations, which is especially dangerous for novices who can’t spot errors.
  • There is broad anxiety that AI‑generated content will pollute search results and documentation, making reliable information harder to find and locking in outdated Stack Overflow patterns.

Testing, Multi‑Agent Review, and Comments

  • Several suggest AI should be used heavily to generate tests and critique specs/code, possibly with multiple models cross‑checking each other.
  • Skeptics note tests themselves can be wrong or gamed, and subtle bugs can still slip through.
  • Redundant line‑by‑line comments in the Cloudflare repo are seen as an LLM “tell”; some find them useless noise, others think they’re still better than the typical under‑documented human code.

Why not use DNS over HTTPS (DoH)?

Trust, Privacy, and the “Single Peeper” Question

  • Core disagreement: does DoH improve privacy by hiding queries from ISPs, or just shift all data to one big provider (often Cloudflare)?
  • Some argue Cloudflare (or similar) is more trustworthy than many ISPs, especially where ISPs are legally compelled to log and censor. Others see a US‑based CDN as strictly less trustworthy than a local/regional ISP bound by strong privacy law.
  • Several point out that whoever terminates your DNS (DoH, DoT, or UDP) can see your queries; encrypted DNS mainly stops intermediaries and local networks from snooping or tampering.

DoH vs DoT vs Other Protocols

  • Many say the article’s endorsement of DoT over DoH is incoherent: DoT has the same “single peeper” property, plus is trivial to block on port 853.
  • Pro‑DoH side: using HTTPS/443 lets DNS blend with normal web traffic, making censorship and ISP interception harder. Complexity of HTTP is seen as a justified trade‑off.
  • Critics prefer lighter, DNS‑specific schemes (DNSCrypt, DNSCurve, anonymized DNS, Oblivious DoH) and dislike “abusing HTTP” as a transport.
  • Some suggest running DoT over 443 with ALPN as a middle ground, but note that’s not how most infrastructure works today.
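For concreteness on why DoH “blends in”: a DoH client sends an ordinary RFC 1035 DNS packet as an HTTPS POST body with Content-Type `application/dns-message` (RFC 8484). This sketch builds only the payload and makes no network calls; the Cloudflare resolver URL is mentioned purely as an example endpoint:

```python
# What a DoH client actually sends: a normal DNS query in an HTTPS body
# (RFC 8484). This sketch builds only the payload; no network calls.
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:  # qtype 1 = A record
    header = struct.pack(
        ">HHHHHH",
        0,       # ID 0 is conventional in DoH to aid HTTP caching
        0x0100,  # flags: standard query, recursion desired
        1, 0, 0, 0,  # QDCOUNT=1; answer/authority/additional counts 0
    )
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QCLASS 1 = IN

payload = build_dns_query("example.com")
# POSTing `payload` to e.g. https://cloudflare-dns.com/dns-query over port 443
# looks like any other HTTPS request, which is the blocking-resistance point.
```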

Censorship Resistance and Blocking

  • Several commenters in tightly controlled or meddling ISP environments say DoH is the only way some sites work at all; ISPs block or rewrite DNS, or run transparent DNS proxies.
  • DoH plus emerging ECH is seen as a path to making hostname‑level censorship and profiling much harder.

Self‑Hosting and Recursive DNS

  • Strong contingent recommends running your own recursive resolver (often behind VPN or WireGuard), sometimes publicly shared for extra anonymity and caching benefits.
  • Others run Pi‑hole/AdGuard/dnscrypt‑proxy with DoH/DoT upstreams, or Tailscale/Android “Private DNS” for system‑wide encrypted, filtered DNS.
  • Concerns noted: exposure to amplification attacks, need for rate limits, and that queries from resolver to root/TLD/authoritative servers are still mostly unencrypted (mitigated somewhat by QNAME minimization).
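QNAME minimization (RFC 9156), mentioned above as a mitigation, is simple to illustrate: the resolver reveals one additional label per step instead of sending the full hostname to every server in the chain. A minimal sketch:

```python
# QNAME minimization (RFC 9156): reveal one label at a time while walking
# from the root toward the authoritative server.

def minimized_queries(name: str) -> list[str]:
    labels = name.rstrip(".").split(".")
    # Root is asked about the TLD, the TLD about the 2LD, and so on.
    return [".".join(labels[i:]) + "." for i in range(len(labels) - 1, -1, -1)]

print(minimized_queries("www.example.com"))
# ['com.', 'example.com.', 'www.example.com.']
```

The root server learns only `com.`; only the final authoritative server ever sees the full hostname.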

Application vs System DNS and Control

  • Major dislike: DoH inside browsers and IoT bypasses system DNS and network policy (e.g., Pi‑hole, corporate DNS, local zones). This weakens local administrative control and makes ad‑/malware‑blocking harder.
  • Counter‑argument: system DNS defaults are usually insecure and users rarely change them; app‑level DoH is a practical way to give “normal users” confidentiality from hostile networks.

Assessment of the Article

  • Many call the piece outdated (2018‑era Cloudflare‑only framing) and rhetorically loaded, mixing “Cloudflare bad” with protocol criticisms.
  • Several say its conclusion to “refuse to use DoH” is actively harmful: disabling DoH often just reverts to plaintext DNS, which is strictly worse for most users.

The last six months in LLMs, illustrated by pelicans on bicycles

Purpose and limits of the “pelican on a bicycle” test

  • Thread agrees this is an intentionally inappropriate task for text-only LLMs: they must write SVG code for a novel scene with no visual feedback.
  • Defenders say that’s the point: it stress-tests following a spec, compositionality, and abstract visualization, a bit like LOGO or CAD instructions.
  • Critics argue it’s a poor proxy for real engineering or design work, which depends on tacit knowledge, real-world constraints, and nuanced communication that aren’t online as training data.
  • Many see it primarily as a humorous, hype-deflating benchmark rather than a serious metric.

Quality, cost, and when to use LLMs

  • One camp: the outputs show LLMs are “all terrible” for creative/technical work and you should hire professionals.
  • Another: LLMs are “go-karts of the mind”—cheap, low-end tools that are “good enough” for many tasks where a Porsche-quality result isn’t needed.
  • Practical suggestions: for vector art, use image models (Midjourney, etc.) plus auto-vectorization instead of asking text models to hand-write SVG.
  • Consensus that writing complex SVG from scratch is hard even for humans; models are still much cheaper and faster, if you accept mediocre quality.

Benchmark methodology and contamination

  • Multiple complaints about evaluating probabilistic models from a single sample; calls for many runs and averaging.
  • Others counter that “one-shot” reflects how most users actually experience models and avoids human cherry-picking.
  • Concerns about using a single LLM as the judge; suggestions include human crowds, experts, and multiple models as evaluators.
  • As the pelican prompt spreads (talks, interviews, Google I/O), people worry it will leak into training data and be directly optimized against, reducing its value.
  • Some suggest rotating or hidden benchmarks (e.g., ARC Prize–style tasks, hashed prompts).

Humans vs LLMs on bikes and pelicans

  • References to projects where ordinary people draw impossible bicycles, showing humans also lack precise structural knowledge.
  • Disagreement over whether the “average human” still outperforms current models on basic correctness (wheels, chain, pedals) given time and references.
  • Cost comparisons: a human drawing from scratch vs thousands of model generations plus automated ranking.

Broader context: tools, hype, and safety

  • Mentions of better vector-ish tools (e.g., Recraft) and a Kaggle SVG competition that got strong results with specialized setups.
  • Discussion of mainstream virality of ChatGPT image generation (Ghibli-style portraits), with some downplaying it as fad and others seeing durable adoption.
  • Safety concerns around models “snitching” on wrongdoing (SnitchBench), agentic access to tools, prompt injection, and opaque memory features reducing user control.

FAA to eliminate floppy disks used in air traffic control systems

Legacy floppies: reliability, supply, and why change at all?

  • Several commenters note floppies are mechanically and magnetically fragile, with media quality having declined as demand shrank.
  • Others argue that high‑quality disks, handled carefully, can be very reliable and have been used successfully for decades.
  • A practical concern is supply: no major manufacturer makes new floppies, existing stock is finite, and drives/parts are also aging.
  • Some see the “floppy” angle as mostly PR/public embarrassment rather than the true technical driver.

Virtualization, emulation, and incremental upgrades

  • Popular suggestion: replace physical floppies with solid‑state floppy emulators (USB/SD/CF-backed) to keep legacy hardware/software but remove the weakest component.
  • Others propose virtualizing Windows 95/DOS-era systems on modern hardware or using DOSBox/QEMU while preserving original timing/behavior, which may be nontrivial.
  • Counterpoint: virtualization adds complexity and new failure modes to a safety‑critical system; “just virtualize it” is seen as oversimplifying a very high‑assurance environment.

Safety, fallbacks, and conservatism

  • Multiple comments stress that ATC is designed to function under total comms failure; paper strips, timed procedures, and standardized fallback rules are intentional resilience mechanisms that won’t (and shouldn’t) disappear.
  • Aviation’s “don’t touch what works” culture is defended as appropriate when lives are at stake, though others warn that over‑conservatism eventually increases risk as hardware becomes unmaintainable.

Politics, funding, and bureaucracy

  • Long history of failed or stalled U.S. ATC modernization efforts is noted; problem seen less as money and more as incentives, lack of accountability, and program mismanagement.
  • Worry that any large upgrade will become a political football or consulting bonanza, with risk of underqualified political appointees leading critical tech projects.
  • Some contrast this with other countries (EU, Canada, etc.) that have modernized ATC more smoothly under more stable governance/funding models.

Automation and alternative architectures

  • One thread debates fully decentralized, plane‑side collision avoidance vs centralized human ATC.
  • Advocates claim swarm‑like software coordination is solvable; critics point to split‑brain problems, heterogeneous fleets (especially small GA aircraft), emergency scenarios, fuel/throughput constraints, and much higher consequence of rare failures compared to cars.

Media framing and actual scope

  • A few commenters criticize coverage as shallow “LOL floppies,” noting that only specific older terminal systems appear to rely on them, and that the real work is broader, long‑term ATC modernization, not a single dramatic swap.

Re: My AI skeptic friends are all nuts

Scope of Skepticism vs. Hype

  • Many commenters say “skeptic” is the wrong label: most critics accept that LLMs are powerful and useful, but dispute grand claims (AGI soon, total job replacement, “learn to code is obsolete”).
  • Some argue skeptics often haven’t seriously tried current tools and are stuck on outdated impressions.
  • Others counter that skepticism is simply withholding belief without evidence, and hype far outstrips demonstrated capability.

Education, Homework, and “Dead Classrooms”

  • Strong concern over schools requiring LLM use, and teachers using LLMs to grade, leading to “LLM writes, LLM grades” situations.
  • Critics worry this undermines development of reasoning, writing, and problem‑solving skills, and amplifies a digital divide (wealthy students get better tools).
  • Some argue essay-writing is mostly busywork; others insist it’s core to organizing thoughts and learning logic.
  • Several say take‑home assignments and homework are effectively “dead” as honest assessment tools in an LLM world; some welcome the death of homework, others argue independent practice is essential.

Skill Atrophy and Critical Thinking

  • Multiple anecdotes of experienced devs feeling unable to work without LLMs, or forgetting basic patterns they used to know.
  • One side: atrophy of unused skills is fine—if you truly don’t need them, no loss.
  • Opposing side: coding and critical thinking are central job skills; if you can’t perform or verify them without a tool, you’re dangerously dependent, especially for future generations who never built the baseline.

Analogies to Past Technologies

  • Supporters compare fears to earlier panics over calculators, Google, IDEs, higher-level languages; abstraction and tool use are seen as the normal trajectory.
  • Critics respond that LLMs uniquely offload cognition, not just manual or syntactic work, and may hollow out thinking rather than just low-level implementation.

Socio‑Political and Economic Concerns

  • Some focus less on code quality and more on systemic effects: accelerated concentration of power and wealth, AI‑driven bureaucracy, erosion of human oversight and recourse, risk to democracy and social fabric.

Data Quality and Self‑Training

  • Brief debate on “AI slop” poisoning training data: worries that models will degrade as they train on their own output.
  • Others argue ranking, curation, and selection for popularity/quality can still sustain or improve models, though this is acknowledged as nontrivial and imperfect.

LLMs in Everyday Software Work

  • Several note that a large fraction of programming is routine “blue‑collar” glue work where LLMs and codegen shine and risks are lower.
  • Others insist even routine code must be reasoned about by humans; they distrust any generated code that hasn’t been deeply understood.

AI Skepticism as Politics and Research Strategy

  • One view frames strong AI skepticism as a partly political stance; skeptics reply concerns about AI’s downsides are broad and cross‑ideological.
  • An ML researcher argues the real issue isn’t whether LLMs work, but that almost all funding and attention are being funneled into one paradigm (scaling transformers), crowding out alternative approaches and creating a fragile “all eggs in one basket” situation.

<Blink> and <Marquee> (2020)

Nostalgia for the Early Web

  • Many reminisce about 90s/early‑2000s web: <blink>, <marquee>, “under construction” GIFs, guestbooks, web rings, counters, spacer GIFs, table layouts, image maps, frames, and tools like FrontPage, Dreamweaver, Fireworks.
  • Stories of hacks and workarounds: IE6 rounded corners via sliced images, frame-based chats before XHR, motion JPEG “streaming,” Netscape bugs, binary-editing browsers to disable blink.
  • Strong sense of “lost joy and wonder” and the accessibility of HTML then—kids learning by hand-writing sites in Notepad.

Actual Uses of <marquee> Today

  • Still used in niche or playful ways: animated personal homepages, parallax emoji scenes, news-ticker‑style RSS displays, truncated names in media UIs, tab labels, and retro-themed projects.
  • Some government sites (notably in India) still use marquees heavily, often alongside generally poor UX.
  • A few people unapologetically defend marquee as still useful for constrained text spaces or deliberate retro aesthetics.

Why Blink/Marquee Are Disliked

  • Core objections: moving text is hard to read, hijacks attention, and competes with the main content.
  • On the web, scrolling text is seen as unnecessary because layout can usually expand vertically; others push back that screen space is still finite.
  • Historical overuse and abuse (e.g., portal sites framing others’ content, attention-grabbing clutter) cemented a bad reputation.
  • Accessibility and usability issues: multiple scrollbars, broken back button, non-linkable content areas, and trouble with search engines.

Implementation and Compatibility Notes

  • <marquee> still works in modern browsers; in Chromium it’s implemented via CSS animations and requestAnimationFrame.
  • The default animation is noticeably unsmooth; tweaking attributes like scrolldelay helps, but many would prefer pure CSS.
  • Some legacy APIs and oddities persist (JavaScript’s String.prototype.blink, Android’s undocumented <blink> layout tag) for backward compatibility.

Security / Testing Uses

  • <blink> and <marquee> are used as cheap, visually obvious payloads for testing HTML injection/XSS.
  • Some deliberately whitelist marquee as an Easter egg; others use <plaintext> as an extreme “everything broke” indicator.

Broader Web-Evolution Debates

  • Long subthread on frames: nostalgia vs. detailed critiques (linking, navigation, responsiveness, accessibility, analytics).
  • Another on whether the web peaked around 2006–2010, the death of Flash, rising complexity, and Chrome’s dominance and impact on Firefox and open web standards.

Coventry Very Light Rail

Autonomous vehicles vs fixed transit

  • One branch argues future robotaxis (e.g., Waymo-style vehicles) will outcompete fixed-route rail on efficiency and ROI, claiming rapidly falling sensor costs and cheap base vehicles.
  • Others push back: hardware cost estimates are disputed, base vehicles aren’t “near zero,” and robotaxis still face congestion, empty repositioning trips, and low average car occupancy.
  • Several note AV benefits only fully materialize when almost all human drivers are gone, which is seen as decades away due to fleet turnover and politics.
  • Claims that buses are “barely more efficient than cars” and that urban density should be reduced are heavily challenged with occupancy data, congestion costs, and counterexamples from existing transit systems.

Rail vs bus tradeoffs

  • Supporters of rail highlight: dedicated right-of-way, higher people-per-hour at intersections, smoother ride, better accessibility, lower friction and energy use, reduced tire and brake dust, and stronger incentives for transit-oriented development.
  • Bus advocates counter that bus-only lanes, bus rapid transit, and electric buses can deliver similar service with far more flexibility and lower capital cost; many “advantages of rail” are really about right-of-way, not steel wheels.
  • There is debate over permanence: rail lines are harder (but not impossible) to remove than bus lanes, which can be politically repurposed.

What’s “very light” about Coventry VLR

  • Key technical features: battery power (no overhead wires), shallow 30 cm UHPC slab track that avoids most utility relocation, and a tight 15 m turning radius to fit existing streets and roundabouts.
  • Vehicles are small (capacity ~56), low-floor, bidirectional, and designed with potential future autonomy and high-frequency “turn up and go” operation in mind.
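The capacity debate reduces to simple throughput arithmetic. The ~56-passenger figure is from the article; the headways below are assumed for illustration:

```python
# Line capacity for small, frequent vehicles. Capacity is the article's
# ~56-passenger figure; the headways are assumed for illustration.

CAPACITY = 56

for headway_min in (2, 5, 10):
    pph = CAPACITY * 60 / headway_min   # passengers/hour/direction
    print(f"{headway_min:>2} min headway -> {pph:.0f} pph")
```

Even at a 2-minute “turn up and go” headway this is ~1,680 passengers/hour/direction, closer to a good bus corridor than to a conventional tram, which is the nub of the capacity critique below.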

Cost, innovation, and ‘gadgetbahn’ concerns

  • Enthusiasts see UHPC slab track and wire-free operation as a serious response to UK’s extreme tram cost overruns (e.g., utility moves, deep trackbeds, expensive stations).
  • Critics label it “gadgetbahn”: bespoke hardware that sacrifices the main benefit of trams (high capacity) while adding complexity (batteries, charging) instead of using standard off-the-shelf tram or BRT solutions.
  • There’s disagreement whether overhead wires are actually a major cost driver; some argue they’re cheap relative to track, especially at high frequency.

Local Coventry context and scalability

  • Coventry already has a substantial (increasingly electric) bus network and a compact, walkable core, with constrained medieval and village-origin streets and a problematic inner ring road.
  • Several doubt a small-tram system will scale beyond the old city or outperform improved buses, and suspect the demonstrator chiefly serves as a showcase to sell the technology to other UK and international cities.

Joining Apple Computer (2018)

Psychedelics, creativity, and risk

  • Several comments latch onto the author’s mention of an LSD-inspired insight for HyperCard and compare it to artists like the Beatles and R. Crumb.
  • People debate whether psychedelics are necessary for “genius,” noting plenty of great pre‑LSD artists and suggesting survivorship bias.
  • Traditional set/setting (preparation, mindset, environment, sober sitters) is emphasized as critical for safe use.
  • There is disagreement over whether LSD or psilocybin cause “permanent” brain changes; some cite research on neuroplastic effects, others insist “permanent” is a very strong and unclear term.
  • Personal anecdotes range from beneficial use to severe harm, including a story of a friend’s psychosis and suicide. Several stress predisposition to mental illness and the typical age of onset as important confounders.

HyperCard, empowerment, and the loss of an open playground

  • HyperCard is praised as visionary: letting non‑programmers build interactive media and giving “keys to the kingdom” to ordinary users.
  • Many feel modern platforms (walled gardens, app stores, ad‑driven ecosystems) represent a regression from that spirit of empowerment.
  • There’s debate over how much capitalism and hardware supply chains inevitably push control toward large corporations, versus how much is just human preference to consume rather than create.
  • Some argue creation tools did exist (e.g., bundled suites) but were barely used; others counter that better, simpler tools for popular creativity are still missing.

Nostalgia, boredom, and the joy of early computing

  • Multiple commenters reminisce about first encounters with early Macs, Lisa, HyperCard, MacPaint, and the feeling that “anything was possible.”
  • A recurring theme: creativity often emerged from boredom and offline exploration; several say that to recapture that magic now, you likely need to turn off the internet.
  • Others push back on pure nostalgia, claiming each era (including today’s AI boom) has its own unique opportunities and that this period might be especially rich for small, determined teams.

Light mode vs dark mode

  • The story about switching from white‑on‑black Apple II text to paper‑like white backgrounds sparks a light/dark mode debate.
  • Some jokingly call this the “original sin” of light mode; others defend it as critical for graphics and readability.
  • A more technical comment notes that eye strain often comes from contrast between screen and room lighting rather than light vs dark itself.

Apple’s mission: empowering creatives vs enclosing them

  • Many see the author’s description of “making tools to empower creative people” as the original appeal of Apple and early personal computing.
  • There’s disagreement over whether this still describes Apple today: critics say the primary mission is now maximizing consumer device sales; defenders argue that creative workflows and features remain central to Apple’s products.
  • Walled gardens and restricted runtimes are criticized as undermining the empowerment ethos, even as some secure, constrained environments are acknowledged as a response to past malware and abuse.

General Magic and missed chances

  • The move from Apple to General Magic is discussed as an example of early “personal communicator” vision that was right but too early or poorly productized.
  • Commenters argue the company had brilliant technologists but lacked strong product leadership and “adult supervision,” and critically missed the web/Internet wave.

Networks, capital, and who wins

  • The tight web of connections (e.g., between founders and powerful figures in finance and big tech) prompts frustration from those who feel talented but under‑funded.
  • Several argue that access to capital, talent networks, and distribution channels is often more decisive than raw ability.
  • Historical analogies (scientific conferences, regional tech hubs like Massachusetts vs California) are used to show how geography and institutions cluster opportunity, potentially wasting talent elsewhere.

Meaningful work and modern software culture

  • The author’s reflection on building things used by millions leads to broader discussion about meaning: many value working where they believe “if we win, the world is better,” and deeply regret years spent on harmful or pointless products.
  • Others note that even noble efforts can fail to visibly improve the world, and that this may be inevitable.
  • There is criticism of modern software process overhead (sprints, JIRA, meetings, stakeholder management) as crowding out deep work and making it hard to do “amazing” engineering like the small, focused teams of earlier eras.

BorgBackup 2 has no server-side append-only anymore

Context: Removal of Borg 2 “append-only”

  • Original Borg 1.x append-only relied on its log-structured storage format; Borg 2’s new storage (borgstore) no longer “appends” in that sense, so the feature was removed as a misfit.
  • Several commenters initially worried this weakened ransomware protection, since a core goal is that compromised clients cannot delete or corrupt existing backups.

New permissions model in Borg 2

  • Borg 2 introduces server-enforced permissions via borg serve --permissions=… (and env var), with modes like all, no-delete, write-only, read-only.
  • “no-delete” is clarified to block both object deletion and overwriting in the Borg store implementation (posixfs backend), providing at least the same logical protection as the old append-only mode.
  • Actual enforcement still depends on the backing store; Borg’s built-in file:/ssh: backend and borg serve can enforce it, other cloud/object stores must be configured to do so themselves.
  • Some confusion remains about how (or whether) this maps onto generic POSIX filesystems and cloud storage, and docs are seen as sparse/early.
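
The SSH enforcement discussed above is typically wired up with a forced command in the server’s authorized_keys, in the same style as Borg 1.x append-only setups. A minimal sketch, assuming the `--permissions` flag from the discussion and Borg’s `--restrict-to-repository` option; the path and key are placeholders:

```shell
# ~/.ssh/authorized_keys on the backup server: every login with this key is
# forced into `borg serve`, so the client cannot run arbitrary commands.
# --permissions=no-delete blocks both deletion and overwriting of stored
# objects, matching the protection the old append-only mode provided.
command="borg serve --permissions=no-delete --restrict-to-repository /backups/laptop",restrict ssh-ed25519 AAAA... client-backup-key
```

Note this only protects repositories reached through `borg serve`; a client with direct filesystem or object-store access bypasses it, which is the enforcement caveat raised in the thread.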

Ransomware protection and alternative strategies

  • Many emphasize that strong backups require that client credentials cannot delete or modify existing snapshots.
  • Common alternatives:
    • ZFS (or btrfs) immutable snapshots and replication (local + off-site, e.g. rsync.net) as primary ransomware protection.
    • Object storage with write-only / no-hard-delete keys (e.g. Backblaze B2, S3 Glacier + lifecycle rules).
    • Read-only ZFS snapshots on backup providers as an additional safety net.
  • Some argue that once you rely heavily on ZFS snapshots/replication, sophisticated tools like Borg add less value (vs simple rsync + snapshots), though others still value Borg’s low-RAM dedupe and robustness.
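
The ZFS strategy above can be sketched in two commands; dataset names, the remote host, and snapshot names are placeholders:

```shell
# Point-in-time snapshot: ZFS snapshots are read-only by design, so a
# compromised client that can only write new data cannot rewrite history.
zfs snapshot tank/backups@2024-06-02

# Incremental off-site replication (e.g. to rsync.net): -I sends all
# snapshots between the two named ones. The receiving side keeps its own
# snapshot chain, so deleting data at the source does not delete history.
zfs send -I tank/backups@2024-06-01 tank/backups@2024-06-02 \
  | ssh user@offsite.example zfs receive -u pool/backups
```

The ransomware protection here depends on the client not holding credentials that can destroy snapshots on the receiving side, which is why read-only snapshots on the provider are mentioned as an extra safety net.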

Comparisons and migration options

  • Multiple users report moving or considering moves to restic, Kopia, duplicacy, rustic, or rsync-based schemes.
  • restic:
    • Has an append-only mode via rest-server --append-only or via rclone+restricted SSH; several commenters report using it successfully in production.
    • Its append-only has caveats: metadata pruning by an admin account can still remove historic data indirectly.
    • Praised for single static binary, many backends, but criticized for high memory usage on some large workloads.
  • Kopia:
    • Liked for GUI and speed, especially for non-technical users.
    • Retention policy model is considered confusing or “footgun-like” by some.
  • General sentiment: Borg is solid and battle-tested, but Borg 2’s long beta and shifting features push some toward restic/Kopia, while others are content to wait, since the release will land “when it’s ready,” with many breaking changes consolidated into a single migration.
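
The restic setup described above can be sketched as follows; the repository path, listen address, and hostname are placeholders:

```shell
# Server side: in append-only mode, rest-server rejects deletes and
# overwrites, so a compromised client cannot purge existing snapshots.
rest-server --path /srv/restic-repos --append-only --listen :8000

# Client side: back up over the REST backend. `restic forget --prune`
# will fail against this server and must instead be run with separate
# admin credentials (or direct repo access) on a trusted machine.
restic -r rest:http://backup.example:8000/laptop backup /home
```

This is where the caveat from the thread applies: an admin account that can prune metadata can still remove historic data indirectly, so the admin credential needs the same protection as the repository itself.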