Hacker News, Distilled

AI-powered summaries for selected HN discussions.

Ask HN: What Are You Working On? (December 2025)

AI, Agents, and Developer Tools

  • Many projects center on AI copilots, agents, and eval tooling:
    • IDE integrations (Contextify, coding agents for Claude Code/Cursor, “Custom Copilot” alternatives) with emphasis on privacy, local context, and user control over workflows.
    • Multi-agent orchestration and visualization (Omnispect, agent OS, decision-tree tooling, KV-cache eviction research).
    • Infrastructure for running LLM stacks locally (Harbor, self-hostable MCP servers, Ollama/OpenWebUI setups).
    • Eval and safety tools (promptfoo, red-teaming frameworks, multi-agent test harnesses).
  • Skepticism appears around ceding too much control to Big Tech copilots and brittle cloud-only workflows; many projects explicitly prioritize local-first, open-source, or BYO-API-key designs.

Web, Infra, and Data Platforms

  • New web frameworks, SSGs, and knowledge tools: custom static site generators, Mint language, Mizu (Go web framework), Outcrop (fast knowledge base), activitypub servers, RSS tools.
  • Devops / infra tools: K8s PaaS (Canine), microservice orchestration, SOC2 and security analytics, observability (Signoz), Postgres tooling, job schedulers, local-first and embedded databases.
  • Several efforts to simplify C/C++ build and package management (“Cargo for C”), plus HTTP clients (pyreqwest), and ID tools (ULID-like types).

Productivity, Personal Data, and Life Simplification

  • Many personal-tracking apps: time trackers (with screenshot → LLM analysis), self-experiment CLIs, meal/health loggers, focus tools, habit/chores RPGs, typing trainers, self-tracking dashboards.
  • Strong “local-first” and privacy themes: offline note+spaced repetition systems, local file organizers, encrypted spreadsheets, local email search, AI visibility tools without GA.
  • A notable subthread on “digital simplification” (deleting VPSs, removing smart home tech, fewer apps). Some admire this; others question practicalities (e.g., taxes, hosting).

Games, Puzzles, and Creative Projects

  • Numerous indie games and puzzle sites (NES titles, Godot games, daily word/puzzle games, autobattlers, mods like Battlefield realism, retro map editors).
  • Discussion around balancing difficulty, growth (especially via TikTok/Wordle-style virality), and tooling (Kaplay, Bevy, custom engines).
  • Community feedback is largely enthusiastic; several games become part of commenters’ daily routines.

Education, Civic, and Legal Tools

  • Learning tools for kids and adults: curriculum-aligned school apps, language-learning (Cangjie, vocabulary apps), Rust/Bevy tutorials, research assistants for papers.
  • Civic/government projects: council data aggregation in UK/US, Puerto Rico need tracking, Berlin rent maps, history mapping, missing-person clustering.
  • Legal/administrative utilities: USCIS form fillers, flat-rate legal billing analytics, DAO experiments; some users question long-term necessity or impact.

Hardware, Embedded, and Physical Projects

  • Diverse hardware builds: e-bike batteries, handheld computers, custom thumb keyboards, golf launch monitors, analog computing modules, cloud chambers, game controllers.
  • Frequent use of LLMs to shortcut unfamiliar domains (Rust on MCUs, ESP32 firmware, PCB design), with some concern about AI’s limits on part selection and low-level detail.

Rust Coreutils 0.5.0 Release: 87.75% compatibility with GNU Coreutils

Memory safety and Fil-C vs Rust

  • One camp argues you can get “100% compatibility + memory safety” by compiling existing GNU coreutils with Fil-C, avoiding a rewrite.
  • Others note Fil-C’s GC-like runtime adds 2–4x overhead and higher memory use, roughly in “Java territory,” making it unappealing for new projects compared to Rust/Go/etc.
  • Debate over “unknowns”: C/Fil-C eliminate memory-unsafe bugs but still rely on old logic and environment quirks; Rust removes whole classes of memory bugs but can introduce fresh logic errors, as shown by recent Rust-based CVEs.

Performance impact

  • Some claim a 4x slowdown for Fil-C is acceptable for safety, especially because most coreutils are I/O bound.
  • A quick md5sum/sha256sum test showed no slowdown (even a slight speedup), but others suspect different compiler flags or intrinsics, so results are considered inconclusive.
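The shape of that quick test can be sketched in Python, with `hashlib` standing in for the actual GNU and Fil-C `md5sum` builds (which this snippet does not compare — it only shows the throughput-measurement pattern):

```python
# Sketch of the kind of quick throughput test commenters ran with
# md5sum/sha256sum. hashlib stands in for the coreutils binaries here;
# a real comparison would time the GNU and Fil-C builds themselves.
import hashlib
import time

def throughput_mb_s(algo: str, data: bytes, rounds: int = 5) -> float:
    """Best-of-N hashing throughput in MB/s for one algorithm."""
    best = float("inf")
    for _ in range(rounds):
        start = time.perf_counter()
        hashlib.new(algo).update(data)  # hash the whole buffer once
        best = min(best, time.perf_counter() - start)
    return len(data) / best / 1e6

payload = b"\x00" * (64 * 1024 * 1024)  # 64 MiB test buffer
for algo in ("md5", "sha256"):
    print(f"{algo}: {throughput_mb_s(algo, payload):.0f} MB/s")
```

A harness like this is dominated by the hash library's optimized primitives, which is exactly the compiler-flags/intrinsics confound commenters flagged when calling the results inconclusive.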

Stability, maturity, and Ubuntu’s experiment

  • Critics object strongly to Ubuntu making uutils the default in a non-LTS release before it reaches a 100% pass rate on the GNU test suite, calling the move premature, unstable, and of no benefit to users.
  • Defenders frame 25.10 as an explicit experiment to surface real-world bugs that tests miss, with the option to roll back before LTS.
  • There’s friction over opt-out being non-trivial (--allow-remove-essential) and over treating ordinary users as de facto beta testers.

Compatibility and bugs

  • “87.75% compatibility” comes from running the GNU coreutils test suite; known failures are ~12% of tests plus unknown gaps.
  • Some failures are described as obscure edge cases or unrealistic inputs; others (e.g., locale-aware sort) are seen as serious for non-English users.
  • uutils has contributed new tests and uncovered bugs in GNU coreutils itself.

Licensing and ethics (GPL vs MIT)

  • Strong disagreement over replacing GPL’d, decades-old GNU code with an MIT rewrite.
  • Concerns: loss of copyleft protections; enabling vendors to ship private security fixes; perceived disrespect toward GNU maintainers.
  • Counterpoints: clean-room reimplementation is legally/ethically fine; GNU itself reimplemented proprietary Unix tools under a new license; some view GPLv3 as burdensome and prefer permissive licensing.

Motivations and value of a Rust rewrite

  • Supporters cite: drop-in GNU compatibility, better error messages, UTF-8/i18n focus, performance, portability to non-Linux OSes, and a more maintainable language that attracts contributors.
  • Skeptics see limited concrete user benefit versus mature “titanium-stable” GNU tools, framing the rewrite as Rust evangelism or politics rather than clear technical necessity.

iOS 26.2 fixes 20 security vulnerabilities, 2 actively exploited

Security fixes, versions, and backports

  • Thread notes 26.2 fixes serious issues (RCE, data access, root), with concern for older OSes like iOS 15 if some vulns aren’t backported.
  • People debate Apple’s patch policy: links show only latest OS gets all security fixes; older versions get partial coverage due to “architecture and system changes.”
  • Some conclude they’ll avoid macOS/iOS as long-term platforms because of this; others argue Apple still supports devices longer than most vendors.

Hidden 18.7.3 and “dark pattern” debate

  • Many report that on iPhones only iOS 26.2 is prominently offered, while the security-only iOS 18.7.3 is hidden.
  • Workaround: enable iOS 18 Developer/Public Beta, install 18.7.3 (same build as release), then disable beta.
  • On macOS, users must click the “ⓘ” icon to deselect “Tahoe 26.2” and pick “Sequoia 15.7.3”.
  • Some call this a clear dark pattern (non-disruptive security update hidden behind extra/hard-to-discover steps). Others insist it’s just a reasonable default for most users and overusing “dark pattern” dilutes the term.

Liquid Glass UI, usability, and accessibility

  • The dominant topic is hostility to the new “Liquid Glass” look: busy transparency, diffraction effects, and wide corner radii that distract from content and hurt readability.
  • Reports of UI bugs: keyboard resizing, status text rendering black-on-black/white-on-white, Safari controls turning into “mystery meat,” layout shifts, CarPlay lag.
  • Accessibility toggles (Reduce Motion, Reduce Transparency, Increase Contrast, Show Borders, “Tinted” style) make it “barely usable” for some but introduce their own glitches and perceived latency.
  • A minority say they stop noticing the glass quickly and that the new features (e.g., spam filtering, iPad windowing) outweigh aesthetics.

Performance and device longevity

  • Multiple anecdotes of severe slowdowns after upgrading older iPhones and even a high-end M2 Max MacBook to 26.x, compared to devices on earlier OSes.
  • Others counter that 26.2 runs fine on their older devices and that perceived lag is often temporary indexing or poorly-updated Electron apps (partly fixed in 26.2).
  • Longstanding suspicion remains that major updates are used to push hardware upgrades; defenders attribute issues to batteries, heavier apps, and background work.
  • Some users now treat “never install a major OS” as best practice, relying on x.2 releases or staying on older versions until 26’s UX/perf improves—or planning to exit the Apple ecosystem entirely.

Private Equity Finds a New Source of Profit: Volunteer Fire Departments

Reaction to private equity in emergency services

  • Many see PE ownership of volunteer fire department software as “parasitic,” life‑threatening rent-seeking layered on already underfunded services.
  • Some argue this is just capitalism working as designed; others say it exemplifies why “large-scale financial engineering” is socially harmful.
  • A minority push back that PE is operating within laws intentionally written to favor it, and blame should include legislators and voters.

Systemic critiques: capitalism, PE, and law

  • Proposals range from banning PE outright to more targeted steps: ending carried-interest tax advantages, restricting debt-loading of acquisitions, and tightening rules against asset-stripping.
  • Debate over whether “capitalism is essentially evil”:
    • One side claims capitalism tends to destroy real markets and concentrates power.
    • The other insists critics rarely understand economic basics, conflating capitalism, markets, and money.
  • Some note a pattern where both direct regulation and incentive changes are proposed, but entrenched interests block both.

Volunteer fire departments, funding, and rural politics

  • Strong frustration that rural departments must run bake sales for trucks and maintenance while wealthier suburbs pay full-time firefighters.
  • One camp: rural voters repeatedly elect anti-tax, anti-safety-net politicians, so their underfunding is partly self-inflicted.
  • Counterpoint: this blames the powerless; both major parties serve capital, information is constrained, and many rural residents (including minorities, LGBT people, immigrants) don’t fit the stereotypical “redneck” profile.
  • Broader question: is it fair or sustainable to provide expensive services to very low-density areas without higher local taxes or stronger state/federal redistribution?

Why firefighters are volunteers

  • Explanations: in rural areas, needed staffing scales with land area, not population; incident volume is low; full-time staffing would be idle and unaffordable.
  • Some note other countries also rely heavily on volunteers, but usually with government-funded equipment, not fundraising.

Government vs open-source software solutions

  • Several argue this niche is ideal for open source, developed by firefighter–developers or a multi-department consortium, especially if regulations (e.g., NERIS compliance) are public.
  • Others question why individuals should volunteer coding labor for communities “too cheap” or too politically opposed to funding basic infrastructure.
  • Some suggest governments should provide standard, open-source reference implementations whenever they mandate data/reporting systems.
  • Skepticism exists about governments’ technical capacity, pay scales, and political vulnerability of internal dev teams.

Do departments need this software at all?

  • A few ask whether specialized systems are overkill for small volunteer departments that historically operated with paper, spreadsheets, or Airtable.
  • Others respond that regulatory and reporting requirements have effectively made such software mandatory, enabling regulatory capture by vendors.

Skepticism about the article

  • At least one commenter warns NYT narratives can omit context or contain factual errors; without industry knowledge, readers may not see the full picture.

Apple Maps claims it's 29,905 miles away

Apple Maps AirTag Distance Bug

  • Apple Maps reports an AirTag as ~29,905 miles away, exceeding Earth’s circumference, despite the actual straight-line distance being ~2,500–3,200 miles.
  • Commenters note road distance can exceed crow-flies distance (detours, elevation, non-straight roads) but not plausibly by a factor of ~12.
  • Some joke explanations reference altitude or geostationary orbit, but others point out altitude is negligible at this scale.
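The sanity check commenters are doing is a great-circle (haversine) calculation. A minimal sketch, using roughly Los Angeles and New York as illustrative endpoints (the thread does not pin down the real ones):

```python
# Haversine great-circle distance, the "crow-flies" figure commenters
# compare against. The coordinates (roughly LA and NYC) are illustrative;
# the thread does not give the actual AirTag endpoints.
from math import asin, cos, pi, radians, sin, sqrt

EARTH_RADIUS_MI = 3958.8  # mean Earth radius in miles

def haversine_mi(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MI * asin(sqrt(a))

crow_flies = haversine_mi(34.05, -118.24, 40.71, -74.01)  # ~2,450 mi
circumference = 2 * pi * EARTH_RADIUS_MI                  # ~24,875 mi
print(f"{crow_flies:.0f} mi crow-flies vs 29,905 mi reported")
# 29,905 mi exceeds Earth's full circumference, so no road detour,
# elevation, or altitude correction can account for the reported figure.
```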

Speculated Causes

  • Hypothesis: routing engine summing an anomalously long route due to road closures or mis-marked segments (e.g., artificially inflated segment lengths to discourage routing).
  • Another suggestion: accumulated error from detailed road geometry or “fractal” measurement, but most find this insufficient to explain a ~12x blowup.
  • Coastline-paradox–style arguments are mentioned and largely dismissed as too small an effect relative to the observed error.

Other Navigation / GPS Failure Stories

  • Multiple anecdotes of in-car nav getting “stuck”:
    • Tesla on a ferry continues to think it’s at the departure port for ~5 hours, showing the car driving through the sea and panicking about chargers.
    • Volvo and cheap aftermarket GPS units that latch onto the wrong road or region and stay there until a hard reset.
    • Devices jumping into fields, foreign countries, or staying on wrong roads due to aggressive “snap to road” behavior.
  • Explanations discussed:
    • Dead reckoning in tunnels/ferries using accelerometers, gyros, wheel sensors, and sometimes “speed pulse” wiring.
    • Conflicts between GPS, Wi-Fi positioning, and map data; possible anti-spoofing logic that distrusts sudden large jumps.
    • Almanac/ephemeris handling, assisted GPS via the internet vs satellite-only updates, and bad Kalman filter tuning.
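The "distrusts sudden large jumps" behavior can be illustrated with a toy one-dimensional Kalman filter. All numbers here are invented for illustration, not taken from any real navigation stack:

```python
# Toy 1-D Kalman filter showing how "distrust sudden jumps" tuning
# (tiny process noise q vs large measurement noise r) makes the estimate
# crawl toward a genuine teleport, like a car waking up after a ferry.
# All numbers are illustrative assumptions.
def kalman_track(measurements, q=0.01, r=10.0):
    x, p = measurements[0], 1.0  # state estimate and its variance
    track = []
    for z in measurements:
        p += q                   # predict: position assumed near-static
        k = p / (p + r)          # Kalman gain: small when q << r
        x += k * (z - x)         # update: move a small fraction toward z
        p *= 1 - k
        track.append(x)
    return track

# GPS fixes sit at position 0, then jump to 100 after a "ferry ride".
zs = [0.0] * 50 + [100.0] * 100
est = kalman_track(zs)
print(f"first fix after jump: {est[50]:.1f}; after 100 more: {est[-1]:.1f}")
```

With this tuning the first post-jump estimate moves only a few percent of the way, and it takes on the order of a hundred fixes to catch up — the same failure mode as a nav unit "stuck" at the departure port for hours.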

UI and Map Quality Complaints

  • Apple Maps direction indicator on iPhone is reported as persistently inaccurate when walking/cycling; suspected compass issues and calibration settings.
  • CarPlay Maps allegedly has jittery zoom (broken hysteresis). Find My sometimes shows absurd timestamps.
  • Users lament the lack of a persistent scale bar in mainstream map apps, attributing it to minimalistic UI choices.

Broader Reflections

  • Some argue estimation/Fermi-question skills wouldn’t catch this kind of routing bug; it’s more about edge cases and data handling.
  • Thread mixes humor (fractal-path jokes, blood-vessel comparisons) with frustration and a sense that mapping and GPS systems still have many brittle corners.

AI and the ironies of automation – Part 2

Calculator Analogy vs AI Systems

  • Several comments challenge comparing AI to calculators: a calculator is a deterministic tool, not a “thinker,” and it goes wrong only if humans set the problem up wrong.
  • Others note that in real engineering practice even calculators can silently enable catastrophic errors when units, formulas, or orders of magnitude are wrong—only domain intuition catches this.
  • AI is seen as fundamentally different because it can fail in ways that are opaque and non-local, yet still look plausible.

Skill Decay and the Irony of Automation

  • Core theme: as agentic AI takes over execution, human experts risk losing the hands-on skills needed to intervene in rare but critical failures.
  • Maintaining expert competence then requires deliberate, ongoing “practice work,” which eats into the very efficiency gains automation promised.
  • This echoes Bainbridge’s 1983 “ironies of automation”: current systems still ride on a generation that learned to do the work manually; later generations may not.

Human-in-the-Loop, Non‑Determinism, and Oversight

  • LLM-based agents are criticized as non-deterministic and prone to rare but extreme errors (e.g., destructive commands), making unsupervised use unsafe today.
  • There’s concern that as failures become rarer, operators will be more bored and less attentive, yet are still expected to catch subtle, high-impact mistakes.
  • Some argue that where outputs are testable and processes deterministic, AI-generated pipelines can run largely unattended; others counter that LLMs don’t meet those conditions.

Corporate Efficiency, Signaling, and “Automating Shit”

  • Multiple commenters doubt that companies are truly “efficiency-obsessed”; they more often chase the appearance of efficiency and do “good enough” work that accumulates fragility and tech debt.
  • AI fits neatly into this signaling narrative: it’s adopted to look modern and efficient, not necessarily to build robust systems.
  • If a process is already bad, AI just lets you “automate shit at lightning speed.”

Experts as Managers and Orchestrators of AI

  • Experts are expected to transition into managing agents rather than doing the work themselves, a role many find less satisfying and for which they’re rarely trained.
  • In practice, a lot of time still goes into “programming the AI”: specifying goals, constraints, and acceptable changes—more akin to system design than simple oversight.
  • Some suggest intentional “manual time” (e.g., 10–20%) to keep skills sharp, but question whether that still yields real net productivity gains.

Analogies: Aviation, Automotive, Factories

  • Aviation is presented as a model: autopilots handle most flying, but pilots are heavily trained and periodically required to fly manually to prevent skill loss; regulation enforces this.
  • Commenters doubt software organizations will invest similarly in manual practice given short-term delivery pressure.
  • Automotive “levels of autonomy” are used as a metaphor: current AI coding tools feel like Level 2–3—most dangerous, with shared control and murky responsibility.
  • Others note that factory automation has succeeded despite operators no longer knowing fully manual operation; expertise migrated into process engineering.

Current Practical Limits of AI Tools

  • Outside coding, several experiences with document/PDF tools show frequent silent failures: dropped rows, duplicated data, truncated search contexts, and very confident but wrong answers.
  • Non-technical users are especially at risk of trusting such outputs without understanding limitations or needed validation.

Creativity, Culture, and Training Data Ecology

  • Some worry AI-generated content is “polluting the commons” of cultural data and displacing paid creative work, threatening future training data quality and creative ecosystems.
  • Debated whether paying clients actually prefer human-made art or will accept AI for cheaper, generic digital assets (e.g., stock-ish illustrations).
  • There’s concern that cultural output could converge toward low-cost, model-shaped “junk food,” undermining artists’ livelihoods and shrinking entry paths into creative fields.

Discipline, Atrophy, and Individual Use

  • A number of commenters report feeling skill atrophy or a strong reflex to “just ask the LLM,” likening avoidance of over-reliance to diet or exercise discipline.
  • Others frame AI as reducing friction and helping start projects they’d otherwise never begin, while insisting they skip AI when they truly need deep understanding or correctness.

Kimi K2 1T model runs on 2 512GB M3 Ultras

Model details and quantization

  • The demo uses Kimi K2 at 4‑bit quantization on two 512GB M3 Ultras; several people note this should be explicitly stated, though some assume “1T parameters” implicitly means heavy quantization.
  • There’s confusion between Kimi K2 vs K2 Thinking (K2T): they are different models with very different capabilities and post‑training. K2T is seen as closer to top-tier models like Sonnet 4.5.
  • Questions arise about context length and prefill speed; commenters warn that “it runs” at small context doesn’t imply usable performance at large, coding‑style contexts.
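The memory arithmetic behind "4-bit is implied" is straightforward; a back-of-envelope sketch, where the ~10% overhead figure is an assumption rather than a number from the demo:

```python
# Back-of-envelope for why a 1T-parameter model at 4-bit fits on two
# 512 GB machines. The ~10% overhead (quantization scales, KV cache,
# runtime slack) is an assumed figure, not measured from the demo.
PARAMS = 1.0e12
BITS_PER_WEIGHT = 4          # 4-bit quantization
OVERHEAD = 1.10              # assumed overhead multiplier

weights_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9   # 500 GB of weights
total_gb = weights_gb * OVERHEAD                  # ~550 GB with slack
available_gb = 2 * 512                            # two 512 GB M3 Ultras

print(f"{weights_gb:.0f} GB weights, ~{total_gb:.0f} GB total "
      f"vs {available_gb} GB available")
# At fp16 the same weights would need ~2,000 GB, which is why
# "1T parameters on 1,024 GB of RAM" implies heavy quantization.
```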

Behavior, style, and use cases

  • Kimi K2 is described as less capable than frontier models on complex reasoning but unusually strong at:
    • Short-form writing (emails, communication)
    • Editing and blunt critique
    • “Emotional intelligence” / social nuances in messages
    • Geospatial tasks
  • It is perceived as unusually direct, willing to call out user mistakes, and to clearly say “there is no answer in the search results.” Some users value this non‑sycophantic style.

Instruction-following vs pushback

  • One camp wants strict, assumption‑free instruction following (especially for coding), with the model asking clarifying questions rather than disagreeing.
  • Another camp prefers agents that take initiative, push back on dubious instructions, and warn about dangerous consequences (e.g., potential SQL injection).
  • A middle ground emerges: models should sometimes ask clarifying questions and sometimes challenge the request, but not blindly comply.

Training, architecture, and RLHF

  • Kimi is said to be based on a DeepSeek-style MoE architecture, trained with the Muon optimizer and “mainly finetuning.”
  • Debate over whether most Chinese models are downstream of DeepSeek/GPT; others point to Qwen, Mistral, Llama, ERNIE, etc. as independent efforts.
  • Several comments criticize mainstream RLHF for over-optimizing for politeness and flattery; Kimi is praised as a counterexample.

Benchmarks and prompting

  • Kimi K2 reportedly performs unusually well on the “clock test” and EQBench (with the caveat that EQBench is LLMs grading LLMs).
  • Discussion around more “linguistically technical” system prompts to force blunt, “bald-on-record” responses, illustrating how prompt wording strongly shapes behavior.
  • One commenter argues these are really “word models,” not true “language models,” since phrasing and register substantially affect outputs.

Local vs cloud, cost, and privacy

  • Running a 1T model locally on dual M3 Ultras (~$19K) is viewed by many as uneconomical versus cloud inference, especially given low personal utilization and very fast providers (Groq, Cerebras, etc.).
  • Others argue local is about:
    • Privacy and sensitive data (including “record everything” workflows and codebases)
    • Autonomy from future “enshittification” of cloud AI
    • Hobbyist experimentation and research
  • There’s disagreement over whether local makes sense only for privacy/hobby vs. future-proofing or high‑value bespoke uses.

Hardware and interconnect

  • Some speculate about macOS RDMA over Thunderbolt; the original demo is confirmed not to be using it yet, with expected future speedups.
  • Questions arise about Linux equivalents: vLLM can scale over standard Ethernet, but peak performance requires RDMA‑class interconnects.
  • Commenters also note refurbished/discounted M3 Ultras but point out that the lower-cost refurb configs don’t match the 512GB RAM spec in the demo.

Baumol's Cost Disease

Role and Limits of Baumol’s Cost Disease

  • General agreement that the Baumol effect is real and explains some relative price and wage shifts, especially where productivity lags.
  • Strong disagreement on how much it explains in the modern US: some say it’s overused as a rhetorical shield to obscure rent extraction, concentrated corporate power, and regulatory failure.

Housing, Land, and Regulation

  • Several argue housing costs are driven more by land scarcity and zoning than by construction productivity or market concentration.
  • Examples: teardown lots where land cost dominates; inability to split lots or build multifamily units due to zoning.
  • One point: an “invisible land value tax” rises when high-productivity metro areas stay fixed and few new economic centers are created.

Services, Local Labor, and Automation

  • Mental model: sectors with high share of non-automatable local labor (childcare, education, medicine) see structurally higher inflation.
  • Cultural/regulatory constraints (e.g., teacher–student ratios, medical workflows) limit productivity gains.
  • Some see this as justification for deregulation in healthcare, childcare, education, and housing to enable more supply and competition.

Wealth, Productivity, and Distribution

  • Dispute over whether higher productivity in some sectors always means “society is wealthier.”
  • Critics emphasize real resources and services (e.g., doctors vs finance quants) and note that rising mean income with rising inequality may not improve median welfare.
  • Concern that productivity gains invite new regulation/tolls that capture surplus instead of benefiting workers/consumers.

Finance, Advertising, and “Socially Useless” Productivity

  • Debate whether high-productivity finance (e.g., HFT) delivers real social value or merely reallocates rents.
  • Similar split on advertising: one side sees it as vital information infrastructure, especially B2B and medical marketing; others see it as wasteful, distortionary, and prone to scams, preferring non-advertising channels.

Discretionary vs Non‑discretionary Goods

  • Observation that “red” sectors (healthcare, housing, college) are low-elasticity, often financed (insurance/loans), and heavily regulated, which may damp price sensitivity and automation incentives.
  • Counterpoint: even “non-discretionary” goods (food, housing location) have substitution and policy choices; many pathologies blamed on Baumol are framed instead as regulatory capture.

Examples and Edge Cases

  • Discussion of software: some note SaaS price hikes (e.g., Photoshop), others point to cheaper or free alternatives and argue effective quality‑adjusted prices have fallen.
  • Dental hygienists cited as a clear Baumol-style case: hard to automate, reimbursement‑capped revenue, wage pressure after labor exits.
  • Multiple comments stress Baumol is distinct from generic inflation; it specifically concerns cross-sector wage equalization driven by uneven productivity growth.
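The wage-equalization mechanism in that last point can be made concrete with a toy two-sector model; the growth rates and horizon are illustrative assumptions, not estimates:

```python
# Toy Baumol model: productivity grows 3%/yr in "goods", 0% in a
# labor-bound service (e.g. dental hygiene). Wages equalize across
# sectors, so the service's relative price tracks the economy-wide
# wage, not its own flat productivity. All numbers are illustrative.
YEARS = 30
GOODS_GROWTH = 0.03    # assumed annual productivity growth in goods
SERVICE_GROWTH = 0.0   # the service cannot automate

wage = (1 + GOODS_GROWTH) ** YEARS  # wages follow goods productivity
goods_unit_cost = wage / (1 + GOODS_GROWTH) ** YEARS      # = 1.0
service_unit_cost = wage / (1 + SERVICE_GROWTH) ** YEARS  # wage passes through

print(f"after {YEARS} yrs, a service unit costs {service_unit_cost:.2f}x "
      f"relative to a goods unit ({goods_unit_cost:.2f}x)")
# The service got no worse; it just didn't get faster, so its relative
# price rose ~2.4x purely from cross-sector wage equalization —
# distinct from generic inflation, which raises both sectors together.
```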

The Gorman Paradox: Where Are All the AI-Generated Apps?

Where Are the AI-Generated Apps? (Visibility vs Reality)

  • Many commenters say AI-built apps do exist but are mostly:
    • Internal line-of-business tools (custom CRMs/ERPs, accounting tools, flight school rental systems, infra automation, embedded code).
    • One-off “vibe-coded” utilities, scripts, browser extensions and CLIs tailored to a single user or team.
  • These rarely hit app stores or public marketplaces, so app-store counts miss most of the impact.
  • Others note lots of obvious “AI slop” on the web and in app/game stores (e.g. low-quality Steam games, ugly marketing sites) but agree that high‑quality, widely used AI‑built products are scarce.

What AI Coding Is Actually Good At

  • Strong for: scaffolding CRUD apps, parsing common formats, internal dashboards, scripting, refactors, tests, rote API glue, and debugging when guided by an experienced developer.
  • Several report large personal speedups (often 3–10x) when they already understand the domain and review/shape the code as it’s written.

Where It Still Fails: The Last 20%

  • Major weaknesses:
    • Handling messy real-world edge cases (bank CSV quirks, changing APIs, odd hardware, OAuth failures).
    • Library/framework churn (e.g. webpack4→5, specific Arduino boards).
    • Architecture, long-term maintainability, security, performance, and ops.
  • Pattern described repeatedly: AI makes the first 60–80% trivial, then the remaining 20–40% becomes harder because you’re debugging unfamiliar, often bloated code. Sometimes it’s faster to rewrite by hand.
  • Rapid codegen can overload code review/QA and generate large amounts of technical debt.

Why No Visible Productivity Explosion?

  • Empirical metrics (app stores, some software output graphs) show no obvious inflection; skeptics invoke Amdahl’s Law and Theory of Constraints: speeding up coding (a fraction of the work) doesn’t speed shipping much.
  • Demand, attention, distribution and maintenance remain the real bottlenecks; markets are saturated, and shipping something people want is still the hard part.
  • “AI-generated” is often seen as a quality/liability red flag, so usage is underreported.
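The Amdahl's Law argument sketched above is a one-line formula; here with an assumed, illustrative 30% coding share of total shipping effort:

```python
# Amdahl's-Law framing of the "no productivity explosion" argument:
# even a 10x speedup on coding barely moves end-to-end shipping time
# if coding is a minority of the work. The 30% share is an assumed,
# illustrative number, not a measurement.
def overall_speedup(fraction_sped_up: float, factor: float) -> float:
    """Amdahl's Law: speedup of the whole when one fraction gets faster."""
    return 1 / ((1 - fraction_sped_up) + fraction_sped_up / factor)

coding_share = 0.30  # assumed share of shipping that is raw coding
print(f"10x faster coding -> {overall_speedup(coding_share, 10):.2f}x overall")
print(f"infinitely fast   -> {overall_speedup(coding_share, 1e12):.2f}x ceiling")
```

With these numbers, 10x faster coding yields only ~1.37x overall, and even infinitely fast coding caps out below 1.43x — the bottleneck moves to demand, distribution, and maintenance.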

Diverging Narratives About the Future

  • Optimists expect exponential capability gains and eventual disruption akin to digital photography.
  • Skeptics see unreliable generation, benchmark games, and a hype bubble more like dot‑com/crypto, with limited real productivity so far.
  • Broad agreement: current tools are powerful assistants, not autonomous app factories.

Europeans' health data sold to US firm run by ex-Israeli spies

Israeli-Linked Firm and Surveillance Allegations

  • Several commenters assert a recurring pattern of companies run by alumni of an Israeli signals-intelligence unit operating data-heavy or “telemetry/surveillance” businesses, with little visible government pushback.
  • Others reject this as conspiratorial or xenophobic framing, arguing it’s natural that an elite technical unit produces successful cybersecurity entrepreneurs, similar to MIT/Stanford graduates.
  • Some see the article’s emphasis on “ex-Israeli spies” as inflammatory, akin to guilt by association, with accusations of propaganda and generalized suspicion of Israelis.

What Zivver/Kiteworks Actually Does & Security Concerns

  • Critics say the whole model (a web portal that decrypts or scans documents server-side) is structurally incompatible with true end‑to‑end privacy; the operator necessarily sees plaintext at some point.
  • Dutch-language reporting referenced in the thread claims security researchers found cases where data was transmitted in plaintext and not properly end‑to‑end encrypted.
  • Defenders counter that Zivver openly advertises server-side content scanning, so this is not a “backdoor” but the declared design; they see no concrete evidence of a state-intel dragnet behind the acquisition.
  • Some float a honeypot theory (intelligence services buying an already-flawed product to exploit), while others insist this remains speculative and conflates ordinary security bugs with espionage.

Jurisdiction, Extradition, and Trust in US/Israel

  • Multiple comments argue that any US- or Israel-controlled entity handling EU health data is problematic because of those countries’ surveillance laws and political track records.
  • There is debate over how willing the US and Israel are to extradite their nationals, and whether either state can be trusted not to leverage such data for intelligence purposes.

Unit 8200, Mandatory Service, and Ethics

  • Some describe the unit as the “MIT of Israel,” noting conscription funnels technically strong recruits into it, which later fuels the startup ecosystem.
  • Others stress that Unit 8200 is not merely “cybersecurity” but a core signals-intelligence and targeting organization, raising moral questions about veterans building products around highly sensitive foreign data.
  • A side debate unfolds about individual culpability under conscription versus voluntary military service.

European Attitudes to Health Data Privacy

  • Numerous European commenters state they care deeply about medical privacy—often more than about general data—citing fears around discrimination, blackmail, stigma (mental health, HIV, pregnancy), and future political shifts.
  • Some describe opting out of national electronic health records and express anger at being forced to use tools like Zivver without meaningful consent.

GDPR, Enforcement, and US Tech Dependence

  • There is frustration that GDPR is poorly enforced (especially via Ireland), producing cookie banners but limited real restraint on large platforms.
  • Commenters worry that EU governments themselves increasingly mandate or de facto require interaction through US platforms and clouds, undermining “digital sovereignty.”
  • Several argue Europe should insist public systems use EU-controlled, open, and auditable infrastructure, particularly for health data, even if that means funding “EuroTube/EuroMail”-style alternatives.

Shai-Hulud compromised a dev machine and raided GitHub org access: a post-mortem

Overall Reaction & Oddities of the Attack

  • Many praise the transparency and quality of the post‑mortem.
  • Several find the worm’s behavior strange: quietly exfiltrating secrets, then loudly force‑pushing and vandalizing repos.
    • Some see this as “script‑kiddie‑like”; others suggest it helps hide which keys were leaked and complicates rotation.

GitHub / Org-Level Protections

  • Suggestions:
    • Use GitHub org IP allowlists and stricter egress filtering from dev environments.
    • Protect main/production branches, require PRs, reviews, and MFA for admin actions.
  • Debate on force-push: some think it should be almost entirely disabled; others allow it on dev branches only.

SSH Keys & Local Developer Security

  • Many concrete hardening ideas:
    • Store SSH keys in hardware (YubiKey/FIDO), TPM, or password managers (1Password/Bitwarden) acting as SSH agents.
    • Require touch/PIN for each SSH auth, and use separate keys for Git access vs commit signing.
    • Always encrypt private keys with strong passphrases and avoid plaintext secrets on disk.
    • Use OAuth/HTTPS for Git operations and keep admin accounts separate and rarely used.
    • Some develop inside VMs/WSL or separate machines for added isolation.
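A sketch of the key-hygiene advice above (file paths and passphrases are placeholders): separate passphrase-encrypted keys for Git access and commit signing, with the hardware-backed variant shown as a comment since it needs a physical token present.

```shell
set -e
dir=$(mktemp -d)

# Separate, passphrase-protected ed25519 keys: one for Git access,
# one for commit signing
ssh-keygen -q -t ed25519 -N 'long-random-passphrase' \
  -C git-access  -f "$dir/id_git"
ssh-keygen -q -t ed25519 -N 'another-passphrase' \
  -C commit-sign -f "$dir/id_sign"

# With a YubiKey/FIDO2 token plugged in, a touch-gated key would be:
#   ssh-keygen -t ed25519-sk -O verify-required -f ~/.ssh/id_git_sk

# Point Git's SSH-based commit signing at the signing key
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" config gpg.format ssh
git -C "$repo" config user.signingkey "$dir/id_sign.pub"

# The private key files on disk are encrypted, never plaintext
head -1 "$dir/id_git"
```

Per-use touch/PIN prompts come from the `-sk` key type plus `verify-required`, not from anything in software alone, which is the crux of the “game over?” debate in the next section.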

“Compromised Laptop = Game Over?”

  • One camp: if the dev machine is compromised, the attacker can ultimately do anything the user can; hardware tokens and agents only raise the bar slightly.
  • Other camp: re‑authentication prompts, hardened agents, sandboxing, and noexec mounts can meaningfully reduce risk, even if not airtight.
  • Consensus: only independent hardware that shows what you’re signing really defends specific high‑value actions.

Secrets Management & Cloud Access

  • Strong push toward:
    • Ephemeral cloud credentials (e.g., browser/OIDC-style logins) rather than long‑lived plaintext keys.
    • Avoiding secrets in files and shell history; using managers or encryption instead.
  • Disagreement over whether the database/AWS can be called “not compromised” if attacker had potential access; some stress “assume breach” vs others trust logs and auditing.
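For the ephemeral-credentials push, AWS CLI v2’s IAM Identity Center (SSO) profiles are one concrete shape: `aws sso login` opens a browser/OIDC flow and caches short-lived credentials, so no long-lived access key sits in a dotfile. A hypothetical config (all account IDs, names, and URLs invented), written to a temp file here rather than the real `~/.aws/config`:

```shell
set -e
cfg=$(mktemp)

# Hypothetical ~/.aws/config: `aws sso login --profile dev` then
# handles auth via the browser; credentials expire on their own.
cat > "$cfg" <<'EOF'
[profile dev]
sso_session    = corp
sso_account_id = 123456789012
sso_role_name  = Developer
region         = eu-west-1

[sso-session corp]
sso_start_url = https://example.awsapps.com/start
sso_region    = eu-west-1
EOF
```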

Package Managers & Ecosystem Security

  • Heavy criticism of npm-style lifecycle scripts; “npm post-install scripts considered harmful.”
  • Confusion about pnpm: newer versions block dependency lifecycle scripts by default; commenters infer the team used an older version or project-level scripts.
  • Some argue blocking install scripts only partially helps, since malicious code can still run at runtime.
  • Yarn’s security posture is debated; some recommend migrating to pnpm or npm with scripts disabled by default.
  • Broader concern about ecosystems (npm, IDE plugins, browser extensions) that allow arbitrary third‑party code with minimal oversight.
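The mitigation discussed above is a one-line config for npm (and for pnpm versions that don’t already block dependency scripts by default). A sketch using a throwaway project dir instead of a real one:

```shell
set -e
proj=$(mktemp -d)

# Per-project: refuse to run pre/post-install lifecycle scripts
# from dependencies. npm honors this in .npmrc; recent pnpm blocks
# dependency scripts by default and uses an explicit allowlist.
printf 'ignore-scripts=true\n' > "$proj/.npmrc"

# Or globally, for npm:
#   npm config set ignore-scripts true
cat "$proj/.npmrc"
```

As the thread notes, this only narrows the window: a malicious package can still run whatever it likes once your code `require`s it at runtime.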

Detection, EDR, and Leaked Data Discovery

  • People ask whether the exfiltration traffic was distinguishable from normal dev traffic, and note that an EDR product (SentinelOne, CrowdStrike, etc.) might have provided more forensic detail.
  • Desire for a “haveibeenpwned”‑style service for the dumped tokens, since the worm mixed victims and double‑encoded data, making it hard to know what was stolen.

“You should never build a CMS”

Git vs. CMS for content editing

  • Some argue Git-as-CMS is “hellish” for growing marketing/comms teams; non‑technical staff need WYSIWYG, not branches and rebases.
  • Others counter that many people can learn basic technical tools, and Git’s strong versioning is exactly what CMS UIs often weaken, leading to broken links, inconsistent assets, and siloed workflows.
  • A middle view: Git is fine as a backend if you wrap it in a friendly UI or simple scripts; forcing raw Git on marketers is unrealistic, but “powered by Git” doesn’t have to mean “use Git directly.”

AI agents vs. traditional CMS

  • One side reads the original AI-company post as: agents work better on code than through a CMS abstraction, so for some teams a CMS is now pure overhead.
  • Others clarify the original author explicitly said many teams still need a CMS; the claim is narrower: if agents can safely manipulate code/content, non‑technical users may not need a GUI for simple edits.
  • Some foresee “agent‑first” tooling where CMS content, docs, and tickets are all manipulated through APIs and MCP‑style servers rather than manual web UIs.

Complexity, scale, and custom builds

  • Many agree with the article’s core: simple homegrown CMSes inevitably accrete complexity, just like ad‑hoc build scripts that grow into build systems.
  • Several commenters describe successful lightweight setups (folders + Markdown/YAML, synced via Dropbox or similar) that beat generic CMSes for small, specific sites and were quick to build—especially now with AI-assisted coding.
  • Others stress these work only with a willing developer in the loop and narrow requirements; most orgs with non‑dev staff and richer workflows are better off with a mature, managed CMS.

WordPress and the CMS ecosystem

  • WordPress is cited as evidence that CMS problems are largely solved for common cases: drafts, approvals, scaling, caching, non‑tech editing, even headless use.
  • Critics respond that for complex ecommerce, large catalogs, heavy localization, or intricate data models, WordPress becomes a plugin‑laden, fragile stack; at that point either specialized SaaS (Shopify, etc.) or fully custom systems may be superior.
  • High‑end enterprise CMSs (Sitecore, AEM, etc.) are noted as serving a different tier than WordPress or static/Git setups.

Bias, marketing, and ethics

  • Both the AI‑company post and the CMS‑vendor response are widely seen as marketing pieces, though some readers still find them technically insightful.
  • There is significant criticism of the CMS vendor publicly naming the former customer and individual from the original story; some view this as discourteous or even a potential confidentiality/privacy issue, others as fair response to public criticism.

AI-written style and trust in content

  • Multiple commenters feel the CMS article “reads like LLM output” (short dramatic sentences, certain headline patterns); others strongly disagree and see it as obviously human.
  • The author insists they wrote it by hand, acknowledging their style may have been influenced by heavy AI usage.
  • This sparks a broader concern: informal “LLM radar” is fallible, and casual AI accusations risk becoming the new generic “shill” accusation in online debates.

Kids Rarely Read Whole Books Anymore. Even in English Class

Access to the article

  • Some readers note they couldn’t even read the piece due to the paywall; others share an archive link.
  • This is framed as ironic in a discussion about literacy and reading.

School reading, enjoyment, and book choice

  • Many recall rarely reading assigned novels fully, even years ago; they used skimming, summaries, or friends’ explanations.
  • Several say forced reading, especially “boring” or archaic classics, permanently damaged their enjoyment of fiction and poetry.
  • Others describe becoming avid readers when allowed to choose their own genres (fantasy, adventure, tie-ins to games/movies) and when incentives were positive (library prizes), not coercive.
  • There’s criticism that canonical texts are often poorly matched to kids’ interests or language level; some argue such books might be better as one option among many, after children already like reading.

Should reading be forced? Purpose of English class

  • One side: without some enforced reading, many kids will never reach the literacy level needed to later discover that reading can be fun.
  • Counterpoint: we already “force” them and many remain functionally weak readers; the approach, not the existence of requirements, is the problem.
  • Debated purposes of English:
    • basic reading/writing fluency;
    • analysis of texts and media, detection of manipulation/propaganda;
    • shared cultural references and canon.
  • Some see school largely as childcare and social-normalization; others argue it’s one of society’s most valuable investments and that people vastly overestimate their native-language competence, so “testing out” isn’t realistic for most.

Literacy, distraction, and decline

  • Several commenters claim many children (and adults) can’t read beyond ~6th-grade level or even understand words in test questions, blaming culture and digital distractions.
  • Others note that modern devices make sustained reading hard due to constant notifications and interruptions.

Cursive, analog clocks, and skills mix

  • Some lament kids’ inability to read cursive or analog clocks; others say cursive is obsolete and rarely used, and analog clocks are mostly decorative.
  • There’s an anecdote about an expensive program using rapid analog-clock reading as cognitive training, met with both praise and skepticism.

What “counts” as reading: books vs other media

  • Some argue whole books build a competitive edge in jobs requiring nuance and sustained thought.
  • Others note that many teens read long-form online (fanfic, web serials) and that medium and canon matter less than volume and engagement.
  • Debate over quality: critics say much online/YA content is shallow compared to “real novels”; defenders respond that lowbrow entertainment has always existed, people develop taste over time, and enjoyment is a legitimate goal.
  • One line of thought: literature may no longer be the central cultural medium; another: books remain uniquely information-dense, imagination-driven, and less passive than short digital “content.”

Curriculum design and modernizing texts

  • Some suggest shorter, lighter, or contemporary works would better hook most students than dense classics like The Scarlet Letter or My Ántonia, which many recall as “objectively dull.”
  • Others worry that “updating” literature can slide into pandering or “brainrot” adaptations, but agree that media literacy should track dominant forms (not just printed novels).
  • Underneath is a tension: is the goal to foster any love of reading, to transmit a specific canon, or to build analytical skill regardless of medium?

Linux Sandboxes and Fil-C

Fil-C’s goals vs existing tools (ASan, sudo-like tools)

  • Fil-C aims to make existing C/C++ binaries memory-safe at runtime by turning memory errors into panics, not silent corruption or RCE.
  • Commenters stress ASan is only a bug-finding tool with false negatives and is explicitly not for production; attackers can still get RCE under ASan.
  • Fil-C is seen as promising for hardening legacy tools like sudo/polkit and for testing codebases to surface subtle bugs. A Nix integration exists, though not yet upstreamed.

Memory safety vs sandboxing (seccomp, WASM, VMs, Landlock)

  • Thread distinguishes “memory safety” (constraining what memory a bug can touch) from “sandboxing” (constraining what the compromised process can do).
  • Seccomp is considered powerful but painful: architecture/libc sensitive, hard to compose across libraries, and blind to paths. Best suited for application-specific policies, not language-level defaults.
  • MicroVMs and full VMs are praised as strong sandboxes but often too heavyweight for per-tab/per-connection isolation; OS-level sandboxing plus privilege separation is usually preferred.
  • WASM is widely characterized as sandboxing, not memory safety: bugs still let attackers control all memory inside the guest. Some point out partial safety (protected stack, typed indirect calls) and future WasmGC, but consensus is that it mainly protects the host.
  • Landlock is briefly mentioned; one commenter dismisses it, another notes it works fine with Fil-C but isn’t used in the example ecosystems.

Rust, Go, and Fil-C tradeoffs

  • Rust enforces most safety statically, with unsafe as an explicit escape hatch; Fil-C enforces safety dynamically with runtime checks and a GC, trading some performance.
  • Fil-C is pitched as ideal for existing C where rewrites are infeasible; Rust is favored for new codebases that can accept its type system and borrow checker.
  • Some argue that combining Fil-C with OS sandboxing could allow more “unfettered” system access than WASM-based sandboxes.

Data races, capabilities, and trust

  • A long subthread debates Fil-C’s handling of torn pointer writes under data races, where a pointer’s numeric value and its capability may momentarily mismatch.
  • One side argues this can violate intuitive “pointer == object” reasoning and is weaker than JVM/Rust models; the other argues Fil-C’s formal memory-safety definition is capability-based and still prevents full “weird execution” and arbitrary memory control.
  • Beyond the technical dispute, some commenters are uneasy about perceived defensiveness and “big claims,” while others feel the criticism shades into FUD and that limitations are in fact documented.

Performance, scope, and ecosystem

  • Fil-C’s GC and checks can cause noticeable slowdowns in some workloads; it targets “non–perf-critical but security-critical” C/C++ more than high-performance new systems programming.
  • Shared-memory designs (e.g., certain web servers and databases) are noted as challenging for Fil-C today.
  • Several commenters see Fil-C as complementary to Rust/Go and to traditional sandboxes, not a universal replacement.

Miscellaneous

  • Some discussion about naming the project after its creator; most consider it harmless or even convenient for namespacing.
  • Minor nitpicking over use of “orthogonal” for memory safety vs sandboxing, with agreement they’re distinct but not fully unrelated.

An off-grid, flat-packable washing machine

Modern machines vs hand-crank design

  • Several commenters say current “smart” washers are overcomplicated, restrictive, and condescending (locked doors, forced cycles, auto-draining that prevents soaking, lid locks, no true manual mode).
  • Some express genuine interest in replacing finicky electronic washers with something simple, reliable, and user-controllable, even in developed countries.
  • Others argue serving a family with a hand-crank machine would be a “nightmare” and that people are romanticizing hard manual labor.

Laundry practice debates

  • Long subthread on whether separating loads (by fabric, soil level, colors) still matters.
    • Some say they’ve stopped separating and see no real difference, citing improved dyes.
    • Others note fabric wear, cleaning quality, and unbalanced loads as reasons to separate.
  • Confusion around “eco” and “auto” programs:
    • Some claim “eco” isn’t actually the most efficient in practice; others cite EU rules saying the default program must be most efficient by test definition.
    • Manuals with detailed water/energy tables are more common in Europe than North America.

Design tradeoffs: cranking, rinsing, spin

  • Concerns that this washer lacks proper centrifuging, so clothes will be wetter and drying slower.
  • Rinsing is unclear from the article; some assume users will manually refill with clean water and/or hand-rinse.
  • Multiple people suggest pedals (bike-style) would be ergonomically superior to arm cranking.

Appropriateness for off-grid and Global South

  • Some highlight patchy or unreliable electricity: a device that works by hand but can be motorized when power is available is seen as valuable.
  • Others point out very cheap, extremely simple electric washers already exist in many poorer countries and may be cheaper and higher capacity.
  • Water access is flagged as a bigger constraint than agitation: often it’s easier to carry clothes to water than water to the home, making a bulky machine less practical.
  • Cost and local manufacturability are questioned; metalwork can be surprisingly expensive in some regions. Open-sourcing the design is seen as potentially transformative.

“Just use a tub” vs contraption

  • A detailed thread argues that time, chemistry, and soaking (e.g., tubs, plungers, ash/alkali detergents) matter more than mechanical cleverness; the device may be overcomplicating a solvable problem.
  • Others counter with examples (e.g., cookstoves) where low-tech “improvements” must consider health tradeoffs and real-world living conditions; there’s tension over armchair theorizing vs lived poverty.

Repairability and anti-consumer design

  • Commenters praise the metal, repairable construction compared to sealed-drum, glue-welded commercial machines that are deliberately hard to fix.
  • Examples are given of lid safety switches and sealed tubs turning cheap component failures into near–full replacement costs.

Market, impact, and “fairer future” framing

  • Some note the project’s slow scale-up (hundreds of units over years) and question its real impact relative to the rhetoric (“fairer future”).
  • Others think it’s well-intentioned but misdirected: Westerners designing for “far away” problems instead of enabling local solutions or focusing on their own societies.
  • A few see niche uses in rich countries (off-grid cabins, campers, preppers) rather than among the very poorest.

Some surprising things about DuckDuckGo

Meta: nature of the post and HN norms

  • Some called the article a “shill/fluff piece” because it’s written by the CEO, others replied that company blogs by founders are normal HN content if interesting.
  • It emerged the CEO didn’t submit it to HN, and moderators reminded people not to attack submitters and to follow site guidelines.

Censorship, Bing dependency, and torrent results

  • A major thread questioned the “we don’t censor results” claim.
  • Critics argue DDG effectively serves censored results because upstream providers (especially Bing) already censor, and from a user’s perspective that distinction is meaningless.
  • Specific tests around DMCA‑sensitive queries (pirated media, torrent domains) suggest some sites and hashes don’t appear, while Russian torrent sites do, fueling accusations of selective or inherited censorship.
  • DDG’s responses:
    • They don’t remove results themselves, monitor upstream removals, and can add back missing results.
    • They comply with DMCA. Some argue that is censorship by any practical definition.

Search quality, speed, and captchas

  • Several long‑time users say DDG results have declined recently, especially for obscure or “literal” queries, driving them back to Google, Bing, Brave, Yandex, or SearXNG.
  • Others report the opposite: DDG is solid for literal/technical searches, using !g only when needed.
  • Complaints about over‑aggressive autocorrect and query rewriting: users want empty or sparse result pages rather than “made‑up” queries.
  • Some users in Asia report DDG feeling noticeably slower than competitors.
  • Captcha prompts on DDG (and Google) are a pain point; people speculate they’re tied to VPNs, privacy tools, or fingerprinting defenses.

AI, duck.ai, and “no AI”

  • Some came to DDG explicitly to escape Google’s AI‑heavy UX; they like being able to disable DDG’s AI features or use noai.duckduckgo.com.
  • duck.ai is praised as a simple, privacy‑oriented way to try multiple LLMs, though the interface and model‑switching UX draw criticism.
  • Others think DDG will become irrelevant if Google/Bing keep AI results proprietary and DDG can’t differentiate in AI.
  • Competing AI search experiences from Brave and Kagi are mentioned, with mixed views on quality and the broader “AI‑everywhere” trend.

Bangs and power‑user tooling

  • Bangs remain a widely loved differentiator, especially !g, !w, and site‑specific shortcuts.
  • Multiple users say bang maintenance feels neglected: broken entries, ignored submission forms, no changelog or public issue tracker.
  • DDG staff cite overwhelming spam and limited team capacity; they say submissions aren’t ignored but are de‑prioritized and tooling needs improvement.
  • Several users replicate/extend bangs via:
    • Browser keyword search/bookmarks (especially in Firefox).
    • Self‑hosted frontends like SearXNG.
    • Custom “search routers” and launchers (e.g., Alfred workflows).
  • Kagi’s similar “bangs/snaps” system is referenced as inspiration, including an open‑source shortcode list (though too large for some client‑side uses).

Privacy, tracking, and business model skepticism

  • Some distrust DDG’s privacy claims, pointing to:
    • Lack (in their view) of deep, open third‑party code audits.
    • Click‑tracking URLs that require tools like Privacy Badger/AdGuard to strip.
    • Heavy employee count vs. unclear revenue.
  • DDG counters with:
    • A U.S. market share around 3%, implying substantial ad revenue even with lower monetization than Google.
    • A formal review by an advertising industry body that accepted their privacy claims.
    • Emphasis that ads are anonymous, optional, and not based on personal profiles; they sell ad slots by keyword, not by user identity.
  • Some users remain unconvinced, preferring paid options like Kagi or other engines (Qwant, Brave, Ecosia) to avoid ad ecosystems altogether.

Interfaces, APIs, and missing features

  • Lightweight interfaces (html.duckduckgo.com, lite.duckduckgo.com) get strong praise for speed and lack of clutter/JS, though one user notes some endpoints block curl requests (serving results only via iframe) and wants a proper paid API instead.
  • DDG says they can’t easily offer a general search API due to upstream licensing (e.g., Bing); others point to Brave and various search APIs (SERP, Exa, Tavily) filling that niche.
  • Users request:
    • Reverse image search (image‑as‑query, like Google Images’ camera icon).
    • Better dark mode options on the HTML interface.
    • Bookmark/password sync independent of the DDG browser, for people using mixed browser stacks.

Reputation, alternatives, and miscellany

  • Opinions diverge sharply: some use DDG 80%+ of the time and celebrate its longevity and privacy mission; others say Brave, Kagi, Yandex, or even Yahoo now outperform Google and DDG.
  • There’s appreciation for extras like duck.com, no‑AI endpoints, Email Protection (tracker stripping), and support for Perl and open‑source orgs.
  • Minor topics:
    • The long, nursery‑rhyme name is still seen as a branding handicap despite the duck.com shortcut.
    • Frustration with search engines (not just DDG) rewriting queries (“did you mean…”) is widespread.
    • A few comments branch into DDG’s tech stack (Perl), hiring experiences, and remote‑work/timezone logistics mentioned on DDG’s “How We Work” page.

Recovering Anthony Bourdain's Li.st's

Archive & Missing Pieces

  • Commenters appreciate the successful recovery of almost all of Bourdain’s li.st entries and the effort to “rescue from the sands of time.”
  • One list remains missing (“David Bowie Related”), though an image preview exists on Reddit, prompting talk of a “community challenge” to reconstruct it.
  • Several people hope the associated images can also be recovered, speculating that they may still exist on cloud infrastructure or in old browser caches.
  • Another independent attempt to mine Common Crawl for the same content is acknowledged, with mutual credit added after the fact.

Site Design & Accessibility

  • Some readers find the light-on-light design, dotted background, and font choices hard to read, especially for older eyes, and argue that passing automated contrast checks isn’t sufficient.
  • Others report that dark-mode browser extensions are actually what make the text illegible. With extensions disabled, they find the site fine.
  • The author defends the current palette, suggests reader mode if needed, and asks for more concrete feedback.

Bourdain’s Appeal and Legacy

  • Fans describe him as kind, curious, open-minded, profane, and emotionally candid, modeling a non-cringey masculinity and making food/travel/culture feel accessible.
  • His mix of literary references, blue-collar kitchen background, and openness about addiction and depression resonated deeply; his suicide hit many hard.
  • Some see him as a cultural touchstone or even generational figure; others think that level of hero worship conflicts with his own anti-idolatry stance.

Critiques of Bourdain and His Shows

  • Skeptics emphasize he was an entertainer with a carefully curated persona; viewers never really “knew” him.
  • Several cite serious moral failings (e.g., paying to silence abuse allegations linked to his partner, abandoning family obligations) as disqualifying him as a role model.
  • Others report episodes where he misrepresented their hometowns, relied on dubious local “progressives,” or romanticized poverty, leading them to question the authenticity of other episodes.
  • There is discomfort with a rich Western man “holding court” abroad, sometimes appearing to speak as an expert on places he barely visited.

Tourism, Travel, and Ethics

  • A long subthread uses Bourdain as a jumping-off point to critique “travel is my passion” culture: affluent tourists consuming food and aesthetics in poorer countries, then returning home feeling enlightened.
  • Some view this as shallow “cultural primitivism”; others counter that tourism provides crucial income, and if it were net harmful, governments would restrict it.
  • Another faction notes that policy is set at national level while burdens (Airbnb conversions, rising rents, crowding) hit specific neighborhoods, fueling anti-tourist protests in places like Barcelona, Hawaii, and parts of Latin America.
  • Debate unfolds over whether anti-tourist sentiment is mostly economic, cultural, or a socially acceptable outlet for xenophobia, with examples of graffiti, “expat vs immigrant” double standards, and resentment of foreigners who never learn the local language.
  • Some argue that the problem is not travel per se but mass imitation of a certain “Bourdain-esque” aesthetic by people who lack his experience or depth.

Miscellaneous Notes

  • Multiple commenters value the lists for practical recommendations (bars, hotels, books, films like Tampopo).
  • There is a side discussion about ultra-expensive Kramer knives, whether they are “real tools” or status objects, and Bourdain’s relationship to them.
  • Fans express ongoing grief and note how often his old content still runs on TV without explicit acknowledgment that he is gone.

Workday project at Washington University hits $266M

Cost and scale of the Workday project

  • Commenters are shocked by $266M over ~7 years, translating (by one rough calc) to ~$7,600 per staff member, or ~$2k/year per student, which many call “insane” and comparable to hiring armies of secretaries.
  • Others argue ~$38M/year for HR/finance/CRM in a 20k+ employee / 16k student organization plus a major medical network is high but not obviously irrational, given complexity and risk.
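The back-of-envelope math behind those figures (head counts are the commenters’ rough assumptions, not audited numbers; ~35k staff is what the $7,600 figure implies once the medical network is included):

```shell
awk 'BEGIN {
  total = 266e6; years = 7
  staff    = 35000     # assumed: employees incl. medical network
  students = 16000
  printf "per year:       $%.0fM\n", total / years / 1e6
  printf "per staff:      $%.0f\n",  total / staff
  printf "per student-yr: $%.0f\n",  total / years / students
}'
```

This reproduces the thread’s ~$38M/year, ~$7,600 per staff member, and ~$2k per student per year.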

Experiences with Workday and other enterprise tools

  • Many describe Workday as slow, confusing, unreliable, and the “worst software” they’ve used; people keep data outside of it because it loses input.
  • ServiceNow is also heavily criticized: bad navigation, broken back/forward, slow page loads, weird URL schemes, confusing UI.
  • Some counter that implementations vary; misconfiguration and over‑customization by in‑house teams can make any such platform miserable. A few note that compared to older tools (mainframes, BMC Remedy, 3270 UIs) these systems were once a huge step up.

Why buy Workday instead of building in‑house?

  • Defenders emphasize: international staff, visiting faculty, student information systems, multi‑role entities, integrations across many semi‑independent units, compliance (including HIPAA in the medical network), and long‑term support.
  • They argue universities lack the capacity to build, maintain, secure, and evolve systems of this scope, and that previous “homegrown” systems often became brittle, underfunded legacy code only a few people understood.

Calls for university-built or collaborative systems

  • Several ask why universities with strong CS departments don’t jointly build a modular open system for academia, noting past successes (e.g., older registration systems, classic university software).
  • Others respond that governance, maintenance, politics, data access/privacy, and differing needs across institutions make this very hard; students and PhDs shouldn’t be diverted from research into running core admin IT.

Administration, incentives, and consultants

  • Multiple comments blame administrative bloat, misaligned incentives, and a “consulting parasite” ecosystem (Workday/Palantir/ServiceNow/Accenture, etc.) that sells complexity, change requests, and overruns.
  • Some suspect “kickbacks” or at least opaque vendor–leadership relationships; others see mostly underestimated complexity and organizational dysfunction rather than outright corruption.

Want to sway an election? Here’s how much fake online accounts cost

Concerns about Hungary and Russian influence

  • Commenters flag the 2026 Hungarian election as a key test case, describing the ruling party as closely aligned with Russia and allegedly skirting Facebook’s political-ad restrictions.
  • Anecdotes and demographic data point to educated, pro‑EU Hungarians emigrating, feeding concern that the electorate is becoming less pro‑EU.

State of social media platforms

  • Several comments describe Twitter/X as saturated with racist and Nazi content, especially in default or “For You” feeds, with others reporting clean feeds and blaming user choices and algorithms.
  • There’s debate over whether Twitter is uniquely bad or just similar to TikTok, Instagram, Facebook, etc., all seen as “algorithmic content addiction generators.”
  • Some argue algorithms are designed to radicalize and reward outrage; others say they merely reflect majority tastes.

Effectiveness and mechanics of fake accounts

  • People question whether cheap fake accounts actually change votes. One side cites Cambridge Analytica, Team Jorge, and other bot networks as evidence that microtargeted manipulation works; another, drawing on digital‑ad experience, is skeptical they’re very effective.
  • Several stress that account price alone is misleading: quality, geography, spam detection, IP/proxy infrastructure, human labor, and platform bot‑detection all matter.
  • Cheap foreign accounts may still be useful for mass upvoting or astroturfing, but are less effective when not in the same cohort as the target audience.

Democracy, manipulation, and money

  • Broader worries: democracy’s structural vulnerabilities, social media as the “coup de grâce” after mass media, and a drift toward oligarchy or techno‑feudalism.
  • Debate over whether restricting manipulation tools (e.g., by making accounts costly) is good—likened by some to limiting access to bioweapons, by others to entrenching a “monopoly on manipulation.”
  • Discussion of campaign finance and Citizens United: some argue money and elite preferences already dominate policy; others note diminishing returns to ad spend.

Identity, regulation, and research

  • Strong suspicion that phone‑number requirements are more about tying online and real‑world identities than preventing abuse; this criticism extends even to privacy‑branded apps and AI services.
  • A long thread speculates that “think of the children” age‑gating and ID pushes are partly motivated by fear of AI‑driven bot swarms overwhelming democratic discourse, making human verification politically inevitable despite civil‑liberties concerns.
  • Commenters note that misinformation research was heavily funded after Cambridge Analytica but now faces U.S. political backlash, with grants cancelled and visas reportedly denied to fact‑checkers and moderators.
  • Clarification is offered that Cambridge Analytica was not a formal spin‑out of the University of Cambridge, despite name confusion.

Countermeasures and future outlook

  • Proposals include minimum per‑account fees, stronger identity verification, and heavy regulation or even blocking of major social platforms as a sovereignty issue.
  • Others argue these measures risk overreach, centralizing control, or simply won’t scale against future AI swarms.
  • A recurring pessimistic theme is that large, open platforms may be doomed to become “dark forests” dominated by bots, with only small, tightly moderated communities remaining relatively trustworthy.

Why Twilio Segment moved from microservices back to a monolith

Article timing & context

  • Many point out the post is from 2018 and argue its lessons predate today’s tooling, patterns, and AI assistance.
  • Others say the age is still relevant: it documents a common failure mode of “microservices done badly,” not an obsolete technical detail.

What “microservices” actually are

  • Strong debate over definitions: some argue microservices are about independently deployable services aligned to business capabilities, not about infra, HTTP, or multiple machines.
  • Others push back on authority-based definitions and note the term’s roots in SOA (service-oriented architecture), saying most real systems don’t meet the “pure” ideal anyway.
  • Several stress that if services can’t be deployed independently, you’ve built a “distributed monolith,” not microservices.

Shared libraries, coupling & distributed monoliths

  • The Segment case where a shared library forced redeploys across ~140 services is widely criticized as tight coupling that defeats microservice benefits.
  • Thread dives into nuances:
    • Some say any shared dependency with lockstep upgrades = distributed monolith.
    • Others argue shared deps are fine if consumers can pin versions and upgrades are backward compatible.
    • There’s a long subthread on protobuf/JSON schemas and backward compatibility vs version sprawl and tech debt.

Org structure, discipline & culture

  • Many see the real root cause in organizational issues: weak technical leadership, lack of “someone who can say no,” and Conway’s-law/Peter-principle/Parkinson’s-law effects.
  • Argument that microservices are primarily an org-scaling tool; using them with a tiny team (3 people, 140 services) is self-sabotage.

Monolith vs microservices trade-offs

  • Multiple commenters emphasize both patterns can work or fail; dogmatism is the real problem.
  • Monolith advantages cited: easier refactoring, simpler wide-scale upgrades (e.g., security patches), fewer distributed-systems failure modes, better end-to-end understanding.
  • Microservice advantages cited: team decoupling, isolated deployments, clearer ownership, better fit for many heterogeneous integrations.

Testing, tooling & repos

  • A lot of the Segment story is reframed as test-quality and repo-layout problems rather than architecture per se.
  • Several suggest monorepos with good tooling (Bazel, dependency-based test selection) can keep services independent while solving many of the pain points they hit.
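The “dependency-based test selection” idea is roughly this (a sketch with hypothetical target labels, not runnable outside a Bazel workspace): use Bazel’s reverse-dependency query to find only the test targets affected by a changed shared library, instead of rebuilding and retesting every service.

```shell
# Hypothetical layout: shared library at //libs/analytics, services under //services/...
# rdeps(...) finds everything that transitively depends on the changed library;
# kind(".*_test", ...) narrows that set to test targets. Only those are run.
bazel query 'kind(".*_test", rdeps(//..., //libs/analytics))' \
  | xargs bazel test
```

This is how a monorepo can keep per-service independence in practice: the blast radius of a shared-library change is computed from the build graph rather than assumed to be “all ~140 services.”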