Hacker News, Distilled

AI-powered summaries of selected HN discussions.

Page 230 of 356

lsr: ls with io_uring

Performance & Benchmark Results

  • lsr (Zig + io_uring) is reported to be ~70% faster than GNU ls on large directories while issuing ~35× fewer syscalls.
  • strace comparisons (in a “calls” directory):
    • ls: 198 syscalls,
    • eza: 476,
    • lsr: 33.
  • Some argue syscall reduction is secondary to wall-clock time; others emphasize that fewer syscalls reduce kernel overhead and contention.
  • Another view: io_uring still does the same work in the kernel; fewer syscalls don’t necessarily mean proportionally less kernel work.

Why Core Tools Don’t All Use io_uring

  • Portability: classic tools target many POSIX-like systems; io_uring is Linux-only and relatively recent.
  • Stability & churn: io_uring’s API keeps expanding, and many prefer to wait to see if it “sticks.”
  • Programming model: effective use generally implies async/event-driven designs; many tools are written in simple synchronous C and would need major refactors.
  • Tool authors may not want to work in C or go through GNU coreutils’ contribution process, so they build separate replacements instead.

Security, Sandboxing & Adoption

  • Multiple participants describe io_uring as a “security nightmare,” citing a long series of kernel vulnerabilities, sandbox escapes, and container escapes.
  • Concerns include: direct user–kernel shared memory, rapidly growing surface area, ability to bypass syscall-oriented security mechanisms (e.g., seccomp), and poorer auditability of batched operations.
  • As a result, many environments (notably some container runtimes and large operators) reportedly disable or restrict io_uring.

Libc, Polyfills & Fallbacks

  • Idea raised: make libc transparently implement POSIX I/O atop io_uring. Pushback:
    • Emulating sync on async often adds extra syscalls (e.g., futex, wakeups) and complexity.
    • If calls are serialized anyway, you lose most of io_uring’s benefits.
  • Some discuss user-space “io_uring emulators” using worker threads and ringbuffers so apps can keep the same API on kernels without io_uring. Others note existing runtimes that fall back to epoll.
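
A minimal sketch of that worker-thread fallback shape in Python (queues standing in for the submission/completion rings; an assumed design, not any real emulator's API):

```python
import os
import queue
import threading

sq = queue.Queue()   # submission "ring"
cq = queue.Queue()   # completion "ring"

def worker() -> None:
    # Blocking syscalls happen here, off the submitter's thread.
    while True:
        tag, fd, length, offset = sq.get()
        cq.put((tag, os.pread(fd, length, offset)))

threading.Thread(target=worker, daemon=True).start()

fd = os.open("/etc/hostname", os.O_RDONLY)
sq.put(("req-1", fd, 64, 0))   # submit a read
print(cq.get())                # reap the completion
```

The app keeps the same submit/complete shape either way; the thread's caveat is that a caller who serializes on each completion gains little over plain blocking I/O.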

Filesystems, NFS & Real Workloads

  • Interest in behavior against NFS or flaky networks: io_uring doesn’t inherently fix blocking semantics or NFS’s “local disk” illusion.
  • Debate over NFS’s design: some see “pretending the network is a disk” as fundamentally flawed; others note all reliability is built from unreliable components anyway.
  • File system choice matters: ext4 can be slow with huge directories; XFS reportedly handles large dirs better. Some users see ls/du taking minutes on millions of files.

Feature Tradeoffs & Ecosystem

  • Users like eza’s rich icons, colors, and type detection; lsr is praised for speed but seen as visually simpler.
  • Suggestions to implement LS_COLORS/dircolors and to build similar io_uring-based versions of cat, find, grep, etc.; a mention that tools like bat make far more syscalls than cat.
  • Note that io_uring currently lacks getdents; main benefit for ls-style tools is bulk stat (especially ls -l).

Zig, Tangled & Miscellaneous

  • Some discussion of Zig’s allocator model (passing an allocator interface around; different backends like smp_allocator vs page_allocator).
  • Tangled (the hosting platform, built on atproto/Bluesky) draws interest, but some question whether atproto is truly decentralized given Bluesky-centric auth.

“Dynamic programming” is not referring to “computer programming”

Multiple meanings and historical origin

  • Commenters note that “dynamic programming” means different things in:
    • Competitive programming / LeetCode (table-based optimization for overlapping subproblems).
    • Reinforcement learning and control theory (Bellman equations, Hamilton–Jacobi–Bellman, dynamical systems).
  • Several point out that both strands trace back to Bellman’s work.
  • The article’s story—“dynamic” chosen for its positive, impressive sound and “programming” meaning planning/scheduling—matches others’ recollections, though some cite disputes about details of the anecdote.
  • “Programming” is linked to “linear/integer/constraint programming” and to scheduling in operations research, not to writing code.

What dynamic programming is (according to the thread)

  • Many comments emphasize: DP is fundamentally about decomposing an optimization problem into overlapping subproblems with optimal substructure, not about any particular implementation.
  • In classical math/OR/control, DP is framed as backward induction on time-dependent systems, not recursion+arrays.
  • Several argue that in “true” DP, the table or value function and Bellman-type recurrences are the core, and memoized recursion is just one computational technique.
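
For reference, the Bellman recurrence behind that framing, in its standard discounted form (a textbook statement, not quoted from the thread):

```latex
V(s) = \max_{a}\Big[ R(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V(s') \Big]
```

Backward induction evaluates V from the final time step toward the first; memoized recursion computes the same values top-down on demand.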

Memoization, caching, recursion debate

  • A large subthread argues over whether DP is “just cached recursion.”
    • One camp: practical DP = recursion + memoization; that’s what most CS learners see.
    • Other camp: this is reductive; DP is a problem-structuring method, caching is only one way to exploit it.
  • Distinctions are drawn between:
    • Memoization vs general caching (determinism, scope, global shared state).
    • Top-down (memoized recursion) vs bottom-up tabulation and “minimal-memory memoization” (both sketched after this list).
  • Some insist that conflating memoization with generic caching leads to bad designs and bugs.
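
A minimal sketch of both styles, using Fibonacci purely as a toy overlapping-subproblem (the thread's real examples are richer):

```python
from functools import lru_cache

# Top-down: memoized recursion caches each overlapping subproblem.
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up: tabulation fills values in dependency order; keeping only
# the last two entries is the "minimal-memory" variant.
def fib_tab(n: int) -> int:
    prev, cur = 0, 1
    for _ in range(n):
        prev, cur = cur, prev + cur
    return prev

assert fib_memo(40) == fib_tab(40) == 102_334_155
```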

Applications and contest culture

  • Multiple IOI/ICPC anecdotes:
    • Early contestants often failed problems by using plain recursion, learning only later that DP was needed.
    • DP knowledge evolved from medal-winning edge to “table stakes” as resources improved.
  • Examples mentioned: SQL join-order optimization (e.g., DuckDB), Emacs redisplay, shortest/longest paths, edit distance, BLAST approximating Smith–Waterman, routing (Bellman–Ford), kinship/relatedness computations, query optimizers.

Naming, marketing, and confusion

  • Many dislike the term: “dynamic” feels vacuous, “programming” misleading; some prefer terms like “tabulation,” “optimization,” or “bottom-up memoization.”
  • Others broaden the criticism to “linear programming,” “extreme programming,” “wave function collapse,” “extreme learning machines,” etc., as examples of marketing-oriented or opaque names.
  • Several say such names made the concept seem harder than it is and delayed their understanding; at least one commenter still feels unclear even after reading the article and discussion.

Psilocybin decreases depression and anxiety in cancer patients (2016)

Study design and placebo/blinding challenges

  • Commenters note psychedelic trials struggle with blinding: most participants can tell if they received an active dose.
  • Strategies mentioned: low vs high dose (“micro vs macro”), active placebos like niacin or strong antihistamines to mimic bodily sensations, or comparing psilocybin to other hallucinogens.
  • Some argue mood interventions are inherently hard to blind and third‑party observers (family, monitors) may be better outcome raters.

Role of preparation, therapy, and music/setting

  • Multiple anecdotes stress that the benefit came from a structured protocol: screening for treatment‑resistant depression, extensive prep sessions, and dosing under supervision of trained therapists/trip sitters.
  • Guided soundtracks and carefully chosen music are described either as essential for steering thoughts and avoiding “loops,” or as something that interferes with the timelessness of the experience.
  • “Set and setting” (mindset, environment, sitter) are repeatedly emphasized as critical.

Reported benefits

  • Several people claim life‑changing relief from severe anxiety/depression, including in cancer contexts and long‑term treatment‑resistant cases; some report lasting improvements in empathy, sobriety, or sense of meaning.
  • Others describe psilocybin as enabling perspective shifts or “ego death” that break maladaptive patterns.

Adverse experiences and risks

  • Many counter‑anecdotes: onset of panic attacks, derealization, suicidal ideation, or psychotic‑like states after otherwise “normal” trips, sometimes lasting months or longer.
  • Concerns are raised about triggering latent psychosis or schizophrenia, especially with family history; some argue the risk is non‑trivial, others insist it is rare but real.
  • Debate over physical toxicity: one side characterizes euphoria as mild poisoning with potential renal harm; others demand evidence and point out misidentified or unclear mushroom species in cited cases.

Evidence quality, effect sizes, and placebo

  • Critical commenters highlight: small, self‑selected samples; many prior hallucinogen users; weak blinding; uncorrected multiple outcomes; and crossover designs.
  • Depression scores improve in both active and placebo arms, attributed to strong placebo effects, regression to the mean, and “turbo placebo” from a mystical‑seeming intervention.
  • Some psychologists argue psilocybin may help some individuals but current data do not yet justify broad clinical adoption and hype is outpacing evidence.

Dosing and pharmacology discussion

  • The trial’s 30 mg/70 kg dose is informally equated to roughly 2–5 g of dried Psilocybe cubensis, with wide variability by species, strain, and cultivation.
  • Debate over mechanisms: acute 5‑HT2A activation vs. longer‑term receptor density changes, anti‑inflammatory effects, and how this compares to chronic SSRIs or microdosing (with possible heart‑valve risks at sustained exposure).

Policy, economics, and regulation

  • Several threads ask why psilocybin remains Schedule I while amphetamines are widely prescribed.
  • Explanations raised: 1970s drug‑war politics targeting counterculture and minorities; path‑dependence from historical medical use of stimulants; stigma from recreational use; and limited pharma incentives for an infrequent‑use, easily home‑grown drug.
  • Some favor full legalization but criticize advocacy that minimizes risks, likening it to earlier marijuana debates.

Alternatives and broader mental health context

  • One commenter offers mindfulness, moral/behavioral change, reduced sensory overstimulation, and charitable acts as a non‑drug success story for depression, framed in Buddhist terms.
  • Others push back that “just be more moral” is not meaningful treatment advice, though cultivating loving‑kindness and reducing compulsive desire is seen by some as beneficial.

CP/M creator Gary Kildall's memoirs released as free download

Legacy and Personality of Gary Kildall

  • Many commenters express admiration for Kildall as an inventor, educator, and visionary who viewed computers as learning tools rather than profit engines.
  • Several contrast him with more aggressive business figures in tech, suggesting his distaste for business and marketing hurt his commercial success but made him morally preferable.
  • There’s regret that he isn’t as widely recognized as other “famous computer people,” despite foundational contributions (CP/M, BIOS abstraction, early GUIs like GEM).

CP/M vs MS-DOS and the IBM PC Deal

  • Repeated debate over why CP/M-86 lost to PC‑DOS/MS‑DOS:
    • One side emphasizes CP/M-86’s much higher IBM-set retail price and late delivery, making DOS a “no-brainer.”
    • Others cite an oral history from a DRI executive claiming IBM promised equal footing on price but then undercut CP/M-86 drastically, which Kildall later described as “the day innocence was gone.”
  • Disagreement over who set CP/M-86 pricing: some say IBM simply passed through higher royalty costs; others say DRI misplayed negotiations.
  • Discussion of Tim Paterson’s QDOS/86‑DOS as a CP/M-like stopgap IBM could ship quickly, later adapted into PC‑DOS/MS‑DOS. Timing (licensing vs purchase) is disputed but generally agreed to be very fast.

Gates, Jobs, Elites, and Nepotism

  • Mixed views on Gates: acknowledged as a highly talented programmer and early software entrepreneur, but also portrayed as intensely commercial and sometimes ruthless.
  • Long thread on whether his family connections (especially his mother’s nonprofit board overlap with IBM leadership) materially influenced IBM’s choice of DOS; some see plausible cronyism, others think IBM’s technical and financial vetting dominated.
  • Jobs is compared as a product and taste-driven figure, with both praise (design) and criticism (fanless designs, treatment of early employees).

Memoirs Release, Redactions, and Alcoholism

  • Excitement about the free release, but disappointment that only early chapters are available and that the rest may be withheld for decades.
  • Some argue the family is right to omit personal and alcoholism-related material; others feel posthumous editing distorts the historical record and could have offered valuable cautionary lessons.
  • Speculation that omitted sections concern family conflicts, with recognition that memories and later narratives are often unreliable.

Technical and Historical Side Threads

  • Tangent on whether early BASICs were “compilers” or pure interpreters, with detailed back-and-forth on tokenization, parsing, and definitions of compilation.
  • Explanation that CP/M’s BIOS was a pluggable device-driver layer (not a ROM BIOS), enabling quick ports; admiration for how fast this could be implemented on 1970s hardware.
  • Mention of other DOS-like systems (FreeDOS, TurboDOS, MP/M) and how bundling and ecosystem effects made replacing MS‑DOS unattractive.

Media, Archives, and Nostalgia

  • Multiple pointers to “Computer Chronicles” episodes (especially the Kildall special) and Internet Archive collections, plus an EPUB conversion of the scanned memoir for better readability.
  • Nostalgic recollections of GEM, early Windows, and prepress/desktop-publishing workflows where multiple OSes briefly competed before Microsoft’s dominance solidified.

An unprecedented window into how diseases take hold years before symptoms appear

Study and “functional reserve”

  • Commenters link the Biobank results to long-known ideas of “functional” or “cognitive” reserve: organs and cognition can compensate for damage for years before symptoms appear.
  • Examples given: kidneys losing nephrons until reserve is exhausted; Alzheimer’s starting with mild forgetfulness decades before disability; HIV and SARS‑CoV‑2 effects being buffered by higher cognitive reserve.

Kidney function and eGFR anecdotes

  • A healthy, fit person failed kidney-donor screening due to chronically low-but-stable eGFR, which later improved slightly; others note eGFR is an estimate that fluctuates and declines roughly 1 point/year after 30.
  • Creatine supplementation and contrast dye from imaging are mentioned as possible confounders or harms.
  • Cystatin C and direct GFR tests are cited as more accurate when donation risk is evaluated.

What “functional reserve” actually is

  • Some argue it’s not a single thing: redundancies at many levels (two kidneys, many nephrons, vascular elasticity, etc.) add up to reserve.
  • One explanation: only a subset of glomeruli filter at any given time; reserve units activate when others fail.
  • Analogies are made to redundant cloud infrastructure; gradual failure only shows when enough components are lost.
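
A toy k-of-n redundancy model (hypothetical numbers, not from the thread) shows why loss stays invisible until the reserve is nearly spent:

```python
from math import comb

def p_system_up(n: int, k: int, p_unit: float) -> float:
    """k-of-n redundancy: the system functions while at least
    k of its n units work (each up with probability p_unit)."""
    return sum(comb(n, i) * p_unit**i * (1 - p_unit)**(n - i)
               for i in range(k, n + 1))

# Plenty of reserve (need 6 of 10 units): decay is nearly invisible.
print(p_system_up(10, 6, 0.9))  # ~0.998
# Reserve exhausted (need 9 of 10): the same unit decay shows abruptly.
print(p_system_up(10, 9, 0.9))  # ~0.736
```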

Preventive care, incentives, and inequality

  • Strong support for prevention as vastly cheaper than late treatment; used to argue for centralized systems like the NHS.
  • Others note perverse US incentives: prevention often out-of-pocket, treatment often insured.
  • Debate over whether “anyone can afford” prevention; critics highlight food deserts, unsafe neighborhoods, difficulty taking time off work.

Screening, diagnostics, and overtesting

  • Several warn that “more data” is not always better: many screenings (blood panels, imaging) produce false positives and harmful interventions.
  • PSA testing and the high background prevalence of prostate cancer illustrate the gap between overdiagnosis and meaningful disease.
  • Popular longevity frameworks and tests (e.g., coronary calcium scans, VO2 max, DEXA) are seen by some as mainly for “worried well” enthusiasts; often they don’t change recommended lifestyle actions.
  • CT scan radiation risk (one commenter cites “~5% of cancers”) is raised as a reason to avoid unnecessary imaging.

Self-healing, early detection, and Covid

  • Commenters stress that the body often clears early cancers and infections unnoticed; “too-early” detection can mislead statistics and trigger unhelpful care.
  • Some argue SARS‑CoV‑2’s long-term effects resemble known post-infection phenomena; others challenge blanket claims without specific evidence, noting huge publication volume alone proves little.

Why is AI so slow to spread?

How fast is AI actually spreading?

  • Several commenters argue AI adoption is very fast: ChatGPT hit hundreds of millions of users and “AI features” are being shoved into most products.
  • Others say that being embedded everywhere ≠ being meaningfully used; many users ignore AI buttons and just want reliable search or basic app functions.
  • Comparisons are made to PCs and the internet, with some saying LLMs have diffused into business talk much faster, but retention and real impact remain unclear.

Usefulness and productivity: mixed experiences

  • Some report major gains: automating metadata for video streaming, content summarization, internal search across tools, parsing files, generating boilerplate code, tests, and routine docs.
  • Others find AI slower than doing the task themselves, especially for integration troubleshooting, complex architectures, or specialized domains.
  • There’s a split: for some, AI is a “force multiplier”; for others, it adds a review-and-debug layer that cancels any benefit.

Reliability, hallucinations, and trust

  • Many comments focus on AI’s tendency to “bullshit”: wrong browser details, car torque specs, legal facts, sports trivia, or UI actions—with high confidence.
  • This unreliability is seen as disqualifying for law, medicine, safety‑critical code, and serious customer support.
  • Users want systems that say “I don’t know” instead of fabricating; current behavior undermines trust and slows adoption.

Business models, lock‑in, and inequality

  • Fears: big vendors underprice now, then hike prices once firms lay off staff and get dependent; AI amplifies existing corporate abuses and bias.
  • Others counter that switching providers and running local/open‑source models is possible, so moats are shallow.
  • Debate on inequality: some see AI as a huge divider (those with tools/skills vs. everyone else); others see potential leveling—cheap “AI lawyers/doctors” and Harvard‑like education access—but this is challenged because of errors and asymmetry (rich firms will also have better tools and prompts).

Data, context, and integration hurdles

  • A recurring theme: models lack organizational context. They don’t know legacy decisions, hallway conversations, or nuanced product strategy; encoding that is tedious.
  • Commenters call for a new “bridge layer” between corporate data lakes and AI, with proper access control, auditability, and UX for giving context.
  • Until then, many see AI as better for generic tasks than for deeply embedded, domain‑specific workflows.

Worker incentives, anxiety, and resistance

  • Non‑technical workers often see AI as a direct job threat, not a helper, especially where executives openly frame it as a way to cut headcount.
  • Some describe burnout and unrealistic expectations (“do double the work with AI”) without evidence of achievable productivity gains.
  • This produces quiet refusal or “sabotage” of AI initiatives, especially when people don’t share in the upside.

Developer workflows and coding agents

  • Enthusiasts: with clean architectures, good documentation, and carefully written “tasks,” LLMs can implement features plus tests; devs shift to specification and review.
  • Critics: that workflow is less fun, and on large, complex codebases AI often produces incoherent designs, subtle bugs, and wrong refactors—reviewing and fixing them is as hard as writing code.
  • Some see big gains for CRUD/front‑end/boilerplate; others say senior‑level engineering (design, invariants, performance) gets little benefit.

Hype, media narratives, and skepticism

  • Several comments criticize media like The Economist for assuming “AI is hundred‑dollar bills on the street” and blaming slow diffusion on inefficient workers or bureaucracy.
  • Others liken the atmosphere to crypto/NFTs: massive hype, weak evidence of broad, durable business value, and likely future disillusionment—though most expect AI to remain useful after any bubble pops.

Apple bans entire dev account, no reason given

Account termination & lack of explanation

  • The dev’s Apple account was terminated citing section 3.2(f) of the Apple Developer Program (ADP), but without a concrete, actionable explanation.
  • Commenters note this is common: Apple rejection/termination letters are highly generic and legally sanitized.
  • Some argue we can’t fully judge the case because the developer hasn’t shared much about what they did; others counter that regardless, such serious actions should always be clearly explained.

Section 3.2(f) & Apple’s power

  • 3.2(f) is seen as extremely vague, covering any act “intended to interfere” with Apple software/services or business practices.
  • People speculate it could be used to block apps that conflict with Apple’s plans (e.g., “Recall”-like screen recording tools) or even treat support contact as “interference.”
  • There’s criticism that Apple unilaterally controls developer identity and notarization, with no alternative attestation providers.

Broader pattern: bans, geoblocking, and fraud

  • Similar opaque bans are reported from AWS, Amazon retail, Imgur, and others, often triggered by login from “high-fraud” countries or cross-region usage.
  • Users describe geoblocking and “fake” error messages (e.g., capacity errors instead of honest 403s), and even “birthblock” of users born in occupied regions.
  • Some justify IP-based blocking as a crude but common “defense in depth” against botnets; others highlight the collateral damage.

Lock-in, ownership, and alternatives

  • Many emphasize that tying critical work or content to locked ecosystems (Apple, Amazon, etc.) is dangerous; bans can mean instant loss of purchases and data.
  • Apple is criticized as uniquely restrictive: you need its permission to run most software, especially on iOS; macOS notarization is becoming de facto mandatory.
  • Android and Windows are seen as somewhat more escapable via alternative OSes, sideloading, or offline use, though banking/government apps can limit that.

Developer risk & ecosystem effects

  • Developers express anxiety that their livelihood can be destroyed overnight with no recourse.
  • Some say this is yet another reason to avoid Apple platforms or not build a business inside any gatekeeper’s “moat.”
  • Hopes are pinned on regulation (e.g., EU/DMA) to open distribution and force more transparent processes.

Linux and Secure Boot certificate expiration

Impact of the 2011 Microsoft key expiration

  • Many Linux Secure Boot chains rely (knowingly or not) on a Microsoft third‑party key from 2011 that expires in June 2026.
  • If firmware isn’t updated to trust the new 2023 key, new shims/bootloaders may no longer boot, even if the OS itself is updated.
  • Some firmware doesn’t even check expiry, so behavior is hardware‑dependent and unclear.
  • Existing installs may keep working until something in the chain changes (e.g., new shim), which could surprise users months later.
  • Windows bootloaders are also affected by 2026 expirations, so this isn’t only a Linux problem.

Microsoft as CA and power/control concerns

  • Many comments criticize the fact that Linux boot depends on Microsoft’s PKI at all; this is seen as structurally anti‑competitive and a “single point of failure.”
  • Others argue Microsoft’s CA role was inevitable because nearly all x86 PCs are sold as Windows‑capable and no Linux player stepped up early with a competing PKI.
  • There is speculation about legal/regulatory remedies (e.g., EU‑run attestation or mandated multi‑vendor trust), but also fear such intervention could worsen lock‑in.

Security value vs. user freedom

  • Pro‑Secure‑Boot side:
    • Designed to block bootkits/MBR rootkits and support a chain of trust into disk encryption (e.g., TPM‑sealed keys).
    • Helpful in enterprise/server environments and for making FDE transparent for ordinary users.
  • Critical side:
    • For most personal threat models, boot‑level attacks are rare compared to user‑space malware.
    • The real, common “attack” is restricting what OS you can run; phones, consoles and some PCs already demonstrate this.
    • Anything that can lock you out of your own hardware is viewed as a bigger problem than the threats it mitigates.

Linux Secure Boot in practice

  • On mainstream distros (Ubuntu, Fedora, recent Debian), Secure Boot generally “just works” on the happy path, including with some NVIDIA drivers via signed DKMS modules or vendor packages.
  • Off the golden path (Arch, custom kernels, VMware/VirtualBox, out‑of‑tree modules), users report manual key enrollment, MOK dialogs, and recurring signing chores.
  • Tools like sbctl, UKI, and distro hooks can fully automate signing, but UX remains confusing, especially around MOK vs UEFI KEK/db.

Firmware/UEFI design and vendor failures

  • Many see UEFI (and Secure Boot) as over‑complex and poorly implemented; firmware is often buggy, unmaintained, and inconsistent.
  • Some hardware loads GPU or option ROM blobs signed with Microsoft keys before letting you enter firmware setup; replacing keys can brick access to the setup screen itself.
  • Others defend UEFI as standardizing what vendors were already doing and enabling cleaner dual‑booting vs BIOS.

Certificate expiry semantics

  • Multiple commenters question the point of 10–15‑year expirations: they don’t meaningfully mitigate key theft and instead threaten long‑lived hardware.
  • Suggested alternatives (the first two sketched after this list):
    • Treat expiry as a warning, not a hard failure.
    • Validate signatures “as of firmware build time.”
    • Use timestamping/DBX for revocation while avoiding hard global time dependencies.
  • Others argue expiry helps bound CRL/DBX growth and crypto deprecation, but acknowledge the ecosystem didn’t plan for vendor neglect.
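
A rough boot-policy sketch of those first two ideas in Python (names, dates, and the clean API are all illustrative; real firmware exposes nothing like this):

```python
from datetime import datetime, timezone

def check_signing_cert(not_before, not_after, reference, hard_fail):
    """Validate a signing cert's window against a chosen reference time.

    reference = firmware build time  -> "valid as of build time"
    hard_fail = False                -> expiry warns instead of halting boot
    """
    if not_before <= reference <= not_after:
        return True
    if hard_fail:
        return False
    print("warning: signing certificate outside its validity window")
    return True

# Window echoing the 2011 CA: a boot in 2027 still validates when
# checked "as of" the firmware's (assumed) 2015 build time.
assert check_signing_cert(
    datetime(2011, 6, 27, tzinfo=timezone.utc),
    datetime(2026, 6, 27, tzinfo=timezone.utc),
    reference=datetime(2015, 1, 1, tzinfo=timezone.utc),
    hard_fail=True)
```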

User keys, alternatives, and mitigations

  • Several argue that “real” Secure Boot means enrolling your own keys and optionally dropping Microsoft’s, which many report doing successfully (often via sbctl).
  • Caveats: certain laptops (notably some Lenovo models) reportedly break video/firmware UI if Microsoft/Lenovo keys are removed.
  • Common fallback strategies:
    • Disable Secure Boot entirely.
    • Keep Secure Boot but rely on distro shims and hope vendors ship firmware updates.
    • In desperation, play clock‑games (set RTC back before expiry) – acknowledged as hacky and fragile.

Broader systemic worries

  • Thread connects Secure Boot to larger trends: Intel ME/AMD PSP backdoors, trusted computing as a tool for DRM and surveillance, planned obsolescence, and the risk of losing the ability to run old systems on old hardware.
  • Underlying tension: security engineering vs. user sovereignty. Many see current Secure Boot governance (especially with Microsoft as de facto root) as tilted toward eroding user sovereignty.

Fully homomorphic encryption and the dawn of a private internet

Performance and scalability limits

  • Many commenters argue current and foreseeable FHE is orders of magnitude slower (often cited ~1000×, possibly far worse) than plaintext due to bootstrapping and huge ciphertexts.
  • Even without bootstrapping, ciphertext blowup (often ~1,000× larger) implies massive extra memory bandwidth and compute that hardware advances alone are unlikely to erase.
  • Latency impacts are framed as unacceptable for most user-facing tasks (e.g., milliseconds → seconds/minutes; 30s → hours).

Search, databases, and index problems

  • Fully private search over large indexes is highlighted as especially hard:
    • Naively, encrypted-key search is O(n) instead of O(log n); recent PIR-style schemes with polylog(n) queries require extreme preprocessing and storage blowups (petabytes for modest DBs).
    • For “FHE Google,” either the server must encrypt huge indexes per client or do near-linear work per query, both viewed as impractical.
  • Some systems mix FHE/PIR with partial leakage (e.g., hashed prefixes, subsets, anonymous networks), trading strict privacy for performance.

Use cases and economics

  • Strong consensus: generic “private internet” via FHE is not economically viable soon. A privacy‑first Google‑scale service would be vastly more expensive and far slower; few users would pay enough.
  • People often prefer free, data‑harvesting services; privacy‑preserving alternatives already exist but remain niche.
  • FHE is seen as promising for narrow, high‑value, low‑throughput tasks: finance, regulated data sharing, some medical or government/military computations, “data owner vs model owner” scenarios.

Alternatives to FHE

  • Confidential computing (TEEs: SGX/TDX/SEV/ARM TEE) is repeatedly described as the only realistic way to do private LLM inference and many cloud workloads, despite hardware‑trust issues and past breaks.
  • Specialized searchable encryption and PIR schemes can give near‑plaintext search performance for specific patterns, with FHE reserved for small filtered subsets.
  • Self‑hosting and encrypted backups are presented as a simpler, cheaper privacy route for many personal data uses.

How FHE works & security nuances

  • Multiple explanations outline homomorphism: operations on ciphertext correspond to operations on plaintext, enabling arbitrary circuits via additions/multiplications (toy sketch after this list).
  • Some struggle with the intuition that “being able to compute means weaker encryption”; others note:
    • FHE is malleable (not NM‑CCA2) but can still be IND‑CPA/semantically secure.
    • Computation may leak allowed structure (e.g., which operations were run) but not plaintext, and “circuit privacy” research aims to hide even that.
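
A toy illustration of that correspondence using textbook RSA, which is multiplicatively homomorphic (this is not FHE and is insecure without padding; intuition only):

```python
# Tiny textbook-RSA demo: multiplying ciphertexts multiplies plaintexts.
p, q = 61, 53
n = p * q                           # modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

def enc(m: int) -> int: return pow(m, e, n)
def dec(c: int) -> int: return pow(c, d, n)

m1, m2 = 7, 6
c = (enc(m1) * enc(m2)) % n         # compute on ciphertexts only...
assert dec(c) == (m1 * m2) % n      # ...and recover the product: 42
```

FHE schemes extend this to both addition and multiplication, hence arbitrary circuits, at the cost of the noise management and bootstrapping discussed above.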

Privacy, incentives, and politics

  • Commenters doubt big providers will voluntarily adopt FHE that blocks data harvesting without strong regulation or market pressure.
  • Governments may resist or demand backdoors; export controls and surveillance incentives are seen as major non‑technical obstacles.

NIH is cheaper than the wrong dependency

Paid Dependencies, Vendor Lock‑In, and Longevity

  • Several comments argue paid dependencies are often risky: incentives for lock‑in, single support source, and danger when the vendor folds or is acquired.
  • Counterpoint: paying can be good if it guarantees people whose job is to maintain and support the code; open source can also abandon users.
  • Many recommend only paying for components that implement open standards with alternative implementations, to preserve an escape route.

Dependency Minimalism, Vendoring, and NIH

  • Strong support for “dependency minimalism”: fewer moving parts, less that can break, easier long‑term maintenance.
  • Common heuristic: “Can I vendor it?” If yes, it may be better to copy small libraries into your tree and own them. Vendoring is framed as accepting maintenance responsibility, not just snapshotting.
  • Forking or vendoring dependencies (and even their transitive deps) is suggested to avoid disappearing repos and to understand build chains.
  • Others argue a strict “zero dependencies” stance is overreaction: most small, well‑scoped libraries are harmless and save time.

Evaluating Dependencies: Risk, Scope, and Lock‑In

  • Suggested process: only use OSS, review license, ubiquity, community health, security history, and ease of replacement.
  • Only adopt dependencies you’d be willing and able to maintain or fork. Some advocate preemptive forking; others say that’s too heavy and prefer vendoring or caching.
  • Ubiquity and scale are seen by some as a safety signal; others report severe bugs and slow responses even in big‑company libraries.

Where NIH Makes Sense (and Where It Doesn’t)

  • NIH is seen as appropriate when:
    • You need only a narrow subset of functionality, can implement it in a day or less, and want tight control.
    • The domain is core to your business (e.g., a custom ledger, highly tuned storage, niche graph DB, embedded time‑series engine).
  • Generally discouraged for: databases, cryptography, OSes, or large game engines—unless you truly have unique constraints and deep expertise.

AI and NIH

  • Some teams use LLMs to generate internal tools or codegen utilities instead of adding runtime dependencies.
  • Concerns: AI‑generated clones of libraries may inherit old CVEs, introduce subtle bugs, and lack automated vulnerability tracking.

Organizational and Human Factors

  • Dependencies can reduce time‑to‑market, which often dominates business decisions.
  • Long‑lived or safety‑critical systems (energy, finance, compliance) tend to avoid sprawling dependency trees due to security, audit, and maintenance costs.
  • Multiple commenters stress that hidden long‑term costs of upgrades, security fixes, and lock‑in are routinely underestimated.

USB-C hubs and my slow descent into madness (2021)

One‑cable dream vs messy reality

  • Many comments echo the article’s theme: people want a single USB‑C cable for power, display, network, and peripherals, but get there only after multiple failed hubs/docks.
  • DisplayLink-based docks are seen as a compromise: they enable many monitors (useful for M1 Macs) but introduce compression artifacts and DRM/streaming issues.
  • Thunderbolt 3/4/USB4 docks are praised as the only consistently reliable option, but they’re expensive, often need proprietary power bricks, and sometimes still have firmware or refresh‑rate quirks.
  • Monitor-integrated USB‑C hubs and daisy-chained displays are viewed as the cleanest desk setup, but are limited by macOS (no DP MST multi‑display support, limited scaling options).

OEM/ODM reality and Realtek skepticism

  • Multiple hubs turn out to be the same OEM “Goodway” design rebranded at different prices. This is described as standard industry practice, not exceptional.
  • Realtek USB NICs are frequently blamed for flakiness (driver issues, feature gaps, weird failure modes like loops or pause frames killing networks), especially on Linux/BSD.
  • Others argue the silicon is often fine and the real problem is low-end manufacturers cutting corners on integration and firmware.

USB‑C spec complexity and broken devices

  • Long subthread on devices that only charge with their original USB‑C cable: explanation centers on missing or incorrect CC resistors and lack of USB‑C “dead battery mode,” making them depend on out‑of‑spec A‑to‑C cables that always supply 5V.
  • Some note that many products (including well-known brands and even Raspberry Pi revisions) have shipped with non‑compliant USB‑C implementations.
  • Debate over whether USB‑PD’s “0V by default until negotiated” is a design mistake or necessary to avoid dangerously tying two power sources together.

Power negotiation and PD hubs/chargers

  • USB‑PD power strips/hubs often momentarily cut power to all ports when a new device is plugged in, hard‑rebooting things like Raspberry Pis.
  • People report this behavior even on modern GaN chargers from major brands; opinions differ on whether it’s unavoidable or just poor design.
  • Some call for user-selectable behavior (e.g., a switch to favor existing loads vs new ones).

Cable capabilities, labeling, and testing

  • A recurring frustration is not knowing what any given USB‑C cable supports (data rate, video, PD wattage, Thunderbolt).
  • Suggested mitigations: personal labeling systems, USB testers/analyzers, and relying only on clearly logo‑marked cables.
  • Some see variable cable capabilities as a spec failure; others argue it’s a necessary tradeoff for backward compatibility, cost, and cable length flexibility.

Hub design gaps

  • Many consumer hubs provide mostly USB‑A and just one usable USB‑C data port, contradicting the “USB‑C everywhere” vision.
  • A few niche products (USB‑C–only hubs, Thunderbolt/USB4 hubs with multiple downstream C ports) are mentioned, but they’re rare or pricey.
  • There is interest in more integrated solutions: docks in monitors, desks, floor boxes, and better “USB power strips” for charging only.

My favorite use-case for AI is writing logs

AI for logging and boilerplate

  • Many commenters like using AI for tedious, low-level tasks: log lines, docstrings, CLI glue, and repetitive boilerplate.
  • They see value in “sketch the logic, let AI add the polish,” especially across many languages/projects where logging patterns differ.
  • Others argue that if logging is truly that mechanical, macros, snippets, or AOP-like approaches should handle it deterministically instead of LLMs.
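
For the deterministic camp, a minimal sketch of the AOP-ish approach in Python (a hypothetical decorator, not something from the thread):

```python
import functools
import logging

logger = logging.getLogger(__name__)

def logged(fn):
    """Deterministically log entry, exit, and exceptions: same output
    every time, no model in the loop."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        logger.debug("enter %s args=%r kwargs=%r", fn.__name__, args, kwargs)
        try:
            result = fn(*args, **kwargs)
        except Exception:
            logger.exception("error in %s", fn.__name__)
            raise
        logger.debug("exit %s -> %r", fn.__name__, result)
        return result
    return wrapper

@logged
def transfer(src: str, dst: str, amount: float) -> bool:
    return amount > 0
```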

Cognitive load, tooling, and determinism

  • One camp emphasizes that reducing cognitive overhead (remembering exact logger APIs, formatting, project-specific conventions) meaningfully improves focus and flow.
  • The opposing camp argues this is what editors, LSPs, snippets, and static tools are for; an LLM is an overkill “fuzzy hammer” where deterministic autocomplete and templates are more reliable and energy-efficient.
  • There’s tension between favoring highly predictable tools vs accepting probabilistic assistance that must be verified.

Programming as craft vs problem-solving

  • Some see complaining about logging syntax as disliking “real programming” and basic competence. They take pride in carefully hand-written, context-rich logs.
  • Others separate “liking programming” from liking its minutiae: they’d rather focus on solving business problems than on remembering logging variants, and view AI as another abstraction layer like garbage collection or sort routines.
  • Broader discussion touches on career motivations: many developers are in it for pay and outcomes, not love of the craft; AI shifts value from “knowing how” toward “knowing what you want.”

Logging best practices and pitfalls

  • Multiple comments critique the use of Python f-strings in logs:
    • They eagerly format even when the log level filters them out, harming performance in hot paths.
    • They break aggregation in tools like Sentry that rely on static format strings.
  • Suggested alternative: classic parameterized logging ("msg %s", var) or libraries that support lazy {} formatting (sketched after this list).
  • Some highlight log level discipline (avoiding noisy or misleading error logs) and warn against logging secrets such as Redis URLs.
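
A minimal sketch of the pitfall and the parameterized alternative:

```python
import logging

logger = logging.getLogger(__name__)

class Big:
    def __str__(self) -> str:
        return "...expensive formatting of a large object..."

big = Big()

# Eager: the f-string calls str(big) immediately, even when DEBUG is
# filtered out, and each call yields a distinct message string, which
# breaks aggregators keyed on the format string.
logger.debug(f"state={big}")

# Lazy: str(big) runs only if a handler actually emits the record,
# and the constant format string groups cleanly.
logger.debug("state=%s", big)
```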

Libraries, models, and reliability concerns

  • JetBrains’ local models and Cursor-style completions are praised as helpful, but also noted for occasionally plausible-yet-wrong suggestions.
  • Debate over loguru: some like its ergonomics and features; others dislike its nonstandard formatting syntax and backtrace style.
  • Several caution that AI-written logs and code must still be reviewed; unreliable or misleading logs can severely complicate debugging.

People kept working, became healthier while on basic income: report (2020)

Effects of basic income on work and life choices

  • Many commenters note that multiple pilots (including this one) consistently report: reduced stress, better health, and most recipients continuing to work.
  • Critics emphasize the flip side: if ~75% “kept working,” then ~25% stopped, despite knowing the program was temporary; they see this as a large effect on labor supply.
  • Supporters counter that about half of those who stopped working went back to school or into caregiving/parenting, which they view as socially valuable, not “idleness.”
  • There is deep disagreement on whether most people would still choose to work under a lifelong UBI: some cite pensions and early retirees as evidence many will quit; others argue UBI amounts are “basic,” not enough to fund a comfortable retirement.

Short-term pilots vs long‑term, universal UBI

  • Many argue that time‑boxed, non‑universal pilots cannot show true long‑term behavior: people assume payments will end and act differently than under a lifelong guarantee.
  • The specific Ontario/Hamilton study is criticized for: small, self‑selected survey (217 of ~4,000), no proper control group, advocacy-style presentation, and a design that clawed back benefits at 50% of earnings (more like a negative income tax than true UBI).

Funding, taxes, and macroeconomic risks

  • Concerns: a real UBI would cost on the order of trillions annually (rough arithmetic after this list), require major tax increases, and might reduce labor supply at both ends (recipients and high earners).
  • Some fear inflation, especially in rents and essentials, as landlords and producers capture extra cash; others reply that UBI is largely a redistributive transfer, not new money, so aggregate demand need not surge.
  • Debate over whether UBI could replace existing programs (welfare, SNAP, disability, Social Security, Medicaid) or whether healthcare and high‑variance needs still require separate systems.
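
The "trillions" order of magnitude is easy to reproduce with round numbers (both inputs assumed for illustration, not taken from the thread):

```python
us_adults = 260_000_000   # assumed adult population
monthly_ubi = 1_000       # assumed benefit level, USD

gross_annual = us_adults * monthly_ubi * 12
print(f"${gross_annual / 1e12:.1f}T per year")  # ≈ $3.1T, before offsets
```

The gross figure overstates the net cost, since a redistributive UBI claws much of it back through taxes, which is the redistribution-vs-new-money point above.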

Targeted welfare vs universal cash vs alternatives

  • Some prefer expanding the Earned Income Tax Credit / negative income tax as a work‑incentivizing “UBI-lite.”
  • Others argue existing means‑tested systems are complex, stigmatizing, and create benefit cliffs; universality simplifies administration and reduces perverse incentives.
  • A competing proposal is guaranteed public employment at a living wage; critics say that’s hard to design, match to abilities, and may devolve into make‑work bureaucracy.

Moral framing and political resistance

  • One camp frames UBI as empathy and social insurance in a surplus society; another emphasizes “forced redistribution” and selective empathy (“my tax dollars”).
  • There is recurring reference to “welfare queen”–style narratives, and disagreement on whether UBI should be justified by compassion, self‑interest (reduced crime/health costs), or system‑stability arguments.

Anthropic tightens usage limits for Claude Code without telling users

Usage patterns and how limits are hit

  • Some users on $20–$100 plans hit limits in 1–3 hours, even with a single Claude Code instance and modest prompts; others on the same plans never come close and are surprised.
  • Heavy users describe:
    • Long-running “agentic” workflows with sub-agents, multi-step planning docs (hundreds of lines) and implementation phases, often costing the API-equivalent of $15–$45 per feature.
    • Multiple parallel Opus/Sonnet sessions (4–8+) running for hours, or even 24/7, on tasks like large refactors, migrations, debugging, test fixing, data analysis, etc.
    • Workflows where Claude repeatedly re-reads large files or entire folders, causing big token burn.
  • Others see frequent 429/529 errors or early downgrades to Sonnet and suspect dynamic throttling by time of day, account, or region.

Pricing, transparency, and business model

  • Many complain that limits were tightened silently, with confusing in-product messaging and no clear remaining quota indicators; some only infer costs with tools like ccusage.
  • There’s broad agreement that $100–$200 “Max” subscriptions can easily yield hundreds or thousands of dollars of API-equivalent usage, implying heavy subsidization.
  • Competing narratives:
    • “Uber/drug dealer model,” “enshittification”: underprice to hook users, then ratchet limits/prices.
    • Counterpoint: this is rational loss-leading in a space where compute costs will fall and models will get more efficient.
  • Some see flat-fee “unlimited” plans as inherently unsustainable and expect eventual convergence on metered pricing or stricter caps.

Productivity gains vs. skepticism and overreliance

  • Enthusiasts say Claude Code massively boosts throughput (often 2–3×), offloads boilerplate, accelerates learning of new patterns, and enables experimentation across stacks and domains.
  • Others find it boring, brittle, or slower than simply coding and searching themselves; several report loops, wasted tokens, or degraded code quality that then demands heavy human cleanup.
  • There’s a cultural clash around “vibe coding”:
    • Critics worry about skill atrophy and about projects becoming impossible when limits hit.
    • Supporters argue that as long as you understand and review the code, it’s a power tool, not a crutch—and that not using LLMs at all is now self-handicapping.

Lock-in, reliability, and alternatives

  • Some users now fear dependence on a third‑party tool with opaque, moving limits and compare it unfavorably to owning a keyboard or compiler.
  • Others rotate between providers (Claude, Gemini, Kimi, etc.) or look to local/open models (Llama 3.x, DeepSeek R1, plus tools like Goose/Ollama) to mitigate vendor risk.
  • Several note growing reliability and UX issues in Claude clients (slow history, cryptic errors, poor usage visibility) and ask Anthropic to prioritize stability and clearer quota communication.

The daily life of a medieval king

Reliability and Purpose of the Account

  • Several comments question how literally the king’s “day in the life” can be taken.
  • The text is seen as partly idealized and didactic: a model of how a “good” king should behave, possibly contrasted with a mentally ill successor.
  • Some liken it to modern PR / lifestyle pieces that smooth over messy reality.
  • Others frame it as medieval propaganda: informative about ideals, but weak as a strict diary of actual behavior (e.g., daily church attendance, listening to commoners).

Comparison with Other Medieval Rulers

  • This king is described as relatively sedentary; other monarchs spent more time hunting, on campaign, in mobile courts, or at seasonal residences.
  • Discussion notes that many medieval rulers weren’t linguistically or culturally aligned with their “national” subjects (e.g., French‑speaking English kings).
  • Legal reforms like mandating contracts in plain English are cited as examples of practical, competence‑oriented governance.

Daily Routine, Wine, and Leisure

  • Commenters highlight the late start, limited work hours, and structured mix of piety, audiences, and rest.
  • The morning glass of light, “well cut” wine draws attention; multiple replies clarify this meant wine diluted with water, a common historical practice.
  • The jewel‑admiring segment is compared to modern car or collectibles culture—time with close companions plus conspicuous luxury.

Modern vs Medieval Living Standards

  • A substantial subthread compares a 15th‑century king to someone in today’s global 5th wealth percentile.
  • Consensus: modern poor almost certainly have longer life expectancy and better medical options, but a king had unmatched food security and status (though tightly constrained life choices and obligations).
  • Several people reflect on “living like a king” today via indoor plumbing, refrigeration, varied diets, and drastically lower child mortality.

Health, Life Expectancy, and Medicine

  • Cited data for medieval English monarchs gives an average death age in the 40s; commenters stress how much this overstates life expectancy at birth due to high child mortality (toy arithmetic after this list).
  • Others emphasize that even elites were vulnerable to infections that are trivial today and to violent deaths in war, hunting, or coups.
  • There’s debate over painkillers: one side claims “no painkillers,” others note long‑known remedies like opium, nightshades, willow bark, and herbal medicine.
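
A toy illustration (numbers assumed) of how child mortality drags the population mean far below the ages adults actually attained:

```python
# Assumed cohort: 30% die in childhood (~age 5); survivors reach ~55.
p_child, age_child, age_adult = 0.30, 5, 55
mean_age_at_death = p_child * age_child + (1 - p_child) * age_adult
print(mean_age_at_death)  # 40.0, though surviving adults lived to ~55
```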

Food Security, Preservation, and Diet Variety

  • Kings had guaranteed food, but their diets were limited to pre‑Columbian ingredients and constrained by seasonality.
  • Participants list extensive preservation techniques (salting, pickling, drying, smoking, fermenting, sugaring, confit) and long‑keeping crops, arguing winter diets could still be varied by their standards.
  • A side debate challenges the common trope that medievals avoided water in favor of alcohol and disputes the idea that people were constantly drunk.

Global Hunger and Malnutrition Debate

  • One thread claims hunger has worsened for ~25 years and cites large annual death tolls from hunger and related disease.
  • Others contest this, pointing to data showing decades of improvement, a plateau, and only recent reversal; they demand more precise sourcing for large mortality numbers.
  • Malnutrition’s changing nature is noted: undernutrition now coexists with rising obesity, even in poorer countries.

Work Hours, “Four‑Hour Days,” and Fulfillment

  • Some note the king seems to do ~4 hours of overt “work,” comparing this to modern managers or knowledge workers.
  • Others argue much of what looks like leisure (religious services, public displays, audiences) was in fact required labor to maintain legitimacy.
  • There’s a parallel drawn to hunter‑gatherer “original affluent society” claims, with pushback that “work” is often defined too narrowly in those arguments.

Power, Delegation, and Productivity

  • A long branch contrasts how kings/CEOs accomplish much via delegation versus ordinary individuals.
  • Commenters debate the value of human secretaries versus tools like email and calendars: some see tech as clearly superior; others emphasize that a good assistant exercises judgment and acts as a force multiplier.
  • The underlying point: people with large staffs and authority can “get a lot done” without personally doing most tasks.

Monarchy, Government Ideals, and Contact with Subjects

  • Several note that this king appears to spend more time hearing petitions than some modern politicians do with constituents, though population and logistics were very different.
  • Comparisons are made between monarchs and mafia dons in terms of personal rule and patronage.
  • A philosophical thread argues that absolute monarchy has very high variance (best and worst regime type), while democracies are more middling but safer.
  • Others draw an analogy to modern civics teaching: idealized descriptions of how systems should work (separation of powers, accountability) often diverge sharply from practice, yet revealing those ideals is still historically informative.

Ask HN: What Pocket alternatives did you move to?

Major Alternatives People Chose

  • Wallabag is the most-cited replacement.
    • Reasons: no ads, Pocket import works well, self-hosting possible, good content extraction, ePub export (especially for Kobo), KOReader integration.
    • Criticisms: Android/iOS apps feel basic/outdated; Firefox-based server-side scraping sometimes fails; self-hosting can be heavy for weaker NAS boxes.
  • Instapaper frequently mentioned.
    • Pros: simple, stable, “doesn’t try to do too much,” direct Pocket import, Kindle digest, IFTTT support.
    • Several people say it’s far ahead of Pocket in handling paywalled content; others had previously left Instapaper for Pocket.
  • Raindrop.io also popular.
    • Pros: smooth Pocket import, good tagging, clean UI, open API; free tier sufficient for some.
    • Used both personally and for work, sometimes alongside other tools.
  • Readwise Reader / Readwise.io.
    • Seen as “power user” oriented; strong highlighting and TTS, good on e‑ink, praised ongoing development.
    • One person left it due to ugly/unchanging UI.

Self-Hosted & Open-Source Options

  • Wallabag, Readeck (including a Kobo mod), Karakeep, Shiori, Linkding, Linkwarden, and others are used by people who don’t trust hosted services to persist.
  • Karakeep praised for AI tagging + Meilisearch, but criticized as immature: SQLite-only, limited export options, tag overload.
  • Shiori liked for simplicity, multi-DB support, and local copies.

Native / Minimalist Approaches

  • Some abandon third‑party services entirely:
    • Browser bookmarks + Reading List, or RSS readers’ built‑in “read later.”
    • iOS tricks: Safari Reader + print-to-PDF, or just Safari Reading List (with debate over offline reliability and PDF usability).
    • Plain-text lists, email-to-self workflows.

Knowledge-Management & Obsidian-Based Setups

  • Several use Obsidian (web clipper, ReadItLater, Slurp, Relay) to store cleaned markdown locally, often synced across devices and used collaboratively.
  • Others fold articles into broader tools like Notion, DEVONthink, Zotero, or Eagle.

New / Niche Services & Personal Projects

  • Mentioned: Fika, Folio, Cubox, Curio, Fabric, CouchReader, Histre, Mozaic, Veo3, link.horse, linksort, bukmark, simple PWAs and Python/Docker apps, ArchiveBox lists.
  • Some actively building Pocket replacements; questions arise about offline storage, export, APIs, and e‑reader/Firefox support.

Meta Reflections

  • Debate over “just Google pocket alternatives” vs real-world feature gaps.
  • Many admit they rarely read what they save; others argue long-term bookmarking is still highly valuable.

Nintendo Switch 2 account bans continue: warning after buying old copy of Bayo 3

Scope of the “Ban” (Brick vs Online Lockout)

  • Several commenters stress the console isn’t technically bricked: it loses online services (multiplayer, updates, redownloads) but still runs existing games.
  • Others argue that for people who specifically bought the device for online play or digital access, this is “effectively bricked,” functional difference or not.
  • There’s confusion over whether banned users can redownload purchased titles or only keep what’s already installed; behavior seems restrictive and is viewed as “very heavy” even if not full bricking.
  • Some point out that most people cannot or will not install custom firmware as a workaround.

Used Games, MIG Flash, and Who Gets Punished

  • A key concern: Nintendo bans hardware when the same cartridge ID is seen multiple times (e.g., MIG Flash dumps), but the system can’t reliably tell pirate from innocent second-hand buyer.
  • Many see punishing both sides (or any side when it’s ambiguous) as morally unacceptable; some say Nintendo should punish neither if they can’t distinguish.
  • Commenters recall prior warnings that buying used Switch 2 carts could become a “roulette” because of this detection.
  • There’s disagreement about the specific incident:
    • Some say this case is clearly tied to the owner using a piracy-focused device and is being misrepresented as a used-game problem.
    • Others cite reports of innocent users being banned over used carts and bans later being reversed, suggesting a real risk. Exact timeline/evidence is described as unclear.

Resale, Digital Lock‑In, and Trust

  • Many see this as part of a long pattern: hostility to modding/piracy, erosion of resale rights, and digital purchases tied to fragile online services.
  • Shutting down Miiverse and older eShops is cited as evidence Nintendo doesn’t honor long-term digital ownership or platform features.
  • Some predict or fear a shift toward carts as mere license keys, further undermining physical media.

Consumer Reactions and Alternatives

  • A sizable group vows to skip Switch 2, avoid online-dependent games, or rely on emulation/PC/Steam Deck instead.
  • Others argue Nintendo still has no real substitute: unique first‑party games, kid‑friendly ecosystem, portability, and massive sales suggest most buyers accept — or ignore — these risks.

The patterns of elites who conceal their assets offshore

Moral views on offshoring and tax evasion

  • Many see elites’ offshore concealment as straightforward theft from society: they benefited from public systems then avoid contributing back.
  • Others argue that rich actors are primarily protecting themselves from “state expropriation” (asset seizure, arbitrary taxation, nationalization), not street crime.
  • There’s tension between “end corruption” vs “let me use the same tricks”; some lament that many commenters want access to loopholes rather than enforcement against them.
  • A minority defend the right to “escape” the state if one can, framing the state as just another power bloc using coercion.

Wealth, value, and exploitation debates

  • Long subthread on whether wealth creation is zero-sum:
    • One side: value depends on finite resources/energy; creating wealth in one place effectively devalues something elsewhere.
    • Other side: value is not bounded by resources; technology and productivity massively expanded living standards, so “the pie can grow.”
  • Arguments over whether all large fortunes are inherently exploitative: some say no individual can morally earn billions; others emphasize risk-taking, capital, and productivity multipliers (e.g., “lazy guy with an excavator” outproducing many with shovels).
  • Transactions as value-creating vs value-destroying:
    • One camp: every voluntary transaction creates mutual surplus.
    • Critics point to fraud, lemons, advertising arms races, and crime as transactions that destroy or externalize value.

Mechanics and incentives of offshore finance

  • Offshore structures serve tax minimization, asset protection, and anonymity: shielding against governments, lawsuits, spouses, and even heirs.
  • For the ultra-wealthy, funds often never “re-shore”: offshore entities directly purchase companies, jets, yachts; individuals then lease or borrow against these assets.
  • Repatriation sometimes happens via political “tax holidays,” especially in the U.S.
  • Some note ordinary pensions and non-criminal investors also route via offshore hubs for neutrality and legal predictability.

Enforcement, politics, and journalism

  • Skepticism that elites who control enforcement (politicians, legislatures, courts) will meaningfully crack down on havens they themselves use.
  • Discussion of blacklists: the current EU list is small and omits major players (the US, UK, Switzerland, Luxembourg, China), likely because of their political power.
  • Investigative consortia (e.g., Panama Papers) are praised as crucial; collaboration is partly for safety after journalists have been killed over such work.

Offshoring, civil liberties, and activism

  • Some argue offshore and crypto-like tools are also vital for activists in authoritarian or repressive contexts, where domestic accounts can be frozen.
  • Example given of a UK political group having its bank account frozen; advice: keep funds offshore in a different jurisdiction.

Conceptual/terminology debates

  • “Elites” criticized as a mislabel for billionaires/oligarchs; some reserve the term for the educated professional “top 20%” who work for the ultra-rich.
  • “Civic participation” discussed via the Singapore example: low protest activity and constrained civil liberties despite low corruption.

My experience with Claude Code after two weeks of adventures

Enthusiastic Views and Use Cases

  • Several commenters say Claude Code (CC) feels like “changing jobs” or working with a capable junior/mid engineer: great at boilerplate, CRUD, refactors, debugging, and greenfield apps in mainstream stacks (TS/Next.js, Laravel, Ruby, Python, Go).
  • Reported wins include: Airflow DAGs, billing dashboards, cloud cost analyzers, DB rebuild tools, mobile apps with nontrivial features, and data‑migration pipelines, sometimes in days instead of weeks.
  • People like using CC to overcome “white page” inertia, explore unfamiliar tech, and offload tedious work while keeping interesting problem‑solving for themselves.

Negative Experiences and Limitations

  • Others find CC slow, fragile, or unusable on complex, legacy, or niche codebases (C/C++ systems, Win32, Haskell/Bazel/Flutter, Godot, etc.).
  • Common failure modes: incomprehensible or redundant code, runaway edits, “grep hell”, invented APIs, non-durable designs, and gaslighting-like insistence on nonexistent functions.
  • Visual/UI tasks and subtle design details (e.g., a wrong-looking SVG icon) often slip past its “self‑verification”.

Best Practices and Workflow Patterns

  • Strong consensus that success depends heavily on:
    • Good tests and fast feedback loops; otherwise it may “fix” tests instead of code.
    • Structured codebases with clear conventions.
    • Per-folder CLAUDE.md files or repo maps that spell out architecture and business rules (a hypothetical sketch follows this list).
    • Using plan mode, small tasks, explicit constraints, and having it keep a journal of issues/decisions.
  • Many treat it as an intern: you must review every change “like a hawk” and be ready to discard bad runs.
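
As a concrete illustration of the per-folder CLAUDE.md idea, here is a hypothetical sketch; the file name convention is Claude Code's, but every path, rule, and identifier below is invented, since the file is free-form markdown read as project context:

```markdown
# CLAUDE.md (billing/): hypothetical example

## Architecture
- Money is integer cents everywhere; never floats.
- Invoices are immutable once issued; corrections go through credit notes.

## Workflow
- Run `make test-billing` after each change; never edit fixtures to make tests pass.
- Keep diffs small: one logical change per commit.

## Known pitfalls
- `TaxCalculator` is timezone-sensitive; read docs/tax.md before touching it.
```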

Comparison with Other Tools

  • Cursor: praised for in‑editor UX and tab completion; some find Cursor+Claude better than CC alone, others say CC’s agentic behavior and tool use are clearly superior.
  • Other tools mentioned: Cline, Aider, opencode, RepoPrompt, Kimi‑K2, Gemini 2.5, GPT‑4.1/o3, Grok 4; quality varies by model and task.

Impact on Developers and Jobs

  • Strong debate over whether CC makes juniors unnecessary, with a few seniors eventually overseeing AI-generated code at scale.
  • Pushback that CC doesn’t learn, repeatedly makes the same mistakes, and that mentorship is an investment in future seniors.
  • Some seniors report real speedups (1.5–10× by their own metrics); others cite studies or experience where perceived gains hide slower real progress.

Costs, Limits, and Hype

  • Rate limits on the Pro and Max tiers frustrate heavy users; some route through Bedrock or direct APIs instead (see the sketch after this list).
  • Worry that current prices are “Uber‑era subsidies” and may spike later.
  • Many complain that rave posts lack concrete details (stack, size, tests, diffs), fueling suspicion of influencer hype and calling for more rigorous, specific case studies.
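
As a rough illustration of the Bedrock route, Claude Code documents an environment-variable switch for this; the region and model ID below are illustrative placeholders, not a recommendation:

```sh
# Route Claude Code through AWS Bedrock instead of the Anthropic API.
# CLAUDE_CODE_USE_BEDROCK is documented by Anthropic; the region and
# model ID are placeholders for whatever your AWS account has enabled.
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_REGION=us-east-1
export ANTHROPIC_MODEL='us.anthropic.claude-sonnet-4-20250514-v1:0'
claude
```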

Apple Intelligence Foundation Language Models Tech Report 2025

Contributor List & Anti‑Poaching Speculation

  • Some suspect the long, randomly ordered contributor list is meant to blunt poaching.
  • Others note large projects often use flat or alphabetical ordering, and with many non‑Latin names, English alphabet order is arbitrary anyway.
  • People point out it’s easy to find core researchers via references or LinkedIn, so any anti‑poaching effect is weak at best.

On‑Device 3B Model & Hardware

  • The ~3B-parameter model already runs on current iPhones, iPads, Macs, and visionOS devices in the version-26 OS betas, accessible via the Foundation Models framework and even Shortcuts (a minimal sketch of the API follows this list).
  • Users report decent latency (a few seconds) and that it can act as an offline chatbot despite being tuned for summarization, extraction, and writing tools.
  • Discussion of ANE vs GPU: GPU tends to be faster but less efficient; ANE is optimized for low power. Fitting 7B+ models in 8 GB RAM is seen as technically impressive but impractical for real use.
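
For context, a minimal sketch of what calling the on-device model looks like through the Foundation Models framework; the framework types are Apple's published API, but the function, error type, and prompt are invented for illustration:

```swift
import FoundationModels

enum SummarizerError: Error { case modelUnavailable }

// Minimal sketch of a one-shot on-device request.
func summarize(_ text: String) async throws -> String {
    // The model can be unavailable (Apple Intelligence disabled,
    // unsupported hardware, weights still downloading), so check first.
    guard case .available = SystemLanguageModel.default.availability else {
        throw SummarizerError.modelUnavailable
    }
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in one sentence."
    )
    return try await session.respond(to: text).content
}
```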

Siri & Apple Intelligence User Experience

  • Many comments complain that Siri still fails simple or multi-part requests, especially compared with ChatGPT or Gemini, and call Siri “a joke.”
  • Some note current Apple Intelligence models already power features like notification summaries and writing tools, but users often find these underwhelming or intrusive.
  • Multiple insider-style comments describe attempts to bolt an LLM onto legacy Siri as an integration nightmare (multi‑turn flows, "smart" UI, privacy, backwards compatibility), arguing a full reset is needed.

Apple’s AI Strategy, Privacy, and Data Centers

  • Strong split in perception:
    • One side sees Apple as badly behind frontier models, over‑promising with “Apple Intelligence,” and now partly retreating to OpenAI/Anthropic APIs and legal/PR positioning (e.g., “responsibly sourced” training data, Private Cloud Compute).
    • The other side argues Apple should not chase frontier models; their differentiator is privacy, on‑device inference, and tight OS integration, leaving generic chatbots to apps.
  • Private Cloud Compute is praised as a technically strong privacy design, but skeptics doubt scalability and note that truly powerful, tool‑using agents will likely require large cloud models with lots of tokens and user context.

Training Data, Applebot, and Robots.txt

  • Apple says it does not train on users' private data, filters out PII and profanity, and honors robots.txt via Applebot.
  • Critics argue Apple scraped before clearly documenting AI training use, making later “you can opt out” messaging feel disingenuous.
  • Others respond that robots.txt has long been the standard opt‑out mechanism and that expectations for advance crawler disclosure are new and mostly born of anti‑LLM sentiment.
  • Some propose adding crawler "categories" (e.g., search vs. LLM training) to robots.txt to give publishers finer control; a sketch of today's closest approximation follows.
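
For Apple specifically, a coarse version of this split already exists: Applebot-Extended is Apple's documented token for opting out of foundation-model training without affecting search crawling. A sketch:

```
# Keep Applebot crawling for search features...
User-agent: Applebot
Allow: /

# ...but opt this site out of Apple's model training.
User-agent: Applebot-Extended
Disallow: /
```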

Alt Text, Accessibility & Training

  • The paper’s use of image–alt‑text pairs sparks debate: alt text is heavily advocated for accessibility, yet is now prime supervised data for vision‑language models.
  • Some see this as “free annotation labor” for AI; others argue it’s still morally consistent to write alt text for disabled users even if it’s scraped.
  • A few note they already use LLMs to draft alt text but still review and edit the drafts manually.

Developer Experience & Structured Output

  • iOS developers are enthusiastic about the Foundation Models framework: typed Swift structs, guided/structured output, streaming partial fields, and a clean bridge to external models (see the sketch after this list).
  • Commenters compare this to “structured output” / grammar‑based sampling already available elsewhere, noting that forcing strict structure can reduce model quality and sometimes needs a two‑pass “think then structure” approach.
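
A hedged sketch of that guided-output pattern, using the framework's @Generable and @Guide macros; the struct, fields, and prompt are invented for illustration:

```swift
import FoundationModels

// Hypothetical schema; the type and guide descriptions are invented.
@Generable
struct ReviewSummary {
    @Guide(description: "One-sentence overall verdict")
    var verdict: String
    @Guide(description: "Integer score from 1 to 5")
    var score: Int
}

func summarize(review: String) async throws -> ReviewSummary {
    let session = LanguageModelSession()
    // Decoding is constrained to the struct's schema, so the result
    // arrives as a typed value rather than free-form text to parse.
    let response = try await session.respond(
        to: "Summarize this product review: \(review)",
        generating: ReviewSummary.self
    )
    return response.content
}
```

Streaming partial fields follows the same shape via the session's streaming variant, which yields progressively filled snapshots of the struct.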

Model Updates & LoRA Adapters

  • People wonder how often on‑device models will change; the base weights are gigabytes, so frequent silent updates seem unlikely.
  • Apple appears to rely on LoRA‑style adapters for specialization; these must be retrained for each base‑model version, suggesting model changes will track OS point releases rather than constant churn (a short note on the underlying math follows).
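
For readers unfamiliar with why adapters are small yet version-locked, the standard LoRA formulation (the general technique; Apple's exact recipe isn't public) makes it concrete:

```latex
% A frozen base weight matrix W receives a trainable low-rank update:
W' = W + \tfrac{\alpha}{r}\, B A,
\qquad W \in \mathbb{R}^{d \times k},\quad
B \in \mathbb{R}^{d \times r},\quad
A \in \mathbb{R}^{r \times k},\quad
r \ll \min(d, k)
```

Only B and A ship in the adapter (megabytes rather than gigabytes), but they encode a delta against one specific W, which is why each adapter must be retrained whenever the base weights change.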

Is Apple Behind or on Its Own Track?

  • Critics frame Apple’s current AI as a “slow‑motion train wreck”: small models, weak Siri, lack of headline features vs. ChatGPT/Gemini, and RAM‑constrained hardware.
  • Defenders counter that:
    • Apple has always moved slowly and then integrated deeply;
    • They don’t need to win the model race, only own the device, UX, and privacy story;
    • Users who want frontier chatbots can easily install apps or use the OS‑level ChatGPT integration.
  • There’s broader pessimism about Apple’s product coherence post‑Jobs, contrasted with arguments that partnering (e.g., with OpenAI) fits a long history of Apple leveraging outside tech while keeping tight control of the platform.