Hacker News, Distilled

AI-powered summaries of selected HN discussions.

Page 78 of 347

System 7 natively boots on the Mac mini G4

Classic Mac OS versions & stability

  • Thread heavily debates which classic version was “best”: some praise Mac OS 9.2.2 as peak Mac OS; others argue System 7 (often 7.6) or Mac OS 8.1 were the real zenith.
  • Experiences conflict: several recall Mac OS 9 as crash‑prone and susceptible to disk corruption (no memory protection, cooperative multitasking, flaky IDE), while others found 9 much more solid than early System 7, which they remember as unstable until ~7.6.
  • Comparisons with Windows 95/98: opinions split on whether Win98SE was more or less stable than Mac OS 8/9. Everyone agrees NT‑based Windows (2000/XP) were far ahead architecturally.

Performance, UI “snappiness,” and animations

  • Many remember classic Mac OS as extremely responsive: minimal, purposeful animations, nearly instant UI, and little perceived latency.
  • Some emphasize that classic animations were brief, information‑rich (e.g., zoom rect from icon to window) and didn’t block input, unlike many modern, decorative animations.
  • Others note early Mac UX “awfulness” was as much about low RAM and slow disks as OS design.
  • There’s discussion of preemptive vs cooperative multitasking; several point out that preemption was feasible on 68k/PPC (Amiga, Lisa, Apple’s own alternate OSes) and that limitations were mostly historical/compatibility debt.

Hardware, clones, and architecture

  • Nostalgia for 90s PowerPC hardware (Performa, PowerTower/PowerCenter, StarMax clones) and the confusion of “MHz wars” versus real performance (cache sizes, FSB speeds, pipeline design).
  • Interesting detail on CHRP‑ish machines mixing Mac and PC subsystems (PCI, ISA, PS/2, ATX) and strange Open Firmware device trees.
  • One nitpick: System 7 on a Mac mini G4 still relies on the built‑in 68k emulator; it’s not “native” in the sense of running directly on PPC without that layer.

Legacy use and retro setups

  • A small refurb market exists for Mac mini G4s running hacked Mac OS 9, used in production by dentists, vets, museums, and repair shops that need legacy software.
  • System 7 on a mini is seen mostly as a curiosity due to missing drivers; for almost all real‑world classic apps, Mac OS 9 or emulation (e.g., vMac) suffices.
  • Some users still wrestle with native‑boot OS 9 vs Classic mode on later G4 iMacs; OS9Lives images and SSD + IDE adapters are common solutions.

Retro tools, languages, and emulation

  • Python deprecations broke a System‑7‑related tool; this sparks debate about removing obscure features vs maintenance burden.
  • For tooling that preserves old Macs, people argue between ultra‑portable C89, Go, Free Pascal/Lazarus, etc.; maintainability and developer time often win over maximal portability.
  • Alternatives for running classic software on modern hardware include Executor, Advanced Mac Substitute, and historical efforts like Rhapsody and GNUstep.

HyperCard and old‑school productivity

  • Multiple commenters reminisce about HyperCard as a rapid‑prototyping powerhouse used even in professional contexts.
  • There’s criticism of overkill modern stacks (Electron/React) for simple tools, contrasted with how quickly similar things were built in HyperCard‑style environments; Decker is suggested as a modern homage.

Confessions of a Software Developer: No More Self-Censorship

Openness, Shame, and “Not Knowing”

  • Many appreciate the author’s vulnerability; several say openly admitting gaps (“I don’t know”) has been a career superpower, increasing trust and making others eager to help.
  • Others stress that confession alone isn’t enough: it should be coupled with a visible effort to close gaps. There’s criticism of holding others to standards one hasn’t met oneself.
  • Multiple commenters confess their own gaps (basic main() syntax, string length APIs, SQL joins, calculus, functional languages) and argue this is normal in a broad field.

Looking Things Up vs. Memorizing

  • Common theme: constantly re‑googling language/library details is seen as fine; “knowledge is knowing where it’s written down.”
  • Many now lean on IDE autocomplete or LLMs, especially for shell scripts and obscure syntax, and say this is a big productivity boost.
  • Some push back that not understanding fundamentals (e.g., SQL) can seriously hurt projects; they distinguish harmless lookup from never learning core concepts.

Testing, Uncle Bob, and OOP/Polymorphism

  • The author’s shame about tests and OOP triggers debate:
    • Some say Uncle Bob–style dogma (TDD everywhere, 100% test coverage) has done real harm, producing pointless tests and over‑abstracted code.
    • Others defend high coverage as a forcing function for quality, and argue excuses about “bad tests” usually mask indifference to quality.
  • Polymorphism and patterns: some celebrate “discovering” them; others warn that aggressively refactoring switches into class hierarchies often worsens readability and is context‑dependent.

Remote Work vs. Office: Deeply Split

  • Many strongly reject “remote work sucks,” citing remote as life‑changing: no commute, better family time, health, and the ability to live away from expensive cities.
  • Others, including the author, report real downsides: loss of ambient awareness, harder mentoring and pairing, more conflict/misunderstanding, loneliness, and home–work boundary issues.
  • Several argue the real variable is culture and tools (IRC/Slack norms, public vs DM chat, surveillance/HR fears, notification overload), not remote itself.
  • Repeated insistence that preferences are highly individual; trying to universalize one mode (RTO or remote) is seen as unfair and often politically charged.

Cyberharassment / Lobsters Incident

  • Commenters dig up the referenced thread about an undisclosed AI‑generated PR.
  • One side views the response as legitimate shaming of deceptive behavior that burdened maintainers.
  • Others find the ban and tone excessive or at least puzzling; whatever the intent, the episode clearly had a strong chilling effect on the author.

AI, Career Anxiety, and Industry Culture

  • Some fear we’re “engineering ourselves into obsolescence” and feel unsafe even voicing AI skepticism at work.
  • Others are unconcerned, seeing future roles as “tech leads for AI agents” or simply willing to pivot careers.
  • Broader complaints surface about cargo‑cult Agile, metrics gaming, shallow management, and the pressure to appear encyclopedic rather than openly ask “stupid” questions.

Airbus A320 – intense solar radiation may corrupt data critical for flight

Incident and Scope of the Airbus Action

  • Discussion links the fleet action to JetBlue 1230 on Oct 30: a sudden uncommanded pitch-down, injuries, and emergency diversion.
  • European regulators describe a vulnerability where an ELAC (Elevator Aileron Computer) fault could command elevator movement strong enough to risk structural limits.
  • Not all A320-family aircraft are affected; only a subset with specific ELAC hardware/firmware combinations.
  • Fix appears to be a software rollback to an earlier ELAC version plus added error checking and automatic restart of the failing component.

Radiation Type and Likely Mechanism

  • Commenters converge on “cosmic rays / solar particle events” (single-event upsets) rather than ordinary sunlight.
  • A coronal mass ejection or elevated geomagnetic activity is suspected, but exact event–flight correlation is unclear.
  • High-altitude aircraft are acknowledged to see much more radiation than ground systems; some argue they should be closer to space-grade “rad-hard” design.

Hardware vs Software Fix and SEU Mitigation

  • Some are uneasy that a software change is addressing what appears to be a hardware susceptibility.
  • Others note software can add redundancy (multiple copies, checksums, self-supervision, watchdogs, automatic restart) and is a valid way to turn silent data corruption into detectable failures.
  • Thread references traditional measures: ECC/EDAC, triple modular redundancy, voting logic, lockstep CPUs, disabling caches, memory scrubbing, and rad-hard components.
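The redundancy measures listed above share one core pattern: compute a value more than once and vote. A toy 2-of-3 majority voter (a hypothetical sketch of the concept, not ELAC internals) looks like:

```python
def tmr_vote(a, b, c):
    """Return the 2-of-3 majority of three redundant copies of a value.

    A single-event upset in one copy is outvoted by the other two;
    three-way disagreement becomes a detectable failure instead of
    silent data corruption.
    """
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise ValueError("no majority: all three redundant copies disagree")

# A bit flip in one copy is masked by the other two:
assert tmr_vote(0x2A, 0x2A, 0x6A) == 0x2A
```

This is the same voting logic that, at the hardware level, triple modular redundancy and lockstep CPUs implement in silicon.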

Redundancy, Legacy Designs, and Certification Constraints

  • Older Airbus flight computers and ADIRUs were designed in the 1990s, sometimes without EDAC; later variants added it.
  • Multiple independent computers and sensor triplexing are used so a single erroneous unit can be outvoted or rejected, but past incidents show algorithmic edge cases where two bad sensors can dominate.
  • Strong motivation to reuse certified hardware and software for decades; changing flight computers triggers expensive recertification and complex pilot training issues, so evolution is incremental.

Operational, Safety, and Perception Issues

  • Groundings have caused missed connections, overnight stays, and significant disruption; passengers are told planes need a software update, which some find unsettling.
  • Several argue immediate grounding is rational risk management and reputational protection, especially contrasted with Boeing’s history.
  • Commenters emphasize wearing seatbelts at all times due to unpredictable turbulence and control issues.
  • Some skepticism remains about whether radiation alone explains an issue apparently unique to this specific ELAC version; EMI or design regressions are suggested but unresolved.

Flight disruption warning as Airbus requests modifications to 6k planes

Current Airbus Issue and Solar Radiation Explanation

  • Discussion centers on Airbus’s finding that intense solar radiation corrupted data in an ELAC flight‑control computer, causing a sudden altitude drop on a JetBlue A320 and triggering a global directive affecting ~6,000 aircraft.
  • Many note it’s positive that action is being taken before a crash, contrasting implicitly with other manufacturers, while also stressing Airbus’s own history of serious incidents.
  • Some are skeptical of “solar radiation” as a catch‑all explanation and want more technical detail and reproducible evidence.

Software vs Hardware Mitigation

  • A large subset of aircraft will be fixed via a software update that is reportedly a rollback; ~2,000 need hardware modifications.
  • Commenters debate how software can mitigate radiation: ideas include better checksums, voting algorithms, watchdogs, and redundancy rather than shielding alone.
  • Others suggest alternative root causes such as power‑bus glitches, solid‑state relay failures, or bugs in failover/voting logic between redundant computers.
  • There is discussion about old designs lacking ECC/EDAC and newer hardware being more hardened, but legacy fleets will remain vulnerable for years.
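Turning silent corruption into a detectable failure, as the checksum suggestion above describes, can be as simple as storing a checksum alongside each value. CRC-32 via Python's zlib is purely illustrative of the EDAC idea, not what the flight computers use:

```python
import zlib

def store(value_bytes):
    """Store a value together with its CRC-32 so corruption is detectable."""
    return (value_bytes, zlib.crc32(value_bytes))

def load(stored):
    """Return the value, or raise if a bit flipped since store()."""
    value_bytes, crc = stored
    if zlib.crc32(value_bytes) != crc:
        raise ValueError("checksum mismatch: corruption detected")
    return value_bytes

rec = store(b"\x00\x10")          # e.g. some data word
assert load(rec) == b"\x00\x10"   # intact: passes
flipped = (b"\x00\x90", rec[1])   # simulate a single-event upset
# load(flipped) raises ValueError instead of returning bad data
```

Real EDAC schemes go further and correct single-bit errors rather than just detecting them, but the detect-then-restart pattern matches the reported ELAC fix.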

Pilot Error vs System Design (AF447, Qantas 72, etc.)

  • Long subthread revisits previous Airbus accidents: stall events, mode changes when sensors fail, and independent sidesticks with no tactile cross‑feedback.
  • One side emphasizes multiple documented crew errors and CRM breakdowns; the other argues that confusing automation modes, poor HCI, and hidden complexity made “pilot error” almost inevitable.
  • The idea that accidents result from interacting technical, organizational, and human factors, not just “bad pilots,” is strongly argued.

QA, Redundancy, and Fly‑by‑Wire

  • Aerospace software QA is described as far more rigorous and well‑funded than typical tech, but still bounded by assumed environmental ranges and commercial pressure.
  • Some express unease that fly‑by‑wire places software between pilots and control surfaces; others note mechanical systems also fail and that Airbus uses triply redundant, dissimilar computers.

Radiation Risk to Passengers and Crew

  • Side discussion notes that passengers face minimal additional cancer risk, but frequent‑flying aircrew have measurably higher risk from high‑altitude radiation.

How good engineers write bad code at big companies

Architectural patterns & overengineering

  • Some commenters push back on “drive-by attacks” on CQRS/DDD/TDD, arguing they’re useful patterns when applied pragmatically, but often turned into dogma that drives needless complexity.
  • The article is seen by some as conflating “pure engineering” (reducing systems to cohesive concepts) with “architecture astronaut” overdesign, which they view as opposites.

Primary causes of bad code

  • Many argue the main driver is rushing to meet deadlines and shifting or ill-defined requirements, not lack of skill or unfamiliarity with the codebase.
  • Short-term incentives (shipping features, promotions, looking good to management) routinely beat long-term code health, even when this hurts delivery speed later.
  • Some claim the worst code comes from unstable foundations and constant requirement changes; others say that’s exactly where responsible engineers should push back, but doing so is often punished.

Seniority, expertise & tenure

  • Debate over whether a good senior can “avoid bad code from day one”: critics say “good” is contextual and deep understanding of a large codebase simply takes time.
  • Institutional knowledge is seen as undervalued; frequent reorgs, fungibility, and layoffs destroy expertise and encourage shallow, duct-tape solutions.
  • Some dispute the article’s “1–2 year tenure” framing as misleading, noting growth effects and citing longer real tenures in some big-tech teams.

Business incentives vs code quality

  • Repeated theme: management doesn’t know how to assess code quality, only visible outcomes (features, metrics), so maintainability and refactoring are deprioritized.
  • Engineers who ship quick-and-dirty code often get rewarded; those who invest in cleaning things up struggle to justify it unless they can tie it directly to revenue, churn, or roadmap acceleration.
  • Others argue “bad” or “ugly” code is often economically rational “good enough for now,” analogous to cheap but acceptable physical products.

Process, culture & management

  • Many anecdotes of refactors blocked (“too late to change”), legacy schemas frozen due to fear of breaking changes, and architecture decisions never revisited.
  • Code review culture is criticized for fixating on style/naming while ignoring design/requirements and big-picture correctness.
  • AI assistance is seen by some as amplifying the volume of syntactically correct but conceptually poor code, especially in the hands of “tactical tornadoes.”

Craft, motivation & burnout

  • Several commenters describe a trajectory from caring deeply about code quality to nihilism: realizing the organization neither rewards nor protects that effort.
  • There’s tension between seeing programming as a craft (like fine cooking) versus as production work (bricklaying under time pressure); most agree some compromise is inevitable, but feel current incentives are skewed heavily against long-term quality.

Imgur geo-blocked the UK, so I geo-unblocked my network

Archive.org and UK blocking

  • Commenters disagree on whether archive.org is “blocked in the UK.”
  • Consensus: it’s generally reachable, but many UK mobile and PAYG broadband packages enable “adult content” filters by default, which incidentally block archive.org.
  • Unblocking usually requires age verification with the ISP (credit card, online toggle, or phone/shop visit), and specifics vary widely by provider and contract type.

UK content controls, Online Safety Act, and public opinion

  • Several posts describe long-standing UK practice of default adult-content blocking on mobile networks, justified as cheap, broad parental control.
  • There’s debate whether Imgur’s withdrawal is due to the Online Safety Act (age checks) or pre-existing data protection rules enforced by the ICO.
  • Some argue this is straightforward child-data protection; others see it as part of a broader authoritarian drift and “landslide” of censorship.
  • Polls reportedly show strong support for child-protection laws, but commenters note such polling hides nuance (e.g., people support “protect kids” but not incidental blocking of benign sites like Imgur).

Why Imgur left and how they block

  • Imgur is geoblocking UK traffic and also appears to block many VPN exit IPs, sometimes returning misleading “over capacity” errors instead of a clear geoblock message.
  • This makes casual circumvention with off-the-shelf VPNs unreliable.

Network-level workarounds and split tunneling

  • Many readers have built similar setups:
    • Policy-based routing (PBR) on OpenWRT, UniFi, MikroTik, OPNsense/pfSense, etc., to send only specific domains/IPs via VPN or WireGuard.
    • Use of DNS tricks, nftables/ipset, or nginx SNI inspection to route by hostname.
    • Raspberry Pi or small PC acting as a VPN router so all LAN devices benefit without per-device configuration.
  • Some note router-based PBR on raw IPs is brittle with CDNs; hostname-aware proxies are more precise.
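A common shape for the router-based split tunnel described above, sketched for a Linux router (the set name, mark value, table number, and wg0 interface are arbitrary illustrations, not a specific vendor's config):

```shell
# 1. A set of destination IPs that should go via the VPN.
ipset create vpn_dests hash:ip timeout 3600

# 2. Mark packets headed to those destinations.
iptables -t mangle -A PREROUTING -m set --match-set vpn_dests dst \
         -j MARK --set-mark 0x1

# 3. Route marked traffic through a separate table whose default
#    route is the WireGuard interface.
ip rule add fwmark 0x1 table 100
ip route add default dev wg0 table 100

# 4. Populate the set from DNS answers so routing is hostname-aware,
#    e.g. in dnsmasq.conf:
#    ipset=/imgur.com/vpn_dests
```

Feeding the set from DNS answers (step 4) sidesteps the brittleness of routing on raw CDN IPs that commenters warn about.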

Practical issues, limits, and side effects

  • Solutions stop working when you leave home unless you also VPN back into your own network, leading to “double VPN” scenarios.
  • IPv6 support is a weak point in some consumer gear (e.g., UniFi’s WireGuard), complicating full coverage.
  • Several users comment on the long history and fragility of free image hosts: when services shut down, forums become graveyards of broken embeds, and Imgur’s current behavior is seen as another turn of that cycle.

28M Hacker News comments as vector embedding search dataset

Permanence of HN Comments & Desire for Deletion

  • Several commenters wish there were an account/comment delete option and note they would have written differently had they known how reusable the data would be.
  • Others stress that HN has long been widely scraped; once posted, comments are effectively permanent and likely embedded in AI models and countless private archives.
  • Some push back on the “carved in granite” metaphor by citing link rot, but others argue both can be true: original sources vanish while many independent copies persist.

Privacy, GDPR, and “Right to be Forgotten”

  • Multiple people ask how to get their comments removed from third‑party datasets or tools built on HN data.
  • GDPR is cited as giving EU users a strong legal basis to demand deletion, though enforcing this across all copies is seen as practically impossible.
  • Some call HN’s “no deletion” stance a serious privacy breach and likely a GDPR violation, though untested in court.
  • There is skepticism that any large company truly hard-deletes data (especially from backups); others note GDPR risk makes willful non‑deletion of EU data unlikely.

Licensing, Terms of Use, and Commercial Use

  • HN’s terms give Y Combinator a broad, perpetual, sublicensable license over user content.
  • People question whether a third‑party dataset vendor is “affiliated” enough to rely on that, and whether commercial derivative use is allowed given HN’s stated bans on scraping and commercial exploitation.
  • Debate ensues over whether embeddings are legally “derivative works” and how that differs from human memory or personal note‑taking.
  • Some accept that posting on a third‑party platform inherently means ceding control via contract; others emphasize user expectations and fairness rather than strict legality.

Reactions to AI / Dataset Use

  • Some feel violated or socially “betrayed” that their conversational history is now trivially searchable and used to train/benchmark models.
  • Others shrug, arguing public text is inherently open to any form of processing, including AI training, and even relish their comments’ tiny influence on future models.
  • A few say LLMs reduce their motivation to post helpful content since it now benefits firms they dislike more than individual humans.

Technical Details: Size, Compression, and Embedding Models

  • Commenters confirm that ~55 GB Parquet for 28M comments plus embeddings is plausible; raw text for all HN posts can be under ~20 GB uncompressed and single‑digit GB compressed.
  • Several note how little storage text actually needs and discuss text as lossy “concept compression.”
  • There’s interest in the concrete hardware and costs: one similar HN embedding project reports daily updates on a MacBook and historic backfill on a cheap rented GPU; hosting costs are dominated by RAM for HNSW indexes.
  • Advanced users criticize the choice of all‑MiniLM‑L6‑v2 as outdated and recommend newer open‑weights embedding models (e.g., EmbeddingGemma, BGE, Qwen embeddings, nomic-embed-text), with trade‑offs around size, speed, context length, and licensing.
  • Others are focused on lightweight client‑side models (<100 MB) and share candidate models and leaderboards for comparison.

Search, Semantics, and Potential Applications

  • Some ask for comparisons between vector search and “normal” text search; BM25 is cited as the standard baseline in retrieval papers.
  • Ideas are floated for UI features like “find similar sentences” and semantic threading of discussions to reveal when the same debate has occurred before.
  • Prior work using stylometry to link alternate accounts from HN writing style is mentioned as a cautionary example of how analyzable the corpus already is.
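For reference, the BM25 baseline mentioned above is small enough to sketch from scratch; k1=1.5 and b=0.75 are conventional defaults, and the toy documents are invented:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score tokenized documents against a tokenized query with BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # Document frequency of each query term.
    df = {t: sum(1 for d in docs if t in d) for t in set(query)}
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if df[t] == 0:
                continue  # term appears in no document
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [["vector", "search"], ["full", "text", "search"], ["cat"]]
# the document containing both query terms ranks first
scores = bm25_scores(["text", "search"], docs)
```

Vector search is typically evaluated against exactly this kind of lexical scorer, which is why papers report it as the baseline.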

Molly: An Improved Signal App

Android multi‑device support

  • Major draw: Molly allows linking two Android devices (e.g., phone + tablet, or two phones) the way Signal allows desktop/iPad links.
  • Users describe this as solving a long‑standing “artificial” limitation in official Signal, especially for migrating family from apps like Viber that already support multi‑device on Android.
  • Some confusion: registering the same number as a primary on both apps logs one out; Molly must be added as a linked device to keep Signal active.

Local database & at‑rest security

  • Molly encrypts the local database with a user‑supplied password and can lock/unlock it on a timer, plus wipe RAM on lock.
  • Supporters see this as fixing Signal’s “regressions” around at‑rest security and offering defense in depth against device seizure or forensic tools, especially at borders.
  • Critics argue this is an incoherent boundary: if an attacker has a rooted or compromised phone, they can capture keystrokes or unlock the app anyway; they see more friction than real protection for most users.
  • Broader debate about whether Signal’s reliance on OS‑level encryption is sufficient vs the need for app‑level DB encryption.

Push notifications, blobs, and FOSS variants

  • Molly has two variants: one using Firebase (FCM) and one FOSS build using UnifiedPush/websockets, avoiding Google Mobile Services and other “proprietary blobs.”
  • Some view Google as part of their threat model and prefer Molly/FOSS; others note UnifiedPush limitations (e.g., with multiple devices) and potential battery impact.

Backups, server control, and federation

  • Molly is praised for local backup options and the possibility of using private Signal‑compatible servers.
  • Signal’s own server code is open source, but there is no federation with official servers and likely never will be; Molly can talk either to official Signal or compatible alt‑servers, but not both at once.
  • Some see Molly as improving “digital sovereignty”; others note that any third‑party fork adds supply‑chain risk (new signer, extra trust).

Trust, centralization, and threat models

  • Skeptics worry about trusting a small fork for E2EE and mention research showing Signal’s notification behavior can leak fine‑grained usage patterns; Signal is criticized for slow response.
  • Defenders stress Signal’s open source, non‑profit status, minimal metadata design, and public stance against backdoors; they argue no decentralized E2EE system yet matches its privacy guarantees.

UX, features, and update policy

  • Several users complain about Signal’s design, lack of features (e.g., live location, richer multi‑device, web client), forced updates, and phone‑number requirement.
  • Others report both Signal and Molly as stable and sufficient, noting that frequent updates are expected for a high‑security app, even if changelogs are sparse.
  • Molly’s UI is reported to be essentially Signal with a different theme, despite the project’s “design” marketing; the lack of screenshots on the Molly site is widely criticized.

Codex, Opus, Gemini try to build Counter Strike

Overall reaction & nostalgia

  • Many found the experiment fun and nostalgic, evoking CS 1.6 / Source era and old-school Quake/Doom aesthetics.
  • Several commenters emphasize that, despite the Counter-Strike framing, the result is really a very simple, Minecraft-looking generic FPS, far from a real CS-like game.
  • Some enjoyed actually playing the demos, mentioning bugs (e.g., farming kills by repeatedly shooting dead players) and getting repeatedly insta-killed.

Technical quality & limitations

  • Multiple commenters say this is roughly what a junior dev could produce in their first weeks: a bare-bones demo glued together from three.js and a backend, with little attention to architecture, netcode, or competitive FPS design.
  • Shooting and networking are implemented naively (send “shot” events and directly reduce HP), which experienced game devs note is nothing like how real competitive shooters handle hit detection, latency, or prediction.
  • Missing features: lobbies, robust physics, proper game modes, cheat prevention, and production-grade engineering; it’s called a “pre-prototype” at best.
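The naive client-trusting scheme critiqued above (client sends a "shot" event, server subtracts HP) can be contrasted with minimal server-side validation; the range limit, damage value, and state shape here are invented for illustration:

```python
import math

MAX_HIT_RANGE = 50.0  # illustrative weapon range in world units

def apply_shot(state, shooter_id, target_id):
    """Server-authoritative hit handling: validate before mutating HP.

    A naive server applies the client's "shot" event outright; here the
    server at least checks that both players are alive and plausibly in
    range, which closes the 'farm kills on dead players' bug.
    """
    shooter, target = state[shooter_id], state[target_id]
    if shooter["hp"] <= 0 or target["hp"] <= 0:
        return False  # dead players can't shoot or be re-killed
    if math.dist(shooter["pos"], target["pos"]) > MAX_HIT_RANGE:
        return False  # reject implausible hits instead of trusting the client
    target["hp"] = max(0, target["hp"] - 25)
    return True
```

Real competitive shooters go much further (server-side hit scan, lag compensation, client prediction), but even this check is more than the demos reportedly do.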

Copyright, licensing, and LLM training

  • A shader snippet referencing “Preetham” raised suspicion of LLM plagiarism; investigation shows it originates from three.js examples (MIT-licensed) and/or common implementations of a 1999 daylight model.
  • This sparks a broader debate:
    • Concern that LLMs regurgitate licensed or unlicensed code without notice, creating business risk.
    • Counterarguments that small algorithmic snippets are hard to meaningfully copyright, and this particular case wasn’t LLM output but a bundled dependency.
    • Discussion of derivative works, court rulings on generated output, and fear of copyright trolls versus the practical limits of enforcement.

Impact on developers & workflow

  • Some developers feel depressed that LLMs may remove the “fun” parts of coding, leaving review and bug-chasing in low-quality “it-compiles” codebases.
  • Others say LLMs have made programming more enjoyable and productive, offloading boilerplate, plumbing, and scaffolding so they can focus on design and harder problems.
  • Consensus that LLMs currently behave like junior developers: useful with guidance, but far from autonomous or production-safe.

Economics, usefulness & moving goalposts

  • Skeptics highlight the high cost of complex agentic workflows (e.g., multi-thousand-dollar research tasks) and call these outputs “costly slop” with unclear economics.
  • Supporters give concrete examples where LLMs saved weeks or months (e.g., generating UI mockups, reviving old projects, fixing legacy builds).
  • Several note the rapid “moving goalposts”: what was recently impressive (a crude FPS built from scratch) is now quickly dismissed as trivial or insufficient.

Model comparisons & benchmarks

  • Some claim Gemini’s version is worst and benchmark marketing overstates its real-world performance; others say it actually feels better to play than some alternatives, aside from odd graphics.
  • Discussion of “thinking levels” / parameters leads to debate about whether such knobs are genuine capability or just overcomplication.

The unexpected effectiveness of one-shot decompilation with Claude

LLMs as Decompilation and RE Tools

  • Many commenters report strong results using Claude and other LLMs (especially with Ghidra/IDA) to:
    • Clean up decompiled C, infer function purposes, and identify assembly tricks.
    • Comment JIT output or highly optimized/minified code, and compare compiler outputs.
  • Gemini is noted as also good at assembly and bytecode-level tasks; Codex is seen as more tuned for mainstream dev work.

Workflows, Heuristics, and Tooling

  • The post’s “headless loop + heuristics + compiler match” approach is praised as a concrete, useful pattern.
  • Key techniques:
    • Work function-by-function when possible; whole-file input is sometimes needed when registers are reused unpredictably.
    • Use a “give up after N attempts” heuristic to cap wasted tokens.
    • Exploit large context windows to analyze wide code regions and trace flows.
  • Some want more structured, step‑by‑step tutorials and tighter grammars for valid C, but others say simple “compile + feed errors back” loops are enough.
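The "compile + feed errors back" loop with a give-up heuristic can be sketched as below; `ask_model` and `try_compile` are stand-ins for the actual LLM call and compiler invocation, not the post's code:

```python
def decompile_with_retries(asm, ask_model, try_compile, max_attempts=5):
    """Ask for C, compile it, feed errors back, and cap the attempts.

    ask_model(asm, feedback) -> candidate C source (stand-in for an LLM)
    try_compile(src)         -> None on success, else an error string
    Returns the first compiling candidate, or None once max_attempts is
    exhausted -- the "give up after N attempts" cap on wasted tokens.
    """
    feedback = None
    for _ in range(max_attempts):
        candidate = ask_model(asm, feedback)
        feedback = try_compile(candidate)
        if feedback is None:
            return candidate
    return None
```

The post's approach adds compiler-output matching on top of this; the loop itself is the simple part commenters say is "enough."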

Limits, Complexity, and Non‑Expert Use

  • Commenters warn that one‑shot reverse engineering for non‑experts is still weak; you must give the model tight constraints, goals, and validation.
  • LLMs often misestimate task difficulty and duration—both over‑ and under‑shooting.
  • There’s debate over what “one‑shot” means (single prompt vs single example vs non-interactive loop).

Documentation and Developer Workflow

  • Many see LLMs as excellent for generating “how it works” docs, translating and synthesizing sparse or foreign‑language documentation.
  • Skepticism about auto‑invented rationales (“why it’s this way”); human review is desired.
  • Some argue LLMs reduce the need for human docs; others frame docs as an “error-correcting code” to detect mismatches between intent and implementation.

Legal, Licensing, and Privacy Concerns

  • Strong thread on distinctions between “open source” vs “source available” and how decompilations are derivative works with their own, but constrained, licensing.
  • Clean‑room reverse engineering is contrasted with distributing decompiled code.
  • Several raise concerns about uploading copyrighted binaries to cloud LLMs: potential evidence trails, DMCA/fair‑use ambiguity, and jurisdictional risks.

Decompilation, Obfuscation, and the Future of Software

  • Some speculate that near‑trivial decompilation could make most binaries effectively “source available,” provoking shifts to cloud‑only or hardware‑locked distribution.
  • Others expect counter‑moves: LLM‑assisted obfuscation or exotic schemes (e.g., homomorphic VMs) to make analysis harder.
  • There’s disagreement on timelines: some think “everything decompilable” is far off; others see it as inevitable and beneficial for preservation.

Game Preservation and Retro Computing

  • Multiple examples of LLM‑assisted ports and analysis: classic BIOSes, Prince of Persia on Apple II, and older PC/console games.
  • Matching original binaries requires reconstructing old toolchains and flags; flakiness and inter‑function dependencies often prevent 100% exact matches, but “99%+ matching, 100% functional” is common.

Bringing Sexy Back. Internet surveillance has killed eroticism

Whether Erotic Culture Has “Died”

  • Many argue eroticism hasn’t vanished; it’s more overt and commercial than ever (porn, OnlyFans, sexualized games, NSFW subcultures).
  • Others agree with the essay’s thesis that private erotic connection and flirtation have become riskier, even as sexualized media flourishes.
  • A separate camp says they actually want less sexual content in everyday life, finding “sex sells” marketing numbing and exhausting.

“Too Online” and Bubble Effects

  • A recurring critique is that the essay reflects a very specific, terminally‑online, progressive/queer social bubble, not society at large.
  • People report living normal offline lives where erotic thoughts, mild flirting, or saying “my hairdresser is hot” are not socially catastrophic.
  • Several note prior experiences of mistaking niche Reddit/Twitter norms for “what Americans think,” then discovering offline attitudes were very different.

Surveillance, Shame, and Cancel Culture

  • Many resonate with the fear of being publicly shamed via screenshots, clips, or call‑out posts; this produces self‑censorship and anxiety around sex, jokes, and even compliments.
  • Others insist there is no centralized “villain,” just platforms optimized for outrage and engagement, plus our own appetite for validation.
  • Strong disagreement over “cancel culture”: some see it as a real climate of mob punishment that now spills into intimate life; others call it an exaggerated or partisan framing.

The Hairdresser Anecdote & Private Fantasy

  • The friend’s demand that the author apologize to the hairdressers for private erotic thoughts is widely seen as absurd and even creepier than the fantasy.
  • Many draw a hard line between thoughts vs actions: internal arousal is natural; the ethical boundary is how you behave and whether you drag unaware workers into it.
  • Some note useful distinctions between “intimate” versus “sexual,” and that professional touch (physio, hair, massage) can feel intimate without being exploitative.

Broader Cultural & Generational Shifts

  • Commenters mention: more cautious workplace interactions, location‑tracking in relationships, loss of sex scenes in mainstream film, and youth having more porn but often less partnered sex.
  • Several link current moral panics about sex (on both left and right) to older American puritanism, now expressed through new ideological lenses and online enforcement.

So you wanna build a local RAG?

Semantic chunking and document structure

  • Several comments stress that embedding whole documents hurts performance; semantic chunking plus added context about each chunk’s role in the doc can dramatically improve retrieval.
  • Anthropic-style “contextual retrieval” (generate summaries/metadata around chunks) is cited as particularly effective.
  • Some wonder if GraphRAG / knowledge-graph approaches could better capture cross-document structure, similar to personal knowledge tools.
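A minimal sketch of the "contextual retrieval" idea discussed above: before indexing, each chunk is prefixed with a short note about its place in the document, and that combined text is what gets embedded. `generate_context` here is a trivial stand-in; a real pipeline would have an LLM write the chunk summary, and the document/section names are invented for illustration.

```python
# Sketch of contextual chunk preparation: embed each chunk together with
# a short description of where it sits in the source document.

def generate_context(doc_title: str, section: str, chunk: str) -> str:
    # Stand-in for an LLM-generated summary of the chunk's role;
    # here we only state its location in the document.
    return f"From '{doc_title}', section '{section}':"

def contextualize(doc_title: str, section: str, chunk: str) -> str:
    return generate_context(doc_title, section, chunk) + "\n" + chunk

text = contextualize(
    "Employee Handbook",
    "Remote Work Policy",
    "Requests must be approved by a manager at least two weeks in advance.",
)
# `text` (context line + chunk) is what gets embedded and indexed,
# instead of the bare chunk.
```

The point is that a bare chunk like "Requests must be approved..." is ambiguous out of context; the prefix lets the retriever match queries about remote-work policy even when the chunk itself never uses those words.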

Lexical vs semantic search (vectors) debate

  • One camp argues you can skip vector DBs: full-text search (BM25, SQLite FTS, grep/rg, TypeSense, Elasticsearch) plus an LLM-driven query loop often works “well enough,” is cheaper, simpler, and avoids chunking issues.
  • Others report that pure lexical search degrades recall, especially when users don’t know exact terminology; multiple search iterations inflate latency.
  • A common framing: lexical search gives high precision / lower recall; semantic search gives higher recall / lower precision.
  • Many advocate hybrid systems (BM25 + embeddings, fusion, reranking) as the current best practice, though added engineering complexity is questioned.
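The hybrid approach can be sketched with reciprocal rank fusion (RRF), a common way to merge a lexical and a semantic result list without having to calibrate their incompatible score scales. The document IDs and the two result lists below are made up; in practice they would come from a BM25 pass and an embedding-similarity pass.

```python
# Reciprocal Rank Fusion (RRF): combine several ranked lists of doc IDs
# into one, rewarding documents that rank well in multiple lists.

def rrf_fuse(rankings, k=60):
    """Each doc scores sum(1 / (k + rank)) over the lists it appears in;
    k=60 is the constant from the original RRF paper."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical outputs of a BM25 pass and an embedding pass:
bm25_hits = ["doc3", "doc1", "doc7"]
vector_hits = ["doc1", "doc9", "doc3"]

fused = rrf_fuse([bm25_hits, vector_hits])
# doc1 and doc3 appear in both lists, so they rise to the top.
```

Because RRF only uses ranks, it sidesteps the question of whether a BM25 score of 12.3 is "better" than a cosine similarity of 0.81, which is part of why it shows up as a default fusion method in hybrid-search stacks.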

Evaluation and real-world usage

  • Strong emphasis on proper evals: build test sets of [query, correct answer], use synthetic Q&A, and compare BM25 vs vector vs hybrid configurations.
  • People note dev-created queries are biased (they “know the docs”); real users phrase things differently, revealing much poorer performance.
  • Some propose automated pipelines and LLM judges to continuously score RAG changes.
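The eval advice above can be reduced to a small harness: a test set of (query, relevant doc) pairs and a recall@k score per retriever configuration. The two toy retrievers and the test queries are placeholders; a real setup would plug in the actual BM25, vector, and hybrid pipelines.

```python
# Minimal retrieval eval: score each retriever as recall@k over a test
# set of (query, relevant_doc_id) pairs.

def recall_at_k(retriever, test_set, k=5):
    hits = 0
    for query, relevant_id in test_set:
        if relevant_id in retriever(query)[:k]:
            hits += 1
    return hits / len(test_set)

# Toy retrievers: each maps a query to a ranked list of doc IDs.
def lexical(query):
    return ["doc2", "doc5", "doc1"]

def semantic(query):
    return ["doc1", "doc2", "doc9"]

test_set = [
    ("how do I reset my password?", "doc1"),
    ("billing cycle start date", "doc9"),
]

for name, retriever in [("lexical", lexical), ("semantic", semantic)]:
    print(name, recall_at_k(retriever, test_set, k=3))
```

Swapping in real user queries (rather than dev-written ones) for `test_set` is exactly the step commenters flag as revealing much poorer performance than internal testing suggests.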

Local models and infra

  • One view: running a local LLM is overkill; keeping only docs and vector DB local is already a big win.
  • Others say consumer hardware (16–32GB GPUs or high-RAM laptops) can run substantial local models and that medium-sized orgs can self-host if they value privacy.

Practical challenges and tooling

  • Document parsing (especially PDFs with tables, images, multi-page layouts, footnotes) is described as a major unsolved pain point, often more limiting than retrieval method.
  • Various tools/stacks are mentioned: llama.cpp-based apps, Elasticsearch, Chroma, sqlite-vec, local RAG GUIs, Nextcloud + MCP, and open-source RAG frameworks with benchmarks.
  • Some highlight language issues: many embedding models are English-only; multilingual or language-specific models and leaderboards are needed for non-English RAG.

Airloom – 3D Flight Tracker

Overall reception

  • Many commenters call the 3D visualization “amazing”, “mind-blowing”, and “dangerously” engaging, especially when watching takeoffs/landings and long trails over time.
  • Several mention bookmarking it, running it on a second monitor, or wanting it as a screensaver.

Visualization & UX feedback

  • Altitude scaling: the default vertical scale makes climbs look too steep; reducing it to 1.0 improves realism. An early bug where rescaling left “stair-step” trails has been fixed.
  • Planes sometimes appear to clip through terrain or hover above runways; attributed to terrain meshes, pressure-altitude vs MSL handling, and edge cases in ground elevation.
  • Airspace overlays are useful for pilots but sometimes fail to load after searching.
  • Requests: smoother motion via interpolation, hiding ground when camera goes below it, option to remove glow effect, clearer side-panel controls, and quicker pinch-zoom on trackpads.

Maps, airports & performance

  • Initial airport selection is a random set of ~20 US airports; users want defaults that are busier and/or currently in daylight, plus a “near me” mode (now supported via ?airport=nearme).
  • Strong desire for higher‑resolution map tiles and optional FAA sectional charts; current provider has rate‑limit and outage issues, prompting discussion of self-hosting or paid tiles and low-detail fallbacks.
  • Performance is generally smooth, but some report periodic stutters on certain Macs and very slow pinch-zoom.

Use cases & comparisons

  • Compared to FlightAware and ADSBexchange, this is seen more as a beautiful overview/visualization or for replaying interesting flights, not for detailed operational tracking.
  • Ideas include integration with LiveATC, VR/3D versions, educational uses for student pilots, and special flights (Zero‑G, fighter training, balloons).

Project direction & tooling

  • Built by a solo developer in spare time; people discuss monetization, premium features (saved presets), and naming.
  • Some want a native desktop app; others warn against Electron due to bloat and advocate native toolkits or sticking with the web app.

Stellantis Is Spamming Owners' Screens with Pop-Up Ads for New Car Discounts

Backlash to In‑Car Ads and Screens

  • Many commenters see Stellantis’ pop-up ads as a hard “no‑buy” line, adding the brand to existing personal boycotts (alongside others that push ads on devices).
  • People object to having ads on long‑term, high‑value items (cars, fridges, appliances), likening it to someone slapping a billboard on your house.
  • Several argue that receiving a new, intrusive ad after purchase should reopen the return window or trigger lawsuits/class actions.

Connectivity, Tracking, and OTA Updates

  • Strong concern that embedded SIMs and telematics are used to track location and driving behavior, potentially sold to insurers or other third parties.
  • Some owners report explicitly disabling telematics/OTA and accepting trade‑offs (e.g., slower dealer updates, recall fixes taking hours).
  • OTA updates are viewed by many as net‑negative: fear of bricked cars, updates blocking starts, and lack of clear liability when an update breaks something.
  • A minority defend OTA as valuable (safety fixes, feature updates) and note disabling connectivity may void warranties or break features.

Workarounds, Hacking, and Right to Repair

  • Popular advice: locate and disable the cellular modem or telematics module, though people note future vehicles may resist this or refuse to operate.
  • Stories of infotainment “jailbreaks” (e.g., to unlock CarPlay) show it’s technically possible but non‑trivial.
  • Stellantis’ “secure gateway” and similar systems are criticized as locking out diagnostics and independent repair, feeding calls for right‑to‑repair and regulation.

Car Quality, Brands, and Subscriptions

  • Stellantis/Jeep are repeatedly labeled unreliable and overpriced, with anecdotes of extensive repairs at relatively low mileage.
  • Others counter that such repairs over 10–12 years can be “normal” for that segment, though there’s broad agreement that modern Jeeps are problematic.
  • Subscription‑gated features (e.g., remote start, some Toyota/Subaru services) are heavily disliked; many say they’ll drive older cars “into the ground” rather than pay.
  • Remote start itself triggers debate: some find it basic winter/summer comfort; others see it as wasteful, polluting idling.

Old Cars, Manuals, and Resisting the Trend

  • Many want simpler, pre‑connected cars: late‑90s/2000s are cited as the “peak” era; some plan to maintain older vehicles indefinitely or buy used.
  • Reverse‑camera mandates are noted as making screens nearly universal, pushing screen‑averse buyers to the used market.
  • Long side‑thread on manual vs automatic: manuals praised for control, anti‑theft, and involvement; others say modern automatics/DCTs are as efficient or better and see manual enthusiasm as cultural rather than practical.

Regulation and Collective Action

  • Multiple comments argue that only regulation (privacy rules, ad bans in vehicles, right‑to‑repair laws, limits on arbitration clauses) will stop these practices.
  • A few urge concrete political engagement—contacting representatives, building campaigns—rather than relying on individual hacks or passive complaining.

AI Adoption Rates Starting to Flatten Out

Interpreting the “flattening” claim

  • Many commenters argue the headline is overstated or wrong given the charts: small-firm adoption is clearly rising; large-firm adoption shows recent declines but not a clear long‑term trend.
  • Others counter that several consecutive months of decline, especially in larger firms, does start to look like a real trend, absent evidence of a transient shock.
  • Some suggest a more accurate framing would be “stagnation” or “plateau,” especially for big companies.

Definitions and data quality

  • Strong confusion and disagreement over “adoption” vs “adoption rate”:
    – Some read “rate” as a time derivative; others as “percentage of firms using AI.”
    – The title, axis labels, and body text are seen as inconsistent with one another.
  • Criticisms of the charts: no y‑axis label, misleading use of “rate,” 3‑month moving average hidden in a footnote, and missing grid lines.
  • The two datasets (Census vs Ramp) differ by ~3x, raising questions about representativeness and methodology.
  • Census question wording (“use AI in producing goods or services”) may exclude a lot of incidental or exploratory use, making absolute levels hard to interpret.

Enterprise vs small business dynamics

  • Small firms (1–4 employees) show the cleanest, steadily rising line; some see this as the leading indicator, with larger firms to follow later.
  • Others note that large-firm adoption appears to have fallen, which is concerning given AI company valuations and capex plans.
  • Speculation that grassroots use (developers, tiny businesses) grows while mid‑/large‑company managers slow or resist adoption if they feel threatened.

Bubble, valuations, and macro outlook

  • Several commenters see classic bubble dynamics: huge capex and valuations assuming exponential revenue growth, but only modest or stalling adoption.
  • Expectation from some that most AI startups will be wiped out or acquired cheaply; long‑term value may still be huge but concentrated in a few players after a shakeout.
  • Others say flattening would be a normal point on a technology S‑curve, not necessarily an “AI winter.”

Use cases, productivity, and developer experience

  • Mixed experiences on coding:
    – Some see LLMs as strong force multipliers for boilerplate, glue code, and minor tasks.
    – Others find quality unreliable, edge cases missed, and codebases turned into “slop” that’s hard to maintain.
  • Several developers report skill atrophy, loss of deep understanding, and reduced joy in programming when relying heavily on AI; a few intentionally cycle “no‑AI weeks” to keep skills sharp or have quit AI tools entirely.
  • Debate over whether non‑adopters will be unemployable in ~5 years vs. whether deep engineering skill (independent of AI) will remain the scarcest and most valuable asset.

Agents, UX, and mainstream adoption

  • Strong skepticism that autonomous “agents” are actually in real, unsupervised production use; most usage is interactive co‑pilot style.
  • Many note the average person doesn’t know what to do with LLMs; value will rise as AI is embedded into familiar applications and workflows, hiding complexity.
  • Growing frustration with hallucinations and inconsistent answers (e.g., between different models) erodes trust and may dampen further direct-chat adoption.

Credit report shows Meta keeping $27B off its books through advanced geometry

What Meta Is Doing Structurally

  • Meta sets up a separate entity (e.g., “Beignet”) that:
    • Borrows ~$27B via bonds to build a 2+ GW hyperscale data center.
    • Owns the campus; Meta signs a short (4‑year) lease with renewal options.
  • Economically, Meta:
    • Is the only realistic tenant.
    • Bears almost all the risk and provides guarantees/“residual value” support.
  • This structure keeps the assets and debt off Meta’s consolidated balance sheet under current accounting rules, while credit markets still largely treat it as Meta risk.

How Common / Legal This Is

  • Commenters say project-specific entities and off-balance-sheet vehicles are “as common as breathing” in:
    • Large construction, real estate, film, infrastructure, and other capital‑intensive industries.
  • Seen as standard project finance rather than inherently nefarious, though:
    • Some argue that even “standard practice” can create systemic risk via opacity and misvaluation.

Debate on Risk, Ratings, and 2008 Comparisons

  • One side: This is reminiscent of Enron and pre‑2008 games:
    • Ratings agencies accept whatever the accepted definitions allow.
    • Form (short lease, separate box) is used to hide leverage that would otherwise lower Meta’s rating.
  • Other side: This is not 2008:
    • Underlying credit quality is strong, unlike that of subprime borrowers.
    • Bond yields are closer to junk, showing the market does price extra risk.
  • Concern: If this becomes the template for AI capex, reported leverage for big tech will systematically understate true obligations.

Article Style, Comprehension, and Satire

  • Strong split on the Substack piece:
    • Some praise it as a sharp, technically informed, satirical “finance Borat” that exposes how much risk can be boxed off legally.
    • Others find it unreadable, too snarky, and confused about GAAP, preferring more straightforward coverage in mainstream outlets.
  • Multiple commenters note that many readers’ confusion stems from lack of finance context, not literacy; others counter that unclear, jokey writing is a real barrier.

Broader Context: Power, Jobs, and AI Bubble Concerns

  • Discussion about:
    • Power infrastructure needed for multi‑GW campuses and long‑term energy contracts.
    • Limited direct job creation from hyperscale data centers versus the boosterish PR and ads.
    • Skepticism that AI demand and hardware values will justify this scale of leveraged build‑out.

Can Dutch universities do without Microsoft?

Structural European Tech Weakness & Federalization

  • Commenters see Europe’s failure to build competitive tech platforms in the 2000s as a long-term strategic error, leaving it dependent on US and Chinese firms.
  • Fragmentation (27 states, 27 rulebooks) is blamed for making scaling harder than in the US; some argue “federate or decline,” others strongly resist ceding more sovereignty to Brussels.
  • EU institutions are described as slow, complex, and shaped by an old neoliberal/globalization mindset, yet also praised for preventing US-style executive overreach.

US Cloud Dependence & Legal Sovereignty

  • Core concern: Microsoft, AWS, Google are subject to US law (CLOUD Act, sanctions), so any “sovereign” EU cloud run by them is inherently suspect.
  • AWS/Microsoft “European sovereign cloud” plans are criticized as cosmetic unless there is full legal and operational separation and no US ownership leverage.
  • ICC/Netherlands example is repeatedly cited as a warning that individuals and institutions can be digitally cut off by US political decisions, even if details of that case are disputed.

Why Universities Are Locked In

  • Identity and collaboration are seen as the real lock-in, not Word/Excel themselves: Azure AD/Entra, Exchange/Outlook, Teams, OneDrive/SharePoint underpin auth, storage, calendars, and workflows.
  • Universities once ran their own mail and fileservers; “free” edu bundles from Google/Microsoft during the last decade (especially around COVID) led to mass outsourcing and skill atrophy.
  • Decision-makers and users favor familiarity and convenience; migration risk is career-threatening for administrators and offers little visible upside.

Alternatives & Practical Feasibility

  • A common proposed stack: LibreOffice + Collabora/OnlyOffice + Nextcloud + Matrix/Jitsi + self-hosted email/anti-spam. Technically feasible, especially via shared services (e.g., SURF in NL).
  • Main barriers: coordination (getting 10–20 universities to co-fund), long-term ops (petabyte-scale storage, redundancy, security), and lack of EU-style “big tech” vendors offering one-stop solutions.
  • Some argue unis are better positioned than corporations (existing sysadmin expertise, research grants) and that EU-wide funding could bootstrap open, sovereign platforms.

Security, Quality, and Mission

  • Several admins describe Microsoft 365/Azure as insecure by default and operationally convoluted; others insist it’s still the best option for orgs without large security teams.
  • There is discomfort with public universities effectively serving as training pipelines for Microsoft/Oracle/Cisco ecosystems instead of emphasizing open standards and tooling independence.

Petition to formally recognize open source work as civic service in Germany

Overall sentiment

  • Many commenters like the idea of recognizing open source as civic service, especially to address maintainer burnout and lack of support.
  • Others think the benefits of “Ehrenamt” are minimal (small tax allowances, minor cultural discounts), so the proposal is mostly symbolic.
  • A visible minority is outright opposed, seeing this as unnecessary state involvement or a distraction from existing structures (e.g., founding a Verein).

Eligibility and impact criteria

  • Suggested conditions: contributor shouldn’t be sole owner; projects should be “high impact” and not heavily corporate-sponsored; only merged work should count.
  • Pushback:
    • Excluding owners/maintainers would exclude exactly those with the heaviest responsibility and burnout risk.
    • Much critical work (triage, reviews, security, docs) doesn’t show up as merged PRs.
  • Ideas for measuring impact: adoption thresholds, dependency graphs, inclusion in public infrastructure, or curated project lists.
  • Concern that any simple metric (e.g., GitHub stars) is trivial to game and often tied to foreign companies.

Incentives, abuse, and gaming

  • Strong concern that explicit incentives will generate spammy, low-quality contributions, citing Hacktoberfest as precedent.
  • Debate over whether to count unmerged work: some note real effort is often discarded; others warn that rewarding unmerged changes worsens spam.
  • Several argue the program must be designed from the start to disincentivize abuse rather than relying on after-the-fact enforcement.

German legal and bureaucratic context

  • Clarifications:
    • This is about recognizing individual volunteer work (Ehrenamt), not creating non-profit organizations.
    • Benefits include small tax-free expense allowances (e.g., ~840€/year) if you already receive money; no pay for most volunteers.
    • Civic service typically requires a recognized non-profit host; it’s not just “committing from home.”
  • Some argue the right path is existing structures: create a Verein, seek “gemeinnützig” status under current law (often via “education”).
  • Skepticism that petitions meaningfully change policy; thresholds for mandatory debate are high, and such petitions often serve mainly as PR.

Corporations, taxpayers, and exploitation risks

  • One camp worries this is a bad deal for taxpayers and a great deal for big tech: Germany would subsidize OSS that global corporations exploit.
  • Others counter that:
    • The financial scale is tiny (expense reimbursements, not salaries).
    • Germany already extracts substantial tax from tech work and should reinvest in public digital infrastructure.
  • Concern that companies could disguise underpaid labor as “volunteering” to dodge wages and social contributions, though others note reimbursement caps and legal limits (e.g., no volunteering for private firms in some countries).

“Open source” vs public good

  • Disagreement on whether all open source is inherently a public good:
    • Supporters say making code available for free is, by definition, public benefit.
    • Critics point to open-source ransomware, trivial or abandoned projects, and “digital litter” that imposes real costs.
  • Some suggest tying eligibility to:
    • Stricter definitions (FSF-style “free software” or European licenses like EUPL).
    • Explicit contribution to the common good rather than mere openness.

Alternative and complementary ideas

  • Proposals include:
    • Amending tax law (§52 AO) to treat support of open source itself as charitable.
    • Funding open source like scientific research via dedicated state programs and institutions (with examples of existing German FOSS charities and initiatives).
    • Focusing on “free software” rather than all “open source” as the recognized civic contribution.

A trillion dollars (potentially) wasted on gen-AI

Scaling vs. limitations of LLMs

  • One side invokes the “bitter lesson”: bigger models + more data + more compute have repeatedly delivered breakthroughs since the 2010s; perceptrons only took off once scaled.
  • Others push back that “scaling is all that matters” is historically wrong and technically shallow: many AI waves (expert systems, early MT, speech) hit walls where missing cognitive abilities couldn’t be fixed by more compute.
  • Critics reference diminishing returns, No Free Lunch, narrow specialist models, and current issues with transfer, hierarchy, causality, and world models as evidence that LLMs are not the final paradigm.

Was the trillion dollars “wasted”?

  • Supporters of the boom argue investment isn’t wasted just because AGI isn’t reached; LLMs already power useful services, much like cloud/SaaS or earlier infrastructure bubbles.
  • Others stress opportunity cost: trillions on datacenters and GPUs vs. funding many researchers or other societal needs; magnitude matters for “waste.”
  • Some see a classic bubble: speculative promises of AGI and “every job automated,” VC incentives to pump a story, and systemic risk if pensions and broad capital pools are exposed.
  • Counterpoint: tech booms have always overbuilt and misallocated in the short term but left valuable infrastructure and knowledge.

Definitions and status of AGI

  • There’s no consensus on “AGI”: views range from “we’re already past it” (LLMs outperform average humans on many cognitive tasks) to “we’re nowhere close” (hallucinations, lack of agency, no stable self-knowledge).
  • Proposed criteria include: lack of hallucinations, stable reasoning, synthesis, recursive self-improvement, and the ability to operate autonomously in the real world (power plants, fabs, etc.).
  • Many see current systems as “human-in-the-loop AGI” or on a continuum: superhuman in some domains, subhuman in others, with “jagged” capabilities.

Real‑world utility and risks

  • Multiple practitioners report 2–3× productivity gains, especially in coding, refactoring, debugging, documentation, and research, plus lower mental fatigue.
  • Others emphasize unreliability, hallucinations, and the need for expert oversight; inexperienced users may be misled, making LLMs “dangerous conveniences.”
  • There’s frustration at both overhyping (“AGI 2027”, total job automation) and at critics who ignore clear practical value.

Economic, hardware, and bubble dynamics

  • Some welcome the rich burning capital on GPUs, noting spillovers: better tools, open-source models, and possibly cheaper compute later.
  • Others worry about environmental impact, RAM and power prices, short GPU lifecycles, and eventual e‑waste if this stack is abandoned.
  • Comparisons are made to dot‑com fiber overbuild (later useful) vs. housing/crypto bubbles (primarily wealth transfer and waste).

Views on the article and research trajectory

  • Several commenters see the article as overstating “waste” and rehashing long‑running “deep learning is hitting a wall” claims that already missed major progress.
  • Others say its technical critiques (data hunger, weak generalization, opacity, lack of causality) still largely hold, and LLMs alone won’t deliver AGI or promised returns.
  • Broad agreement that future progress likely needs new architectures and hybrid approaches, but disagreement on whether today’s scaling push was a necessary step or a trillion‑dollar detour.

A Remarkable Assertion from A16Z

AI authorship of the A16Z reading list blurb

  • Many commenters see the “literally stop mid-sentence” claim as classic LLM hallucination: confidently specific, trivially false, and stylistically “AI-slop.”
  • Others propose human error (misremembering endings, conflating with other mid-sentence-ending novels) but are seen as less plausible by most.
  • GitHub history is cited: descriptions were generated in Cursor/Opus (“opus descriptions in cursor, raw”), with explicit “AI GENERATED NEED TO EDIT” notes, then lightly human-edited.

How the Stephenson description evolved

  • An earlier AI draft compared his endings to a “segfault,” which at least had the right type of exaggeration.
  • A later commit changed it to “literally stop mid-sentence,” and introduced a misspelling of his name; this suggests human post-editing of AI text, not pure machine output.
  • Debate: AI only wrote blurbs vs. AI also helped choose books. Consensus in-thread: list likely human-chosen but machine-described, which undercuts the list’s claimed authority.

Debate over “literally”

  • One camp: “literally” is now widely used as an intensifier, not meant literally; that’s likely what the editor intended.
  • Counterpoint: even as an intensifier it’s misleading here, because the statement is a concrete, checkable claim about text and simply false.
  • Linguistic side-notes: historical use of “literally” as intensifier since the 18th century; concern that losing a precise word for non-metaphorical truth is a kind of drift toward “Newspeak.”

Are his endings actually bad?

  • Several readers say his endings are perfectly normal, no more abrupt than Shakespeare or Frank Herbert; the mid-sentence claim is pure fabrication.
  • Others report a consistent pattern: gripping first 80% then endings that feel rushed, bloated, or mistimed, especially in later novels.
  • Comparisons are made to genuinely mid-sentence endings (e.g., certain postmodern and unfinished works) to emphasize how different that is from his books.

“Inhuman Centipede” and broader LLM criticism

  • The article’s “Inhuman Centipede” metaphor for models training on their own slop resonates; commenters trace similar prior uses and link it to a feared self-reinforcing garbage loop.
  • The incident is treated as emblematic of a venture firm and broader Silicon Valley culture: shallow literary engagement, AI-generated PR, and “nerd shibboleths” to signal taste rather than genuine reading.
  • Multiple personal anecdotes highlight how often LLMs fail on practical tasks, reinforcing skepticism about using them for authoritative recommendations.