Hacker News, Distilled

AI‑powered summaries of selected HN discussions.


Google Removed 749M Anna's Archive URLs from Its Search Results

Site vs. Google Search for Anna’s Archive

  • Several commenters say they never used Google to search within Anna’s Archive; its own metadata search (title/author/format/date) is “good enough.”
  • Others note Google could add value with full‑text search of book contents, but AA only exposes metadata, so Google likely didn’t have full text anyway.

LLMs, DMCA, and Piracy

  • People wonder whether and how LLM providers honor DMCA takedowns and whether they can “launder” copyrighted content into ostensibly legal outputs.
  • Reports are mixed: some models refuse to provide pirated links or copyrighted text; others still surface torrent or archive links.
  • There’s concern that LLMs are just “regurgitating trash” and cannot reliably distinguish good from bad sources, making them vulnerable to manipulation.

Perceived Decline of Google Search

  • Many describe Google search as increasingly useless: SEO spam, AI overviews, ads, and hidden or capped result sets.
  • Some argue Google intentionally deprioritizes organic “good” results beyond early pages to boost ads and AI features; others ask for concrete evidence and note that court findings mainly show AI features reduce clicks on “10 blue links,” not that the best results are deliberately buried.

Alternative Search Engines

  • Yandex is praised as especially good for DMCA‑sensitive or pirated content, “like Google circa 2005.”
  • Kagi, Startpage, DuckDuckGo, Brave, Ecosia, and Bing are repeatedly cited as better than Google for relevance, though each has trade‑offs (indexes, UI, sponsorship, Copilot clutter).
  • Debate over personalization: some want it off entirely; others say query/locale‑aware personalization (e.g., “Kafka,” “C string”) can be genuinely useful but is poorly executed.

Corporate Motives, DMCA, and Censorship

  • One side argues Google is simply complying with DMCA using a public transparency log and that communities over‑dramatize this.
  • Others reply that large corporations are structurally driven by profit/valuation and routinely behave “sociopathically,” so defending them is misplaced.
  • Some highlight asymmetric enforcement: DMCA removals that protect rightsholders move fast, while consumer‑benefiting changes or antitrust remedies take years.
  • Allegations appear that Google and X also remove politically sensitive war‑crime documentation, seen as siding with powerful states.

Anna’s Archive, LibGen, and Archiving Efforts

  • Several see Anna’s Archive as continuing the original Google‑like mission of organizing and opening access to “high‑quality” information, especially after LibGen and z‑lib crackdowns.
  • Others think it’s reasonable for pirate links not to top book‑search results; the homepage still appears, so determined users can find it.
  • People discuss mirroring AA via torrents (tens of TB, compression, filtering large PDFs, de‑duping editions) and suggest a dedicated “piracy search engine” based on DMCA‑reported URLs, with Yandex already filling that niche.
  • Alternatives mentioned: WeLib, open‑slum, and Telegram‑based Nexus/LibrarySTC bots for academic papers.

Legality of Downloading Digital Copies of Owned Books

  • Answers differ by jurisdiction, but consensus in the thread: owning a physical book generally doesn’t grant a right to download unauthorized digital copies.
  • Creating your own digital copy is more likely to be legal; downloading from an infringing source remains problematic, though enforcement usually targets distributors rather than individuals.

Broader Web Search and AI Tensions

  • Commenters note: more walled gardens, more legal barriers, and the need to search across multiple engines and maybe personal indexes.
  • There’s concern that AI systems (e.g., Gemini) trained on web content now reduce traffic to the very sites they were trained on, raising fairness and conflict‑of‑interest questions.
  • Some see AI + RAG over large book corpora as a huge competitive advantage, even as ordinary students and researchers lose free access to those same texts.

UPS plane crashes near Louisville airport

Apparent sequence and severity of the accident

  • Multiple videos show the left (No. 1) engine area engulfed in flames during the takeoff roll, with a large ground fire trail and extensive industrial damage.
  • Several commenters note stills showing the entire left engine later found beside the runway, suggesting engine separation, and possible damage to the tail (No. 2) engine from debris.
  • The aircraft was heavily fueled for a long Louisville–Honolulu cargo flight; estimates in the thread range from tens of thousands of gallons/pounds up to ~200–250k gallons mentioned in early dispatch notes, contributing to the huge fire. (Exact quantity remains unclear.)

V1, engine-out performance, and pilot decision-making

  • Many comments explain that multi‑engine airliners must be able to safely continue takeoff if a single engine fails at or after V1; aborts above V1 are generally prohibited because there isn’t enough runway to stop.
  • MD‑11s are designed to fly on two of three engines, but commenters stress that “engine failure” vs. “engine detaches and shreds the wing/damages another engine/hydraulics” are completely different problems.
  • There is disagreement over whether the crew made a conscious “heroic” choice to protect people on the ground versus simply following standard V1 procedures with incomplete information and almost no time. Multiple posters urge waiting for NTSB data before attributing intent or blame.

Damage mechanisms and comparisons to past accidents

  • Several posts compare this event to American Airlines 191 and El Al 1862: engine/pylon separation, wing leading‑edge damage, slat/hydraulic issues and asymmetric lift leading to uncontrollable roll.
  • Some suspect an uncontained engine failure or structural/pylon issue; others mention a pre‑flight delay reportedly for left‑engine work, but later note an NTSB briefing stating no immediate pre‑departure maintenance is known—this remains unresolved in the thread.

Cameras, sensors, and cockpit information load

  • Long sub‑thread on whether external cameras (tail/wing views) should be standard to let pilots visually confirm damage.
  • Pro‑camera side: could clarify situations like severe engine damage, wing deformation, gear status, or fuel leaks, avoiding reliance on cabin crew or fly‑bys.
  • Skeptical side: during takeoff emergencies pilots are already at cognitive limits; extra video feeds risk information overload, and current fire/fault detection systems are designed to trigger simple, unambiguous alerts (“ENG FIRE/FAIL”) rather than describe exact failure modes.

Runway overruns, barriers, and EMAS

  • Question raised: why no barriers between runway ends and “important” infrastructure.
  • Responses explain:
    • The kinetic energy of a fully loaded widebody at takeoff speed is enormous; solid barriers would be unsurvivable.
    • Engineered materials arrestor systems (EMAS) exist and are effective for landing overruns at lower speeds, but are not designed for high‑speed rejected takeoffs.
    • Any “extra” land at runway ends is already treated as safety margin; designing to routinely use that margin is discouraged.
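The “enormous kinetic energy” point is easy to make concrete with rough, hypothetical figures (an MD‑11F near its ~286 t maximum takeoff weight at roughly 150 kt, i.e. ~77 m/s; neither number is from this accident):

```python
# Rough, illustrative figures (not from the investigation): an MD-11F near
# its ~286 t maximum takeoff weight, at a typical ~150 kt (~77 m/s).
mass_kg = 286_000
speed_ms = 77.0

kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2   # KE = 1/2 m v^2
print(f"{kinetic_energy_j / 1e6:.0f} MJ")          # ~848 MJ

# For scale: roughly the energy released by ~200 kg of TNT (4.184 MJ/kg),
# which is why no fixed barrier could stop the aircraft survivably.
```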

Airport siting, land use, and noise/safety buffers

  • Many comments note how “lucky” it was that the jet came down in a relatively sparse industrial zone rather than the nearby downtown or residential areas.
  • Discussion of zoning practice: guidance usually discourages dense residential/commercial development off runway ends, but many legacy airports (Midway, San Diego, Love Field, etc.) are now tightly surrounded by housing and schools due to urban growth and political pressure.
  • Some describe past buyouts and demolition of neighborhoods near Louisville’s UPS hub under the banner of “noise/safety,” later replaced by warehouses—leading to cynicism about mixed motives, though this crash is cited as grim validation of the underlying safety logic.

Maintenance practices, MD‑11 age, and outsourcing debates

  • MD‑11 production ended in 2000; current fleets are elderly, mostly ex‑passenger airframes converted to freighters. Commenters note cargo aircraft often fly fewer daily cycles, but conversions and age increase complexity.
  • Speculation ranges from maintenance error (including historical concerns about forklift engine handling in DC‑10/MD‑11 pylons) to manufacturing defects or foreign repair practices; several people link to pieces on outsourced maintenance and foreign repair stations.
  • Others, including people with maintenance experience, push back hard: foreign MROs typically undergo rigorous FAA/EASA oversight; blaming “foreign work” without evidence is called out as uninformed.
  • Multiple posters emphasize that early “it must be maintenance” claims are premature and the NTSB’s independent investigation will determine cause.

Aviation safety, regulation, and institutional roles

  • Broader discussion on how extraordinarily safe modern commercial aviation is, despite occasional catastrophes.
  • Some argue for stronger regulation and against cost‑cutting “race to the bottom”; others note that deregulation and intense competition coexist with historically low accident rates.
  • There’s praise for the NTSB’s structure and culture: separated from the FAA, methodical, reluctant to speculate, and focused on system fixes rather than individual blame.

Emotional reactions and personal context

  • Many express horror at the ground devastation and sympathy for the crew and affected workers; several relate past local crashes hitting neighborhoods and how that shapes their perception of risk.
  • A story appears of a UPS pilot whose first day was supposed to be on this flight but who was removed from the roster, underlining the role of chance.

Mr TIFF

Emotional impact and recognition of “Mr TIFF”

  • Many commenters were unexpectedly moved by a story about a file format, describing it as beautiful, touching, and even tear‑inducing.
  • There’s strong appreciation for finally giving proper credit to an unsung engineer whose name most professionals had never heard despite widespread use of TIFF.
  • Several connect this to a broader theme: tech culture often erases or ignores its own history and quiet contributors; efforts like this feel like “digital wakes” and cultural repair work.

Historical research, Wikipedia, and sources

  • Commenters praise the detective work and note how easily such history could have been lost if one person hadn’t cared enough to dig.
  • A side thread discovers that the inventor had in fact commented on the TIFF Wikipedia talk page years ago, confirming the “42” joke and adding details about naming.
  • This leads to debate over Wikipedia policies:
    • Primary vs secondary sources, “verifiability not truth,” and “no original research.”
    • Whether a user’s self‑identification on Wikipedia or HN could qualify as a citable source.
  • Some argue the hidden talk‑page evidence was “obvious in hindsight”; others emphasize how nontrivial it is to find such material without already knowing what to look for.

TIFF format, design, and technical legacy

  • Multiple practitioners reminisce about extensive TIFF use in publishing, mapping, geodesy, microscopy, geospatial imaging (GeoTIFF/COG), clinical trial scanning, and camera RAW/DNG.
  • The tagging and extensibility model is praised for accommodating projections, metadata, and varied use cases.
  • Others criticize that same extensibility for causing “Thousands of Incompatible File Formats,” with inconsistent vendor extensions and quirks.
  • Several note TIFF is still very much alive in niche and professional domains, even if less visible to end users.

“42”, hidden text, and trivia

  • People highlight the magic number 42 in the spec, confirmed by the inventor as a Hitchhiker’s Guide reference.
  • Discussion branches into whether 42 is “special,” ASCII asterisk jokes, and other numerological or humorous takes.
  • Commenters also examine two TIFF 6.0 PDFs, one containing the inventor’s name in white‑on‑white “invisible” text; theories range from Easter egg to lazy redaction.
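The “42” is verifiable straight from the file format: per the TIFF 6.0 specification, the 8‑byte header is a two‑byte byte‑order mark (“II” little‑endian or “MM” big‑endian), the 16‑bit value 42, then the offset of the first image file directory. A minimal sketch:

```python
import struct

# Build a TIFF 6.0 header: byte-order mark ("II" = little-endian,
# "MM" = big-endian), the 16-bit magic number 42, then the offset of
# the first image file directory (IFD), conventionally 8.
header = b"II" + struct.pack("<H", 42) + struct.pack("<I", 8)

byte_order = header[:2]
fmt = "<H" if byte_order == b"II" else ">H"
(magic,) = struct.unpack(fmt, header[2:4])
print(byte_order, magic)   # b'II' 42
```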

Digital preservation and loss

  • The thread broadens into concern that early magazines, Usenet, and plain text preserved this story, whereas modern web platforms (social networks, proprietary sites) are already losing huge amounts of content.
  • People list vanished services and now‑broken links, and share personal strategies of archiving material locally and via the Wayback Machine.

Meta: the book and ongoing oral histories

  • Some struggled to find the linked book, prompting feedback about site UX; the author explains a desire not to push the book too hard given the story’s tone.
  • The author mentions having interviewed around 100 people, especially lesser‑known Apple‑era engineers, to capture similar stories before they’re lost.

I took all my projects off the cloud, saving thousands of dollars

Cost and pricing comparisons

  • Many commenters agree AWS is often far more expensive than VPS/dedicated options (Hetzner, OVH, Linode/DO) once you need moderate CPU/RAM/disk, especially for RDS and large block storage.
  • Extremely tiny or highly intermittent workloads can be very cheap on cloud (free/near‑free tiers, Cloud Run, Lambda, tiny S3/ECR usage).
  • For large, cold storage (PB scale), some find Glacier‑class services cost‑competitive vs building massive storage systems; at TB scale, local NAS or rented servers win easily.
  • Several people suggest a ~2× cloud premium is common and acceptable; others report 5–10× or more for comparable capacity.

When cloud is a good fit

  • Widely cited use cases: rapid MVPs with startup credits, bursty/seasonal load, large multi-region services, LLM or GPU-heavy work, and regulated industries needing ready-made certifications and SLAs.
  • Cloud helps bypass slow internal procurement and CapEx constraints; OpEx and “self‑service servers” were a huge part of its original appeal.
  • Managed services (databases, Redis, CI/CD, backups, global distribution) can be cheaper than hiring/retaining infra specialists, especially for fast‑moving startups.

Arguments for self‑hosting / bare metal

  • Many report running sizeable SaaS, forums, or side projects on 1–3 dedicated servers or home hardware with Cloudflare/tunnels, at a fraction of cloud cost and with acceptable uptime.
  • For the majority of businesses that don’t need “five nines,” simple setups (one DB, one app server, maybe a hot spare) are seen as sufficient and much cheaper.
  • Some frame self‑hosting as ideological: resisting “enshittification” and corporate control, promoting independence and decentralization.

Operational complexity and risk

  • Pro‑cloud voices emphasize the hidden labor of self‑hosting: backups, restores, security patching, intrusion detection, audits, off‑site redundancy, and hardware failures.
  • Others counter that much of this work also exists on cloud VMs, and that modern tooling (Docker, Ansible, k3s, etc.) plus AI assistance lowers the barrier.
  • A recurring worry with cloud is surprise bills and account lockouts; with self‑hosting, the main “catastrophe mode” is getting hugged to death during traffic spikes.

Lock‑in, architecture, and semantics

  • Several note that many “leaving the cloud” stories were barely using cloud‑specific services (mostly EC2/RDS/Redis), so migration was straightforward.
  • There’s disagreement over what “the cloud” even means: some treat any remote VPS/dedi as cloud; others reserve the term for hyperscalers and their proprietary services.
  • Hybrid and multi‑provider strategies are popular in the thread: keep compute or data where it’s cheap, use cloud only where its unique features matter.

Tone and meta‑discussion

  • Multiple commenters find the article ranty, straw‑manny, and needlessly antagonistic toward “cloud people,” even if they broadly agree AWS is often a bad deal for small projects.
  • Others see it as a useful counterweight to the default “everything must be on AWS” mindset, but wish for more rigorous TCO comparisons and fewer culture‑war vibes.

I was right about dishwasher pods and now I can prove it [video]

Dishwasher Heating, Hot Water, and Regional Differences

  • Large subthread on how dishwashers heat water:
    • In North America many machines are plumbed to hot water but have weak internal heaters (10–15A, 110V; often ~800–1200W), so purging cold water from the line can materially improve pre‑wash temps.
    • In 230V regions (EU, AU, NZ), machines more often take cold water only and heat it quickly with stronger elements; hot connection is less common or even discouraged.
    • Some argue newer machines will just heat longer if inlet water is cold; others note many models time heating rather than targeting temperature, so they never reach optimal enzymatic temps in pre‑wash.
    • Debate over whether manuals actually tell users to run the tap first; some do, some don’t.
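A back‑of‑the‑envelope calculation (illustrative numbers only: a ~1100 W element, a 5 L fill, 15 °C cold‑line water heated to 55 °C, losses ignored) shows why weak North American heaters make inlet temperature matter:

```python
# Illustrative only: a weak ~1100 W element heating a 5 L fill from a
# 15 C cold line to a 55 C wash temperature, with heat losses ignored.
SPECIFIC_HEAT = 4186          # J/(kg*K) for water
power_w = 1100
mass_kg = 5.0
delta_t_k = 55 - 15

seconds = mass_kg * SPECIFIC_HEAT * delta_t_k / power_w   # Q = m*c*dT
print(f"{seconds / 60:.0f} min")   # ~13 min of heating per fill

# Starting from a purged ~50 C hot line instead needs only ~2 min of
# heating, which is why the "run the tap first" tip matters on these machines.
```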

Pre‑Wash Cycles and Detergent Dosing

  • Many commenters confirm that most dishwashers run a pre‑wash even when the UI doesn’t show it; you can detect it by a short run, a drain, and then the long main wash.
  • Where there’s a latching detergent door, people infer a pre‑wash exists (door opens later). Some machines also have explicit pre‑wash trays.
  • Others report models (especially newer Bosch/Miele) where manuals explicitly say pre‑wash detergent isn’t needed, and some drop the pod almost immediately. Program behavior (Eco vs Quick vs Heavy) varies a lot.

Pods vs Powder: Performance, Cost, and Availability

  • Strong divide in experience:
    • Several say cheap powder cleans as well or better than pods, especially when some is added for pre‑wash; pods are seen as expensive, overdosed, and single‑stage.
    • Others find pods (especially “premium” ones) clearly outperform available powders, especially on difficult soils or plastics; some report faster, shorter “auto” cycles and less odor buildup with pods.
  • Suspicion that big manufacturers deliberately under‑formulate boxed powder to push higher‑margin pods; others note this isn’t provable from the limited disclosed testing.
  • In some countries (UK, Poland) dishwasher powder has become hard to find; in others (NZ, parts of EU) it’s still common.

Convenience, Safety, and Environmental Concerns

  • Fans of pods emphasize simplicity, no measuring, less spillage, safer around kids and pets, and fewer user‑error issues (overdosing, clogging dispensers).
  • Powder proponents emphasize: much lower cost per load, adjustable dosing for load size and pre‑wash, less plastic, and better machine longevity.
  • Some worry about pod films as microplastics; others counter that they’re designed to dissolve completely.
  • Rinse aid: widely acknowledged as effective for drying, especially with modern non‑heated dry cycles, but a minority cite studies suggesting potential gut effects at high exposure; pushback notes tiny household doses and ubiquitous commercial use.

Critiques of the Video and Promoted Product

  • Several enjoy the deep technical dive and cycle tracing; others find the style verbose, somewhat hand‑wavy, and built around a single relatively crude test dishwasher.
  • Skepticism around the promoted “better powder”:
    • It is substantially more expensive per load than even premium pods, undermining earlier cost‑savings arguments for powder.
    • Some see the video as a well‑produced infomercial with limited transparency (no linked study; no head‑to‑head with mainstream powders).
  • Others are unbothered, treating the channel as primarily educational/entertainment and appreciating any clear, evidence‑backed improvement tips.

Practical Takeaways Users Report

  • Commonly adopted tips from this and earlier videos:
    • Purge hot water at the nearby sink before starting (where the machine is on hot).
    • Use some loose detergent in the tub or pre‑wash area plus more in the main dispenser.
    • Experiment with non‑obvious program combinations (e.g., Normal + high‑temp/sanitize) rather than default “Heavy” or “Eco”, whose labels often don’t match real energy/water use.
    • Regularly clean filters and understand your specific machine’s manual and hidden cycle diagrams.

Singapore to cane scammers as billions lost in financial crimes

Singapore’s political & economic model

  • Described as unusually prosperous, militarized, and stable, yet effectively one‑party and highly interventionist.
  • Debate over labels: “state capitalism,” “Asian Switzerland,” “pure authoritarianism,” or akin to fascism without racial scapegoating.
  • Some see the core feature as high public trust in a technocratic government that prioritizes long‑term planning over short electoral cycles. Others emphasize lack of press freedom, speech, and genuine electoral competition.

Freedom vs security trade‑offs

  • Strong concern over the new law allowing police to control accounts of suspected scam victims; viewed by some as a dangerous normalizing of financial control that could extend to political repression.
  • Others note similar or worse precedents in liberal democracies and argue that Western self‑image of valuing liberty is overstated.
  • Several commenters stress that “freedom from” crime, drugs, poverty, corruption, and instability is the freedom most Singaporeans care about, and they appear broadly satisfied with that trade.

Corporal punishment, crime, and deterrence

  • Some argue Singapore’s caning and harsh drug penalties are key to its lack of visible street crime, vandalism, and disorder, and advocate importing elements of this to countries like the US.
  • Counterarguments call corporal punishment “barbaric” and akin to torture, raising wrongful‑conviction risks and moral objections.
  • Others note apparent double standards: elites and locals sometimes receive “kid gloves” compared with foreigners in corruption and money‑laundering cases.

Scams: impact and prevention measures

  • Commenters describe devastating financial losses, especially among elderly victims whose cognitive decline is exploited; emotional harm is highlighted.
  • Singapore’s response is framed as incremental: multilingual education campaigns, app‑level warnings, SMS “LIKELY SCAM” labels, then escalating to harsher penalties.
  • Some suggest making money flows more trackable and reversible to reduce scams, but acknowledge major privacy and collateral‑damage concerns (e.g., innocent accounts frozen, downstream users hit).

Low crime and everyday life

  • Visitors note everyday benefits of low petty crime: unattended property not stolen, clean public spaces, safe late‑night streets and transit.
  • There is disagreement on whether this stems mainly from strict laws, wealth, city‑state scale, or deeper cultural factors.

Why do we need dithering?

Is Dithering Still Needed?

  • Strong disagreement with the article’s claim that we “don’t really need dithering anymore.”
  • Many point out obvious banding in modern games and gradients (e.g., blue skies, dark scenes) when dithering is absent or poorly done.
  • 8 bits per channel (24-bit color) is often insufficient for smooth gradients, especially large or nearly monochrome ones; dithering hides banding without increasing bit depth.
  • Dithering is widely used in modern rendering pipelines: render to high-precision buffers, then dither when quantizing down.
  • Display hardware frequently uses spatial/temporal dithering (“frame rate control”) to simulate extra bits of color.

Tradeoffs, Techniques, and Use Cases

  • Static dither patterns can be used in video to avoid flicker and keep content compressible.
  • Screen-space dithering is used for cheap transparency and to improve dark scenes in games; some dislike the resulting “sparkly” artifacts, especially when combined with TAA.
  • For streaming, adding real noise is problematic because codecs remove it; better if noise/dither is added on the client side (e.g., synthetic film grain).

Beyond Images: General Signal Processing

  • Dithering is emphasized as a fundamental quantization tool, not just a graphics trick.
  • In audio, dithering and noise shaping are standard for high-quality 16‑bit output.
  • Conceptually, “add jitter as close to the quantization step as possible” applies to any thresholding or bit-depth reduction, including Monte Carlo sampling, geometry, etc.
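The “add jitter just before the quantization step” idea can be sketched in a few lines; the 4‑level quantizer and TPDF dither here are illustrative choices, not from the article:

```python
import random

LEVELS = 4                    # e.g. a 2-bit channel: values 0, 1/3, 2/3, 1
STEP = 1.0 / (LEVELS - 1)

def quantize(x):
    """Snap x in [0, 1] to the nearest representable level."""
    return round(x / STEP) * STEP

def quantize_tpdf(x, rng):
    """Add zero-mean triangular (TPDF) dither of +/- one step, then snap."""
    noise = (rng.random() - rng.random()) * STEP
    return round((x + noise) / STEP) * STEP

rng = random.Random(0)
x = 0.2                       # a flat tone sitting between two levels
plain = [quantize(x) for _ in range(10_000)]
dithered = [quantize_tpdf(x, rng) for _ in range(10_000)]

# Undithered, every sample snaps to the same wrong level (1/3), producing
# a visible band; dithered samples still take only the allowed values,
# but their average recovers the original 0.2.
print(sum(plain) / len(plain), sum(dithered) / len(dithered))
```

The same trick underlies 16‑bit audio mastering, where noise shaping additionally pushes the residual noise toward less audible frequencies.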

Aesthetic and Nostalgia

  • Some use dithering intentionally as a “retro” or low‑fi aesthetic, referencing classic games and 1‑bit/limited‑palette hardware.
  • Return of the Obra Dinn and PlayDate titles are cited as exemplary stylistic uses.
  • A subset of commenters say that, given modern full-color displays, their motivation to dither is mainly aesthetic or for file-size games (e.g., PNG‑8).

Color Spaces, Perception, and Theory

  • Discussion touches on sRGB’s nonlinearity, gamma (≈2.2), and why proper dithering and calculations should be done in linear light with higher internal precision.
  • For multi-color dithering and palette selection, commenters suggest working in perceptual spaces (e.g., Lab) to measure color similarity.
  • With sufficiently high spatial resolution, dithering trades spatial resolution for perceived color resolution, making lower bit depth more usable.
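The linear‑light point can be demonstrated with the standard sRGB transfer functions (IEC 61966‑2‑1): averaging black and white on gamma‑encoded values gives a gray that displays too dark, while averaging in linear light does not:

```python
def srgb_to_linear(c):
    """Decode a [0, 1] sRGB component to linear light (IEC 61966-2-1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode linear light back through the sRGB transfer curve."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# A 50/50 mix of black and white computed naively on encoded values:
naive = (0.0 + 1.0) / 2                 # 0.5 -- too dark when displayed
# The same mix computed in linear light, then re-encoded:
correct = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2)
print(naive, round(correct, 3))         # 0.5 vs ~0.735
```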

Deepnote, a Jupyter alternative, is going open source

Positioning Deepnote as “Jupyter’s Successor”

  • Many commenters find the “successor” claim presumptuous and misleading, since Deepnote has no formal relationship with the Jupyter project and Jupyter is still very active.
  • The launch post’s tone (especially early versions) is widely criticized as disrespectful to Jupyter: cherry‑picked contribution graphs, job‑post statistics with dubious framing, and a vibe of “Jupyter is dying, we’re its replacement.”
  • Several people suspect the blog post was written or heavily shaped by an LLM and then quietly edited after backlash, which further hurts trust.
  • General sentiment: the tech might be interesting, but the messaging alienates the exact developer community they’re trying to win over.

Deepnote’s Offering and Open Source Move

  • Some users say Deepnote has long been the nicest Jupyter UI, but locked behind a cloud subscription; open‑sourcing under Apache 2 is praised.
  • Others note confusion: the repo doesn’t yet seem to expose a fully runnable local notebook environment; key pieces are “coming soon.”
  • Deepnote argues notebooks must be reactive, collaborative, and “AI‑ready,” and that this requires a rich project format (YAML + metadata, secrets, multiple block types) beyond plaintext.
  • Skeptics question real‑time collaboration demand (most want git‑style workflows) and dismiss “AI‑ready” as marketing buzz.

Jupyter: Strengths, Weaknesses, and Alternatives

  • Defenses of Jupyter: still “best in class” for many, especially via VS Code; excellent for teaching, ad‑hoc analysis, and REPL‑like workflows where long precomputation is reused.
  • Critiques:
    • .ipynb mixes code and outputs (huge base64 blobs), making git diffs painful.
    • Hidden kernel state causes non‑deterministic behavior and confusion.
  • Tools like nbconvert and jupytext partly address these problems. Some say Jupyter doesn’t need a “successor,” just better practices.
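The diff problem is visible in the on‑disk format itself. A minimal, hand‑constructed cell (the PNG payload is fake, purely for illustration) shows how outputs sit base64‑encoded next to the source:

```python
import base64
import json

# A minimal code cell as it would appear inside an .ipynb file: the
# rendered output is stored as a base64 blob right next to the source,
# so re-running a notebook rewrites large opaque strings in the diff.
cell = {
    "cell_type": "code",
    "source": ["plot()"],
    "outputs": [{
        "output_type": "display_data",
        "data": {"image/png": base64.b64encode(b"\x89PNG...fake...").decode()},
    }],
}

# Stripping outputs before committing -- roughly what
# `jupyter nbconvert --clear-output` or a jupytext .py pairing achieves --
# leaves only the code, which diffs cleanly.
cell_clean = {**cell, "outputs": []}
print(len(json.dumps(cell)), len(json.dumps(cell_clean)))
```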

Marimo as the De‑Facto Successor (in the Thread)

  • Marimo is repeatedly recommended as the real Jupyter successor. Praised features:
    • Notebooks as plain .py files (git‑friendly, no embedded output).
    • Optional reactivity with deterministic execution and no hidden state.
    • UI amenities: multi‑column layout, interactive widgets, DataFrame viewers, SQL cells, LLM integrations.
    • Static web export via WASM.
  • Downsides: currently Python‑only; some dislike losing persistent “messy state” workflows; recent acquisition by CoreWeave raises enshittification/lock‑in concerns.

Broader Notebook vs Script / Standardization Discussion

  • Ongoing tension: some prefer plain scripts for simplicity and portability; others view notebooks as superior REPLs and narrative/teaching tools.
  • Several argue notebooks should compile to high‑quality, static, executable HTML for publication, not be the final artifact themselves.
  • Concerns are raised about corporate control of de‑facto standards vs nonprofit stewardship like Jupyter’s, and about startups using “successor” language as marketing rather than community consensus.

Codemaps: Understand Code, Before You Vibe It

Reactions to Codemaps and Windsurf

  • Several senior engineers report strong satisfaction with Windsurf, calling it “miles ahead” of some competitors and highlighting Codemaps as a standout feature that improves code understanding and UX.
  • Others found Windsurf “trash” in practice, complaining it generates unwanted changes and increases review/deletion overhead compared to writing code manually.
  • Codemaps is praised for reducing duplicated code and making it easier to tag/collect relevant abstractions. Some users already used similar workflows manually (e.g., AGENTS.md, requirements docs).
  • UX feedback: current sidebar view is too cramped; users strongly want Codemaps in the main editor pane. The team quickly agreed and said a PR already exists.

Comparisons with Other AI Coding Tools

  • Users compare Windsurf with Cursor, Claude Code, Codex, GitHub Copilot Agent Mode, Zed (via ACP), OpenCode, and abacus.ai.
  • Some say Windsurf has the best overall UX; others prefer Codex for cloud environments and superior PR review bots; some are sticking with VS Code + GitHub Agent Mode + Sonnet due to flexibility and pricing.
  • CLI-heavy workflows may find Windsurf less natural, though its Cascade/terminal-in-chat pattern is called out as strong.
  • Zed’s ACP is appreciated for being editor-agnostic and avoiding lock-in.

Value of Code Visualizations vs Business Context

  • One camp argues Codemaps-like diagrams are limited: knowing dependencies and flows without “why” (business context and design rationale) is insufficient; traditional design docs and reading code are seen as enough.
  • Others counter that:
    • LLMs can use whatever context you provide (docs, AGENTS.md, comments).
    • A lot of business context leaks into code anyway.
    • For many tasks (especially debugging and onboarding/context switching), structural understanding alone is highly valuable.
  • Comparison to long-standing static-analysis diagrams: skeptics see little novelty; proponents argue LLMs add judgement about what to surface and at what level of abstraction, avoiding “machine-code-like” diagrams.

Skepticism About AI Coding Productivity

  • Some strongly doubt AI tools improve throughput, citing studies where self-reported productivity gains didn’t match measured output, and observing friends mostly use AI for tasks they already know how to do.
  • Others report large practical wins (e.g., prototyping SaaS quickly, delegating dead-code cleanup to agents with tools like Knip), but acknowledge issues like unused methods/files and context loss after compaction.

Trust, Scale, and Miscellaneous

  • Concerns are raised about trusting auto-generated maps: if they’re wrong, they can mislead worse than ignorance; verifying everything may negate time savings.
  • One commenter sees the product as targeted at Fortune 500–scale codebases; others note that “onboarding” is really continuous context switching even in smaller teams.
  • There’s some pushback on perceived marketing/astroturfing and on AI hype in general, plus minor side threads on Linux package upgrade instructions and prior visualization tools.

Michael Burry, a.k.a. “Big Short”, discloses $1.1B bet against Nvidia and Palantir

Burry’s Track Record and Credibility

  • Commenters are split: some see a skilled investor who profited on prior macro shorts (e.g., S&P 500 puts in 2023); others call him a “perma-bear” who has “predicted 20 of the last 1–2 crashes,” citing missed upside (e.g., GameStop squeeze) and losing Tesla/ARK-type shorts.
  • Reported performance numbers (e.g., ~56% annualized over a period; ~255% over 10 years) are mentioned but questioned as unaudited and not necessarily impressive versus broad index returns.

Size and Nature of the Nvidia/Palantir Bet

  • Many emphasize that the headline “$1.1B bet” refers to notional value from a 13F, not the premium paid.
  • 13F rules require reporting options as if they were equivalent to holding the underlying shares (delta=1), so actual capital at risk could be much smaller and is not disclosed.
  • It’s clear these are put options, not direct shorts, so downside is capped to the premiums; no margin calls on the options themselves.
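The delta = 1 reporting convention above is why the headline number overstates exposure. A minimal sketch of the arithmetic, using entirely hypothetical contract counts, prices, and premiums (not Burry's actual position):

```python
# 13F filings report long puts as if the filer held the underlying shares
# (delta = 1), so the "notional" value can dwarf the premium actually paid.
# All numbers below are hypothetical.

def notional_value(contracts: int, spot: float) -> float:
    """13F-style headline value: contracts x 100 shares x share price."""
    return contracts * 100 * spot

def capital_at_risk(contracts: int, premium_per_share: float) -> float:
    """Maximum possible loss on long puts: the premium paid."""
    return contracts * 100 * premium_per_share

contracts = 10_000       # hypothetical put contracts (100 shares each)
spot = 180.0             # hypothetical share price
premium = 12.5           # hypothetical premium per share

headline = notional_value(contracts, spot)       # what the 13F shows
at_risk = capital_at_risk(contracts, premium)    # what can actually be lost

print(f"headline notional: ${headline:,.0f}")    # $180,000,000
print(f"capital at risk:   ${at_risk:,.0f}")     # $12,500,000
```

With these made-up figures the filing would show a $180M "bet" while only $12.5M of premium is actually exposed, a ratio the thread repeatedly stresses.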

Why Puts Instead of Shorting

  • Several explanations: puts limit losses, avoid borrow/margin recall risk, and give leveraged exposure to a sharp drop.
  • Others warn that high implied volatility in NVDA/PLTR means expensive options; the market has already “priced in” a lot of crash risk, making this a tough trade to profit from unless timing is excellent.
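The "expensive options" point can be made concrete with the standard Black-Scholes put formula: holding everything else fixed, a higher implied volatility means a higher premium. A self-contained sketch with hypothetical inputs:

```python
# Black-Scholes European put price, illustrating why high implied volatility
# (IV) makes crash protection expensive. Inputs are hypothetical.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_put(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Price of a European put (spot S, strike K, T years, rate r, vol sigma)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

calm_iv = bs_put(S=100, K=90, T=0.5, r=0.04, sigma=0.30)   # quiet market
crash_iv = bs_put(S=100, K=90, T=0.5, r=0.04, sigma=0.70)  # crash risk priced in
```

The same out-of-the-money put costs several times more at 70% IV than at 30%, which is the sense in which the market has already "priced in" a lot of downside.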

Nvidia vs Palantir: Valuation and Business Reality

  • Broad agreement that Palantir’s valuation is more extreme: references to ~600x P/E and ~80x forward revenue vs Nvidia at much lower multiples despite similar growth and higher margins.
  • Nvidia: many argue shorting it is dangerous—revenue and earnings growth are huge, demand for AI compute appears real, and its software/ecosystem moat (CUDA, tooling) and lack of credible near‑term alternatives are stressed.
  • Palantir: seen by many as “government/party” infrastructure with deep integration into surveillance and defense; this political embeddedness may make the business durable but doesn’t justify current multiples in some commenters’ view.

Macro, AI Bubble, and Timing Risk

  • Ongoing debate over an AI/tech bubble: some predict an inevitable correction before upcoming elections; others note such crash calls have been made “every year” and markets keep rising.
  • “Fed put” and government support are seen as strong forces preventing deep market collapses, though some argue a sector‑specific unwinding (AI/semis) is still plausible.
  • Multiple reminders: being early on a short—even if ultimately right—can destroy capital; options suffer time decay and require getting both direction and timing right.

Options Mechanics and Retail Risk

  • Long, detailed subthreads explain puts, calls, deltas, theta decay, and the dangers of shorting and option selling.
  • Several experienced voices strongly discourage inexperienced retail traders from copying these kinds of trades, recommending “paper trading” or simple index investing instead.
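The put mechanics those subthreads walk through reduce to a simple payoff function: loss is capped at the premium, and profit only starts below the breakeven. A sketch with hypothetical strike and premium:

```python
# Long-put P&L at expiry: loss capped at the premium paid, breakeven at
# strike minus premium. Strike and premium below are hypothetical.

def long_put_pnl(spot_at_expiry: float, strike: float, premium: float) -> float:
    """P&L per share of a put bought for `premium` and held to expiry."""
    return max(strike - spot_at_expiry, 0.0) - premium

strike, premium = 150.0, 10.0
breakeven = strike - premium                          # 140.0

worst_case = long_put_pnl(500.0, strike, premium)     # stock soars: lose premium only
at_breakeven = long_put_pnl(breakeven, strike, premium)
crash = long_put_pnl(100.0, strike, premium)          # sharp drop pays off
```

This is also why timing matters: before expiry the position additionally bleeds value through theta decay, so being directionally right but early can still lose money.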

The Rust Foundation Maintainers Fund

Funding announcement lacks specifics

  • Several commenters note the announcement contains almost no concrete funding details: no amounts, criteria, timelines, or processes.
  • One participant involved with the effort says this announcement is about “finding money” and that how/what/who to fund is still being worked out in parallel.
  • Others point out the list of large corporate sponsors on the foundation’s main page and implicitly connect that to expectations around transparency.

Governance, structure, and transparency concerns

  • Strong criticism that the Rust Foundation is a 501(c)(6) (trade association) rather than a 501(c)(3) (charity).
  • Some argue the foundation would better serve the community as a 501(c)(3) with clearer, public accounting of income and expenses.
  • Skeptics question the need for a “new fund” at all, suggesting existing money should already be directed toward maintainers.
  • There is suspicion this may be a “shell game” or “sleight of hand” with existing funds, and that announcing a new fund without structural or transparency changes “bodes poorly.”

Rust vs. Zig and “language war” dynamics

  • A large part of the thread veers into Rust vs. Zig dynamics and why Rust gets more backlash.
  • One view: Rust came first and became mainstream; exposure fatigue plus some Rust skeptics coalesced around Zig and promote it by attacking Rust.
  • Several comments describe early Rust evangelism (2010s) as mostly grassroots, technical, and respectful, in contrast to today’s more combative “language wars.”
  • Some feel Zig leadership leans into adversarial, high-engagement “Rust vs. Zig” discourse, while Rust leadership is generally more restrained; others counter that parts of Rust’s core leadership historically behaved in a hostile or “supremacist” way toward non–memory-safe languages.
  • There are conflicting characterizations of which side’s leadership is more toxic; participants dispute each other’s recollections.

Culture, politics, and identity around Rust

  • One thread argues that some of the anti-Rust sentiment comes from “anti-woke” programmers who object to Rust’s inclusive culture and prominent LGBTQ presence.
  • Another long comment ties resistance to Rust to long-standing C/Unix “purity” and control ideals: Rust’s safety model and permissive licensing challenge those identities, whereas Zig is seen as fixing C’s rough edges while still “trusting the programmer.”
  • Others question how widespread these political/cultural dynamics really are, but agree this thread in particular is unusually heated.

NoLongerEvil-Thermostat – Nest Generation 1 and 2 Firmware

Project approach & technical details

  • The current image is largely stock Nest Gen1/2 firmware with a small boot script (/bin/nolongerevil.sh) added.
  • That script injects its own trust material and overrides DNS/hosts so traffic for Nest’s cloud (e.g., frontdoor.nest.com) is redirected to a hard‑coded IP of the new backend.
  • A fake Nest root CA is added so the device will trust certificates from the new server; this effectively subverts the original TLS trust chain.
  • Exploitation relies on known Nest bootloader vulnerabilities (via OMAPLoader) to gain filesystem access. Some are surprised it’s this easy to replace the root of trust; others note most IoT gear doesn’t pay for robust secure boot.
  • Multiple people see this as a stepping stone toward full custom firmware and/or MQTT integration, possibly with Home Assistant.
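The redirect mechanism described above amounts to a hosts-style override that wins over normal DNS. A toy model (the IPs and fallback resolver below are placeholders, not the project's actual addresses):

```python
# Toy model of the hosts/DNS override the boot script performs: lookups for
# Nest's cloud endpoint resolve to a hard-coded replacement-backend IP, and
# everything else falls through to real DNS. IPs are documentation addresses.

HOSTS_OVERRIDE = {
    "frontdoor.nest.com": "203.0.113.10",   # hypothetical new-backend IP
}

def resolve(hostname: str, dns_lookup=lambda h: "198.51.100.1") -> str:
    """Consult the override table first, then fall back to normal DNS."""
    return HOSTS_OVERRIDE.get(hostname, dns_lookup(hostname))
```

On the real device this only works because the injected root CA makes the device accept the replacement server's TLS certificates; redirecting the hostname alone would fail the original certificate check.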

Open source, trust, and “no longer evil” claims

  • The backend server and code are not yet open source. The site promises they’ll be released “soon,” after a community bounty is processed.
  • Several commenters are uneasy that users are currently trading Google’s proprietary cloud for another opaque service without a privacy policy or self‑hosting support.
  • Others argue the reverse‑engineering work is substantial and that early imperfect releases are still valuable.
  • There is debate over whether this qualifies as “new firmware” for bounty purposes, given it mostly redirects traffic rather than replacing Google’s code.

Reactions to Google EOL and e‑waste

  • Many owners feel burned by Google disabling cloud functionality, saying they’ll avoid future Google hardware and preferring devices that integrate locally with Home Assistant.
  • Some insist the devices aren’t literal e‑waste because the thermostat still functions offline, but others counter that the premium price was for now‑removed “smart” features.

Alternatives, DIY efforts, and safety

  • Recommendations center on Z‑Wave/Zigbee/Matter thermostats with local control, especially Honeywell T6 Pro, Venstar, and various OpenTherm or EMS-ESP boiler controllers.
  • Some are designing replacement PCBs and fully custom firmware for Nest hardware.
  • Commenters stress HVAC safety, especially with gas systems, and urge the project to add explicit “no warranty” licensing and legal disclaimers.

Pg_lake: Postgres with Iceberg and data lake access

Overall reception & positioning

  • Many see pg_lake as a big milestone: “Postgres with an open data lake,” close to an “open source Snowflake” for some workloads.
  • Others stress it is not a Snowflake replacement: Snowflake still leads on cross-org sharing, governance, large-scale analytics, and broader platform features.

Vendor lock-in, cost, and Snowflake strategy

  • One camp: “Just pay Snowflake” – managed infra, reliability, and focus on product value outweigh theoretical lock-in; everything has some lock-in (cloud, hardware, OSS ecosystems).
  • Opposing view: compute on proprietary warehouses is very expensive; you “pay to see your own data,” especially for BI/visualization workloads. Iceberg/Parquet-on-S3 avoids this by letting many tools query the same storage.
  • Several call out Snowflake’s high compute and storage pricing relative to raw cloud costs.
  • Some argue Snowflake supports Iceberg for strategic reasons: to stay competitive as Iceberg becomes a standard and enable bi‑directional migration.

Architecture & query execution model

  • Postgres remains the frontend and catalog; DuckDB is the analytical engine behind a separate pgduck_server process.
  • Foreign tables (USING iceberg) map to Iceberg/Parquet data; pg_lake analyzes queries and pushes down what DuckDB can execute efficiently.
  • Simple queries can be fully executed in DuckDB; more complex ones are split between DuckDB and Postgres.
  • Separate process chosen for threading, memory-safety, shared caching, restartability, and clearer resource limits.
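The routing rule above can be sketched as a toy classifier: this is a conceptual illustration of the described behavior, not pg_lake's actual planner code, and the table names are invented.

```python
# Conceptual sketch of pg_lake-style query routing: queries touching only
# lake-backed foreign tables can run entirely in DuckDB; mixed queries are
# split, with Postgres coordinating. Table sets are hypothetical.

ICEBERG_TABLES = {"events", "telemetry"}   # foreign tables USING iceberg
LOCAL_TABLES = {"users", "orders"}         # ordinary Postgres heap tables

def route(tables_referenced: set[str]) -> str:
    if tables_referenced <= ICEBERG_TABLES:
        return "full DuckDB pushdown"
    if tables_referenced <= LOCAL_TABLES:
        return "plain Postgres"
    return "split: DuckDB scans lake tables, Postgres joins the rest"
```

A join between a local `users` table and a lake-backed `events` table, for example, would take the split path: DuckDB handles the Parquet scan while Postgres finishes the join.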

Use cases and benefits

  • Periodically offloading “hot” Postgres data to cheap Iceberg/Parquet storage while still querying it (tiered storage).
  • Querying large S3/GCS-based datasets (e.g. logs, telemetry) from the same Postgres used for OLTP, including joins with local tables.
  • Simplifying ETL/ELT pipelines that currently shuffle data between Postgres and data lakes via custom jobs.
  • Easy COPY to/from Parquet; schema auto-detection from existing Parquet files.

Comparisons to related projects

  • DuckLake: DuckDB as frontend and engine with Postgres as catalog; pg_lake inverts this, with Postgres as frontend and catalog and DuckDB as engine, using Iceberg for interoperability.
  • pg_mooncake: similar vision (Postgres+lakehouse), but commenters describe pg_lake as more mature and already used in heavy production.
  • pg_duckdb: embeds DuckDB per Postgres backend; pg_lake’s authors prefer a single external DuckDB instance for stability and resource control.

Access control & security

  • S3 access is configured via DuckDB “secrets” (credentials/IAM roles) in pgduck_server.
  • Postgres-side privileges are coarse-grained (pg_lake_read/write roles); finer-grained, per-table grants would need more work.
  • Some interest in integrating with enterprise IAM that understands SQL grants better than S3 policies.

Limitations, maturity, and open questions

  • Type mapping: most Postgres types are supported via Parquet equivalents or text; very large numerics and some edge cases are limited.
  • External Iceberg read-only support exists but is currently constrained; REST catalog support is new and not fully documented.
  • Scaling: today it’s a single DuckDB instance on the same machine; good for many workloads but not a distributed engine. Concerns raised about “hot neighbor” problems and memory-intensive queries; answer is mostly “use more RAM / careful sizing.”
  • One commenter is broadly skeptical of data lakes and filesystem-backed analytics in general, calling the whole paradigm misguided.

Server DRAM prices surge 50% as AI-induced memory shortage hits hyperscalers

Scope of the DRAM Price Spike

  • Commenters report large increases across the board:
    • Desktop DDR5 nearly doubling in ~2 months; multiple anecdotes of 25–100% jumps vs late 2023 / early 2024.
    • DDR4 also rising as demand spills over; server RDIMM sticks that were ~$90 now seen at ~$430.
    • Even used ECC and desktop RAM on eBay has roughly doubled compared to year‑old posts.
  • Some say RAM had become “ridiculously cheap” pre‑spike; others strongly reject the idea that higher prices are “more reasonable.”

Regional Differences and Tracking

  • PCPartPicker trends are confirmed to be US‑centric; price rises there are clear.
  • UK and Japan users also report recent spikes using Amazon/camelcamelcamel and local price trackers.
  • Southern Europe data appears flatter to some; others insist prices are up ~40% across Europe, suggesting delays or low turnover in local channels.
  • PCPartPicker adds EUR‑grouped trends during the thread in response to these questions.

Causes: AI Demand, Hoarding, and Supply Constraints

  • Links cite:
    • OpenAI’s Stargate plans potentially consuming a large fraction of global DRAM output.
    • SK Hynix sold out of production for next year; Adata saying AI datacenters are “gobbling up” DRAM, SSDs, HDDs.
  • Hyperscalers reportedly hoard GPUs that can’t even be powered yet, indirectly hoarding attached RAM.
  • Some speculate on bulk buying and speculative reselling; others note that previous attempts to flip DDR4 weren’t highly profitable.

Manufacturer Strategy and Market Power

  • Several comments argue manufacturers learned from past oversupply crashes and now deliberately underproduce rather than risk low prices; collusion is hinted at but not proven.
  • Others counter that shortages are dangerous for vendors and that maximizing output to meet demand is still most profitable.
  • Another view: fear, inertia, and technical limits (e.g., HBM vs commodity DRAM, long fab lead times) explain the slow response more than conspiracy.

Impact on Consumers and Builders

  • Many regret “just missing” the cheap era when building PCs, NAS boxes, or high‑RAM workstations.
  • DDR4 systems (e.g., AM4) are touted as a relative safe harbor.
  • Some liken the situation to prior GPU booms where high‑end demand cascaded down and even “junk” parts became valuable.

AI Trajectory and Efficiency Debate

  • Some hope the DRAM crunch will force smaller, more efficient models (quantization, MoE, distillation).
  • Others respond that intense work on inference efficiency has been ongoing from day one, with many architectures and hardware startups already chasing lower costs.
  • One faction hopes the “AI craze” crashes to normalize prices; another argues AI demand will persist and is needed to fund advanced fabs.

This week in 1988, Robert Morris unleashed his eponymous worm

Date and article accuracy

  • Commenters note confusion between 1988 vs 1998 and Nov 2 vs Nov 4; consensus is the worm was released Nov 2, 1988, and the HN title/article timing is just editorial sloppiness.
  • Some suggest updating Wikipedia from primary/secondary reports linked in the thread.

Morris, background, privilege, and career

  • Many are struck that after a felony conviction he still finished a PhD at an elite university and later became faculty at the institution whose network he used to mask origin.
  • Several point to his father’s senior NSA role and long security pedigree, suggesting this likely smoothed outcomes; others argue the sentence was in line with how early computer crimes were handled.
  • His later academic work (e.g., distributed systems, routing, DHTs) is portrayed as genuinely top-tier, and some say that alone explains his academic trajectory.

Intent, ethics, and legal consequences

  • Debate over whether the worm was “harmless research gone wrong” vs a knowingly reckless attempt to gain unauthorized access to every Internet host.
  • Some emphasize that even at the time, unleashing self-replicating code on others’ systems without consent was clearly unethical among technically literate people.
  • Outcome: felony conviction, probation, and fine; some think this was lenient given the scale, others say it matched norms for non-financial computer crime then.

Impact on security culture and technical lessons

  • Thread highlights how the worm pushed a shift from “trust users” to “trust mechanisms,” and helped people internalize that buffer overflows are exploitable, not just crash bugs.
  • Later work on stack overflows and widely publicized exploits is described as a second wave that finally made industry take memory safety seriously.
  • Discussion of specific exploit vectors: sendmail DEBUG mode and gets()-based buffer overflows in fingerd.

Why we see fewer similar worms

  • Reasons given: more secure defaults (firewalls, fewer exposed services), fewer trivial RCEs, OS hardening initiatives, and a shift toward scams/social engineering rather than blind worms.
  • Others note that large-scale self-spreading systems still exist (botnets, IoT malware) but are quieter, more financially driven, and often target very weak devices.

Firsthand accounts and historical context

  • Multiple posters recall the day: university networks crawling, machines repeatedly reinfected, admins yanking sendmail, or even entire countries temporarily disconnecting from the Internet.
  • Several reminisce about the much smaller, slower, research-focused Internet and the relative informality around “computer crime” compared to later decades.

Myths, numbers, and narratives

  • The famous “10% of the Internet” statistic is called out as essentially invented at the time based on a rough host-count guess.
  • Some dispute claims that the worm was the turning point for security culture, pointing to earlier hacker culture, phreaking, and publications; they see it as one major milestone among others.

Language safety and ongoing vulnerabilities

  • Commenters connect the worm’s exploits to C’s unsafe APIs; note that many newer languages (and older non-C-like systems languages) avoid these issues by design.
  • Despite decades of lessons, examples are given of modern C/C++ projects still replicating gets-style patterns, reinforcing why memory-safe languages (and constructs like slices/spans) matter.

Tesla's 'Robotaxis' Keep Crashing, Even with Human 'Safety Monitors' Onboard

Waymo vs. Tesla: Maturity and Direction

  • Many see Waymo as “years ahead” of Tesla, already operating driverless services in multiple cities, while Tesla’s robotaxis remain limited pilots with safety drivers.
  • Some argue Tesla may never achieve true self‑driving without changing direction (e.g., adding lidar), though others note multiple companies can eventually reach the goal.
  • There’s concern Tesla is already losing any first‑mover advantage as others commercialize.

Sensors and “Premature Optimization”

  • A major thread blames Tesla’s vision‑only approach and early decision to drop lidar, characterizing it as optimizing for cost before having a robust working system.
  • Waymo’s use of lidar and HD maps is framed as the opposite strategy: accept higher hardware cost to gain reliable performance and operational data, then optimize cost later.
  • Several posters note lidar prices have already dropped dramatically and will likely continue to fall, undermining Tesla’s original cost argument.

Economics and User Priorities

  • Debate over whether robotaxis will compete mainly on price per mile or on comfort/style.
  • Some think Tesla and Chinese OEMs can dominate if they reach low cost per mile; others argue car cost per km is only a modest part of the fare and that safety, comfort, and brand will matter.
  • Long digression on how Americans value time, image, and convenience over pure transport cost.

Crash Rates, Safety, and Data Transparency

  • Cited figures: ~4 Tesla robotaxi crashes in ~250k miles vs Waymo roughly one crash per ~98k miles, with Tesla’s having safety drivers and Waymo’s not. Some claim Tesla’s rate is ~10× humans; others challenge the methodology.
  • Posters stress that comparisons must consider severity, fault, and driving context (urban vs highway), as well as interventions by safety drivers—data Tesla does not disclose.
  • Waymo is praised for detailed public safety datasets; Tesla is criticized for redactions and avoiding regimes (like California permits) that require reporting.

Media Framing and Bias

  • Several see the Miami Herald piece as a “hit” or clickbait, pointing to an unrelated burned‑Tesla video at the top and emphasis on very low‑speed incidents.
  • Others counter that Tesla’s broader Autopilot/FSD safety record justifies skepticism and tougher scrutiny than individual fender‑benders suggest.

Trust, Liability, and Readiness

  • Some argue machines must be an order of magnitude safer than humans to be socially accepted, given accountability concerns.
  • One current FSD user reports heavy daily use but says it is clearly not ready for unsupervised operation, still making “silly” and sometimes dangerous errors.
  • Broader worry that companies are prioritizing hype and stock price over transparent safety metrics, eroding public trust in AVs generally.

Modular monolith and microservices: Modularity is what matters

Core Theme: Modularity Over Architecture Labels

  • Broad agreement that modularity, not “monolith vs microservices,” is the key design concern.
  • Good modularity means clear domains, explicit contracts/APIs, clean dependency trees, and the ability to evolve or extract pieces with minimal pain.
  • Several note you can have:
    • A single deployable that behaves like multiple services (different roles via config, different routes, horizontal scaling per endpoint).
    • Many deployables that are effectively a tightly coupled monolith due to unmanaged API changes.
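The "single deployable, many services" pattern in the first sub-bullet is often just environment-driven dispatch. A minimal sketch, with hypothetical role names and handlers:

```python
# Sketch of "one deployable, many roles": the same artifact is launched with
# a ROLE environment variable and dispatches to a different entry point.
# Role names and handlers are hypothetical.
import os

def serve_http() -> str:
    return "serving HTTP"

def run_worker() -> str:
    return "processing background jobs"

ROLES = {"web": serve_http, "worker": run_worker}

def main(env=os.environ) -> str:
    role = env.get("ROLE", "web")   # default role when unset
    return ROLES[role]()
```

Horizontal scaling per role then becomes an orchestration concern: run N copies with `ROLE=web` and M with `ROLE=worker`, all from one build.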

Enforcing Modularity

  • Main challenge is not the idea but enforcement over time and headcount.
  • Three approaches discussed:
    • Social – an “architect as gatekeeper”; works only for small teams.
    • Education/culture – tends to drift.
    • Tooling – e.g., multi-module builds that block forbidden imports; language ecosystems differ here.
  • Strong top‑down direction (from CTO/executives) is seen as necessary to keep modularity and avoid microservices-by-default.
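The tooling approach can be as simple as a lint pass over import statements. A minimal sketch using Python's standard `ast` module; the boundary rule itself is hypothetical:

```python
# Minimal sketch of tooling-enforced module boundaries: parse a source file
# and flag imports that cross a forbidden boundary. The rule set is
# hypothetical (here: billing code may not import ui).
import ast

FORBIDDEN = {("billing", "ui")}   # (importing package, forbidden target)

def boundary_violations(package: str, source: str) -> list[str]:
    """Return the names of imports in `source` that violate FORBIDDEN."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = ([alias.name for alias in node.names]
                     if isinstance(node, ast.Import)
                     else [node.module or ""])
            for name in names:
                if (package, name.split(".")[0]) in FORBIDDEN:
                    violations.append(name)
    return violations
```

Run in CI over every file, a check like this makes the dependency rules mechanical rather than a matter of review-time vigilance, which is the point of the tooling option.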

Microservices: Benefits and Costs

  • Pros cited:
    • Network boundaries force people to think about contracts, data passed, and backwards compatibility.
    • Independent deployability and dependency upgrades; each service can move at its own pace.
    • Organizational scaling: teams own services, align with Conway’s law, and can be staffed/operated independently.
    • Isolation of failures and scaling hotspots (e.g., video encoding, high‑traffic endpoints).
  • Cons cited:
    • Explosion of services (“nano-services”), more teams, more tooling, more operational and security surface.
    • Debugging and development friction when many services must be running; hard local setups.
    • Versioning pain at boundaries; breaking API changes become much harder.
    • Often misused for low‑traffic, low‑complexity systems where a monolith would suffice.

Monoliths & Modular Monoliths

  • Many argue 99%+ of apps are better off starting as a monolith, scaled vertically and then horizontally as needed.
  • Modular monolith strategies: vertical slices by feature, shared libraries, environment‑driven role selection, separate deployments of the same codebase.
  • Good monoliths can handle substantial scale and are easier to reason about, debug, and refactor.
  • Pathology cases (giant, outdated monoliths with huge startup time and tech debt) are blamed on deferred maintenance and poor tooling, not on monoliths per se.

Nuance & Spectrum

  • Multiple commenters frame this as a spectrum: from a single, well‑structured deployable; to a few coarse‑grained services; to hyper‑granular microservices.
  • Consensus trend: start simple and modular, split into services only where scale, organizational structure, or security/data‑isolation clearly justify the added complexity.

Former US Vice-President Cheney Dies

Cheney’s Legacy and Accountability

  • Strong consensus that his record—especially post‑9/11 policy—is deeply negative and morally stained.
  • Some argue it is fitting he lived to see his family sidelined within the modern Republican Party, though others say that exile was about opposing Trump, not his earlier record.
  • Minority view sees him as a “lesser evil”: a dangerous but ultimately system‑bound operator who handed over power peacefully and “didn’t blow it all up,” provoking sharp pushback as callous toward victims.

Wars, Profiteering, and Casualties

  • Iraq and Afghanistan labeled “forever wars” that helped fuel the rise of Trump and damaged U.S. strategic standing.
  • Heavy emphasis on Iraqi civilian deaths (hundreds of thousands) versus the more commonly cited U.S. military toll.
  • Halliburton is repeatedly cited as emblematic of the military‑industrial complex and alleged war profiteering, including its role in Vietnam and Iraq and its massive payout to Cheney preceding his vice presidency.
  • Some see his daughter as continuing a hawkish, pro‑war line.

Executive Power and Civil Liberties

  • Cheney portrayed as perhaps the most powerful vice president, driving expansion of executive authority and the “unitary executive” theory.
  • Detailed criticism of his role in warrantless surveillance, torture, secret prisons, Guantánamo, and turning the “war on terror” into a near‑global battlefield.
  • Successor administrations are criticized for decrying these power grabs rhetorically while “pocketing” most of them in practice.

U.S. Parties, War, and System Design

  • Dispute over whether war profiteering is uniquely Republican: several argue both parties support Pentagon spending when it’s “their” war (e.g., Ukraine).
  • One thread links Cheney’s actions to the inherent dangers of presidential systems: dual mandates, difficulty removing leaders, and personality cults, arguing the U.S. constitution is showing its age.

PNAC, Foreign Policy Agendas, and Influence

  • Commenters highlight Cheney’s involvement in the Project for the New American Century and its pre‑9/11 advocacy for regime change and U.S. dominance.
  • Some connect this to broader Israel‑aligned policy networks; others push back against the idea that a foreign state “controls” U.S. policy, framing it instead as aligned interests and domestic lobbies.
  • Noted that such agendas are often published openly (PNAC, “Project 2025”) yet still surprise the public when implemented.

Public Memory, Humor, and Death

  • The hunting‑accident shooting of a lawyer is recalled as a symbol of his power, especially the victim publicly apologizing afterward; it also fueled enduring jokes and pop‑culture portrayals.
  • Brief thread compares reactions to Cheney’s death with those to other controversial figures (Castro, Jack Welch), arguing HN sentiment reflects political alignment and perceived personal impact.
  • Several comments reflect on the “equalizing” nature of death, noting that no degree of power spared Cheney from it, even if he died surrounded by family, unlike many he affected.

Studio Ghibli, Bandai Namco, Square Enix Demand OpenAI Stop Using Their IP

Anti-piracy analogy & data harvesting

  • Many compare AI training on copyrighted works to classic piracy: “downloading content for AI training is stealing.”
  • Others argue the “you wouldn’t steal a DVD/car” analogy is weak because digital copies have zero marginal cost and harm is indirect or market-dependent.
  • Some highlight the irony that past anti-piracy campaigns themselves used infringing material, underscoring the complexity and hypocrisy around IP.

Ads, attention, and what counts as “payment”

  • One side claims pervasive advertising “steals” time, attention, mental health, and device resources.
  • Counterargument: viewing ads is a voluntary payment for a service; you can refuse by not using the service or by paying directly.
  • Tension appears when companies call ad-blocking “theft” while asserting ads are a fair exchange.

Transformative use, scale, and AI vs humans

  • Broad agreement that AI pushes the limits of “transformative use” doctrines: the law never anticipated systems that ingest everything and output in any style at scale.
  • Some insist embedding works in vector spaces is not meaningfully transformative; others say we don’t fully understand human creativity either, so process-based distinctions may be shaky.
  • A recurring theme: scale and automation change the ethical and legal calculus even if AI “learned” similarly to humans.

Style, copyright, and legality

  • Several comments stress that styles are generally not copyrightable; specific characters, plots, and compositions are.
  • Disagreement over whether painting in “Ghibli style” is infringement or simply fair use / non-actionable inspiration, especially for non-commercial personal work.
  • Others argue that when a commercial product (e.g., OpenAI) systematically enables Ghibli-like output and sells access, it crosses into direct competition and likely infringement.

Artist livelihoods and cultural impact

  • Strong concern that AI undermines artists’ ability to earn a living by cheaply cloning styles built over lifetimes.
  • Some say this is akin to “corporate piracy” or exploitation; others counter that art has always been copied and that business models—not art itself—must adapt.
  • A few take a hard line: many artists may have to “get a stronger business” or leave the profession; others warn that losing working artists degrades culture, critical thinking, and “human” entertainment.

Enforcement, jurisdictions, and future models

  • Debate over whether model training is currently illegal; some say it’s clearly willful commercial infringement, others assert training is lawful but outputs may infringe.
  • Non-US perspectives note that many countries lack broad fair-use concepts; examples from Japan suggest that even using Ghibli-like AI images commercially could trigger counterfeiting laws.
  • Some expect outcomes analogous to Napster (banned) vs YouTube (licensed); others predict large payouts, “firewalls” around national IP, or robots.txt-style opt-outs becoming mandatory.

Fairness, double standards, and what the law ought to be

  • Several point out a double standard: individuals happily pirate but condemn OpenAI; big tech invokes “fair use” aggressively while defending its own IP.
  • Others emphasize that beyond what’s currently legal, society must decide whether it’s fair for a few companies to appropriate “the treasure of humanity” without consent or attribution.
  • There’s no consensus: views range from “end fair-use harvesting” to “I hate copyright more than I hate AI companies,” with many admitting they are genuinely torn.

Over $70T of Inherited Wealth over the Next Decade Will Widen Inequality, Economists Say

Capitalism, Socialism, and Inequality

  • Several comments frame rising inherited wealth as a natural outcome of capitalism; the proposed countermeasures are progressive taxation and strong public education.
  • Others argue socialism/communism perform worse (corruption, shortages, lack of incentives), though some distinguish social democracy from “actually existing” state socialism.
  • European social democracies are cited both as evidence that socialism-ish systems can be rich and as examples where history/colonialism, not just policy design, drove wealth.

Effectiveness and Design of Inheritance/Wealth Taxes

  • One major thread: inheritance and wealth taxes have often been tried (France, Sweden, others) and rolled back as ineffective or distortionary; skeptics point to capital flight, tax havens, and corruption.
  • Others counter that this is precisely why global or at least bloc-wide (US/EU) progressive wealth taxes are needed; unilateral moves fail because “capital has no nation.”
  • Many proposals discuss high tax-free thresholds (e.g., first $1–2M per heir or per lifetime), then steeply progressive rates up to near-100% on very large bequests, often combined with sovereign wealth funds or per‑adult “inheritance” at 21.
  • Practical issues raised: asset valuation, easy avoidance via trusts/gifting, and the risk that mid‑upper‑middle inheritors get hit while billionaires don’t.
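The threshold-plus-progressive-rates structure floated in the thread is easy to make concrete. A sketch with entirely hypothetical brackets (not any real tax code):

```python
# Illustration of the bracket structure discussed above: a large tax-free
# threshold per heir, then steeply progressive marginal rates. All brackets
# and rates are hypothetical.

BRACKETS = [                   # (upper bound of bracket, marginal rate)
    (2_000_000, 0.00),         # tax-free threshold per heir
    (10_000_000, 0.30),
    (100_000_000, 0.60),
    (float("inf"), 0.90),      # near-confiscatory top rate on huge bequests
]

def inheritance_tax(bequest: float) -> float:
    """Tax owed on a bequest under the hypothetical marginal brackets."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if bequest > lower:
            tax += (min(bequest, upper) - lower) * rate
        lower = upper
    return tax
```

Because the rates are marginal, a $1M bequest owes nothing and a $10M bequest owes tax only on the $8M above the threshold, which addresses the worry that mid-upper-middle inheritors get hit hardest.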

Housing and Intergenerational Wealth

  • Strong view that in much of Europe, inheritance is the only realistic path to home ownership; critics say this is a housing supply and asset-bubble problem, not a justification for untaxed inheritance.
  • Housing is described as a core wealth engine and de facto retirement plan, with policy (e.g., “property ladder”) built around constant price appreciation, which shifts wealth from younger to older cohorts.
  • Suggested fixes: land value taxes, deregulated/streamlined building, anti‑NIMBY rules, large-scale public/social housing, and heavier taxation of multiple/investment properties.

Is Inequality Itself the Problem?

  • One camp says only overall living standards matter; inequality per se is “not an intrinsic bad.”
  • Others argue extreme wealth inequality implies extreme power inequality, political capture, and eventual instability/violent redistribution; communism or other radical shifts are seen as a likely backlash if current trends persist.

Inheritance, Compounding, and Fairness

  • Debate over whether inheritance increases inequality or simply preserves it: critics highlight compounding returns (r>g) and multi‑generation examples where capital income outpaces a lifetime of labor.
  • Moral views diverge: some see inheritance tax as “mafia theft” or “grave robbing”; others emphasize that unearned wealth corrupts, entrenches dynasties, and that society has a claim once needs are comfortably met.
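The r > g compounding argument above can be illustrated with a toy comparison of capital returns against wage growth. All figures are hypothetical:

```python
# Toy illustration of r > g: a large inheritance compounding at 5% vs a
# salary growing at 2%, over 30 years. Numbers are hypothetical.

def compound(principal: float, rate: float, years: int) -> float:
    """Value of `principal` after `years` of growth at `rate`."""
    return principal * (1 + rate) ** years

capital_gain = compound(5_000_000, 0.05, 30) - 5_000_000       # investment income
lifetime_wages = sum(compound(60_000, 0.02, y) for y in range(30))

# With these inputs, the heir's passive gains exceed the worker's entire
# 30-year earnings, before the worker pays a single expense.
```

This is the multi-generation dynamic critics point to: when the return on capital outpaces wage growth, an inheritance can out-earn a lifetime of labor without the heir working at all.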