Hacker News, Distilled

AI powered summaries for selected HN discussions.


Tesla Cybertruck Drives Itself into a Pole, Owner Says 'Thank You Tesla'

Incident and Driver Responsibility

  • The Cybertruck owner reports FSD failed to merge as a lane ended, hit a curb and then a light pole, yet publicly thanks Tesla and blames himself.
  • Many see this as misplaced responsibility: legally the human is at fault, but commenters argue Tesla also bears responsibility for marketing something called “Full Self Driving” while shifting liability to drivers.
  • Some suspect social-media clout or cognitive dissonance: easier to say “I screwed up” than “something I believe in (FSD/Tesla) is unsafe.”

FSD Safety, Quality, and Edge Cases

  • Numerous owners describe FSD and Autopilot as anxiety‑inducing: random slowdowns, phantom braking, weird lane choices, confusing UI, and dramatic “take control now” errors.
  • Others report thousands of miles on v13 with no interventions and claim it’s dramatically reduced fatigue and feels safer than many human drivers.
  • Several argue partial automation (SAE Level 2/3) is inherently dangerous because humans can’t maintain vigilance while not actively driving; some propose banning Level 3 specifically.

Comparison with Other Automation Approaches

  • Waymo is repeatedly cited as a counterexample: constrained geofenced operation, more sensors (LIDAR/radar), better safety metrics, and corporate willingness to accept liability.
  • Debate over whether real self‑driving requires near‑general intelligence or just extremely conservative “don’t hit anything” behavior plus richer sensing; Tesla’s camera‑only approach is widely criticized.

Testing, Data, and Regulation

  • Disagreement over how much real‑world beta testing on public roads is acceptable. Some see it as necessary progress; others as unethical experimentation on uninformed third parties.
  • Calls for stricter regulation, mandatory reporting, and independent auditing of safety data; skepticism toward Tesla’s self‑reported statistics and avoidance of stricter jurisdictions.

Cybertruck Design and Broader Ethics

  • Many view the Cybertruck as prioritizing occupant safety and aggressive aesthetics over safety of others, calling it a “death trap” and noting its illegality or de‑facto bans in parts of Europe.
  • Concerns that normalization of distraction (phones, FSD overtrust) is under‑punished compared to DUI, and that society is shifting norms to excuse risky tech in the name of “progress.”

Reviving the joy and honor of working with your hands (2015)

DIY Housebuilding and Trade Skills

  • Large subthread on whether an individual can quickly learn trades to build houses.
  • Some argue residential plumbing/electrical work is conceptually simple and can be learned in weeks, especially by smart, motivated people; new construction is framed as “stupid simple” compared to troubleshooting old work.
  • Others strongly dispute this, emphasizing safety, long-term reliability, and edge cases that only experience teaches; they say they wouldn’t trust a “60‑day electrician.”
  • Building codes and inspections are cited as guardrails; disagreement over whether licenses ensure competence or mainly enforce time-in-apprenticeship “rackets” that gatekeep competition.

Quality, Specialization, and Incentives

  • One side prefers large, specialized crews that have built thousands of houses, citing efficiency and accumulated expertise.
  • Others argue a person building their own house has stronger incentives to exceed minimum standards and avoid corner-cutting, especially compared to production builders working to code minimums.
  • Some note hybrid approaches: DIY generalists who hire out specific trades where specialized tools or regulations make professionals the better choice.

Vocational vs Academic Paths and Status

  • Multiple comments lament the rigid academic/vocational split after school and the cultural devaluation of hands‑on paths.
  • Anecdotes from the UK, Germany, and the US describe tradespeople (plumbers, electricians) often living better than degree-holders, yet still lower status.
  • Stories of vocational tech schools, military-style structure, and older generations of engineers who were required to use machine tools.

Physical Toll, Aging, and Career Switching

  • Acknowledgment that many trades are physically punishing: injuries, chronic pain, harsh environments; some counter that sedentary tech workers also suffer from back issues.
  • Debate over starting a trade in one’s 40s–50s: seen as possible via community college and eventual self-employment, but early wages, injury risk, and physical demands are major barriers; often perceived as a “young man’s game.”

AI, Robotics, and Future Prospects

  • Question raised whether trades will still be “worth it” in 10 years given AI progress.
  • Most responses expect persistent demand: robots are viewed as far from handling messy, embodied tasks like snaking toilets or complex on-site work.
  • Some speculate that if AI erodes office jobs, trades may become more competitive but also better positioned to capture value, since people will still pay a premium to “make water flow again.”

Maker Movement, Shop Class, and Cultural Value

  • Nostalgia for defunct makerspaces (e.g., TechShop) and concern that remaining spaces skew toward lightweight, kid-oriented activities rather than “big iron.”
  • Several references to “Shop Class as Soulcraft” and Nordic “slöjd” as models that integrate handcraft into education to build judgment, respect for labor, and practical understanding of materials.
  • Many describe deep satisfaction, relaxation, and cognitive benefits from machining, woodworking, and similar crafts, while some caution against romanticizing trades or equating manual work with moral superiority.

UnitedHealth hired a defamation law firm to go after social media posts

Reaction to UnitedHealth’s Defamation Strategy

  • Many see hiring a “reputation defense” firm as doubling down on bad behavior instead of fixing claim-denial practices.
  • Multiple comments invoke a Streisand-effect framing: trying to silence critics will amplify criticism.
  • Some describe the move as “wounded animal lashing out” and evidence the company cares only about share price, not patients.

Dispute Over the Surgeon’s Social Media Story

  • The original viral claim: insurer called mid-surgery, forcing the surgeon to scrub out to justify an inpatient stay, then denied the stay and sent a legal threat over her posts.
  • The lawyer letter (linked and read by commenters) reportedly asserts:
    • Calls were labeled non-urgent and requested a callback “when convenient.”
    • Hospital paperwork requested outpatient/observation, not inpatient, prompting the calls.
    • The surgeon allegedly acknowledged her office’s coding error on a recorded call.
  • Several commenters find this plausible and blame hospital administration or reception for escalating the call into an OR interruption.
  • Others note that “call back when convenient” is effectively “answer now or face long delays and denials,” so the pressure on clinicians is still real.
  • Whether specific statements by the surgeon are actually false or just exaggerated is viewed as unresolved/unclear.

Defamation, Free Speech, and Chilling Effects

  • Debate over whether the surgeon’s conduct meets U.S. defamation standards (actual malice, reckless disregard).
  • Some argue a C&D letter after a public post is an intimidation tactic, likely to chill criticism even if a lawsuit would fail.
  • Others counter that provably false statements about specific facts can legitimately be challenged.

Broader Critique of U.S. Health Insurance

  • Many describe insurers’ business model as “delay, deny, wear you out,” backed by personal stories of months-long fights over obviously necessary care.
  • One thread notes UnitedHealth’s high claim-denial rate and links this to incentive structures (80/20 rule, profit on paid claims vs. opex).
  • Comparisons:
    • Kaiser-style integrated systems can avoid some denial dynamics, but may under-prescribe complex/rare treatments.
    • Non-profit insurers and public systems still have administrative overhead; profit removal alone won’t fix everything.
  • Comments mention looming UnitedHealth layoffs, offshoring, and “enshittification” — extracting more value while letting service quality and reputation degrade.

Hospitals and Providers Also Criticized

  • Several point out hospitals’ and doctors’ own abusive or incompetent billing practices, fake or miscoded procedures, and dysfunctional admin staff.
  • Some emphasize that the surgeon’s office reportedly mis-coded the claim, so not all blame belongs to the insurer.

Ethics of “Propaganda” and Journalism Quality

  • One participant openly endorses using half-true horror stories against insurers for “propaganda value,” prompting pushback about long-term damage from strategic lying.
  • Multiple commenters criticize the articles (Fortune and others) as shallow: not clearly stating which facts are contested, what the recordings show, or how typical this scenario is.
  • Overall sentiment: the system is structurally broken; insurers, providers, and regulators all share responsibility, but UnitedHealth’s legal threats exemplify the worst impulses.

IT Unemployment Rises to 5.7% as AI Hits Tech Jobs

Scope of the Unemployment Spike

  • Several commenters note IT unemployment (5.7%) vs overall (4%), but question attributing the difference to AI.
  • Some argue IT labor cycles have historically tracked major tech shifts and claim current deep unemployment is aligned with recent AI advances.
  • Others insist the primary drivers are overhiring during the pandemic, higher interest rates, and generic cost-cutting.

Skepticism About “AI Caused It”

  • Many see the AI angle as headline bait: “X as Y” / “amid Y” framing that implies causality without evidence.
  • Critiques focus on:
    • Article relying on very limited sources and one month of data.
    • Vague category definitions of “IT jobs.”
    • Journalistic habit of inventing narratives to fit whatever’s “hot” (AI).

AI as Tool vs Job Killer

  • Strong consensus that current AI is a productivity tool, not a full programmer replacement:
    • Works well for experienced devs with good specs; unreliable for non‑programmers.
    • Still needs validation, especially for security and edge cases.
  • Some report dramatic gains with newer models (e.g., generating running code from solid requirements), suggesting substantial headcount reductions or hiring freezes may eventually follow.
  • Others stress that messy requirements, politics, and system context are where humans still dominate.

Offshoring and Remote Work

  • A large thread argues most US job loss is from accelerated offshoring (Poland, Eastern Europe, India, Israel, Mexico, Brazil, etc.), not AI.
  • Reported shifts:
    • Entire engineering and product orgs, including leadership and P&L, moving abroad.
    • Direct hiring in low‑cost countries replacing earlier “sweatshop” outsourcing.
    • WFH seen as proof jobs can be done remotely, enabling global labor arbitrage.
  • Some offshore engineers express moral unease; others emphasize this is a systemic management/economic choice, not worker guilt.

Management Behavior and Capitalism

  • Many see AI as a pretext to:
    • Cut staff, freeze hiring, and demand “30% more output” with the same headcount.
    • Justify budget shifts from general IT to “AI initiatives.”
  • A minority warns that aggressive replacement of entry‑level roles with AI/overseas labor could break the talent pipeline and lead to long‑term skill shortages.

Overall Sentiment

  • Broad agreement: current data don’t convincingly show AI as the primary cause.
  • AI is affecting expectations and narratives today; real, direct displacement (if it comes) is expected to lag and be intertwined with offshoring and cost pressures.

JetBrains Fleet drops support for Kotlin Multiplatform

Reaction to Fleet Dropping KMP

  • Many are relieved JetBrains is refocusing on IntelliJ/Android Studio instead of a separate Fleet-based KMP IDE — a plan announced only months ago and now reversed.
  • Several see this as evidence that Fleet as a strategy hasn’t worked: too incomplete to replace IntelliJ, not compelling enough to draw VS Code users, and now losing its primary differentiator for Kotlin devs.
  • Some interpret this as the beginning of the end for Fleet, or at least strong deprioritization.

Kotlin Multiplatform: Promise and Pain Points

  • KMP is praised for real-world success in shared codebases across Android, iOS, desktop; some users report surprisingly smooth experiences compared to older cross‑platform tools.
  • Many consider KMP best suited for shared non-UI code (models, networking, persistence, business logic) with platform‑native UIs on top.
  • iOS interop is a major complaint: KMP exposes Objective‑C interfaces, leading to poor Swift ergonomics (weak typing on enums, reference semantics, threading caveats, Kotlin exceptions not catchable from Swift).
  • These issues create many edge cases and erode the benefits of code sharing for some teams.

Fleet’s Strategy and Positioning

  • Commenters are confused about Fleet’s purpose: VS Code competitor, experimental UI, collaboration platform, or eventual IntelliJ replacement.
  • It reuses IntelliJ backends through a “smart mode”, but that brings IntelliJ’s complexity and performance concerns along with it.
  • Some note JetBrains is juggling three UI stacks (Swing, Fleet’s custom UI on Skiko, Compose Multiplatform), which seems unsustainable.
  • A widely shared view is that keeping Fleet closed-source doomed it: no plugin ecosystem, no community momentum, and no clear path to match IntelliJ or VS Code capabilities.

JetBrains Product Line & Single-IDE Frustration

  • Long‑time users dislike having to juggle many separate IDEs (IntelliJ, CLion, Rider, GoLand, RustRover, etc.) and want “one IDE with all plugins.”
  • Some languages (Python, Go) can be added to IntelliJ via plugins, but C++ and C# remain locked to separate IDEs, complicating mixed-language projects and devcontainer/remote setups.
  • Comparison is drawn to Eclipse/NetBeans’ long‑standing mixed‑language support and JNI debugging, which JetBrains still doesn’t unify cleanly.

IntelliJ vs VS Code: UX, Performance, Philosophy

  • Strong split:
    • Pro‑JetBrains: powerful refactoring, deep code understanding, stable workflows, batteries‑included experience, project‑wide insight, superior debugging/merge/diff, especially for Java, Python, Rust, SQL, etc.
    • Pro‑VS Code: snappier UI, simpler mental model, rich LSP ecosystem, lighter resource usage, better JSON‑based config, easier to run in containers and varied environments.
  • UI debates:
    • Some consider IntelliJ’s “classic” dense UI a huge productivity win; others see it as cluttered, dated, and intimidating to newer devs.
    • The new simplified UI and icon‑only sidebars are polarizing; “classic UI” plugin is viewed by some as temporary and its eventual removal a deal‑breaker.
    • Accessibility issues (low contrast, font rendering) and “hieroglyphic” icons in JetBrains IDEs are specifically criticized by some.
  • Performance is contentious: some find IntelliJ unbearably sluggish and memory‑hungry; others report it’s fine after indexing and prefer it over VS Code’s LSP flakiness. Suggestions include switching to newer JVM GCs (e.g., ZGC) and GPU rendering for big gains.

AI Features and Competitive Gap

  • Many feel JetBrains is far behind VS Code and AI-focused forks (e.g., Cursor, Windsurf) in AI-assisted development.
  • JetBrains AI Assistant is criticized as weak and paywalled on top of existing licenses; GitHub Copilot’s IntelliJ plugin is seen as less capable than the VS Code version.
  • Some blame JetBrains’ plugin APIs and closed components for limiting advanced AI agents; others point to early, underwhelming internal efforts like “Junie”.
  • A minority is glad JetBrains hasn’t gone “all‑in on AI,” preferring minimal, non-intrusive assistance or external tools (Aider, Continue.dev, Ollama) wired in manually.

Collaboration & Real-Time Editing

  • Fleet’s real-time collaboration pitch doesn’t resonate widely; most developers rarely need simultaneous editing.
  • When used, it’s mainly for mentoring, pair programming, or remote guidance, and many find screen sharing or JetBrains’ existing “Code With Me” plugin sufficient.

Perceived Quality Trends at JetBrains

  • Some long‑term users report more memory leaks, regressions, and long‑standing bugs in recent years, especially in WebStorm and JS/TS tooling; this has pushed a subset to VS Code.
  • Others report the opposite: noticeable performance improvements vs 5–6 years ago and still regard PyCharm, CLion, Rider, RustRover, and DataGrip as market-leading.
  • Overall, there’s consensus that JetBrains still delivers uniquely powerful “power tools,” but growing disagreement over whether the tradeoffs in bloat, UX changes, and slow AI response are worth it.

E Ink’s color ePaper tech gets supersized for outdoor displays

Digital art, “programmable posters,” and home use

  • Color e‑ink for art is seen as promising but technically constrained: Kaleido panels offer ~4k colors; good for readers, marginal for high‑fidelity art and photos. Spectra 6 can look better with more primaries and dithering, but refresh is extremely slow.
  • Some argue current commercial frames ($600–$2,500) are reasonable versus buying art; others counter you’re paying for reusable “paper,” not for unique pieces.
  • Several people want large, borderless or wallpaper‑like e‑ink art walls, but current prices and yields make this more fantasy than near‑term product.
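The dithering trick mentioned above — trading spatial resolution for apparent color depth on a panel with only a handful of primaries — can be sketched in a few lines. The 6-entry palette below is an illustrative stand-in for a Spectra 6-style primary set, not E Ink’s actual specification:

```python
# Sketch: quantizing an image to a small e-ink palette with
# Floyd-Steinberg error diffusion. The palette is a hypothetical
# 6-primary set (black/white/red/yellow/blue/green), for illustration.

PALETTE = [
    (0, 0, 0), (255, 255, 255), (255, 0, 0),
    (255, 255, 0), (0, 0, 255), (0, 255, 0),
]

def nearest(color):
    """Return the palette entry closest to `color` (squared RGB distance)."""
    return min(PALETTE, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, color)))

def dither(pixels):
    """Floyd-Steinberg dither a 2D list of (r, g, b) tuples in place."""
    h, w = len(pixels), len(pixels[0])
    for y in range(h):
        for x in range(w):
            old = pixels[y][x]
            new = nearest(old)
            pixels[y][x] = new
            err = [o - n for o, n in zip(old, new)]
            # Spread the quantization error to not-yet-processed neighbours.
            for dx, dy, weight in ((1, 0, 7 / 16), (-1, 1, 3 / 16),
                                   (0, 1, 5 / 16), (1, 1, 1 / 16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    pixels[ny][nx] = tuple(
                        min(255, max(0, c + e * weight))
                        for c, e in zip(pixels[ny][nx], err))
    return pixels
```

A mid-gray input comes out as a checkerboard-like mix of black and white pixels that reads as gray at a distance — the same idea, extended across color primaries, is what makes limited-gamut panels viable for art.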

DIY and small programmable displays

  • Hobbyists are building their own frames using Waveshare panels, Raspberry Pis/ESP32s, and 3D‑printed or wooden frames, with total costs around $150 (7.3") to $420 (13.3").
  • Battery‑optimized builds can reach ~9–10 months on a single 18650 cell with one Wi‑Fi refresh per day, but require careful choices of regulators, RTCs, and power‑gating the controller board.
  • Off‑the‑shelf products like Trmnl and Visionect provide turnkey programmable e‑ink dashboards, but larger panels (~32") still cost around $2,500.
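The ~9–10 month figure is consistent with a simple average-current budget. The numbers below are illustrative assumptions (a typical 18650 capacity, plausible sleep and wake currents), not measurements from any particular build:

```python
# Back-of-the-envelope battery budget for a once-a-day Wi-Fi e-ink frame.
# All figures are assumed, illustrative values, not measured ones.

cell_mah = 3000          # typical 18650 capacity
sleep_ua = 250           # deep-sleep draw incl. regulator losses (assumed)
active_ma = 150          # average draw while awake: Wi-Fi + refresh (assumed)
active_s_per_day = 120   # one wake/refresh cycle per day (assumed)

# Duty-cycle the wake current into an average draw in mA.
avg_ma = sleep_ua / 1000 + active_ma * active_s_per_day / 86400
hours = cell_mah / avg_ma
months = hours / (24 * 30)
print(f"average draw ~{avg_ma:.3f} mA -> ~{months:.1f} months")
```

With these inputs the sleep floor and the daily wake burst contribute comparably, which is why the builds above obsess over regulators, RTC wake sources, and power-gating: shaving the sleep current is worth as much as shortening the refresh.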

Pricing, yields, and scalability

  • Multiple comments note that panel cost scales worse than linearly with area; large color e‑ink (30–32") is ~$1,700+, and 75" is expected to be “more than a used car.”
  • The idea of a 75" 4K e‑ink wall display for $150–$200 is treated as wildly unrealistic under current manufacturing yields.

Use cases: signage vs. advertising

  • Many see the tech as ideal for low‑power, outdoor information displays: bus stops, timetables, directories, maps, food truck menus, meeting room boards, solar‑powered public info panels (helped by the wide temperature range).
  • For advertising, supporters highlight:
    • Zero power while static, enabling battery/solar operation.
    • Timed and remotely updatable ads, avoiding labor and vehicle downtime for vinyl swaps.
  • Skeptics question:
    • Whether total lifecycle cost beats printed posters.
    • Whether advertisers will accept washed‑out colors, low contrast, slow/ugly refresh, and lack of video compared to LCDs.

Technology characteristics and limitations

  • Strengths mentioned: matte, non‑emissive look; visibility in bright light; static image with no power; wide temperature range.
  • Weaknesses: limited color gamut/bit‑depth; slow full‑color refresh (seconds), especially on Spectra/Gallery; reduced contrast for color layers; need for external lighting at night.

Patents, competition, and market dynamics

  • One line of discussion blames E Ink’s patent control for high prices and slow ecosystem evolution.
  • Others strongly dispute this, citing multiple alternative reflective technologies that failed commercially and arguing that physics, limited consumer demand, and preference for bright LCD/AMOLED are the main bottlenecks.

Advertising, visual pollution, and bans

  • Several commenters dislike ads in any form, not just bright LCDs, and point to cities experimenting with billboard bans and “visual pollution” rules.
  • Some argue for outright removal of public ads; others are resigned but would at least prefer non‑emissive, mostly static e‑ink displays.
  • There’s concern that low‑power e‑ink could lead to pervasive, always‑on programmatic ads on every surface.

Marketing imagery and trust

  • The lead promo image is widely called out as AI‑generated and visually inconsistent, which undermines trust and makes some question whether the depicted product and contrast levels are realistic.
  • Later real‑world expo photos are seen as more credible representations of current capabilities.

Intel's Battlemage Architecture

Architecture, Power, and Efficiency

  • Commenters note Intel’s performance per mm² lags AMD/Nvidia, but Battlemage’s power consumption appears well controlled, implying a trade‑off toward larger, cheaper-to-design dies rather than density.
  • Several replies explain that:
    • Power is dominated by charging/discharging transistor gates; a bigger die doesn’t automatically mean thicker wires or proportionally higher power.
    • Wires don’t scale as well as transistors; interconnect and routing complexity are major constraints, and density can increase hotspots.
    • Clock speed and voltage are tightly coupled: pushing clocks requires higher voltage, causing more than linear (often approximated as quadratic or worse) power increases for modest performance gains.
    • GPUs often get better perf/W by using more area at lower clocks instead of fewer units at high clocks.
  • One person argues performance/mm² is a poor cross-vendor metric; performance/watt and performance/cost are what really matter.
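The scaling argument in the bullets above rests on the standard CMOS dynamic-power relation, P ≈ C·V²·f (C being switched capacitance). A worked example with illustrative numbers shows why “wide and slow” wins:

```python
# Dynamic power scales roughly as P ~ C * V^2 * f.  If pushing the clock
# 20% higher also requires ~10% more voltage, power rises ~45% for a
# ~20% performance gain.  (Scaling factors are illustrative, not
# measurements of any real chip.)

def relative_power(f_scale, v_scale):
    """Power relative to baseline for given clock and voltage scaling."""
    return f_scale * v_scale ** 2

high_clock = relative_power(1.20, 1.10)        # same die, pushed hard
wide_slow = 1.20 * relative_power(1.00, 1.00)  # 20% more units at stock V/f

print(f"overclocked: {high_clock:.2f}x power, wide-and-slow: {wide_slow:.2f}x")
```

Both routes buy roughly the same 20% throughput, but the overclocked die pays ~1.45× power against ~1.2× for the wider one — the perf/W case for spending area instead of clocks.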

VRAM, Memory, and Product Segmentation

  • Thread contrasts current midrange cards: 8 GB (RTX 4060/RX 7600), 12 GB (B580), 16 GB (RX 7600 XT), and observes Nvidia’s slow VRAM growth since GTX 1060.
  • Multiple posts discuss why we don’t see cheap 256 GB consumer GPUs:
    • Bus width and GDDR chip capacities (e.g., 16 Gbit, emerging 24 Gbit) limit maximum VRAM.
    • Clamshell designs double capacity but complicate cooling and power (VRAM modules draw notable watts each).
    • HBM could offer more capacity but is extremely expensive, packaging-intensive, and supply-constrained by data-center demand.
    • Vendors also avoid cannibalizing lucrative enterprise SKUs; some see this as deliberate segmentation or “cartel‑like” behavior.
  • People mention Chinese aftermarket VRAM‑upgraded cards and modded consumer GPUs, but note rarity and software challenges.
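The bus-width constraint in the bullets above is simple arithmetic: each GDDR chip occupies a 32-bit slice of the memory bus, so capacity is (bus width ÷ 32) × chip size, doubled in clamshell mode where two chips share a channel. A quick sketch:

```python
# Why VRAM comes in the sizes it does: each GDDR chip sits on a 32-bit
# slice of the bus, so capacity = (bus_bits / 32) chips * chip size,
# doubled in clamshell mode (two chips per 32-bit channel).

def vram_gb(bus_bits, chip_gbit, clamshell=False):
    chips = bus_bits // 32 * (2 if clamshell else 1)
    return chips * chip_gbit / 8  # gigabits per chip -> gigabytes total

print(vram_gb(128, 16))        # 128-bit bus, 16 Gbit chips -> 8 GB (4060-class)
print(vram_gb(192, 16))        # 192-bit bus -> 12 GB (B580-class)
print(vram_gb(256, 16, True))  # 256-bit clamshell -> 32 GB
```

Hitting 256 GB on a consumer card would thus need some combination of an enormous bus, far denser chips than the 16/24 Gbit parts mentioned, or HBM — which is exactly the set of constraints the thread walks through.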

Pricing, Value, and Availability

  • B580 is praised at its stated MSRP, undercutting competitors with more VRAM, but several note it often sells well above MSRP or is hard to find, weakening its value story.
  • Comparisons highlight poor value of many xx60/xx60 Ti Nvidia cards, especially for VRAM-heavy workloads.
  • European pricing examples show regional variation; some local stores sell at MSRP while large online retailers show scalped prices.

Linux and Driver Experience

  • Mixed but generally improving picture:
    • Some report Intel as the least-bad Linux GPU vendor with strong upstream contributions and near launch-day support on bleeding-edge kernels.
    • Others describe historically rough Intel dGPU drivers and teething issues with newer driver stacks (e.g., transitions between i915/xe and various VAAPI/Media drivers).
  • Several users share positive experiences with Alchemist and Battlemage on Linux for:
    • Gaming (via Proton),
    • Video encoding/transcoding (including AV1),
    • General desktop and 3D workloads.
  • Pain points: needing new kernels/mesa on non-rolling distros, fan control/firmware issues on some boards, early boot output quirks, and confusion over which media/VA drivers to use on older generations.

AI / Compute and VRAM-Hungry Workloads

  • Multiple commenters are primarily interested in Battlemage for compute (LLMs, ML training, video transcoding) rather than gaming FPS.
  • PyTorch now supports Intel “xpu”; people report:
    • Arc being attractive because VRAM, not raw FLOPs, is often the bottleneck for hobbyist ML.
    • B580 and possible 24 GB “Arc Pro” variants as appealing low-cost options for AI-curious users and small training/inference setups.
  • There is demand for cards that “just double the memory,” even at double the price, especially for home LLM inference; some prefer this over multi-GPU complexity.
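The “xpu” backend mentioned above is selected like any other PyTorch device; a minimal sketch with a CPU fallback (and a guard for when PyTorch itself, or an older version without the `torch.xpu` module, is installed):

```python
# Minimal device-selection sketch for Intel Arc via PyTorch's "xpu"
# backend (present in recent PyTorch releases with Intel GPU support).
# Falls back to CPU when the backend -- or PyTorch itself -- is missing.
try:
    import torch
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        device = "xpu"
    else:
        device = "cpu"
    x = torch.ones(2, 2, device=device)
    print(f"running on {device}: sum = {x.sum().item()}")
except ImportError:
    device = "cpu"
    print("PyTorch not installed; nothing to run")
```

For hobbyist LLM work the appeal in the thread is less about this API and more about the arithmetic: once the model fits in VRAM at all, a cheap 12–24 GB Arc card is usable even if its raw FLOPs trail Nvidia’s.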

Naming, Presentation, and Article Format

  • “Battlemage” sparks a side thread:
    • Intel’s Arc generations follow fantasy-class names in alphabetical order (Alchemist, Battlemage, Celestial, Druid), seen as dorky but more memorable and coherent than many past codenames.
    • Many feel such theming fits the gamer aesthetic and is no stranger than Nvidia’s scientist-themed names.
  • Readers criticize the article’s charts as blurry; the author blames platform image handling (Substack/WordPress compression), jokingly likening it to temporal anti-aliasing.

Virtualization and Homelab Use

  • Some homelab enthusiasts are disappointed that Battlemage appears to move away from SR‑IOV and GPU partitioning that could be coaxed out of Alchemist, reducing suitability for virtualized multi-tenant setups.

Industry and Strategy Concerns

  • A few comments worry Intel’s financial pressures could cause it to abandon discrete GPUs before they reach competitive midrange/high-end performance, despite the value they bring at the low/mid tiers.
  • There is brief discussion of Nvidia’s large patent portfolio; one commenter argues current patent practices hinder Western competition while being largely ignored in China.

I tasted Honda’s spicy rodent-repelling tape and I will do it again (2021)

Reactions to the article and style

  • Many readers loved the piece, calling it “timeless,” “Dave Barry–level,” “Sedaris-level,” and one of the funniest things they’ve read in years.
  • Others found it unnecessarily verbose and engagement-driven, preferring a straight factual explanation.
  • Several praised its structure (“every line makes you want to read the next”) and subscribed to the newsletter; a minority questioned why non‑fiction “needs” to be entertaining.
  • A side thread debated whether such writing could be AI-generated; consensus leaned toward “too distinctive and funny” for current LLM output, especially given its 2021 date.

Human curiosity and tasting deterrents

  • Many admitted to tasting Nintendo Switch cartridges or canned-air bitterants out of curiosity; descriptions converged on “extremely bitter” and lingering.
  • Coin-cell batteries and some cables were reported to have similar bitter coatings.
  • One commenter joked that an LLM would never predict the article’s title, reflecting how odd yet compelling the premise is.

Rodent behavior and biology

  • Commenters clarified that rodents don’t just “like” gnawing; they must do it to keep ever-growing teeth in check.
  • Analogies were drawn to other animals with problematic overgrowing tusks/teeth, and pet-rat owners described needing regular tooth trimming.
  • Multiple stories described rats and mice chewing car wiring, conduits, and even lead sheathing, causing expensive or dangerous failures.

Capsaicin, evolution, and repellents

  • Long subthread on capsaicin: birds lack the relevant receptor, so peppers evolved to deter mammals (seed grinders) but not birds (seed dispersers).
  • People use capsaicin-treated birdseed and sprays to deter squirrels and rodents, with mixed long-term success; some squirrels and deer seem to adapt.
  • Others cautioned about indoor use (irritation) and noted capsaicin is fat‑soluble, so dairy/oil help more than water.

Does the tape work? What else can you do?

  • Some wondered if licking underestimates how spicy it is when chewed (as rodents would).
  • Others cited videos and personal experience suggesting rodents sometimes ignore capsaicin products, raising questions about the tape’s price–performance ratio.
  • Alternatives mentioned: peppermint oil, mint/catnip, mechanical sealing, trapping/poison, cats, foxes/coyotes, and municipal “go to the source” campaigns against rat habitats.

Social media as customer support

  • Big side discussion: why the author tweeted Honda.
  • Many recounted situations where complaints on Twitter (or similar) got faster, higher‑level responses than phone or email, especially with telcos, banks, airlines, and ISPs.
  • Others argued this is degrading overall support quality and increasingly unreliable as platforms change.

Safety, glamorization, and contaminants

  • Some worried the article might encourage copycat “lol I ate the thing” stunts, especially among kids.
  • Others noted the article’s disclaimers and treated it as humor rather than a how‑to.
  • A few expressed concern about undisclosed contaminants or supply‑chain substitutions in non-food products, even if MSDSs look benign.

Cars, wiring, and soy‑based plastics

  • Numerous anecdotes of rodent-chewed wiring in cars, Jaguars/Land Rovers, Subarus, and equipment, sometimes costing thousands.
  • Debate over whether newer bio‑based or soy‑derived insulation is more attractive to rodents: some cited lawsuits and mechanics’ anecdotes; others argued it’s mostly myth and that modern bioplastics are chemically similar to petro-based plastics.
  • General agreement that “warm engine bays + things to gnaw” are the core attractor; tape only addresses the gnawing, not the shelter.

Warnings and regulation tangents

  • The Honda PR email line about “everything causing cancer” triggered discussion of California Proposition 65.
  • Many criticized Prop 65 for over-warning (“this building may contain chemicals”), desensitizing people; others noted evidence it pushed manufacturers to reformulate products nationwide.

Miscellaneous side threads

  • Long, playful digressions on: property rights vs. shortcuts through courtyards; freedom to roam; firearms and trespassing; birds hitting windows; evolutionary botany (nixtamalization, seed coatings); poisonous plants and foraging anecdotes; and the referenced poem (“This Is Just To Say”).
  • Multiple commenters said the piece exemplifies what they enjoy about Hacker News: deeply unnecessary yet meticulous curiosity pursued for its own sake.

Backblaze Drive Stats for 2024

Drive Brand Reliability & HGST/WD/Seagate

  • HGST was absorbed into WD; its Ultrastar line lives on, but Deskstar is gone.
  • Some readers infer WD (especially large 16–22 TB models) now looks “best,” but others argue modern HDD vendors are broadly comparable, especially for enterprise‑tier drives.
  • Seagate is widely perceived as weaker, with repeated references to problematic models (e.g., earlier 3 TB lines, IronWolf failures). Others report long-lived Seagates, especially enterprise Exos.
  • Several note that HGST historically performs very well in Backblaze data, but individual users still report the opposite in their own small fleets.

Usefulness & Limits of Backblaze Drive Stats

  • Many use these stats to choose drives for NAS/home servers and credit them with peace of mind.
  • Others stress the data mainly shows relative patterns, not guarantees, and often applies to models that are no longer current when the report appears.
  • Multiple commenters emphasize batch effects: failures often cluster in drives with adjacent serials or same procurement batch; this limits how predictive global AFR numbers are.
  • Some conclude the right lesson is to “plan for failure” (RAID + backups) rather than chase small AFR differences.

RAID, Backups & Home Storage Practices

  • Strong recurring theme: RAID is for availability, not backup. Offsite or cloud backup remains essential.
  • Debate over RAID levels: many advocate mirrored setups (RAID1/10, ZFS mirrors) over RAID5/6 for large drives; concern about rebuild-time and read‑error risks.
  • Best practices suggested: mix brands/batches, buy from multiple vendors, avoid identical serial sequences, and rely on scrubbing (e.g., ZFS) to catch bitrot.
  • Several run single large disks plus cloud backup, accepting downtime risk in exchange for simplicity and cost.
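
The read-error worry about large-drive rebuilds can be put in numbers. Consumer drives commonly quote an unrecoverable read error (URE) rate of 1 in 10^14 bits, and a RAID5 rebuild must read every surviving bit cleanly. A back-of-envelope sketch, assuming independent per-bit errors (a known oversimplification: real errors cluster, and enterprise drives often quote 10^-15):

```python
import math

def p_rebuild_hits_ure(array_read_bytes: float, ure_per_bit: float = 1e-14) -> float:
    """Probability that at least one URE occurs while reading the whole array.

    Models UREs as independent per-bit events; uses exp/log1p for
    numerical stability instead of (1 - p) ** bits directly.
    """
    bits = array_read_bytes * 8
    p_clean = math.exp(bits * math.log1p(-ure_per_bit))  # P(no error at all)
    return 1.0 - p_clean

# Rebuilding a 4 x 12 TB RAID5: read the 3 surviving 12 TB drives in full.
p = p_rebuild_hits_ure(3 * 12e12)
print(f"P(at least one URE during rebuild) ~ {p:.0%}")
```

Under these (pessimistic) assumptions the probability lands above 90%, which is why commenters favor mirrors, RAID6, or ZFS scrubbing over plain RAID5 at today's capacities.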

HDD vs SSD, Noise & Heat

  • Some are shifting bulk storage to TLC/QLC SSDs as prices drop; others argue HDDs still win for heavy-write workloads.
  • One thread discusses environmental cost of SSD manufacturing versus HDD plus lifetime power use; conclusions are unclear.
  • Heat is repeatedly cited as a major enemy of drive longevity; poor chassis cooling and cramped NVR/NAS boxes are blamed for failures.
  • A few wish for “underspinning” HDDs to reduce noise, but others doubt modern mechanics allow wide RPM ranges.

Backblaze’s Role, Content Marketing & Scale

  • The reports are praised as high‑quality “content marketing” that genuinely benefits the community; readers lament that major clouds don’t share similar stats.
  • Some note Backblaze now operates around 4.4 exabytes of raw storage, prompting side discussions about 64‑bit limits and very high‑density chassis.
  • A separate thread questions Backblaze as an investment: respect for the product, but concern over profitability and competition with hyperscalers.
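
On the 64-bit side thread: a 64-bit byte address covers 2^64 bytes, roughly 18.45 EB, so 4.4 EB of raw storage still fits inside a single flat address space, with only about 4x headroom. A quick check (the 4.4 EB figure is the one cited in the discussion):

```python
raw_storage_eb = 4.4                      # exabytes (10^18 bytes), per the thread
addr_space_bytes = 2 ** 64                # limit of a flat 64-bit byte address
addr_space_eb = addr_space_bytes / 1e18   # ~ 18.45 EB

print(f"64-bit address space: {addr_space_eb:.2f} EB")
print(f"Headroom: {addr_space_eb / raw_storage_eb:.1f}x current raw storage")
```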

Boring tech is mature, not old

Postgres and “boring” upgrades

  • One thread anchor is upgrading Postgres (v12→16) due to EOL: people like its predictability but say major upgrades still require careful planning.
  • Pain points mentioned: pg_upgrade wiping statistics (requiring ANALYZE and more downtime), planner behavior changes affecting query performance, and headaches when DBs live inside containers.
  • Advice: avoid exotic extensions, consider reindexing after some version jumps, benchmark performance before/after. SQLite is floated as a simpler, more “boring” alternative for some use cases.

What “boring” / mature tech means

  • Common definition: stable behavior, slow‑changing APIs, few surprises, lots of operational experience, good docs and community knowledge.
  • Boring tech lets teams focus on product and user problems instead of yak‑shaving infrastructure. Examples: Postgres, Java, .NET, C/C++, PHP/MySQL, Django, Rails, Go, Debian, shell tools, git/ssh.
  • Some note cryptography and infra (build systems, test infra) should be boring: predictable and invisible when working.
  • Others push back that “boring” is often just a label for “my preference” or “what makes me money,” and can be used to shut down trade‑off discussions.

Mature vs old vs dead projects

  • Distinction drawn between:
    • Mature: small change rate, responsive to issues, few surprises.
    • Old but unstable: upgrades break things, poor defaults, fragile abstractions.
    • Dead: no maintainer engagement, broken sites, missing downloads.
  • GitHub “last commit” is considered a poor standalone signal; better signals are: issue/PR responsiveness, security fixes, user reports, quality of open/closed issues.
  • Some wish software could “finish” and stop changing, but note that API churn, dependency incompatibilities, and security vulnerabilities force ongoing work.

Dependency, LTS, and ecosystem churn

  • Frustration with short “LTS” windows (.NET’s 36 months, similar patterns elsewhere) making stacks less boring than their branding implies.
  • Python’s dependency conflicts, native libs, and numeric stack are cited as especially fragile across machines. npm is seen as chaotic; some say Go and Rust are only boring if you pin/vendor carefully.
  • Others argue that languages whose effective package manager is “GitHub + curl | sh” are not mature in practice.

Career and organizational trade‑offs

  • Many praise boring tech for reliability and business value; infra choices rarely matter to customers compared to features and UX.
  • Counter‑argument: specializing only in legacy stacks (COBOL, etc.) can pigeonhole careers into low‑demand, lower‑paid niches. Balance stable tools in production with exploring newer tech on the side.
  • Several warn against resume‑driven greenfield rewrites with immature stacks; they often create long‑term wreckage. Incremental refactors and in‑place rewrites are favored when possible.

Old vs new and hype cycles

  • Repeated anecdotes of Kubernetes, complex microservices, and NoSQL deployments later replaced by a few VMs and Postgres, with big gains in cost and reliability.
  • Some are tired of blanket “K8s bad, VMs good” tropes, arguing Kubernetes can be a mature, boring solution when used sanely.
  • General consensus: neither “boring” nor “shiny” is inherently better. Evaluate stability, security, tooling, and team competence; adopt new tech selectively and with clear benefits.

Firing programmers for AI is a mistake

AI as Programmer Replacement: Hype vs Current Reality

  • Many commenters say they see no real-world cases of AI fully replacing teams, only slowed hiring and normal macro-driven layoffs with “AI” used as PR cover.
  • Claims from CEOs about replacing coders with AI are treated skeptically until corroborated by rank‑and‑file staff.
  • Where AI does reduce headcount, it’s mostly “soft replacement”: not backfilling roles, cutting some contractors, or trimming low‑value clerical‑style coding work.
  • Several stress that for any non‑trivial system you “still need a programmer” to understand requirements, architecture, deployment, and debugging.

What AI Is Actually Good At Today

  • Strong consensus: LLMs are excellent for boilerplate, forms, layout tweaks, scripts, tests, unfamiliar APIs, quick prototypes, and learning new stacks.
  • They are much weaker at:
    • Significant changes in large, messy, mature codebases.
    • Cross‑cutting changes across many services.
    • Non‑obvious systems tradeoffs (performance, reliability, security, data flow).
  • People using tools like Cursor, Windsurf, Copilot, Claude/GPT report net productivity gains, but not a clean 2× in real production work.

Risks: Quality, Tech Debt, and Safety

  • Repeated warning: AI‑generated code can be “plausible slop” – compiles, passes happy paths, but hides subtle bugs, security issues, and long‑term maintainability problems.
  • Tech‑diligence and “post‑mortem” practitioners say tech debt already silently cripples companies; AI‑accelerated slop could create many “dead by year five” products.
  • Safety‑critical domains (aviation, medicine, payments, infra) are seen as especially risky if managers chase short‑term savings.

Juniors, Pipeline, and Skills

  • Widespread concern: companies were already under‑investing in juniors; AI gives them another excuse.
  • If juniors are replaced by seniors+AI, the pipeline of future senior engineers collapses, mirroring the “COBOL crisis” pattern.
  • Others counter that new developers will be “AI‑native” and can learn faster, if they’re forced to understand and debug, not just paste prompts.

Management, Economics, and Hype Cycles

  • Many compare “fire devs for AI” to past fads: offshoring, no‑code/low‑code, Metaverse, etc.—short‑term cost‑cutting that later proved brittle.
  • Key point: AI is largely a productivity multiplier, not a proprietary moat; competitors get the same tools, so pure cost‑cutting offers little strategic advantage.
  • Some argue the real driver is higher rates and market consolidation, with AI serving as a convenient narrative.

Longer‑Term Speculation

  • Views diverge:
    • Optimists see AI enabling many more small products and one‑person companies.
    • Pessimists foresee widespread replacement of “clerical programmers,” enshittified software, and mass deskilling.
    • A minority discuss true AGI/ASI as a qualitatively different, civilizational event, but most treat that as too speculative for hiring decisions today.

Sid Meier's Civilization VII

Overall reception & value

  • Mixed sentiment: some players find Civ VII fun, stable, and the best mechanics since at least V/VI; others see it as emblematic of “everything wrong with modern gaming.”
  • Many balk at the €70–130 pricing, especially since the “early access” branding bought only 5 days of early play and the game feels unfinished.
  • A common strategy: skip VII for now and buy VI (or earlier) complete editions on deep discount; several note VI is currently very cheap or free via bundles/Netflix.

Gameplay & mechanics

  • New age system: some are excited it livens up the mid-game; others dislike that wars or unit progress effectively reset at age transitions, feeling arbitrary and immersion-breaking.
  • Diplomacy and city-states are reported as very strong (possibly overpowered vs AI), good for solo play.
  • Positive notes on new unit/commander systems, building/specialist simulation, and “souvenir” mechanics for single-player.
  • Some find the game too simple and missing “hidden depth”; map generation is criticized as unnatural and sometimes worse than VI.

UI, UX, and visuals

  • Broad agreement the UI is a major weak point: cluttered, mobile-like, grey/flat, screen-hungry, and sometimes unreadable (especially on Steam Deck).
  • Some prefer VII’s UI over VI visually, but many call it messy and unpolished, with basic layout/padding issues.
  • Art style compared unfavorably to older entries; realism in buildings can make them harder to distinguish.

Performance, AI, and technical issues

  • Experiences diverge: some report much better late-game performance than V/VI with fast turn times; others hit multi-minute loads, crashes, and even unplayable states.
  • AI remains a major complaint: still poor at warfare and reliant on bonuses at higher difficulties; at least one commenter says VII’s AI is worse than VI’s.

Monetization, DLC, DRM, and reviews

  • Strong resentment of day-one content packs, “founder’s edition” upsells, and the perception that eras/content (e.g., Atomic Age) were carved out for later DLC.
  • Denuvo and constrained modding are disliked; some call the game a “storefront disguised as a game.”
  • Noted gap between critic scores (~80) and mixed Steam user reviews triggers accusations of critic–publisher coziness; others say this pattern is normal for new Civ releases.
  • Several recommend waiting for patches and expansions, citing the historical arc: V and VI were weak at launch but improved dramatically.

Platform & VR discussions

  • macOS version runs on Metal; there’s curiosity whether this implies better optimization than past Mac ports.
  • Debate over the VR version: some see Civ as a natural fit for “god game” VR; others think it’s a niche distraction and a misallocation of development effort, especially if Quest-only.

Franchise comparisons & alternatives

  • Many still regard Civ IV (sometimes with mods) as the peak; V with Vox Populi is also cited as “peak Civ.”
  • Ongoing 1-unit-per-tile vs stacks-of-doom debate: 1UPT praised for tactical depth and epic battlefields; criticized for tedium and weak AI.
  • Several say overall Civ quality has declined since IV; others argue accessibility and refinement have improved.
  • Alternatives frequently mentioned: Old World, Paradox titles (EU4, CK2), Humankind, Endless series, Manor Lords, and even Freeciv/Civ II for those who prefer older styles.

Buying timing & player behavior

  • Strong norm: only buy a Civ game when its successor launches, after all DLC, balancing, and discounts.
  • Some long-time fans still pre-ordered or bought the founder’s edition and are cautiously optimistic; others cite this release as the point they stop being loyal day-one buyers.

How about trailing commas in SQL?

Motivations for trailing commas in SQL

  • Main benefit is easier manual editing of hand-written SQL:
    • Add/remove/reorder list items (SELECT columns, table columns) without juggling commas.
    • Cleaner diffs and git blame (only the changed line, not the previous one to add/remove a comma).
    • Easier to comment out individual lines during interactive exploration and debugging.
  • Consistency with many programming languages (JS, Python, etc.) and some SQL dialects (BigQuery, Snowflake, DuckDB, ClickHouse) that already allow trailing commas.
  • Especially desired for SELECT and CREATE TABLE; some argue adding them there would cover “99% of the pain”.

Arguments against / skepticism

  • Some find trailing commas visually suggest a missing element or bug; they prefer the syntax error as a guardrail.
  • Concern about “Robustness principle”–style leniency: allowing more sloppy input might hide mistakes or encourage unsafe string-built SQL (vs prepared statements).
  • Others see the change as trivial vs the cost/complexity of touching a very old, messy standard and a huge ecosystem; consider this bikeshedding.
  • A subset simply dislike the aesthetics or feel keystroke savings are negligible compared to time spent thinking about logic.

Existing workarounds and styles

  • Leading-comma style in SELECT / lists:
    • Makes all but the first item uniform and visually aligns commas, which some say reduces errors.
    • Critics say it just moves the “special” case and looks non-idiomatic.
  • Tricks to simplify WHERE clauses:
    • Start with WHERE true or WHERE 1=1 so every condition uniformly begins with AND; for OR chains (or as a no‑op guard while drafting a DELETE), seed with WHERE 1=0 instead.
  • Other hacks: dummy “pad” column, appending a constant at the end, or relying on ORMs / join()-style helpers to construct lists.
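
The leading-comma and WHERE 1=1 styles can be demonstrated end to end with SQLite, which, like most engines, still rejects trailing commas (the table and data here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, active INTEGER)")
conn.execute("INSERT INTO users VALUES (1, 'ada', 1), (2, 'bob', 0)")

# Leading-comma style: every line after the first is uniform, so
# commenting out a column or a condition never leaves a dangling comma.
query = """
    SELECT id
         , name
    FROM users
    WHERE 1=1                -- dummy predicate so every real
      AND active = 1         -- condition begins uniformly with AND
"""
print(conn.execute(query).fetchall())  # -> [(1, 'ada')]

# A trailing comma, by contrast, is a syntax error in SQLite:
try:
    conn.execute("SELECT id, name, FROM users")
except sqlite3.OperationalError as e:
    print("rejected:", e)
```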

Grammar, standards, and partial solutions

  • SQL grammar is already convoluted; adding trailing commas everywhere might introduce ambiguities, especially where keywords can also be identifiers.
  • Some propose:
    • Allowing trailing commas only where they’re unambiguously illegal today (e.g., selected lists, column definitions).
    • Treating commas more like optional separators or even “whitespace” (Clojure-style), though this collides with existing aliasing syntax without mandatory AS.
  • Debate over “backwards compatibility”:
    • Engines can add support without breaking old queries.
    • But new queries using trailing commas won’t run on old engines.

Broader perspectives

  • Some see this as just one more quality-of-life improvement among many small papercuts; others say energy should go into better editors, diff/merge tools, or alternative query languages (PRQL, BigQuery pipe syntax, LISP-like or newline-delimited syntaxes).
  • Several commenters conclude it’s ultimately a style/preference issue: if unambiguous, many would like trailing commas allowed; others will still choose alternative formatting conventions.

TSMC 2nm Process Disclosure – How Does It Measure Up?

Ambiguous PPA Claims and Questionable Graphs

  • Several commenters find the paper “frustratingly marketing‑like”: mostly relative numbers, minimal hard data, and graphs that look like commercials.
  • A key TSMC scaling graph is called out as spurious: reverse‑engineering the bars shows ~55% improvement from N3 to N2, whereas public statements suggest ~30%; this mismatch may explain why the graph was apparently removed.
  • Some see this as part of a broader trend: IEDM papers from TSMC being more marketing than technical, with missing pitches, SRAM cell sizes, and absolute numbers.

Node Naming and Diminishing Returns

  • The “2nm” label is widely treated as pure marketing; commenters note that the numerical naming convention (dividing by √2 each generation) breaks down below 2nm.
  • There’s frustration that “3nm to 2nm” suggests a big geometric shrink but real gains (30% power / 15% perf / modest density) are incremental.
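
The √2 convention follows from area scaling: shrinking every linear dimension by 1/√2 halves transistor area, so each "full node" name divides by √2. A toy sketch of the resulting sequence (pure arithmetic; marketed names are rounded and stopped tracking real geometry long ago):

```python
import math

node = 14.0  # a generation where the name still loosely tracked geometry
names = []
for _ in range(6):
    names.append(round(node, 1))
    node /= math.sqrt(2)  # halve transistor area => shrink lengths by 1/sqrt(2)

print(names)  # -> [14.0, 9.9, 7.0, 4.9, 3.5, 2.5], marketed as 14/10/7/5/3/2
```

Continuing the sequence past 2 gives 1.7, 1.2, ..., which is where vendors switch to angstrom-style names (Intel 18A, TSMC A16) and the convention visibly breaks down.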

Intel 18A vs TSMC N2

  • Intel’s 18A is viewed as a “2nm‑class” competitor emphasizing performance over density, continuing Intel’s historical bias.
  • Some recall Intel’s past schedule slips (10nm) and are skeptical of 2025–2026 HVM timelines, though others repeat the official roadmap.
  • Debate exists on whether 18A is “literally” TSMC 3nm plus backside power; others argue process differences are substantial.

Value of 2nm: Who Needs It?

  • For hyperscale datacenters, power and cooling dominate TCO; modest efficiency gains can justify very high chip prices, while inefficient parts become unsellable.
  • Apple is cited as a customer that pre‑funds new nodes and insists on leading edge, puzzling some but praised by users for battery life and performance.
  • Commenters stress you need either huge volumes or Nvidia‑like margins to justify 2nm NRE.

Power vs Performance Tradeoffs

  • Some argue post‑Dennard, power efficiency is “king” because it ultimately gates usable performance.
  • Others push back: foundries still offer both high‑performance and high‑density cell libraries; products mix them differently, showing the performance/power tradeoff still matters.

Design Cost and RISC‑V

  • One comment claims 3nm design NRE exceeds $500M; another counters with an analysis suggesting $50–75M instead, highlighting disagreement.
  • Multiple participants argue RISC‑V doesn’t “need” 2nm; current cores are far behind ARM/x86 in microarchitectural sophistication, not limited by node.
  • Discussion centers on how performance mostly comes from predictors, caches, pipelines, etc., which are expensive, IP‑heavy, and tied to specific vendors.
  • Ideas surface around “GPL‑for‑hardware” style licenses and open, high‑performance core releases to bootstrap an open hardware ecosystem.

Edge Devices and Node vs Design

  • A side thread explores Raspberry Pi and Jetson power use for wildlife/object‑detection workloads.
  • Current Pi (16nm) is seen as far from needing 2nm; commenters note manufacturing node isn’t everything—SoC design and software optimization often matter more for power.
  • Jetson examples (20nm Nano, newer 4/8nm Orin) illustrate that big efficiency differences can arise on relatively similar or older nodes.

Nvidia Blackwell on 4N, Not 3N

  • People ask why Blackwell stayed on 4N when 3nm is “in full production.”
  • Suggested reasons:
    • Very large dies (~750 mm²) are far more yield‑sensitive; cutting‑edge nodes are optimized first for small mobile SoCs.
    • Critical IP (SerDes, HBM PHY, special SRAM/CAM) often lags on new nodes, as early adopters don’t need it.
    • Using 4N allows higher yields and margins, with performance gains driven more by architecture and higher TDP than by lithography.
  • There is some back‑and‑forth about how mature N3 really is for larger non‑mobile parts and the extent to which Apple’s big SoCs show yields are acceptable.

Slowing Scaling and Expectations

  • Some see the weaker node‑to‑node gains and marketing spin as evidence the “era of easy VLSI scaling is over,” with society still demanding exponential progress.
  • Others emphasize that scaling continues, just at higher cost and smaller incremental PPA steps; ASML’s CEO is cited as still seeing a multi‑node roadmap, but on a slower cadence.

Why Fabs in Arizona?

  • Explanations include:
    • Historical semiconductor presence (Motorola, Intel, others) and existing talent/supply chains.
    • Stable geology and climate, plus designated industrial water from local projects.
    • Favorable tax policy and politics, especially in the context of U.S. industrial policy.
  • Some question Arizona’s water suitability, noting fabs use huge amounts of water and recycling is energy‑intensive and partial rather than total.

Jeep Introduces Pop-Up Ads That Appear Every Time You Stop

Technical control, jailbreaking, and safety pretexts

  • Commenters expect manufacturers to hide behind “safety” to block jailbreaking of infotainment systems, even when those systems are already poorly secured.
  • Example raised: other brands’ systems have been rooted due to weak firmware signing; people worry Jeep’s online control of ads implies deep remote access.
  • Prior remote Jeep hacking is cited as a reason to distrust any “always-on” connection between cloud services and in‑car systems.

Jeep/Stellantis reputation and business strategy

  • Many see this as yet another reason to avoid Jeep, citing long‑standing reliability and quality complaints and calling the move “enshittification.”
  • There’s debate over Stellantis’ financial health: one view says they’re near bankruptcy and desperate for revenue; another counters with recent cash and profit figures.
  • Some frame Jeep as mismanaged within a larger conglomerate that chases high-margin SUVs and gimmicky revenue schemes instead of products customers actually want.

Ads, subscriptions, and the recurring‑revenue mindset

  • The Jeep pop‑up ads are compared to BMW’s paid heated seats, Roku/streaming ads, and subscription‑locked hardware; people see the same pattern of post‑sale monetization.
  • Several argue that carmakers no longer treat the sale as the end of the transaction but as the start of a continuing revenue stream, eroding ownership and autonomy.
  • Some note that widespread stock ownership and executive incentives help drive this short‑term, revenue‑at-all-costs behavior.

Safety, legality, and distraction risk

  • Even if ads only appear when stopped, commenters argue they nudge drivers into interacting with the screen after the car starts moving again, increasing distraction.
  • Fears include software bugs making ads appear while driving, impaired access to navigation at critical moments, and potential conflicts with phone‑use laws in some countries.
  • A few doubt regulators will intervene in the near term, especially in the US.

Opt‑out mechanisms, connectivity, and user control

  • Disabling the ads reportedly requires creating an online account, accepting terms, and remotely changing settings—seen as coerced consent.
  • People worry that remote settings can be silently reset, and that built‑in SIMs and telematics are hard to disable without hacks and side effects.
  • Broader concern: buyers “own” the car but not full control over its software, features, or data.

Consumer backlash, segmentation, and future choices

  • Many vow never to buy a Jeep (or any ad‑supported car), even if ads are later removed; some advocate sticking to older “dumb” cars with physical controls.
  • Others suggest Jeep may be targeting a segment less sensitive to such intrusions, while enthusiasts and privacy‑minded buyers will migrate elsewhere.
  • There are calls for tools that catalog anti‑user practices across models and years to guide purchasing decisions.

Meta’s Hyperscale Infrastructure: Overview and Insights

Serverless, PHP, and Architecture Terminology

  • Debate over calling Meta’s PHP/Hack web tier “serverless”:
    • Some argue this stretches the term; it’s really a monolithic service with many endpoints, not FaaS in the AWS Lambda sense.
    • Others say “serverless” is a compute model (stateless, no persistent process or OS access for app code), and PHP/CGI shared hosting essentially fit that model historically.
  • Distinction between FaaS and PaaS is seen as blurred by marketing (e.g., calling Fargate “serverless”).
  • At Meta, infra is “serverless” mainly from an application engineer’s perspective; infra teams still deal heavily with performance, limits, and hosting.

Meta as a Public Cloud Provider

  • Some read the article as positioning Meta to launch a public cloud; others who know the infra say it’s not realistic:
    • Infrastructure is deeply entangled with internal tools, assumptions, and a single “customer” (Meta’s own apps).
    • Strong process and access coupling, custom compilation targets, and bare‑metal execution make multi‑tenant public use difficult.
  • Even if technically possible, commenters argue:
    • The market is crowded (AWS, GCP, Azure, etc.).
    • Business incentives are weak given Meta’s existing margins.
    • Significant trust and productization work would be required.

Threads Launch: Speed vs Product Value

  • Many are impressed by the claim: infra teams had two days’ notice to prepare for a launch that scaled to 100M signups in 5 days.
  • Others question whether “shipping fast” matters if the product is perceived as:
    • Lacking novelty, clear purpose, and a distinct culture.
    • Over‑dependent on Instagram funneling users and dark patterns.
  • Strong disagreement over outcomes:
    • One side calls Threads a flop or net‑negative, citing weak monetization and unclear societal benefit.
    • Others note claimed 300M+ MAU / 100M DAU and position it as roughly comparable to X/Twitter in scale, with potential future revenue.
    • Skepticism remains about metrics (bots, passive/forced accounts, insularity of content).

Engineering Culture and Work Environment

  • War‑room style, high‑pressure launches are described as both exhilarating and stressful:
    • Some prefer this to slow, bureaucratic organizations dominated by planning decks and approval gates.
    • Others highlight burnout risk, fear‑driven motivation, and the intensity of operating at that scale.
  • Meta’s bootcamp and high hiring bar are cited as mitigations for risks of “anyone can edit anything” and continuous deployment.

Internal Tooling, Observability, and Deployment

  • Strong interest in Meta’s deployment system (Conveyor) and its logging/observability stack; linked papers are referenced, but no open code.
  • Meta is praised for:
    • Extensive logging and analytics that power experimentation and rapid iteration.
    • A highly effective experimentation platform seen as a major strategic advantage.
  • Some find the model of ubiquitous serverless functions + global monorepo dystopian and hard to debug; others who’ve used it say it works surprisingly well at scale.

Technical Design Choices and Generalizability

  • RPC: Questions about the absence of Thrift in the article; speculation about possible gRPC use is met with pushback that Thrift remains common and performance is comparable.
  • Networking and control planes:
    • Commenters highlight Meta’s preference for centralized controllers with decentralized data planes for networking and service mesh, viewing this as an optimal pattern at very large scale.
  • Hardware standardization:
    • Meta’s “one server type” (single CPU, unified DRAM size for non‑AI workloads) surprises some; others note industry drift that way to reduce complexity and NUMA issues.
  • Databases and “boring tech”:
    • Criticism that Meta’s infra exists to cope with self‑inflicted complexity (legacy PHP/MySQL without FK constraints, huge monolith).
    • Counter-arguments stress that hyperscale problems (global failover, routing, sharding) genuinely lack off‑the‑shelf “boring” solutions.

CDN, PoPs, and Latency Discussion

  • One thread questions whether multi‑hop CDN→PoP→DC paths are actually faster than a direct DC fetch.
  • Multiple responses explain:
    • Long‑lived, high‑bandwidth internal links, connection reuse, and congestion control make edge termination and caching faster in practice.
    • Extra hops add small latency to first byte but significantly reduce time to last byte and data‑center load.
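
The first-byte/last-byte tradeoff falls out of TCP slow start: a cold connection spends several round trips growing its congestion window, while a warm edge-to-origin link starts wide. A toy model with assumed window sizes and RTTs (illustrative numbers, not Meta's):

```python
def round_trips_to_deliver(total_segments: int, initial_cwnd: int) -> int:
    """Round trips to deliver total_segments, doubling the congestion
    window each RTT (idealized slow start, no losses)."""
    sent, cwnd, rtts = 0, initial_cwnd, 0
    while sent < total_segments:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

page = 700  # a ~1 MB response in ~1460-byte segments

# Cold direct-to-DC connection: initial window of 10 segments, 80 ms RTT.
cold_ms = round_trips_to_deliver(page, initial_cwnd=10) * 80

# Via a nearby PoP (10 ms) onto a warm, already-wide internal link (window 512).
via_pop_ms = 10 + round_trips_to_deliver(page, initial_cwnd=512) * 80

print(f"cold: ~{cold_ms} ms, via PoP: ~{via_pop_ms} ms")
```

Even though the PoP adds a hop (and a little first-byte latency), the warm internal connection finishes the transfer in far fewer long-haul round trips, which is the claimed time-to-last-byte win.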

Ethics, Impact, and Technofetishism

  • Strong cynicism: extensive, brilliant engineering is seen as serving ads, surveillance, and manipulation.
  • Calls for boycotting Meta products and not integrating with their ecosystem.
  • Others, especially non‑engineers, express genuine awe at the sheer scale and complexity as a “modern wonder,” regardless of purpose.
  • Some push back that awe here is “technofetishism” if it ignores the banal or harmful end goals compared to more aspirational uses of technology.

Nvidia's RTX 5090 power connectors are melting

Electrical Limits and Why Connectors Melt

  • Many comments drill into basics: 600 W at 12 V implies ~50 A; the C13-style “wall” connector only works because it’s at 120–240 V (much lower current).
  • Heat is driven by I²R in the cable and especially the contacts. With 12VHPWR/12V‑2x6, each of 6 power pairs should carry ~8.3 A, while typical Micro‑Fit–style contacts are rated ~9.5–10 A – almost no safety margin.
  • Tests cited in the thread show badly unbalanced current: one wire measured ~22 A and >150 °C while others carried very little, suggesting serious distribution issues.
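
The thread's arithmetic reproduces in a few lines: total current from P/V, the per-pin share across six pairs, and I²R contact heating, which scales with the square of the current and so punishes imbalance hard. The 5 mΩ contact resistance below is an assumed ballpark for illustration, not a measured value:

```python
power_w, rail_v, pairs = 600.0, 12.0, 6

total_a = power_w / rail_v    # 600 W / 12 V = 50 A total
per_pin_a = total_a / pairs   # ~8.3 A per pin, vs ~9.5-10 A contact ratings

# I^2 * R heating at an assumed 5 mOhm contact resistance (illustrative):
r_contact = 0.005
balanced_heat_w = per_pin_a ** 2 * r_contact   # ~0.35 W per contact if balanced
unbalanced_heat_w = 22.0 ** 2 * r_contact      # ~2.4 W in the measured 22 A wire

print(f"total: {total_a:.0f} A, per pin: {per_pin_a:.1f} A")
print(f"contact heating: {balanced_heat_w:.2f} W balanced vs "
      f"{unbalanced_heat_w:.2f} W at 22 A")
```

The quadratic term is the whole story: the 22 A wire dissipates roughly 7x the heat of a balanced pin in the same contact, which is how one conductor reaches >150 °C while its neighbors stay cool.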

Critique of 12VHPWR / 12V‑2x6 Design

  • Several analyses describe Nvidia’s newer cards shorting all 12 V pins together on a bus bar with a single shunt, so no per‑pin load balancing or meaningful fault detection.
  • By contrast, older multi‑8‑pin designs (and earlier xx90 cards) used multiple shunts and separate wiring paths that naturally equalized current and gave more headroom.
  • Commenters slam the spec for effectively running connectors and 16–18 AWG wiring at or above rated current, calling the design “broken by design” and extremely sensitive to minor seating or manufacturing issues.

Third‑Party Cables vs Nvidia Responsibility

  • Some blame the featured failure on an aftermarket cable and user error (using a 12VHPWR cable with a 12V‑2x6 card).
  • Others counter:
    • GPU‑side ports are intentionally backward‑compatible, so plugging in old‑standard cables is “allowed by design.”
    • All GPU power cables are “third‑party” from either the PSU’s or GPU’s perspective; the connector standard should tolerate realistic variance.
    • There are already multiple 5090 failures, including with manufacturer‑supplied cables, suggesting a systemic margin problem.

Higher Voltage Rails and Alternative Connectors

  • Strong thread arguing 12 V is the wrong choice at kilowatt scale; calls for 24 V or 48 V rails to cut current by 2–4× (and I²R losses by 4–16×), even if it requires extra DC‑DC stages on the card.
  • Others note real obstacles: ATX ecosystem inertia, regulation around “safe” extra‑low voltage, cost/complexity of 48→1 V conversion, and need for industry‑wide PSU changes.
  • Proposed alternatives: multiple independent connectors, screw‑in or ring‑lug terminals, high‑current RC/XT‑style plugs, external “brick” PSUs or even separate mains cords for GPUs.

General PC Connector Frustration and Cost Pressures

  • Many vent about internal PC connectors being hard to seat, fragile, and unergonomic compared to USB/HDMI, despite multi‑thousand‑dollar parts hanging off them.
  • Some engineers explain why good connectors are genuinely hard and expensive: precise crimps, retention, contact geometry, compactness, long life, backwards compatibility, and unit‑cost targets measured in cents.
  • Others respond that, on a $2,000+ flagship GPU with huge margins, shaving a dollar or two on power connectors is unjustifiable.

Power Consumption and Product Strategy

  • Discussion questions why consumer GPUs need 500–600 W at all, comparing this to “space heaters for games and boilerplate AI.”
  • Counterpoint: the halo tier is explicitly “as fast as physics, cost and power allow”; demand for 4K/RT/high‑FPS and AI means the market rewards absolute performance more than efficiency at the top end.
  • Some users state they are choosing AMD cards or skipping upgrades entirely over both price and power/connector concerns.

Safety, Regulation, and Liability

  • Questions raised about possible breaches of safety norms when running near or above connector/wire ratings; suggestions that CE or product‑liability regulators might eventually step in.
  • Others note UL is private and enforcement is mainly via market and insurance, not criminal law; nonetheless, repeated melting incidents are seen as a serious reputational and risk issue.

Apple software update “bug” enables Apple Intelligence

Auto‑opt‑in, dark patterns, and Apple Intelligence

  • The update behavior that re‑enables Apple Intelligence after it was disabled is seen as user‑hostile and more reminiscent of Windows/Office 365 tactics than “classic” macOS.
  • Several commenters doubt it’s an honest “bug,” noting similar patterns: Office 365 auto‑enabling AI and billing, Windows pushing Edge, Apple pushing News/Fitness and “Siri suggestions” that feel like ads.
  • Some suggest A/B testing or KPI pressure on product managers as the real driver; others expect eventual class‑action‑style lawsuits with minimal payouts.
  • One practical hack mentioned: mismatching display and Siri languages can prevent Apple Intelligence from running.

Perceived decline in Apple software quality

  • Many anecdotes of bugs and regressions: flaky Bluetooth on recent iPhones and Macs, AirDrop prompts disappearing, unreliable Apple Watch connectivity, timer UI glitches, Mail sync oddities, data loss in Notes, and confusing photo deletion rules.
  • Longtime users compare today’s state to Snow Leopard’s “no new features” refinement era, blaming annual release cycles and feature pressure for reduced quality.
  • Frustration centers on Apple’s opaque bug process: hard to know if issues are seen, prioritized, or ever fixed; consumer‑facing support often pushes full device wipes with no clear outcome.
  • Engineers are portrayed as aware but constrained by prioritization, “theme of the year,” and career incentives; “cowboy coding” and unsanctioned fixes are discouraged.

Bluetooth: spec vs implementation debate

  • Some argue Bluetooth has “always sucked” and needs a replacement; others say BLE is fine and the problem is poor vendor stacks and old chip SDKs.
  • End‑user perspective: if interoperability is consistently flaky, the distinction between bad spec and bad implementation is meaningless.
  • Mac‑specific annoyances (sleeping Macs hijacking headphones, random disconnects) reinforce the sense that Apple is “dropping the ball,” despite owning most of the stack.
  • Apple’s move to in‑house Wi‑Fi/Bluetooth chips is noted; disagreement over whether this is mainly cost‑driven or also about quality.

Pushback, alternatives, and antitrust

  • Commenters are increasingly tired of AI upsells and lock‑in across Apple, Microsoft, and Google; some call for stronger antitrust laws.
  • Suggestions to “use their software less” run into practical barriers: lack of true substitutes, bundling, switching costs, and network effects.
  • A subset has moved workloads to Linux or GrapheneOS to escape clutter, ads, and closed systems, valuing hackability and user control over polish.

Usefulness of Apple Intelligence and Siri

  • Many disable Apple Intelligence as not particularly useful; a few find notification or website summaries helpful, though others cite public criticism of summary quality.
  • Siri is widely described as outdated and unreliable; Apple Intelligence currently doesn’t power it. Some think the first LLM effort should have been a Siri overhaul.
  • Workarounds like “Hey Siri, ask ChatGPT” are popular, especially while driving, but provoke concern about distraction and poor voice UX.

We replaced our React front end with Go and WebAssembly

WebAssembly for Frontends

  • Many see this as a strong demonstration of WASM’s promise: bringing “any language” to the browser and enabling backend‑oriented teams to build rich UIs.
  • Others argue WASM is appropriate for niche, heavy client‑side computations (FFmpeg, game engines), but “absurd” for typical CRUD/DOM apps where JS/TS excel.
  • There’s relief that the app uses DOM elements, not canvas‑only rendering (which would harm accessibility).

Go + go-app Tradeoffs

  • go-app’s Go‑based virtual DOM is praised for compiler‑checked UI, editor completion, and reuse of Go logic, though some find the “HTML via Go function calls” style ugly versus template‑based approaches.
  • Go WASM currently produces large binaries (tens of MB uncompressed) due to its runtime and async model; new Go releases and alternatives (TinyGo, Zig, Rust) are discussed as ways to shrink output.
  • They replaced JSON with gob over WebSockets for performance; commenters note gob is not hardened against adversarial input and has caused perf/robustness issues elsewhere.

Bundle Size, Performance, and Architecture

  • 32 MB (≈4 MB compressed) is seen as huge compared to typical JS/WASM bundles, but some argue it’s acceptable for a professional app used all day, where caching amortizes cost.
  • Concerns remain about startup time, CPU/battery use, and multi‑tab initialization; suggestions include workers, chunked WASM loading, and (hypothetical) shared runtimes.
  • Some question why so much data crunching happens client‑side instead of pre‑aggregation on the backend; counterpoint is that pushing work to clients can control infrastructure costs.

Was React Really the Problem?

  • Several are unconvinced by the claim that “TypeScript/React didn’t scale” for a complex UI; they point to virtualized tables and standard browser techniques that can handle huge datasets.
  • Some who tried the new UI report basic UX rough edges, reading this as inexperience with frontend rather than a React limitation.
  • Others defend the decision as primarily about eliminating a duplicated React codebase and unifying logic in one language/ecosystem, not about React’s raw technical limits.
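The "virtualized tables" technique cited above boils down to simple window math: render only the rows inside the viewport plus a small overscan, whatever the total row count. A minimal sketch (all names illustrative, not from any particular library; shown in Go to match the article's stack, though the idea is the same in TypeScript):

```go
package main

import "fmt"

// visibleRange returns the half-open row window [start, end) a virtualized
// table actually renders: the rows intersecting the viewport, padded by
// `overscan` rows on each side to hide pop-in while scrolling.
func visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan int) (start, end int) {
	start = scrollTop/rowHeight - overscan
	if start < 0 {
		start = 0
	}
	// Integer ceiling of (scrollTop+viewportHeight)/rowHeight, plus overscan.
	end = (scrollTop+viewportHeight+rowHeight-1)/rowHeight + overscan
	if end > totalRows {
		end = totalRows
	}
	return start, end
}

func main() {
	// A 1,000,000-row table with 24px rows and a 600px viewport,
	// scrolled to 24000px:
	s, e := visibleRange(24000, 600, 24, 1000000, 3)
	fmt.Println(s, e, e-s) // 997 1028 31: the DOM holds ~31 rows, not a million
}
```

This is why commenters argue dataset size alone doesn't rule out React: the rendered node count stays constant no matter how many rows exist.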

One Language Everywhere & Full‑Stack Debate

  • One camp strongly values single‑language stacks (Go everywhere, or TS everywhere) for small teams: easier cross‑editing, fewer context switches, and lower barriers for contributors.
  • Another camp argues front‑ and back‑end are effectively different disciplines with different concerns; forcing one language can hide real complexity and reduce use of best‑fit tools.
  • A large subthread debates “full‑stack”: some see full‑stack engineers as highly productive generalists; others view them as dabblers who underperform specialists, especially on UX or complex backend work.

Ecosystem, Tooling, and LLMs

  • Skeptics note hiring React devs is far easier than finding Go+WASM frontend developers.
  • Several argue LLMs make popular, stable stacks even more attractive, since models answer questions about mainstream tools far better than about niche frameworks.
  • Others say good developers can learn new stacks quickly and that teams should optimize for adaptability over narrow framework experience.

Accessibility & Maintainability Concerns

  • Because the UI uses real DOM elements, commenters believe screen‑reader access should be achievable, unlike canvas‑only UIs.
  • Some foresee long‑term maintenance pain: large binaries, esoteric stack, thin ecosystem, and dependence on a relatively young library. Others call it an interesting, valid experiment whose real success will only be clear over time.

Google Maps now shows the 'Gulf of America'

Significance of the Renaming

  • Some see the name change as largely arbitrary: countries routinely use different names for shared features, so the US can call it what it wants in its own system.
  • Others argue this is not normal: renaming such a large, internationally shared feature by one politician, quickly and by fiat, is described as “extremely unusual” in the modern era.
  • Critics contrast it with Denali/Mt. McKinley: that change followed decades of local usage and formal requests, whereas “Gulf of America” was invented recently with no apparent grassroots demand.
  • A British commenter likens it to colonial powers unilaterally renaming places, which historically bred resentment.

Motives and Power Dynamics

  • Suggested motives:
    • “Shock-and-awe” / “flood the zone” trolling to distract from more consequential policies and court fights.
    • Red meat for a political base and a jingoistic vanity move (“makes us look bigger on a map”).
    • A symbolic show of power and a “loyalty test” — do institutions and individuals adopt the new term?
    • A niche theory: renaming as a way to sidestep previous executive orders on oil drilling in the Gulf of Mexico.
  • Some frame it as xenophobic and expansionist rhetoric, in line with talk of annexing Canada, buying Greenland, or controlling Gaza and Panama.

Inclusivity vs. Jingoism

  • Supporters argue “Gulf of America” could be more inclusive since the US and Mexico are both in North America.
  • Opponents respond that the executive order itself clearly uses “America” to mean the United States and to “honor American greatness,” not the continent.
  • Critics see it as petty historical revisionism, akin to authoritarian regimes renaming places for ego or propaganda, not inclusion.

Geopolitics and Overreaction

  • Some worry it needlessly antagonizes neighbors and weakens alliances, questioning when Mexico might rationally seek security guarantees against the US.
  • Others say people are catastrophizing; the renaming is dumb, rude, and symbolic but not geopolitically decisive, and outrage mainly serves as a distraction.

Google Maps and Naming Policy

  • Commenters note the US GNIS database now carries “Gulf of America,” and Google appears to be following that, not inventing its own label.
  • Outside the US, many see “Gulf of Mexico (Gulf of America),” which some find odd or sycophantic.
  • There’s debate about whether the EO’s wording really covers the whole gulf; some argue Google has over-applied a name intended just for US coastal waters.
  • Other map providers (Apple Maps, MapQuest, Waze, OpenStreetMap) initially differed but are reported as gradually aligning or partially adopting the new name.