Hacker News, Distilled

AI powered summaries for selected HN discussions.

Page 248 of 358

Features of D That I Love

Core Language Features Highlighted

  • .init default initializers vary by type (e.g., int = 0, float = NaN, enums can choose a sentinel), seen as powerful but mentally heavier than simple zero‑init.
  • Design by Contract: in/out/invariant blocks and scope(exit) for robust postconditions and cleanup; some note prior art in Eiffel and Ada/SPARK.
  • Error handling and resource safety: the scope storage class guarantees no pointer escape, and scope(exit)-style constructs are praised as a unifying cleanup mechanism that could replace much exception machinery.
  • CTFE: functions run at compile time when used in constant contexts, without special keywords; regarded as one of D’s standout features.
  • UFCS (Uniform Function Call Syntax): f(a) can equivalently be written a.f(), enabling pipeline-style code and making free functions feel like methods; also used heavily with templates.
  • $ shorthand for array length and overloadability for multidimensional slicing: liked by some, distrusted by others because overloading can hide semantics.
  • Attributes (in, out, inout, ref, scope, return ref) and pure functions: give strong semantic guarantees but contribute to perceived complexity.
  • Strong C interop: ImportC, BetterC, and direct calling both ways noted as a major but under‑advertised asset.
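The scope(exit) idea praised above has rough analogues in other languages (Go's defer, C++ scope guards). As a loose Python sketch of the same "register cleanup at the point of acquisition, run it on any exit path" pattern — this is contextlib, not D syntax:

```python
from contextlib import ExitStack

def count_lines(path):
    # Rough analogue of D's scope(exit): registered callbacks run on any
    # exit path (normal return or exception), in reverse registration order.
    with ExitStack() as cleanup:
        f = open(path)
        cleanup.callback(f.close)                  # like: scope(exit) f.close();
        cleanup.callback(print, "done with", path) # runs before f.close
        return sum(1 for _ in f)                   # cleanup still runs after return
```

The point the thread makes is that pairing acquisition with cleanup at the same source location scales better than nested try/finally blocks.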

Contracts / Invariants Debate

  • One side: invariants are “just runtime asserts”; similar effects can be built via helper objects, aspects, or scope_exit‑like patterns.
  • Other side: having invariants as first‑class, auto‑run before/after public methods adds semantic weight; enables tooling, model checking, and clearer intent.
  • Subtlety noted: invariants often don’t hold in the middle of methods, so public methods calling each other require care; D’s mechanism exposes this.
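The "first-class vs just runtime asserts" debate is easier to see in code. A minimal Python sketch of the auto-run behavior (a hypothetical decorator, loosely mirroring how D wraps public methods with invariant checks — not D's actual mechanism):

```python
def checked(method):
    # Run the class invariant before and after each public method,
    # loosely mirroring D's automatic invariant() calls.
    def wrapper(self, *args, **kwargs):
        self._invariant()
        try:
            return method(self, *args, **kwargs)
        finally:
            self._invariant()
    return wrapper

class Account:
    def __init__(self, balance):
        self.balance = balance
        self._invariant()

    def _invariant(self):
        assert self.balance >= 0, "invariant violated: negative balance"

    @checked
    def withdraw(self, amount):
        # The invariant may be temporarily false *inside* a method;
        # it is only checked at the public-call boundaries.
        self.balance -= amount

acct = Account(10)
acct.withdraw(5)       # invariant holds before and after
# acct.withdraw(100)   # would raise AssertionError on exit
```

This also makes the boundary subtlety concrete: if one @checked method called another, the invariant would be re-checked mid-operation, which is exactly the care D's mechanism forces on public methods calling each other.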

Ergonomics & Readability

  • Some love UFCS chains (stdin.byLine...uniq...map!...sort...copy) as data‑flow pipes; others find them visually noisy and hard to parse, preferring explicit function calls or Elixir‑style |>.
  • Template instantiation with ! (map!(a => a.idup)) and names like idup (immutable dup) are seen as either neat or opaque, depending on taste and familiarity.
  • Operator overloading (including $) prompts concern about “clever” misuse; defenders note that + carries similar risk and that abuse is rare in practice.

GC, Memory Model, and “Better C/C++” Ambition

  • One camp: D’s garbage collector is “viral” – large parts of the stdlib (e.g., exceptions, string ops) rely on it, making truly @nogc code painful and limiting D’s viability as a C/C++ replacement.
  • Counter‑camp: GC is optional in practice; you can avoid allocating with it, use malloc/RAII/buffer types, BetterC mode, and treat GC as one tool among many.
  • Several participants argue this GC tension, and the long‑standing difficulty of a smooth no‑GC story, is an existential issue that has never been fully confronted.

Ecosystem, Tooling, and Adoption

  • Reasons cited for modest adoption:
    • No big corporate sponsor or marketing push during critical years.
    • Early history of D1 vs D2 and multiple runtimes created confusion.
    • Lacking or flaky tooling: LSP, debugging, autocomplete, DUB behavior; compared unfavorably to Rust/Go/Zig tooling.
    • Library ecosystem: many bindings out of date; using D at scale often means becoming upstream for lots of libs instead of solving the domain problem.
  • Desired but missing pieces: a first‑class, batteries‑included web framework (Django/Rails‑style); a strong, official GUI stack; coherent WASM story beyond “C subset only.”

Language Design, Complexity, and Niche

  • Supporters describe D as pragmatic, readable, and highly productive, with “batteries‑included” stdlib, strong metaprogramming, and multi‑paradigm flexibility (procedural, OO, functional).
  • Critics see a “jack of all trades” without a tight design: many overlapping features (parameter qualifiers, conditional compilation), complex stdlib internals, and no clear niche where it decisively beats existing languages.
  • Some feel D over‑pivoted to attract C++ users (despite GC skepticism) instead of leaning into attracting Java/C#‑style communities.
  • Overall sentiment: technically impressive, with beloved features (contracts, CTFE, UFCS, C interop), but held back by ecosystem, tooling, and a diffuse positioning in the language landscape.

ICEBlock, an app for anonymously reporting ICE sightings, goes viral

App trust, privacy, and honeypot fears

  • Many commenters are suspicious because the app is closed source, centralized, and iOS‑only; some explicitly worry it could be a honeypot to identify dissidents rather than migrants.
  • Others cite the claim that the app “does not collect or store user data,” which a reporter allegedly verified via network analysis, but note that this could change in an update.
  • Multiple people emphasize that Apple still has a list of all downloaders and push targets, which could be subpoenaed, even if the developer keeps no logs.

iOS-only design and Android controversy

  • The developer’s explanation: on Android they’d have to keep device IDs or accounts for push notifications, creating subpoena risk; Apple’s new broadcast push channels supposedly avoid that by letting Apple manage device mapping.
  • Several Android-knowledgeable commenters say this is misleading:
    • Apps can be sideloaded outside the Play Store, use their own polling-based notifications, or use privacy-respecting notification services.
    • Technically, iOS and Android both need tokens; the privacy difference is more about pushing risk to Apple (CYA) than actual user anonymity.
  • GrapheneOS publicly disputes the “Android can’t be private enough” rationale, which deepens skepticism about the developer’s technical understanding.

Legality and First Amendment issues

  • A large contingent argues that reporting police/ICE presence is protected speech, analogous to: Waze’s speed‑trap reports, flashing headlights, or radio scanners.
  • Some lawyers/case‑law‑aware participants mention federal and state rulings upholding such speech, though they note the Supreme Court’s current unpredictability.
  • Others counter that intent matters: an app explicitly designed to help people evade lawful detention might be painted as obstruction, even if the legal theory is weak.

Motivations for using the app and views on ICE

  • Supporters frame it as basic self‑protection: avoiding potentially dangerous or harassing encounters with armed agents, even for citizens and legal residents, given mistaken detentions and weak accountability.
  • Opponents see it as aiding lawbreaking and undermining “rule of law,” arguing undocumented presence is illegal and deportation is a legitimate state function.
  • Non‑US readers ask why anyone would oppose deportations; replies describe decades of lax enforcement, people with deep roots being removed, due‑process concerns, and the current administration’s highly visible, militarized raids.

Data quality, abuse, and Sybil attacks

  • Several note the app is already being flooded with fake reports; a few people suggest hostile users (or even ICE supporters) can trivially render it useless.
  • Others point out that in practice laypeople often misidentify agencies (DEA/HSI called “ICE”); even good‑faith reporting could be noisy.
  • The core technical concern: without identities or reputation, the system is inherently vulnerable to spam/Sybil attacks.

Authoritarian drift and Streisand effect

  • The harsh official threats against the developer are widely described as authoritarian and chilling, especially attacks on media simply for covering the app.
  • Many see the episode as part of a broader pattern: expanding ICE power and budget, masked paramilitary‑style raids, weak judicial accountability, and growing normalization of retaliation against speech.
  • Others note the classic Streisand effect: political denunciations massively increased awareness and downloads of a tool that was previously obscure.

Sony's Mark Cerny Has Worked on "Big Chunks of RDNA 5" with AMD

Mark Cerny, RDNA5, and AMD Collaboration

  • Commenters note Cerny appears to work as a consultant rather than a Sony employee, influencing both PlayStation SoCs and AMD GPUs.
  • The article’s “RDNA5” branding is questioned: Cerny himself is quoted as saying “RDNA 5, or whatever AMD ends up calling it,” suggesting a name in flux.

RDNA, CDNA, and UDNA Convergence

  • Several posts argue that AMD’s public roadmap ends RDNA at 4, with a shift to “UDNA1,” a unified architecture with CDNA (HPC/datacenter).
  • There’s disagreement on how similar RDNA and CDNA already are: some claim broad commonality, others detail substantial differences (wavefront width, execution model, latency, feature sets).
  • UDNA is seen as both an architectural and organizational consolidation, potentially merging teams and long-term strategy.

Console Custom Silicon and Semi-Custom Paths

  • Sony could theoretically request an RDNA4-derived design instead of adopting early UDNA, resulting in an “RDNA5” that remains semi-custom and never appears as a retail GPU.
  • Past semi-custom work (e.g., console/APU overlap, Steam Deck–style chips) is cited as precedent, with console learnings later feeding into APUs.

Generational Leap: PS4 → PS5

  • Many feel the visual jump from PS4 Pro to PS5 is small relative to the FLOPS increase, especially compared with earlier eras (PS1→PS3).
  • Explanations offered:
    • More pixels (4K vs 1080p) consume much of the extra compute.
    • Hardware progress has slowed (Dennard scaling and Moore’s law weakening, rising node costs).
    • Huge performance gains now often go into higher FPS and faster loading instead of visibly new effects.
    • The biggest PS5 leap is widely credited to NVMe SSD + hardware decompression, not raw GPU power.
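The "more pixels" point is simple arithmetic: 4K has exactly four times the pixels of 1080p, so a 4x-FLOPS GPU merely holds per-pixel effort constant at the new resolution.

```python
# 4K quadruples the pixel count of 1080p, absorbing much of a
# GPU generation's raw compute gains before any new effects appear.
px_1080p = 1920 * 1080   # 2,073,600 pixels
px_4k    = 3840 * 2160   # 8,294,400 pixels
print(px_4k / px_1080p)  # → 4.0
```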

Engines, Bloat, and Unreal

  • Some argue engine practices, especially Unreal’s default pipeline optimized for highly dynamic scenes, waste potential for many game types.
  • Others respond that fully dynamic lighting and environments dramatically improve workflows and design freedom, even if they cost performance.

Anti-Aliasing and Image Quality Tradeoffs

  • TAA is debated:
    • One side says it’s an efficient, necessary replacement for supersampling/MSAA and has improved significantly.
    • Critics argue temporal and upscaling techniques sacrifice clarity, introduce ghosting/blur, and inflate FPS metrics while degrading real image quality.
    • There’s agreement that objective metrics for temporal artifacts are poor, making evaluation hard.

Performance vs Fidelity and Player Preferences

  • Multiple comments say most players choose performance modes when presented with a choice; a cited Sony stat claims ~75% pick performance.
  • Others question how representative this is across genres, noting fast competitive games may skew the data.
  • Some users report being FPS-tolerant (e.g., 25–30fps is acceptable if stable); others insist modern displays make low framerates intolerably blurry.

Optimization Culture and Rising Costs

  • Several posts lament that modern games are less optimized, with studios relying on hardware advances and middleware.
  • A few argue the bottleneck has shifted:
    • Asset production (high-res models/textures) dominates cost, leading to teams with far more artists than programmers.
    • AAA engines increasingly optimize for artist workflows rather than peak runtime performance.
  • Others counter that even in earlier eras, many console games already used C and higher-level tooling; the “all hand-tuned asm” narrative is overstated.

Storage, Streaming, and New Techniques

  • PS5’s SSD and streaming capabilities are highlighted as enabling design changes (fewer fake loading corridors, highly detailed continuous worlds).
  • Examples mentioned include Cyberpunk’s serialization bottlenecks on PS4 and newer techniques like Nanite:
    • Supporters say Nanite shines in extremely complex scenes and is optional.
    • Critics say it adds overhead and can hurt performance in simpler content.

Hardware Progress, GPUs, and AI/Crypto

  • Some posters attribute weaker generational jumps to fundamental tech limits (SRAM/IO scaling stalling, expensive shrinks).
  • Others note that GPU vendors now prioritize datacenter/AI features (high VRAM, interconnects), potentially slowing pure gaming advances; there’s disagreement on how much this affects consoles.
  • A side thread argues that software has also grown more capable at leveraging parallelism, while another stresses that fully saturating modern multi-core + GPU + NPU systems remains rare.

APIs and Vulkan-on-PlayStation Debate

  • One question asks why Sony doesn’t support Vulkan on PS5.
  • Defenders of Sony’s proprietary APIs (GNM/GNMX) say consoles benefit from ultra-low-level, hardware-specific interfaces and avoid Khronos politics.
  • Pro-Vulkan voices argue standards reduce developer burden and avoid NIH/lock-in; they criticize Vulkan’s extension “spaghetti” but still see it as the best collaborative option.
  • There’s nuanced discussion of Vulkan’s strengths (barrier model, SPIR-V) and weaknesses (complex extensions, OpenGL legacy).

AMD Software Stack and ROCm

  • A commenter reports that recent ROCm releases now “just work” with tools like llama.cpp on AMD GPUs, contrasting with years of painful setup.
  • Others note llama.cpp can bypass ROCm entirely via Vulkan, but ROCm compatibility is treated as a useful barometer of AMD’s software maturity.

Miscellaneous

  • Some express excitement that UDNA and Cerny’s work could improve AMD’s datacenter competitiveness against Nvidia, with the caveat that poor drivers/support could again damage trust.
  • There’s skepticism about current-gen consoles’ limited exclusive library, but anticipation that titles like GTA VI may finally showcase the hardware.

Show HN: CSS generator for a high-def glass effect

Implementation & Performance

  • Commenters note the effect relies on “many layered tricks,” not one simple CSS rule.
  • The blur component of backdrop-filter is identified as the most resource-intensive, especially at high radii and with constantly changing backdrops (scrolling, video).
  • Some users report jank/slow scrolling on mobile (especially Firefox on Android), others say it’s smooth on powerful Apple hardware, highlighting hardware variability.

Visual / Physical Accuracy Debate

  • Several comments dive into how realistic the blur is relative to real glass:
    • Standard backdrop blur only samples pixels directly behind the element’s bounds, missing contributions from nearby content; some call this a spec- or implementation-limited approximation.
    • SVG-mask-based workarounds can extend the blur kernel beyond bounds, but then may misrepresent how light interacts with non-emissive objects.
  • There is disagreement on whether the “tweaked” version is more correct or just visually distracting and depth-confusing.

Relation to Apple Liquid Glass & Platform Moat

  • Many frame this as adjacent to, but distinct from, Apple’s Liquid Glass:
    • This tool intentionally does not attempt full refraction/edge distortion.
    • Commenters argue Apple picked an effect that’s easy to imitate poorly but hard to reproduce faithfully, especially on web/cross-platform stacks.
    • Some think the moat is as much GPU/OS optimization and power efficiency as pure aesthetics.

Design Value & UX Concerns

  • Some love the look, especially with subtle texture and bevels; others find the glass/refraction trend distracting or skeuomorphic.
  • Concerns are raised that highly transparent, refractive layers can harm contrast and readability if overused; blur remains better for relaxing the eye.

Browser & Platform Constraints

  • Advanced SVG-filter-based “real glass” demos exist, but often fail on Firefox and sometimes Safari, reinforcing cross-browser limits.
  • Webview apps on iOS are reportedly capped at 60fps, making heavy effects feel worse than native.

Tool UX and Practical Considerations

  • Discussion of the trade-off between a polished ~40+ line CSS stack and a single filter: blur() declaration.
  • Some feedback about mobile layout (generator overlays the effect) and shadow behavior.
  • Texture assets are free to use but should be self-hosted in production.

Related Projects & Alternatives

  • Multiple links to other glassmorphism generators, SVG-filter demos, and a JS-based “real refraction” glass prototype are shared.
  • Houdini is mentioned as a longer-term path to richer GPU-style effects in CSS.

Firefox 120 to Firefox 141 Web Browser Benchmarks

Browser performance in practice

  • Many say mainstream desktop browsers now feel similar in speed; hardware and network are usually the bottleneck, not the engine.
  • Several note mobile is different: some sites are painfully slow in mobile browsers, but this is blamed on site design, not engines.
  • The benchmark result (~12% speedup from Firefox 120 → 141) is appreciated as evidence of continued optimization, not bloat.

Chromium-by-default mindset

  • Multiple comments report sites that run poorly or refuse to load on Firefox despite working fine if Firefox pretends to be Chrome via User-Agent spoofing.
  • Blocking Firefox based on UA is criticized as lazy and brittle; capability/feature detection is promoted as the correct pattern.
  • Some blame outdated/polyfill-heavy frontends and frameworks that were only ever tested on Chromium.
  • Others argue this behavior is timeless: in the IE era devs assumed IE; now they assume Chrome; the deeper issue is developer shortcuts, not a specific browser.

Real-world performance anecdotes

  • Mixed reports: one user says multiple Twitch streams make Firefox unusable; others can run many streams fine and suggest addons, config, hardware, or drivers as variables.
  • Another notes large GitHub reviews used to be faster on Firefox than Chrome, illustrating that bottlenecks can flip between engines.
  • A WebRTC-in-background bug (linked Bugzilla ticket) is described as breaking Salesforce/CRM calling popouts in Firefox; others find the use case unclear or niche.

Firefox updates and workflow disruption

  • Some complain Firefox’s update process is intrusive: “restart to finish update” can appear mid-work, sometimes blocking new tabs and losing page state.
  • Others report worse experiences with Chromium, where updated-but-not-restarted instances behave unpredictably (broken audio, tabs).
  • Several clarify that most of the “restart page” pain is on Linux when package managers overwrite Firefox under a running instance.
  • Workarounds discussed: disabling automatic updates, letting Firefox update only when not running, using the upstream tarball, or carefully managing updates on corporate machines.
  • On macOS, some report a smoother flow: optional background download and apply-on-restart.

Ad blocking, battery, and web bloat

  • Commenters say battery life differences between modern browsers have largely converged.
  • Many insist the biggest performance win is still installing an ad blocker; web “snappiness” hasn’t improved much in 20 years due to ads, heavy frameworks, and auth walls consuming hardware gains.
  • Phoronix’s own site is criticized as ad-bloated and unpleasant without reader mode.

Browser choice, forks, and ethics

  • Several praise Firefox as a solid, improving browser and a critical counterweight to Chromium dominance; keeping Manifest V2 (and thus strong ad blockers) is seen as a key differentiator.
  • Some recommend Firefox-based variants: privacy-tuned builds (e.g., with hardened defaults) and “featureful” forks combined with tab/containers extensions.
  • A few users switched away (e.g., to Edge) over removed Firefox features or perceived instability, while others emphasize the benefits of sticking with the native/system browser.
  • A long subthread revisits Mozilla’s handling of a past CEO controversy, debating whether it shows Mozilla as “evil,” intolerance of certain views, or simply responding to external pressure and employee concerns. Opinions are sharply divided; moderators mark parts as off-topic.

The Moat of Low Status

Status, leadership, and the “first dancer” example

  • Thread centers on whether going first (e.g., first on the dance floor) is high- or low-status.
  • One view: it’s a fundamentally high-status move—taking leadership, setting group direction, showing lack of insecurity.
  • Counterview: if you’re unknown, it’s a status gamble; success can confer status, failure looks “cringe.”
  • Several emphasize status is relative to the room, not absolute; even elites can feel low-status among their peers.
  • Others say the move itself is neutral; body language and reactions determine whether it reads as high or low status.

Growing older and caring less

  • Multiple commenters say age naturally erodes status anxiety; gossip and pecking orders feel boring, “DGAF” becomes easier.
  • This detachment improves mental health and makes learning new things less fraught.
  • Some note this is cushioned by implicit status given to older men and people with money.

Using low status (or shamelessness) as a tool

  • Many personal stories: solo travel, language learning, starting piano late, wearing bold clothes, initiating conversations.
  • Key pattern: forcing oneself to do the awkward thing produces outsized benefits—skills, serendipitous encounters, confidence.
  • Several frame “willingness to look stupid” or shamelessness as a superpower or deliberate strategy (embrace impostor, accept being “the dumbest in the room”).

Workplace status and “stupid questions”

  • Senior people describe intentionally asking basic or “dumb” questions to use their status buffer for the team’s benefit.
  • Others report this often raises their status as thoughtful leaders, though it depends heavily on team culture.
  • Strong debate over “never be afraid to ask stupid questions”:
    • One side: many disasters come from unasked basics; blank stares from senior engineers often reveal real gaps.
    • Other side: there are contexts (surgeons, pilots, hostile orgs) where such questions can damage credibility; judgment and alternative learning channels matter.

Privilege, real low status, and criticism of the article

  • Several argue the piece is written from a very high-status, elite background; “low status” here mostly means embarrassment, not structural marginalization.
  • Some say truly low status (homelessness, severe disfigurement, systemic exclusion) is rare but miserable and not romantic.
  • Others claim the real moat is that low-status people are punished when they succeed or excel, not just when they fail.

Risk, learning, and talent

  • Some praise the article’s “moat” framing and connect it to ideas like “The Dip” and pain vs suffering.
  • Others caution against survivorship bias: most “putting yourself out there” ends in rejection, so resilience to repeated failure is crucial.
  • Disagreement over “you’ll be bad at anything new”:
    • One side: expecting to suck at first is healthy and liberating.
    • Other side: innate talent and transferable skills mean some people do start strong; for others, effort may never yield competitive performance, so picking battles matters.
  • On learning, several stress that practice without reflection produces entrenched mediocrity; theory, coaching, and deliberate practice are needed, in poker and elsewhere.

Moats, naming, and safe environments

  • Some dislike “moat” as a metaphor, preferring “cage of low status”; others defend “moat” as a barrier that filters out most would-be learners.
  • Multiple comments emphasize the importance of “no asshole zones” and moderation tools so people can endure the low-status phase without being shredded by ridicule.

Exploiting the IKKO Activebuds “AI powered” earbuds (2024)

Evidence of Compromise (“It runs DOOM”)

  • Commenters treat “runs DOOM” as the modern equivalent of “cat /etc/passwd” – not directly useful, but strong proof of effective control over the device.
  • Some pedantry over Android not having /etc/passwd, but consensus that ADB plus sideloaded APKs is enough to demonstrate a serious compromise.

Core Security Failures

  • Leaving ADB enabled in production hardware is seen as inexcusable; once discovered, the rest of the findings are unsurprising.
  • Device communicates directly with OpenAI, implying a hardcoded API key on-device; this is widely criticized as a textbook secret‑management failure.
  • “Decrypt” routines partly reduce to base64 or trivially reversible schemes; several people note how common it is for developers to confuse encoding with security.
  • Chat logs and sensitive data appear to be logged server-side (at least in some modes), raising strong privacy concerns.
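The encoding-vs-security confusion called out above takes two lines to demonstrate: base64 is a reversible transport encoding with no key involved (illustrative Python; the string below is an invented placeholder, not the device's actual key or scheme).

```python
import base64

# base64 makes data look scrambled, but it is an encoding, not encryption:
# anyone holding the string can reverse it with no secret whatsoever.
secret = b"sk-example-not-a-real-key"        # invented placeholder
obscured = base64.b64encode(secret).decode()
print(obscured)                              # looks opaque at a glance
print(base64.b64decode(obscured))            # original recovered, no key needed
```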

Vendor Response & Ethics

  • Some think the company’s initial responsiveness (rotating the key, adding a proxy) is better than most. Others note:
    • Use of a free Gmail address, lack of timelines, and mixing “sponsorship” offers into security emails make the response look amateurish and borderline like a bribe.
    • They stopped engaging before all issues were fixed, undermining any goodwill.

System Prompt, China Politics & Censorship

  • The system prompt forbidding “Chinese political” content and invoking “severely life threatening reasons” is seen as both darkly comic and revealing of censorship constraints.
  • Discussion splits between:
    • How LLMs interpret vague bans on “China politics” (e.g., Tiananmen, Xinjiang), and how to express forbidden topics you can’t name.
    • A long tangent on hate‑speech laws vs criticism of the state, with strong disagreement over whether such laws are clear protections or inherently abused tools of censorship.

LLM Guardrails, Safety-Critical Use & “People Will Die” Prompts

  • Many argue prompts are “magical incantations,” unsuitable as primary guardrails in life‑critical systems; real constraints and failsafes are needed.
  • Others counter that nothing is 100% foolproof anyway; prompts can still reduce error rates in non‑critical contexts.
  • Broader concern that LLMs are already in public safety / insurance workflows as decision-support, with “human in the loop” often functioning as an accountability dodge.

IoT / AI Security & Market Dynamics

  • Thread generalizes this case to a pattern: fast‑cycle, low‑margin hardware and “AI gadgets” often ship with near‑zero security design, hardcoded keys, and no real secret lifecycle.
  • Some see this as a major opportunity (and headache) for cybersecurity work; others stress that “one mistake can cause a breach,” even if professionals shouldn’t be punished for every slip.

China vs US Surveillance & Sinophobia Debate

  • Heated debate over whether criticizing Chinese-made connected devices is justified risk analysis or biased “everything Chinese spies on you” rhetoric.
  • Several point out similar or worse surveillance and data‑sharing practices by US firms and governments; others argue the lack of legal recourse and the PRC’s political system make Chinese-origin products uniquely untrustworthy.

Miscellaneous Reactions

  • Some mock the low technical bar (debuggable Android, trivial APK decompilation) and “corny sci‑fi” style prompts as emblematic of unserious AI engineering.
  • Terminology like “sideloading” and calling mobile OS images “ROMs” is criticized as marketing-driven language that normalizes locked-down platforms.
  • A few users report poor hardware reliability of the earbuds themselves, independent of the security issues.

The first American 'scientific refugees' arrive in France

Research Funding and Salaries (France vs US/Europe)

  • Multiple comments stress that French researchers, especially early-career, are significantly underpaid, often near minimum wage, and that French research is broadly underfunded.
  • The “Safe Place for Science” program promises equal pay with French researchers, which reassures some but alarms locals who already feel under-resourced.
  • Some argue research is underfunded “everywhere,” with academic careers structurally unattractive relative to industry.
  • Disagreement over whether Europe or the US spends more on academic research: one side claims Europe spends more; another counters with data showing comparable or lower EU levels, then narrows the comparison to academic R&D only (~$100B in both blocs).

Quality of Life vs Income

  • One cluster argues that lower French salaries are offset by universal healthcare, free education, pensions, stronger labor protections, five weeks’ vacation, better public transport, and safer, more livable cities.
  • Others emphasize higher disposable income in the US (OECD data cited), arguing Americans should value what they have economically.
  • Counterarguments say disposable income is a poor proxy: being poor/middle class is portrayed as better in France due to social safety nets and lower systemic risk (health, education).
  • Several personal anecdotes contrast “more money but worse life” in the US with richer everyday experience and security in Europe; critics respond that such stories cherry-pick lifestyle preferences and ignore why many Europeans still move abroad for higher pay.

Competition, Migration, and Protectionism

  • Concern that importing US “refugee” scientists creates more competition for scarce French positions, possibly forcing more French researchers to emigrate or leave academia.
  • Others see this as analogous to the historic US strategy of attracting global talent, arguing it benefits the host country’s science overall.
  • A few note the political risk: the French right could frame this as prioritizing foreigners, potentially ending the program later and destabilizing these researchers’ status.

US Political Climate and “Scientific Refugees”

  • Some see specific fields (gender studies, CRT, climate science) as especially targeted by current US leadership; others say all science suffers from politicized underfunding.
  • There is sharp debate over comparing today’s US to Nazi-era persecution of academics: one side calls it offensive minimization of the Holocaust; another argues early-stage fascist tactics (identifying outgroups, attacks on science, abusive rhetoric, camps) justify the analogy as a warning.

Private sector lost 33k jobs, badly missing expectations of 100k increase

ADP vs BLS and data reliability

  • Commenters note ADP’s series has a “spotty track record” compared with the government’s BLS payroll report, and the two often diverge sharply, including in 2025.
  • Several people stress ADP and BLS measure different universes: ADP is private-payroll clients only; BLS uses a survey of establishments plus modeling, and also counts government jobs.
  • There is disagreement over recent BLS downward revisions:
    • One camp sees a pattern of “rosy” initial numbers later revised down, interpreting this as politicized spin.
    • Others explain revisions as a standard survey effect: late responses tend to come from more volatile firms (doing lots of hiring/firing), so preliminary numbers systematically understate the true change in whichever direction the labor market is moving.
  • Historical revision data is cited to argue that revisions have been positive in some periods and negative in others, undermining claims of a permanent one‑direction bias.
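The late-responder mechanism can be sketched with a toy simulation (all numbers invented; real BLS weighting is far more involved): if volatile firms tend to report late, a preliminary estimate built from early responders understates the swing in whichever direction the market is moving.

```python
import random

random.seed(1)

# Toy model: 900 stable firms respond early with small job changes;
# 100 volatile firms (doing heavy net hiring this month) report late.
early = [random.gauss(0, 5) for _ in range(900)]
late  = [random.gauss(30, 60) for _ in range(100)]

prelim = sum(early)          # first published figure: early responders only
final  = prelim + sum(late)  # revision folds in the volatile late reports

# In an upswing the preliminary figure sits below the final one;
# make the late-firm mean negative and it overstates instead.
print(round(prelim), round(final))
```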

Unemployment, underemployment, and alternative metrics

  • Many argue headline unemployment (U‑3) is too narrow: it misses discouraged workers, people forced into part‑time, and those “downgraded” from high‑pay to low‑pay jobs.
  • Others respond that:
    • The US already publishes six unemployment measures (U‑1 to U‑6), plus wage, hours, participation, and sectoral data.
    • Complaints often reflect unfamiliarity with these series rather than a true gap in measurement.
  • Still, several people highlight difficult‑to‑measure issues: underemployment by skill, job quality, job satisfaction, income volatility, and the lived ability to afford housing/food.
  • Suggested complementary indicators: real wages, median disposable household income, inequality (Gini), labor-force participation, money velocity, sectoral breakdowns, and even life satisfaction.

Media, public understanding, and “one-number” narratives

  • A recurring theme: the metrics exist, but media and public discourse fixate on single numbers (U‑3, monthly payrolls, GDP) and ignore the rest.
  • Some blame shallow journalism; others emphasize limited public attention and weak economic literacy, leading to cherry‑picking of whichever metric fits a preexisting narrative that “the economy is bad.”

Tariffs and sectoral/job-mix shifts

  • The ADP report’s details (goods‑producing up, some services down) lead to debate over tariffs and industrial policy.
  • One view: tariffs and related policies are indirectly pushing higher‑skilled workers into lower‑paid roles, increasing employer leverage and depressing wages.
  • Another view: in the long run, more local production could strengthen labor (e.g., unions), though that depends heavily on organization and broader policy.

ADP methodology and representativeness

  • Some call the ADP report “noise,” arguing its client base may be structurally biased (e.g., by firm size, sector, or use of specific payroll vendors).
  • Others counter that covering payroll for a large fraction of firms (and workers) provides a valuable signal, so long as users understand and adjust for known biases.

Lived experience and job market stress

  • Multiple commenters share bleak job‑search anecdotes, including repeated rejections for low‑wage work and concerns about ageism and disability discrimination.
  • These experiences reinforce a broader sentiment that official metrics and “strong economy” headlines often fail to match what many workers feel on the ground.

Tesla reports 14% decline in deliveries, marking second year-over-year drop

Stock reaction & valuation debate

  • Deliveries fell ~14% year-over-year, missing FactSet estimates by a few thousand units, yet the stock rose on the day.
  • Some attribute this to “better than feared” results after more bearish expectations and dip-buying in a volatile name.
  • Others see Tesla as a meme stock: 170 P/E, two years of negative growth, and a valuation larger than many major automakers combined.
  • Bulls argue the price bakes in optionality: mass FSD monetization, robotaxis, and humanoid robots; critics counter that these are speculative and not reflected in current execution.

Causes of declining deliveries

  • Prior quarter’s excuse—customers waiting for refreshed models—is seen as invalid now that refreshes are out and sales are still down.
  • Some expect further declines due to product distractions (Cybertruck) and delays on a smaller, cheaper model.
  • Competition from Chinese EV makers, especially BYD, is viewed as a major structural threat, though there is disagreement about whether BYD itself is slowing.

Brand, politics, and customer sentiment

  • Many commenters say Musk’s political behavior and personal scandals have made Teslas socially toxic for key demographics, simultaneously alienating EV-friendly buyers and the political right.
  • Several former fans/owners state they would no longer buy a Tesla solely because of Musk, despite liking the product.

EV ownership: hype vs reality

  • Some owners praise low running costs, performance, and software integration, saying they’d never return to ICE.
  • Others describe range shortfalls (especially in cold weather), trip-planning complexity, unreliable third‑party chargers, longer repair-part delays, rapid tire wear, and 15–30 minute fast charges as under-discussed downsides.
  • Debate over whether EV “hype” has overshot reality or whether media and online discourse have now swung into “anti‑hype.”

FSD, robotaxis, and safety

  • Pro‑Tesla voices claim current supervised FSD is “basically zero intervention” for them and that Austin robotaxis show Tesla matching or beating competitors.
  • Others report frequent phantom braking or seasonal unreliability, especially on the U.S. East Coast, and note Tesla’s system remains officially Level 2, requiring constant supervision.
  • There is sharp skepticism that robotaxis can be both very cheap and wildly profitable, or that ride‑hailing could ever justify Tesla’s valuation; concerns include safety, legal exposure, and privacy.
  • Critics call “robotaxi” a misnomer while safety drivers are still present, and question accident-reporting transparency.

Competition and broader EV market

  • Some argue Tesla’s issues are company‑specific (brand damage, product choices), not purely EV‑wide, though multiple major EV makers are also reporting slower sales in the U.S.
  • Others note global EV sales are still growing, with strong uptake in Europe and China; they see Tesla’s early success in accelerating EVs as real but increasingly outcompeted on price and variety.

Vertical integration & product experience

  • Supporters highlight Tesla’s tight integration of hardware, software, and app (remote control, OTA updates, unified infotainment) as still ahead of many legacy OEMs.
  • Counterexamples are offered from Audi, VW, Hyundai, etc., where similar capabilities exist; critics say Tesla’s UX is good but no longer unique, and sometimes over‑touchscreen‑dependent.

Investor sentiment split

  • Some commenters are long‑term holders who feel vindicated by past returns and confident in future growth stories.
  • Others actively short the stock, viewing it as the purest expression of “dream selling”: highest multiple in its peer group despite recent earnings and delivery declines.

What I learned gathering nootropic ratings (2022)

Exercise as the Dominant “Nootropic”

  • Many commenters agree with the article’s conclusion that exercise, especially resistance training and HIIT, outperforms most supplements for cognitive and emotional benefits.
  • Several people report life-changing improvements in mood, focus, pain, and sleep from modest daily routines (e.g., ~20 minutes lifting + 20 minutes cardio).
  • Others say they feel no mental benefits from exercise, only physical ones, highlighting substantial individual variation.

Barriers, Pain, and Special Cases

  • A major theme is why people avoid exercise: it’s painful, boring, time‑consuming, and rewards are delayed. Some liken early training discomfort to “pain” that only feels normal once you’re adapted.
  • There’s a split on injury risk: some argue regular resistance training almost inevitably yields chronic tendon/ligament issues; others with decade‑plus lifting histories report no lasting pain and less age‑related dysfunction than sedentary peers.
  • People with chronic conditions (CFS/ME, MS, long COVID, fibromyalgia) describe post‑exertional crashes or tissue damage without corresponding gains, so standard exercise advice can backfire for them.

Diet, Sleep, and Lifestyle

  • Multiple comments echo that “any daily movement + minimally processed food” is powerful; debate then erupts over what “processed” means (bread, flour, preservatives, glyphosate, “everything in moderation”).
  • High‑quality sleep is framed by some as even more fundamental than exercise, with feedback loops between the two. Meal timing, late protein (e.g., casein), and avoiding nighttime water are discussed for sleep quality.

Subjective Ratings, Placebo, and Evidence

  • Several people question the article’s reliance on self‑rated effects: strong placebo, expectation, and early-euphoria biases are seen as pervasive.
  • Others argue subjective reports are still “evidence,” just weak observational evidence that should motivate proper blinded trials rather than be taken as proof.
  • There’s discussion of how randomized trials often show strong placebo effects (e.g., depression), suggesting caution when interpreting enthusiastic first‑dose anecdotes.

Stimulants, “Real” Nootropics, and Risk

  • Strong stimulants (Adderall, Ritalin, modafinil, amphetamines) are widely acknowledged as powerful but also addictive, tolerance‑forming, and highly individual in effect; some find them life‑changing, others report severe mood crashes or cardiovascular concerns.
  • Substances like phenibut, kratom, tianeptine, and psilocybin are criticized as being treated as “nootropics” despite clear recreational/addictive profiles and withdrawal risks.
  • Commenters urge medical supervision before using prescription‑grade compounds for performance, warning that the “nootropics” label often obscures real drug risks.

Meta: Definition Drift and Framing

  • Some object to calling weightlifting a “nootropic” at all, since it’s not a substance and doesn’t match early definitions.
  • Others note the original concept of nootropics (safe, non‑sedating, non‑stimulating cognitive enhancers) has largely collapsed into a catch‑all for anything that feels like it boosts mood, focus, or confidence, at least temporarily.

Cloudflare Introduces Default Blocking of A.I. Data Scrapers

Scope of the Feature

  • Commenters note the headline is misleading: Cloudflare is offering an opt‑in managed rule that:
    • Updates robots.txt to disallow named AI crawlers (GPTBot, Google‑Extended, ClaudeBot, Meta, etc.).
    • Uses existing bot‑detection signals (“Bot Score”, fingerprints, global traffic patterns) to block additional AI scrapers, not just user agents.
  • Some users already enabled it and saw only robots.txt changes; others point to Cloudflare’s blog saying deeper network‑level blocking is also applied.
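
A minimal sketch of what the robots.txt portion of such a managed rule looks like, using the crawler tokens named in the thread (GPTBot, Google‑Extended, ClaudeBot); exact user‑agent tokens vary by vendor, and network‑level blocking happens separately from this file:

```text
# Disallow named AI crawlers site-wide; each vendor is matched
# by its published user-agent token.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /
```

Note that robots.txt is purely advisory: it only affects crawlers that choose to read and honor it, which is why the deeper fingerprint‑based blocking matters.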

Effectiveness and the Bot Arms Race

  • Many argue serious scrapers will ignore robots.txt, spoof user agents, and use rotating residential IPs; blocking will mostly hit “honest” big players.
  • Others counter that Cloudflare’s scale (tens of millions of requests per second) lets it fingerprint tools, catch evasive crawlers, and correlate abusive behavior across IPs and ASNs.
  • Several operators report clear “AI bot storms” (huge RPS spikes, repeated hits to disallowed paths) and say Cloudflare or tools like Anubis significantly reduced load.
  • Concern: punishing transparent bots incentivizes obfuscation, but some say that arms race has existed for 20+ years anyway.

Impact on Site Operators

  • Many welcome the feature: AI bots were exhausting bandwidth, breaking small servers, or hammering expensive endpoints and APIs despite caching and robots.txt.
  • Others say well‑tuned caching or CDNs should make bot traffic cheap to serve and don’t understand the panic; replies highlight non‑cacheable endpoints and badly behaved crawlers.
  • A subset of projects explicitly want to allow AI training and RAG (docs, OSS, product sites) and worry about it being on by default or misconfigured.

User Experience and False Positives

  • Multiple anecdotes of overly aggressive bot detection (Cloudflare and others) locking out real users, content creators, or shoppers; captchas and “unusual traffic” messages seen as farcical and costly.
  • People fear more CAPTCHAs and “checking your browser” pages, especially for VPN, Tor, Linux, Firefox, or strong anti‑fingerprinting users.
  • Some argue Cloudflare is already degrading the open web and entrenching a “whitelisted browsers on approved devices” model.

Robots.txt, Law, and Ethics

  • Debate over whether AI companies actually honor robots.txt; suspicions of hidden or masked crawling.
  • Some want robots.txt or ToS to become legally enforceable; others think ToS aren’t real contracts and expect courts to be skeptical.
  • Ethical divide:
    • One camp: public content being used for training is parasitic “IP theft” that undermines incentives to create and should be restricted or compensated.
    • Another: training on public data is akin to human learning; individual contributions are tiny; the real extractors are platforms and gatekeepers, not models.
  • Specific controversy around blocking Common Crawl as an “AI bot” even though it’s a general web archive used by many.

Cloudflare’s Power and Motives

  • Strong undercurrent of worry about centralization: “no one else can really do this except Cloudflare,” implying enormous gatekeeper power.
  • Some see the move as protective; others see it as Cloudflare inserting itself as a paid intermediary and future “marketplace” between scrapers and publishers (AI‑SEO, pay‑per‑scrape).
  • Critics accuse Cloudflare of:
    • Turning the web into a de facto MITM network under its control.
    • Collecting vast behavioral data and enabling pervasive fingerprinting.
    • Making life especially hard for “non‑mainstream” clients while claiming to protect content.

Content Incentives and the Future of the Web

  • Many fear that unrestricted AI scraping:
    • Discourages new content (why write if bots monetize it?).
    • Accelerates the decline of “informational SEO” as LLM answers replace clicks.
  • Others argue incentives were already eroded by ad blockers, walled gardens, and platform dynamics; AI is just another blow.
  • Some think blocking AI will mainly help incumbents with direct deals (big platforms, large publishers) while small sites stay invisible to AI search and RAG.
  • A minority wants to opt in and even optimize for “LLM SEO,” seeing LLMs as a new discovery channel.

Alternatives and Open Questions

  • Suggested countermeasures besides Cloudflare:
    • Authentication walls (the only actually robust way to keep content out of training, but at odds with public access).
    • Self‑hosted filters like Anubis (proof‑of‑work or JS challenges, UA/ASN rules).
    • Classic web‑server tools (mod_security, rate‑limiting, IP blocking).
  • Some assert that if content is public, determined LLM scrapers will ultimately get it; best you can do is raise their costs.
  • Unclear how this will interact long‑term with:
    • Search engines that combine indexing and AI (e.g., tying search ranking to training permission).
    • Distinctions between bulk training crawls vs per‑query RAG “browsing” done on behalf of users.
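
Tools like Anubis rest on a hash‑based proof‑of‑work idea: make each request cost a little CPU, which is negligible for one human page view but expensive at crawler scale. A generic sketch of that mechanism (not Anubis's actual protocol; function names and the difficulty parameter are illustrative):

```python
import hashlib
import secrets

def issue_challenge() -> str:
    """Server side: hand the client a random challenge string."""
    return secrets.token_hex(16)

def solve(challenge: str, difficulty_bits: int = 12) -> int:
    """Client side: find a nonce so that sha256(challenge:nonce)
    starts with `difficulty_bits` zero bits (~2^bits tries)."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty_bits: int = 12) -> bool:
    """Server side: a single hash suffices to check the work."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

challenge = issue_challenge()
nonce = solve(challenge)
assert verify(challenge, nonce)
```

The asymmetry is the point: solving takes thousands of hashes, verifying takes one, so the cost lands on the requester.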

Microsoft to Cut 9k Workers in Second Wave of Major Layoffs

Role of AI in the Layoff Narrative

  • Article links layoffs to “controlling costs while ramping up AI spending.”
  • Some commenters note AI is used as PR cover, framed either as “productivity gains” or as “strategic focus,” even though leaders themselves have said the cuts are structural, not performance-based.
  • Strong skepticism that any of the thousands of jobs are meaningfully replaced by AI in the near term; AI is seen more as an investor story than an operational cause.

Xbox, Gaming, and Strategic Shifts

  • Significant focus on cuts in Xbox Game Studios and a broader pivot away from first‑party development toward paying third parties for Game Pass content.
  • Several see Xbox’s move to “games everywhere” and possible Xbox‑branded PCs as a retreat after losing ground to Sony and Nintendo.
  • Debate over game industry health: some point to AAA mismanagement and live‑service failures; others to market saturation and an industry maturing after decades of rapid growth.

Profitability, Shareholders, and the New Normal

  • Many are disturbed by large layoffs at a company with ~$24B quarterly net income and double‑digit growth.
  • Layoffs are viewed as EPS/stock‑price management and “course corrections,” not survival moves.
  • Some argue this reflects a shift from older philosophies that prioritized employee stability to a norm where workers are treated as disposable costs.

Offshoring, H‑1B, and Wage Arbitrage

  • Strong theme: US staff, especially higher earners, being replaced by cheaper offshore or visa labor, with India cited repeatedly (large investments, hiring, and H‑1B numbers).
  • Disagreement over how extreme the pay gap is, but consensus that labor‑cost arbitrage is a core strategy.

Unions, Worker Power, and Social Contract

  • Multiple long subthreads debate unions as a response: some see them as essential to restoring middle‑class security; others cite corruption, inefficiency, or global labor surplus as limiting their effectiveness.
  • Broader criticism that US policy, weak safety nets, and stock‑market incentives make mass layoffs easier and more frequent.

Impact on Workers and Tech Labor Market

  • Layoffs are described as broad, hitting different orgs and performance levels without a clear pattern.
  • Concern that US tech workers are becoming the new factory workers: offshoring + AI gradually hollowing out high‑pay roles, with likely knock‑on effects on housing and “tech cities.”
  • Many conclude: treat employment as transactional, don’t be loyal to employers, and expect further rounds of cuts.

I'm dialing back my LLM usage

Overall sentiment

  • Many experienced developers report initial enthusiasm followed by dialing back usage, especially for large, integrated features.
  • Consensus: LLMs are powerful assistants but poor autonomous programmers; value depends heavily on scope, context size, and user discipline.
  • Thread repeatedly contrasts “AI as tool” vs “AI as replacement,” with most arguing strongly for the former.

Where LLMs work well

  • Small, localized tasks: single functions, tests, log statements, small refactors, simple scripts, boilerplate/scaffolding, CRUD wiring.
  • “Super-StackOverflow”: quick conceptual explanations, surfacing APIs, brainstorming designs, pointing to docs or man pages.
  • Frontend/UI glue (HTML/CSS/JS/Tailwind) and repetitive patterns (e.g., model wiring, controllers, tests).
  • Code review and lint/static-analysis help: often catching real issues, though mixed with noise.
  • Architecture comprehension: summarizing large code areas, generating diagrams and docs, helping navigate under-documented systems.

Where LLMs fail or backfire

  • Large, tangled or legacy codebases; background “agents” often get lost, loop, or create messy diffs.
  • “Vibe coding” entire features/apps: leads to balls of mud, loss of mental model, and feeling like “the new guy on your own project.”
  • Subtle bugs and hallucinated APIs; plausible-but-wrong answers can waste more time than obviously wrong ones.
  • Poor at obscure or niche solutions (e.g., specific DNS records, shader code, SIMD) unless heavily guided.
  • As autonomous agents across a repo: many report loops and breakage within minutes, not the promised fully-automated workflows.

Mental models, ownership, and code quality

  • Strong emphasis on programming as “theory building”: if you don’t build the theory, you can’t truly maintain or debug.
  • Reviewing LLM code is likened to endlessly supervising a zero-trust junior; reading bad code is slower than writing good code.
  • Several insist: if you commit it, you own it—“but AI wrote it” is not a valid excuse.

Effective usage patterns

  • Treat LLM as a junior pair-programmer: you decide architecture, write design docs, keep high-level control, review every change.
  • Keep tasks small, write tests first, use modes that separate “ask/inspect” from “edit,” and stop using the model after 1–2 bad iterations.
  • Some adopt rules like “I type every character myself; AI only suggests” to retain understanding and skill.

Skills, cognition, and long‑term concerns

  • Fear of “steroid” effects: short-term productivity boosts, long-term erosion of problem-solving and coding skill.
  • Worries about choosing easy AI-amenable work (more code) over hard engineering work (thinking, design, coordination).
  • Debate over whether next-token predictors can ever handle true program “theory”; some see hard limits, others expect continued incremental gains.

Hype, economics, and skepticism

  • Many note a repetitive HN pattern: “LLMs are great but messy” vs “I’ve 2–10×’d output with agents,” often without code shown.
  • Some attribute aggressive pro-agent narratives to VC incentives, influencer culture, and native advertising.
  • Broad agreement: LLMs are already very useful tools; using them as autonomous coders for complex, long-lived systems remains risky and immature.

Don’t use “click here” as link text (2001)

Role of W3C and nature of the guideline

  • Some see this as mere style advice W3C shouldn’t spend effort on, preferring “real” standards work.
  • Others point out the page explicitly says it is non‑normative “bits of wisdom,” not a spec.
  • A few note similar gov/UK guidance exists, but with slightly different wording patterns.

Clarity, style, and calls to action

  • Many commenters actually prefer “click here,” especially for downloads or key actions, finding it clearer and more direct than “Get Amaya”‑style links.
  • Several argue that “Get Amaya” or bare “Amaya” feels like a neutral Wikipedia/news-style link, not a strong call to action.
  • Some propose compromises like “Download Amaya,” “Learn more about Amaya,” or full-phrase links (“Download Amaya now”), favoring more descriptive CTAs over “here.”

Accessibility and screen readers

  • Strong counterargument: screen readers often present a list of links out of context; pages full of “click here” become unusable.
  • Similar concern about multiple identical generic labels like “Learn more” or “Buy” on product lists.
  • Others argue screen readers (or LLM-based assistive tools) should infer context from surrounding text instead of forcing authors to change writing.
  • There is debate over whether to rely on heuristics vs. explicit ARIA/HTML attributes; some highlight inconsistent support across browsers/screen readers.
  • Legal requirements (WCAG/ADA/EU directives) are mentioned as pressure to design for existing assistive tech, even if that tech is seen as brittle.

Buttons vs links and link semantics

  • One camp: links are for navigation/information retrieval and should describe their target; actions (download, submit) should be buttons.
  • Others reject strict “no verbs” rules and consider verb phrases (“Download X”) perfectly valid link text in practice.
  • Inline prose examples (e.g., PiPedal text) show how removing “click here” can make sentences awkward; various rewrites are proposed.

Historical context and evolution

  • Older web: “click here” was everywhere and even arguably helpful when users were new to hypertext.
  • Modern trend: underlines/borders removed, making it harder to see what’s clickable, which some say makes explicit cues like “click here” more attractive again.

SEO, tooling, and implementation details

  • Non-generic link text also helps crawlers and Lighthouse/a11y audits, but some developers routinely ignore “generic link text” warnings.
  • Bookmarking behavior (link text vs page title) is briefly discussed as a minor argument against “click here.”
  • Suggestions include visually hidden text/ARIA to keep short CTAs visually while exposing rich labels to assistive tech.
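
The visually‑hidden‑text and ARIA suggestions can be sketched as follows (the href and class name are hypothetical; support for `aria-label` overriding visible link text varies across browser/screen‑reader combinations, as noted above):

```html
<!-- aria-label replaces the visible text for assistive tech -->
<a href="/amaya/download" aria-label="Download Amaya">Download</a>

<!-- Visually hidden span: clipped from view but still read aloud,
     so a links list announces "Download Amaya" rather than "Download" -->
<a href="/amaya/download">
  Download<span class="visually-hidden"> Amaya</span>
</a>
<style>
  .visually-hidden {
    position: absolute;
    width: 1px; height: 1px;
    overflow: hidden;
    clip-path: inset(50%);
    white-space: nowrap;
  }
</style>
```

Either approach keeps a short visual CTA while giving screen‑reader users a descriptive label out of context.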

Skepticism and perceived triviality

  • Some think this is overblown “dogma” or marketing-driven nitpicking with little real-world impact.
  • Others argue that, despite seeming trivial, link wording significantly affects accessibility and should be treated as part of responsible web design.

Math.Pow(-1, 2) == -1 in Windows 11 Insider build

Bug nature and scope

  • Report: On a Windows 11 Insider build, Math.Pow(-1, 2) (and C++ pow(-1, 2)) returns -1 instead of 1.
  • Affected stack: Both .NET and C++ appear to hit the same underlying issue via the Windows Universal CRT (UCRT) pow implementation.
  • Clarification: A later comment states the UCRT bug was already reported internally and fixed (OS bug #58189958), but the fix may take time to reach public Insider builds.
  • Several commenters are surprised a bug in such a fundamental function escaped to users and wasn’t caught by basic tests.
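
The regressed case is trivial to cover. A sketch of the kind of sanity test commenters felt was missing, shown in Python for illustration (the actual bug is in the Windows UCRT's C `pow`, which Python does not use here):

```python
import math

def check_pow_basics() -> None:
    # The case that regressed in the Insider build: (-1)^2 must be 1.
    assert math.pow(-1, 2) == 1.0
    # Odd exponents keep the sign; even exponents discard it.
    assert math.pow(-1, 3) == -1.0
    assert math.pow(-2, 2) == 4.0
    for base in (-3.0, -1.5, -1.0, 0.0, 1.0, 2.5):
        assert math.pow(base, 2) >= 0.0  # even powers are non-negative

check_pow_basics()
print("all pow sanity checks passed")
```

A handful of fixed-value cases like these in a CRT regression suite would have caught the issue before it shipped.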

Testing, TDD, and AI-assisted development

  • Many express disbelief that CI or regression tests didn’t cover simple cases like squaring negative numbers.
  • Anecdotes are shared about LLMs “fixing” failing tests by changing expected values or mocking the function under test, likened to human anti-patterns.
  • Long subthread on TDD:
    • Critiques: TDD often degenerates into “make the test pass” without understanding, overemphasis on per-method tests, and huge test overhead.
    • Defenses: Proper TDD writes tests that mirror the spec at higher levels (acceptance/integration) and is valuable when done competently.
    • Disagreement over whether this “proper TDD” actually happens in the general industry, with accusations of “no true Scotsman” when defenders narrow the definition.

Ownership and bug-report handling

  • Strong disagreement with the suggestion that the bug “should be reported to MSVC instead”:
    • View 1: From the user’s perspective, it’s a .NET bug; .NET maintainers should own it, escalate upstream, and keep tracking it.
    • View 2: If the root cause is clearly in UCRT, it’s reasonable to direct the bug there, but the reply’s passive phrasing leaves responsibility ambiguous.
  • Several argue that telling users to refile upstream is poor practice for a commercially backed product; the application team should file and follow up.
  • Once clarified that the commenter was a community volunteer, some criticism softens, but the broader point about clear ownership remains.

Broader commentary on software quality and process

  • Jokes and concerns that software quality is “exponentially worse,” with references to AI-generated code making up a significant fraction of Microsoft’s codebase.
  • Comparisons to past numerical bugs (e.g., Pentium FDIV) and assertions that fundamental math libraries should have extremely strong regression testing.
  • Discussion of big-company bureaucracy: bug “buck-passing,” fragmented responsibility, and the idea that large firms behave like states with entrenched processes.

Tools, ecosystem, and communication channels

  • Brief discussion of how UCRT is shared across Windows and how OS, compiler, and CRT interact.
  • Comments note that most critical OS code likely avoids floating-point pow, mitigating immediate system impact.
  • Side thread criticizes reliance on Discord for issue handling and support:
    • Complaints: poor web searchability, lock-in, “too social” culture, and NSFW side channels mixing with technical topics.
    • Others note that the project in question also uses GitHub and forums, and Discord is mainly for fast, informal coordination.

They tried Made in the USA – it was too expensive for their customers

Price, Quality, and What Consumers Actually Buy

  • Many comments say consumers overwhelmingly prioritize low prices over origin, even when they claim to care about “Made in USA.”
  • Some argue Chinese goods are often as good or better than US-made at a fraction of the cost; others report the opposite, but agree price dominates.
  • “Premium” US-made lines often fail because the performance gap vs. imports is small while the price gap is huge.
  • Fast fashion is used as a case study: clothing was mostly US-made in the 1980s without an impoverished lifestyle; today people have more, cheaper, lower‑quality clothes and throw them away faster.

Feasibility of Domestic Production

  • A recurring theme: the US can make almost anything, but not everything, and not at current global price points.
  • Core constraints cited: higher labor and benefit costs, OSHA and environmental compliance, litigation risk, permitting delays, and loss of supply-chain depth and “industrial muscle memory.”
  • Textiles and sewn goods are highlighted as especially hard to automate; sewing remains labor‑intensive, so production follows cheap labor.
  • Some suggest partial reshoring and mixed product lines (standard made abroad, premium domestic) as a realistic compromise.

Labor, Jobs, and Working Conditions

  • Several threads debate whether bringing back low‑skill factory work is even desirable: it’s repetitive, physically damaging, and historically polluting.
  • Others counter that not everyone can do high‑skill work; societies still need large numbers of decent, stable blue‑collar jobs.
  • There’s disagreement over whether US workers are “lazy” or simply rationally avoiding dangerous, low‑status jobs that don’t support housing, healthcare, and family life.

China, Globalization, and Ethics

  • China’s advantage is framed less as “cheap labor only” and more as: integrated supply chains, rapid scaling, state-backed capital, and manufacturing know‑how.
  • Some emphasize ethical and security concerns: forced labor, environmental shortcuts, support for adversarial regimes, and vulnerability of over‑concentrated supply chains.
  • Others respond that US history and current practices are far from clean, and consumers gladly arbitrage these abuses when it lowers prices.

Tariffs, Retail, and Who Bears the Cost

  • The new tariffs are widely described as a blunt, regressive tax. Retail margins (e.g., Walmart) are too thin to absorb big cost increases, so prices will rise.
  • Many expect small brands, especially in discretionary niches (dog beds, specialty beverages), to be squeezed between higher input costs and retailers unwilling to take price hikes.
  • Commenters argue that serious reshoring would require long‑term industrial policy and targeted subsidies, not just tariffs and slogans about “Made in USA.”

Product Examples, IP, and Platforms

  • The SmarterEveryday grill brush is cited as a detailed look at how hard and expensive domestic manufacturing has become; reactions range from admiration to “it’s just not worth $80.”
  • Safety concerns around grill‑brush bristles show how minor risk differences can justify premium designs for some buyers but not for others.
  • Multiple commenters say they abandoned plans to manufacture domestically because Amazon and similar platforms allow rapid, ultra‑cheap knockoffs, and small firms cannot afford to enforce patents.
  • Patents themselves are hotly debated: some see them as necessary innovation protection; others see them as mostly anti‑competitive and poorly administered.

Class, Culture, and Skills

  • Several comments link offshoring to hollowed‑out communities and personal stories of “class mobility” that left people socially stranded between blue‑ and white‑collar worlds.
  • There is concern about the loss of shop classes and hands‑on skills, and debate over whether games and abstractions (e.g., “Factorio”) meaningfully substitute for real manufacturing exposure.
  • Underneath the economics, many see a cultural shift: from pride in making durable things locally toward a model where identity and value are increasingly produced by software, media, and finance rather than physical goods.

How large are large language models?

Model Size and Hardware Requirements

  • Several rules of thumb were discussed:
    • 1B parameters ≈ 2 GB in FP16 (2 bytes/weight) or ≈ 1 GB at 8-bit quantization.
    • A rough “VRAM budget” is often ~4× parameter-count-in-GB for overhead, so 2B ≈ 8 GB VRAM, 7B ≈ 28 GB, 70B ≈ 280 GB, unless heavily quantized.
    • Inference is typically bandwidth-bound; high-bandwidth VRAM (GPUs, Apple M-series, unified-memory APUs) matters more than large system RAM.
  • Quantization (8-bit, 5-bit, 4-bit) can cut memory 2–4× with modest or task-dependent quality loss; models trained natively at low bit-width may outperform post-quantized ones.
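
The rules of thumb above reduce to simple arithmetic: parameters × bytes per weight, plus a fudge factor for KV cache and activations. A sketch (the 1.2× overhead factor is an assumption; the thread's "4×" budget is a more conservative version of the same idea):

```python
def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Rough weight-memory estimate: params * bytes/weight * overhead.
    overhead=1.2 is an illustrative fudge factor for KV cache etc."""
    bytes_per_weight = bits_per_weight / 8
    return params_billion * bytes_per_weight * overhead

# FP16 is ~2 GB per billion parameters before overhead;
# 4-bit quantization cuts that by 4x.
print(f"7B  @ FP16 : {model_memory_gb(7, 16):.1f} GB")
print(f"7B  @ 4-bit: {model_memory_gb(7, 4):.1f} GB")
print(f"70B @ FP16 : {model_memory_gb(70, 16):.1f} GB")
```

This is weight memory only; long contexts grow the KV cache well beyond a fixed multiplier.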

Data Scale and “Size of the Internet”

  • One thread compares model sizes (hundreds of billions of params → ~TB of weights) to human text:
    • Back-of-envelope estimates for “all digitized books” cluster around a few–tens of TB, with one concrete calc (using Anna’s Archive stats and compression) giving ~30 TB raw, ~5.5 TB compressed.
    • There is strong disagreement with a claim that “the public web is ~50 TB”; others point to zettabyte-scale web estimates and Common Crawl adding ~250 TB/month. It’s unclear what exact definition (text-only, deduped, etc.) the smaller figures use.
  • Some argue LLMs now operate on ~1–10% of “all available English text” and that training returns may be saturating, pushing advances toward inference-time “reasoning” and tools/agents.
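The “all digitized books” estimate above can be reproduced as a toy calculation. The book count, per-book text size, and compression ratio below are illustrative assumptions chosen to match the quoted ~30 TB / ~5.5 TB figures, not the actual Anna’s Archive statistics.

```python
# Toy back-of-envelope for "all digitized books", in the spirit of the
# thread's calculation. All three inputs are illustrative assumptions.

books = 30_000_000        # assumed number of distinct digitized books
avg_text_mb = 1.0         # assumed average plain-text size per book (MB)
compression_ratio = 5.5   # assumed text compression factor (e.g. xz/zstd)

raw_tb = books * avg_text_mb / 1_000_000   # MB -> TB
compressed_tb = raw_tb / compression_ratio

print(f"raw: ~{raw_tb:.0f} TB, compressed: ~{compressed_tb:.1f} TB")
```

The point of the exercise is the order of magnitude: under any plausible inputs, the world's digitized books land in the single-digit-to-tens-of-TB range, i.e. comparable to the weight files of the largest models.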

LLMs as Compression (and Its Limits)

  • Many commenters like the metaphor of LLMs as lossy compression of human knowledge (“blurry JPEG of the web”); they highlight:
    • Astonishment at what an 8 GB local model can do (history, games, animal facts) and comparisons to compressed Wikipedia (24 GB).
    • Information-theoretic work showing language modeling closely tied to compression and evaluations that treat modeling as compression tasks.
  • Others caution that calling LLMs “compression” is misleading:
    • Traditional compression is predictably lossy or lossless and verifiable; LLM output is unpredictably wrong and requires human checking.
    • For most classic compression use-cases (archives, legal docs), LLM-style “compression” is unacceptable.
  • A more technical thread notes that:
    • Given shared weights, an LLM + arithmetic coding implements lossless compression approaching the model’s log-likelihood.
    • Training itself can be viewed as a form of lossless compression where description length is the training signal, not the final weights.

Model Scale, Capability, and Synthetic Data

  • Commenters note that open models only approached GPT-4-level reasoning when they crossed into very large dense (≈400B+) or high-activation MoE ranges, after years of 30–70B attempts failing to match GPT-3.
  • Some speculate that even larger frontier models were tried and quietly abandoned due to disappointing returns, suggesting optimal “frontier” sizes may now be smaller than the largest public models.
  • Debate on synthetic data:
    • One side warns about “model collapse” when models are trained on their own outputs.
    • Others counter that, in practice, carefully designed synthetic data (especially teacher–student distillation or code with executable tests) reliably improves performance; labs wouldn’t use it otherwise.

Critique of the Article and Model Coverage

  • Multiple factual and contextual issues are raised:
    • Confusion between different Meta models/variants and misstatements about training tokens.
    • Overstated claims about MoE enabling training without large GPU clusters.
    • Lack of discussion of quantized sizes despite a “how big are they?” framing.
    • Omission of notable families (Gemma, Gemini, T5, Mistral Large) while including smaller or less central models.
  • The author acknowledges some errors and clarifies specific points, but several commenters still characterize it as incomplete or “sloppy” and overly focused on token counts rather than practical size/usage.

Reasoning, Intelligence, and Future Directions

  • Long subthreads debate:
    • Whether LLM “reasoning” is fundamentally weaker than human reasoning despite vastly larger “working memory.”
    • Claims that humans learn from far less data vs. counters that human sensory input from birth (especially vision) is enormous.
    • Whether we are “out of training data” (for text) vs. large untapped sources (video, robotics, specialized interaction logs).
  • Some see intelligence as fundamentally related to compression/prediction; others emphasize novelty and idea generation beyond seen data.
  • There is speculation that:
    • Architecture and training-method improvements could reduce required model sizes for a given capability.
    • Consumer-grade hardware (high-end PCs or even phones) may eventually suffice for extremely capable models, with the internet serving as factual backing via tools and retrieval rather than being fully “baked in” to weights.

Spain and Brazil push global action to tax the super-rich and curb inequality

Perceptions of Spain and Brazil as Leaders

  • Many argue Brazil is “violently unequal” and deeply corrupt; Spain is also criticized for chronic corruption, so some see the initiative as virtue signaling rather than serious reform.
  • Others counter that shifting the Overton window matters: even symbolic pushes toward a global wealth registry and curbing tax havens are seen as useful steps, if hard to implement.
  • There is skepticism that BRICS or the EU will coordinate effectively, but some note BRICS is now a real organization and could in theory align on progressive taxation.

How Spain’s Tax System Works (and Feels)

  • Several comments clarify Spain’s progressive income tax: high marginal rates (45% above ~€60k, 47% above €300k, up to ~50% in some regions).
  • Supporters say this funds good healthcare, education, and social mobility; some high earners explicitly welcome paying more, framing it as solidarity.
  • Critics say top rates at relatively modest incomes are a “monstrous disincentive” and will drive talent and entrepreneurs elsewhere, portraying Spain as a high-tax, low-growth “socialist” state.

Wealth vs Income vs Consumption Taxes

  • Strong thread arguing to tax assets—especially land and property—rather than labor or global income; land value taxes and revenue-based corporate taxes are repeatedly proposed.
  • Others defend wealth or registry-based approaches as necessary because rich individuals hide assets through cross-border structures and benefit from loopholes and lighter capital taxation.
  • Brazil is cited as an example where heavy, regressive consumption taxes hurt the poor far more than the rich, suggesting “tax the rich more” is less urgent than “stop overtaxing consumption.”

Inequality, Investment, and “Trickle Down”

  • One camp: super-rich investment drives growth; focus should be on deregulation, cutting bureaucracy, and simple low tax rates (e.g., flat 10%) to stimulate business and personal responsibility.
  • Opposing camp: trickle-down has failed; capital gains are favored over labor; high inequality lets billionaires capture states and extract rents (housing, layoffs, buy-to-let, financial speculation).
  • Some emphasize that rich already pay a large share of total taxes; others reply that relative to their wealth they still contribute too little and continue to gain outsized economic and political power.

Role of the State and Corruption (Especially Brazil)

  • Several Brazilians describe a high-tax, high-corruption equilibrium: citizens pay heavily on income and consumption, then also pay privately for health, education, and security because public services fail.
  • For them, “more tax on the rich” sounds like more money into a corrupt system whose brackets already treat modest earners as “rich.” They advocate cutting waste, bureaucracy, and especially consumption taxes instead.
  • Others insist the state is the only tool to counter private power; shrinking it just shifts control from democratic institutions to unaccountable elites.

Housing, Land, and Structural Issues

  • Housing inflation is widely seen as a key driver of perceived inequality: ownership is far harder relative to median wages than decades ago.
  • Some blame zoning, planning, and NIMBYism for blocking supply; others point to broader cost pressures (Baumol effect) and investor-driven property hoarding.
  • Land value tax recurs as a proposed way to discourage empty properties, speculative holding, and excessive rent extraction while funding local services.

Automation, AI, and the Future of Inequality

  • A subthread argues that automation and AI are structurally amplifying inequality: capital owners can deploy robots and servers instead of hiring workers, decoupling investment from jobs.
  • Another view: automation has historically raised living standards; AI’s impact is not yet material, and policy (tax and regulation) will determine whether gains are shared or concentrated.

Feasibility and Likely Impact of a Global Super-Rich Tax

  • Supporters believe taxing extreme wealth and closing havens is vital to prevent democratic erosion and “French Revolution”-style backlash; they invoke high propensity to spend among the non-rich and New Deal-era policies.
  • Skeptics stress practical limits: wealth is largely in businesses and illiquid assets; one-off confiscations don’t fix structural issues and may ultimately hit workers and investment.
  • Many doubt that Spain/Brazil-led global coordination can overcome flight opportunities, political resistance, and deeply embedded national tax privileges for the rich.

More assorted notes on Liquid Glass

Perceived Strategic Motives (AR & “service layer” over apps)

  • Several commenters see Liquid Glass as preparation for AR: a unified, bland, layered UI that can be reused on glasses/visionOS and across devices.
  • Idea: force apps into a visually neutral, OS‑branded shell so Apple can render them consistently in AR and present itself as the primary “service provider” while third parties become interchangeable fulfillment backends (ride‑hailing, hotels, food, etc.).
  • Some welcome this fungibility for transactional services (travel, taxis, food) because it reduces friction; others dislike trading many smaller middlemen for one even bigger “benevolent” middleman (Apple) on top.

Brand Unification vs App Personality

  • Strong tension between wanting apps to follow platform conventions and wanting them to retain distinct identities.
  • One camp likes Apple pushing consistency and resents apps that ignore native UI; another argues Apple is suppressing third‑party branding to elevate its own.
  • Icon tinting and Liquid Glass styling are seen as further eroding app individuality.

Usability, Legibility & Accessibility

  • Many reports of lower contrast, blur, washed‑out icons, ambiguous button states, and extra whitespace reducing information density.
  • Concerns that transparency and layered glass make text and controls harder to see, especially for older users or those with impairments.
  • Accessibility toggles like “Reduce Transparency” and “Increase Contrast” help, but are hidden; some dislike being pushed into “second‑class,” uglier modes just to regain clarity.
  • Rounded corners and smaller hit targets on already small screens are called out as regressions.

Fashion, Sales & Organizational Incentives

  • Multiple comments frame the redesign as UI “fashion” to signal novelty and drive sales, not functional improvement.
  • Others blame internal incentives: large design orgs must ship change to justify themselves; management lacks incentive to leave a stable UI alone.
  • Pushback that fashion isn’t trivial: people expect visual refreshes, but critics argue fashion alone can’t justify breaking learned interfaces.

Impact on Developers & Tooling

  • Liquid Glass alters dimensions and behaviors, worrying developers relying on UIKit/AutoLayout; some resort to compatibility flags to block the new look.
  • SwiftUI is seen as better aligned with the new system, raising fears of pressure to migrate.
  • Some speculate Apple also wants to make native apps visually distinct from web/Electron/portable‑toolkit apps.

User Reception & “Nerd vs Normal” Split

  • Early beta users are split: some “absolutely love it” after a short adjustment; others liked it at first then soured on daily use.
  • A recurring view: mainstream users will complain briefly, adapt, and mostly not care—while “nerds” act as canaries for deeper usability and accessibility issues.