Hacker News, Distilled

AI-powered summaries for selected HN discussions.

Texas app store age verification law blocked by federal judge

Constitutional Rights & Age Limits

  • Many argue the law violates the First Amendment because it conditions access to all apps (i.e., all kinds of speech) on ID checks, analogous to forcing bookstores to card every customer.
  • Others push back that many rights are already age-limited in practice (guns, voting, alcohol), and seek a consistent legal principle; responses note minors do have meaningful 1A protections and that broad age-gating fails strict scrutiny.
  • Several comments reference established doctrine: limits on fundamental rights must serve a “compelling interest,” be narrowly tailored, and use the least restrictive means. Broad app-store gates are seen as failing all three.

Privacy, Surveillance & the Fourth Amendment

  • Strong concern that mandatory ID verification expands surveillance: more sensitive data, more centralization, more potential abuse by governments and corporations.
  • Some argue the Fourth Amendment already protects privacy “on paper,” but courts and doctrines like the third-party doctrine have eroded that protection in practice, especially with data held by tech companies.
  • Examples are cited where police access to search histories or cloud data occurs without warrants, reinforcing mistrust.

Analogy Debates: Books, Porn, Movies, Internet

  • Disagreement over the judge’s bookstore analogy: critics say app stores are gateways to dynamic social environments, unlike static books.
  • Others respond that children have 1A rights and internet-delivered speech is still speech; the analogy is meant to illustrate overbreadth, not literal equivalence.
  • Narrow porn laws are contrasted with broad app-store mandates: porn is a specific content category; “apps” or “the internet” are too general to target constitutionally.

Technical & Practical Issues

  • Developers describe age-verification APIs as brittle, privacy-hostile, and effectively unworkable at scale; failures would either block legitimate users or force apps to “fail open.”
  • Discussion of token-based or liquor-store-style age checks shows deep skepticism: implementation details could still deanonymize users and create chilling effects.

Broader Legal & Global Context

  • Some commenters in the EU/UK express envy at U.S. constitutional protections as their own governments expand online speech and age-control regimes.
  • Others note that legal complexity and judicial interpretation effectively determine how rights are experienced in practice, regardless of constitutional text.

X-ray: a Python library for finding bad redactions in PDF documents

Context: Epstein PDFs and Redaction Failures

  • Many recently released Epstein court PDFs used naive “black box over text” redactions, leaving underlying text intact.
  • In some files, users can simply select and copy “redacted” lines in a browser PDF viewer and see the hidden text.
  • X-ray is highlighted as a tool that detects these overlay-style redactions at scale; it has already been run on examples from justice.gov.
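
As a rough illustration of the overlay failure mode (not the x-ray library’s actual implementation), a heuristic like the following flags words whose bounding boxes sit underneath filled rectangles; it assumes PyMuPDF and an example file name.

```python
import fitz  # PyMuPDF (pip install pymupdf)

def find_suspect_redactions(path):
    """Heuristic: report text still in the text layer but covered by a filled box."""
    doc = fitz.open(path)
    hits = []
    for page in doc:
        # Any vector drawing with a solid fill is treated as a candidate cover box.
        boxes = [d["rect"] for d in page.get_drawings() if d.get("fill")]
        # Words still present in the text layer, with their bounding boxes.
        for x0, y0, x1, y1, word, *_ in page.get_text("words"):
            if any(box.intersects(fitz.Rect(x0, y0, x1, y1)) for box in boxes):
                hits.append((page.number + 1, word))
    return hits

for page_no, word in find_suspect_redactions("filing.pdf"):  # illustrative path
    print(f"page {page_no}: covered text still extractable: {word!r}")
```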

Proper Redaction Tools and Workflows

  • Commenters note that Adobe Acrobat Pro, when used correctly with “mark for redaction” then “apply redactions,” permanently removes content and has been standard in legal practice for years.
  • “Draw a black box” is described as a legacy paper-era habit; in PDF this only hides, not removes, the text.
  • Some PDFs retain older revisions via incremental updates (a /Prev entry in the trailer pointing at the earlier cross-reference table), meaning earlier, less-redacted states can still be recovered; a quick heuristic check is sketched below.
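
The incremental-update issue can be spotted with a crude check: each appended revision ends with its own %%EOF marker and typically adds a /Prev key in its trailer. A minimal, purely heuristic sketch (file name illustrative):

```python
def incremental_update_hints(path):
    """Count markers that appear once per appended PDF revision (heuristic only:
    /Prev also occurs in outline dictionaries, so treat the counts as hints)."""
    data = open(path, "rb").read()
    return {
        "eof_markers": data.count(b"%%EOF"),
        "prev_keys": data.count(b"/Prev"),
    }

hints = incremental_update_hints("filing.pdf")  # illustrative path
if hints["eof_markers"] > 1:
    print("file has appended revisions; earlier states may be recoverable", hints)
```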

Rasterization vs. Searchability

  • A common “safer” user workflow is to overlay black boxes then rasterize pages to images, but this produces large, non-CCITT-compressed files and loses text search.
  • Requirements for searchable public records often rule out full rasterization.
  • Some governments go the opposite direction and intentionally scramble text layers so documents are readable but not searchable or copyable, which is viewed as hostile.

AI, Side-Channels, and Font Metrics

  • One proposal: use AI to enforce an objective redaction standard and compare human vs. AI redaction rates.
  • Others argue AI isn’t needed to detect naive redactions, but could help infer what should be redacted.
  • There is extensive discussion of “glyph spacing” / font-metric attacks: inferring redacted words from bounding box width, kerning, and context, especially when combined with AI.
  • Suggested mitigations include widening redaction boxes (possibly to a constant width) and using reflowable formats; skepticism remains about fully eliminating these leaks, especially for short, predictable text.
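
A toy sketch of the font-metric attack discussed above: given the width of a redaction box and the document’s font, keep only candidate strings whose rendered width is close to that box. The font file, size, tolerance, and candidate list are all assumptions; a real attack would also have to convert between PDF text-space units and pixels and model kerning and surrounding spaces.

```python
from PIL import ImageFont  # Pillow

def plausible_candidates(box_width, candidates, font_path="DejaVuSans.ttf",
                         font_size=11, tolerance=2.0):
    # Keep names whose rendered width roughly matches the redaction box width.
    font = ImageFont.truetype(font_path, font_size)
    return [name for name in candidates
            if abs(font.getlength(name) - box_width) <= tolerance]

names = ["John Smith", "Jane Doe", "Alexandra Hernandez"]  # hypothetical candidates
print(plausible_candidates(61.0, names))
```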

Intentional vs. Incompetent Redactions

  • Some insist such failures are pure amateurism and lack of training or process.
  • Others, citing strict federal redaction training, argue this was likely “malicious compliance” or deliberate sabotage of overbroad redactions.
  • There is no consensus; motives are described as unclear.

Legal and Ethical Redaction Considerations

  • Defensible reasons for redaction: protect victims, witnesses, informants, ongoing investigations, national security, and avoid releasing child sexual abuse material.
  • Law is said to prohibit redaction for embarrassment, reputational harm, or political sensitivity, yet many redactions in these documents appear unconnected to those legitimate grounds.
  • Victims have publicly complained that required redactions (victim identities) were missed while non-permitted redactions (protecting others) were applied.

Disclosure Ethics and Impact

  • One camp argues powerful de-redaction techniques should be withheld to avoid mass unmasking of past redactions.
  • Another argues that undisclosed vulnerabilities only create more victims, and that publicizing flaws is necessary for improved tools and workflows.
  • A key complication: unlike cryptography, existing redacted PDFs can’t be retroactively “patched,” so disclosure has permanent consequences.

Broader PDF and Government Tech Critiques

  • Several comments deride PDF as a fragile, overcomplicated, and insecure format for long-term public records, especially when redaction is anticipated.
  • Observers are unsurprised by widespread PDF mishandling, citing poor technical literacy in non-technical institutions and management that underestimates the complexity of secure redaction.

I didn't realize my LG TV was spying on me until I turned off Live Plus

TV spying, ACR, and Live Plus

  • Live Plus on LG TVs uses automatic content recognition (ACR) to scan everything on screen and feed “personalized” ads and recommendations.
  • Commenters note this isn’t unique to LG; most major TV brands do similar tracking.
  • There’s skepticism that toggling Live Plus “off” truly stops data collection; many assume it may just hide visible personalization while still sending telemetry.

Opt-out toggles vs. keeping TVs offline

  • Strong consensus among privacy‑minded commenters: never connect the TV to the internet, treat it purely as a dumb display.
  • Concerns that updates can re‑enable tracking settings or add new “features” like Copilot or more ads.
  • Some mention dark patterns at setup (multiple “agreements”, only some optional) and settings that used to reset to “on” after firmware updates.
  • A few push back, saying at least there is a setting and you can decline optional agreements.

Network controls and technical countermeasures

  • Many use Pi-hole, firewalls, or router rules to block TV DNS/traffic; some report the TV as a top generator of blocked requests.
  • Advice includes: force all devices to use Pi-hole via DHCP, block outbound UDP 53, or fully airgap the TV.
  • Others worry Wi‑Fi hardware, Wi‑Fi Direct, or future cellular/mesh schemes could bypass home-network controls, though no concrete evidence of TVs doing this is presented and the risk is described as “unclear.”

External devices: which “smart” box to trust?

  • Popular pattern: TV offline + dedicated streamer (Apple TV, Android TV box, HTPC, mini‑PC, Raspberry Pi, Vero, etc.).
  • Many recommend Apple TV as the “least bad” mainstream option; others prefer rooted/custom Android TV or a Linux HTPC for maximum control and ad‑blocking.
  • Roku, Fire TV, and Chromecast are criticized for ad‑driven business models and aggressive ACR.
  • There’s disagreement over Apple’s privacy posture: some see hardware‑margin incentives as protective; others describe macOS/iOS as opaque, data‑hungry black boxes.

Broader critique: surveillance as default

  • Commenters see TVs as one instance of a wider pattern: every “smart” device (TVs, fridges, thermostats, cars) ships with tracking wrapped as “personalization.”
  • Many lament that non‑smart TVs are scarce, consumers often don’t care, mainstream media under‑covers the issue, and ad‑supported, data‑extractive models have become the default architecture of consumer tech.

Some Epstein file redactions are being undone

Technical Cause of the “Unredactions”

  • Many “redactions” are just black rectangles or black highlighting drawn over live text layers in PDFs.
  • Copy‑and‑paste, text selection around the box, or basic tools can recover the underlying text.
  • For scanned documents with OCR, text is often stored as invisible metadata aligned to the image; drawing a box over the image doesn’t touch that layer.
  • PDF is described as a complex, layered, vector format where content can be stacked; simply obscuring a layer doesn’t remove data.
  • Some commenters note that PDF has proper redaction mechanisms and professional tools (Acrobat, qpdf, Stirling PDF) that truly delete content, but they were evidently not used here.
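
A minimal sketch of why the “black box over live text” approach fails: standard text extraction ignores drawing order, so covered text comes out along with everything else. Assumes the pypdf package; the file name is illustrative.

```python
from pypdf import PdfReader

reader = PdfReader("released_document.pdf")
for i, page in enumerate(reader.pages, start=1):
    text = page.extract_text() or ""
    # Any text merely hidden behind a rectangle is included here.
    print(f"--- page {i} ---")
    print(text)
```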

How Redaction Should Be Done (and Its Pitfalls)

  • Suggested safer workflows:
    • Use purpose‑built redaction tools that remove text, not just recolor or overlay.
    • Or “go analog”: print, physically black out or cut, then scan to a new image‑only PDF.
  • Even print‑and‑scan can leak through:
    • Length of black bars can reveal name lengths (“length attacks”).
    • Poor marker choice can let text show under contrast adjustment.
    • Compression and steganographic artifacts can sometimes correlate redacted and original images.
  • Several historic failures (TSA manual, Manafort filings, Apple v Samsung, Sony FTC case) are cited as near‑identical mistakes.

Incompetence vs. Sabotage vs. Strategy

  • One camp sees this as straightforward incompetence: rushed deadlines, redeployed staff, minimal training, and a general pattern of technical illiteracy in government redactions.
  • Others float “malicious compliance” or quiet resistance: staff following orders in the most reversible way, knowing the data can be recovered while retaining plausible deniability.
  • A third view: the system is structurally bad at redactions on purpose—manual, slow, and error‑prone processes serve as a litigation tactic to delay or narrow disclosures.

Who Is Being Protected?

  • Multiple comments argue the redactions skew toward shielding alleged perpetrators, especially powerful or politically connected figures, rather than victims.
  • Specific examples of redacted corporate names and lawyers close to high‑level officials are noted, raising questions about selective protection and potential conflicts of interest.

Is “Unredacting” a Crime or a “Hack”?

  • Many object to calling this a “hack,” framing it instead as simply reading data that was never actually removed.
  • Discussion suggests that, for ordinary members of the public, examining and sharing such publicly released documents is generally protected speech; different rules apply to government and cleared personnel.

Volvo Centum is Dalton Maag's new typeface for Volvo

Software & Reliability Concerns

  • Multiple Volvo EV owners describe serious, recurring software bugs: erratic AC charging schedules (DST shifts, random time zones, ignored limits), infotainment audio failures, and unstable Android Automotive OS (AAOS) updates with vague “various fixes” notes.
  • Some owners are actively avoiding OS upgrades due to fear of regressions.
  • This fuels frustration that Volvo appears to prioritize fonts and nicknames over basic reliability and UX stability.

Touchscreens vs Physical Controls

  • Many argue that large central tablets and touch-only climate controls conflict with Volvo’s safety-oriented image and increase distraction compared to physical buttons/knobs.
  • Others counter that voice controls (e.g., “hey Google, turn on seat heating”) and steering-wheel buttons work well and can be safer than reaching for controls, though there’s concern about kids or mis-recognition.
  • There is nostalgia for older Volvos and other brands where controls were operable entirely by feel, with minimal text and no screens.

Branding & Purpose of a Custom Typeface

  • Some question why a carmaker needs its own font, seeing it as generic, non-innovative, and primarily a branding exercise.
  • Others note that staying “top of mind” requires constant brand work, and a bespoke typeface becomes a subconscious part of identity, similar to previous functional fonts in aviation.
  • Several commenters praise the font’s proportions, balance, and modern, minimal feel; others find it bland or insufficiently legible (O vs 0, I vs l).

Legibility, Safety Claims & Typography

  • Skeptics see “designed for safety” marketing around the font as hollow while more impactful safety wins (physical buttons, better UX, fixing software bugs) remain unaddressed.
  • Some point out research suggesting other font styles may be more legible, and note specific glyph design “blunders.”
  • Others respond that in a car UI, ambiguous code-like strings are rare; fonts are tuned for glanceable, contextual reading, not programming.
  • A minority defends this work as legitimate safety-related UX refinement, often outsourced, and not mutually exclusive with engineering fixes.

Volvo’s Identity & Ownership

  • Long sub-thread debates whether Volvo is “Swedish,” “Chinese,” or hybrid, weighing origin, HQ, ownership (Geely majority), design location, workforce, and manufacturing.
  • Opinions differ on how much Chinese ownership affects quality culture and safety priorities, with contrasting anecdotes about reliability of recent Chinese-built Volvos.

We replaced H.264 streaming with JPEG screenshots (and it worked better)

Use Case and Approach

  • System streams what is essentially a remote coding session: an AI agent editing code in a sandbox, viewed in the browser.
  • Original design used low-latency H.264 over WebRTC/WebSockets; replacement is periodic JPEG screenshots fetched over HTTPS.

“Why Not Just Send Text?”

  • Multiple commenters question why pixels are streamed at all:
    • For terminal-like output or code, sending text diffs or higher-level editor state would be far more efficient.
    • Others note the agent may use full GUIs, browsers, or arbitrary apps, making pure text insufficient.
    • Some argue the entire “watch the agent type in real time” model is misguided; review diffs asynchronously instead.
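
For the text-only case raised above, a sketch of the “send diffs, not pixels” idea: a unified diff of the editor buffer is typically a few hundred bytes per change versus a full JPEG per frame (names and transport are illustrative).

```python
import difflib

def editor_delta(previous: str, current: str, path: str = "main.py") -> str:
    """Return a unified diff of the buffer; ship this string instead of a screenshot."""
    return "".join(difflib.unified_diff(
        previous.splitlines(keepends=True),
        current.splitlines(keepends=True),
        fromfile=path, tofile=path,
    ))

before = "def add(a, b):\n    return a + b\n"
after = "def add(a, b):\n    \"\"\"Add two numbers.\"\"\"\n    return a + b\n"
print(editor_delta(before, after))
```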

JPEG / MJPEG vs H.264

  • Several people point out this is effectively reinventing MJPEG (or intra-only H.264), a decades‑old technique.
  • Practitioners report similar past successes with JPEG/MJPEG for drones, remote desktops, browsers, and security cameras: simple, robust, low-latency.
  • Many criticize the H.264 setup:
    • 40 Mbps for 1080p text is described as absurd; 1–2 Mbps with proper settings is considered more than enough.
    • Complaints that tuning bitrate, GOP, VBR/CBR, keyframe intervals, and frame rate was apparently not seriously attempted.
    • Using only keyframes is seen as a misuse of video codecs that are efficient precisely because of inter-frame prediction.

Congestion Control and Why JPEG “Works”

  • Key technical insight often highlighted: the JPEG polling loop acts as a crude but effective form of congestion control (a minimal version is sketched after this list):
    • Client requests next frame only after the previous is fully received, so frames don’t pile up in buffers.
    • With H.264 over a single TCP stream, lack of explicit backpressure handling led to massive buffering and 30–45s latency.
  • Commenters note this behavior is not inherent to JPEG; it’s a property of the pull model and not queuing frames.
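
A minimal sketch of that pull model, assuming the requests library and a hypothetical frame endpoint: because the next frame is requested only after the previous one has fully arrived, a congested link simply lowers the frame rate instead of letting frames queue up in buffers.

```python
import time
import requests

FRAME_URL = "https://example.invalid/session/frame.jpg"  # hypothetical endpoint

def poll_frames(display, max_fps=10):
    """Fetch one JPEG at a time and hand it to `display`; never queue frames."""
    min_interval = 1.0 / max_fps
    while True:
        started = time.monotonic()
        resp = requests.get(FRAME_URL, timeout=5)
        resp.raise_for_status()
        display(resp.content)  # app-specific: draw the JPEG bytes
        # Sleep only if the fetch finished under the frame budget.
        elapsed = time.monotonic() - started
        if elapsed < min_interval:
            time.sleep(min_interval - elapsed)
```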

Existing Protocols and Alternatives

  • Many suggest using mature solutions instead of rolling custom stacks:
    • VNC/RFB with tiling, diffs, and CopyRect; xrdp + x264; HLS/DASH/LL‑HLS; WebRTC with TURN over 443; SSE or streaming HTTP fallbacks.
    • Some propose JPEG/WebP/WebM with WebCodecs or HLS-style chunking rather than per-frame polling.
    • Others note PNG is too slow to encode/decode for this use, despite better text fidelity.

Enterprise Networks and Corporate IT

  • Strong agreement that enterprise constraints (HTTPS/443 only, TLS MITM, broken WebSockets/SSE, intrusive DLP) heavily shape design.
  • Some argue WebSockets and WebRTC-over-TURN on 443 now work in most corporate environments; others report ongoing breakage.

Perception of Engineering and LLM Use

  • Several readers feel the post reflects shallow understanding of video engineering and overreliance on LLM-generated code and prose.
  • Others praise the pragmatic outcome: a “dumb” but working solution that favors simplicity, even if technically suboptimal.

Fabrice Bellard Releases MicroQuickJS

MicroQuickJS design and constraints

  • Implements a small ES5-ish subset aimed at embedded use: no dynamic eval, strict globals, denser arrays without holes, and limited built-ins (e.g. Date.now() only, many String methods omitted).
  • “Stricter mode” disallows implicit globals and prevents creating globals by assigning to the global object the way browsers allow (window.foo / globalThis.foo); globals must be declared explicitly with var.
  • Arrays must be dense: writing far beyond the end (e.g. a[10] = 2 on an empty array) throws, to prevent accidental gigantic allocations; sparse structures should use plain objects.
  • Footprint targets are ~10KB RAM and ~100KB ROM, making it competitive with Espruino and other tiny JS engines; some note it would have been ideal for Redis scripting or similar use-cases.

Sandboxing, untrusted code, and WebAssembly

  • Multiple commenters focus on MicroQuickJS as a sandbox for untrusted user code or LLM‑generated code, especially from Python and other hosts.
  • Embedding a full browser engine (V8/JSC) is seen as heavy and tricky to hard‑limit memory and time; many existing bindings explicitly warn they are not secure sandboxes.
  • Running MicroQuickJS compiled to WebAssembly is attractive because it stays inside the Wasm sandbox, can be invoked from many languages, and allows hard resource caps; Figma’s use of QuickJS inside Wasm for plugins is cited as precedent.
  • There is debate over performance: nesting JS → QuickJS → Wasm → JS is much slower than native V8/JSC, but some argue predictability and JIT friendliness of Wasm can partially offset this for certain workloads.

Embedded and alternative JS runtimes

  • People compare MicroQuickJS to Espruino, Moddable’s XS, Elk, DeviceScript, and MicroPython/CircuitPython for ESP32/RP2040‑class boards.
  • Lack of malloc and small ROM/RAM needs are seen as enabling microcontroller scripting in JS, though bindings/HALs and flashing toolchains remain the true pain points.
  • Some speculate about thousands of tiny interpreters (e.g. on GPUs), but current work in that direction is experimental and not clearly aligned with MicroQuickJS yet.

Lua, Redis, and language design

  • One perspective: if MicroQuickJS had existed in 2010, Redis scripting might have chosen JS over Lua; Lua was picked for its tiny ANSI‑C implementation, not its syntax.
  • Long sub‑thread debates Lua’s unfamiliar syntax (1‑based indexing, block keywords), versus its consistency, tail‑call optimization, and suitability for compilers/embedded scripting.
  • Ideas like “language skins” (multiple syntaxes over one core semantics) are discussed as a way to reconcile familiarity with alternate designs.

Bellard’s reputation and development style

  • Extensive admiration for Bellard’s breadth and depth: FFmpeg, QEMU, TinyCC, QuickJS, JSLinux, LZEXE, SDR/DVB hacks, an ASN.1 compiler, and an LLM inference engine.
  • Many highlight his minimal‑dependency, single‑file C style and robust CS foundations; others note his low‑profile, non‑self‑promotional persona and lack of interviews.
  • Some joke about missing commit history and “12‑minute” implementation, while others infer a private repo or proto-then-import workflow.

“Lite web” and browser bloat

  • Inspired by “micro” JS, several commenters fantasize about a rebooted, lightweight web: HTML/JS/CSS subsets, Markdown‑over‑HTTP, “MicroBrowser/MicroWeb”, and progressive enhancement.
  • Others argue there is no economic incentive: browsers are complex because they must run arbitrary apps compatibly; any “simple” browser fails on most sites normal users need.
  • Gemini/Gopher/WAP are mentioned as historical or current attempts at simpler hypertext; opinions diverge on whether such parallel ecosystems can ever gain mainstream traction.

AI‑assisted experiments and HN norms

  • A visible thread chronicles using an LLM-based coding assistant to build MicroQuickJS integrations (Python FFI, Wasm builds, playgrounds), offered as evidence of fast prototyping and sandbox viability.
  • This sparks pushback about off‑topic AI evangelism, perceived self‑promotion, and “LLM slop”; others defend sharing such experiments as relevant and “hacker‑y” when they surface concrete findings (e.g., byte sizes, integration patterns, resource limits).
  • There is broader meta‑discussion on when linking one’s own blog or AI outputs is helpful vs. annoying, and how LLMs change the perceived effort behind quick demos.

How did DOGE disrupt so much while saving so little?

Severance, Contractors, and (Lack of) Savings

  • Many laid‑off staff reportedly secured a year of severance, then were rehired as contractors at higher rates, sometimes via consultancies charging the government multiples of prior costs.
  • Commenters note contractors often lose benefits and face high health‑insurance costs, but still may earn roughly double in salary.
  • Debate over taxpayer impact: some argue the net fiscal effect is tiny in the context of the federal budget; others contend the disruption costs likely outweigh any short‑term “savings.”

Disruption vs. Cost‑Cutting as the Real Goal

  • A recurring view: DOGE was never about efficiency or deficit reduction, but about:
    • Crippling agencies regulating or investigating Musk’s companies (safety, labor, tax, etc.).
    • Exfiltrating data on unions and complaints.
    • Weakening the broader regulatory state as an ideological project.
  • Several see it as a smash‑and‑grab or “ideological purge” used as theater to claim fulfillment of campaign promises while overall spending still grew elsewhere (defense, entitlements).

Incompetence, Malice, or Both?

  • One camp frames DOGE as classic “Chesterton’s fence” hubris: tech‑bro belief that large institutions are obviously broken and can be fixed with a chainsaw.
  • Another argues this was calculated self‑interested behavior by a sociopathic but canny billionaire protecting his empire.
  • Others posit a mix: genuine belief in government waste plus reckless, harmful execution; debate over whether Hanlon’s razor applies.

Government Efficiency and the Myth of “Easy 10% Cuts”

  • Several slam the “you can always cut 10%” mantra (popularized in tech/VC circles) as totally detached from how federal agencies operate.
  • Anecdotes from people working with CDC and other agencies describe extremely lean budgets and mission‑driven staff who could earn far more in private industry.
  • Counter‑arguments claim government is inherently inefficient due to lack of competition and job security, though this is challenged as ideology rather than observation.

Public Attitudes, Propaganda, and Consequences

  • Discussion links support for DOGE to decades of anti‑government propaganda and “I got mine” individualism.
  • Some stress that bureaucrats are often the last line preventing exploitation, and that gutting agencies like USAID has real human costs (including deaths abroad).
  • A minority claims DOGE exposed NGO corruption, but others note no resulting prosecutions and argue the main “revelation” was DOGE’s own corruption and failure.

Meta is using the Linux scheduler designed for Valve's Steam Deck on its servers

Open source cross‑pollination

  • Thread highlights how Valve’s Steam Deck work (the SCX-LAVD scheduler) is now improving Meta’s server efficiency, and notes the reverse flow (e.g., Meta’s Kyber I/O scheduler helping reduce desktop/SteamOS microstutter).
  • Many see this as “commons” behavior: once code is upstreamed under GPL, it’s no longer “Valve’s thing” or “Meta’s thing.”
  • Some warn against relying on “trickledown” from big firms; corporate priorities can change despite licenses.

Why a handheld scheduler works in hyperscale

  • Commenters are surprised a scheduler tuned for handheld gaming also works for Meta’s servers.
  • Explanation: both gaming and large services have hard latency deadlines (frame times, controller input, voice/video, ad auctions, WhatsApp messaging, etc.), while background work can be delayed.
  • SCX‑LAVD is a latency‑aware scheduler; latency vs throughput is a spectrum, not a simple upgrade path.

Linux scheduling and sched_ext

  • Discussion contrasts the legacy Completely Fair Scheduler (CFS), newer EEVDF, and SCX‑LAVD: each chooses different trade‑offs between fairness, throughput, and latency; none is a universally “strict upgrade.”
  • Linux defaults historically favor throughput/fairness and are hard to tune; at hyperscale, even 0.1% gains justify dedicated kernel engineers.
  • sched_ext (developed at Meta) and BPF‑style mechanisms make it easier to plug in alternative schedulers; SCX implementations live in a shared GitHub repo used by multiple companies.

Valve’s role and contractor model

  • Valve is portrayed as a relatively small, revenue‑dense company that contracts out deep systems work (e.g., Igalia for schedulers, graphics stack, Proton pieces).
  • Igalia is described as a worker‑owned, highly skilled Linux consultancy, seen as a positive example of “company funds OSS” in practice.
  • Several comments argue contracting can work extremely well when scope is tight, expertise is high, and the client remains technically engaged.

Linux ecosystem strengths and weaknesses

  • Many credit Valve (plus earlier Wine/CodeWeavers work) with pushing Linux forward: Proton, DXVK, HDR/VRR on Wayland, Gamescope tools, shader pre‑caching, futex improvements, bcachefs sponsorship.
  • Others stress this builds on decades of volunteer groundwork (Wine, kernel, desktop).
  • Recurrent pain points: desktop Linux UX, accessibility, laptop sleep/hibernate, OOM behavior, hardware/driver quirks, fragmented ABIs and mobile platforms.

Business and ethics angles

  • Meta is criticized for scammy ads and AI misuse but also noted as a major Linux kernel contributor.
  • Valve is praised for technical contributions yet criticized for lootboxes and enabling third‑party gambling around in‑game items; some defend Valve as “least bad,” others call that willful blindness.
  • Side debate on RHEL source availability and GPL obligations, with claims that CentOS Stream effectively exposes the code even if RHEL’s own source distribution is awkward.

Stop Slopware

What “slopware” is and who’s to blame

  • “Slopware” is framed as low-effort, often AI-generated projects dumped into public ecosystems, especially open source.
  • Some argue the bigger problem predates AI: large corporations already ship bloated, buggy “slop” at massive scale, so singling out hobbyists using AI feels misplaced or hypocritical.
  • Others say the real issue is not AI per se but people publishing code they don’t understand, then implicitly asking others to maintain or trust it.

AI and learning to program

  • The site’s claim that “you learn better without AI” is heavily disputed.
  • Many see AI as an unprecedented accelerator for beginners: it lowers setup barriers, explains unfamiliar code, fills in boilerplate, and helps people quickly validate whether an idea is feasible.
  • Critics counter that overreliance encourages “mental coasting,” shallow understanding, and a slippery slope where learners never really internalize fundamentals.
  • Emerging consensus in the thread: AI is powerful for learning if used intentionally (asking questions, rewriting, cross-checking), but harmful when used as a code vending machine.

Craft vs pragmatism

  • A recurring tension: “software as craft” vs “software as a tool to solve problems.”
  • Some are dismayed that many developers never cared much about craftsmanship—only outcomes and paychecks.
  • Others argue most users don’t care how code is made; they care if it works. High craft is reserved for personal projects, critical systems, or self-respect, not typical business software.
  • Several note that obsessing over craft can become gatekeeping and self-sabotage in commercial contexts.

Effect on the commons and ecosystems

  • Concern about AI-driven “eternal September”: vast numbers of low-quality libraries, repos, and packages flooding GitHub, PyPI, etc., making it harder to find good tools.
  • One commenter cites data showing a large share of PyPI packages with only a single release, suspecting many are abandoned or AI-generated.
  • Others downplay storage/cost issues but worry about norms: publishing lots of unmaintained, auto-generated projects erodes expectations of stewardship.

Future of work and cleanup

  • Some expect a growing market for “cleanup specialists” fixing AI slop; others think AI-assisted workflows will simply raise the overall baseline and leave “pure craftsmen” behind.
  • There’s guarded optimism that AI can enable better architectures if humans focus on specs, tests, and design while offloading grunt work to models.

Show HN: CineCLI – Browse and torrent movies directly from your terminal

Tool concept and reception

  • CineCLI is a terminal interface for browsing movies via the YTS API and opening magnet links in a torrent client.
  • Many commenters find the idea fun or nostalgic, especially for terminal enthusiasts, and compare it to past tools like Popcorn Time.
  • Others downplay it as “just a YTS API wrapper” and question its utility beyond being a learning project, given YTS’s reputation for low-quality releases.

Demo, UX, and documentation feedback

  • Multiple people criticize the demo GIF as too slow and meandering; they suggest speeding it up, planning the demo better, or using dedicated terminal recording tools.
  • There’s a suggestion to showcase a public-domain film in the demo for legal/optics reasons.
  • Several users comment that the README looks obviously LLM-generated; some dislike this as “slop” and say it signals low care or code quality, while others argue it’s fine to automate boring documentation tasks.
  • The README/LLM debate becomes quite heated, with some replies turning openly abusive.

Legal, ISP, and safety concerns

  • Questions arise about whether using this tool violates ISP terms or local law.
  • Multiple commenters stress that the legal risk depends on the downloaded content, jurisdiction, and torrenting behavior, not the CLI itself.
  • Several point out the lack of in-tool warnings compared to torrent sites that prominently urge VPN use and note IP exposure; they argue that making torrenting so frictionless without such disclaimers could mislead inexperienced users.
  • There’s brief discussion about copyright being enforced in both authoritarian and liberal countries.

Content sources, quality, and ecosystem

  • One critic notes that anyone comfortable with the CLI could use higher-quality sources and private trackers instead of YTS, and questions who the tool is for.
  • Others discuss alternative piracy ecosystems: public and private trackers, DHT indexers, Kodi + various plugins, *arr stacks, real-debrid/premium services, usenet streaming, and Jellyfin with .strm files.
  • There is some discussion of best practices and ethics around using Tor vs VPNs for accessing torrent sites, and concerns about misuse of the Tor network.

Naming and NSFW association

  • Several commenters note that the project/author name matches a banned, graphic subreddit and warn others it is NSFL; others dismiss the concern or react defensively.

Inside CECOT – 60 Minutes [video]

Suppression of the 60 Minutes Segment

  • Many commenters see CBS’s decision to pull the Cecot segment as overt political censorship to protect the current administration and advance corporate interests (e.g., merger/antitrust approval).
  • Others note that footage of Cecot and its abuses was already widely reported; they argue the segment wasn’t uniquely revelatory, and that the key difference is the weight and audience of 60 Minutes, not the raw facts.
  • The accidental upload by a Canadian partner, and the subsequent availability on Archive.org and YouTube, are framed as classic “Streisand effect”: an attempt to bury the piece amplified its reach.

Bari Weiss’s Role and Editorial Justifications

  • An internal email from the new editorial lead outlines demands for more administration perspective, more detail on criminal histories and charges, and a fuller explanation of legal rationale.
  • Supporters say this looks like standard “do more reporting” and context-adding, especially given the seriousness of the claims.
  • Critics see it as a pretext: insisting on on‑record participation from officials who already refused to comment effectively grants them veto power; focusing on “charges” undermines presumption of innocence; and the legal framing allegedly misstates the administration’s own arguments.
  • Broader discussion portrays her as part of a pattern: self‑branding as a defender of free debate while backing or enabling censorship when it serves ideological or patron interests.

Ethics and Legality of Deportations to Cecot

  • Commenters emphasize that many of the 252 Venezuelans deported to Cecot had no U.S. convictions, with some having entered legally; sending them into indefinite, torturous detention without trial is described as a betrayal of U.S. constitutional principles and human rights norms.
  • Several label Cecot a concentration camp rather than a prison, stressing the absence of due process and the intent of permanent disappearance.
  • A minority argue that Cecot dramatically reduced homicides in El Salvador and that concern for the rights of gang members is misplaced; others rebut that torture is prohibited irrespective of crime and that many deportees were not gang members at all.

Archiving, Distribution, and Info Control

  • Users rapidly mirror the segment via Archive.org torrents, magnet links, and alternative video hosts; many volunteer to seed “for a cause.”
  • There’s praise for Archive.org and simultaneous anxiety over potential DMCA takedowns, leading to calls for more decentralized, non‑U.S.-centric preservation.

HN Moderation, Flags, and Perceived Bias

  • Numerous comments note that multiple posts about the segment were flagged or killed, sparking accusations that HN is suppressing anti‑Trump or anti‑oligarch content.
  • Others counter that HN is designed to downweight outrage‑driven political stories; moderators explain that flags are balanced by upvotes and that the front page is intentionally curated away from constant political drama.
  • Debate widens into whether certain outlets (e.g., 404media) are unfairly penalized, and whether a small ideological cohort exploits flagging to shape the visible discourse.

Local AI is driving the biggest change in laptops in decades

Memory, RAM Prices, and New Architectures

  • Many point out that exploding DRAM prices make “AI laptops with huge RAM” unrealistic in the near term; some expect 8 GB to re‑become the default.
  • Others argue DRAM cycles are historically feast/famine and high AI margins should eventually drive more capacity and lower prices, though current big-buyers (e.g., large AI labs) may distort competition.
  • Workstation laptops with 96–128 GB have existed for years; the move to 2‑slot, non‑upgradeable designs is seen as an artificial constraint.
  • Discussion of compute‑in‑flash, compute‑in‑DRAM, memristors and high‑bandwidth flash: seen as promising to host larger models cheaply, but with skepticism about latency, bandwidth figures, cost, and real‑world availability.

Critique of the Article and “AI PC” Branding

  • Multiple commenters call the article technically weak: misunderstanding TOPS, ignoring that required throughput can be computed, confusing millions vs billions of parameters, and underplaying existing open‑source benchmarks.
  • The article is criticized for ignoring the RAM price spike and for implying that most current hardware can’t run useful models.
  • “AI PC” and “Copilot+ PC” labels are widely seen as marketing; many current “AI” laptops mostly just have a cloud LLM endpoint plus an NPU that does little in practice.

Local vs Cloud AI: Capability, Economics, and Privacy

  • Enthusiasts report good experiences running mid‑sized models (e.g., 7–30B, GPT‑OSS 120B quantized) on Apple M‑series laptops with 24–128 GB, or on modest GPU desktops, for offline coding, CLI usage, and image generation.
  • Others argue that:
    • Truly frontier models (hundreds of GB) are far beyond typical consumer PCs for many years.
    • For most users, cheaper laptops + cloud subscriptions are more economical and higher quality.
  • “Good enough” is contested: some find current small models already practical; skeptics say average users will abandon them after a few visible mistakes compared to frontier cloud models.
  • Strong privacy arguments for local inference (personal data never leaving the device), but several believe most people will accept cloud trade‑offs.

GPUs, NPUs, and Specialized Accelerators

  • Debate over whether GPUs will be displaced by specialized AI chips:
    • One side expects distinct accelerators for LLMs vs diffusion.
    • Others say GPGPUs remain the best balance of power, flexibility, and cost.
  • Clarified that dense LLMs are extremely bandwidth-bound: essentially the full weight set must be read from memory for every generated token, which is why HBM and low-precision formats are key (rough numbers are sketched below).
  • NPUs on consumer laptops are viewed as underpowered, fragmented, and poorly supported in software, mostly saving a bit of power for small on‑device tasks.
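
Rough numbers behind the bandwidth-bound point above, treating tokens per second as memory bandwidth divided by the bytes of weights streamed per token; the example figures are assumptions, not measurements.

```python
def max_tokens_per_sec(params_billions, bits_per_weight, bandwidth_gb_s):
    model_gb = params_billions * bits_per_weight / 8  # GB of weights read per token
    return bandwidth_gb_s / model_gb

# A 7B model at 4-bit on ~100 GB/s laptop memory:
print(round(max_tokens_per_sec(7, 4, 100), 1))   # ≈ 28.6 tokens/s upper bound
# A 70B model at 4-bit on the same machine:
print(round(max_tokens_per_sec(70, 4, 100), 1))  # ≈ 2.9 tokens/s upper bound
```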

OS, Platforms, and Control

  • Apple silicon is repeatedly cited as currently the best laptop platform for local AI (unified memory, fast integrated GPU), though high‑RAM configs are expensive.
  • Critics note that many non‑Apple laptops marketed as “AI ready” are effectively just “can make HTTP requests to a cloud LLM.”
  • Concerns about Microsoft’s Copilot/Recall and pervasive telemetry drive some toward Linux, but gaming, creative tools (Adobe, video editing), and driver issues are significant barriers.
  • Some see aligned incentives: RAM‑hungry cloud AI competes with consumers for memory, nudging users toward being thin clients to datacenter models.

Overall Mood

  • The thread is sharply divided:
    • Optimists see local AI as already viable on high‑end consumer hardware and expect hardware to chase this use‑case.
    • Skeptics see “AI laptops” as mostly hype, with serious local AI remaining a niche akin to gaming rigs, while mainstream users rely on cheaper, more capable cloud models.

Satellites reveal heat leaking from largest US cryptocurrency mining center

Terminology and Thermodynamics

  • Several commenters say “leaking” is misleading; the facility is intentionally dumping heat as part of normal operation, effectively functioning as a giant electric heater.
  • Others argue it is inefficiency, since electricity is meant to do “computer work” and all of it ends up as heat anyway.
  • There’s agreement that for any conventional computation, nearly all input energy eventually becomes heat; only a negligible fraction escapes as sound or light and that too turns into heat later.

Waste Heat, Quality of Heat, and Reuse

  • Discussion on whether the heat could be used for district heating: technically yes, but it’s low‑temperature “low‑quality” heat, hard and costly to capture and transport.
  • Rockdale is small, so there’s unlikely to be local demand matching hundreds of megawatts of heat.
  • Some note that modern district heating can move hot water efficiently over long distances and that some data centers already heat nearby buildings, but crypto operations often don’t bother.
  • Debate over whether “waste heat” means “heat with no Carnot engine attached yet” vs. “unavoidable thermodynamic endpoint.”

Fundamental Limits and Reversible Computing

  • Landauer’s principle is mentioned: erasing a bit costs at least kT·ln 2, so the minimum energy cost of conventional (irreversible) computation trends toward zero only as temperature approaches absolute zero.
  • This segues into reversible/adiabatic computing, with a cited startup demonstrating partial energy recovery; commenters see this as potentially revolutionary but still very challenging.

Scale of Energy Use

  • The “as much power as 300,000 homes” framing sparks back‑of‑the‑envelope comparisons to steel and aluminum plants.
  • The site reuses grid capacity from a former aluminum smelter that drew over 1,000 MW; some note the crypto operation actually uses less energy and dumps less heat than the prior industry, though it provides fewer useful jobs and products.
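
Rough arithmetic behind the comparison above; the per-household figure (≈10,500 kWh/year, roughly the U.S. average) is an assumption.

```python
HOMES = 300_000
KWH_PER_HOME_PER_YEAR = 10_500
HOURS_PER_YEAR = 8_760

avg_demand_mw = HOMES * KWH_PER_HOME_PER_YEAR / HOURS_PER_YEAR / 1_000
print(f"~{avg_demand_mw:.0f} MW average demand")  # ≈ 360 MW, versus >1,000 MW for the old smelter
```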

Climate Impact of Waste Heat

  • One thread asks how much global warming is from direct waste heat vs. greenhouse gases.
  • Quick estimates in the discussion suggest direct human waste heat is minuscule compared to incoming solar energy and to the radiative forcing from greenhouse gases; CO₂ is seen as the dominant problem.

Value and Ethics of Proof‑of‑Work Mining

  • Many view the facility as “needlessly absurd,” with some calling it waste on a “crime against humanity” scale, especially given climate concerns and the low social utility of crypto mining.
  • Others defend crypto as a reaction against KYC/AML and cashless societies, arguing the genie can’t be put back in the bottle.
  • There’s frustration that proof‑of‑work remains dominant despite alternative consensus mechanisms and that local economic benefits (jobs) are minimal compared to past industrial use of the site.

Lotusbail npm package found to be harvesting WhatsApp messages and contacts

Popularity, trust signals & dependency bloat

  • Several commenters argue that download counts and GitHub stars are poor security signals; 56k downloads is seen as low and easily gamed.
  • Others admit that in practice “verification” often means only checking age, stars, and a quick repo glance, not real audits.
  • Heavy transitive dependency trees (thousands of packages, GBs of node_modules) make meaningful review unrealistic, reinforcing complacency.
  • Some advocate writing small utilities in-house instead of pulling trivial deps, but acknowledge the JS ecosystem tends to reintroduce them transitively anyway.

Supply-chain risk & ecosystem design

  • Many see npm’s late-fetch, uncurated model as structurally unsafe compared to distro-style repositories (Debian, etc.) with human stewardship and reproducible builds.
  • Others counter that no ecosystem truly audits everything (xz is cited) and the problem is broader than npm: PyPI, Cargo, Docker images, GitHub Actions, curl-to-bash installers, etc.
  • Some suggest corporate-curated internal registries and approval workflows; others note this requires dedicated security staff and slows development.

Mitigations in practice

  • Suggested tactics:
    • Vendor critical deps, read them, pin versions, and update slowly.
    • Use lockfiles, Dependabot (with human review), and dependency “cool-down” windows.
    • Containerize or VM-isolate dev environments; avoid global npm installs.
    • Enforce policies where every new dependency has an “owner” responsible for reviewing changes.
  • There is interest in tools like Nix/Bazel/Buck for strict pinning and reproducibility, though their learning curve is seen as a barrier.
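
A minimal sketch of the “cool-down window” tactic from the list above: refuse to adopt an npm package’s latest version until it has been public for N days, using the public registry’s metadata endpoint (package names and the 14-day window are examples).

```python
from datetime import datetime, timedelta, timezone
import requests

def old_enough(package, min_age_days=14):
    meta = requests.get(f"https://registry.npmjs.org/{package}", timeout=10).json()
    latest = meta["dist-tags"]["latest"]
    published = datetime.fromisoformat(meta["time"][latest].replace("Z", "+00:00"))
    return latest, datetime.now(timezone.utc) - published >= timedelta(days=min_age_days)

for pkg in ["left-pad", "express"]:  # example packages
    version, ok = old_enough(pkg)
    print(pkg, version, "ok to adopt" if ok else "still in cool-down")
```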

OS, capabilities & permission models

  • Some argue the real root problem is that code runs with “ambient authority”: any library can access filesystem, network, credentials.
  • Proposals include capability-based languages (functions only get access to explicitly passed resources) and finer-grained OS mediation of network/domain access.
  • Others warn this easily turns into walled gardens or unusable permission UX, citing mobile OSes and macOS as mixed examples.

JavaScript ecosystem & stdlib debate

  • One camp claims JS is particularly risky for backends (weak static analysis, culture of many tiny packages, no “real” stdlib).
  • Others respond that JS now has a large standard library and that exfiltration attacks would be just as feasible in Go/Rust/Java; the issue is trust, not language.

WhatsApp-specific angle & npm governance

  • This package is a malicious fork of an unofficial WhatsApp Web client library, not an official API wrapper, which inherently requires broad access to user data.
  • Some see using such a library as a security red flag from the outset.
  • Multiple comments call for Microsoft to either harden npm with real governance and automated scanning (especially for obfuscated/encrypted payloads) or hand it to a foundation.

LLMs, AI content & future risks

  • The blog post itself is widely perceived as AI-generated, prompting meta-discussion about AI-written “slop” dominating security reporting.
  • On the code side, some expect more people to “vibe code” libraries with LLMs to avoid untrusted deps; others warn LLMs can just as easily reproduce malware or become another poisoning vector.

It's Always TCP_NODELAY

Practical Experiences & Performance Wins

  • Multiple commenters report big latency improvements after disabling Nagle via TCP_NODELAY in:
    • Chatty protocols (e.g., DICOM on LAN, database client libraries, student TCP simulators, SSH-based games).
    • Cases where messages are ready in user space but sit unsent due to kernel buffering.
  • Go is noted as disabling Nagle by default, which surprised some who were debugging latency.
  • Some mention using LD_PRELOAD hacks or libraries (e.g., libnodelay) to force TCP_NODELAY for legacy binaries.

Nagle vs Delayed ACK & TCP_QUICKACK

  • A recurring theme is that Nagle’s algorithm and delayed ACKs interact badly:
    • Nagle holds back additional small packets until outstanding data is ACKed, while delayed ACK holds the ACK in hope of piggybacking it on response data; each side ends up waiting on the other, causing 100–200ms stalls or worse.
  • Historical context: early TCP stacks used long global ACK timers (~500 ms).
  • TCP_QUICKACK can reduce receive-side ACK delay but doesn’t fix send-side buffering. Portability across OSes is uneven.
  • One suggestion: TCP stacks should track whether delayed ACKs actually get piggybacked and disable them per-socket when they don’t.
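
A minimal sketch of the two socket options discussed above: TCP_NODELAY is portable, while TCP_QUICKACK is Linux-specific and resets after some socket operations, so it may need to be re-applied around reads.

```python
import socket

sock = socket.create_connection(("example.com", 80))

# Send side: disable Nagle so small writes go out immediately.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Receive side (Linux only): request immediate ACKs instead of delayed ones.
if hasattr(socket, "TCP_QUICKACK"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_QUICKACK, 1)
```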

Should Nagle Still Exist / Be Default?

  • One camp: Nagle is “outmoded,” should be off by default, and policy should live in applications, which can buffer themselves.
  • Another camp: it still protects shared/cellular/wifi links from floods of tiny packets and helps poorly written or unmaintained software.
  • Some argue the kernel must arbitrate tradeoffs between competing apps; others say this is the app’s responsibility.
  • Side effect: disabling Nagle can increase fingerprinting risk by exposing fine-grained timing (e.g., keystroke patterns).

APIs, “Flush,” and Message Orientation

  • Many lament that the stream-based socket API lacks a proper “flush now” for TCP, making mixed interactive/bulk use awkward.
  • TCP_CORK, MSG_MORE, and buffered writers are cited as partial workarounds, but portability is limited.
  • Several argue TCP APIs should have been message-oriented from the start; instead, every protocol reimplements framing on top of a byte stream.
  • SCTP and QUIC are mentioned as more message-like alternatives, but lack broad OS-level, general-purpose adoption.
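
A Linux-only sketch of the TCP_CORK workaround mentioned above: cork the socket, write the pieces of a logical message, then uncork to get an explicit “flush now” boundary on top of the byte stream (the kernel also flushes corked data after roughly 200 ms on its own).

```python
import socket

def send_batched(sock, chunks):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 1)   # start batching small writes
    for chunk in chunks:
        sock.sendall(chunk)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CORK, 0)   # "flush now"
```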

Alternatives & Generic Batching

  • Suggestions to use UDP (or QUIC, Aeron, ENet, MoldUDP-style protocols) when you control both ends and can implement reliability/ordering as needed.
  • One commenter reframes Nagle and delayed ACK as poor special cases of a more general “work-or-time” batching strategy with explicit latency bounds.
  • Related lower-level analogy: interrupt moderation on NICs—also a batching vs latency tradeoff.

Ethernet, CSMA, and Legacy Networks (Side Thread)

  • Long subdiscussion on CSMA/CD vs CSMA/CA, hubs vs switches, full duplex, PAUSE frames, and why collisions effectively don’t exist on modern switched, full‑duplex Ethernet.
  • Some corrections that Nagle is a TCP-layer mechanism and not directly about CSMA, though both historically addressed inefficient use of shared media.

US destroying its reputation as a scientific leader – European science diplomat

Global R&D, Brain Drain, and “Outsourcing”

  • Several comments reject the idea that other countries “outsourced” R&D to the US; instead, the US aggressively competed with superior funding, salaries, and immigration policies, causing brain drain.
  • Others note that this was often experienced as a loss abroad but is now becoming an opportunity as disillusioned US scientists can be “poached” by Europe and elsewhere.
  • There’s agreement that science as a global enterprise will be fine without US dominance; the real risk is to the US economy and jobs tied to scientific industries.

US Policy Shifts and Scientific Reputation

  • The thread cites: cuts to grants (especially diversity-related), halted biomedical funding to foreign partners, and political interventions into university programs as evidence of reputational damage.
  • Some argue these moves, even if later reversed, cause long-lasting harm: projects cancelled, relationships broken, researchers emigrating.
  • Others say it’s too early to quantify damage; US still has major technological lead in chips, software, and defense, and a future administration might reverse course.

EU Motives, Horizon Europe, and Diplomacy

  • Multiple commenters frame the EU diplomat’s statement as both politically motivated and self-serving: a way to justify funding Horizon Europe and European “reindustrialisation.”
  • Horizon Europe itself is criticized as bloated and bureaucratic, with “cosplay” projects saddled with too many mandatory partners and too much overhead.
  • Some see EU rhetoric as part breakup-drama, part genuine alarm: the EU doesn’t actually want the US to collapse, since that would harm both sides.

Funding Levels, Waste, and Politicization

  • Linked reporting on falling US grant rates is contrasted with claims that “a lot of people were getting easy money,” which others challenge as vague and uninformed.
  • Debate centers on whether there is real “waste,” especially in “diversity-related” research. Critics question its value; defenders note that:
    • “Diversity” labels have been applied even to apolitical areas like biodiversity.
    • Population diversity in biomedical research is necessary for valid results.
  • Several comments stress that public basic research has long-term economic payoff and isn’t charity; cuts mainly shift power to privately funded, more biased research.

US Decline, Public Apathy, and Empire Analogies

  • Many see this as part of a broader US “decade of humiliation” or imperial decline, comparing it to past British/French/Spanish collapses.
  • Others caution that US decline has been predicted for decades and that previous scares (e.g., Japanese tech ascendance) were reversed through coordinated policy.
  • A recurring theme is domestic insecurity: when average citizens are struggling and politically polarized, they neither care about nor reliably support long-horizon scientific investment.

US blocks all offshore wind construction, says reason is classified

Stated vs Suspected Motives

  • Official rationale is “classified national security,” widely viewed in the thread as a pretext.
  • Many argue the real drivers are:
    • Fossil fuel interests and petrodollar politics.
    • Personal vendetta against wind after the Scottish golf-course turbine fight.
    • Payback for foreign or domestic political slights (e.g., Denmark/Greenland, Denmark’s wind companies).
  • Some see it as part of a broader pattern: cancelling solar, EV incentives, agency cuts, and pro‑coal interventions to systematically kill renewables.

National Security, Radar, and Drones

  • Commenters acknowledge real technical issues:
    • Offshore turbines create radar clutter, complicate low‑altitude surveillance, and may hinder sub detection or sonar.
    • Wind farms could offer cover for ship‑launched drones or complicate tracking near coasts.
  • Counterpoints:
    • These issues have been known for decades and engineered around in the UK, Germany, Denmark, China, etc.
    • Defense agencies already sit in permitting; if it were purely radar, it should have surfaced early, not mid‑construction.
    • Sweden’s more limited blocks are cited as not comparable to a blanket US halt.

Economics and Alternatives

  • Some say offshore wind is subsidy‑dependent “rent seeking” compared to onshore wind or solar.
  • Others note offshore’s higher capacity factors and argue it’s economically strong if allowed to scale.
  • Broader debate spins into nuclear vs renewables, storage costs, grid stability, and regulatory burden, with no consensus.

Governance, Legitimacy, and Precedent

  • Multiple comments frame this as another example of:
    • Executive overreach under a “national security” umbrella.
    • A rule‑of‑law breakdown where federal orders of dubious legality are still obeyed.
  • Concern that such arbitrary reversals will chill large‑scale infrastructure investment generally.

Things I learnt about passkeys when building passkeybot

Use of LLMs in Passkeybot Documentation

  • Several commenters object to the project’s quickstart step of “paste this into a good LLM,” especially for security‑critical auth code.
  • Concerns: outsourcing auth logic to an LLM, lack of traditional API docs, and the need to fully review LLM output makes it feel pointless.
  • The author clarifies that the LLM is meant only to translate a well‑commented TypeScript example into other languages/frameworks; core logic is documented via sequence diagrams, handler descriptions, and a demo.
  • Others defend LLM‑oriented onboarding as a better DX than guessing framework‑specific boilerplate.

Passkeys vs Passwords: Security, Recovery, and Usability

  • Some wish passkeys fully replaced passwords; others insist passwords remain vital for recovery and cross‑device use, especially after device loss, theft, or fire.
  • Main claimed advantages of passkeys: phishing resistance, protection against credential reuse and database breaches.
  • Counterpoints: good password managers already mitigate phishing via domain checking; passkeys add conceptual and UX complexity and lack flexibility for some use cases.
  • Recovery is a recurring pain point: not all sites allow multiple passkeys; some limit to a single authenticator; fallbacks (email/SMS, magic links) reintroduce weaker factors.

Vendor Lock‑in, Attestation, and Client Bans

  • Strong worry that “unexportable keys + attestation + ability to ban clients” yields de facto lock‑in to Apple/Google/Microsoft ecosystems.
  • Spec author statements about potentially blocking clients that allow export (e.g., some password managers) are seen as hostile to user control.
  • Defenders argue: unexportability is a core security property, and RPs should be able to distrust compromised/rogue clients; users can rely on multiple authenticators or account‑recovery flows instead.
  • Critics respond that inability to back up credentials is unacceptable and that client‑based blocking is too powerful a lever.

UX Problems and Edge Cases

  • Reports of conflicting passkey providers (native keychains vs password managers vs hardware keys), awkward multi‑click flows, and difficulty setting preferred providers.
  • Examples of “orphaned keys” and inability to enroll multiple device‑specific passkeys, confusing labels due to cross‑device sync, and bugs that effectively lock users out.
  • Some users have reverted to passwords + TOTP after frustrating passkey experiences.

Related Technical Discussions

  • PKCE is discussed as binding the authorization code to the client that started the OAuth flow, protecting against code interception beyond what the state parameter alone provides (the mechanics are sketched below).
  • Concerns raised about the Digital Credentials API as infrastructure for broader online ID mandates, though others note ID proof is already required for some travel and government services.
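
A minimal sketch of the PKCE mechanics referenced above: the client keeps a random verifier and sends only its S256 hash with the authorization request, so an intercepted authorization code cannot be redeemed without the original verifier.

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    challenge = base64.urlsafe_b64encode(
        hashlib.sha256(verifier.encode()).digest()).rstrip(b"=").decode()
    # challenge goes in the authorization request; verifier in the token request
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print("code_challenge_method=S256", "code_challenge=" + challenge)
```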

GLM-4.7: Advancing the Coding Capability

Perceived Capability & Benchmarks

  • Many find GLM‑4.7 very strong for coding, often “in the Sonnet zone,” below Opus/GPT‑5.2 but close enough for daily work, especially given its cost.
  • Benchmarks are mixed: some point to weak “terminal bench” scores; others cite strong SWE-bench numbers (e.g., beating Sonnet 3.5 by a wide margin, slightly ahead of Sonnet 4, slightly behind 4.5).
  • Several note that benchmark leaders often perform poorly on real tasks, whereas GLM‑4.6/4.7 feels better than its scores suggest; consensus is that hands-on testing matters more than charts.

Pricing, Value, and Product Positioning

  • Z.ai’s subscriptions (including cheap annual “lite” and coding plans) are repeatedly called “insanely cheap,” ideal as a Claude/GPT backup or secondary daily driver.
  • Users contrast this with Anthropic’s high per-token and agentic-pricing, seeing GLM as “Claude Code but cheaper,” especially for long-running coding tools.
  • Some worry low pricing is subsidized “dumping,” potentially anti-competitive long-term.

Usage Patterns & Tooling

  • Popular workflows:
    • Use Claude/GPT for planning and “tasteful” refactoring, GLM‑4.6/4.7 for implementation.
    • Use GLM via Claude Code MCS/MCP endpoints or tools like Crush/OpenCode; some tweak env vars so all “Haiku/Sonnet/Opus” slots map to GLM.
  • Several praise GLM‑4.7’s tool use and agentic coding; others found earlier models underwhelming in OpenCode and reverted to Claude Code.

Local Inference, Hardware, and MoE

  • Thread is dense with local-serving debate: Mac Studio/M4, Strix Halo, RTX 4090/5090, multi‑GPU rigs, Cerebras/Groq ASICs.
  • Consensus: GLM‑4.7’s 358B MoE (32B active) is still too big for smooth interactive use on typical consumer hardware; quantized local runs are “hobby/async,” not yet a practical Claude Code replacement.
  • Clarified that MoE reduces compute and bandwidth per token, not RAM capacity; full parameters still must be loaded.
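
Rough numbers for the MoE point above: the full parameter set must fit in memory, but only the active experts are read per token. Figures assume 8-bit weights and are illustrative, not benchmarks.

```python
TOTAL_PARAMS_B = 358   # billions of parameters across all experts
ACTIVE_PARAMS_B = 32   # billions activated per generated token
BYTES_PER_PARAM = 1    # 8-bit quantization

capacity_gb = TOTAL_PARAMS_B * BYTES_PER_PARAM    # memory needed just to hold the model
per_token_gb = ACTIVE_PARAMS_B * BYTES_PER_PARAM  # bytes streamed per generated token
print(f"~{capacity_gb} GB to load, ~{per_token_gb} GB read per token")
# On ~200 GB/s of memory bandwidth that is roughly 200 / 32 ≈ 6 tokens/s at best.
```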

Distillation, Similarity to Gemini, and Training

  • Multiple commenters think GLM‑4.7’s frontend examples and chain-of-thought style look strikingly like Gemini 3, suspecting distillation from frontier models.
  • Some say this is fine—even desirable—if it yields cheap open weights. Others argue language tics (e.g., “you’re absolutely right”) aren’t reliable evidence of training sources.

Privacy, Terms, and Politics

  • Z.ai’s terms allow extensive training on user data and broad rights over user content; several warn against using it for serious/proprietary work.
  • Some see Chinese-origin models as heavily censored on topics like Tiananmen; others dismiss such political tests as irrelevant for a coding-optimized model.

Competition and Ecosystem

  • Many welcome GLM‑4.7 as proof open‑weight models are closing the gap with billion‑dollar proprietary systems, adding price pressure on Anthropic/OpenAI/Google/xAI.
  • Omitted comparisons (e.g., Gemini 3 Pro in charts, Grok 4 Heavy, Opus 4.5) are criticized as selective benchmarking.