Hacker News, Distilled

AI-powered summaries for selected HN discussions.

Page 277 of 360

Tesla seeks to guard crash data from public disclosure

Access to NHTSA Crash Data & Redactions

  • Many argue that if NHTSA has crash data, taxpayers and crash victims should see it, especially for systems operating on public roads.
  • Others contend Tesla shouldn’t be singled out and data should be comparable across all manufacturers.
  • Thread participants inspect the official CSV and find:
    • Tesla, BMW, Subaru, Honda and others have many redacted or blank ADAS/ADS version fields.
    • Tesla appears to redact nearly all relevant ADAS fields (including narratives and system versions), making serious analysis difficult; some say this is materially worse than peers, others say many brands are similarly opaque.
  • There is consensus that current redaction levels, across multiple makers, significantly weaken the public’s ability to evaluate ADAS safety.

Reporting Thresholds, Under‑Reporting & EDR

  • NHTSA’s special crash reporting only covers serious outcomes (fatalities, vulnerable road users, hospital transports, airbag deployment).
  • Tesla has been criticized by NHTSA for telematics gaps and for treating many non‑airbag events as “non-crashes,” likely undercounting incidents.
  • Separately, UN and US EDR rules mostly capture physical vehicle behavior, not who (human vs ADS) controlled it. The contested data here goes beyond legal minimums, into proprietary logs Tesla chooses to keep.

Safety, Supervision & Autonomy Levels

  • One camp claims camera-only systems are already much safer than humans in absolute deaths, given their tiny fleet share; critics say the relevant metric is rate per mile and that good independent data is missing.
  • Tesla’s own Autopilot stats are challenged as incomparable (highway vs mixed driving, supervised vs unsupervised).
  • Some cite crowdsourced “miles per disengagement” suggesting poor unsupervised performance compared with other AV projects.
  • Long subthread debates SAE Levels 2–4:
    • Level 2 is seen as demanding inhuman vigilance (“pretend to drive”).
    • Level 3’s handover window is viewed by some as inherently risky; others say sufficient seconds of guaranteed control can make it workable and close to Level 4.
    • Many argue anything marketed as “Full Self Driving” should bear Level‑4‑like liability.
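The absolute-deaths vs rate-per-mile disagreement above comes down to simple arithmetic; a worked sketch with entirely made-up numbers (none are from the thread) shows why a tiny fleet can have fewer total deaths yet a worse rate:

```python
# Why absolute deaths mislead when fleet share is tiny (all numbers invented).
human_miles = 3.0e12        # annual miles driven by humans (illustrative)
human_deaths = 40_000
adas_miles = 2.0e9          # tiny fleet share for the camera-only system
adas_deaths = 30            # fewer deaths in absolute terms...

human_rate = human_deaths / human_miles * 1e8   # deaths per 100M miles
adas_rate = adas_deaths / adas_miles * 1e8

print(f"human: {human_rate:.2f} vs ADAS: {adas_rate:.2f} per 100M miles")
# ...yet the per-mile rate here is higher, which is the critics' point:
# comparisons need rates, and comparable driving conditions.
```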

Liability, Logging, and Corporate Motives

  • Strong concern that Tesla markets FSD as near‑autonomous while legally treating all failures as driver error, including edge cases where the system disengages just before impact.
  • Some call for third‑party code and log audits for safety‑critical systems, comparing the bar to aviation or even casino software.
  • Tesla’s legal stance invokes “competitive harm” if detailed crash logs are released; critics compare this unfavorably to pharma trials and to earlier promises about open patents and advancing safety.
  • A few defend Tesla’s right to protect internal data and fear misinterpretation by hostile media, but others respond that data from public roads and public risk should not be treated as trade secrets.

User Experiences & Brand Perception

  • Anecdotes diverge:
    • Some HW4/v12 owners say FSD now feels like a genuine safety aid on most trips.
    • Others describe poor object detection (e.g., bins vs children), frequent construction-zone failures, and reliance on human “babysitting,” which they consider more stressful than driving.
  • Subscription pricing for a “safety feature” is widely criticized on principle.
  • Several argue rivals (especially Chinese EVs and Waymo-style L4 systems) now exceed Tesla on quality or safety, with Tesla leaning heavily on stock-market hype and brand politics.

LLMs and Elixir: Windfall or deathblow?

LLMs as Coding Aid for Elixir and Other Languages

  • Several commenters describe LLMs as a “windfall” for working in new or niche languages (Crystal, Elixir, Rust, Go), especially to fill library gaps and explain concepts.
  • Elixir/Phoenix is seen as particularly well-suited: low boilerplate, functional style, and easy-to-verify small code increments make human review of LLM output less painful than in large, side‑effectful Python/JS stacks.
  • Others report that LLMs still frequently get stuck, hallucinate fixes, or degrade into type‑casts and loops, especially with React/React Native or niche ecosystems.

Safety, Crashes, and the BEAM Runtime

  • Pro‑Elixir comments highlight that BEAM’s process isolation and supervision mean generated code can fail without taking down the whole system, reducing firefighting compared to Python or similar.
  • Critics counter that “practically never crash” is overstated: NIFs can crash the VM, memory/storage exhaustion and bad architecture still apply, and Erlang/Elixir provide fault containment, not immunity.
  • There’s partial agreement that BEAM is robust but not unique in preventing full machine crashes for web workloads.
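The "fault containment, not immunity" point above can be sketched outside the BEAM too; a rough Python analogue of a one-for-one supervisor (function names and restart limit are invented for illustration) shows the idea: a crashing task is logged and retried, and only that task gives up, not the whole system:

```python
import traceback

def flaky_worker(x):
    # Stands in for generated code that sometimes fails.
    if x % 3 == 0:
        raise ValueError(f"bad input: {x}")
    return x * 2

def supervise(task, arg, max_restarts=3):
    """Run a task; on crash, log and retry instead of taking the system down.
    Loosely analogous to a one_for_one supervisor with a restart limit."""
    for _attempt in range(max_restarts + 1):
        try:
            return task(arg)
        except Exception:
            traceback.print_exc()
    return None  # restart intensity exceeded: give up on this task only

results = [supervise(flaky_worker, x) for x in range(5)]
# x=0 and x=3 crash on every attempt and yield None; the rest succeed.
print(results)  # → [None, 2, 4, None, 8]
```

As the critics note, this containment says nothing about shared resources: memory exhaustion or a bad native extension still takes everything down, in Python as on the BEAM.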

Is Elixir General-Purpose or Specialized?

  • Long subthread argues whether Elixir is truly general purpose.
  • Many say the language, history, and ecosystem are heavily biased toward long‑running services/servers; it’s awkward for CLIs, games, CPU‑heavy tasks, or some embedded work. VM startup time and C integration are recurring pain points.
  • Others emphasize that you can build CLIs, compilers, or games; they distinguish theoretical generality from practical applicability and note “BEAM for everything” is not a mainstream stance.

Designing Languages and Docs for LLMs

  • Core insight noted: languages and ecosystems now need to “market themselves” to LLMs via LLM‑friendly docs (e.g., terse usage rules) and consistent patterns.
  • Some worry this becomes the new SEO/adtech game, skewing evolution toward what models like rather than what humans need.
  • Ideas surface for LLM‑optimized languages: strict and expressive type systems, non-ambiguous syntax, high information density; Moonbit, Gleam, Rust, Elm are mentioned.

Experiences, Tools, and Models

  • Positive reports for Elixir with Cursor, Windsurf, Claude, and Sonnet; Gemini is often described as weaker and JS/React‑biased.
  • Tools like tidewave (LLM with iex) and Phoenix.new’s agentic generator show LLMs running their own REPL/debug loops and building Phoenix apps from plans.
  • Some developers claim weeks of work fully delegated to LLMs (with review), and use them as tutors by having them generate code and tests, then learning by fixing failures.

Craft, “Vibe Coding,” and Jobs

  • There’s tension between people who see “vibe coding” as abdicating craft and others who see it as just another tool trade‑off.
  • A few argue that people with no real tech preferences or depth are unlikely to displace experienced engineers, even with LLMs.

Cheap yet ultrapure titanium might enable widespread use in industry (2024)

New deoxygenation method & the yttrium problem

  • The Nature paper’s process removes oxygen from molten titanium using yttrium metal plus yttrium fluoride.
  • Resulting titanium can contain up to ~1% yttrium by mass; commenters note this contradicts “ultrapure” marketing.
  • Debate centers on whether 1% Y is acceptable:
    • Oxygen is extremely harmful to titanium’s ductility; trading O for Y may be a net win.
    • Yttrium is already used in some alloys and is likely benign for many structural/industrial uses but undesirable for implants or highly specialized alloys.
  • Economically, yttrium is expensive and supply‑constrained; 1% content could add notable cost and create geopolitical risk, leading some to label this potentially uneconomic without further process refinement.

Alternatives and follow‑on processing ideas

  • Commenters list other approaches: molten‑salt electrolysis (FFC Cambridge/OS), calciothermic routes, hydrogen plasma arc melting, calcium‑based deoxidation, magnesium hydride reduction, and solid‑state routes (e.g., Metalysis).
  • No clear consensus on which are most efficient or scalable; details are mostly at the “survey of ideas” level.
  • Ideas like separating yttrium by density from molten titanium or grinding off deoxygenated surface layers are raised but quickly run into practicality issues given titanium’s machining difficulty.

Titanium’s real bottleneck: manufacturability, not ore price

  • Multiple practitioners stress that raw material cost is only a fraction of titanium part cost.
  • Core problems:
    • Very low thermal conductivity → localized overheating during machining.
    • High reactivity when hot → ignition risk, especially shavings and in reactive atmospheres (e.g., wet chlorine pipelines).
    • Difficult casting (high melting point, inert atmospheres), poor ductility for forming, specialized tooling and copious coolant needed.
  • As a result, machining time, tool wear, safety measures, and process constraints dominate the economics.

Material behavior & comparison to other metals

  • Discussion explains “protective oxides”: Al, Ti, stainless steels form thin, adherent oxides that block further corrosion; iron rust is porous/flaky and accelerates corrosion instead.
  • Yttrium is framed as a “getter”: a less harmful impurity that binds oxygen, analogous to how steelmaking adds elements to capture undesirable impurities.

Impact on industrial and consumer use

  • Skeptical view: even if titanium sponge becomes cheap, widespread substitution for steel/aluminum is unlikely; it remains hard and dangerous to work, so everyday items won’t suddenly switch.
  • Nuanced counterpoint: cheaper titanium could expand some niches—3D‑printed aerospace parts, eyeglass frames, corrosion‑critical components, medical devices where Y contamination can be managed or avoided.
  • For things like phones and watches, several argue titanium is mostly marketing: weight savings are small, hardness is worse than stainless, and scratch resistance isn’t better.

Energy-cost and fusion tangent

  • One line of discussion wonders if cheaper energy (solar, future fusion) will naturally make titanium production cheap regardless of process.
  • Replies range from “fusion is always 20 years away” skepticism to cautious optimism about well‑funded private fusion efforts; no resolution, and relevance to near‑term titanium economics is left unclear.

OpenAI slams court order to save all ChatGPT logs, including deleted chats

Deleted vs. “Hidden” Chats and User Trust

  • Many see the “real news” as confirmation that “deleted” and “temporary” ChatGPT chats were never truly gone—now made explicit by the court order.
  • Commenters argue calling this “deletion” is misleading; at best it’s soft-delete plus retention “unless legal obligations,” which now covers essentially everything.
  • Several people regret having shared highly personal or therapeutic content with ChatGPT, now realizing it may be stored indefinitely and potentially exposed.

Legal Holds, Discovery, and Court Reasoning

  • Others note that litigation holds and preservation orders are standard: once sued, a company must stop destroying potentially relevant evidence.
  • The judge appears to have lost patience after OpenAI resisted or dodged questions about privacy-preserving alternatives (e.g., anonymization).
  • Critics respond that ordering retention of all logs (including non‑US, non‑party users) is a grossly overbroad “fishing expedition” that offloads risk onto millions of uninvolved people.

GDPR, Jurisdiction, and Conflict of Laws

  • Multiple comments highlight potential conflict with EU GDPR “right to erasure,” though GDPR already has carve-outs for legally required retention and legal claims.
  • Debate centers on whether a US court order can justify processing EU users’ data contrary to EU law, especially via US‑based infrastructure.
  • Some argue this illustrates why non‑US regulators distrust US cloud providers and why EU entities insist on EU-incorporated subsidiaries and local hosting.

Technical and Semantic Complexity of Deletion

  • Long subthreads explain why hard, deterministic deletion across databases, logs, laptops, backups, and analytics pipelines is technically hard and expensive.
  • Proposals like per-user encryption keys or row‑level encryption are discussed; some say feasible in practice at smaller scale, others say performance costs are prohibitive.
  • Several advocate at least clearer language: distinguish “removed from your view” from “cryptographically unrecoverable.”
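The per-user-key proposal discussed above is often called crypto-shredding: encrypt each user's data under a key stored separately, and "hard delete" by destroying the key, which makes every backup copy unrecoverable at once. A minimal stdlib-only sketch (the SHA-256 counter-mode keystream is illustrative only, not production cryptography):

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as an illustrative keystream (NOT real crypto).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes):
    nonce = secrets.token_bytes(16)
    ks = keystream(key, nonce, len(plaintext))
    return nonce, bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

# Per-user key, stored apart from the data stores and backups.
user_keys = {"alice": secrets.token_bytes(32)}
nonce, blob = encrypt(user_keys["alice"], b"private chat history")

# "Hard delete" = destroy the key; every retained copy of blob becomes noise.
del user_keys["alice"]
```

The performance objection in the thread is that every read now requires a key lookup and decryption, and cross-user analytics over encrypted rows become expensive, which is why some consider this feasible only at smaller scale.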

Enterprise, API, and Business Fallout

  • Many note this applies to ChatGPT Free/Plus/Pro and API traffic, undermining prior assurances that API data wasn’t retained beyond short windows.
  • Commenters predict enterprise customers in regulated sectors (healthcare, defense, finance) will reevaluate or terminate OpenAI use, or move to Azure‑hosted or fully on‑prem models.
  • Others reply that almost all SaaS and privacy policies include “unless required by law,” so legal recourse against OpenAI is limited.

Shift Toward Local and Open Models

  • News is widely cited as a strong argument for local or self-hosted LLMs (DeepSeek, Mistral, etc.), even with weaker quality and higher setup costs.
  • Underlying sentiment: any cloud LLM should now be assumed to keep prompts indefinitely and to be discoverable in future litigation or breaches.

Cursor 1.0

Tool landscape and comparisons

  • Commenters note the ecosystem is crowded: Cursor, Claude Code, Cline, Roo Code, Aider, Copilot, Zed agent, JetBrains Junie, Windsurf, Emacs+gptel, AugmentCode, Ampcode, etc.
  • Benchmarks like liveswebench exist but are seen as incomplete; differences in workflow, model choice, context size, MCP support, and UI matter more than a single score.

Cursor vs other AI coding tools

  • Strong praise for Cursor’s tab completion/“next edit”; many say it’s the best autocomplete they’ve used and a major reason to stay, even if they rarely use agents.
  • Others find Cursor’s agent weaker than Claude Code, Cline, Roo, or Aider: reports of wrong tool calls, premature stopping, messy diffs, and issues on large codebases.
  • Claude Code is widely praised as a smarter, more capable agent (good at using CLI/SSH, grokking big repos), but burns tokens quickly; many end up on expensive Max plans.
  • Aider is liked for tight git integration and control (micro‑commits, undo per prompt, custom rules). Cline/Roo are praised for pure agent workflows but can be very costly with reasoning models.
  • JetBrains Junie and Zed’s agent are seen as “good and improving”, appealing to those who dislike VSCode forks.

Pricing, value, and economics

  • Strong debate over paying $100–$200/month personally: some see it as trivial vs developer time; others outside high‑pay markets say it’s unaffordable.
  • Cursor’s old opacity around “Max” pricing is criticized; current “API cost + ~20%” model is viewed as more transparent.
  • Several note it’s easy to burn $10–$70/day on API‑metered agents; Cursor’s flat fee is valued as cost control.
  • Many assume all players are subsidizing usage and not yet profitable; some question the sustainability of current pricing.

Workflow, UX, and agents vs autocomplete

  • Split preferences: some want “agent as OS” (Claude Code, Codex) orchestrating across filesystem, terminals, and git; others prefer staying in the editor with strong autocomplete and light chat.
  • Concerns that agents require constant command approvals and can create sprawling, hard‑to‑understand diffs; requests for better mapping from each change back to the agent’s reasoning.
  • Heavy users often run multiple tools/IDEs in parallel (e.g., Zed or JetBrains for editing, Cursor/Claude Code for agents).

Technical and product concerns with Cursor

  • Complaints: frequent breaking updates, sparse or late docs, opaque context selection, Python regressions, Windows “q” command bug, memory leaks, lagging behind upstream VSCode and its extensions.
  • Multi‑root and large‑repo behavior can be flaky; users resort to custom rules files and git workflows to compensate.
  • Some dislike Cursor’s divergence from VSCode (marketplace access, dev containers), and the closed‑source nature of a VSCode fork.

Business model, strategy, and trust

  • Skepticism that a proprietary VSCode fork is a durable strategy given Microsoft’s incentives and Copilot’s deep integration.
  • Others argue Cursor’s fast growth and polish justify sticking with it rather than chasing every new agent.
  • Multiple commenters express unease about possible astroturfing and bot‑written “glowing” reviews in AI‑tool threads generally, making online feedback feel less trustworthy.

Amelia Earhart's Reckless Final Flights

Earhart’s skill, recklessness, and media myth

  • Several commenters repeat a theme from the article: Earhart was considered a reckless pilot by experienced aviators, in contrast to other pioneering women like Jacqueline Cochran.
  • Her public image is described as heavily manufactured by a publicity machine, likened to modern influencers whose branding outpaces their competence.
  • One thread argues she was pushed beyond her actual technical abilities (navigation, radio) by fame and the pressure to keep generating “firsts,” drawing parallels to Stockton Rush and Donald Crowhurst.

Gender, strength, and aircraft/vehicle design

  • Early aircraft lacked boosted controls and were built around typical male upper-body strength; some argue women of that era would be physically unable to handle certain emergencies.
  • Others push back on simplistic “designed for women” narratives (e.g., power steering/brakes in cars), saying these technologies primarily served safety and comfort for all drivers.
  • Broader debate about how criticism of a woman can be misread as sexism, and how icons of an identity become hard to critique.

Control forces, dives, and aerodynamics

  • Long, technical discussion on “feel forces,” control-surface travel limiters, structural failure, and flow separation at high speed.
  • Anecdotes about 727 and 707 recoveries, WWII fighters, dive brakes, and how loss of effective airflow can make control surfaces useless regardless of pilot strength.
  • Explanation of trim tabs and why even big jets can be flown by hand, but out‑of‑trim conditions can quickly exceed human endurance.

737 MAX / MCAS dispute

  • Very detailed back‑and‑forth over MCAS:
    • One side: concept sound, implementation sloppy; pilots failed to execute established runaway-trim memory procedures despite prior incidents and emergency directives.
    • Opposing side: MCAS itself was a dangerously conceived system (single‑sensor, repeated large nose‑down commands, hidden from pilots, excessive workload), making crashes largely a Boeing and regulatory failure.
  • Multiple references to official reviews (JATR, FAA boards) and disagreement over how much blame to assign to crews vs manufacturer and regulators.

Radio/navigation errors in Earhart’s final flights

  • Linked analyses emphasize her weak radio knowledge and critical equipment decisions (e.g., removal or damage of antennas, reliance on misunderstood HF propagation).
  • She had already failed a practice ocean navigation exercise using similar techniques, then didn’t repeat it.
  • Commenters see this as part of a broader pattern: success‑oriented planning, underestimating technical complexity, and inadequate preparation.

Risk, “greatness,” and early aviation culture

  • Many note that early aviation was broadly reckless, but distinguish between unknown risks and ignoring known ones.
  • Comments reflect ambivalence: admiration for courage and pioneering spirit vs insistence that hero narratives not obscure serious errors in judgment.
  • Several people frame Earhart as both inspirational and cautionary: you “have to be a little crazy” to make history, but aviation is unforgiving of carelessness.

Language, style, and legacy

  • Side thread on The New Yorker’s diaeresis (“coördinate,” “naïve”): explanations of what it is, whether it’s useful or pretentious, and how rare it is outside that magazine.
  • Discussion of how publicity and national myth-making ensure Earhart’s legend vastly eclipses technically more impressive or earlier circumnavigators, especially women whose names are now largely forgotten.

Autonomous drone defeats human champions in racing first

Military and Warfare Implications

  • Many see this as directly relevant to battlefield drones: fast, vision-only autonomy that could dodge fire and continue after jamming, especially in Ukraine/Russia–style wars.
  • Commenters argue small, cheap autonomous drones are emerging as a new “equalizer” weapon, potentially analogous to nuclear deterrence for smaller states.
  • Others stress that this makes it easier for weak or non-state actors to strike high‑value targets (e.g., leadership, critical infrastructure) from afar.

Current Use of Autonomy in War

  • Disagreement over how widespread autonomous drones already are:
    • One side: most frontline FPV drones in Ukraine/Russia are manually piloted (analog or fiber), with only niche use of auto‑lock or path-following systems.
    • Other side: there is “enormous” adoption of partial autonomy (lock‑on modules, autonomous loitering recon, GPS/INS navigation), though full AI swarms are not yet common.
  • Recent analyses cited in the thread say a broad AI/ML “drone revolution” is not yet here; cheap manual FPV remains dominant due to cost and robustness.

Ethics, Misuse, and Regulation

  • Strong anxiety about “Slaughterbots”-style scenarios: swarms of tiny, autonomous assassination drones targeting civilians, politicians, or journalists.
  • Some argue a global pause is needed; others respond that, unlike nukes, the tech is too cheap and widely available for any pause to be enforceable.
  • Worries include terrorism, anonymous political killings, and the erosion of any clear boundary between “battlefield” and civilian life.

Countermeasures and Arms Race

  • Suggested defenses: RF jamming, lasers (e.g., Iron Beam), CIWS-style guns, anti-drone drones, nets, dense surveillance, and possibly EMP-like devices.
  • Concerns that defenses will be costly and localized, while attackers can overwhelm them with cheap swarms; autonomy also undercuts radio‑based jamming.
  • Expectation of “drone vs drone” battles and escalating anti‑drone tech, with combined-arms tactics (e.g., striking air defenses once they reveal themselves).

Technical Details and Limits of the Racing System

  • System runs entirely onboard (Jetson Orin NX + IMU + single forward camera); no GPS, lidar, or motion capture.
  • A reinforcement‑learning policy directly outputs motor commands, replacing classic PID flight control.
  • Commenters note this achievement is in a highly constrained, known-track environment; RL generalization to arbitrary courses or messy real‑world settings is seen as an unsolved problem.
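The "policy directly outputs motor commands" point can be illustrated with a toy forward pass: a tiny fixed-weight network mapping an observation vector straight to four normalized throttle commands. Dimensions, weights, and feature choices here are placeholders, not the racing system's actual architecture:

```python
import math
import random

random.seed(0)

OBS_DIM = 12   # e.g. pose estimate + IMU rates + next-gate direction (placeholder)
HIDDEN = 16
MOTORS = 4

# Random weights stand in for parameters that RL training would learn.
w1 = [[random.uniform(-0.5, 0.5) for _ in range(OBS_DIM)] for _ in range(HIDDEN)]
w2 = [[random.uniform(-0.5, 0.5) for _ in range(HIDDEN)] for _ in range(MOTORS)]

def policy(obs):
    """Map an observation directly to per-motor thrust commands in [0, 1],
    with no PID loop in between."""
    h = [math.tanh(sum(w * o for w, o in zip(row, obs))) for row in w1]
    # Sigmoid squashes each motor command into the valid throttle range.
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, h)))) for row in w2]

commands = policy([0.1] * OBS_DIM)
assert len(commands) == MOTORS and all(0.0 <= c <= 1.0 for c in commands)
```

The generalization caveat in the thread applies exactly here: a policy trained on one known track encodes that track in its weights, and nothing in the architecture guarantees sensible outputs on an unseen course.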

Non-Military and Positive Uses

  • Suggested benign applications: search-and-rescue after disasters, infrastructure inspection, firefighting, accident forensics, and faster medical delivery.
  • Some still see even these as dual-use stepping stones toward more capable weaponized drones.

Ada and SPARK enter the automotive ISO-26262 market with Nvidia

Ada/SPARK vs C++ in safety‑critical systems

  • Debate centers on the F‑35’s move from Ada to C++: some see that as a regression driven by politics/contractors and tooling, not by technical merit.
  • Others argue C++ is fine if heavily restricted (MISRA-style) and backed by strong processes; language choice alone cannot fix bad systems engineering or management.
  • F‑35’s software problems are cited by some as evidence against C++; others blame organizational issues and talent distribution, not the language.

Tooling, ecosystem, and hiring

  • Pro‑C++ side stresses a richer commercial tool ecosystem (IDEs, compilers, analyzers) and larger talent pool.
  • Pro‑Ada side counters that Ada has multiple commercial compilers, static analyzers, and that many safety‑critical projects run on Eclipse‑like or niche IDEs anyway.
  • Concern is raised that specializing in Ada could be career‑limiting in some markets due to HR keyword filtering; others say embedded/firmware skills transfer well and pay in automotive/aerospace is solid.

Rust and other contenders

  • Some expected industry to move toward Rust instead; Rust is already used in some automotive contexts and has ISO‑26262‑qualified tooling (e.g., Ferrocene).
  • View in thread: SPARK targets full formal verification and very high assurance (e.g., “artificial heart” type systems), while Rust focuses more on memory safety with less emphasis on proof.
  • Opinion that Ada/Rust comparison is often distorted by poor or AI‑generated information.

Ada technical and safety properties

  • Advocates highlight: strong typing (including units), built‑in fixed‑point types, range checks, constrained profiles like Ravenscar, and SPARK theorem proving for memory and functional safety.
  • Disagreement over whether Ada’s abstractions are “zero‑cost”: proponents say generics and slicing can be compiled as efficiently as C++/Rust; skeptics worry about copies vs views and lack of clear monomorphization guarantees.
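Ada's constrained subtypes raise Constraint_Error the moment a value leaves its declared range; a rough Python approximation of that runtime range check (type names and bounds invented for illustration) shows what advocates mean by catching bad values at the assignment site:

```python
def ranged(name, lo, hi):
    """Build a checked numeric type, loosely analogous to an Ada range subtype."""
    class Ranged(float):
        def __new__(cls, value):
            if not lo <= value <= hi:
                # Ada would raise Constraint_Error here, at the point of assignment.
                raise ValueError(f"{name} out of range: {value!r} not in [{lo}, {hi}]")
            return super().__new__(cls, value)
    Ranged.__name__ = name
    return Ranged

# Hypothetical unit-bearing types for a throttle controller.
Percent = ranged("Percent", 0.0, 100.0)
Celsius = ranged("Celsius", -40.0, 150.0)

Percent(55.0)       # fine
try:
    Percent(120.0)  # rejected at construction, like an Ada range check
except ValueError as e:
    print(e)
```

The difference, of course, is that Ada's compiler knows these constraints statically and SPARK can discharge many of them by proof, whereas this sketch only checks at runtime.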

Certification, automotive context, and hardware

  • ISO‑26262 and DO‑178C discussed as process‑heavy standards; languages and tools are “qualified/certifiable” rather than “making systems safe” by themselves.
  • AUTOSAR is portrayed as widely used but bureaucratic and domain‑locked.
  • ECC RAM is said to be standard for serious safety‑critical automotive systems; Toyota unintended‑acceleration cases are referenced as motivation for stronger HW/SW safety.

Redesigned Swift.org is now live

Website redesign, docs, and UX

  • Many like the new visual design, gradients, and use of the bird motif as a scrolling separator; some think it’s “actually cool.”
  • Others criticize usability: missing or hard‑to‑find search, oversized footer, and first‑page examples that lack explanation.
  • Several compare language sites: Swift.org now feels closer to the increasingly complex Go/TypeScript sites; Lua’s and Python’s documentation are praised for clarity and simplicity.

Swift on the server and backend

  • People ask about using Swift (especially Vapor) for web backends; commenters cite benchmark results where Vapor ranks poorly against Go, C#, C++, and PHP/Laravel.
  • Some attribute this to project maturity and optimization effort rather than language speed; others call the benchmarks “probably bad” and point to Apple’s blog about successful Java→Swift/Vapor migrations.
  • Practitioners report good runtime performance in real projects, with compile times and smaller ecosystem as bigger pain points.

Apple, governance, and community trust

  • Strong resentment toward Apple’s treatment of developers and Swift’s original leadership is voiced; a linked forum incident is seen as reflecting badly on project stewardship.
  • Others argue Google and other vendors aren’t obviously better, though this is disputed with examples of friendlier ecosystems (Go, Dart, Android vs iOS).
  • There’s broad agreement that Swift’s reputation is heavily shaped—positively or negatively—by its origin at Apple.

Language design, complexity, and technical traits

  • Some view Swift as an excellent “sweet spot”: fast, memory‑safe, OO‑friendly, easier than Rust.
  • Critics say it has accreted C++‑like complexity, frequent breaking changes, slow type checking, and convoluted concurrency/actors, with ABI stability arriving relatively late.
  • ARC vs GC is debated: ARC can reduce memory footprint but introduces reference cycles; several say tools like Instruments make leaks manageable, though they’re Apple‑only.

Cross‑platform usage, tooling, and ecosystem

  • Many struggle to see compelling non‑Apple use cases due to weaker libraries and tools; others highlight Swift on Linux/Windows, server, embedded, and GTK, with Qt bindings emerging.
  • Some want better non‑Xcode tooling; others note there is already an LSP, formatter, and usable setups in Emacs, VS Code, and Nova.
  • A prominent marketing line claiming Swift is the “only language” that scales from embedded to cloud is widely criticized as inaccurate, with GitHub issues filed.

Embedded Swift and batteries‑included aspirations

  • The homepage’s emphasis on “Embedded” intrigues people; Apple’s Secure Enclave use is cited, but some say that’s not strong evidence of general embedded viability.
  • Several wish for more batteries‑included server/infra support (e.g., a solid standard HTTP server, more extensive stdlib), similar to Go, even if not everyone agrees on large stdlibs.

Curtis Yarvin's Plot Against America

Perceived Authoritarian Drift & “Plot Against America”

  • Several commenters see Yarvin’s ideas reflected in current U.S. trends: normalized political violence, stronger “unitary executive,” targeting of dissidents, and ambitions to hollow out the federal state in favor of state–corporate power blocs.
  • Some argue Trump is only a “vibes coup”; the real danger is a future, more competent authoritarian who inherits the altered norms and legal tools.
  • Others think this is alarmist or analytically shallow, arguing his premises have already failed predictive tests.

Platforming vs. Ignoring Yarvin

  • There’s tension between “don’t amplify cranks” and “ignoring cranks got us here.”
  • One side says giving him New Yorker treatment legitimizes him and wastes attention; the other says once his ideas influence people in power, mainstream scrutiny is a civic duty.
  • Multiple people stress that debate itself doesn’t “legitimize” ideas; suppression by omission is seen as both unrealistic and dangerous.

Influence on Tech & Current Politics

  • Commenters highlight ties to wealthy tech figures and to a sitting vice president who has publicly praised Yarvin and borrowed his rhetoric about purging the civil service.
  • Project 2025 and related plans to remake the executive branch are repeatedly linked, by commenters, to Yarvin-ish blueprints for “technological fascism.”

Assessment of His Ideas & Ideology

  • Harsh view: his political program is neo-feudal, unoriginal (likened to old technocracy), internally inconsistent, and morally abhorrent (race science, genocide talk, “states own citizens”).
  • Milder critics say his diagnoses of how power actually works and of democratic failure are sometimes insightful, but his solutions are teenage thought experiments.
  • A few defend him as “thought-provoking” and worth reading to stress-test democratic assumptions, while explicitly rejecting his conclusions.

Urbit & Technocracy as Mirrors

  • Urbit is cited as a technical analogue of his politics: grandiose re‑invention of everything, extremely complex, few real problems solved, but impressive-looking to non-experts.
  • Broader critique of technocrats: confusing aesthetic neatness for real-world effectiveness; “Brasilia in Substack form.”

Cultural & Meta Notes

  • Cyberpunk and Stephenson’s worlds (Snow Crash, Diamond Age) are invoked as dystopias misread as aspirational—parallels drawn to Yarvin fans.
  • Some lament that this kind of far-right guru gets major-media oxygen while socialist or egalitarian alternatives remain marginal.
  • On HN itself, users are split between flagging the thread out of exhaustion and insisting that, given his proximity to power, discussion is necessary.

Apple Notes Expected to Gain Markdown Support in iOS 26

Markdown support: export vs input

  • Initial reporting implied full Markdown input; commenters noted the underlying source only mentioned Markdown export.
  • The article was later corrected to clarify it’s export-only, disappointing those who want to write in Markdown or edit notes as plain text.
  • Export to Markdown is still seen as a big win for moving content into tools like Obsidian or Joplin, especially if indentation and attachments are preserved; some are skeptical Apple will cover all metadata and media.
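The export concern above, whether indentation and attachments survive the trip to Markdown, can be sketched with a toy exporter; the note structure here is invented for illustration and has nothing to do with Apple's actual internal format:

```python
def note_to_markdown(blocks):
    """Flatten a simple rich-note representation into Markdown text."""
    lines = []
    for block in blocks:
        kind = block["type"]
        if kind == "heading":
            lines.append("#" * block.get("level", 1) + " " + block["text"])
        elif kind == "bullet":
            # Nested indentation is the easy part to preserve...
            lines.append("  " * block.get("indent", 0) + "- " + block["text"])
        elif kind == "attachment":
            # ...attachments and metadata are where skeptics expect loss.
            lines.append(f"[{block['name']}]({block['path']})")
        else:
            lines.append(block["text"])
    return "\n".join(lines)

note = [
    {"type": "heading", "level": 2, "text": "Trip ideas"},
    {"type": "bullet", "indent": 0, "text": "Kyoto"},
    {"type": "bullet", "indent": 1, "text": "temples"},
]
print(note_to_markdown(note))
```

Tools like Obsidian or Joplin can ingest the resulting text directly, which is why even export-only support is seen as a win for portability.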

Editing experience: Markdown vs rich text

  • Some argue Markdown typing is slower on touch keyboards due to symbol entry; others say it’s faster than diving into the formatting UI, especially with physical keyboards.
  • Selection and formatting of already‑typed text on iOS is widely described as “fiddly,” so adding Markdown markers after the fact may be easier than using rich‑text tools.
  • There’s a clear split: some love Markdown and even mentally “render” it; others find raw Markdown ugly (especially inline URLs) and want rich text, or at least syntax that auto‑hides like Bear’s implementation.

Data portability and lock‑in

  • Strong warnings against deeply investing in Apple Notes because of opaque storage, weak export, and iCloud dependence.
  • Workarounds mentioned: IMAP‑backed notes (with limited feature support and potential corruption between clients), AppleScript tools on macOS, Shortcuts‑based exporters, and third‑party apps that bulk‑export to Markdown.
  • Some users accept lock‑in and treat Notes as a scratchpad for throwaway or temporary content; important material is moved to plain‑text/Markdown systems elsewhere.

iOS versioning change

  • Many were initially confused by “iOS 26,” assuming it was a joke or typo, then realized Apple is rumored to be moving to year‑style version numbers.
  • CalVer is seen as more understandable for non‑experts and helpful in aligning Apple’s OS releases, though some view it as marketing‑driven (and possibly influenced by competitors’ naming).

Markdown’s broader rise

  • Discussion connects Apple’s move and Microsoft Notepad’s new formatting/Markdown support to Markdown’s mainstreaming.
  • Several note Markdown’s importance as a “native language” for LLMs and its de facto dominance among lightweight markup formats, despite many dialects and alternatives (Org, AsciiDoc, LaTeX, etc.).

Perceptions of Apple Notes quality and scope

  • Users appreciate Notes’ simplicity, E2E encryption, and attachment support, but complain about bugs (cursor jumps, disappearing text, table glitches) and poor search.
  • Some want Notes to stay simple and free of LLM features, others find it too limited and migrate to Bear, Obsidian, Upnote, or Org‑based tools.
  • A recurring theme: Apple ships minimal features, leaves apps stagnating for years, then slowly backfills obvious capabilities like Markdown export.

A proposal to restrict sites from accessing a user’s local network

Security motivation and threat models

  • Many commenters support restricting public sites from talking to localhost or private IPs, calling current behavior “insane” given:
    • Web pages can already issue HTTP requests to LAN devices and localhost (including no-cors “simple” requests, <img> and <form> CSRF, WebSockets, DNS rebinding, timing attacks).
    • Examples discussed: Meta/Facebook localhost tracking IDs, Zoom’s localhost server zero‑day, routers/printers/dev servers with unauthenticated or weakly checked endpoints, Bitcoin or JSON‑RPC APIs with poor Content‑Type checks.
  • Clarifications: CORS mainly governs whether JS can read responses; many “simple” requests still reach local devices and can change state or trigger exploits even if the response is opaque.

Legitimate use cases that might break

  • Wide range of existing patterns rely on browser→local HTTP:
    • Desktop helpers behind web UIs (password managers, CAD/3D mouse support, 3D printers, Plex‑style dashboards, enterprise agents, OAuth redirect listeners on localhost).
    • IoT and NAS/router management UIs using a single cloud page that CORS‑fetches metrics directly from multiple local devices.
    • Local signing with smartcards/e‑IDs, local LLM/AI servers, Jupyter, test report viewers, LAN file‑sharing tools (e.g. PairDrop).
  • Some argue these are better implemented as:
    • Browser extensions with native messaging, custom protocol handlers, or fully local/static UIs.
    • UIs hosted on the device itself; but others describe severe pain around HTTPS, certificates, and “secure context” APIs for devices without public domains.

Permissions, UX, and granularity

  • Many welcome per‑site browser prompts (“this site wants to access devices on your local network”) similar to camera/mic.
  • Others think prompts are largely ineffective, citing macOS/iOS/UAC patterns where users reflexively click “Allow,” especially when sites instruct them to do so.
  • Concerns that the proposed permission is coarse‑grained (all local hosts/ports) and violates least‑privilege; suggestions include:
    • Firewall‑like controls (CIDRs, ports, localhost vs LAN vs VPN) and possibly mDNS‑backed, human‑readable device names in prompts.
    • API surfaces for extensions to enforce more detailed policies (some of which were weakened by Chrome’s Manifest V3).

Definitions of “local” and IPv6 / enterprise edge cases

  • Debate over using RFC1918 as “local”:
    • Enterprises often run massive 10/8 networks across many hops; some clients have public IPs but access RFC1918 services via VPN or site‑to‑site tunnels.
    • CGNAT and IPv6 (global addressing with ULAs/link‑local) blur local vs remote; some say there is no robust way to infer “site‑local” from an IPv6 address alone.
  • Several prefer admin‑configurable policies (group policy CIDR lists), or basing decisions on origin+destination categories rather than raw address classes.
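The ambiguity debated above is easy to demonstrate with the Python standard library's address classifications (a small illustration, not part of the proposal itself):

```python
# Why "is this address local?" is fuzzy: stdlib classifications cover the
# well-known ranges, but (as the thread notes) RFC1918-via-VPN, IPv6 ULAs,
# and CGNAT don't map cleanly onto a single "local" bit.
import ipaddress

for text in ["10.1.2.3", "8.8.8.8", "fd12:3456::1", "fe80::1", "100.64.0.1"]:
    ip = ipaddress.ip_address(text)
    print(text, "private:", ip.is_private, "link-local:", ip.is_link_local)

# 10.1.2.3      private: True   (RFC1918 -- yet may only be reachable via VPN)
# 8.8.8.8       private: False
# fd12:3456::1  private: True   (IPv6 ULA, fc00::/7)
# fe80::1       private: True, link-local: True
# 100.64.0.1    CGNAT shared space (100.64.0.0/10): not is_private, but not
#               is_global either -- there is no clean "local" answer
```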

Trust in Google and standards politics

  • Significant skepticism that Google’s motives are purely security‑driven; some see this as raising the bar for self‑hosted/local solutions while Google’s own tracking and ads remain untouched.
  • Others counter that the threat (local probing, tracking, and exploitation) is real and that a vendor‑neutral spec any browser can adopt is still valuable despite Google’s market power.

Alternatives and current mitigations

  • Users mention:
    • uBlock Origin’s “Block outsider intrusion into LAN” list, Firefox’s Behave extension, legacy IE security zones, Safari on iOS already asking per‑site for local access.
  • Some advocate going further and generally clamping down on cross‑site requests, pushing aggregation back to servers instead of browsers.

Mistral Code

Product clarity and availability

  • Several commenters couldn’t tell from the landing page whether Mistral Code is a CLI, IDE plugin, or standalone IDE; the VS Code extension link is buried and then leads back to “enterprise only.”
  • The official FAQ confirms: Mistral Code is currently a premium feature only available to Enterprise customers with an active agreement and allocated “seats.”
  • Many developers say they’d like to try it personally but are blocked by the enterprise gate, making the HN launch feel pointless for them.

Enterprise-only and “contact us” pricing

  • Strong pushback on “contact us” pricing, especially with a form asking for company size and revenue; some call it “dead on arrival” for individual developers and small companies.
  • Others argue this model is standard and effective for high-ticket enterprise software, where deals are in the tens or hundreds of thousands and require contracts, security reviews, and ongoing relationship management.
  • There’s debate over whether bespoke pricing is primarily about value-based, complex deployments vs. extracting maximum money and filtering out “small potatoes.”

Target market and competitive context

  • Several infer Mistral is targeting large, compliance‑sensitive orgs (e.g., EU banks) that need EU-hosted/self-hosted solutions and are wary of US providers.
  • Critics note Mistral’s models are not generally seen as SOTA and suggest that if they were, transparent pricing and self-serve trials would be easier to justify.
  • Others point out that Copilot (with free quotas), Zed+Ollama, and various open agents already provide similar or better experiences, often locally and cheaply.

Fork of Continue and open‑source monetization

  • The VS Code listing states Mistral Code Enterprise is a fork of Continue, with credit given.
  • Some say you can largely replicate Mistral Code by configuring Continue with Mistral models (e.g., Codestral/Devstral) and possibly fine‑tuning via Mistral’s API.
  • This triggers a broader discussion:
    • Monetizing Apache/MIT/BSD-licensed projects is legally fine; concerns are moral/economic, not legal.
    • Others recommend copyleft (GPL/AGPL), MPL, BSL, or dual licensing if you don’t want big companies to capture all the value.
    • There’s frustration that permissively-licensed projects get commercialized by better-resourced companies while original authors see little financial benefit.

Local vs hosted assistants and future of this niche

  • Some believe IDE assistants will trend toward “zero-cost and local” using open models and local runners; competition against that with locked-down, remote enterprise products may be difficult.
  • Others counter that many enterprises explicitly want managed, centrally controlled solutions and will accept “contact us” friction to get compliance, security, and support.

Tesla shows no sign of improvement in May sales data

Investor Focus & Valuation

  • Several comments argue current shareholders largely ignore weak sales and are pricing in speculative future cash flows from robotaxis, Optimus robots, and “largest AI project” narratives.
  • Others say this is pure faith/FOMO: market cap and P/E are seen as meme‑stock levels, disconnected from car‑company fundamentals.
  • Some suspect institutional investors are quietly exiting while retail and ETFs hold the bag; a few even flirt with “Enron 2.0”–style conspiracy theories, while others push back that conspiracies are overused explanations.

Robotaxis, FSD, and Safety

  • Many are deeply skeptical that Tesla’s robotaxi plans justify its valuation, even if technically successful; comparisons to Uber’s thin margins are common.
  • Waymo is repeatedly cited as the benchmark: millions of paid driverless rides, lidar-based sensing, few major safety scandals.
  • Tesla’s camera‑only approach is criticized as inherently riskier; some suggest Musk can’t admit lidar might be necessary without massive retrofits.
  • Owners are split: some say FSD now drives long distances with almost no interventions; others report it struggles on nonstandard roads and see it as far from the promises (e.g., coast‑to‑coast drives, 1M robotaxis by 2020).
  • Calls for independent, third‑party safety validation instead of Tesla‑supplied statistics are strong.

Build Quality, Reliability, and Design

  • Numerous references to “ridiculous” manufacturing defects, longstanding QC problems, and poor reliability surveys; Cybertruck is cited as a particularly botched launch.
  • Critics say interiors have been cheapened and UX degraded: loss of stalks, everything on a central touchscreen, awkward door handles, and minimal physical controls viewed as cost‑cutting, not elegance.
  • Some disagree, praising the 3/Y as comfortable, simple, and cheaper than German EVs with similar features.

Competition and Alternatives

  • Many point out that Tesla’s early technical lead has eroded: Hyundai/Kia, Ford (Mach‑E, F‑150 Lightning, future models), GM, Nissan, Toyota, and especially Chinese makers like BYD/NIO now offer compelling EVs.
  • The shift to NACS by other automakers is seen as removing Tesla’s Supercharger moat.
  • Some buyers are explicitly choosing hybrids (e.g., Prius) or non‑Tesla EVs for cost, reliability, or road‑trip economics.

Politics, Musk, and Brand Toxicity

  • A major thread is political: multiple commenters say they won’t buy or will sell Teslas due to Musk’s behavior, perceived authoritarian and extremist sympathies, and heavy entanglement with government contracts.
  • Others lament “giving up high‑quality EVs over politics,” but get strong pushback that supporting or boycotting a CEO is a legitimate factor once products are no longer uniquely superior.
  • Several owners report they like their cars but will never buy another; some sold solely to avoid association with the brand.

Consumer Behavior & Market Trajectory

  • Anecdotes suggest a growing cohort of ex‑or‑current owners moving to Hyundai, Toyota, etc., and friends who abandoned Tesla purchase plans.
  • Some still praise Teslas as “transformational” and hold the stock, hoping for long‑term upside.
  • Overall sentiment in the thread leans toward: once‑innovative product, now facing stronger competition, quality issues, and severe reputational damage that may not yet be fully reflected in sales or stock price.

We are no longer a serious country

Bond markets, tariffs, and Fed risk

  • Commenters agree bond markets matter more than equities and that policy whiplash can erode confidence in the dollar.
  • Some fear a future purge of Fed leadership and replacement with loyalists, seeing this as a 1930s-style risk.
  • Others note markets have already forced the administration to back down on tariffs more than once, but opponents say the repeated attempts show persistence, not restraint.
  • The “TACO trade” (Trump Always Chickens Out) is cited as an example of traders exploiting tariff-related volatility.

Is the administration “dumb” or strategically ruthless?

  • One camp sees the administration as incompetent, pointing to poorly drafted executive orders, constitutional overreach, and weak implementation capacity.
  • Another argues it is legally aggressive and coherent in its own goals: generating volatility, enriching insiders, expanding foreign-policy tools, and sending hardline messages to allies and rivals.
  • Several emphasize that incompetence and danger can coexist; even “failed” orders still enable abuses by agencies.

Symbolism, bill naming, and seriousness

  • Some dismiss focus on the “One Big Beautiful Bill Act” name as trivial; only legal effects matter.
  • Others see the name as emblematic of shallow branding and a broader lack of seriousness among enablers.
  • There is debate over whether such branding is actually quite effective at agenda-setting and distraction.

US hegemony, rules-based order, and tariffs

  • Critics argue US elites long ago embraced an exceptionalist “rules-based order” that mainly means “the US makes the rules,” with diplomacy hollowed out.
  • Defenders counter that US security spending and interventions (e.g., protecting shipping lanes) deliver global public goods and soft power.
  • Tariffs are viewed both as a dangerous form of saber-rattling that could undermine reserve-currency status and as a potent diplomatic lever.

Red vs blue politics and voter grievances

  • There is disagreement over how much current policy is “red state,” but many see strong overlap on social conservatism, deregulation, tax cuts, and hostility to federal bureaucracy.
  • Multiple comments probe why poorer red-state voters support such policies:
    • Some emphasize emotional grievance politics, scapegoating of out-groups, and delight in “punching down.”
    • Others note shared left–right grievances about elites, inequality, and loss of control, even if right-wing policy responses worsen those problems.
  • Data and anecdote alike are cited that red states generally have worse socioeconomic outcomes, yet out-migration toward them suggests their policy mix still attracts many.

Interpreting the economic signals and Krugman’s framing

  • Some see the divergence between interest rates and the dollar as historically worrying and consistent with the article’s thesis of declining seriousness.
  • Others accuse the article (and the Financial Times chart it references) of cherry-picking time windows; longer-range plots paint a more ambiguous picture.
  • A recurring frustration is the mix of valid concerns with rhetorical flourishes (e.g., title, bill name) that feel more like partisan doomsaying than sober risk assessment.

Decline, rhetoric, and public disengagement

  • Historical declines (Venezuela, Argentina, Portugal, Rome) are invoked to argue rich countries can indeed fail and the US might be on that path.
  • Another group criticizes absolutist claims like “no longer a serious country” as lazy, ahistorical, and unhelpful for finding solutions.
  • Several comments highlight widespread public detachment from markets and policy, polarized information ecosystems, and the difficulty of finding non-partisan, analytically rigorous guidance on what genuinely matters.

VC money is fueling a global boom in worker surveillance tech

AI, capitalism, and the purpose of surveillance

  • Commenters frame surveillance tech as an old trend supercharged by AI, arguing it’s being used to punish and control rather than help workers.
  • Several contrast “AI + socialism” (post-scarcity / The Culture–style utopia) with “AI + capitalism” (techno-feudalism, Manna-like micro‑management).
  • A minority suggest surveillance is a pragmatic response to eroded norms of duty and self‑accountability; others strongly counter that productivity is already high and stagnating wages, not “counterculture,” explain worker disengagement.

Global scope, labor rights, and sector differences

  • Some say strong labor protections (EU, Switzerland) make such tech largely irrelevant or illegal, but others insist “bossware is everywhere,” just with different legal constraints.
  • Academic, research, and healthcare jobs are seen by some as relatively insulated; others report growing monitoring of students, lecturers, labs, and hospitals.
  • Multiple participants say small businesses can be more invasive than large enterprises because owners face less internal oversight.

Business justifications vs critiques

  • Supporters of basic tools (GPS‑based clock-in, identity checks) argue they’re vital for small, low‑margin businesses and remote hourly work, especially outside rich countries.
  • Critics respond that constant monitoring reflects managerial failure: if performance changes can’t be seen via outcomes, the business model or management is broken.
  • There is debate over whether identity verification is “surveillance tech”; some see it as neutral infrastructure, others note it underpins blacklists and abusive profiling (e.g., in Mexico or with national ID schemes).

VCs, “Little Tech,” and regulation

  • Many see VC‑backed “worker surveillance” and “Little Tech” narratives as rent‑seeking: calls to relax regulation are read as attempts to exploit loopholes and externalize harm.
  • Others push back that US regulation is already heavy in many sectors; the real VC complaint is about IPO/SPAC rules limiting liquidity and exits.

Social consequences, resistance, and arms race

  • Surveillance is linked to broader wealth concentration, enclosure‑like loss of autonomy, and low‑trust societies.
  • Suggested responses: stronger legal limits (narrow scope, minimal retention, strict sharing rules), refusing to work for surveillance startups, and public awareness.
  • An arms race is noted: monitoring mouse activity vs mouse jigglers, with warnings that such “fraud” can increase employers’ leverage over workers.

IRS Direct File on GitHub

Political fate and lobbying concerns

  • Many commenters say Direct File is being shut down after the 2024 season due to a new funding bill, tying this to long‑standing influence from the tax‑prep lobby (especially Intuit) and anti‑tax activists.
  • Others push back on specific claims (e.g., timelines, donation patterns), but broadly agree Congress, not the IRS, blocked expansion and continuation.
  • There’s debate over partisan responsibility: several blame Republican opposition to easy filing; others argue both parties are susceptible to lobbying and neoliberal privatization.
  • Some note Direct File proved the idea is technically and operationally feasible; ending it is now framed as a political choice, not an impossibility.

Role of Direct File vs. commercial tax software

  • A recurring theme: most Americans have simple returns (W‑2, a few 1099s, standard deduction, common credits) and should not have to pay for filing.
  • Commenters argue the IRS already receives much of the needed data and could pre‑fill returns or at least show taxpayers what it knows and ask for corrections.
  • Opponents highlight missing information (marital status changes, dependents, self‑employment details, state and local taxes, ambiguous deductions) and fear that if the IRS “starts the return,” some people won’t add tax‑increasing items.
  • Some see business opportunities in building on Direct File; others respond that the whole point was to remove the need for such businesses in simple cases.

Technical design and codebase

  • Interoperability is via the existing Modernized e‑File (MeF) API used by commercial providers; several say the hard part was encoding tax law, not wiring the API.
  • The codebase is large, “boring enterprise” JS/TS/Java with Scala for the “Fact Graph” rules engine; some praise its cleanliness and extensive design docs.
  • The Fact Graph models tax rules as a DAG of facts and derived values, good for incomplete information but limited where the law is circular or interpretive.
  • A deeply nested reactive Java controller sparked criticism as hard to read and test, with some blaming misuse of reactive frameworks.
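The Fact Graph idea described above — facts as nodes, derived values as functions over other facts, tolerating incomplete information — can be sketched in a few lines. This is purely illustrative Python, not the IRS's Scala implementation; all names are invented:

```python
# Minimal fact-graph sketch: concrete facts plus derived facts over a DAG.
# Evaluation returns None ("incomplete") when any input is still unknown,
# rather than failing -- the property the summary attributes to the design.
class FactGraph:
    def __init__(self):
        self.facts = {}    # name -> concrete value
        self.derived = {}  # name -> (dependency names, function)

    def set_fact(self, name, value):
        self.facts[name] = value

    def define(self, name, deps, fn):
        self.derived[name] = (deps, fn)

    def get(self, name):
        if name in self.facts:
            return self.facts[name]
        if name in self.derived:
            deps, fn = self.derived[name]
            values = [self.get(d) for d in deps]
            if any(v is None for v in values):
                return None  # propagate incompleteness up the graph
            return fn(*values)
        return None  # unknown fact

g = FactGraph()
g.define("agi", ["wages", "interest"], lambda w, i: w + i)
g.define("taxable_income", ["agi", "std_deduction"],
         lambda agi, d: max(agi - d, 0))
g.set_fact("wages", 50_000)
g.set_fact("std_deduction", 14_600)
print(g.get("taxable_income"))  # None: "interest" is not yet known
g.set_fact("interest", 200)
print(g.get("taxable_income"))  # 35600
```

The noted limitation follows directly from the structure: a pure DAG of functions has no natural place for circular definitions or rules that require human interpretation.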

Open source, licensing, and trust

  • As federal work, the code is in the public domain; this is welcomed as a potential baseline for competitors, but it also means anyone can redistribute modified versions.
  • Some wish for commit signing and a verifiable chain of custody; others note the GitHub mirror is just a scrubbed dump and that public‑domain status weakens the value of signature semantics.

Broader tax‑system reflections

  • Multiple commenters compare to countries where pre‑filled returns take minutes, viewing the US system as deliberately complex to support private intermediaries and maintain tax “pain.”
  • Others note that instructions and paper forms are usable for many filers, but software shines at discovering which forms and edge cases apply.

Prompt engineering playbook for programmers

Confusion, Non-determinism, and Inconsistency

  • Several programmers describe LLM behavior as disorienting: small wording changes yield different answers, results vary run-to-run, and model updates break previously “good” prompts.
  • This randomness makes it hard to justify heavy upfront “prompt engineering” as a reliable process.

Utility vs Overhead for Programmers

  • Many say they’d rather just write the code: by the time a long prompt is crafted, refined, and run, the task could be done manually.
  • LLMs are seen as most useful for quick documentation lookup, simple refactors, boilerplate, and debugging when you paste actual code or stack traces.
  • Agentic tools are criticized for burning money/tokens and sometimes mangling codebases.

Are Prompt Guides Really Needed?

  • One camp: guides are mostly hype; you learn “prompt fu” quickly through use, similar to early “Google-fu” books.
  • Another camp: many users won’t systematically experiment, so curated tips and watching experts can meaningfully improve results and reveal non-obvious tricks.

Long vs Short Prompts, Context, and Iteration

  • Multiple commenters report that shorter, focused prompts plus iterative refinement outperform long, intricate ones.
  • Irrelevant detail is seen as harmful; relevant, structured context (markdown specs, project structure, style guides) can help.
  • Some feel verbose prompts just get echoed back and give a false sense of control.

Specific Prompting Techniques Discussed

  • Widely cited “real” techniques:
    • In-context examples (few-shot).
    • Chain-of-thought / step-by-step reasoning (less critical on newer “reasoning” models).
    • Structured output (JSON or schemas, sometimes via two-stage prompts).
    • Role / framing prompts (expert vs junior, critic vs collaborator).
  • Using domain-specific jargon and expert vocabulary can noticeably change answers.
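Two of the techniques above — few-shot examples and structured (JSON) output — are often combined. A minimal sketch of what such a prompt builder might look like (the task, schema, and function are invented for illustration; no model API is called):

```python
# Illustrative prompt builder combining few-shot examples with a requested
# JSON output format. Only assembles the prompt text; model/API details
# are deliberately omitted.
import json

def build_prompt(task: str, examples: list[tuple[str, dict]], query: str) -> str:
    parts = [task, "", "Respond with JSON only."]
    for inp, out in examples:  # few-shot: input -> expected output pairs
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {json.dumps(out)}")
    parts.append(f"Input: {query}")
    parts.append("Output:")  # trailing cue nudges the model to complete
    return "\n".join(parts)

prompt = build_prompt(
    "Classify the bug report's severity and affected component.",
    [("App crashes on launch after update",
      {"severity": "critical", "component": "startup"})],
    "Dark mode toggle has no effect on the settings page",
)
print(prompt)
```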

Debate Over “Prompt Engineering” as a Concept

  • Strong pushback against calling this “engineering”: it’s described as trial-and-error, craft, or just clear technical writing/communication.
  • Others argue that once you systematically test, version, and benchmark prompts—especially in pipelines and API workflows—it starts to resemble engineering practice.
  • There’s concern that the focus on prompting is displacing core programming skills, though some see “native language as a programming language” as an inevitable shift.

Limits and Need for Better Tooling

  • Consensus that if a task is beyond current LLM capability, no prompt will fix it; you must decompose into subtasks.
  • Some argue that IDEs/agents should handle low-level prompt optimization, with developers focusing on clear requirements and tests rather than micromanaging prompts.

FFmpeg merges WebRTC support

What was added

  • FFmpeg’s libavformat now includes a WHIP muxer, enabling it (and apps using its libraries) to send media via WebRTC.
  • WHIP (WebRTC-HTTP Ingestion Protocol) lets a sender push media to a WHIP gateway, which then handles the full WebRTC stack (ICE/STUN, DTLS/SRTP, etc.) toward endpoints.
  • It does not yet implement WHEP (the receiving/egress side) and does not expose the low-level SCTP datachannel parts of WebRTC.

Practical use cases and impact

  • Any FFmpeg-based tool (OBS, GStreamer, custom apps, Unreal streaming setups, etc.) can now ingest into WHIP-enabled services (e.g., some CDNs, Twitch, Cloudflare, LiveKit, Dolby).
  • Enables low-latency browser consumption of streams without users opening ports, e.g.:
    • Remote desktop / remote control, drones, robots.
    • Art installations / city “portals” with near real-time audio/video.
    • Recording or transcoding live WebRTC streams directly with FFmpeg CLI or libraries.
  • Seen as a major step toward a ubiquitous open broadcasting protocol across desktop, mobile, web, and embedded platforms; potentially lowers cost vs traditional transcoding-heavy setups, especially with simulcast and modern codecs.

Latency, protocols, and alternatives

  • WebRTC is praised for sub-second latency over the open internet, especially under loss, where UDP+FEC/NACK can outperform TCP.
  • Some point out that on clean LANs, very low latency is also possible with plain TCP streams; browsers’ HLS-centric ecosystem is the real barrier.
  • Discussion of HLS vs LL-HLS and why the industry ended up using complex WebRTC for low latency.
  • Comparisons with SRT; some users struggle to get SRT under ~1s, while WebRTC can reach ~100 ms on LAN.

Security and hardening

  • Concern that adding WebRTC increases FFmpeg’s attack surface, referencing many WebRTC-related CVEs in browsers.
  • Others argue most CVEs are implementation-specific (browsers, libvpx), not inherent to the protocol; FFmpeg’s WHIP piece is small and narrow.
  • General consensus: FFmpeg is already sensitive software; treat it as untrusted when handling user input:
    • Run in containers or VMs, use seccomp, minimize enabled muxers/decoders (e.g., --disable-muxer=whip if unused).

Implementation status and project process

  • Building requires --enable-muxer=whip and --enable-openssl; some got 500 errors with certain providers until they added an audio stream.
  • There is interest in future WHEP and possibly more complete peer-side functionality (ICE, SCTP, datachannels), but that’s not standardized or exposed yet.
  • Some FFmpeg developers criticize the merge process: a single large squashed commit, limited tests, and missing features like NACK support; concern about half-finished vendor-driven features entering tree before being production-ready.

AGI is not multimodal

Scope and meaning of AGI and “intelligence”

  • Many argue “AGI” is poorly defined or even meaningless; likewise for “intelligence” itself, since there is no agreed test that definitively requires it.
  • Some suggest humans aren’t truly “general” either, since each person only covers a subset of domains, yet the human brain architecture is general and can be specialized.
  • Others propose a practical definition: an architecture that can be cheaply copied and fine‑tuned across many tasks is “functionally AGI,” in which case current large models may already cover a big fraction of what’s needed.

Embodiment, world models, and the physical environment

  • A major thread supports the article’s claim that general intelligence requires being situated in and acting on an environment (physical or rich digital), learning through interaction and consequences.
  • Examples: self‑driving failures where “everyone just knows” the dynamics of vehicles; developmental psychology notions of enriched environments, episodic memory, and executive control.
  • Some broaden embodiment to any agent loop with perception and action, including purely digital office or simulation environments, while others insist physical reality is uniquely constraining and essential.

Multimodality and senses

  • Many commenters say AGI “must” be multimodal (at least vision, audio, etc.), but disagree on whether it needs all human senses (smell, taste) or human‑like ranges.
  • Disability analogies (blind or deaf humans) are used on both sides: to argue senses are not essential to intelligence, or to argue they shape the scope and kind of understanding.
  • There is interest in non‑human modalities (infrared, EM fields) and the idea that more modes increase the ability to correlate and generalize.

LLMs, next‑token prediction, and world models

  • One side criticizes the article’s framing of LLMs as “just next‑token predictors” and notes that this interface does not constrain internal computation; long coherent outputs imply substantial internal planning and semantic structure.
  • Others stress that current models operate over symbol statistics, not grounded experience; coherence and “reasoning” may be sophisticated mimicry of human text rather than learned causal world models.
  • Debate continues over whether semantics can ultimately “reduce” to patterns over symbols, or whether sensorimotor grounding and continuous experience of time are indispensable.

Research directions, capital, and practicality

  • Some recount attempts at embodied AI (e.g., robotics startups) that burned large sums without clear success, contrasting that with immediate commercial value of LLMs, which pulls funding away from harder embodiment work.
  • A few see the paper as mostly re‑stating that “we need to understand how to build AGI to build AGI,” with little concrete technical guidance; others find it a valuable synthesis that pushes beyond naive multimodal “model‑gluing” and scale‑only strategies.