Hacker News, Distilled

AI-powered summaries for selected HN discussions.


Shutting Down Clear Linux OS

Clear Linux’s performance and user experience

  • Widely regarded as one of the fastest Linux distros; even AMD reportedly used it in benchmarks.
  • Users praise its stability and speed in production (e.g., multi‑year uptime on EPYC servers, Minecraft server performance, custom SteamOS builds).
  • Others report bugs and driver instability even on Intel hardware (e.g., NUCs).
  • Perceived performance gains attributed to aggressive compiler flags, transparent hugepages, function multi‑versioning, kernel tweaks, stateless/minimalist design, and “bloat removal,” not to Intel’s proprietary compiler.
  • Some dispute claims of ultra‑fast boot; experiences range from sub‑10 seconds on other distros to ~30 seconds on Clear.

Shutdown handling and trust

  • The “effective immediately” end of updates is criticized as irresponsible and damaging to user trust; users want a grace period to migrate before security patches stop.
  • Several say this reinforces a general rule to avoid software that depends on a single corporation.

Intel’s layoffs, strategy, and software ecosystem

  • Many tie the shutdown to large, ongoing Intel layoffs and cost‑cutting, not to Clear Linux’s technical value.
  • Discussion of layoff practices: abrupt terminations vs. longer notice, sabotage/insider‑threat risk, legal protections (e.g., WARN Act, collective bargaining) in some jurisdictions.
  • Layoffs are framed as EPS management and wage suppression, with concern about repeated rounds destroying morale and talent.
  • Fear that this casts doubt on other Intel software (QAT, GPU drivers, MKL, Kata Containers, etc.), making developers hesitant to depend on them.
  • Some argue Intel has a long “graveyard” of abandoned projects and is “cooked”; others note ongoing value in fabs and ecosystem contributions.

Corporate vs community projects and tech choice

  • One camp advocates “boring” tech and Lindy‑effect picks (Debian, FreeBSD) to avoid rug pulls.
  • Counterarguments: these choices have real costs (e.g., apt’s scripting model, slow container builds) and may delay adoption of genuinely transformative tech (e.g., Kubernetes).
  • Debate over simple rules like “avoid corporate/VC projects”: some say they’re necessary; others note corporations fund major R&D and community projects can also stagnate.

Forks and alternatives

  • Some predict a fork by ex‑maintainers; others doubt it without salaries and ongoing funding, citing fork fatigue.
  • Users are now evaluating replacements (e.g., CachyOS, other fast distros) and, in some cases, reconsidering future Intel hardware purchases in favor of AMD.

EPA says it will eliminate its scientific research arm

Reaction to eliminating EPA’s research arm

  • Strong majority of commenters view dismantling EPA research as reckless and destructive, especially for long‑term public health and environmental protection.
  • Several describe direct collaborations with EPA scientists that materially improved regulation (e.g., toxicity modeling), arguing research is essential to detect new hazards.
  • A minority argue EPA is bloated, politicized (e.g., DEI programs, focus on CO₂), and insufficiently effective on industrial/military pollution, seeing the cut as overdue or at least predictable under the new administration.

Voters, propaganda, and “voting against interests”

  • Many see this as part of a long “divide and conquer” arc: redirect resentment toward minorities while advancing deregulation and tax cuts.
  • Multiple comments cite work showing rural hospital closures (linked to Medicaid non‑expansion) actually increase Republican support, reinforcing the idea that suffering doesn’t shift partisan loyalties.
  • Others stress decades of right‑wing media and corporate funding convincing voters that agencies like EPA are biased or anti‑freedom, so dismantling them can be framed as “draining the swamp.”

Institutional fragility and the Constitution

  • Thread repeatedly broadens to the erosion of U.S. checks and balances:
    • Executive power used to sabotage agencies Congress funds, effectively nullifying appropriations.
    • Supreme Court seen by many as partisan, dismantling the administrative state and long‑standing precedents; defenders counter that this Court overturns fewer precedents overall and is “judicially conservative.”
    • Disputes over whether delegating legislative and quasi‑judicial power to agencies is compatible with the Constitution at all.
  • Some argue the system has always run “on belief” and civic norms; once a faction stops honoring those norms, paper constraints fail.

Social media, “weaponized stupidity,” and free speech

  • Widespread view that targeted manipulation via social media has outpaced citizens’ ability to reason about politics, enabling attacks on expertise and institutions.
  • Comparisons to Germany’s more “defensive” constitutional model and speech limits; others note such legal tools still depend on political will.

What comes next

  • Some predict future administrations will either:
    • Try to rebuild public research capacity, likely via expensive private contracts and further privatization, or
    • Be unable to restore capacity at all, accelerating U.S. decline.
  • Underlying fear: once scientific capacity and institutional trust are dismantled, rebuilding them is far harder than tearing them down.

Marathon Fusion claims to invent alchemy, making 5,000 kg of gold per gigawatt

How the scheme works (as discussed)

  • Commenters note this is an add‑on to a future deuterium‑tritium tokamak: 14 MeV neutrons in the blanket convert Hg‑198 to Hg‑197 via an (n,2n) reaction, and Hg‑197 then decays to Au‑197 (reaction sketch after this list).
  • Gold is claimed as a by‑product: the plant supposedly still generates full power and breeds tritium.
  • Several comments emphasize this is not a fusion reactor design, but a neutronics blanket configuration around one.
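
For reference, the chain commenters describe, with the Hg‑197 decay data (electron capture, ~64 h half‑life) filled in from standard nuclear tables:

```latex
^{198}\mathrm{Hg} + n\,(E_n \gtrsim 9\,\mathrm{MeV}) \longrightarrow {}^{197}\mathrm{Hg} + 2n,
\qquad
{}^{197}\mathrm{Hg} \xrightarrow{\;\varepsilon,\ t_{1/2}\,\approx\,64\,\mathrm{h}\;} {}^{197}\mathrm{Au}
```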

Radioactivity and storage issues

  • Produced material is a mix of stable gold and radioactive mercury isotope; refining could isolate gold, but refineries may avoid radioactive feedstock.
  • Paper estimates: ~13 years storage to avoid radioactive‑waste labeling; ~17 years to reach “banana level” activity.
  • Some think this delay is trivial for vault‑stored bullion; others expect strong public/market stigma against “nuclear gold.”
  • Industrial uses (electronics, medical) might be more sensitive to residual radioactivity; jewelry and vault storage less so.

Economics and impact on gold price

  • One calculation (assuming 5000 kg per GW‑year) claims the raw electricity cost alone could exceed the market value of the gold produced, but others stress electricity is the main product and the gold extra revenue (rough arithmetic after this list).
  • If many GW‑scale plants existed, thousands of tonnes/year of gold could be added, potentially lowering prices—but commenters note fusion deployment at that scale is many decades away and may never fully saturate demand.
  • Financial engineering ideas: “maturing” notes for 17‑year vault gold, analogous to bonds or aging whisky/cheese.
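
A back‑of‑the‑envelope version of that calculation, using illustrative prices (not from the thread) of $50/MWh for electricity and $100,000/kg for gold:

```latex
1\,\mathrm{GW\!\cdot\!yr} \approx 8.77\times10^{6}\,\mathrm{MWh}
\;\Rightarrow\; 8.77\times10^{6} \times \$50 \approx \$4.4\times10^{8}\ \text{of electricity},
\qquad
5000\,\mathrm{kg} \times \$10^{5}/\mathrm{kg} = \$5\times10^{8}\ \text{of gold}
```

On these assumptions the by‑product gold is worth roughly as much as a full year of the plant's electricity, which is why the thread treats it as significant side revenue rather than the main product.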

Mercury‑198 supply and separation

  • Hg‑198 is ~10% of natural mercury; discussion centers on isotope separation to bring costs toward a few $/kg as assumed in the paper.
  • Some question current very high Hg‑198 prices; others argue they reflect tiny bespoke markets, not scalable separation costs.
  • Concerns that global mercury and Hg‑198 supply fundamentally cap how much gold can ever be produced this way.

Fusion vs other neutron sources

  • Commenters ask why not use fission or accelerators; responses note the need for ≥9 MeV neutrons.
  • D‑T fusion’s 14.1 MeV neutrons provide both sufficient energy and huge flux; fission/accelerator neutrons would likely be uneconomic at scale.

Feasibility, timeline, and use cases

  • Multiple comments describe this as “fun, almost sci‑fi,” but stress it depends on commercially viable fusion, which is still decades away.
  • Some see it mainly as a way to enhance early fusion‑plant economics or as a form of mercury waste disposal, not a route to unlimited cheap gold.

Skepticism and presentation

  • Some distrust the marketing tone (“once‑in‑a‑century feat” with little discussion of pitfalls).
  • Others link to an external technical critique suggesting the physics and simulations are plausible but the concept remains very low TRL and tightly constrained by tritium‑breeding design margins.

Broader implications and side topics

  • If any stable element can be synthesized with fusion neutrons, commenters speculate that many metals (silver, rhodium, iridium) could lose scarcity as stores of value.
  • This leads to joking about only cryptocurrencies remaining as “un-synthesizable” scarce assets, alongside general gold/crypto/finance humor.

AI capex is so big that it's affecting economic statistics

Scale and Nature of AI Capex

  • Commenters note AI capex is now ~1.2% of US GDP, which is striking for such a new category but still small vs historical mega-programs (railroads, Apollo, WWII); rough arithmetic follows this list.
  • Some argue the framing “eating the economy” overstates things; others emphasize the velocity: going from near-zero to a Norway-sized share of GDP in a couple of years is unprecedented.
  • Debate over whether this is truly “AI capex” versus generic cloud/datacenter buildout with an “AI” label; several point out that Nvidia GPU sales and ad-driven ML (Meta, Google) are the real drivers.
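
A sanity check on the headline figure, assuming US GDP of roughly $29 trillion:

```latex
0.012 \times \$29\times10^{12} \approx \$3.5\times10^{11}\ \text{per year}
```

That ~$350B/year is indeed in the neighborhood of a mid‑sized national GDP (Norway's is on the order of $0.5T), which is the comparison the thread leans on.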

Bubble, ROI, and Opportunity Cost

  • Strong disagreement on whether this is a bubble: critics ask “how many signs do we need?”; defenders say we’ll only know after the pop, or after clear positive ROI.
  • Some highlight inconsistency in claiming AI capex both starves other sectors and fully multiplies GDP—if funds are diverted, the counterfactual multiplier must be considered.
  • Others counter that higher expected AI returns raise the hurdle rate, starving marginal non‑AI projects even if the overall economy grows.
  • A recurring theme: massive spend on a rapidly depreciating asset (GPUs, short-lived models) vs past capex on century-scale infrastructure (rail, fiber).

Reuse, Depreciation, and Hardware Aftermath

  • Concern that, once hype fades, companies may destroy or mothball GPU fleets for tax and logistical reasons; others note that at scale liquidators usually extract value, not landfill it.
  • Some hope for repurposing: drug discovery, scientific computing, cheap gaming/VR, or other yet-unknown uses, echoing how dark fiber post‑dotcom later fueled new startups.
  • Skepticism that we will systematically reuse everything; layoffs and capacity destruction are seen as more likely in some scenarios.

Energy, Environment, and Power Infrastructure

  • Widespread concern about power demand: in‑thread estimates put US AI datacenters at ~70–90 TWh/year, already a noticeable share of US electricity (see the arithmetic after this list).
  • Heated debate over renewables: some want mandates that every new AI DC be powered by clean energy; others note datacenters need firm, not intermittent, power and that long-duration storage and permitting are real bottlenecks.
  • Several point out that big cloud firms are currently among the largest buyers of renewable PPAs and are exploring nuclear (especially small modular reactors), but siting, regulation, and bureaucracy slow deployment.
  • Water use and local impacts (cooling, grid capacity, political capture) are recurring worries; some argue the long-lived energy infrastructure is the real durable benefit if AI fizzles.
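
For scale, assuming total US generation of roughly 4,200 TWh/year:

```latex
\frac{70\text{–}90\ \mathrm{TWh/yr}}{4200\ \mathrm{TWh/yr}} \approx 1.7\text{–}2.1\%
```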

Labor, Automation, and Inequality

  • Major thread on obsession with replacing white‑collar work (developers, lawyers, analysts). Many interpret this as executive desire to cut payroll and shareholder‑driven cost minimization.
  • Others frame automation as historically normal: like spreadsheets, CAD, and calculators, AI will compress some job categories (5 people vs 50) but not necessarily eliminate professions.
  • There’s visible resentment and schadenfreude: non‑tech workers enjoying the idea that the “learn to code” crowd now faces a similar threat.
  • Deep disagreement about whether AI-led efficiency gains will be broadly deflationary and welfare‑enhancing, or just enrich capital owners and further hollow out the middle class.
  • Several stress that increased efficiency doesn’t help displaced workers without structural changes (ownership, safety nets, or new domains of demand).

AI Capabilities, Limits, and “Kaboom” Debate

  • One camp sees “unstoppable progress”: chess/Go, protein folding, now competition-level math, arguing anything formalizable and cheaply verifiable will eventually be dominated by AI, justifying huge capex.
  • Skeptics say the promised “kaboom” hasn’t shown up in drug prices, film/game quality, or clearly transformative non-demo applications; they see impressive toys but fragile systems and lots of slop content.
  • Many report real productivity gains for search, coding, and analysis, especially for non-experts; others share frustrating experiences with agents, RAG, and context limits, arguing that current LLM+scaffolding is brittle.
  • Dispute over whether LLMs truly “reason” or just approximate reasoning unreliably; some cite recent math benchmarks, others call out hype, unverifiable claims, and bluffing on formal contests.

Historical Analogies and Long-Term Outlook

  • Comparisons to railroads, telegraph, dotcom fiber, and nails: earlier overbuilds created stranded assets that later underpinned new waves of innovation, often after investors were wiped out.
  • Some note that past capex (rail, Apollo) clearly built broad, durable public goods; in contrast, AI capex might concentrate gains, accelerate capital centralization, and not distribute benefits as widely.
  • Several expect an eventual crash in LLM valuations and GPU demand to be “glorious,” leaving behind cheap compute and overbuilt datacenters that future startups repurpose.
  • Overall divide: one side views current AI capex as rational investment in a general-purpose technology with decades of productivity gains ahead; the other sees a speculative frenzy burning power, hardware, and money without commensurate, evidenced societal return—yet.

Broadcom to discontinue free Bitnami Helm charts

Perceived Risk Around Spring and Broadcom

  • Some commenters say Bitnami’s move reinforces fears about Broadcom’s stewardship of other VMware assets, especially Spring.
  • In at least one enterprise, Spring Boot is now classified as a top risk, with mandated migration paths to alternatives (Quarkus, Helidon, Micronaut, Vert.x, Pekko, Jakarta EE).
  • Specific worries: license changes (e.g., BSL/closed source), key features moving behind paywalls, reduced staffing and slower security fixes, and dependence on a single vendor.
  • Others argue this is likely overreaction: Spring is widely used, forkable, and large players could sustain a community fork if needed.

What’s Changing With Bitnami and Helm Charts

  • Bitnami Helm charts and container definitions remain Apache‑2.0‑licensed on GitHub; the main change is discontinuing free distribution of most prebuilt images on Docker Hub.
  • All historical images are copied to bitnamilegacy, which stops receiving updates after Aug 28, 2025. The primary bitnami namespace will be cleaned up and limited to a small subset of “Secure Images” intended as a paid offering.
  • Some users find the communication confusing (timelines, which tags move when) and feel the docs under-emphasize non-paid migration paths.

User Impact, Migration Paths, and Alternatives

  • Many expect widespread breakage in CI/CD and running deployments when images disappear or move; manual updates or registry rewrites will be needed.
  • Recommended strategies:
    • Fork and collectively maintain the charts and container builds.
    • Use upstream vendor images (Postgres, Redis, RabbitMQ, etc.) or build from Bitnami’s Dockerfiles.
    • Mirror all production images to private registries to avoid future supply disruptions (a minimal sketch follows this list).
    • Discover other charts via Artifact Hub or project-specific repos.
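
A minimal sketch of the mirroring strategy, assuming a local Docker CLI; the private registry name is hypothetical:

```python
import subprocess

# Illustrative only: copy a public Bitnami image into a private registry
# so deployments keep working if the public tag later disappears.
SRC = "docker.io/bitnami/postgresql:16"
DST = "registry.example.internal/mirror/bitnami/postgresql:16"  # hypothetical

for cmd in (
    ["docker", "pull", SRC],
    ["docker", "tag", SRC, DST],
    ["docker", "push", DST],
):
    subprocess.run(cmd, check=True)  # abort on the first failure
```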

Broadcom’s Strategy and Community Reaction

  • Strong sentiment that this is classic “enshittification”: extracting revenue from a previously free, developer-friendly asset and pushing enterprises toward a ~$5k/month “secure images” subscription.
  • Some note Broadcom’s broader pattern of buying mature products, monetizing locked-in enterprise users, and shedding the rest, though a few point out Broadcom (or its predecessor) also enabled successes like Raspberry Pi.

Helm, Kustomize, and Kubernetes Packaging Debate

  • The thread broadens into tooling: Helm’s user experience is divisive.
  • Criticisms: Go text templating over whitespace‑sensitive YAML, brittle authoring, confusing schemas, and opaque failures.
  • Defenses: powerful composition, fast install experience, versioned release artifacts, rollback behavior, and “standard packaging” for vendors.
  • Alternatives discussed: Kustomize (especially with Flux/ArgoCD), Jsonnet/Tanka, CDK8s, Kapitan, Anemos, and plain YAML/JSON with GitOps.

Asynchrony is not concurrency

Disagreement over definitions

  • Many comments dispute the article’s definitions of asynchrony, concurrency, and parallelism.
  • Formal models (Lamport, CSP) are cited:
    • Concurrency is often defined in terms of partial orders and causality (“can these events affect each other?”) rather than “multiple tasks at a time”.
    • Parallelism is physical simultaneous execution; concurrency is a property of the program or model, not of hardware.
  • Some argue concurrency is a superset containing both parallelism and asynchrony; others insist concurrency and parallelism are orthogonal (program model vs execution model).
  • Several note that “at the same time” is ill‑defined (whose clock?) and that everyday dictionary definitions are unhelpful in technical contexts.

What “asynchrony” should mean

  • The article’s definition (“tasks can run out of order and still be correct”) is heavily criticized:
    • Many say this really describes independence or commutativity, not asynchrony.
    • Others define async as non‑blocking submission of work whose result is collected later, or as code “explicitly structured for concurrency” (see the sketch after this list).
  • Multiple comments stress that async does not inherently mean “out of order”: APIs may guarantee FIFO ordering while still being asynchronous.
  • Partial ordering examples (e.g., socket operations, file writes) are used to show that “order doesn’t matter” is too strong.
  • Some propose using mathematical terms like commutativity or “permutability”, but others note they don’t capture partial orders or complex interleavings.
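
A minimal Python sketch of the “submit now, collect later” definition: the submissions return immediately, results are awaited later, and whether the two tasks actually overlap is the runtime's decision, not the call site's:

```python
import asyncio

async def work(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stand-in for I/O
    return f"{name} done"

async def main() -> None:
    # Non-blocking submission: both tasks are now in flight.
    t1 = asyncio.create_task(work("a", 0.2))
    t2 = asyncio.create_task(work("b", 0.1))
    # ... the caller is free to do other things here ...
    # Results are collected later, in program order, even though
    # "b" finished first internally: asynchronous, yet FIFO to the caller.
    print(await t1, await t2)

asyncio.run(main())
```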

Zig’s async / asyncConcurrent design

  • The core Zig idea: separate “asynchronous but may execute serially” from “requires concurrency”, allowing:
    • Async APIs callable from synchronous contexts.
    • Libraries that are polymorphic over sync/async environments.
  • The client/server accept vs connect example is central and also confusing:
    • Several readers initially misread it as about parallelism; others point out contradictions between the definitions and the example.
    • Concern that asyncConcurrent is opaque and easy to misuse without re‑reading the article.
  • Some praise the design as ingenious and promising; others call it premature and rhetorically driven by Zig’s API needs.

Practical concerns: races, testing, and real systems

  • Debate over whether async “has all the pitfalls of concurrency”:
    • One side: async races (e.g., multiple in‑flight non‑idempotent operations) feel similar to threaded bugs.
    • Other side: lack of hardware‑parallel races makes async substantially easier to reason about.
  • Several note that many ecosystems combine async with true multithreading, so mutexes and classic concurrency hazards still apply.
  • Testing interleavings of async/concurrent code is highlighted as hard; suggestions include specialized test I/O backends with fuzzing or deterministic scheduling.

Terminology fatigue and value

  • Some argue these distinctions (async vs concurrency vs parallelism) add little practical insight and often obscure real questions like “what operations can overlap?” and “what ordering is required?”.
  • Others find the distinctions—especially “concurrency as programming model, parallelism as execution model”—very useful for reasoning and teaching.
  • A recurring sentiment: the field badly lacks a shared, unambiguous vocabulary, and this article both contributes to that vocabulary and adds to the confusion.

Cancer DNA is detectable in blood years before diagnosis

Commercial blood-based cancer tests and “Theranos vibes”

  • Commenters joke about a kitchen-counter cancer blood tester backed by VC and a charismatic founder, alluding to Theranos.
  • Several existing services are mentioned: multi-marker wellness panels and the Galleri multi‑cancer early detection (MCED) test, costing ~$800–$1,000 and sometimes offered via life insurers or longevity clinics.
  • Users who’ve taken Galleri generally value it, but others question affordability, especially for “average families” and outside the US/UK.

Actionability, anxiety, and personal stories

  • People ask what one actually does with a positive MCED: see a doctor, then imaging/biopsies/oncology referral.
  • Anecdotes: one person uses annual tests after losing a friend to late-stage cancer; another commenter’s relative had a positive ctDNA‑type result that never led to detectable cancer, causing a year of intense anxiety before the signal disappeared.
  • Early-detection promise is emotionally compelling for those with family cancer or Alzheimer’s risk.

Sensitivity, specificity, and overdiagnosis

  • Multiple medically informed commenters stress that many people harbor pre‑cancerous clones and low‑level cancer signals that never become clinically relevant.
  • The core problem: current ctDNA/MCED tests struggle to balance sensitivity and specificity; at population scale, false positives and detection of indolent lesions could lead to unnecessary imaging, biopsies, surgeries, and even serious complications (a worked base‑rate example follows this list).
  • Some argue “you’re better off not knowing” in many scenarios; others push back, emphasizing lives that might be saved.
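
The base‑rate problem in one line, with hypothetical numbers (0.5% prevalence, 50% sensitivity, 99% specificity; none of these are from the study):

```latex
\mathrm{PPV} = \frac{0.50 \times 0.005}{0.50 \times 0.005 + 0.01 \times 0.995} \approx 0.20
```

Even at 99% specificity, roughly four out of five positives would be false alarms at screening scale, which is the commenters' core worry.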

Costs, insurers, and health‑system incentives

  • One view: insurers avoid paying for early screening despite the ability to detect cancer early; others respond that high test cost, follow‑ups, and unknown net benefit justify caution.
  • Debate over whether broad screening would actually save money once you include all negatives and workups.
  • US insurance barriers (e.g., difficulty getting PET scans) are contrasted with the idea of universal systems where individuals don’t pay out of pocket.

Technical and research challenges

  • Experts describe ctDNA workflows, ultra‑low allele fraction noise, CHIP, and background somatic mutations, noting population‑level utility is still unproven.
  • Some see promising roles in post‑treatment relapse monitoring; proactive screening in asymptomatic people is described as “dicier.”
  • Others propose massive longitudinal datasets (blood sequencing, imaging, cheap high‑bit sensors) plus ML to extract predictive patterns—acknowledging cost, ethics, and data/consent issues.

Possible futures and study skepticism

  • Ideas include tiered non‑invasive screening, better precancer treatments, lifestyle‑targeted interventions, and community‑driven trials.
  • One commenter flags that the underlying study may be overhyped: paywalled, unclear false‑positive data, and no independent validation mentioned in the press coverage.

How I keep up with AI progress

Sources and Strategies for Keeping Up

  • Many commenters endorse a small, high‑signal set of sources: specific blogs, newsletters, Substack authors, YouTube channels from major AI labs, and curated RSS or Twitter/X lists.
  • Some track releases of popular Python/ML libraries (LangChain, PydanticAI, etc.) as proxies for where the industry is heading (a feed‑polling sketch follows this list).
  • Several highlight specific educators and video series for deeper conceptual understanding rather than news-chasing.
  • Others recommend meta‑feeds (curated AI news aggregators, HN front page, podcast feeds) rather than following dozens of individual voices.
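
A minimal sketch of the library‑as‑proxy approach, polling GitHub's public per‑repo release feeds (repo list illustrative; uses the third‑party feedparser package):

```python
import feedparser  # pip install feedparser

# Every GitHub repository exposes an Atom feed of its releases.
FEEDS = [
    "https://github.com/langchain-ai/langchain/releases.atom",
    "https://github.com/pydantic/pydantic-ai/releases.atom",
]

for url in FEEDS:
    feed = feedparser.parse(url)
    for entry in feed.entries[:3]:  # three most recent releases
        print(f"{feed.feed.title}: {entry.title}")
```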

“Why Keep Up?” vs “You Don’t Have To”

  • A major thread questions the article’s “and why you must too” claim, arguing it never really justifies the necessity.
  • Many say you can safely ignore AI for months or years and catch up quickly when needed, since most news is incremental, tools are fungible, and real capability leaps are rare.
  • Others counter that basic familiarity is increasingly table stakes for developers; not engaging at all risks career stagnation or layoffs.

Productivity, Tools, and Early Adoption

  • Split views on current usefulness: some report major productivity gains (especially for coding and simple tasks); others find tools inconsistent, overhyped, or not yet worth the overhead.
  • Prompt engineering is debated: some dismiss it as transient or overblown; others say careful prompting and tool use still matter for quality results.
  • Discussion on whether to pay for “tools” (editors/assistants bundling models) versus “models” directly; concerns include vendor lock‑in, misaligned incentives, throttling, and BYO‑key/LLM trends.

How to Learn: Build vs Read

  • Several argue deep understanding comes from building projects, running local models, and experimenting (e.g., with agents, RAG, speculative decoding), not from endlessly consuming blogs and social media.
  • A recurring theme is to avoid FOMO: track just enough to spot genuinely new capabilities, focus on what’s useful for your own domain, and accept that strategic lagging can be rational.
  • The author clarifies in comments that the piece targets already‑interested readers and aims to provide a higher‑signal starting list, not to pressure everyone into constant AI monitoring.

Meta says it won't sign Europe AI agreement

Meta’s refusal and what it signals

  • Many see Meta’s refusal as a heuristic that the Code is probably good; others warn this is just bias and insist on reading the text first.
  • Meta frames the Code as “growth‑stunting overreach” that will throttle frontier models and EU startups; critics see this as lobbying spin from a company with a long history of privacy abuses.
  • Some argue Meta has also contributed positively via open‑source AI and tooling, so its position can’t be dismissed outright.

OpenAI contrast and “regulation as moat”

  • OpenAI has committed to signing and is portrayed as very pro‑regulation, partly due to deep government and military ties.
  • Several commenters think the biggest incumbent backing regulation is classic “pull up the ladder” behavior, using compliance cost as a moat.
  • Others simply don’t trust OpenAI’s public commitments, citing previous reversals on openness.

Copyright, training data, and responsibility

  • Strong focus on Chapter 2: copyright and training.
  • US: recent pretrial rulings treat training on copyrighted text as fair use, but that is contested and may be appealed; acquisition (piracy vs bulk buying/scanning) is still a separate legal issue.
  • EU: no broad “fair use”; member states have narrower exceptions and different doctrines.
  • The Code/Act:
    • Allows training on copyrighted works (with opt‑outs) but expects “reasonable measures” to avoid infringing outputs and overfitting.
    • Suggests providers prohibit infringing use in T&Cs or, for open models, warn about it in documentation.
    • Debate over whether holding model providers partly responsible for downstream misuse is workable, especially for open source.

EU regulation, GDPR, and cookies as precedent

  • One camp: the Code is onerous, technocratic, and written by people who don’t understand AI; likely to entrench incumbents and lawyers, as with GDPR.
  • Other camp: most provisions are “common sense” (transparency, safety, user choice) and needed because large firms won’t self‑police.
  • Cookie banners are a huge flashpoint:
    • Critics say they show EU’s failure to foresee real‑world behavior, leading to dark‑pattern consent theatre with little real privacy gain.
    • Defenders blame companies and ad networks for malicious compliance; argue GDPR enabled data‑access/deletion rights and could work if enforced properly and if sites stopped unnecessary tracking.

Innovation, competitiveness, and “keeping up”

  • Concern that threshold‑based rules (e.g., model scale) will freeze EU startups below those levels while US/China firms race ahead, then enter Europe with stronger products and big legal budgets.
  • Others reply that slightly weaker or slower models are acceptable if that buys more accountability and reduces power concentration.
  • Some fear Europe is repeating a pattern: heavy regulation, weak local champions, dependence on US/Chinese tech; others welcome fines and constraints on foreign megacorps even if it means fewer domestic giants.

Voluntary Code of Practice vs future law

  • The Code is described as a voluntary, EU‑endorsed self‑regulation step ahead of binding rules.
  • Skeptics call it empty virtue signaling that only PR‑sensitive players will follow.
  • Supporters say it’s a sandbox: lets companies trial obligations, refine them based on reality, and avoid a sudden cliff when they become hard law.

AI risk, timing, and philosophy of regulation

  • One side: early AI regulation is premature and likely to misfire; regulators rarely predict markets correctly and often protect entrenched interests.
  • Other side: waiting until harms fully materialize (pricing discrimination, autonomous weapons, mass surveillance, job displacement) is too late; the whole point is to shape the market now.
  • Broader tension runs through the thread: trust in democratic regulation vs fear of bureaucratic overreach and Europe “self‑sabotaging” its tech future.

Firefox-patch-bin, librewolf-fix-bin AUR packages contain malware

Incident and Impact

  • Three AUR packages — librewolf-fix-bin, firefox-patch-bin, zen-browser-patched-bin — were found to contain a remote access trojan (RAT) that gives full control of the machine to the attacker.
  • Packages existed only a few days before removal; they were new, not compromises of the popular librewolf-bin or zen-browser-bin.
  • Several comments argue that with a RAT there is no reliable cleanup: the only safe response is to assume total compromise, take machines offline, back up data, and fully reinstall.

Information Gap and Indicators of Compromise

  • Some participants criticize the advisory for not listing technical indicators (files, startup entries, etc.) that would help users check systems.
  • Others counter that Arch’s priority is rapid notification; a full malware analysis is unrealistic, and a RAT may leave few or no consistent traces, especially if payloads are dumped to /tmp and cleaned up or actions vary per host.

How the Attack Worked

  • The AUR PKGBUILDs pulled code from a GitHub repo; a Python script downloaded a binary payload later uploaded to VirusTotal.
  • One package declared provides=('firefox'), so many existing packages that depend on firefox appeared as “dependents”, likely to increase visibility.
  • At least one Reddit post promoted the malicious zen-browser-patched-bin as a “great find”, suggesting deliberate social engineering.

AUR Trust Model and User Responsibility

  • Repeated emphasis that AUR is explicitly “untrusted user content”: anyone can upload, packages are not vetted, and users are expected to read PKGBUILDs before building.
  • Arch’s official tools (pacman) do not interact with AUR; third‑party “helpers” (yay, paru, etc.) simply automate fetching PKGBUILDs and usually show PKGBUILD/diffs before building.
  • Disagreement over real‑world behavior: some claim most Arch users install from AUR “without a second thought”; others dispute this and view AUR use as inherently “at your own risk”.

Proposals for Better Safeguards

  • Suggestions: tools that print all URLs in PKGBUILDs, highlight diffs on update, or summarize new commits to ease manual review. Helpers already do some of this; URL printing is seen as a useful extra (a minimal sketch follows this list).
  • Proposals to integrate LLMs for malware review are widely rejected as impractical (high false positives, easy to game).
  • VirusTotal integration into pacman -U is proposed; pushback focuses on privacy, limited usefulness against new malware, high API load, and conflict with Arch’s ethos of minimalism and user control.
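
A minimal version of the “print all URLs” idea; a real helper would also expand PKGBUILD variables, which this regex pass deliberately ignores:

```python
import re
import sys

# Crude but useful: list everything URL-shaped in a PKGBUILD so a human
# can see where sources actually come from before building.
URL_RE = re.compile(r"https?://[^\s'\"()]+")

path = sys.argv[1] if len(sys.argv) > 1 else "PKGBUILD"
with open(path, encoding="utf-8") as f:
    for url in sorted(set(URL_RE.findall(f.read()))):
        print(url)
```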

Broader Reflections

  • Several note that similar risks exist in other ecosystems (Fedora COPR, Plasma widget store, npm, etc.).
  • Some express nervousness and plan audits of third‑party repos; others frame the quick detection (within ~2 days) as evidence the AUR community is actively policing new uploads.

LibreOffice slams Microsoft for locking in Office users w/ complex file formats

Lock-in: formats vs people and organizations

  • One view: Microsoft doesn’t “lock in” users; organizations do, by standardizing on Office formats and tools because of convenience and network effects.
  • Others counter that this is exactly what “lock-in” is: schools and workplaces send files only reliably readable with Microsoft Office, forcing recipients onto Windows/Office or VMs.
  • Some argue that today the real lock-in is the wider ecosystem (Teams, Outlook, SharePoint, OneDrive, collaboration features), not just the file formats.

Microsoft strategy and intent

  • Several comments see OOXML and similar moves as a continuation of a long-term strategy: proprietary tech, embrace–extend–extinguish, buying and killing competitors, sabotaging open standards.
  • Others say this is over-ascribing malice: Office’s formats are a byproduct of decades of features, backward-compatibility hacks, and internal chaos. Even Microsoft engineers reportedly hate dealing with it.

OOXML complexity and interoperability

  • People who’ve worked with OOXML describe it as essentially a serialization of Office’s internals, including quirky flags like “behave like Word 95/WordPerfect,” making full reimplementation very hard.
  • The standard is seen as cryptic, inconsistently documented, and entangled with legacy behavior, forcing reverse engineering of old Word versions.
  • Some report that even different Word versions can’t always open each other’s docs correctly; LibreOffice sometimes handles certain old files better than modern Word.

Comparison to PDF and other formats

  • PDF is generally viewed as far more interoperable: many tools can read/generate it, the spec is relatively clean for rendering, though editing is painful and many real-world PDFs are non-conforming.
  • Alternatives suggested: Markdown + pandoc, Asciidoc, HTML, CSV; critics reply these are too primitive or require substantial tooling and expertise to match Office’s capabilities.

LibreOffice vs Office vs Google Docs

  • LibreOffice is praised for doing a surprisingly good job with Office formats, but PowerPoint compatibility and polish are recurring pain points.
  • A major criticism: lack of a first-class, Google-Docs-style web collaboration experience. Collabora/online solutions exist but are described as clunky and resource-heavy.
  • Many organizations now standardize on either Office 365 or Google Workspace, with PDF for interchange; in that world, some argue strict file-format openness “matters less” than collaboration.

Relevance and framing of the complaint

  • Some see LibreOffice’s blog post as rehashing a 20-year-old fight about OOXML; others argue it’s still relevant because those decisions continue to hinder user freedom and FOSS adoption.
  • A side thread criticizes sensationalist headlines using words like “slams” as clickbait that add heat but little light.

H-1B program grew 81 percent from 2011 to 2022

Perceived Quality: U.S. Grads vs H‑1B Workers

  • Some hiring managers report that domestic candidates consistently outperform most H‑1B applicants, describing many H‑1B resumes as formulaic, with high levels of fraud and “body shop” churn.
  • Others argue the H‑1B pool is a superset that includes many elite grads (e.g., top U.S. and foreign universities), so it’s unsurprising that some are world‑class.
  • There’s confusion over what “American CS grads” means: many H‑1B holders earned U.S. degrees and would be counted in both groups.

University Incentives and Foreign Students

  • Several comments note that international students often pay higher tuition and effectively subsidize domestic students, especially as public funding has declined.
  • Some see the large foreign share of elite university enrollment (~40% in some places) as evidence that citizens are being neglected.
  • Others say attracting global talent to U.S. universities has been core to U.S. technological and geopolitical strength.

Wage Suppression, Exploitation, and Ethnic Bias

  • Many see H‑1B as de facto indentured servitude: tied to one employer, afraid to quit, easier to overwork, and often used to undercut domestic wages.
  • Examples are given of substantial pay gaps between visa and citizen workers in similar roles; others counter with personal data showing H‑1Bs being paid above market and cite prevailing‑wage rules.
  • Commenters highlight consulting “mills,” templated resumes, fraudulent interviews, and claims of managers preferring co‑nationals, leading to monoculture teams.
  • Debate persists over whether big tech itself is abusing the system or mainly outsourcing firms are.

Impact on Domestic Careers and Labor Markets

  • Multiple posters say there was never a real shortage of trainable Americans; companies just avoid training and prefer “plug‑and‑play” hires.
  • Concerns: entry‑level roles have dried up, mid‑career hiring dominates, and older engineers (50+) are being pushed out, breaking the junior‑to‑senior pipeline.
  • Some report CS/CE grads facing above‑average unemployment recently; others insist tech genuinely had shortages up to ~2021 and that many H‑1Bs are in more specialized or senior roles.

Program Structure, Data, and Backlogs

  • Several note the statutory cap on new H‑1Bs (65k + 20k master’s) hasn’t changed; the chart’s growth mostly reflects renewals and long green‑card backlogs (especially for India and China).
  • Some call the chart misleading for implying more inflow rather than slower naturalization; others say it fairly shows the growing H‑1B population.
  • Commenters point out large numbers of foreign tech workers also arrive via other programs (e.g., OPT), further affecting the market.

Policy Proposals and Reforms

  • Suggested reforms include:
    • Rank H‑1B applications by total compensation and require guaranteed multi‑year pay.
    • Allow only occupations with rising wages and employment to use H‑1Bs.
    • Tie caps to sector‑specific unemployment; pause or cut H‑1Bs when tech unemployment is high.
    • Impose high, wage‑indexed application fees and grant immediate green cards with full job mobility.
    • Mandate H‑1B salaries significantly above median to eliminate cost arbitrage.
    • Crack down on “H‑1B mills,” tighten wage definitions, and improve enforcement using tax data.
    • Prioritize U.S. university graduates and possibly add country‑level caps or adjustments.

Broader Political and Ethical Framing

  • One camp prioritizes national economic strength and innovation, even if some citizens’ incomes suffer.
  • Another insists the nation’s purpose is to advance its citizens’ well‑being; using immigration to hold down wages is seen as corrupt and destabilizing.
  • Several note that broader public sympathy for displaced tech workers is low, despite years of “STEM push” rhetoric.

Valve confirms credit card companies pressured it to delist certain adult games

Credit card control and lack of alternatives

  • Commenters describe Visa and Mastercard as a de facto global duopoly: processors must follow their rules or face higher “risk” fees or disconnection.
  • Even if a processor is willing, the schemes’ own “restricted lists” around adult content dominate.
  • Regional systems (JCB, UnionPay, Pix, Interac, UPI, Wero, etc.) exist but don’t substitute globally for Visa/MC, so large platforms like Steam have little leverage.
  • Several people note: if Valve kept any targeted titles, it risked losing card payments for all of Steam.

Fraud/chargebacks vs moral crusade

  • One camp claims porn and gambling are high‑chargeback, high‑fraud categories; card brands simply don’t want that risk.
  • Others strongly doubt this explains Steam: generic “STEAM” descriptors, generous refunds, and harsh penalties for chargebacks should keep rates low.
  • Selective targeting of specific porn subgenres (incest, rape, child‑abuse themes) and not all adult games is cited as evidence it’s about “brand safety” and moral pressure, not pure risk.
  • Prior crackdowns on Pornhub, OnlyFans, guns, cannabis, and other controversial but legal sectors via banks and card networks are invoked as precedent.

Nature of the banned content and censorship line‑drawing

  • The removed titles are described as incest/non‑con/“lolicon‑ish” visual novels and similar low‑effort porn games. Some say Valve never should have listed them.
  • Others stress “fiction is not real” and worry about a slippery slope: today fringe porn; tomorrow LGBTQ content, “problematic” kink, or simply any explicit sex.
  • Repeated contrast: graphic murder and torture in mainstream games and TV are fine to monetize; explicit sex, especially taboo themes, triggers financial deplatforming.

Valve’s role vs infrastructure power

  • Some argue Valve used card pressure as cover to do long‑overdue curation of shovelware without openly owning the decision.
  • Others see Valve as constrained: payment networks now function like unregulated utilities that can silently decide which legal content and businesses survive.
  • There’s disagreement over whether private intermediaries should have a moral veto, or whether only democratically enacted law should define what’s off‑limits.

Proposed solutions and their limits

  • Regulatory ideas: treat card networks as common carriers/financial utilities; enforce payment neutrality for all legal commerce; apply antitrust and anti‑cartel law.
  • Technical workarounds suggested: crypto or stablecoins, Steam wallet/points, a separate adult storefront, direct bank rails (ACH/SEPA/FedNow/UPI).
  • Many note practical barriers: user friction, on‑ramps that still depend on Visa/MC, KYC and AML rules, and lack of mass demand.

Broader worries

  • Widespread concern about a “choke point” model where governments and activist groups achieve censorship indirectly by leaning on financial and infrastructure chokepoints.
  • Several connect this to the decline of cash and fear a future where access to payment rails — and thus to speech and livelihood — depends on opaque moral standards set by a handful of firms.

In the long run, GPL code becomes irrelevant (2015)

Core thesis and its limits

  • Article’s claim: in the long run, permissive code wins because corporations will rewrite around GPL; GPL projects get sidelined.
  • Many agree this matches recent trends in popular stacks (MIT/Apache everywhere, GPL largely in niches or “enthusiast” projects).
  • Others say this is overstated or outdated: several big GPL projects (Linux, Git, Blender, Krita, QGIS, MySQL, etc.) remain central.

Corporate behavior and incentives

  • Permissive licenses lower legal friction, so large firms ban GPL entirely to avoid “license contamination” of proprietary code.
  • Economic argument: upstreaming patches once is usually cheaper than maintaining a private fork, so permissive projects often get substantial corporate contributions.
  • Counterpoint: when modifications are strategically valuable, companies will keep them private regardless of license, or rewrite from scratch if GPL-encumbered.

User vs developer freedom, and “fairness”

  • One camp: GPL centers user freedom and creates a “ratchet” that prevents proprietary capture, even if that annoys proprietary developers.
  • Another camp: most users care more about convenience and quality than license fairness; they’ll pick the best product, even if proprietary.
  • Some argue rewriting GPL code just to avoid sharing is wasted human effort; others see making bad actors “pay the full cost” as a feature.

Examples and case studies

  • GCC vs LLVM/Clang and EDG are cited both as evidence that GPL code displaced proprietary compilers and that corporate‑funded permissive projects can later overshadow GPL alternatives.
  • Linux vs BSD: some see this as GPL’s triumph; others attribute it to historical timing and network effects more than license.
  • Web engines: permissive+weak-copyleft engines (Chromium/WebKit/Gecko) and the demise of proprietary engines (IE, EdgeHTML) are seen as supporting the article’s “complexity economics” story.

MPL/LGPL as middle ground

  • Multiple commenters highlight MPL/LGPL as a practical compromise: code remains open, but they don’t force entire downstream applications to be copyleft.
  • Browser engines and some libraries (Qt, GEOS, Servo) are used as evidence that weak copyleft may have the best long-term “survival characteristics”.

Cloud, relicensing, and AI

  • Redis, Elasticsearch, Terraform: permissive cores later relicensed to “source available” to fight cloud providers; community permissive forks (e.g., Valkey) then emerge.
  • Some see this as evidence permissive licensing invites corporate capture; others note the open forks remain viable.
  • A few argue AI makes all licenses weaker by making “clean-room” rewrites cheap, shifting the freedom battle from source code to model weights.

ICE is getting unprecedented access to Medicaid data

Executive Power, Courts, and Eroding Checks

  • Many see ICE’s access as part of a broader executive “power grab,” enabled by decades of expanding presidential authority (Cold War, post‑9/11, Nixon/Reagan precedents).
  • Several comments argue Congress has largely ceded power; budget increases and new authorities turn ICE into a de‑facto domestic security force.
  • Sharp disagreement over the Supreme Court: some say it is a partisan tool enabling executive lawlessness (e.g., immunity, blocking Biden’s loan relief); others reply that the core problem is bad statutes, not the Court, whose job is to apply existing law.

Non‑Citizens, Constitutional Rights, and Due Process

  • Multiple commenters insist most constitutional protections apply to “persons,” not just citizens; others point to laws like the Privacy Act and Patriot Act that explicitly limit protections.
  • There is detailed discussion of “expedited removal”: originally narrow, then progressively expanded, cited as a textbook slippery slope once due process is denied to any category.
  • Some warn that once the state can deport people without meaningful process, it can misclassify even citizens; others counter that courts and evidentiary standards still exist.

Why ICE Wants Medicaid Data & HIPAA Questions

  • Confusion arises because Medicaid is generally for citizens/permanent residents. Participants note:
    • “Qualified non‑citizens,” emergency Medicaid, and state‑funded expansions (e.g., for undocumented children, pregnant people, full coverage in some states).
    • Use cases: identifying undocumented enrollees, relatives of citizen children on Medicaid, or cross‑state inconsistencies and fraud.
  • HIPAA’s law‑enforcement carve‑outs are cited as the legal hook; critics say this shows how weak those protections are when “administrative requests” suffice.

Databases, Surveillance, and Historical Echoes

  • Strong concern that any centralized list (health, immigration, etc.) will be repurposed—today for immigrants, tomorrow for the poor, dissenters, or minorities.
  • Comparisons are made to WWII internment and Gestapo/Stasi‑style list‑building and “disappearances,” with some warning this is a first run at secret police.
  • Others argue LE access to data is long‑standing (e.g., used against CSAM) and not unique to this administration, though abuse risk is widely acknowledged.

Partisan Blame and “Both Sides”

  • Some frame this as uniquely driven by current GOP leadership and an explicitly nativist agenda; others emphasize bipartisan responsibility for building the tools.
  • There is recurring tension between “both parties are the same” cynicism and pushback that current deportation plans and rhetoric mark a qualitative escalation.

I'm Peter Roberts, immigration attorney who does work for YC and startups. AMA

Work authorization & visa pathways

  • For remote work outside the U.S., no U.S. work authorization is needed even if the employer is American.
  • For coming to the U.S., options discussed:
    • O‑1 for “extraordinary ability” (no degree required; criteria-based, not pure “genius”). Cost estimates in the thread range roughly $5–15k, not $100k.
    • H‑1B is easier substantively but constrained by the annual lottery; cap‑exempt roles at universities and research orgs (e.g. Fermilab, NASA) bypass the lottery.
    • Citizenship-specific visas: E‑3 (Australians), TN (Canadians/Mexicans), and others (Chile, Singapore) seen as relatively easy if you have an offer.
    • L‑1 is straightforward via large multinationals after a year abroad; harder but possible for smaller firms.
    • E‑2/E‑1 treaty visas let founders and same‑nationality employees run/build businesses in the U.S.

Green cards, EB categories, and timelines

  • EB‑2 NIW is now backlogged and, in practice, closer in difficulty to EB‑1A; several commenters/answers recommend EB‑1A as the better high‑achiever route.
  • Country of birth, not citizenship, drives EB‑2/EB‑3 queue length; India/China face long waits.
  • Approved I‑140s generally lock in a priority date after 180 days, even if the employer withdraws.
  • H‑1B holders with approved I‑140s can get extensions beyond the 6‑year limit; NIW approval can enable 3‑year extensions.
  • PERM processing is slow but not reportedly more denial‑prone right now; batching recruitment and premium processing can shave some time.

YC and accelerators

  • Attending an accelerator is framed as business activity, not “work,” so B‑1 status is usually appropriate; B‑2 (tourist) is not.
  • Participation can strengthen an O‑1 case but can’t reliably be treated as a qualifying “award” or “membership.”

Enforcement climate, rights, and travel

  • Multiple threads express anxiety about ICE, DHS “revisiting” past green card/citizenship approvals, and denaturalization rhetoric.
  • Advice: carry proof of status (green card or passport+I‑94), know your rights (ACLU resources cited), and have an immigration attorney’s contact.
  • Law technically requires green card holders to carry the card at all times; many don’t, accepting the small legal risk vs. loss/replacement hassle.
  • Some report CBP inspecting devices and social media for green card holders; isolated examples of detention are mentioned.
  • For long stays abroad, reentry permits can protect green card status for years; the 6‑month rule mainly affects naturalization “continuous residence,” not mere LPR validity.

O‑1 criteria and perceived abuse

  • Clarified that O‑1 has A and B subcategories (science/business vs arts/entertainment); entertainers and media figures can qualify.
  • Debate over whether certain podcasters or models should get “extraordinary ability” visas; others counter that the statute explicitly covers such fields and aims to attract high‑impact talent, not only scientists/engineers.
  • Founders can sometimes meet the “high salary or other remuneration” test using equity valued by arm’s‑length fundraising.
  • Premium processing can significantly reduce O‑1 adjudication delays.

Policy, ethics, and labor-market tensions

  • One thread argues H‑1B and other worker programs displace U.S. CS grads; others push back that the comparison mixes entry‑level and senior/specialized roles and that foreign grads face their own structural disadvantages (OPT limits, E‑Verify constraints, high costs).
  • Some commenters criticize “visa as a service” startups and VC behavior as skirting or commoditizing immigration rules.
  • A long political subthread links current immigration crackdowns and denaturalization talk to broader anti‑democratic or “tech bro” political projects; others emphasize still supporting high‑skill immigration while being concerned about citizens’ rights.

Status complications & edge cases

  • Examples discussed: running a startup on H‑1B (nuanced; often requires concurrent H‑1B), transitioning from TPS or J‑1 to F‑1/O‑1, TN for software engineers with CS degrees (some recent CBP friction but still being approved), and green‑card holders living abroad as digital nomads.
  • General pattern: many scenarios are fact‑specific, with repeated advice to get individualized legal consultations rather than rely solely on forum guidance.

French villages have no more drinking water. The reason? PFAS pollution

Scope and Title Framing

  • Several comments challenge the HN post title; they stress the article concerns ~3,500 people in 16 villages, not “all French villages.”
  • Discussion on wording: “these French villages” or “some/16 French villages” is seen as clearer and less sensational.

Source and Extent of Contamination

  • Article excerpt: authorities currently suspect PFAS came from paper mill sludge used as fertilizer near water catchments.
  • Some argue using industrial byproducts as fertilizer is “greedy and stupid”; others respond that circular use of waste is often reasonable, but only if toxicity is properly assessed.
  • Multiple commenters note similar PFAS sludge/fertilizer scandals in Germany, Maine (US), and elsewhere.
  • Contamination pathways via PFAS‑treated paper, packaging, lubricants, toilet paper, and firefighting foams are discussed; exact contributions in this French case remain unclear.

Health Risk, Responsibility, and Systemic Issues

  • Authorities claim no statistical evidence yet of adverse health outcomes in the affected villages, but commenters are skeptical and emphasize long‑term, poorly quantified risks.
  • PFAS and microplastics are framed as the “environmental sin” of this era, comparable to PCBs.
  • Debate over blame:
    • Some highlight corporate greed, regulatory failure, and debt‑driven finance.
    • Others stress human nature, poverty, and global consumption patterns.
  • A minority cautions against pure catastrophism, noting that earlier pollutants (e.g., coal, plastics) also brought large health and welfare gains.

Filtration and Individual Mitigation

  • Links and discussion indicate:
    • Under‑sink and multistage reverse osmosis (RO) systems can remove PFAS effectively.
    • Pitcher and simple carbon filters show inconsistent PFAS removal; some whole‑house systems may even increase PFAS levels.
  • Concerns raised about:
    • RO waste‑water ratios and impracticality for all household uses.
    • Possible microplastic shedding from RO membranes, partially mitigated by post‑carbon stages.
    • Disagreement over whether demineralized/acidic RO water is harmful; evidence is contested.
  • One commenter describes achieving <1 ppt at home via self‑installed filtration and doubts governments will fund large‑scale remediation promptly.

Regulation, Monitoring, and Alternatives

  • Some praise French monitoring and notification, and wonder how many US localities have undetected PFAS issues.
  • Debate on policy responses:
    • Broad regulation of all organofluorines vs. incremental bans on individual molecules.
    • Whether to ban PFAS‑laden sludges from farmland outright, or test and restrict based on measured levels.
  • Wind turbines are briefly discussed as possible PFAS sources via coatings; one linked source calls livestock‑PFAS‑from‑windfarms claims misleading, but commenters note legacy PFAS use in turbine materials is still a concern.

ACA health insurance will cost the average person 75% more next year

Who “ACA Health Insurance” Refers To

  • Commenters clarify this is the individual market sold on ACA marketplaces (e.g., Healthcare.gov), not employer, Medicare, or Medicaid coverage.
  • You can sometimes buy identical plans off-exchange, but only marketplace plans get ACA tax credits.

Why Premiums Are Spiking

  • Core explanation: enhanced COVID-era premium tax credits are expiring, so people lose subsidies and their out-of-pocket premiums jump.
  • Insurers also expect a sicker risk pool as healthier people drop coverage when it becomes more expensive, so they raise base premiums in anticipation.
  • Several note this reflects the collapse of the ACA “three‑legged stool” (guaranteed issue + mandate + subsidies): the individual mandate penalty was removed, and now the enhanced subsidies are expiring.

Confusion About the “75% Increase”

  • Some readers are confused about whether underlying plan prices are rising 75%, or just the consumer’s share after subsidies shrink; one cites a KFF explainer confirming it’s the latter (out-of-pocket premiums). A worked example follows this list.
  • Skeptics argue the NPR example ($60 → $105) is cherry-picked and “meaningless” without showing full plan cost and tax credit details; one calls it scare tactics.
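
To make the distinction concrete, here is a toy calculation. Only the $60 → $105 out-of-pocket figures come from the NPR example discussed above; the full premiums and tax credits are invented for illustration (which is, in effect, the skeptics’ complaint):

```python
# Toy illustration of the subsidy math behind the "75% increase" confusion.
# Only the $60 -> $105 out-of-pocket figures come from the NPR example; the
# full premiums and tax credits below are invented for illustration.

def out_of_pocket(full_premium: float, tax_credit: float) -> float:
    """What the consumer actually pays per month after the ACA tax credit."""
    return max(full_premium - tax_credit, 0.0)

before = out_of_pocket(full_premium=450.0, tax_credit=390.0)  # hypothetical
after = out_of_pocket(full_premium=470.0, tax_credit=365.0)   # hypothetical

print(f"before: ${before:.0f}/mo, after: ${after:.0f}/mo")
print(f"consumer's increase: {(after / before - 1) * 100:.0f}%")
# before: $60/mo, after: $105/mo, consumer's increase: 75% -- even though the
# underlying plan price rose only ~4% in this made-up example.
```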

Real-World Cost Experiences

  • Reported ACA premiums range from ~$300/month for a single bronze plan to $3,600/month for an unsubsidized platinum family plan, with debate over whether high-tier plans are financially rational compared with high-deductible bronze plans.
  • Multiple people stress that employer plans routinely cost $2,000–$3,000+/month in total, but employees often see only their small contribution.

Structural Problems Beyond the ACA

  • Strong sentiment that tying insurance to jobs is “bogus”; debate over whether transitioning off employer coverage is politically feasible.
  • Recurrent themes:
    • Hospital and practice consolidation and private equity ownership.
    • Rural hospital closures driven by low Medicaid reimbursement and looming Medicaid cuts.
    • High administrative overhead, PBM dynamics, and opaque billing.
    • Rapid expansion of upscale medical facilities amid fears of an eventual “crash.”

Politics, Messaging, and Alternatives

  • Many note widespread public confusion that “Obamacare” and the ACA are the same, and argue labeling was used as a partisan/racial wedge.
  • Blame for current cuts and price spikes is sharply partisan; some predict right-wing media will still blame “Obamacare” itself.
  • Suggested reforms include: Medicare buy‑in, state‑level universal care (starting with blue states), or mandating employers convert premium spending into wages.
  • Skepticism surrounds “Medi‑Share”/sharing ministries; one link portrays severe consumer risk since they’re not true insurance.

NYPD bypassed facial recognition ban to ID pro-Palestinian student protester

Policy vs. “Rights” and Illegality

  • Some argue the NYPD simply violated an internal/administrative policy, not a constitutional right, so “bypassed” is more accurate than “broke the law.”
  • Others counter that sidestepping democratically established limits on police tech use is effectively a rights violation (privacy, due process), even if not yet codified in higher law.
  • There’s concern that such policies exist partly to block real legislation while remaining easy for police to ignore.

Evidence, Misidentification, and Due Process

  • Several commenters stress the charges were dismissed with prejudice and the judge noted there was virtually no corroborating evidence beyond a complainant’s word.
  • Accusations include doctored DMV photos and failure to obtain potentially exculpatory medical records, seen as serious prosecutorial misconduct.
  • Debate: one side emphasizes the alleged rock-throwing as serious assault; the other notes that weak or tainted evidence and illegal methods undermine justice, even if a crime occurred.

Facial Recognition Ban and Loopholes

  • Core issue: NYPD used a fire marshal with Clearview AI access to do what they themselves were barred from doing.
  • Commenters liken this to “laundering” data requests through adjacent agencies or foreign partners to evade domestic limits.
  • Some call for termination or even criminal liability for such end-runs, warning that allowing loopholes makes bans meaningless.

FDNY’s Role and Investigations

  • Commenters question why fire marshals have facial‑recognition access at all; suggested justifications include arson investigations and identifying witnesses or victims.
  • Others argue crime investigation belongs under tightly regulated police units, not fire departments with looser oversight.
  • A counter-view prefers specialized, non-police investigators (including for fires and mental health incidents), to avoid over-centralizing power in police.

Clearview AI, Social Media, and Chilling Effects

  • Alarm that a private firm can match faces from protest footage to scraped school and social photos, then tie that to government ID records.
  • Even those who avoid posting photos note they can’t control others posting images that later get scraped.
  • “Just don’t post photos” is criticized as effectively forcing people to self-censor online expression and assembly—seen as a First Amendment chilling effect, even if current doctrine hasn’t caught up.

Protests, Hate Crime Framing, and Selective Enforcement

  • Some think the hate-crime context justifies strong investigative tools; others point out the facial-recognition ban has no such exception.
  • Multiple comments view this as part of a broader pattern: aggressive state response to pro‑Palestinian or anti‑war demonstrations, contrasted with more lenient treatment of other causes.
  • Historical references (Kent State, MOVE, 9/11-era measures) are invoked to argue that the state reliably escalates surveillance and force against anti‑imperialist movements.

Broader Surveillance-State Concerns

  • Many see this case as a warning about ubiquitous facial recognition, data brokers, and AI analysis enabling pervasive tracking and political repression.
  • Disagreement exists over whether police should ever have access to such tools; there’s stronger consensus that, where rules do exist, police cannot be trusted to police their own violations of them.

Ask HN: Any active COBOL devs here? What are you working on?

Where COBOL Is Actively Used

  • Heavy use in banking, insurance, government (tax, pensions, unemployment, health insurance, lotteries), education payroll, and healthcare “patient accounting.”
  • Common patterns:
    • Nightly/batch jobs (ACH, claims, billing, pensions, inventory, replenishment).
    • Online transaction processing via CICS/IMS.
    • Backends: DB2, IMS (DL/I), VSAM, sequential datasets, sometimes SQL databases.
  • Not strictly mainframe: PeopleSoft, vertical ERPs, and products like Global Shop use Micro Focus/AcuCOBOL on Windows, Linux, AIX, etc.

Modernization and Migration Efforts

  • Many are reverse‑engineering COBOL to:
    • Move to Java/.NET/TypeScript/low‑code/COTS systems.
    • Consolidate multiple mainframes or re‑host COBOL (e.g., Micro Focus on x86/private cloud).
  • Real difficulty is not COBOL syntax but:
    • 30–40 years of undocumented business logic.
    • Tight coupling and huge “uber‑monolith” systems.
  • Multiple stories of failed or massively over‑budget migrations (SAP/ERP replacements, AS/400 rewrites), leading some orgs to build new systems in‑house and use old devs mainly as domain historians.
  • Bridging tech: host gateways, custom Java/JS adapters, APIs, Kafka, DB‑driven integration; a toy adapter sketch follows below.
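
To illustrate the “custom adapter” bridging pattern in the last bullet, here is a minimal, hypothetical Python sketch that parses the kind of fixed-width record a COBOL copybook describes; the field layout and picture clauses are invented for illustration:

```python
# Hypothetical sketch of a bridging adapter: parse a fixed-width COBOL batch
# extract into native objects. The copybook-style layout below is invented.
from decimal import Decimal

# (field name, offset, length) -- as a copybook like
#   05 ACCOUNT-ID  PIC X(10).
#   05 CUST-NAME   PIC X(30).
#   05 BALANCE     PIC 9(7)V99.   <- V = two implied decimal places
# would define it.
LAYOUT = [
    ("account_id", 0, 10),
    ("name", 10, 30),
    ("balance", 40, 9),
]

def parse_record(line: str) -> dict:
    rec = {name: line[off:off + ln].strip() for name, off, ln in LAYOUT}
    # Re-insert the decimal point that the V in the picture clause implies.
    rec["balance"] = Decimal(rec["balance"]) / 100
    return rec

sample = "0000012345" + "JANE DOE".ljust(30) + "000123456"
print(parse_record(sample))
# {'account_id': '0000012345', 'name': 'JANE DOE', 'balance': Decimal('1234.56')}
```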

Salaries, Careers, and Culture

  • Several reports that routine COBOL roles pay below typical software salaries; some consultancies hire cheap juniors while selling “COBOL scarcity.”
  • High rates exist mainly for niche experts on idiosyncratic, critical systems.
  • Many mainframe/COBOL devs are older, long‑tenured, extremely business‑process‑oriented, and often not active in online tech communities.
  • Cultural gaps noted:
    • Less exposure to modern security, tooling, and cloud paradigms.
    • Very strong system reliability, efficient keyboard‑driven workflows, and deep domain knowledge.

LLMs, Tooling, and Learning

  • Several commenters successfully used LLMs to generate or analyze COBOL, especially boilerplate batch code; others stress that the surrounding ecosystem (mainframes, JCL and other control languages) remains the hard part. A minimal sketch of the analysis workflow follows this list.
  • AI is being explored for code migration and test generation but seen as requiring heavy human oversight.
  • Learning/on‑ramping resources mentioned: IBM Z Xplore, Coursera mainframe courses, IBM Redbooks, mainframe emulators, and formal training programs.
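
One hedged sketch of the “LLM analyzes COBOL” workflow, using the OpenAI Python client purely as an example; the model name, prompt, and COBOL snippet are all assumptions, and any output would need the heavy human review the thread emphasizes:

```python
# Hypothetical sketch of using an LLM to summarize a COBOL paragraph. Model
# name, prompt, snippet, and client choice are assumptions, not anything a
# commenter specified; treat the output as a draft needing human review.
from openai import OpenAI

cobol_snippet = """\
COMPUTE WS-NET-PAY = WS-GROSS-PAY - WS-TAX - WS-PENSION.
IF WS-NET-PAY < ZERO
    MOVE ZERO TO WS-NET-PAY
END-IF.
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever you use
    messages=[
        {"role": "system",
         "content": "Explain what this COBOL code does, in plain English."},
        {"role": "user", "content": cobol_snippet},
    ],
)
print(response.choices[0].message.content)
```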

Developer Experience and Perception

  • COBOL characterized as:
    • Verbose, boilerplate‑heavy, excellent for structured record processing and database‑centric logic.
    • Capable of surprisingly modern UIs (SCREEN SECTION, GUI controls in some compilers).
  • Opinions range from “boring, hated it” to “actually a decent 4GL‑like environment; not as bad as its reputation.”