Hacker News, Distilled

AI-powered summaries of selected HN discussions.

Using Python for Scripting

Dependency Handling for Python Scripts

  • Core pain point: people want single-file Python scripts with a shebang, but non-trivial scripts need external packages.
  • Some accept a simple requirements.txt or wrapper scripts that activate a venv/conda env as “good enough.”
  • Others want inline dependency declarations, similar to C# script comments or Ruby’s inline Bundler.
  • uv receives a lot of praise: it supports PEP 723 inline script metadata (a "# /// script" comment block declaring dependencies = [...]), auto-creates disposable venvs, caches packages, can be used via a shebang, and can manage Python versions; a minimal header sketch follows this list.
  • Downsides of uv: not installed everywhere (undermines “Python is everywhere”), and the first run needs internet. Workarounds mentioned include shiv (zipapp bundling) and pipx-based exec wrappers.
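
A minimal sketch of the single-file pattern commenters describe, assuming uv is installed and a GNU env that supports -S; the requests dependency is purely illustrative:

```python
#!/usr/bin/env -S uv run
# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "requests",
# ]
# ///
# uv reads the PEP 723 block above, builds a disposable cached venv with the
# listed dependencies, then runs the script inside it.
import requests

print(requests.get("https://api.github.com").status_code)
```

Making the file executable and running ./script.py is enough after that; the first run is also where the "needs internet" caveat bites, since dependencies are downloaded into uv's cache.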

Python vs Bash for Scripting

  • Several argue that anything non-trivial should move from bash to a “real” language; Python is much more readable, maintainable, and powerful.
  • Critics ask what bash can do that Python (stdlib + subprocess) cannot; consensus is that bash scripts often get complex because of bash, not despite it.
  • Counterpoint: system package managers make bash-based tooling straightforward, while Python’s multiple packaging tools (pip, pipx, conda, poetry, uv…) are seen by some as confusing and fragile.

Tooling: Nix and Other Ecosystem Solutions

  • Nix’s nix-shell shebang and flakes are praised for making any-language scripts (including Python) fully reproducible with pinned dependencies; a shebang sketch follows this list.
  • Pushback: Nix’s learning curve and perceived complexity make it overkill “just to auto-install some dependencies.”
  • Some defend Nix as no worse than other package managers when used simply, and very robust once understood.
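
For comparison, a minimal nix-shell shebang of the kind praised above; the package expression is an illustrative assumption, and pinning nixpkgs (via -I or a flake) is what buys the full reproducibility commenters mention:

```python
#!/usr/bin/env nix-shell
#! nix-shell -i python3 -p "python3.withPackages (ps: [ ps.requests ])"
# nix-shell interprets the second shebang line: -i selects the interpreter,
# -p builds an environment containing Python plus the listed packages.
import requests

print(requests.get("https://example.com").status_code)
```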

Python Stdlib, subprocess, and Helpers

  • Many comments emphasize how strong the Python stdlib is for scripting: HTTP, sqlite, tkinter, JSON, diffs, globbing, etc., with no extra packages.
  • subprocess.run is highlighted as the “workhorse” for calling external commands, with its check, capture_output, and text options; asyncio.subprocess is suggested for concurrency (a short example follows this list).
  • Third-party helpers like sh, Plumbum, pyp/pawk are discussed; sh receives criticism for unsafe defaults (TTY behavior, hidden stderr).
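
A short stdlib-only illustration of the subprocess.run pattern described above (the git invocation is just a placeholder):

```python
import subprocess

# check=True raises CalledProcessError on a non-zero exit code,
# capture_output=True collects stdout/stderr, text=True decodes bytes to str.
result = subprocess.run(
    ["git", "status", "--short"],
    check=True,
    capture_output=True,
    text=True,
)
print(result.stdout)
```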

Portability, Stability, and Alternatives

  • Some challenge the article’s claim that Python 3 is on “basically every machine”; in practice there may be multiple versions or none.
  • Concerns: unmaintained Python scripts often break on new interpreter versions; dependency installation without root can be painful.
  • Alternatives mentioned for “scripting”: Rust (XTask), Go, Nim, Janet, JavaScript, xonsh, Nushell, Perl (with mixed reception). Static binaries are valued where long-term, dependency-free deployment matters.

GitHub Actions has a package manager, and it might be the worst

Maintenance and Strategic Direction

  • Multiple commenters report core GitHub-maintained actions (e.g., checkout, cache, setup-*) being archived or closed to contributions, despite being central to most workflows.
  • A quoted GitHub note says resources are being redirected to “other areas of Actions,” which many interpret as deprioritizing maintenance in favor of AI/LLM efforts and Azure migration.
  • Some argue this isn’t exactly “dropping support” but refusing external contributions and only making internal, roadmap-driven changes.

Security, Package-Manager Behavior, and Lockfiles

  • Strong agreement that Actions behaves like a package manager without lockfiles: action versions can change under stable-looking tags or branches, so pipelines can break or be compromised without repo changes.
  • Pinning to SHAs is recommended in docs but:
    • Does not lock transitive dependencies.
    • Is often ignored in practice (most users pin to tags like v1).
    • Can still break when runners or APIs change.
  • Examples of insecure practices: actions referencing master branches, unpinned scripts or binaries from external URLs.
  • Some use scanners (e.g., Zizmor) and hardening actions, or vendor actions into their own repos, but these are seen as fragile workarounds.

Secrets and CI/CD Threat Model

  • Long subthread debates whether CI/CD should handle secrets at all:
    • One side: runners should get capabilities (OIDC, role assumption, secure enclaves) instead of raw secrets.
    • Others: in practice, deployments, signing, cross-cloud testing, license servers, etc. still require secret-like material; CI must manage it securely.
  • GitHub’s OIDC integration with clouds is praised as one of the few well-executed security features, but still seen as “secrets all the way down.”

Alternatives, Runners, and Vendor Lock-in

  • Suggestions: GitLab CI, CircleCI, Jenkins, Buildkite, TeamCity, Forgejo, Onedev, Woodpecker/Drone, ArgoCD; opinions are mixed, many say none are “actually good.”
  • Third-party runners (Depot, Blacksmith) are praised as faster/cheaper than GitHub-hosted runners while keeping GitHub as UI/trigger.
  • Some highlight “trusted publishing” flows (PyPI, npm) as effectively tying major ecosystems to GitHub/GitLab CI and limiting competition.

Workflow Design, YAML, and Local-First Approaches

  • Several argue most marketplace actions are unnecessary wrappers; prefer Makefiles, shell scripts, or custom Docker images invoked from CI so they run identically locally.
  • Frustration with YAML-based pipelines and lack of first-class local execution; tools like Nix, Dagger, mise, Taskfile, and act are mentioned as ways to regain determinism and local parity.
  • Overall sentiment: Actions is convenient “free compute” tightly integrated with GitHub, but brittle, opaque, and under-maintained.

Microservices should form a polytree

Polytree vs. DAG as a Design Constraint

  • Many commenters agree avoiding directed cycles is important, but argue a general DAG is sufficient; requiring a polytree, i.e. a DAG whose underlying undirected graph is also acyclic so that at most one path exists between any two services, is seen as over‑constraining (a small sketch of the difference follows this list).
  • Critics say the article doesn’t convincingly explain why undirected cycles are harmful beyond standard diamond‑dependency issues (e.g., inconsistent views of upstream state).
  • Some view the “polytree” angle as graph‑theory overreach: interesting intellectually, but not backed by empirical evidence or realistic examples.
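
To make the distinction concrete, a small stdlib-only sketch that checks whether a service graph is a DAG and whether it is additionally a polytree (no undirected cycles); the example services are invented:

```python
from collections import defaultdict, deque

def is_dag(edges):
    """Kahn's algorithm: True if there are no *directed* cycles."""
    indeg, adj, nodes = defaultdict(int), defaultdict(list), set()
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
        nodes |= {u, v}
    queue = deque(n for n in nodes if indeg[n] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(nodes)

def is_polytree(edges):
    """A polytree additionally has no *undirected* cycles (union-find check)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:  # edge joins already-connected services -> undirected cycle
            return False
        parent[ru] = rv
    return is_dag(edges)

# A classic diamond: two services both depend on a shared billing service.
edges = [("frontend", "auth"), ("frontend", "orders"),
         ("auth", "billing"), ("orders", "billing")]
print(is_dag(edges), is_polytree(edges))  # True False: fine as a DAG, not a polytree
```

The diamond in the example is exactly the shape the shared-infrastructure objections in the next subsection revolve around.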

Shared Infrastructure and Cross‑Cutting Concerns

  • The strongest pushback targets the idea that no service should be depended on by multiple parents.
  • Common shared services—authN/authZ, logging, metrics, configuration, feature flags, storage, image hosting, notifications, DNS—naturally break the polytree property.
  • Proposals to work around this include:
    • Doing auth at a gateway and passing claims downstream (JWT, enriched headers); a rough sketch of this follows the list.
    • Treating logging and metrics as “fire‑and‑forget” to a bus, not hard dependencies.
  • Many argue duplicating such services per consumer just to preserve a polytree is unrealistic and wasteful.
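
A hedged sketch of the gateway idea, assuming the third-party PyJWT package and a shared HS256 secret purely for illustration (real setups would typically verify asymmetric keys via OIDC/JWKS):

```python
import jwt  # PyJWT, assumed installed: pip install pyjwt

SECRET = "demo-secret"  # illustrative only; real gateways would use RS256/JWKS

def claims_to_headers(bearer_token: str) -> dict:
    """Gateway-side: verify the token once, then pass claims downstream as
    plain headers so internal services never call the auth service directly."""
    claims = jwt.decode(bearer_token, SECRET, algorithms=["HS256"])
    return {
        "X-User-Id": str(claims["sub"]),
        "X-User-Roles": ",".join(claims.get("roles", [])),
    }

token = jwt.encode({"sub": "42", "roles": ["admin"]}, SECRET, algorithm="HS256")
print(claims_to_headers(token))  # {'X-User-Id': '42', 'X-User-Roles': 'admin'}
```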

Failure Isolation, Service Types, and Siloing

  • Several comments pivot to the real value of microservices: isolating failure modes and enabling independent scaling, rather than obeying a specific graph shape.
  • One concrete pattern: classify services into infrastructure, domain/business, and orchestration, with allowed call directions (orchestration → domain → infra) to avoid cycles and clarify responsibilities.
  • Others describe success stories where auth was made stateless (JWT) so an auth outage didn’t take down other APIs.

Enforceability and Evolution Over Time

  • Multiple people doubt such topological purity can be maintained as business needs evolve (e.g., GDPR‑style “delete all user data” flows, cross‑cutting features).
  • Even in DAGs, accidental cycles arise via helpers and libraries (services effectively calling themselves).
  • IAM/network policy can restrict who may call whom, but doesn’t solve the design‑level tension between purity and changing requirements.

Analogies, Alternatives, and Scope of “Micro”

  • The “tree” idea is likened to Erlang/Elixir supervision trees, actor hierarchies, and general acyclic dependency rules in monoliths and OO inheritance.
  • Some stress that microservices should be relatively coarse, business‑domain‑aligned units, not tiny functions; edges (service contracts) are expensive and should be minimized.
  • Overall sentiment: acyclic, one‑way dependencies and clear boundaries are widely supported; strict polytree topology is seen as occasionally useful as a mental model but too rigid for many real systems.

Palantir could be the most overvalued company that ever existed

Historical overvaluation and metrics

  • Commenters compare Palantir to extreme historical bubbles, especially the South Sea Company, whose market cap allegedly reached several times Britain’s annual GDP while producing little real value.
  • There’s pushback on comparing company market cap (a “stock”) to GDP (a yearly “flow”); some say it’s a misleading but quick way to convey scale, others argue it’s as meaningless as comparing a river’s flow to a dam’s volume.
  • Crypto is cited as an example of how tiny float + headline “market cap” can create absurd valuations.

Tesla, bubbles, and P/E

  • Tesla is repeatedly raised as a rival for “most overvalued,” with its very high P/E and heavy dependence on EV sales, subsidies, and accounting gains (e.g., Bitcoin).
  • Some argue wild market caps are a hallmark of bubbles; others counter that for liquid stocks, market price is still the best available measure of value, even if imperfect.

What Palantir actually does

  • Several people ask what the “magic sauce” is.
  • Descriptions from the thread:
    • Platform (e.g., Foundry) that ingests messy organizational data, cleans and integrates it into a “single pane of glass,” then surfaces analytics and operational tools.
    • Heavy use of “forward deployed engineers” (effectively high-end consultants) embedded with clients—especially governments—to understand domain problems and build bespoke workflows.
  • Skeptics say the tech isn’t fundamentally unique versus other enterprise data/analytics/ERP stacks; the differentiation is branding, political connections, and willingness to do sensitive surveillance/defense work.

Political, ethical, and geopolitical angles

  • Many comments focus on Palantir as an arm of the security state: ICE, intelligence agencies, military, and potentially an “American social-credit system.”
  • Some fear it becoming an “OS for government” with deep lock-in, enabling price hikes and austerity elsewhere in the public sector.
  • Others argue its global market is large and not EU-dependent, but note competition from Chinese surveillance vendors and trust issues in regions wary of US neo-colonial behavior.
  • Ethical investors describe intentionally excluding Palantir despite defense exposure in their portfolios.

Valuation, growth assumptions, and investor behavior

  • The article’s claim that Palantir must grow revenue 15x over 25 years at ~35% annually is flagged as a math error; commenters recalculate 15x over 25 years as roughly an 11.4% CAGR, noting that 35% compounded for that long would instead yield growth on the order of 1,500x (the arithmetic is checked in the snippet after this list).
  • Some call the analysis “dumb” for assuming constant margins and ignoring software operating leverage; others reply Palantir may behave more like a services firm if it relies on ongoing data-cleaning labor.
  • P/E-based screens show Palantir isn’t even the most extreme by that metric; many smaller names look worse.
  • A recurring theme is that Palantir, like Tesla or certain defense firms, attracts ideological investors who buy into a political/military worldview, not just cash flows—seen as both a strength for hype and a risk for long-term returns.
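
The disputed arithmetic is easy to sanity-check:

```python
# 15x revenue growth over 25 years implies roughly an 11.4% CAGR...
print(15 ** (1 / 25))   # ≈ 1.114
# ...while 35% compounded annually for 25 years is growth well beyond 1,000x,
# nowhere near a mere 15x.
print(1.35 ** 25)       # ≈ 1.8e3
```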

Perceptions of leadership and brand

  • The CEO’s highly animated public appearances and founders’ extreme political/religious rhetoric are cited as red flags by some, but also as part of a cultivated “edgy,” military-coded brand that resonates with a certain investor base.

Reactions to the article and media

  • Multiple commenters complain the linked article is effectively an ad, with intrusive sponsorship disguised as a bullet point, undermining its credibility.
  • Some see broader “anti-tech hysteria” in the thread; others frame the criticism as rational skepticism about surveillance capitalism and bubble valuations.

Socialist ends by market means: A history

Marginalism, Markets, and Prices

  • One thread debates whether marginalism fundamentally depends on market prices.
  • Consensus: marginalism is about choices over concrete goods; it can exist without explicit prices, but quantitative accounting (profit/loss, costs) requires prices.
  • Some participants reference attempts to synthesize marginalism with labor theories of value: marginal utility dominating short term, labor costs anchoring long-term prices in competitive, “freed” markets.

Wage Slavery, Class Conflict, and Political Dichotomies

  • Several comments argue that “Left vs Right” is a distraction from the real divide: wealthy vs poor.
  • “Wage slavery” is contested:
    • One side sees it as describing structural power imbalances and lack of real alternatives for workers.
    • Another side dismisses it as rhetorically inflated, stressing individual responsibility (saving, job mobility) and legal freedoms.
  • There’s friction over whether “options” are meaningful if all options still involve exploitative wage relations.

Markets vs Capitalism; Co‑ops and Mixed Systems

  • Multiple commenters stress that markets and capitalism are not identical.
  • Examples from rural areas (ISPs, stores, gas stations) and large federated co‑ops are used to show that shared ownership can function inside market economies.
  • Some see “markets with social ownership” as a win for classical liberalism: once markets are accepted, they view the result as de facto capitalism, with the “socialist” label amounting mostly to rebranding.

Scale, Infrastructure, and Regulation

  • Large-scale firms are discussed through railroads, highways, container shipping, and computing:
    • One view: technological change and economies of scale naturally drive planetary-scale firms, making co‑ops uncompetitive.
    • Counterview: consolidation often depended on state support (rail regulation, sanitary laws, highway subsidies), which advantaged large firms and undercut smaller competitors.
  • Disagreement over whether modern tech has raised optimal firm size “above planetary scale” or whether administrative overhead and competition remain limiting.

Social Safety Nets, Crime, and Welfare Design

  • Debate around social safety nets:
    • One side sees welfare as necessary to prevent poverty-driven crime and support those who can’t work.
    • Another points to large fraud cases as evidence of perverse incentives, arguing enforcement is the real problem, not welfare itself.
  • International examples (Australia, Israel, South Africa, Singapore) appear as contrasting models of pensions, work requirements, and crime.

Central Planning, Natural Monopolies, and State vs Market Roles

  • Some comments equate state control over production with suppressing market signals, arguing that planners cannot match decentralized price information.
  • Others note that “communism” doesn’t logically require strict central planning, only that it historically coincided with it.
  • Natural monopolies (rail, roads, power lines, last-mile internet) are debated:
    • One side: physical and timing constraints make real competition limited.
    • Other side: networks, multimodal transport, and backup channels still provide alternatives, even if costlier or imperfect.

Human Nature, Incentives, and Socialism’s Feasibility

  • A recurring theme is whether socialism depends on “reprogramming” humans to be less self-interested.
  • Critics say any system where some get more for doing less will generate resentment and breakdown; they see this as universal, not unique to socialism.
  • Supporters reply that:
    • All systems redistribute; capitalism does it via philanthropy, inheritance, and state-backed wealth.
    • Human behavior is strongly shaped by upbringing, culture, and institutions, not fixed selfishness.
    • The article’s vision isn’t about abolishing self-interest but redirecting it in non-capitalist property structures.

Corruption, Power-Seekers, and System Stability

  • One worry: a minority of highly exploitative personalities (psychopaths/narcissists) will capture any hierarchy.
  • In capitalism they become CEOs, politicians, celebrities; in socialism, they may become corrupt officials, potentially destabilizing the system more deeply.
  • Some participants see no convincing design yet that harnesses these people’s drive without letting them wreck egalitarian structures.

Co‑ops, Ownership Models, and Examples

  • Co-ops are discussed as serious, scalable institutions, not just niche hippie projects.
  • Large worker co‑ops are cited as evidence that worker ownership can coexist with complex, globalized operations, often benefiting workers more directly than shareholder-driven firms.

Meta‑Critique of Economic Theorizing

  • One thread expresses frustration with what’s seen as “navel-gazing” about Smith, Marx, and labels.
  • This view asks for empirical modeling, simulations, and experiments rather than endless reinterpretation of canonical theorists and ideological branding.

The era of jobs is ending

Plausibility of “end of jobs”

  • Some argue there is effectively infinite work; increased efficiency just shifts what humans do.
  • Others counter that if AI/robots can do nearly all tangible and commercial work better and cheaper, most humans become economically redundant.
  • Skeptics note physical bottlenecks (energy, land, materials) and that many tasks (plumbing, construction, healthcare, teaching, judgment-heavy roles) are far from full automation.
  • Factory veterans dispute “lights-out” rhetoric, saying highly automated plants still rely heavily on skilled human troubleshooting.

Automation, R&D, and human capability

  • One line of debate: can most people pivot to R&D or creative work once routine jobs disappear?
  • One side cites decades of academic and psychological data suggesting only a minority can do high-level R&D.
  • The other side argues current data is biased by existing life constraints; freed from survival work, many more could contribute intellectually, though evidence is unclear.

Income, UBI/UBS, and economic structure

  • Central worry: if jobs vanish, how do people access food, housing, and services, and who sustains demand for production?
  • UBI and variants (GBI, universal basic services) are proposed; some point to small-scale trials as promising, others note most are means-tested (GBI) rather than truly universal.
  • Concerns include inflation/repricing of everything to soak up UBI, and who provides/incentivizes services if income is decoupled from work.
  • Some argue that in a post-scarcity, highly automated economy, providing basics might be cheaper than managing unrest.

Power, inequality, and social stability

  • Many fear extreme capital concentration: owners of AI/robotic means of production vs a surplus population with no bargaining power.
  • Scenarios range from mass deprivation and “serf classes” to violent unrest, sabotage of critical infrastructure, or de facto culling via poverty.
  • Others claim that at very high automation levels, excluding most humans is unstable; access to automated production becomes a matter of survival and thus politics, not markets.

Meaning, consumerism, and human behavior

  • Some envision a “lives, not jobs” era where people do work for fulfillment, not survival.
  • Critics point to real-world “abundance pockets” (deindustrialized regions with welfare + cheap entertainment) where many default to drugs and aimlessness, echoing Huxley’s “soma.”
  • There’s disagreement whether most people, freed from necessity, would pursue higher aspirations or simply sink into low-effort consumption.

Bag of words, have mercy on us

Metaphors and Mental Models

  • Many object to “bag of words” as a metaphor: it’s already a specific NLP term, sounds trivial, and doesn’t match how people actually use LLMs.
  • Alternatives proposed: “superpowered autocomplete,” “glorified/luxury autocomplete,” “search engine that can remix results,” “spoken query language,” or “Library of Babel with compression and artifacts.”
  • Some defend “bag of words” (or “word-hoard”) as deliberately anti-personal: a corrective to “silicon homunculus” metaphors, not a technical description.

Anthropomorphism and Interfaces

  • Commenters report constantly seeing people treat LLMs as thinking, feeling agents, despite repeated explanations that they are predictors.
  • Chat-style UIs, system prompts, memory, tool use, and human-like tone are seen as major anthropomorphizing scaffolding that hides the underlying mechanics.
  • Some argue a less chatty, more “complete this text / call this tool” interface would reduce misplaced trust and quasi-religious attitudes.

Capabilities vs. “Just Autocomplete”

  • Disagreement over whether “just prediction” is dismissive:
    • Critics: next-token prediction on text ≠ modeling the physical world or doing reliable reasoning; models lack stable world models, meta-knowledge, and consistent self-critique.
    • Defenders: prediction is central to human cognition too; given scale, tool use, feedback loops and agents, prediction-plus-scaffolding may cross into genuine problem solving.
  • Examples cited both ways: impressive math/competition performance, code generation for novel ISAs vs. brittle reasoning, hallucinations, and inconsistency under minor prompt changes.

Human Cognition Comparisons

  • Long subthread on whether all thinking is prediction: references to predictive processing / free-energy ideas vs. objections that this redefines “thinking” so broadly it loses usefulness.
  • Some argue we don’t understand human thought or consciousness well enough to assert LLMs categorically “don’t think”; others say lack of learning at inference time, motivation, and embodiment are decisive differences.

Ethics, Risk, and Social Roles

  • Underestimating LLMs risks missed opportunities; overestimating them risks delusion, over-delegation in high-stakes domains, and possible moral misclassification (either of humans or models).
  • Economic concern: many “word-only” roles may be replaceable if a “magic bag of words” is good enough for employers.
  • Creative concern: several insist they value works because humans made them, akin to the “forklift at the gym” analogy; others see AI as acceptable when the goal is output, not personal growth.

Interpretability and Inner Structure

  • Interpretability work (e.g., concept neurons, cross-lingual features, confidence/introspection signals) is cited as evidence of internal structure beyond naive bag-of-words.
  • Skeptics counter that much of this research is unreviewed, commercially motivated, and doesn’t yet demonstrate human-like understanding or robust world models.

How I block all online ads

HN Title Handling

  • Some comments note HN’s auto-removal of “How/Why” from titles as an old anti-clickbait measure.
  • Others argue this often degrades clarity (e.g., “How I block all online ads” vs “I block all online ads”) and see calling it out as a way to get moderators to revert it.

Browsers and Core Extensions

  • Common “baseline” setup: Firefox + uBlock Origin; many say this almost eliminates ads and trackers.
  • Others prefer Brave (often for speed and built-in blocking) but dislike its Chromium base or its crypto features.
  • A few report Firefox instability or slowness vs Chromium; others say Firefox is rock-solid for them.
  • Edge is mentioned as still accepting Manifest V2 extensions, so uBlock Origin works there.
  • Several recommend additional extensions: SponsorBlock (skip in‑video sponsors), DeArrow (de-clickbait titles/thumbnails), Consent-O-Matic (auto-reject cookie banners), and user-agent switchers/Chrome Mask to bypass “Chrome-only” sites.

DNS / Network-Level Blocking

  • Many use Pi-hole, AdGuard Home, NextDNS, ControlD, Mullvad DNS, etc. to block ads and trackers across entire networks and devices (including TVs and mobile apps).
  • Debate over self-hosted (Pi-hole/AdGuard on router/VPS) vs managed (NextDNS/ControlD): tradeoffs in cost, customization, reliability, and effort.
  • DNS blocking is praised for simplicity but noted as weaker against “native”/first-party ads (e.g., some streaming services, Twitch, YouTube, in-app SDKs) and occasionally breaking services or links.

YouTube, Streaming, and TV Apps

  • Heavy focus on YouTube:
    • Strategies: uBlock Origin + SponsorBlock (browser), MPV + yt-dlp + SponsorBlock, FreeTube, NewPipe, Invidious, ReVanced, SmartTube, iSponsorBlockTV, Apple TV/Home Assistant setups.
    • Many still pay for YouTube Premium and then also use blockers or ReVanced for UX fixes, background play, and hiding Shorts.
    • Others refuse to pay on principle (paywalling background play, UI churn, AI features) and rely purely on blocking/downloading.
  • Twitch and other platforms: AdGuard Extra, Twire, SmartTube, DNS-level blocking, or simply abandoning services when ads become too intrusive.

“Click All Ads” / AdNauseam Idea

  • Some argue blocking is insufficient and advocate “poisoning” ad profiles by auto-clicking ads (AdNauseam or similar concepts) to waste budgets and undermine tracking.
  • Others say such clicks are trivial to detect as fraud and mostly filtered, calling the approach snake oil.
  • There is discussion of Google’s early ban on AdNauseam and whether that implies it was impactful.
  • Technical concerns: need for safe isolation (VMs, background profiles) and protection from possible exploits.

Ethics, Economics, and “Supporting Creators”

  • Strong sentiment that the ad-supported web has become predatory, especially for non-technical users.
  • Some users simply close or boycott ad-heavy sites rather than block, accepting lost content.
  • Others explicitly support creators via Patreon/memberships while blocking ads everywhere.
  • Debate over whether ad-funded content should simply disappear if it can’t survive without tracking-heavy ads.
  • YouTube creators’ mid-roll and integrated sponsor segments are viewed as unavoidable; SponsorBlock and similar tools are considered essential by many.

Usability, Breakage, and Effort

  • Reports of certain sites/apps breaking under aggressive blocking (Shopify apps, Netflix with Pi-hole, some finance/banking apps with VPN-based blockers).
  • Some see complex multi-layer setups (VPN + DNS + extensions + hosts) as overkill; others find them easy once “amortized” over time.
  • Host-file-only setups are mentioned as very low-maintenance; rebuttals note they miss many trackers and UI annoyances.
  • One commenter asks about tools to block AI-generated content akin to ad blockers; no clear solution emerges in the thread.

XKeyscore

Current NSA Capabilities vs. Pre-Snowden

  • One side argues the NSA’s collection capability is “greatly degraded”: most traffic is now encrypted, so they can no longer passively read vast amounts of content as they did pre-Snowden.
  • Opponents say that while content interception has changed, overall capabilities are still enormous: they can still “push a button” on specific people, and budget, mission, and authorities have not meaningfully shrunk.

Bulk Collection vs. Targeted Access

  • There is broad agreement that bulk, full-take content collection from backbone taps is far less useful now because TLS, E2EE, and encrypted metadata (e.g., via big platforms) are widespread.
  • Disagreement focuses on whether this is merely an inconvenience or a “massive loss” of a unique ability: keyword search over everyone’s plaintext content to discover new targets.

Encryption, CAs, and Cloudflare/Google

  • Several comments emphasize that modern encryption is not “magically broken” by NSA; attacks must target endpoints, keys, or intermediaries.
  • Certificate Transparency and key rotation are cited as reasons why large-scale MITM via bogus certificates (including hypothetical Let’s Encrypt compromise) would be noisy and quickly detectable.
  • Some speculate that US intermediaries like Cloudflare (terminating a large fraction of TLS) or big providers (Google, Microsoft, Apple) could be compelled or infiltrated, but others stress:
    • No known legal mechanism to demand “everything” from such companies.
    • Huge political and commercial risk for companies if such cooperation became known.

TAO, Zero-Days, and Circumventing Encryption

  • Many note that NSA’s Tailored Access Operations (and similar units) focus on endpoint compromise: zero-days, implants, hardware interception, OS-level backdoors, mobile spyware comparable to Pegasus, etc.
  • Consensus: targeted hacking of “almost anyone” is feasible; doing this at Internet scale without detection is not.

Metadata, AI, and “Store Now, Decrypt Later”

  • Metadata is repeatedly described as extremely valuable: who talks to whom, when, over what services, patterns of life, even with Tor/VPNs.
  • Some argue dragnet metadata plus ML/AI enables target discovery and selection without decrypting everything.
  • “Store now, decrypt later” with future quantum attacks is mentioned but treated as speculative; if that happens the whole landscape changes.

Domestic Use, Parallel Construction, and Cases

  • A side-thread discusses “parallel construction” in high-profile criminal cases, asserting that intelligence-derived leads are laundered into seemingly ordinary evidence.
  • Specific cases are floated, but others find them weak examples or note that DOJ policy on such use is not binding.

Aims and Target Sets

  • One perspective: NSA is primarily focused on foreign governments and terrorism, not random domestic users of Signal/Tails.
  • Counterpoint: if someone already associated with foreign threats is using such tools (even in the US), they become legitimate targets, and metadata is enough to flag them.

Second Leaker and Shadow Brokers

  • Some links argue XKeyscore details did not all come from Snowden and may instead be from a “second source,” possibly the same entity behind the Shadow Brokers leaks.
  • Others note this remains conjecture, albeit grounded in overlap of timeframes and internal NSA locations of the leaked materials.

Encryption, Obfuscation, and Net Neutrality

  • One branch advocates fully encrypted, obfuscated traffic (no cleartext SNI, app-pinned keys, Telegram/WeChat-style protocols) to frustrate surveillance and traffic discrimination.
  • A reply questions the net neutrality angle: hiding your traffic doesn’t stop ISPs from prioritizing traffic they can identify and favor; the effect would matter only if everyone encrypted/obfuscated similarly.

Classification and Wikipedia Editing

  • A meta-thread nitpicks Wikipedia’s use of “secret” vs. “classified,” noting that the program is reportedly Top Secret and that, technically, it is information rather than systems that gets classified.
  • Attempts to edit the article wording are blocked by automated anti-vandalism, prompting mild frustration.

Storage and Scaling

  • Past claims about “20 TB/day” XKeyscore intake are contrasted with modern hardware improvements and massive growth in global data volume.
  • Commenters assume NSA can store far more now, but likely faces a worse ratio of storable content to total global traffic, especially with so much of it encrypted.

Evidence from the One Laptop per Child program in rural Peru

Overall impact and interpretation of the study

  • Commenters highlight the core finding: strong gains in computer skills but no significant improvement in academic performance, with some evidence of worse grade progression.
  • Some view this as a partial success: digital skills are valuable for employability and national productivity, especially as computers and phones permeate daily life.
  • Others argue that this misses the program’s stated goals and that hoping “give computers → get better at everything” was always unrealistic without deeper pedagogical change.

Design, usability, and implementation issues

  • The Sugar interface is widely criticized as an experimental, heavy, Python-based GUI that ran poorly on weak hardware and broke with familiar desktop paradigms, creating a barrier for both users and potential developers.
  • Several argue that a standard lightweight Linux + common window manager would have enabled better performance and a larger ecosystem of existing software.
  • Lack of teacher training and limited or absent internet access are repeatedly cited as critical missing pieces; without content, guidance, or connectivity, many devices were “glorified calculators.”

Context, opportunity cost, and evidence-based policy

  • A strong thread emphasizes opportunity cost: tens of millions of dollars could have funded interventions with proven impact in similar settings (nutrition, school meals, early childhood programs, teacher development).
  • Advocates of evidence-based development contrast OLPC’s rollout with programs tested via randomized controlled trials and co-designed with local stakeholders.
  • Others defend OLPC as legitimate experimentation: failures generate knowledge, and earlier “effective” policies were also once untested.

Broader structural and ethical debates

  • Some attribute disappointing results to deep structural problems in rural Peru—malnutrition, illness, weak schools, lack of connectivity—arguing laptops alone cannot overcome those.
  • There is pushback against framing outcomes in terms of “cognitive ability” differences; this is called out as veering into racist explanations and ignoring program design flaws.

Legacy and indirect effects

  • Many note OLPC’s influence on low-cost laptops, netbooks, and Chromebooks, and on pushing the industry toward cheaper, smaller devices, especially in education.
  • Others downplay this, calling netbooks a fad and Chromebooks a niche, arguing that the real transformative device in developing countries has been the smartphone, not the OLPC laptop.

Estimates are difficult for developers and product owners

Why Software Estimates Are Hard

  • Many comments argue software work is inherently novel and complex, so past effort doesn’t transfer cleanly; “easy, repeatable” tasks tend to get automated away.
  • Unknown prerequisites, unclear constraints, and hidden code interactions often dominate effort and only surface mid‑implementation.
  • Time distributions are seen as heavy‑tailed/log‑normal: a “simple” task can blow up by orders of magnitude, not just 20–30%.

Estimates vs. Commitments

  • Developers report that “estimates” quickly become deadlines and self‑imposed promises; ranges are collapsed to single dates via a “telephone game” up the org chart.
  • Re‑planning is often treated as failure instead of learning, so people pad aggressively to protect themselves, which wastes time and erodes trust.
  • Some describe estimates as tools of control or “debt servitude” for PMs and sales, similar to sales forecasts.

Value of Estimates and Counterarguments

  • Others insist estimates are necessary for prioritization (is a feature worth 2 weeks vs 2 months?), coordination with marketing, sales, legal, and external commitments.
  • Comparison to other engineering fields: bridges, films, pharma all miss estimates too, but still estimate and buffer (contingencies, change orders).
  • A strong view: if software wants to be treated as a real engineering profession, it must be able to justify at least rough, order‑of‑magnitude estimates.

Methods and Heuristics

  • Techniques mentioned: Delphi/Delphi‑like group methods, three‑point/PERT, ROPE (realistic/optimistic/pessimistic/“equilibristic”), Monte Carlo forecasts, and the cone of uncertainty (a small three‑point/PERT sketch follows this list).
  • Agile techniques: planning poker with Fibonacci story points (complexity, not time), t‑shirt sizing, or coarse buckets (day/week/month/year).
  • Heuristics: multiply estimates by 2, π, or even 8; move to the next larger unit; always give ranges with confidence levels (P50/P90) instead of single numbers; and continuously update estimates as you learn.
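
A small sketch of the three-point/PERT arithmetic plus a crude Monte Carlo roll-up of a plan; the task numbers are invented:

```python
import random

def pert(optimistic, most_likely, pessimistic):
    """Three-point (PERT) estimate: weighted mean and a rough standard deviation."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    std = (pessimistic - optimistic) / 6
    return mean, std

print(pert(2, 5, 15))  # ≈ (6.2, 2.2) days for a single task

# Crude Monte Carlo over a small plan, using triangular draws as a stand-in
# for the heavy-tailed distributions discussed earlier; report P50/P90 rather
# than a single date.
tasks = [(2, 5, 15), (1, 3, 10), (4, 8, 30)]  # (optimistic, most likely, pessimistic)
totals = sorted(
    sum(random.triangular(low, high, mode) for low, mode, high in tasks)
    for _ in range(10_000)
)
print("P50 ≈", round(totals[5_000], 1), "days;  P90 ≈", round(totals[9_000], 1), "days")
```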

Process, Tools, and Culture

  • Strong support for Kanban/continuous delivery with rolling priorities and avoiding hard external dates; Scrum/SAFe are criticized as “Agilefall” when coupled to rigid roadmaps.
  • Several emphasize that accurate forecasting depends on historical data and stable processes (“evidence‑based scheduling”), but few orgs systematically collect and analyze that.
  • Jira and similar tools are seen as necessary for visibility by PMs but as “translation tax” by devs when they must constantly maintain tickets.
  • Broad agreement that the real lever is trust, frequent delivery, and honest communication about uncertainty; without that, any estimation scheme gets weaponized or ignored.

The C++ standard for the F-35 Fighter Jet [video]

JSF C++ Subset & Determinism

  • Thread centers on the F‑35 C++ rules: no exceptions, no recursion, and no dynamic allocation after initialization (especially not in inner loops).
  • Many note this is standard for hard real‑time and embedded systems: you must prove worst‑case timing, avoid fragmentation, and eliminate hidden blocking (e.g., allocator mutexes).
  • Aim is determinism: fixed stack bounds, static memory, predictable control flow.

Memory, RAII, STL & Exceptions

  • Debate over RAII: some equate it with heap use (e.g., std::vector), others stress RAII is about lifetime, not allocation; can be used with static/stack memory and pools.
  • With -fno-exceptions, large parts of the standard library are awkward but not entirely unusable: containers can still be used if you accept terminate on throw, or follow the “freestanding” subset.
  • Others stress that in such environments you typically avoid std containers/strings anyway, often using custom allocators, pools, or shared-memory/paged containers.

Recursion, Control Flow & Timing

  • Recursion is banned because stack usage must be statically bounded and analyzable; explicit loops with fixed limits are easier to reason about.
  • Discussion of tail calls and potential Rust “tail recursion” operators that would be compile‑time‑verified, but not available in C++.
  • Some argue early returns and exceptions complicate reasoning about cleanup; others say early returns often reduce complexity and that exceptions can be fast if designed correctly.

Coding Standards (MISRA, JSF, AUTOSAR) – Help or Hindrance?

  • JSF rules compared to MISRA and AUTOSAR; all seen as part of DO‑178C–style process rigor rather than guarantees of correctness.
  • Supporters: static analysis plus strict rules reduce certain defect classes and aid auditability.
  • Critics: many rules are cosmetic or counterproductive (e.g., no early returns, weird unused-variable idioms), and empirical studies show some MISRA rules correlate with more defects.
  • Consensus: standards must be tailored; “blind 100% compliance with no deviations” is viewed as a misunderstanding.

Autocode vs Hand‑Written Safety‑Critical Code

  • Split views on Simulink/Matlab autocode:
    • Pro: eliminates common human slip‑ups (off‑by‑one, missed checks), gives high‑fidelity implementations of validated models; for many control problems pass/fail vs tests is what matters.
    • Con: output can be “spaghetti”, resource‑heavy, and hard to reason about; when autocode is later hand‑modified, guarantees vanish and complexity explodes.
  • Disagreement over whether extra CPU/RAM to accommodate bloated autocode is acceptable or can force more complex system architectures.

Stacks, Heaps & Mission Assurance (Satellites, Avionics)

  • Some claim satellites/avionics avoid STL and dynamic memory to keep variables at fixed addresses, so bad cells can be patched around and ground debugging can use exact replicas.
  • Others with space‑flight experience push back: stack use is ubiquitous; heap is often allowed at init; some modern missions use full C++ STL (e.g., std::map) with exceptions.
  • General pattern: static allocation for core control loops, possibly bounded pools elsewhere; heap usage is constrained but not universally banned.

Alternative Languages: Ada, Rust, C(+), GLib

  • Ada comes up repeatedly as the “obvious” safety‑critical choice; history explained: Ada was mandated, then dropped partly over ecosystem/tooling and hiring issues.
  • Some argue DoD should have enforced Ada harder; others point to high‑profile Ada failures (e.g., Ariane 5) as proof language alone doesn’t guarantee safety.
  • Rust is suggested as allowing “100% of the language” under similar constraints; rebuttal notes that std/alloc and panics conflict with MISRA‑style rules; real safety profiles would restrict Rust too (e.g., no_std, no third‑party crates).
  • One long subthread describes how GLib uses compiler cleanup attributes to emulate RAII in C: g_autofree/g_autoptr/g_auto plus type‑specific cleanup functions achieve destructor‑like behavior without full C++.

Other Domains: Games, HFT, Web Backends

  • Game engines commonly ban exceptions, RTTI, dynamic allocation in hot paths, and sometimes smart pointers; practices resemble JSF constraints.
  • HFT traditionally avoided exceptions for latency, though there are niche designs using exceptions to avoid branches on rare error paths.
  • Some web and infrastructure developers also avoid post‑startup allocations for performance predictability, using custom allocators and pools.

Error Handling: Exceptions vs Error Codes

  • In safety‑critical systems, error codes (or result types) are favored: clearer control flow, easier static reasoning, and fewer unwinding concerns.
  • Others note research showing exceptions can outperform carefully checked error codes in complex scenarios, but the main objection is semantic: unwinding is hard to make robust in low‑level code.
  • Thread acknowledges that both exceptions and error codes can be mishandled; discipline and tooling matter more than mechanism.

F‑35 Program Quality & Ethics

  • Mixed views on F‑35 overall: widely criticized for cost and schedule overruns, but also widely described as the most capable fighter currently in mass production and heavily exported.
  • Some see its software process as a relative success amidst hardware/management issues; others question focusing on refining tools for systems that can be used in ethically troubling ways.
  • Ethical objections to discussing its software “like any other tech” are raised; counter‑arguments frame technology as neutral and place responsibility on policy rather than code.

I failed to recreate the 1996 Space Jam website with Claude

Web tech & the original Space Jam site

  • Several comments note the 1996 site actually used table-based layout, not CSS absolute positioning; early versions even used server-side image maps before moving to static tables.
  • People suggest prompting the model explicitly to use <table> layouts and 1990s-era techniques, though others argue only tables and CSS ever mattered in practice.
  • Some nostalgia and technical detail about 90s browser quirks (font metrics, gamma differences, nested tables, 1×1 spacer GIFs, sliced images, Dreamweaver/Photoshop workflows).

Why multimodal LLMs struggle here

  • Multiple commenters say current multimodal LLMs don’t “see pixels”: images are chopped into patches and embedded into a semantic vector space, destroying precise geometry.
  • Pixel-perfect tasks, exact coordinates, and spatial layouts (ASCII art, circles, game UIs) are repeatedly cited as consistent weak spots, even when models are strong at general coding.
  • Someone points out that models often parse 2D content poorly even as text.

Suggested better approaches

  • Strong theme: don’t one-shot. Use iterative, agentic workflows:
    • Have the model write image-processing tools (OpenCV, template matching) to locate assets and measure offsets.
    • Use Playwright or browser tooling to render, screenshot, diff against the target, and loop until tests pass (a minimal sketch follows this list).
    • Treat this as TDD: first write a test that compares rendered output to the screenshot, then have the model satisfy the test.
  • Several people report getting much closer or essentially perfect results with this tooling+feedback setup, though often with hacks (e.g., using the screenshot itself as a background).
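
A minimal sketch of that render-screenshot-diff loop, assuming the third-party playwright and Pillow packages; file names and the pass threshold are illustrative:

```python
from pathlib import Path

from PIL import Image, ImageChops                 # Pillow, assumed installed
from playwright.sync_api import sync_playwright   # plus: playwright install chromium

TARGET = Image.open("spacejam_1996.png").convert("RGB")  # reference screenshot

def render_and_diff(html_path: str) -> float:
    """Render the candidate page, screenshot it, return the mean per-pixel error."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": TARGET.width, "height": TARGET.height})
        page.goto(Path(html_path).resolve().as_uri())
        page.screenshot(path="candidate.png")
        browser.close()
    candidate = Image.open("candidate.png").convert("RGB").resize(TARGET.size)
    diff = ImageChops.difference(TARGET, candidate)
    pixels = list(diff.getdata())
    return sum(sum(px) for px in pixels) / (3 * len(pixels))  # 0 = identical, 255 = max

# The "test" an agent loops against: regenerate index.html until this passes.
assert render_and_diff("index.html") < 5.0, "not yet pixel-close to the 1996 page"
```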

Benchmark value & realism

  • Some see the task as contrived (“just download the HTML”); others note it mirrors real workflows where developers implement UIs from static mocks or screenshots.
  • Many say the exercise usefully maps the boundary: models are good at “make something like X” but bad at “recreate X exactly.”

Trust, overconfidence, and tool role

  • Commenters stress that LLMs are overconfident and their failure modes are opaque; juniors may not recognize subtle mistakes.
  • Debate over whether a tool that needs checking is “bad” or simply incomplete but still useful if it does 80–90% of the work.
  • Several frame LLMs as cheap, fallible interns that require supervision and external verification rather than as autonomous programmers.

What the heck is going on at Apple?

Scope of the Shakeup

  • Some see the CNN framing as overblown: several departures are retirements or obvious promotions, not a crisis.
  • Others argue this is an unusually large and cross‑functional exodus for historically stable Apple leadership (AI, design, hardware, legal, policy, operations, CFO), and therefore legitimately newsworthy.
  • There’s concern that Apple is losing not just aging executives but also “rising stars” in AI and search to Meta and others.

Alan Dye, Design, and “Liquid Glass”

  • Commenters are overwhelmingly hostile to Dye’s tenure.
  • He’s blamed for a decade of regressions in Apple UI: illegible, over‑cosmetic design, and especially the “Liquid Glass” look in iOS/macOS 26, perceived as buggy, battery‑draining, and hard to read.
  • Multiple anecdotes claim Apple designers and users were relieved he left; people hope this allows “real HCI people” to regain influence.
  • His move to Meta is widely described as a net positive for Apple and a risk for Meta’s usability.

AI Strategy: Crisis or Smart Caution?

  • Split view:
    • One camp thinks Apple’s AI efforts (Siri, Apple Intelligence) are embarrassingly weak, and continued talent loss in AI could be existential if AI becomes central to devices.
    • Another argues Apple doesn’t need to “pivot to AI,” can safely integrate third‑party models, and benefits by not shoving AI into everything like Microsoft and Google.
  • Several note growing user backlash to AI‑everywhere UX; Apple’s slower, more optional approach is seen by some as a feature, not a bug.

Tim Cook, Succession, and Internal Culture

  • Speculation that large moves reflect pre‑Cook‑retirement house‑cleaning or succession drama (who didn’t get the “crown”). Others think it’s simply age‑driven turnover plus internal frustration.
  • Many feel Cook is a world‑class operator and “accountant,” but not a product visionary; Apple looks conservative, slow, and increasingly driven by Wall Street.
  • There’s anxiety over rumors the chip chief might leave; that is seen as the only truly alarming potential loss.

Product Quality and Direction

  • Consensus: hardware (especially Apple Silicon) remains stellar; software and UX have deteriorated.
  • macOS/iOS 26, Liquid Glass, Siri stagnation, and the muddled role of iPad are frequent complaints.
  • Some think this shakeup is exactly what critics have asked for—a reset of a drifting, design‑ and AI‑confused Apple—while others worry it signals deeper rot reminiscent of pre‑Jobs‑return 1990s Apple.

The AI wildfire is coming. It's going to be painful and healthy

Reality of the “AI Wildfire” / Bubble

  • Several commenters reject the article’s claim that “every promising engineer” is being chased by AI startups; they see few serious offers, mostly from firms trying to automate them away.
  • Many don’t see a classic dot‑com style bubble of tiny, overvalued AI firms; instead, they see a market dominated by a handful of giants with huge but real spend.
  • Others point to “shovelware” apps (LLM-wrapped language tools, productivity hacks) as today’s Pets.com: low-effort grifts using API access, some VC-backed but economically trivial when they vanish.
  • Wildfire metaphor is widely criticized as overwrought, ecologically inaccurate, and nihilistic: real wildfires can destroy ecosystems, not “cleanse underbrush.”

Business Value vs Hype

  • Some report concrete productivity gains (e.g., LLMs writing most of their code, >2× output) and argue providers could credibly charge a significant fraction of developer salaries.
  • Others see mainly FOMO-driven “AI for AI’s sake”: executives demanding AI features regardless of quality, usage, or ROI; AI search and support often worse than what they replaced.
  • There’s disagreement over whether current AI already delivers “measurable and immediate” returns; skeptics say layoffs are often just cost-cutting with AI as pretext, and benefits are hard to quantify.
  • Debate over trajectory: one side expects continued improvements and new use cases; the other sees slowing model progress and no guaranteed path to “tremendous business value.”

Infrastructure, Concentration, and Compute

  • This cycle is seen as different from prior tech booms because of massive capex in GPUs and datacenters; VC “high risk” money is now a large share of the real economy.
  • Some argue even a mass startup wipeout would be a rounding error compared to the entrenched giants (clouds, model labs, chipmakers), so there’s no true “cleansing fire.”
  • Nvidia’s role is debated: critics expect large customers to move to ASICs; defenders say Nvidia is already effectively an ML-ASIC company with a huge CUDA moat, likening it to Cisco post‑dot‑com.
  • Compute and energy are viewed as long-lived assets; many expect any downturn in AI demand to be temporary, with cheaper compute enabling new waves of usage.

Labor, Inequality, and Everyday Experience

  • Examples are cited of AI reducing staffing needs (receptionists, tier‑1 support, translation, data entry), with active efforts to cut headcount in large organizations.
  • Others stress historical patterns where productivity tech didn’t simply produce mass unemployment, but acknowledge today’s low-wage workers have little buffer.
  • Office workers note decades of efficiency gains without proportional sharing of value; many describe current AI work (slap-on features, “AI foistware”) as pointless from a user perspective.
  • Tech workers discuss coping strategies: ride the AI wave for résumé value vs. aggressively saving for early retirement and expecting layoffs in a boom–bust cycle.
  • Broader concerns include erosion of the “old internet,” lock‑in to heavily moderated platforms, AI-generated slop and astroturfing, and a general sense that user experience has worsened from Web 1.0 through social/mobile to AI.

He set out to walk around the world. After 27 years, his quest is nearly over

Nature of the journey & continuity

  • Commenters clarify the walk is “continuous” in route, not in time: he frequently pauses for months or years, then returns to the exact stopping point.
  • Some see this as standard “section hiking” and still impressive; others feel that for a decades‑long, sponsored “around the world” quest it’s less epic than imagined.
  • Distance and pace analysis shows his original 8‑year estimate assumed ~20 km/day every day; reality over 27 years is closer to ~6 km/day when breaks and setbacks are included.

Visas, borders, and legality

  • Many threads focus on how modern visa regimes make uninterrupted long‑term overland travel nearly impossible.
  • Schengen’s 90/180‑day rule, lack of an EU‑wide long‑stay travel visa, and post‑Brexit status for UK citizens are dissected in detail; consensus is that bureaucratic constraints largely force his breaks.
  • His detention for crossing into Russia at a non‑official point divides opinion: some call the arrest foreseeable and justified; others see it as rigid bureaucracy unable to handle edge‑case adventurers.
  • This spirals into a long philosophical debate on borders: are strict territorial controls “natural” (parallels to animals, immune systems, IT firewalls) or a relatively recent, often harmful political construct?

Ethics of the quest: admiration vs abandonment

  • Many admire his persistence through extreme environments and state harassment; some compare him to other long‑distance adventurers and polar rowers.
  • A substantial subthread reacts strongly to reports that he effectively abandoned a very young child for this project.
  • Some label it selfish and unforgivable, arguing we shouldn’t celebrate feats built on family neglect; others point to complicating factors (marital breakdown, relocation of the child, military constraints) and a later partial reconciliation.

Human nature, media, and travel

  • His observation that almost everyone he met was kind resonates with many long‑term travelers, who echo that everyday in‑person interactions are far better than online discourse or news suggests.
  • Others counter with experiences of frequent scams, theft, and refusals of help, arguing that travel also exposes a “worst of humanity” side.
  • Several note how modern social media amplifies negativity and outrage, while the physical world of ordinary people remains mostly decent.

Adventure culture and commercialization

  • Commenters share numerous examples of global walkers, cyclists, bikers, unicyclists, and tuk‑tuk travelers.
  • There’s disagreement over YouTube‑driven adventure: some say turning journeys into content undermines authenticity and local connection; others argue online communities can be motivating and supportive, not necessarily corrosive.

Scala 3 slowed us down?

Performance testing & profiling on the JVM

  • Many commenters say major language upgrades demand automated performance tests, flamegraphs, and tooling like JMH, async-profiler, JFR, and Java Mission Control.
  • Some teams run continuous or nightly benchmarks comparing two versions side-by-side, analyzing CPU, GC metrics, allocation rate, and kernel-level counters.
  • There is concern about noisy neighbors and VM variability; approaches include fixed hardware, concurrent version runs, hardware performance counters, and warm-up phases.

Root cause: Scala 3, inlining, and macros

  • Several explain that in Scala 3, inline is part of the metaprogramming/macro system and forces inlining, whereas Scala 2’s @inline was only a hint to the compiler.
  • Blindly converting @inline to inline can generate huge expressions, overloading the JIT and causing pauses and slowdowns.
  • Clarification: macros are compile-time; the problem is JIT cost on large generated expressions, not runtime codegen per se.

Dependencies and upgrades

  • Strong agreement that when upgrading language major versions, libraries must be upgraded too; old transitive deps can hide subtle perf bugs.
  • Some are puzzled that old libraries were still present, but others point out this is normal when version ranges are pinned or transitive.
  • One camp insists “keep libraries updated” is best practice; another argues frequent updates introduce new bugs and risk, so change should be minimized and isolated.

Scala 3 syntax and tooling

  • The optional indentation-based, brace-less syntax draws criticism: seen as unnecessary “Python envy” and a distraction that complicates tooling and learning.
  • Others argue it’s optional, closer to ML/Haskell styles, and can be auto-rewritten by compiler/scalafmt; projects can standardize on either style.
  • Tooling quality is contentious: some report Scala 3 IDE support (e.g., via LSP/Metals) as better than Scala 2, others say it’s still a downgrade from IntelliJ Scala 2 and some IDEs remain unreliable.

Scala vs Java vs Kotlin & ecosystem health

  • One view: Scala missed its Spark-era window, is now an “academic curiosity,” and Kotlin/modern Java have taken over industry mindshare.
  • Counterview: Scala is widely used at large companies, has powerful type systems and features still unmatched by Java/Kotlin, and remains very expressive and performant.
  • Opinions diverge sharply on Scala’s governance: some say Scala 3 changes ignored real pain points (compile times, tooling) and fatigued users; others argue Scala 3 finally regularizes type inference and fixes deep design issues.
  • Broader debate branches into Kotlin’s role (strong on Android, mixed adoption server-side), long-term maintainability, hiring costs, and Java’s evolving functional features.

High-level languages and predictable performance

  • A few argue that high-level languages with aggressive optimizers (Scala, Haskell, etc.) make long-term performance predictability hard: small changes can cause opaque regressions.
  • Others respond that JVM languages remain far faster than many other “high-level” languages and that this single bug is not evidence Scala is failing.

Dollar stores overcharge customers while promising low prices

Regulation, Enforcement, and Fines

  • Many see the core problem as weak, under‑resourced regulation rather than lack of laws: NC’s $5k/inspection cap is viewed as a “cost of doing business,” especially with rare inspections.
  • Others argue this is regulatory capture if industry lobbying kept penalties low or weakened them over time.
  • Suggested fixes:
    • Escalating fines for repeat violations, potentially up to % of revenue/profit.
    • Treating systemic mismatches as fraud with possible criminal liability for executives.
    • “Bounty hunter” / qui tam models where customers or employees share in penalties.
    • Aggressive inspection strategies (multiple inspections per day, or closing stores that exceed error thresholds).

Legal Status of Shelf Prices

  • Long subthread on “invitation to treat” vs binding offer:
    • In common‑law theory, shelf displays invite the customer to make an offer; the contract is formed at checkout.
    • Several commenters note many US states effectively treat the displayed price as binding in practice, especially when systematic, not one‑off, discrepancies occur.
  • Debate over whether “mistakes” (old tags, misprints) should excuse retailers; some say occasional errors are inevitable, others say that if you put up a number, you should be legally bound to it.

Customer Experience and Power Imbalance

  • Practically, catching overcharges requires time, vigilance, confrontation with staff, and often long waits for a manager—costly for low‑income, time‑poor shoppers.
  • Social pressure (holding up a line, fear of conflict, being labeled “difficult”) further suppresses complaints.
  • Some report smooth corrections and even free items; others report being yelled at or refused adjustments.

Economics of Dollar Stores: Convenience vs Exploitation

  • Two competing framings:
    • Convenience: they are often the only or closest store in rural and low‑income areas; travel cost and time can easily outweigh a few cents per item.
    • Exploitation: per‑unit prices are often far higher than supermarkets; small package sizes plus cash‑flow constraints mean poor shoppers pay more over time (“Boots theory” of poverty).
  • Disagreement over whether dollar stores are killing local grocers or simply filling already‑underserved markets; some cite studies showing rural grocers closing after dollar stores arrive, others blame grocers’ product mix or management.

Technology & Process Proposals

  • E‑ink shelf labels and store apps that keep shelf and register prices in sync are seen as a likely future, though commenters raise concerns about dynamic pricing and the difficulty of proving discrepancies.
  • Some argue this is mostly understaffing and bad internal processes (one clerk doing everything), not inherently “impossible” to fix.

Private Equity and Corporate Incentives

  • Strong thread blaming private equity and financialization: “slash staff, squeeze margin, treat fines as a line item,” especially in essential services.
  • Counter‑arguments note that low reported margins and weak returns in retail suggest shareholders are not obviously over‑rewarded; the deeper issue may be market structure and lack of competition.

Comparisons and Norms Elsewhere

  • Multiple examples of stricter regimes:
    • States (e.g. MA, MI) where overcharges must be refunded plus a bonus/free item.
    • Policies where mispriced items are free or heavily discounted, creating strong incentives to fix errors.
    • Australian/UK approaches where the lowest displayed price must be honored and regulators are more aggressive.
  • Many conclude US practice tolerates too much “predation” and relies on individual shoppers to police behavior that regulators and courts should be addressing structurally.

The state of Schleswig-Holstein is consistently relying on open source

Motivation: Sovereignty and Security

  • Many argue governments should move off Microsoft mainly for digital sovereignty, not cost: fear of sanctions, espionage, and political pressure via cloud services (Exchange, M365, account-based logins).
  • Examples raised include Microsoft cutting off ICC email and broader US surveillance practices; some see the US as an unreliable or even hostile actor.
  • Open source is viewed as reducing structural dependency: states can audit, patch, and self-host instead of relying on opaque US infrastructure.

Practical Migration Challenges

  • Bureaucratic culture and change-aversion are seen as bigger blockers than technology: slow internal processes, compliance constraints, and lack of in-house engineering capacity.
  • Concerns about rushed rollouts, poor UX research, and inadequate user training; some report frustrations with email/calendar migration (e.g. Outlook → Open-Xchange).
  • Others counter that “training” is routinely ignored when switching between proprietary products; resistance appears only when “open source” is mentioned.

Office/Excel Lock‑in and Alternatives

  • Excel is widely acknowledged as the hardest piece to replace (performance, advanced formulas, VBA, deep integration with workflows).
  • Debate over whether LibreOffice/Calc (or OnlyOffice, Collabora, etc.) are “good enough” for most users, with agreement that edge cases (complex workbooks, legal track-changes, Outlook/Exchange workflows) are costly to migrate.
  • Several suggest keeping a small MS footprint for irreducible legacy use, while moving the majority to OSS.

Linux Desktop & Enterprise Management

  • Skeptics highlight immature tooling versus AD/Group Policy/Intune, weaker EDR/DLP ecosystems, and compliance expectations; fear Linux desktop success stories omit 10+ years of TCO data.
  • Others argue Linux is administrator‑friendly by design (immutable system areas, central package repos) and that heavy endpoint tooling is partly a Windows problem.
  • There is a long subthread on whether EDR/AV is essential “defense in depth” or an unnecessary rootkit-like attack surface.

Open Source Governance, Control, and Funding

  • Worry that state-funded OSS could be steered into backdoors or surveillance; counterpoints stress forking, transparency, and existing review processes (e.g. xz backdoor discovery).
  • Strong sentiment that cost-savings should be partially reinvested into upstream projects or local developers; Schleswig-Holstein’s “upstream-only” strategy and German FOSS funding programs are cited positively.
  • Some warn that framing OSS purely as a cheap replacement risks Munich-style reversals under future lobbying and political shifts.

Over fifty new hallucinations in ICLR 2026 submissions

Legal and ethical framing

  • Several comments argue that using LLMs to submit papers with fake citations is straightforward negligence or fraud; once liability attaches (e.g., in law or medicine), many expect AI enthusiasm to cool and institutional bans to follow.
  • Others stress that negligence in law is about failing “reasonable care,” not strict liability; the emotional backlash against AI is seen by some as irrational.

“Hallucination” vs fabrication and pre‑AI baselines

  • Many dislike the term “hallucination,” preferring “fabrication,” “lies,” or “confabulation,” emphasizing that humans are still responsible.
  • Multiple commenters note citation errors and even fabricated references long predate LLMs; they argue we need a baseline: run the same analysis on pre‑LLM papers and compare error rates.
  • Counterpoint: LLMs are a “force multiplier” for both fraud and accidental nonsense—able to churn out plausible but nonexistent papers, quotes, and references at huge scale.

Peer review, tooling, and academic incentives

  • 20,000 submissions to one conference are seen as a symptom of publish‑or‑perish culture, conference‑centric CS, and citation metrics being used as KPIs.
  • Reviewers say they do not and realistically cannot verify every citation; their job is to assess novelty, soundness, and relevance under tight deadlines and with no pay.
  • Others argue that if reviewers don’t check citations at all, peer review is a weak quality gate and partly responsible for the mess.
  • Several propose automated citation “linters” at submission time, DOI/BibTeX checks, and even LLM‑based tools to flag unsupported claims—though people worry about LLMs hallucinating during checking too (a rough DOI-check sketch follows this list).
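
As a sketch of what such a citation linter might do (an assumption for illustration, not a tool anyone in the thread built), the following Scala snippet checks whether each DOI resolves at doi.org using the JDK's HttpClient; real submission tooling would also parse the BibTeX and cross-check titles and authors against the resolved records.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

// Hypothetical "citation linter" fragment: flag DOIs that doi.org does not recognize.
object DoiCheck {
  private val client = HttpClient.newHttpClient()

  // doi.org answers a known DOI with a redirect (3xx) and an unknown one with 404,
  // so any status below 400 is treated here as "resolves".
  def doiResolves(doi: String): Boolean = {
    val request = HttpRequest
      .newBuilder(URI.create(s"https://doi.org/$doi"))
      .method("HEAD", HttpRequest.BodyPublishers.noBody())
      .build()
    val status = client.send(request, HttpResponse.BodyHandlers.discarding()).statusCode()
    status < 400
  }

  def main(args: Array[String]): Unit =
    args.foreach(doi => println(s"$doi -> ${if (doiResolves(doi)) "resolves" else "NOT FOUND"}"))
}
```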

Responsibility, regulation, and blame

  • Big split between “bad tool vs bad craftsman” analogies: some say AI is just a power tool and shoddy outputs indict only the user; others point out that widely deploying an unreliable tool predictably increases slop and externalities.
  • Many want strong sanctions: desk rejection plus blacklists, institutional censure, or “walls of shame” for proven fabrications, regardless of whether AI is invoked as an excuse.
  • Others emphasize systemic pressures: metric‑driven academia, management mandates to use AI, and vendors overselling capabilities while disclaiming responsibility.

Impact on science and trust; appropriate AI use

  • Widespread fear that AI‑generated “slop” (papers, reviews, detectors) will worsen the replication crisis and erode already fragile trust in science.
  • Some see LLMs as useful for narrow tasks—finding candidate papers, editing, or fuzz‑testing arguments—but regard using them to write or pad research papers without full human verification as incompatible with serious scholarship.