Hacker News, Distilled

AI powered summaries for selected HN discussions.

Page 245 of 358

A new law in Sweden makes it illegal to buy custom adult content

Scope and Mechanics of the New Law

  • Extends Sweden’s existing ban on buying sex services to certain online sexual services.
  • Key change: paying to “influence” the content of custom photos/videos is equated with paying for a sexual act.
  • “Normal” studio porn remains legal; subscriptions to creators (e.g. OnlyFans) without custom content remain legal.
  • Audio and text-based sexual services (chats, phone sex) are explicitly excluded.
  • Buyers and possibly site operators are criminalized; sellers are not.

Official Rationale vs. Critics’ View

  • Authorities frame this as addressing power imbalances and coercion in “digital prostitution,” arguing that online sex-for-pay can be as harmful as physical prostitution.
  • They stress that lack of physical contact does not change the core problem of vulnerable people being induced into sexual acts for money.
  • Critics argue this logic would apply to many labor markets with unequal power, and see sex work as being singled out.

Impact on Sex Workers and Platforms

  • Several comments describe OnlyFans-style work as relatively safer: physical distance, anonymity, creator control over limits.
  • The law is said to have already led OnlyFans to disable DMs for Swedish creators, gutting the main income stream from custom content.
  • Concern that workers will be pushed to shadier sites or in-person work, becoming more vulnerable and less protected.

Consent, Coercion, and Nature of Sex Work

  • Disagreement over whether most sex workers are “victims” or autonomous entrepreneurs; some note many successful online creators clearly do not see themselves as coerced.
  • Others argue sex is uniquely intimate and psychologically risky, making economic coercion in this domain especially harmful; counter-voices question whether sex is really so different from other hazardous or degrading jobs.
  • There is broader criticism of the “Nordic model”: making purchase illegal while sale is legal is seen by many as ideologically driven and counterproductive.

Cultural and Political Context

  • Several comments link this to Scandinavian feminist and collectivist traditions that view prostitution as incompatible with gender equality and heavily tied to trafficking.
  • Others, often from an Anglo-American lens, see it as paternalistic, sex-negative, and a denial of adult agency.
  • Some describe Sweden as increasingly governed by dogmatic, lobby-driven policy.

Edge Cases, AI, and Workarounds

  • Questions raised about:
    • Whether AI-generated or interactive AI porn would be covered, since the law doesn’t clearly require a human performer.
    • Whether Swedish creators can still sell custom content to foreign buyers without legal risk to themselves.
    • Whether data-driven “matching engines” that anticipate demand (without explicit custom orders) would be considered “influencing” content.
  • These points are left largely unresolved and labeled as unclear in the discussion.

How to not pay your taxes legally, apparently

Scope and Practicality of the Article’s Advice

  • Many point out the article is specifically about avoiding tax on exits (e.g., using QSBS), not “never paying taxes.”
  • Several note it only applies if you first create something worth many millions; that’s “step 0” and is non‑trivial outside of VC fantasy scenarios.
  • QSBS details and caveats:
    • Works for C‑corps, not S‑corps; state treatment varies.
    • Acquirers often prefer asset purchases to avoid liabilities, which can break QSBS benefits.
    • The real benefit is exclusion of up to $10M in capital gains, not $10M of tax.
    • Five‑year holding is hard to game legally; “creative options” are disputed.
  • Some warn following aggressive schemes is a good way to get audited; “LLC is not a tax entity” is reiterated.

Who Can Actually Avoid Taxes

  • Repeated theme: serious tax optimization is mostly available to the already wealthy—those who can pay top firms and set up complex structures or move jurisdictions.
  • Counterpoint: forming an LLC and using small‑business incentives is accessible and encouraged by many governments, so “little guys” can do some optimization.
  • But many “loopholes” only make economic sense above high income/wealth thresholds.

Morality vs Legality

  • One camp: nothing wrong with legally minimizing taxes; if the state wants money, it should write airtight, simple laws.
  • Another camp: legality and morality don’t fully overlap; exploiting intentional or accidental gaps shifts the burden to lower‑earners and undermines social trust.
  • Debate over whether paying taxes is itself moral when governments also fund wars or policies some consider immoral.
  • Some frame taxes as a “defector game”: free‑riding via avoidance invites backlash and political instability.

Loopholes, “Bugs,” and Policy Design

  • Disagreement over whether loopholes are “bugs” (unintended) or “backdoors” (deliberate favors). Likely both exist.
  • Complexity of the code is likened to complex software: impossible to make bug‑free, heavily tested “in production.”
  • Many exemptions began as policy tools (to encourage investment, avoid double taxation), but function as opaque subsidies to the rich.
  • Suggestions include radically simpler systems with few or no deductions, even removing charity exemptions; others argue this is politically impossible.

Inequality and Political Power

  • Widespread belief that wealthy individuals and corporations lobby for and shape these exemptions, then use political donations to prevent reform.
  • Perception that ultra‑rich exploit global arbitrage (Monaco, Portugal NHR, Puerto Rico, etc.), while ordinary workers pay “sticker price.”
  • Several note US citizenship‑based taxation is unusually sticky, making true escape costly (renunciation, exit taxes, potential penalties).

Government, IRS, and Enforcement

  • Strongly mixed views on the IRS: from “honest backstop” to “dishonest grifters” based on personal horror stories and long disputes.
  • Some argue enforcement focuses on easy targets while those with elite advisors can push the envelope.
  • Others emphasize the IRS does punish blatant schemes (e.g., aggressive deduction shells) and that some popular “just deduct everything” ideas are clearly unsafe.

Meta: What Counts as a ‘Loophole’?

  • Observation: commenters call exemptions they dislike “loopholes” and ones they support “incentives.”
  • No agreed objective standard emerges for distinguishing a fair incentive from an illegitimate loophole.

macOS Icon History

Perceived Peak and Overall Direction

  • Many see macOS/iOS visual design peaking around 2012–2014, both in icons and hardware; newer styles are viewed as an incremental improvement over 2020–2024, but a historical regression.
  • Others argue current M‑series laptops are the hardware peak, citing performance, efficiency, and manufacturing quality, though some feel they lack the “clever” or delightful touches of older machines.

Usability and Clarity of Icons

  • Strong sentiment that icons should be quickly recognizable; several recent designs are called confusing or indistinguishable at a glance:
    • Game Center’s colored bubbles, Reminders’ dots, and modern Notes vs Calendar are cited as ambiguous.
    • Some say the new icons look blurred/low contrast, undermining the point of high‑DPI “retina” displays.
  • Others defend more detailed, skeuomorphic icons (e.g., old trash can, Photo Booth, Preview) as immediately legible and familiar.

Skeuomorphism vs Flat and “Squircle” Design

  • Debate over whether “good” icons must be extremely simplified; critics of photorealism say it’s against icon design principles, supporters say detail helps recognition and there’s no technical reason to avoid it now.
  • Nostalgia for whimsical, expressive icons (OS X era, third‑party apps, CandyBar customizations) versus complaints that modern sets are bland, samey, and “corporate.”
  • Discussion of rounded rectangles / squircles as a homogenizing trend across tech and other industries, seen by some as a broader “modernist minimalism” malaise.

Why Rounded Squares Everywhere?

  • One view: the uniform “app button” shape helps users recognize something as an app/action, especially in 3D environments like visionOS, where icons must stand out from arbitrary 3D objects.
  • Upsides noted: predictable hit areas for pointing devices, consistent visual canvas, platform control over visual narrative.
  • Downsides: loss of character, reduced differentiation, and a sense of design-by-committee.

Hardware, Ecosystem, and Values

  • Praise for M‑series SoCs and Rosetta transition; counterarguments that competitors are close in performance and that Apple’s TSMC capacity deals and closed, non‑repairable hardware are user‑hostile.
  • Some prioritize right‑to‑repair and modular machines (e.g., Framework), accepting trade‑offs in build quality and battery life; others prefer Apple’s integrated approach as best overall value.

Specific Icons, Omissions, and Resources

  • Frequently praised older icons: 2012–2014 Game Center and Notes, 2014–2020 Calculator, early System Preferences.
  • Mixed views on 2025 icons for Photo Booth and Podcasts; some find Photo Booth’s evolution especially sad.
  • Requests for histories of iTunes, Safari, Xcode, and OS 9/NeXT/Rhapsody-era icons; a few links to external icon galleries and a screensaver showcasing classic Aqua icons.

'Positive review only': Researchers hide AI prompts in papers

Prompt injection, agents, and security concerns

  • Several commenters treat prompt injection as a fundamental architectural flaw, likening the current situation to pre–SQL-escaping days while people are already stacking “agents” on top.
  • Others argue we shouldn’t fix prompt injection so much as avoid relying on AI for serious tasks at all.
  • There’s anxiety about agents with shell access (“rm -rf” jokes, “yolo mode” agents provisioning infra) and recognition that this is no longer hypothetical. Some suggest sandboxing via VMs and backups.
  • A minority notes that models have become less gullible, and that “prompt engineering” has shifted from magic incantations to giving realistic context and goals.

Hidden prompts in papers: protest, honeypot, or fraud?

  • Core issue: authors embedding invisible instructions in manuscripts to force “positive review only” when run through LLMs.
  • Some see this as clever protest or a honeypot to expose prohibited AI use by “lazy reviewers,” analogous to Van Halen’s brown M&Ms or exam instruction traps.
  • Others consider it academic misconduct akin to biasing or bribing reviewers, arguing it unfairly advantages some submissions and should trigger formal sanctions.
  • Middle-ground view: purely diagnostic watermarks (“mention a cow,” or neutral tokens) are acceptable; anything that steers sentiment crosses an ethical line.

LLMs in peer review: capabilities and limits

  • Many commenters insist LLM-only peer review is unethical and epistemically unsound: models can’t truly assess novel findings, only echo corpus patterns.
  • Others note practical benefits: grammar/style fixes, spotting inconsistencies, checking policy compliance, or surfacing issues humans then verify.
  • Conferences often ban sharing submissions with external LLMs to prevent leaks into training data; local, air‑gapped models are discussed but policy applicability is unclear.
  • Reports from ML venues suggest AI-written reviews are already common, often over-focusing on self-declared “Limitations” sections.

Incentives and dysfunction in academic publishing

  • Several argue the peer-review system is overloaded and misused as a career gate, so low-effort and AI-driven reviews are predictable.
  • Journals are criticized as profiteering intermediaries: authors and reviewers are mostly unpaid while APCs and subscription fees are high. Others counter that editorial coordination, hosting, and copyediting are nontrivial work, though even supporters concede the prestige value (trust in rigorous review) is the main product.

AI beyond academia: hiring and detection

  • Parallel idea: job applicants hiding prompts in resumes to game AI screeners; experienced recruiters say such “resume hacks” are a negative human signal.
  • Cited research and anecdotal efforts embed invisible prompts or watermarks in text to statistically detect LLM-generated reviews; accuracy is above chance but far from perfect.

Local-first software (2019)

Relationship to Capitalism, SaaS, and Business Models

  • Many see local-first as aligned with privacy, autonomy, and “software as infrastructure,” and as a reaction against subscription-driven “enshittification” and cloud lock‑in.
  • Others argue the real problem is rent‑seeking and conflict‑of‑interest business models, not capitalism per se; some counter that this “corrupted capitalism” is just capitalism in practice.
  • SaaS is framed as primarily a pricing/financialization model: predictable recurring revenue, higher valuations, easier approvals than large one‑time purchases, and psychological anchoring on low monthly prices.
  • A recurring theme: there is no well‑established, equally attractive business model for local‑first. Proposed alternatives include: paid native apps with optional sync services; time‑limited update licenses; co‑ops with user governance; sysadmin‑as‑a‑service for self‑hosted boxes; and more radical ideas like UBI. Skepticism remains about their scalability.

Cloud vs Local: Trade‑offs and Governance

  • Critiques of cloud/SaaS: lock‑in, surveillance, opaque business incentives, cloud as DRM, subscriptions for basic functionality, online dependencies that make products brittle (including games and appliances).
  • Defenders note genuine benefits: multi‑device sync, collaboration, advanced features (e.g., data platforms), and the fact that many users want turnkey hosted services, not self‑administered systems.
  • Some propose tackling cloud harms via contracts and standards (EOL guarantees, data portability, open/documented formats, audited access logs, escrow), rather than only technical decentralization. Others doubt enforceability and point out practical migration costs.

Technical & UX Challenges of Local‑First

  • Core difficulty: the sync layer—conflict resolution, schema migration, multi‑device and sometimes P2P under NAT.
  • CRDT-based approaches (Automerge, Yjs, Loro, Ditto, etc.) are praised for enabling offline collaboration but criticized for: complex semantics for real‑world conflicts, poor server‑side querying, and still‑necessary “manual merge” UIs.
  • Opinions differ on maturity of third‑party sync engines (ElectricSQL, Ditto, InstantDB, Couch/Pouch, RxDB, etc.). Some report production success; others find docs and tooling immature.
  • Several builders stress how much harder fully offline‑first is compared to “offline‑tolerant” apps, and how self‑hosting and P2P (TURN, NAT traversal) add operational burden.
  • Web platform limits (same‑origin, service worker update model) are seen as a barrier to truly “download once, run forever” browser apps, pushing some toward desktop shells (Electron, Tauri).

AI, Local Compute, and Future Trajectory

  • There is excitement about local LLMs and offline AI workspaces as a natural fit for local‑first ideals, but also concern that heavy AI workloads will further entrench cloud‑only services.
  • Some expect hardware and models to catch up; others think generative AI will remain cloud‑centric for a long time, making many new applications structurally non‑local.

Practitioner Experiences and Patterns

  • Many examples of local‑first or self‑host‑friendly tools (notes, finance, bookmarks, audiobooks, SCADA analytics) with optional sync via commodity storage (Dropbox, WebDAV, iCloud, etc.).
  • A common pattern: free or cheap local client; optional paid sync or cloud convenience. Builders emphasize user control over data formats (plain files, SQLite) and the ability to “eject” to self‑hosted or file‑based workflows.
  • Some commenters remain skeptical that local‑first can become mainstream beyond motivated, technical users, given usability expectations and current economic incentives.

What 'Project Hail Mary' teaches us about the PlanetScale vs. Neon debate

PlanetScale vs. Neon: Use Cases and Trade-offs

  • Both are framed as good but different tools, optimized for distinct workloads rather than direct substitutes.
  • Rough heuristic from the thread:
    • PlanetScale: better fit for predictable, steady load where you provision fixed CPU/RAM and accept idle capacity.
    • Neon: better fit for spiky/variable workloads; you pay for compute hours and get autoscaling and scale-to-zero.
  • One commenter points out a caveat: Neon’s “active time” billing keeps compute on for ~300 seconds after each request, so a steady trickle of traffic incurs 24/7 billing anyway.

Compute–Storage Separation vs. Single-Box Latency

  • A database reliability engineer argues strongly against separating compute and storage (e.g., Aurora-style):
    • Writes must traverse the network and be replicated across multiple storage nodes/AZs, adding ~1ms+ each time, which is large vs. local SSD.
    • Aurora MySQL loses MySQL’s change buffer optimization, forcing synchronous secondary index updates and worsening write latency.
    • Combined with common app‑side issues (JSON-everywhere, poor normalization, bad queries), this can lead to “disastrous” performance.
  • They praise some Aurora features (survivable page cache, low replication lag) but downplay the value of autoscaling for predictable peaks.
  • Another commenter suggests the latency hit is accepted mainly to gain scalability, durability, and flexibility.

RDS vs. Newer Managed Offerings

  • One user running multi‑TB Postgres on RDS 24/7 says it “seems fine” and asks why switch.
  • Reply: RDS works, but AWS pricing is seen as increasingly “greedy”; PlanetScale/Neon may cut costs and improve DX, especially now that PlanetScale offers Postgres.

Branching, Dev Experience, and Security

  • Neon’s instant branching is highly praised: easy to give every dev or feature branch its own DB snapshot with minimal disk overhead.
  • Concerns raised about prod data in non‑prod branches; suggested mitigations: anonymized base dumps, though many security teams disallow prod data outside prod regardless.

AI, “Vibe Coding,” and Safety

  • Some criticism of Neon’s AI/vibe‑coding marketing: catching AI‑generated SQL injection after the fact is seen as a poor substitute for understanding security fundamentals.

Book Tangent: Project Hail Mary

  • Large side thread debates the book’s merits:
    • Consensus: fun, fast, very readable “cheeseburger sci‑fi,” strong on problem‑solving and “competence porn,” weak on deep characters or literary quality.
    • Audiobook receives especially strong praise; many say it’s better experienced in audio due to performance and sound design.
    • Compared repeatedly to The Martian: similar structure and feel; some prefer The Martian, others Project Hail Mary.
    • Divided views on whether audiobooks “count” as reading; one commenter insists they’re a different medium, others reject that as snobbish.
  • Numerous alternative sci‑fi recommendations appear (e.g., Children of Time, Dune, Neuromancer, Hyperion, Culture series, Ted Chiang collections), generally framed as “meatier” than Weir’s work.

Problems the AI industry is not addressing adequately

AGI timelines, definitions, and plausibility

  • Wide disagreement on timelines: claims range from “verbal-only AGI for most math in 2–7 years” to “maybe 2040 with new paradigms” to “we’re nowhere near, lacking even a theory of comprehension.”
  • Definitions diverge:
    • “Median human across almost all fields” is seen by some as already “basically achieved,” others call that meaningless or wildly overstated.
    • Debate over whether solving formal, well-specified tasks counts as “general,” vs needing open-ended problem-solving without prior data.
  • Some argue current LLMs show genuine “idea synthesis”; others say they only remix text and lack true understanding or originality.

Biology-inspired and alternative paradigms

  • One line of argument: accurate large-scale neuron simulations (continuous time, dendritic learning, rich temporal dynamics) could yield animal-like intelligence; money + compute + biological inspiration makes AGI by ~2040 plausible.
  • Pushback: these ideas are already being tried in labs and academia; paper-churn incentives and transformer efficiency keep alternatives marginalized, but any new paradigm must beat transformers on cost–performance.

Job-hopping and “revealed beliefs” about AGI

  • The article’s inference (“people leaving leading labs → AGI not close”) is widely criticized as bad logic.
  • Commenters list mundane reasons: 8–10× pay bumps, better equity, seniority, dislike of bosses, portfolio diversification, and “screw-you money,” regardless of AGI beliefs.
  • Some see frequent moves and shifting data-center plans as evidence AGI rhetoric is mostly for fundraising and hype.

Company behavior vs rhetoric

  • Skeptics: if AGI were imminent, firms wouldn’t focus on chatbots, sales, and engagement; they’d be in “crash program” mode. Canceled or reshaped infra plans are read as cooling expectations.
  • Others: products both fund research and generate invaluable interaction logs (RLHF/agents), which may be the fastest path to better models. Chatbots are framed as “data funnels,” not just revenue.
  • Several note a pattern of extreme fear (“x-risk”) and extreme optimism being used to justify faster deployment either way.

Hallucinations, reliability, and “understanding”

  • Many see persistent hallucinations as the core unsolved problem that makes full autonomy unsafe; “agents” are viewed by some as Rube Goldberg workarounds.
  • Others claim hallucinations are more manageable now and that LLMs already deliver high-impact productivity gains when paired with verification, RAG, and iterative loops.
  • Large subthread on whether “a sufficiently good simulation of understanding is understanding,” vs the need for deeper mechanistic insight; no consensus.

Ethics, social impact, and desirability of AGI

  • Concerns about optimizing AI for addictiveness (short-form video, flattering chatbots) rather than benefit; parallels to social-media harms.
  • Climate impact of AI-driven data centers sparks debate: some see near-term fossil-heavy buildouts as irresponsible; others emphasize nuclear/renewables and argue added demand can coexist with decarbonization.
  • Several argue that, if achieved, AGI would primarily serve capital as infinitely scalable, right-less labor (superhuman “CEO,” digital slaves), questioning whether AGI is even socially desirable.

Business models, moats, and sustainability

  • Open questions about whether current pricing is VC-subsidized and what “true” costs would be absent cheap capital.
  • Broad agreement that compute and data—not secret algorithms—are the main moats, favoring large incumbents if AGI ever arrives.
  • Some expect an eventual AI bubble shakeout similar to dotcoms, followed by more modest, pragmatic uses; others predict “good enough and cheaper” systems will significantly disrupt labor well before any true AGI.

A 37-year-old wanting to learn computer science

Computer Science vs. Software Development

  • Several commenters argue the OP’s goals (web apps, blogs, streaming device, education apps) are primarily software engineering, not “computer science.”
  • They stress that CS typically includes math-heavy topics (discrete math, linear algebra, algorithms, data structures), which are only indirectly related to building typical apps.
  • Others say the distinction matters mainly if you want theory or academia; for building things, programming and applied engineering skills are more relevant.

How to Learn: Projects vs. Theory

  • Strong emphasis on “learn by building”:
    • Automate personal annoyances (backups, price trackers, downloaders).
    • Start with a concrete project you care about and pick tools as needed.
  • Counterpoint: don’t skip fundamentals—data structures, algorithms, networking, databases, testing, design paradigms. MOOCs and structured curricula (OSSU, MIT OCW, SICP, HtDP) are frequently recommended.
  • Some warn against getting lost in theory if your goal is practical work; others warn against “framework bootcamps” that produce assemblers rather than designers.

Use of AI/LLMs

  • One camp: avoid AI early; if it writes code for you, you won’t actually learn. Use it later as a multiplier.
  • Another: treat AI as a “knowledgeable but fallible friend” for explanations, alternatives, and debugging, but never as an unquestioned expert.
  • A more skeptical view calls LLMs “fake experts” that lie unpredictably; useful only where errors are low-impact and supervision is strong.

Age, Jobs, and the Market

  • Many late starters (30s–40s+) report successful transitions and encourage the OP; “no age limit” is a recurring theme.
  • Others describe pervasive ageism: difficulty getting interviews, pressure to hide age, and being filtered out as “culture fit.”
  • Bootcamps are described both as a fast route that has worked for some and as predatory debt traps for others.
  • Some say a year of focused effort plus learning on the job can work; others claim breaking in at mid‑30s+ without connections is nearly impossible.

ADHD, Motivation, and Life Design

  • Commenters warn that quitting work entirely can backfire, especially with ADHD; a job provides structure.
  • Advice includes keeping some form of income, setting deadlines, having an exit plan, and guarding against distraction and paralysis from too many goals.
  • Motivation, curiosity, and “love of building” are repeatedly described as more decisive than age.

Stop Killing Games

Digital “ownership” vs rental

  • Debate over whether “Buy” buttons are deceptive when licenses can be revoked; some want explicit “Rent/Lease license” labeling so users understand they’re getting revocable access, not property.
  • Others argue the whole thing is semantic: software has long been licensed, not owned; creators should be free to distribute on whatever terms they choose, even if that means eventual destruction.
  • Counterpoint: in other domains (books, DVDs), purchased copies remain usable regardless of publisher decisions; destroying access to purchased games feels like theft and cultural vandalism.

Archiving, storage, and cloud lock‑in

  • Some users systematically obtain DRM‑free copies of games they buy and archive them (S3, Glacier, local NAS) to pass on like physical book collections.
  • Disagreement over risk models: cloud is praised for durability and ease; others stress account lockouts, billing failures, and legal shutdowns—“you don’t own it, you have revocable access.”
  • Self‑hosting advocates claim home labs are sufficiently reliable and less complex than large clouds; others point to cost, effort, and misconfiguration risk.

Stop Killing Games initiative & online services

  • Supporters see the initiative as modest: if you sell a one‑off game that depends on servers, you must ship an end‑of‑life plan (offline/LAN mode, local server binaries, or clear expiry/refund terms).
  • FAQ excerpts show it doesn’t demand perpetual official servers or retroactive fixes; older incompatible titles might be grandfathered.
  • Critics worry about feasibility, especially for games built on non‑redistributable proprietary components, and about disproportionate burden on indie devs.
  • Some fear regulation will push studios further into subscriptions/SaaS and stricter DRM, or encourage malicious compliance where “technically playable” clients are useless.

Piracy, DRM‑free platforms, and workarounds

  • Many prefer GOG, itch.io, Humble, or DRM‑free Steam titles, and even use pirated or “repacked” versions of games they legally own to avoid forced updates, online checks, or shutdowns.
  • A moral line appears: “If buying isn’t owning, pirating isn’t stealing” is used to justify archiving or continuing to play delisted content.

Broader analogies: appliances, right‑to‑repair, and regulation scope

  • IoT‑locked appliances and EVs are cited as parallel trends: hardware that can be remotely degraded or disabled, undermining repair culture and longevity.
  • Suggestions include mandatory sunset plans, open‑sourcing or escrow on EOL, or at least clear categorization (product vs time‑limited service).
  • Others see this as overreach for a relatively minor issue compared to more pressing policy areas, preferring market pressure and labeling over law.

OBBB signed: Reinstates immediate expensing for U.S.-based R&D

Scope of the Change (Section 174 / R&D Expensing)

  • TCJA (effective 2022) forced R&D – including software development – to be capitalized and amortized (5 years domestic, 15 years foreign).
  • Commenters describe absurd situations where companies losing cash still owed tax because R&D salaries couldn’t be fully deducted in the current year.
  • OBBB restores immediate expensing for domestic R&D and explicitly treats software development as R&D, with a catch‑up deduction allowed for 2022–2024 costs.
  • Many see this as undoing a serious policy mistake that hurt startups and cash‑flow‑positive growth companies and generated significant accounting overhead (engineer time classification, “project” bureaucracy).

Domestic vs Foreign R&D and Offshoring Incentives

  • Foreign R&D remains on a 15‑year amortization schedule. Some hail this as “literally could not be better” for US tech workers.
  • Others do the math: the NPV penalty is modest compared to 50–70% lower offshore wages, so tax timing alone rarely outweighs labor‑cost arbitrage.
  • Debate over what counts as “foreign” (employees of a foreign subsidiary vs direct foreign contractors) remains somewhat unclear in the discussion.
  • Several note persistent non‑tax frictions with offshoring: time zones, culture, legal complexity, chronic quality/coordination issues and repeated cycles of offshoring then onshoring.

Immigration, H‑1B, and Fees

  • Thread highlights a raft of new or higher immigration‑related fees and a small excise tax on certain cash remittances abroad.
  • Some argue this effectively raises the cost of foreign workers, indirectly favoring domestic hiring; others see it as nickel‑and‑diming immigrants rather than fixing core issues (like H‑1B wage floors).

Impact on Hiring, Layoffs, and AI Narrative

  • Many expect improved startup cash flow and some increase in US developer hiring or at least fewer layoffs, especially at smaller firms that truly felt 174.
  • Others think the layoffs were mainly about higher interest rates, stock‑price management, and AI “vibes,” so the reversal of 174 will only partially offset broader headcount pressure.
  • Strong disagreement on AI: some see it replacing a large share of coding work (especially junior/mid roles), others say it’s a powerful accelerator but far from replacing software engineering as a job.

Legislative Process and Omnibus Politics

  • The bill is widely criticized as “overstuffed”: R&D fix plus unrelated items (immigration fees, green‑energy changes, gambling loss limits, Medicaid timing).
  • Several see this as a symptom of a broken US process: one big reconciliation bill, delayed “time‑bomb” provisions, and partisan games about when cuts take effect.
  • Extended subthreads debate filibuster, reconciliation, and alternative voting systems (RCV, approval, STAR) as structural reforms.

Green Energy, Deficits, and Distributional Concerns

  • Removal or reduction of green‑energy incentives is seen by some as a major negative shock: stranded private investments, lost industrial policy vs China/Europe, and long‑run climate/competitiveness costs.
  • Others argue non‑nuclear renewables have been subsidy‑dependent and should now prove their economics.
  • Several frame OBBB as a large wealth transfer via higher deficits and inflation: benefits accruing to capital owners and high‑margin firms, with ordinary savers paying via devalued dollars.

International Comparisons

  • Canada’s SR&ED and similar European credits (e.g., France’s CIR) are mentioned as more generous on paper (large refundable percentages of dev salaries), but also paperwork‑heavy, patchily enforced, and sometimes abused.
  • Some non‑US commenters note that in their systems, 100% expensing of software salaries is normal, making the US experiment with Section 174 look especially self‑sabotaging.

What Microchip doesn't (officially) tell you about the VSC8512

Enjoyment of the series & hardware opacity

  • Commenters praise the depth of the reverse‑engineering work and note it exemplifies how opaque hardware can be, especially PHYs.
  • People highlight that vendor capabilities and errata often only become clear late in bring‑up, forcing redesigns; some compare this to “hidden” behaviors in software libraries.
  • The VSC8512’s lineage through multiple acquisitions is seen as part of the confusion, with a sense that even “opened up” docs from Microchip still omit important details.

PHYs, legacy tech, and real-time networking

  • Token Ring support lurking in “dark silicon” sparks discussion about legacy industrial systems needing deterministic behavior.
  • Long subthread clarifies real-time categories (hard/firm/soft) and notes:
    • Consumer/pro‑audio over Ethernet is usually soft or firm real‑time.
    • Safety‑critical domains (nuclear, avionics) demand hard real‑time guarantees.
  • AVB/TSN are mentioned as making Ethernet more suitable for tight timing, but traditional Ethernet alone is seen as inadequate for the strictest cases.
  • A claim that DOCSIS is token‑based is corrected: it uses TDMA/CDMA, not token passing.

Microchip, MPLAB, and GPL concerns

  • One user objects to Microchip charging ~$1,000 to unlock compiler optimizations in what appears to be a GCC‑based toolchain, questioning GPL compliance.
  • Others respond that:
    • GPL permits charging money; the key is providing corresponding source.
    • Microchip does publish source archives, which likely satisfies the license.
    • A noted “loophole” is contracts that forbid customers from even asking for source (Qualcomm example), raising questions about enforceability.

Vendor toolchains vs custom toolchains

  • Many embedded developers dislike vendor IDEs/BSPs, finding them buggy, bloated, or hard to reproduce issues with.
  • Others insist on using vendor stacks because:
    • Vendor silicon support often requires reproducing bugs in their official environment.
    • Offloading toolchain liability is attractive for organizations.
  • There’s a split between those who prefer minimal, upstream GCC/Clang + hand‑written drivers, and those who prioritize official support and integration.

Ecosystems, documentation quality, and vendor behavior

  • Microchip receives mixed reviews: more open than some predecessors, but still poor tooling (huge MPLAB installs, broken default projects) and incomplete docs.
  • ST’s STM32 line is widely liked for CubeMX configurator and relatively good docs, but criticized for:
    • Numerous variants causing supply and selection headaches.
    • Documented and undocumented errata (especially higher‑end parts).
  • NXP is described as having “too much” documentation that’s hard to navigate; tool download friction is mentioned.
  • Nordic is praised for BLE parts and reasonable documentation, though Zephyr is seen as heavy for small MCUs.
  • RP2040 is singled out for excellent docs and a vibrant, open community; esp32 also gets positive notes for docs and framework (esp‑idf).
  • Texas Instruments’ MSP430 line is cited as a model: comprehensive family manuals, per‑device guides, and explicit errata documentation.

Big semiconductor vendors & secrecy

  • Broadcom, Qualcomm, and similar vendors are depicted as hostile to small/medium customers: NDAs, restricted docs, sales‑gatekept access, and unresponsive support unless volumes are very large.
  • Anecdotes describe:
    • Internal silos and codebases with layers of wrappers and long‑lived unfixed bugs.
    • Known bug lists kept internal and not exposed in public errata.
    • Tiered support where only high‑volume clients get real engineering help or design influence.

Why vendors stay closed

  • Several rationales are proposed:
    • Cost of producing externally consumable documentation and supporting many small customers.
    • Desire to funnel prospects through sales and management for upselling.
    • Fear that detailed public docs help competitors in feature and performance comparisons.
    • Limited margins and high NRE: sub‑million‑unit customers may not justify the support burden.

Other wishes and side notes

  • Someone wishes Microchip would publish programming algorithms and bitmaps for legacy Atmel SPLDs/CPLDs; current understanding is partly reverse‑engineered.
  • Raspberry Pi’s RP series and TI’s MSP430 are held up as examples of how good, public documentation substantially improves the embedded developer experience.

Nvidia won, we all lost

GPU Performance, Value, and “Luxury” Positioning

  • Many commenters feel GPU generational gains for gaming have stagnated relative to price: mid‑high cards from 2017–2020 still feel “good enough” for most titles at 1080p/1440p.
  • Others strongly dispute claims that a 2080‑class card is “close” to current flagships, citing benchmarks, 4K, high-refresh monitors, VR, and ray tracing where modern high-end cards are dramatically faster.
  • Broad agreement that high-end GPUs have shifted from enthusiast tools to luxury goods; “midrange” now effectively starts around $500–650, which some see as normalization of inflated pricing.

Pricing, Supply, and AI vs Gaming

  • Nvidia is seen as prioritizing datacenter/AI chips; consumer GPUs are perceived as a side business used to maintain mindshare and a “halo” for CUDA/RTX.
  • Ongoing resentment over paper launches, persistent scalping, and MSRPs that don’t reflect actual street prices. Some argue Nvidia could produce and stock more (like console launches); others point to TSMC capacity and AI demand as hard limits.
  • A minority defends Nvidia’s behavior as rational profit-maximizing in a supply‑constrained market; critics call it enshittification and deliberate luxury positioning.

12VHPWR / 12V‑2x6 Connector and Safety

  • Long subthread on melting/burning connectors: disagreement over how much 12V‑2x6 improves the situation and whether failures are mostly user error vs design negligence.
  • Engineers highlight lack of fusing, sensing, and current balancing as “fire waiting to happen”; others note these features belong on the PCB/PSU rather than in the connector itself.
  • Mention of a prior lawsuit and the perception that Nvidia shipped an obviously marginal design to support extreme power draw.

DLSS, Upscaling, and “Fake Frames”

  • Strong divide: some see DLSS and frame generation as “snake oil” used to cheaply claim huge FPS gains, with visible artifacts, latency, and a departure from “real” engine‑rendered frames.
  • Others say DLSS (especially recent versions) is excellent, often superior to FSR and third‑party tools, and that temporal methods plus upscaling are now fundamental to real‑time ray/path tracing.
  • Broader technical discussion: TAA artifacts, MSAA’s impracticality in modern deferred pipelines, and the tradeoff between higher pixel density vs smarter reconstruction.

Monopoly, Lock‑In, and Alternatives

  • Many frame Nvidia as having de facto monopoly power in GPU compute via CUDA and RTX‑exclusive features, enabling aggressive pricing and influence over reviewers.
  • AMD is praised for open drivers and solid gaming value (especially on Linux), but criticized for weak AI/CUDA alternatives; Intel Arc is seen as promising but immature.
  • Some argue that users themselves created this situation by overwhelmingly choosing Nvidia; others respond that once lock‑in exists, “just switch” is no longer a realistic market correction.

Everything around LLMs is still magical and wishful thinking

Crypto vs. LLMs: Similar Hype, Different Substance

  • Some see “it’s crypto all over again”: heavy marketing, exaggerated claims, and a social environment where criticism is dismissed.
  • Others argue the analogy is shallow: crypto never found broad legal-economy uses beyond censorship‑resistant payments (though that’s life-or-death useful for some), whereas LLMs already have many mainstream, non-speculative applications.
  • A recurring point: both fields suffer from dishonest or naive overpromising, which drives away people who might benefit from a sober understanding.

Real-World Utility Reports

  • Strong positive anecdotes:
    • Classifying invoices, data science tasks, PCAP analysis, transcribing and mining thousands of calls, summarizing large text corpora, drafting legal documents, research assistance, and brainstorming.
    • Code help: debugging, boilerplate, refactors, unit tests, SQL, “rubber-duck” architecture discussions; some claim 2–5x personal output, a few claim “LLMs write nearly all my production code” with human review.
  • Many treat LLMs as high-level languages or “thinking partners” rather than autonomous agents.

Limits, Failure Modes, and Trust

  • Frequent failure modes: hallucinated APIs, protocols, citations, laws; ignoring project docs; forgetting instructions; weak math; poor performance in niche stacks or complex architecture; brittle behavior across sessions.
  • Strong warnings against use for mission-critical code, safety‑critical systems, or unsupervised legal filings; multiple external examples of AI-caused legal errors are cited.
  • Several stress that LLMs can make users feel productive while quietly injecting subtle bugs or conceptual slop.

Productivity Claims and Measurement Problems

  • Practitioners report modest average gains (often ~10–30%) rather than “10x”, due to non-coding overheads and review costs.
  • Management fixates on headline multipliers; internal “success” metrics are often narrow or methodologically weak.
  • The article’s main critique, echoed by some commenters: sweeping claims (“Claude writes most of X’s code”, “I’m 5x everyone else”) are anecdotal, unverifiable, and lack crucial context (domain, baseline skill, quality standards, review rigor).

Economics, Cost, and Open Models

  • Debate over sustainability: huge training spend vs currently limited impact on GDP and heavy VC subsidies.
  • Open-weight models (e.g., Llama family, Qwen) are seen as a check on API pricing and vendor moats; legal attacks on them could shift power back to a few incumbents.
  • Many expect strong local models on consumer hardware to be “good enough” for most work, even if bleeding edge remains centralized.

Workflows, Methodology, and “Prompt Engineering”

  • Effective users describe careful, iterative workflows: targeted prompts into known code regions, explicit test planning, checklist-driven agents, and strict human auditing.
  • Others find that writing robust prompts and then verifying output can take as long as doing the work manually, especially for novel problems or messy legacy systems.
  • General consensus: LLMs amplify good engineers and good processes; in weak contexts they mainly accelerate the production of low-quality output.

Broader Impacts and Open Questions

  • Concern about: erosion of junior roles and skill pipelines, AI-generated “slop” in codebases and documents, overhype driving bad management decisions and premature layoffs.
  • Some foresee large efficiency gains in “manual data pipelining” and back-office work, with humans shifting toward verification and liability-bearing roles.
  • Safety issues like prompt injection and limited context are flagged as fundamental, under-addressed constraints.
  • Many commenters reject both “magic” and “useless” extremes, calling for rigorous, domain-specific evaluation rather than vibes-based extrapolation.

Being too ambitious is a clever form of self-sabotage

Ambition: Fuel vs Self-Sabotage

  • Several distinguish “ambition as action” vs “ambition as identity signaling.”
  • “Too ambitious” is framed as: ambitions that harm doing, become a substitute for action, or are tied to craving honor rather than outcomes.
  • Contrast between people who quietly climb smaller “mountains” to prepare vs those who refuse anything but Everest and then stall.

Taste–Skill Gap and Creative Frustration

  • Many resonate with the “taste-skill discrepancy”: taste improves faster than ability, creating shame and paralysis.
  • Over-researching and “developing taste” can turn people into critics instead of creators.
  • Quantity-beats-quality anecdotes (e.g., photography, Federer statistics) support the idea of learning through many imperfect attempts—but several note these examples are low-cost domains.

AI, Tools, and Depth of Skill

  • One line of discussion: AI raises output “skill” (speed, baseline quality) without improving underlying craft or taste.
  • Some fear AI shortcuts prevent real learning, especially in programming, design, or art.
  • Others argue detractors often haven’t seriously used AI and that fears are partly about economic obsolescence; this claim is challenged as stereotyping and logically weak.
  • Legal/authorship worries (licensing, plagiarism, responsibility for code) also deter use.

Perfectionism, Procrastination, and “Eternal Child”

  • Many see themselves in the pattern: gifted as kids, now stuck chasing impossible standards and avoiding “ordinary” work.
  • Described as a “puer aeternus” pattern: preserving infinite potential by never committing, fearing being merely average.
  • Suggested remedies: notice the mental “callback” that avoids boring finite choices; retrain it through small, repeated commitments.

Planning, Strategy, and Chores

  • Over-strategizing can turn exciting ideas into dead chores; planning itself becomes a dopamine hit and a way to avoid execution.
  • Some criticize cultural over-valuation of “grand strategy” versus the unglamorous consistency, maintenance, and grunt work that actually ship things.

Curiosity, Scope, and Cost

  • One commenter calls “unconstrained curiosity” a vice; others strongly defend it as the root of scientific and creative breakthroughs.
  • Scope creep and constant bar-raising are seen as common self-sabotage patterns.
  • Several stress context: “just do it” is powerful for cheap, repeatable work, but high-cost, low-frequency bets (startups, megaprojects) genuinely need more upfront planning.

Upbringing, Ego, and Standards

  • Discussion of parenting patterns: praising innate brilliance vs effort or self-evaluation can feed fragile ambition and fear of mediocrity.
  • Some suggest deliberately doing things you’re bad at, or building for your own needs, to reduce pressure and reconnect with process over perfection.

# [derive(Clone)] Is Broken

Core issue with #[derive(Clone)]

  • Thread centers on the fact that #[derive(Clone)] adds bounds like T: Clone on generic types, even when only the fields need to be Clone.
  • Example: struct Foo<T>(Arc<T>) is derive-failing today because T: Clone is required, though Arc<T>: Clone needs no such bound.
  • This also affects other derivable traits (Debug, PartialEq, etc.), making derive less useful for many generic types and for phantom parameters.

Historical and type-system constraints

  • Linked design notes explain the original choice: “perfect derive” (deriving from field requirements) used to be blocked by:
    • Need to allow cyclic trait reasoning (like for auto traits Send/Sync), which is hard to keep sound.
    • A semver hazard: derived bounds would silently change when private fields change (e.g., switching from Rc<T> to T in a list type alters when List<T>: Clone holds).
  • Some argue this is now mostly a policy/semver question, not a hard technical blocker.

Workarounds and proposed improvements

  • Several crates implement “perfect derive” or allow explicit where-like annotations on derive to override bounds, sometimes with escape hatches for cycles.
  • Suggestions:
    • Allow explicit bound syntax inside derive (e.g. #[derive(Clone(bounds(Arc<T>: Clone)))]).
    • Add attributes to exclude fields from auto-derives (#[noderive(Clone)] style).
    • Keep derive simple in std and rely on crates for advanced behavior.

Developer experience and error messages

  • Multiple commenters report being badly confused the first time they hit this, especially when all fields are obviously Clone but the type isn’t.
  • Current diagnostics tend to point at the inner type (“implement Clone for T”), often suggesting the wrong fix.
  • There is a concrete request to improve the error message by explaining that derive inserted overly restrictive bounds and suggesting a manual impl as a fix.

Comparisons and broader language debates

  • Haskell’s deriving was cited; correction: Haskell effectively does “perfect derive” by constraining only field types.
  • Long subthreads debate Rust vs Haskell features (strictness, linear/affine types) and Rust vs C++ complexity, macros, and ecosystem bloat.
  • Some see Rust derive as a useful convenience whose edge cases don’t justify extra complexity; others see this quirk as an unnecessary “sharp edge” that undercuts Rust’s usual ergonomics.

LLMs and boilerplate

  • One branch argues that LLMs can generate and maintain manual impls, reducing the need for richer derive mechanisms.
  • Others are skeptical, pointing to hallucinations, overconfident wrong answers, and the risk of developers not recognizing subtle mistakes.

Why are there no good dinosaur films?

Changing scientific understanding

  • Several comments reflect on how quickly paleontology and geology have changed: asteroid-impact extinction was speculative in the 80s/early 90s and is now near-consensus, plate tectonics only entered school curricula in the late 60s–80s, and dinosaur depictions (feathers, posture) have shifted dramatically.
  • People note how odd it feels that things they “always knew” were unknown to their parents or grandparents, and how unevenly new science diffuses across regions and school systems.
  • There’s debate over how “settled” the Chicxulub impact is versus multi‑cause models (Deccan Traps “one‑two punch”), and over what counts as “proof” versus a robust theory.

Jurassic Park: wonder vs “creature feature”

  • Many argue the original Jurassic Park still delivers awe: the first Brachiosaurus reveal and the T. rex paddock sequence are repeatedly cited as masterful buildup and payoff.
  • Others agree with the critique that after its initial grandeur the film becomes a conventional monster chase, though fans contest that the dinosaurs are framed as animals, not supernatural “monsters.”
  • There’s praise for the film’s lived‑in world: logistics of the park, staff dynamics, legal/financial angles—all largely inherited from Crichton but carefully preserved on screen.
  • The book–film comparison recurs: the novel is seen as more explicitly about complexity/limits and chaos; the movie shifts toward human fallibility and spectacle, but many feel it improves the characters (especially Hammond and Malcolm).

Education, religion, and “theory”

  • Commenters recall teachers being criticized or disciplined for teaching plate tectonics in the 90s due to religious objections, and creationist tactics like “Were you there?” being used against deep‑time science.
  • There’s discussion of public confusion over “theory” in scientific vs colloquial sense, and of how controversial topics get memory‑holed in classrooms, which paradoxically can spark more curiosity.

Why good dinosaur films are rare

  • Several argue dinosaurs alone don’t give you much thematic range: they’re large non‑verbal animals, so adult stories tend to collapse into “run from big predator” unless reframed as human‑against‑hubris (Jurassic Park) or something metaphorical.
  • People contrast dinosaurs with zombies, vampires, and aliens: those are flexible symbols for disease, sexuality, capitalism, etc., and can be dropped into many settings; dinosaurs are historically constrained and over‑identified with Jurassic Park’s premise.
  • Some suggest the basic “revived dinosaurs in the modern world” story has been so definitively claimed by Jurassic Park that any similar film feels like a knockoff; time‑travel setups create their own narrative problems.

Franchises, sequels, and Hollywood incentives

  • Many see the decline of dinosaur films as a subset of broader franchise fatigue: Alien, Terminator, Matrix, and Star Wars are cited as series that hit one or two “local maxima” then flailed.
  • There’s a lot of blame on studio economics: billion‑dollar grosses for middling Jurassic World entries show there’s little financial incentive to take risks or craft deeper stories.
  • Commenters criticize modern blockbusters for over‑relying on CGI, quippy dialogue, and IP recycling instead of detailed worldbuilding and strong scripts, while noting that script is cheap but most vulnerable to executive interference.

Nostalgia and current reception

  • Some worry that acclaim for Jurassic Park is just generational nostalgia; younger viewers in the thread generally still rate it much higher than its sequels and recent Jurassic World films, citing story and characters more than VFX.
  • Others found the original underwhelming even at release and side with Ebert that it lacked sustained grandeur, showing the divide is not purely generational.

Alternatives and outliers

  • A few works are offered as “better” or at least interesting dinosaur media: Don Bluth’s The Land Before Time, Apple’s Prehistoric Planet, the animated series Primal, the Czech film Cesta do pravěku, and older pulp‑style movies.
  • But overall, the thread consensus is that truly strong dinosaur stories for adults remain rare, and that Jurassic Park (plus perhaps a handful of TV/animated works) still stands largely unchallenged.

The story behind Caesar salad

Visiting the “Original” and Restaurant Takes

  • Some recommend visiting Caesar’s in Tijuana for the tableside experience, though the current recipe reportedly differs from the original (anchovies, Worcestershire, Tabasco, lemon, multiple garlic forms).
  • Others cite chain and local restaurants with unexpectedly good Caesars, showing wide variation in quality and style.

Home-Made Dressing & Technique

  • Strong consensus that homemade dressing is vastly better than bottled.
  • Multiple detailed recipes shared: classic emulsions with egg yolk, Dijon, lemon, anchovy, Worcestershire, garlic, neutral oil; plus “shortcut” versions based on mayonnaise.
  • Tips include:
    • Use a stick blender for foolproof emulsions.
    • Thin to dressing consistency with water or extra acid.
    • Combine extra-virgin and neutral oils so vinaigrettes don’t solidify in the fridge.
    • Chill bowls and lettuce; shock or refrigerate romaine for crispness.
  • Variations: added bacon, capers, kale, arugula, chickpeas, roasted Brussels sprouts, etc., often acknowledged as “not really Caesar” but tasty.

Anchovies, Eggs, and Authenticity

  • Debate over anchovy content: some insist anchovies are “the point,” others prefer anchovy-free “Caesar-style” vinaigrettes.
  • Worcestershire is noted as fish-based and an alternate umami source.
  • Discussion of coddled vs raw eggs, and how that affects emulsification.

Form, Etiquette, and “Proper” Caesar

  • Classic Caesar described as whole romaine leaves, originally eaten by hand; some diners dislike uncut leaves and expect chopped salad.
  • Informal “rules” like “no knife on the salad plate” are mentioned, but treated as cultural/parental artifacts.

Taste, Popularity, and Culture

  • Many see Caesar as a “gateway” salad for people who otherwise dislike vegetables; others criticize it as “just dressing on scaffolding.”
  • Comparisons to pizza in near-universal appeal are contested, with some saying Caesar is mainly a North American thing, others reporting it’s common in parts of Europe.
  • Side debate on salads in American vs Mediterranean contexts, and what “counts” as a salad (fruit/nut salads, pasta salad, etc.).

Health & Safety Concerns

  • Brief argument over foodborne illness risks from raw vegetables and eggs; some view salad risk as negligible, others avoid raw produce entirely.

Sleeping beauty Bitcoin wallets wake up after 14 years to the tune of $2B

Scope of the event

  • Large wallets from 2011 (10k BTC each, ~60k BTC total) moved after ~14 years.
  • Some posters dispute calling this “Satoshi-era” since public activity from Satoshi largely ended in 2010.

Who controls the coins? Explanations debated

  • Plausible mundane explanations: owner finally regaining access (out of prison, recovered device, inheritance, “old man USB stick in a drawer”).
  • Many argue brute-forcing keys is effectively impossible at Bitcoin’s key sizes; hobbyist projects that “crack wallets” mostly exploit weak RNGs/brainwallets, not actual keyspace search.
  • Minority argue it could be:
    • An undisclosed implementation/RNG bug limited to early wallets.
    • A state-level or university team with a targeted shortcut.
    • Eventually, quantum attacks on elliptic-curve cryptography.
  • Others note multiple related wallets moving at once makes random brute force less likely and coordinated key recovery more likely.

Feasibility of brute force and cryptopocalypse concerns

  • Multiple back-of-the-envelope calculations with H100-class GPUs and comparisons to Bitcoin’s total hash rate conclude generic brute force is computationally hopeless.
  • Counterarguments: attacks could be highly localized (specific curve weakness, wallet bug, or RNG flaw), so not equivalent to breaking all ECC.
  • Some emphasize that a general ECC break would have far larger consequences (TLS, banking, state secrets) than stealing Bitcoin.

Liquidation, market impact, and “whale” behavior

  • Consensus: dumping billions on open exchanges at once would move price hard; serious holders would use:
    • OTC / private deals and “dark pool”-like arrangements.
    • Gradual selling, or borrowing against BTC (“buy-borrow-die”) instead of selling.
  • Even just moving long-dormant coins is seen as bearish signal: more potential liquid supply plus fear of future selling.
  • Debate whether this particular move is big enough to truly “crash” Bitcoin versus being absorbed by institutional and ETF demand.

Security, traceability, and wrench attacks

  • Concern that whoever controls such wallets is at risk of physical coercion (“wrench attacks”); advice is to rotate to stronger address types and improve personal security.
  • Some emphasize Bitcoin is not “unpoliced”: large thefts have led to arrests once thieves touch regulated exchanges; blockchain analytics firms track tainted coins and mixer usage.
  • Others argue enforcement remains weaker than in fiat systems and that irreversible, pseudonymous transfers make scams and extortion easier.

Bitcoin as currency vs store of value

  • Long thread on whether Bitcoin is:
    • A failed “peer-to-peer electronic cash” system now functioning mainly as a speculative, deflationary store of value, or
    • A uniquely valuable, non-sovereign, censorship-resistant asset.
  • Critics: volatility, lack of wide retail pricing in BTC, reliance on stablecoins and exchanges, huge energy use, and suitability for scams mean it’s closer to a speculative commodity than a working currency.
  • Supporters: fixed supply, resistance to seizure/censorship, global 24/7 settlement, and use in unstable-currency countries or under repressive regimes are seen as core value.

Lost coins, deflation, and macro arguments

  • Some call lost coins a “bug” that worsens Bitcoin’s deflationary bias and encourages hoarding, echoing classic “deflationary spiral” critiques and gold-standard history.
  • Others counter:
    • Lost coins act like a proportional “airdrop” to remaining holders.
    • Bitcoin should be seen as an investment asset/“digital gold”, where deflation is desirable, not as a primary currency.
  • Extended debate on inflation vs deflation, historical depressions, and whether deflation inherently discourages productive investment.

Trust, institutions, and “intrinsic value”

  • One side: fiat has “intrinsic” demand via taxes and legal tender status; Bitcoin is a “consensual hallucination” whose value rests only on sentiment and speculation.
  • Other side: all money is collective belief; Bitcoin’s scarcity, neutrality, and independence from states are exactly its point.
  • Repeated theme: you cannot truly eliminate trust; crypto merely shifts it—from states and banks to code, miners, exchanges, and social consensus.

Human angle: regret and missed chances

  • Many recall casually mining or spending BTC when it was <$10 and deleting wallets or selling early; broad agreement that most early users would have sold long before today’s prices.
  • This story reopens old “what if” scenarios: landfill hard drives, Silk Road spending, and small stashes that might now be life-changing.

LLM-assisted writing in biomedical publications through excess vocabulary

LLM “Excess Vocabulary” and Weasel Words

  • Commenters focus on the paper’s finding that words like “delves,” “potential,” “significant,” and a long list of “excess style words” have surged.
  • Some see these as vague, business‑hype vocabulary that obscures meaning, echoing Orwell’s criticism of abstract, obfuscatory language.
  • There is debate over “significant”: in statistics it is precise, but in generic prose it’s seen as a weasel word unless clearly defined.
  • One person argues that trends for “delves” are confounded by its use in games (WoW, Magic: The Gathering, YouTube essays), suggesting not all lexical changes are due to LLMs.

Recognition of “LLM‑ese” in Practice

  • Many say “delves” and patterns like “it’s not just X, it’s Y” are now strong LLM fingerprints.
  • Some like the clarity and tidy structure of LLM output but find the repetitive style and buzzwords grating.
  • Anecdotes describe professionals unknowingly revealing LLM use through emojis and characteristic explanation patterns.

Non‑Native Authors, Translation, and Editing

  • Several note that the majority of English‑language scientific papers are written by non‑native speakers; pre‑LLM, expensive “Author Services” filled this gap.
  • One side argues LLMs are “masterful translators” and a clear win for accessibility and equity, often improving clarity over human‑written drafts.
  • The opposing side worries non‑native authors may miss subtle but important shifts in meaning, and advocates for human editors familiar with both language and domain.

Responsibility, Accuracy, and Misuse

  • Concern that authors may over‑delegate responsibility to LLMs, blaming “the tool” when nuance or correctness is lost.
  • Others counter that responsibility ultimately remains with the authors, just as with human editing or tax professionals.
  • Some mention broader issues: publication pressure, fabricated results, and the reproducibility crisis, with LLMs potentially making low‑quality papers appear more credible.

Equity, Bias, and the Role of Writing in Science

  • The article’s “equity in science” framing is criticized via an example of a mis‑resolved citation, interpreted as over‑reliance on automated tools.
  • One view: writing skill is integral to scientific thinking; if a researcher can’t articulate findings, the science itself is suspect.
  • Counterview: writing and science are distinct skills; tools that lower the writing barrier let more capable scientists contribute, especially non‑native English speakers.
  • Some worry about “dumbing down” and cultural soft power of English‑centric, Western‑trained LLMs; others see this as another in a long line of technological shifts that reallocate, rather than destroy, skills.

Incapacitating Google Tag Manager (2022)

Blocking JS and Third‑Party Trackers

  • Several commenters say browsing with most JavaScript blocked is practical: allow first‑party scripts, selectively enable per site, and many pages work fine or even better.
  • Others find it burdensome, especially when visiting many new or vendor sites for work, where constant tuning of per‑site rules is tedious.
  • Mobile support for fine‑grained blocking is seen as weaker and less usable than on desktop.

Tools and Techniques

  • Common stacks: uBlock Origin (often in “advanced”/hard mode), uMatrix, NoScript, Privacy Badger, Cookie AutoDelete, DNS‑level blocking (Pi‑hole, NextDNS), and hosts file lists.
  • Strategy patterns: block all third‑party by default; allow only what’s needed; sometimes keep a separate “clean” browser with minimal extensions for testing or problem sites.
  • DNS/hosts‑based blocking is limited when GTM/analytics are proxied or served first‑party, including server‑side GTM and Cloudflare Insights.

What Google Tag Manager Actually Does

  • Multiple explanations clarify GTM beyond the article:
    • It’s a central container for injecting scripts (“tags”) without redeploying site code.
    • Primarily used by marketing to add/modify analytics pixels and ad trackers (Google Analytics, Facebook Pixel, etc.) and to attach triggers (URL, page state, events).
    • Offers versioning, preview, and permissions so non‑engineers can iterate quickly on campaigns.

Security, Performance, and Governance Concerns

  • Characterized by many as “XSS‑as‑a‑service”: non‑technical teams can inject arbitrary JS into production without code review, staging, or performance evaluation.
  • Reported problems: site breakage from bad third‑party scripts, large performance hits from dozens of tags, privacy‑policy drift as tags accumulate and are never cleaned up.
  • Some consider GTM among the worst software they’ve worked with; others note it can be “a good tool if you insist on doing those things.”

Ethics of Tracking and Advertising

  • One side: tracking via GTM is “racketeering”/spyware; advertisers historically measured performance without invasive surveillance and should do so again.
  • Other side: measuring ad effectiveness is framed as a legitimate business need; GTM is just the current mechanism.
  • Debate over whether widespread blocking would meaningfully degrade UX: some fear loss of behavioral insight; others argue good UX doesn’t require intensive analytics.

Data Poisoning and Active Resistance

  • Some propose polluting trackers’ data (e.g., fake events, AdNauseam, TrackMeNot) to degrade profiling.
  • Counterpoints: this mainly wastes advertisers’ budgets and may push Google to improve bot filtering; impact on the ad ecosystem is contested but viewed by some as worthwhile pressure.

Alternatives and Scope

  • For basic, privacy‑friendlier analytics (e.g., on static/GitHub Pages sites), commenters suggest many GA alternatives such as lightweight, non‑tracking services and server‑side, event‑level logging.
  • Several note that if you block GTM, you likely also want to consider blocking other analytics platforms like Yandex Metrica and Cloudflare Insights.