Hacker News, Distilled

AI-powered summaries for selected HN discussions.

Material 3 Expressive

Visual Style and “Expressiveness”

  • Many see “expressive” as pastel-heavy pink/purple, duotone, jelly-like shapes—more “SaaS landing page” or “failed Linux rice” than professional UI.
  • Several commenters find the palette physically tiring or nauseating, especially combined with motion; others say it looks playful, modern, and in line with current fashion.
  • Some like that it’s less corporate and more fun, especially for younger users; others want a restrained, Bauhaus‑style, form‑follows‑function aesthetic.

Usability, Information Density, and the Send Button

  • The flagship example—giant circular Send button—gets heavy criticism:
    • Gains a fraction of a second on first‑time discovery but permanently reduces space for composing and reading content.
    • Moves the most dangerous action (send) closer to the keyboard, making accidental sends more likely.
  • People argue the same usability gain could be achieved with clearer affordances (labelled button, better contrast, placement) without “blowing up” controls.
  • Repeated concern that Material trends systematically reduce information density and prioritize first‑use metrics over long‑term efficiency and expert use.

Page Implementation: Cursor, Motion, and Performance

  • The demo page itself is widely called unusable:
    • Custom circular cursor with capture/“magnet” behavior feels laggy, fights OS settings, and often stutters, especially while scrolling.
    • Paragraph blocks animate independently when scrolling, giving some users motion sickness.
    • Mid‑page forced dark→light switch is described as a “flashbang” and as ignoring system color‑scheme preferences.
  • Many see this as ironic for a UX case study and as emblematic of overdesigned, JS‑heavy sites that perform poorly on anything but top‑end hardware.

Research, Metrics, and Demographics

  • The article’s quantified “subculture,” “modernity,” and “rebelliousness” scores are widely mocked as pseudo‑scientific marketing, with doubts about survey framing and statistical rigor.
  • Some UX researchers in the thread explain participant panels, power analysis, and demographic balancing, but admit panels skew young and certain groups (e.g., 70+ women) are hard to recruit at scale.
  • Several see the focus on first‑fixation time and vibe‑metrics as optimizing for short‑term “wow” rather than durable, everyday usability.

Comparisons to Other Designs (Material 1–3, Holo, iOS, etc.)

  • Many miss Holo and Material 1, which are remembered as clearer, denser, and more “future‑tech” or task‑oriented.
  • Material 2 and 3 are criticized for flatness, ambiguous clickability, toggle states, and sameness across apps.
  • Some think Material 3 Expressive is a modest step back toward clearer grouping, contrast, and animations with purpose; others see it as just more rounded, purple, and childish.
  • iOS is repeatedly cited as also having bad UX, but still the benchmark many executives actually use; some see M3E as a clumsy attempt to chase iOS aesthetics.

Developer Ecosystem and Tools

  • Frustration that Material Web components are in “maintenance mode” while the docs still point to them, leaving Angular Material and third‑party kits to fill gaps.
  • Flutter developers complain that Material changes propagate unpredictably (e.g., sudden pink apps), forcing flags like useMaterial3: false. Some want M3 Expressive as a separate opt‑in system.
  • Others still value Material’s comprehensive design kits, component specs, and accessibility guidance, seeing M3E as an incremental expansion of options rather than a wholesale redesign.

Broader Critiques of Modern UI and Product Strategy

  • Strong sentiment that contemporary UI trends favor “vibes,” branding, and attention‑retention over clarity, consistency, and efficiency.
  • Complaints that hidden actions (ellipsis menus, share sheets), ever‑larger tap targets, and white‑space bloat particularly hurt power users, small screens, and seniors.
  • Several frame this as resume‑driven and metric‑driven churn: designers must keep changing things to justify roles, even when core principles (clear affordances, density, stability) are already well understood.
  • A minority welcomes movement toward more emotional, characterful interfaces, but even they often question whether M3 Expressive’s examples truly deliver that without compromising basic usability.

GOP sneaks decade-long AI regulation ban into spending bill

Scope of “Automated Decision Systems”

  • Commenters debate whether simple controllers (PID in espresso machines, pacemakers, fuzzy-logic rice cookers) are covered.
  • A quoted definition is very broad: any computational process using ML, statistics, data analytics, or AI that outputs scores, classifications, or recommendations influencing/replacing human decisions.
  • Several people note that this likely reaches far beyond “AI” in the popular sense and into many standard software systems.

Federal Preemption vs States’ Rights

  • Many see the 10‑year state preemption as starkly at odds with long‑standing GOP rhetoric about “states’ rights,” calling it power-seeking rather than principle-driven.
  • Others argue preemption is routine under the Commerce Clause and comparable to federal control over areas like air travel, banking, auto safety, etc.
  • There’s legal debate over the 10th Amendment, anti‑commandeering doctrine, and whether Congress can block states from attaching AI conditions inside domains they traditionally regulate (e.g., insurance).

Regulation, Innovation, and California

  • Some fear “market-destroying” state bills (especially from California) could smother AI, arguing a federal shield helps innovation and national security versus China.
  • Others counter that California is exactly the kind of “laboratory of democracy” that should be allowed to pioneer AI rules, and that tech is already thriving there despite regulation talk.

Consumer Protection and Sector Impacts

  • Critics warn the ban could:
    • Undercut state bias audits for hiring and algorithmic lending.
    • Hamstring efforts to regulate AI-driven insurance denials, especially in healthcare and Medicaid.
    • Limit local rules for autonomous vehicles and traffic behavior on public roads.
    • Preempt state privacy, content moderation, and age‑gate regulations that rely on automated systems.
  • Supporters contend existing anti‑discrimination and other federal laws still apply, and that overregulation mainly empowers bureaucracies and NIMBYs rather than citizens.

Workarounds, Semantics, and Enforcement

  • Some propose drafting broader, tech‑neutral rules (e.g., “any computerized facial recognition”) to indirectly cover AI, though others argue courts would see through this.
  • There is disagreement over how easy “AI laundering” and semantic games would be in front of competent judges.

Broader Political and Ethical Concerns

  • Several participants see unregulated AI as a path toward pervasive surveillance or social‑credit‑like systems.
  • Others explicitly welcome a decade without state-level AI rules, viewing heavy‑handed regulation as more dangerous than corporate misuse.
  • The measure is framed by some as a direct transfer of power from citizens and local governments to large data/AI corporations.

Branch Privilege Injection: Exploiting branch predictor race conditions

Vulnerability & Scope

  • New Spectre-class issue: a race in Intel’s branch predictor lets attacker-controlled user code influence predictions used in more-privileged contexts, leaking arbitrary memory (PoC reaches ~5 KB/s, enough to eventually dump all RAM; at that rate, 16 GB would take roughly 40 days).
  • Affects all Intel CPUs from Coffee Lake Refresh (9th gen) onward; researchers also observe predictor behavior bypassing IBPB as far back as Kaby Lake, though exact exploitability per model is unclear.
  • Authors report no issues on evaluated AMD and ARM systems. Commenters stress this is one member of a large “speculative execution” family, not a single bug.

Relation to Spectre / Training Solo

  • Both this and Training Solo are Spectre v2-style attacks on indirect branches:
    • Training Solo: attacker in kernel mode “self-trains” mispredictions to a disclosure gadget.
    • Branch Privilege Injection: attacker’s user-mode training updates are still in flight when privilege changes; updates get applied as if they were made in kernel mode, steering a privileged branch to a gadget.

Performance vs. Security

  • Heavy debate on branch prediction:
    • Many emphasize it’s fundamental for modern deep pipelines; without it, stalls would be catastrophic.
    • Others fantasize about simpler or static predictors, or even disabling prediction, but acknowledge large slowdowns (lfence-everywhere estimated ~10×; static prediction seen as 20–30% hit).
  • Paper reports microcode mitigation overhead up to ~2.7% on Alder Lake; software-only strategies evaluated between ~1.6% and 8.3% depending on CPU.
  • Some argue cumulative mitigations have eroded much of the headline performance gains of the last decade.

Mitigations, OSes & Microcode

  • Intel advisory (INTEL-SA-01247) and microcode (20250512) released; embargo ended May 13.
  • All major OSes are said to have mitigation/microcode paths; the Windows announcement is cited explicitly, with comments clarifying that Linux gets Intel microcode via distro packages (a way to check what your kernel reports is sketched after this list).
  • Long side-thread on whether Ubuntu 24.04 LTS is “up to date”:
    • One side: LTS with security backports is what production actually runs, thus relevant.
    • Other side: LTS kernels are effectively distro forks, not representative of current mainline mitigations.
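
On Linux, the kernel reports its own mitigation status under /sys/devices/system/cpu/vulnerabilities; a minimal sketch for dumping it (file names vary by kernel version, and an entry for this specific issue may or may not be present on yours):

```python
# Minimal sketch: print the kernel's reported mitigation status for each
# known speculative-execution issue (Linux only). File names vary by
# kernel version; an entry for this specific issue may or may not exist.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULN_DIR.iterdir()):
    # Each file holds a one-line status, e.g. "Mitigation: ..." or "Vulnerable".
    print(f"{entry.name}: {entry.read_text().strip()}")
```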

Cloud, VMs & Isolation

  • Concern over cross-VM leaks in shared-cloud CPUs.
  • Memory-encryption features (Intel TME-MK, AMD SEV) are noted but characterized as insufficient for this style of side channel, since leakage comes from microarchitectural behavior, not direct DRAM reads.
  • Suggestions: pin VMs or security domains to separate cores/SMT siblings, and design OS/hypervisors to avoid co-locating different privilege levels on one core—though practicality and completeness of this approach are debated.
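
The core-pinning idea is easy to prototype on Linux; a minimal sketch confining one process to a fixed CPU set via os.sched_setaffinity (core IDs are illustrative, and SMT siblings must stay within one domain for the isolation to mean anything):

```python
# Minimal sketch: confine the current process (and its children) to a
# fixed set of logical CPUs, one building block of "one security domain
# per physical core" (Linux only). Core IDs below are illustrative; on
# SMT systems, consult
# /sys/devices/system/cpu/cpu*/topology/thread_siblings_list so both
# hyperthread siblings land in the same domain.
import os

DOMAIN_CPUS = {0, 1}  # illustrative: logical CPUs reserved for this domain

os.sched_setaffinity(0, DOMAIN_CPUS)  # pid 0 = the current process
print("now restricted to CPUs:", os.sched_getaffinity(0))
```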

Web Exploitability & Risk Appetite

  • Strong disagreement over real-world risk:
    • Some insist these attacks require “insane prep” and are unlikely to be exploited from JavaScript, and say they disable mitigations for performance on trusted boxes.
    • Others point out Spectre was demonstrated from JS, that JITted code is effectively low-level, and that browser timer fuzzing only slows, not eliminates, such attacks.
    • Several warn that deciding a system is “safe” to disable mitigations is tricky: CI servers, databases with stored procedures, and dependencies all amount to running semi-arbitrary code.

Architecture & Long-Term Outlook

  • Ideas raised: capability-style pointers (CHERI), richer pointer metadata, or resurrecting segmentation-like mechanisms; pushback that hardware, not language choice, is the core constraint.
  • Many expect speculative-execution side channels to keep appearing; mitigations are seen as an ongoing performance–security trade that’s unlikely to end while we run untrusted, Turing-complete web code on high-performance CPUs.

OpenAI's Stargate project struggling to get off the ground, due to tariffs

Tariffs, Taxation, and Constitutional Power

  • Large part of the thread argues tariffs are effectively taxes, and the US system intended Congress—not the president—to control taxation.
  • Many see current tariffs as “rule by executive fiat,” enabled by emergency powers stretched beyond their intent (trade deficit defined as “emergency”).
  • Some argue voters “got what they asked for” because Trump clearly campaigned on tariffs; others counter that:
    • Voters didn’t understand tariffs as taxes and were explicitly misled that “other countries” would pay.
    • Delegating taxation-by-tariff to the executive violates the constitutional design, even if it’s technically legal.
    • Congress is abdicating its duty out of partisanship and fear of retaliation from the president’s base.
  • Debate over whether this constitutes “taxation without representation”: one side says Trump is the chosen representative; the other insists Congress is the true tax representative.

Stargate Scale, Funding, and Tariffs

  • Multiple comments correct the article: Stargate is a $500 billion initiative, not $500 million; several express disbelief such a huge number is being treated casually.
  • Some see tariffs as only a 5–15% cost increase for hardware, not enough alone to derail a half‑trillion project; they suspect the tariff angle is overplayed for clicks.
  • Others highlight investor fears of overcapacity in data centers and question whether OpenAI, which “no longer has a clear edge,” can justify that scale.
  • SoftBank’s lack of concrete financing months after promising an “immediate” $100B is taken as a sign the hype outran reality.

Compute-Maximalism vs. Research and Efficiency

  • Several criticize the “bigger data center = better AI” mindset as a characteristically American brute‑force approach, especially in light of more efficient models like DeepSeek.
  • Others argue compute and foundational research are complementary: constraints can drive breakthroughs, but hitting a compute ceiling can also block progress.
  • Concern that today’s cheap or free LLM access artificially inflates demand; once prices rise to reflect real costs, much of the capacity being planned could sit idle, echoing the fiber overbuild of the dot‑com era.

Global Realignment and Non-US Opportunities

  • Some suggest the rest of the world should exploit US self‑inflicted instability by building their own AI and data‑center industries, viewing the US as an unreliable partner.
  • EU commenters note:
    • There are already shifts to EU providers driven by data rules.
    • The main constraint isn’t lack of capital but lower risk appetite and heavy bureaucracy.
  • Others predict Trump‑era damage to US institutions (courts, trade reliability) will last for decades, regardless of election outcomes.

Historical Analogies and Voter Responsibility

  • Extensive side-debate compares the situation to the American Revolution and the Boston Tea Party:
    • Nuanced recounting that colonial resistance was driven heavily by economic elites and smuggling interests, not just noble “taxation without representation.”
  • Long argument over responsibility of non‑voters and third‑party voters in a binary system; some insist “not voting is effectively a vote for whoever wins,” others reject that framing as unfair to the marginalized.

Media, Hype, and Source Skepticism

  • Several criticize the TechCrunch piece as a thin rehash of Bloomberg, with sensational framing (“tariffs could”) and little evidence that tariffs are the primary obstacle.
  • Some are annoyed by heavy reliance on anonymous “people familiar with the matter,” arguing this allows any narrative to be laundered as reporting.
  • A few also view the coverage as riding two popular sentiment waves—anti‑AI and anti‑Trump—to drive engagement rather than inform.

PDF to Text, a challenging problem

Why PDF → Text Is Hard

  • PDFs are primarily an object graph of drawing instructions (glyphs at coordinates), not a text/markup format. Logical structure (paragraphs, headings, tables) is often absent or optional “extra” metadata.
  • The same visual document can be encoded many different ways, depending on the authoring tool (graphics suite vs word processor, etc.).
  • Tables, headers/footers, multi-column layouts, nested boxes, and arbitrary positioning are often just loose collections of text and lines; they only look structured when rendered.
  • Fonts may map characters to arbitrary glyphs; some PDFs have different internal text than what’s visibly rendered, or white-on-white text.

Traditional Tools and Heuristics

  • Poppler’s pdftotext / pdftohtml, pdfminer.six, and PDFium-based tools are reported as fast and “serviceable” but differ in paragraph breaks and layout fidelity.
  • Some users convert to HTML and reconstruct structure using element coordinates (e.g., x-positions for columns).
  • Others rely on geometric clustering: treat everything as geometry, use spacing to infer word/paragraph breaks and reading order.
  • Many workflows resort to OCR and segmentation instead of relying on internal PDF text, especially for complex or scanned documents.
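
A minimal sketch of the geometry-first approach described above, using pdfminer.six: collect text boxes with their coordinates, then sort into a naive reading order (top-to-bottom, then left-to-right). Real pipelines cluster x-positions to detect columns; this only shows the raw material those heuristics work from.

```python
# Minimal sketch: extract text boxes with coordinates via pdfminer.six
# and sort them into a naive reading order. Real pipelines cluster
# x-positions to detect columns; this just exposes the geometry that
# such heuristics operate on.
from pdfminer.high_level import extract_pages
from pdfminer.layout import LTTextContainer

def rough_reading_order(path: str, page_no: int = 0) -> str:
    pages = list(extract_pages(path))
    boxes = [
        (el.x0, el.y1, el.get_text())
        for el in pages[page_no]
        if isinstance(el, LTTextContainer)
    ]
    # PDF y-coordinates grow upward, so sort by descending top edge,
    # then by left edge.
    boxes.sort(key=lambda b: (-b[1], b[0]))
    return "".join(text for _, _, text in boxes)

print(rough_reading_order("example.pdf"))
```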

ML, Vision Models, and LLMs

  • ML-based tools (e.g., DocLayNet + YOLO, Docling/SmolDocling, Mistral OCR, specialized pipelines) segment pages into text, tables, images, formulas, then run OCR. This yields strong results but is compute-heavy.
  • Vision LLMs (Gemini, Claude, OpenAI, etc.) can read PDFs as images and often perform impressively on simple pages, but:
    • Hallucinate, especially on complex tables and nested layouts.
    • Have trouble with long documents and global structure.
    • Are costly at scale (e.g., 1 TB corpus for a search engine), so impractical for some use cases.
  • Some argue old-school ML (non-LLM) on good labeled data might compete well with hand-written heuristics.

Scale, Use Cases, and Reliability

  • For massive corpora (millions of PDFs), CPU-only, heuristic-heavy or classical-ML pipelines are favored over GPU vision models.
  • Business and legal workflows need structured extraction (tables, fields, dates) and high reliability; VLMs are seen as too error-prone today.
  • Accessibility adds another dimension: must recover semantics (tables, math, headings) for arbitrary PDFs without sending data to the cloud.

Ecosystem, Tools, and Alternatives

  • Many tools are mentioned: pdf.js (rendering + text extraction), pdf-table-extractor, pdfplumber, Cloudflare’s ai.toMarkdown(), ocrmypdf, Azure Document Intelligence, docTR, pdftotext wrappers, Marker/AlcheMark, custom libraries like “natural-pdf”.
  • There’s demand for “PDF dev tools” akin to browser inspectors: live mapping between content streams (BT/TJ ops) and rendered regions.
  • Suggestions like embedding the original editable source in PDFs or enforcing Tagged PDF could help, but depend on author incentives and legacy content.
  • Several comments defend PDF as an excellent “digital paper” format; the core issue is using it as a machine-readable data container, which it was never designed to be.

Google is building its own DeX: First look at Android's Desktop Mode

ChromeOS, Android, and Fuchsia Direction

  • Several comments see this as “ChromeOS 2.0”: evidence of Google folding ChromeOS into Android rather than the reverse.
  • Some argue ChromeOS feels faster and more efficient than Android on the same low‑end hardware, especially when the Android VM is disabled.
  • A long subthread debates Fuchsia: one insider characterizes “Fuchsia as Android replacement” as effectively dead and reduced to niche products (e.g. smart displays), while others point to high commit volume, starnix (Linux syscall layer), and micro‑Fuchsia VMs as signs it’s still strategically alive.
  • Consensus: near‑term trajectory looks like “ChromeOS into Android” rather than “Android onto Fuchsia.”

Existing Desktop Modes and Prior Attempts

  • Many note this concept is old: Samsung DeX, Motorola Atrix, Windows Phone Continuum, Ubuntu Touch convergence, ChromeOS with Linux+Android, Motorola Ready, laptop shells like NexDock.
  • DeX gets mixed but generally positive reviews: good enough for email, browsing, remote desktop, even light dev; “annoyingly close” to great but held back by quirks (latency, 4K hacks, app behavior, small UX papercuts).
  • Windows Continuum is remembered as smooth but crippled by lack of Win32 apps. Linux on DeX and similar experiments were short‑lived.
  • Some point out Linux‑phone distros (Ubuntu Touch, Librem 5, Plasma Mobile) already offer “phone as full Linux desktop,” but are niche and often missing containers, Docker, or polish.

Use Cases, Benefits, and Limits

  • Enthusiasts like the idea of:
    • Travel setups using a phone + USB‑C monitor or AR glasses + foldable keyboard.
    • Secure single device (e.g. hardened Android) that docks into a full workstation.
    • Thin‑client workflows using VS Code tunnels, code‑server, or remote desktops.
  • Others find the value marginal compared to a cheap laptop: you still need screen, keyboard, pointing device, power, and often a GPU; the only saved component is the SoC.
  • Concerns include: battery drain when driving big displays, thermal limits, noticeable input latency, and Android UI not being well‑adapted to mouse/keyboard.

Linux Integration and “Real Computer” Aspirations

  • A key excitement point is Google’s new Linux Terminal / AVF‑based Linux VM and its hinted integration with desktop mode.
  • People see first‑party Linux containers on Android as a potential “complete game changer” for development and “full‑fat” apps, similar to or better than ChromeOS’s Linux support.
  • Some express a broader desire for phones to be general‑purpose, user‑controlled computers, not consumption‑oriented, locked‑down appliances.

One Device vs Many Devices, and Economics

  • One camp imagines a future where phone compute powers everything: docks, laptop shells, AR glasses, perhaps with optional accelerator boxes. They argue we currently “waste” a lot of silicon across idle devices.
  • The opposing camp notes:
    • Consumers demonstrably like separate form factors (phone, laptop, desktop, tablet) for ergonomic and social reasons.
    • The extra silicon in a laptop/desktop is becoming a small fraction of total device cost compared to screens, batteries, enclosures.
    • Vendors are financially incentivized to sell multiple devices rather than one converged one.
  • There’s also debate over cloud vs local: some prefer cloud‑centric, cached experiences (ChromeOS style); others want phone‑centric local compute plus offline backups.

AR Glasses, Docks, and Input

  • Several comments describe successful setups with DeX + AR glasses (e.g. Xreal) and foldable Bluetooth keyboards, calling it “feels like the future” for light work and travel.
  • Others report earlier AR attempts as too jittery, low‑res or tiring; newer hardware is said to be significantly better but still not perfect for heavy coding.
  • People want the phone screen usable as a trackpad/keyboard in desktop mode; DeX already does this, and commenters hope Google’s mode will too.

Trust, Longevity, and Article Quality

  • Multiple commenters say they’d hesitate to invest in Google’s desktop mode because of Google’s history of killing products; Samsung’s long‑term support for DeX is seen as comparatively reassuring.
  • This ties into a desire for open‑sourcing abandoned projects so communities can carry them forward—countered by arguments that Google has little business incentive to do so.
  • The linked article itself is criticized as thin, repetitive, possibly AI‑generated; a better original source is identified and substituted.

Why I'm resigning from the National Science Foundation

NSF Takeover and Governance Concerns

  • Commenters highlight the article’s core claim: political operatives (DOGE staff) now override NSF’s expert review, blocking already-approved grants and sidelining the National Science Board.
  • This is framed not as “reform” but as a hostile assertion of political control over scientific funding pipelines.

Authoritarian Drift and “Coup” vs. Chaos

  • The Librarian of Congress episode and other moves (agency heads fired, legal norms ignored) are described by many as a coordinated bureaucratic power grab across branches.
  • Some insist it’s a deliberate, well-executed authoritarian project, not a “clusterfuck”; others say it only becomes a full “coup” when election results are openly defied, though several argue that illegally seizing powers is already a coup.
  • Comparisons are made to slow, institutional authoritarianism in other countries, rather than dramatic one-day putsches.

Historical Roots of US Scientific Dominance

  • Multiple threads revisit why the US became a science superpower:
    • Influx of scientists fleeing Nazi purges and war-torn Europe.
    • Massive WWII and Cold War research infrastructure.
    • Unique postwar economic position and geographic insulation from devastation.
  • Some see current cuts and interference as squandering this legacy.

Brain Drain and Global Competition

  • Several researchers report foreign scientists already planning exits from US academia and government labs due to uncertainty and fear.
  • Debate over whether Europe can meaningfully poach talent (given lower salaries and flawed funding systems) vs. China’s potential, tempered by concerns about political risk and detentions.
  • EU initiatives (new funds, “super grants”) and anecdotal evidence from conferences suggest other regions are actively positioning to attract US-trained talent.

Public Funding vs. Profit-Driven Research

  • Large argument over whether it is acceptable—or inevitable—that more scientists “end up in industry.”
  • One side: industry R&D is better funded, can do significant applied work; examples include Bell Labs, pharma, big tech research.
  • Other side: many fields (astronomy, fundamental physics, much of math, basic biology/chemistry) have no clear profit path and would vanish or shrink dramatically under a “must be profitable” rule.
  • Disagreement over opportunity cost: critics question billion‑dollar projects (e.g., future colliders); defenders argue basic research historically generated unpredictable but enormous downstream gains (internet, GPS, vaccines, imaging, etc.).

Universities, Ideology, and DEI

  • Some argue universities became a liberal monoculture enforcing DEI “loyalty oaths,” and that GOP backlash on funding was predictable.
  • Others counter that:
    • “Liberal monoculture” is exaggerated and weaponized to justify political meddling.
    • Nondiscrimination and DEI requirements are tied to existing law, not inherently partisan.
    • Allowing the executive to impose “viewpoint diversity” policing or shut programs is itself a larger threat to academic freedom.

Public Opinion, Media, and Messaging

  • Discussion about whether long-form pieces like the article can shift public opinion in a TikTok/soundbite era.
  • Several note decades of political science showing low information and short attention spans among voters; pithy slogans outcompete nuanced explanations.
  • Some argue the US over‑fetishizes elections and “popular will” at the expense of institutional safeguards designed to buffer passions.

Tech-Right and VC Influence on Policy

  • Strong criticism of the “tech right” and high-profile VCs who, despite benefiting from state‑funded science (NSF, DARPA, CERN), now attack public research institutions.
  • Commenters link this to ideological projects (e.g., eugenics, race science, libertarian techno‑authoritarian fantasies) and “terminal engineer brain” hubris—believing technical success qualifies them to rearchitect government.

Taxpayers, Deficits, and the Value of Science

  • Some question why their taxes should fund US “scientific dominance” instead of reducing deficits or supporting more visible needs.
  • Others respond that:
    • NSF’s budget share is tiny but strongly correlated with long‑run economic growth.
    • Many everyday technologies (internet, GPS, vaccines, digital infrastructure) are direct or indirect products of publicly funded basic research.
  • Disagreement persists over how to weigh long-term, diffuse benefits against immediate fiscal concerns.

Meta: Echo Chambers, Replication, and Fatigue

  • Complaints that dissenting views are heavily downvoted, turning HN into an echo chamber; comparisons to other platforms’ moderation dynamics.
  • Replication crisis is invoked by some to argue public science is already failing; others reply that this argues for better design and more replication, not defunding.
  • Underneath many comments is a sense of exhaustion: repeated cycles of political interference and institutional damage to science, with uncertainty about whether the system can meaningfully recover.

How “The Great Gatsby” took over high school

Varied personal reactions to Gatsby

  • Many recall skimming or Cliffs Notes in school and retaining almost nothing; some found it boring, “unreadable,” or populated with shallow, unlikeable characters.
  • Others describe it as a favorite book, especially on reread: prose called “near perfect,” compact, and unusually beautiful sentence-by-sentence.
  • Several say it did not land at all in high school but resonated deeply after heartbreak, work, class mobility, or time abroad.

Competing interpretations of the novel

  • Common readings: an outsider sacrificing everything to join the “in crowd”; the emptiness of status; critique of the American Dream; class division between new money and entrenched wealth.
  • Clarifications about the Gatsby–Daisy–Tom love triangle (with Nick as narrator), bootlegging/drugstores, bonds, and period-specific racism.
  • Some see a deeper, possibly racial dimension and question whether Gatsby is simply “white,” especially given Tom’s racism.

Is it a good high‑school book?

  • Strong view that many themes (middle‑age malaise, regret, class ennui) are inaccessible to teens with little life experience. This can turn them off reading entirely.
  • Others argue the point is to stretch imagination and critical skills beyond direct relatability; not everything should mirror a teenager’s life.

Teaching methods and student engagement

  • Criticism of approaches that implicitly demand students “love” the book and parrot approved themes, leading to sterile dissection and heavy use of guides/AI.
  • Suggestions: fewer works but deeper context (e.g., Greek myth ➝ epics ➝ Shakespeare), more choice of texts, comparative essays across multiple books.
  • Debate over “student‑centered” pedagogy: some fear it reduces literature to self-mirroring; defenders say resonance with personal experience is what enables imaginative transport.

Canon vs contemporary choices

  • Some advocate modern, relatable books (YA, cyberpunk, even Harry Potter/Game of Thrones) as more engaging entry points.
  • Others defend an agreed canon for common cultural references and historical context, arguing we already lose shared touchstones as classics are dropped.

Rereading with age

  • Multiple commenters urge revisiting high‑school “classics” later; many report rediscovering Gatsby, Hemingway, Melville, etc., as adults, though some still find them mediocre, a reminder that taste and value remain highly individual.

In a high-stress work environment, prioritize relationships

Value of relationships and networking

  • Many agree relationships matter more than raw performance for promotions, collaboration, and future opportunities.
  • In layoffs, relationships rarely save your job, but they can be crucial for finding the next one via references and leads.
  • Being the “helpful person who knows who to ask” is seen as good for both effectiveness and long-term security.

Skepticism about relationship-focus

  • Some prefer to “mind their own business” and avoid superficial relationships; that’s less stressful for them, even if risky.
  • Several note that individual contributors often lack clout: their referrals don’t carry much hiring weight, so “networking” can feel overrated.
  • Others draw a hard line between professional rapport and true relationships, which they reserve for life outside work.

Toxic environments and when to leave

  • Multiple commenters stress distinguishing normal high pressure from abusive or incompetent leadership; you don’t owe loyalty to the latter.
  • Supportive coworkers can make a bad place survivable and even help you leave, but those bonds can also delay necessary exits.
  • Some emphasize legal/HR realities: in many places, references are minimal, so staying solely for a good recommendation may be pointless.

Interviews: negativity, honesty, and “polite fictions”

  • Big debate around “paint your last job positively.”
  • One side: interviews test maturity and discretion; constant griping or blame is a red flag, so candidates should frame issues diplomatically.
  • Other side: this encourages inauthenticity and a culture of routine lying; people resent having to pretend bad jobs were good.
  • Many propose a middle ground: be specific, measured, and focus on mismatched priorities, lessons learned, and what you’re seeking next.

Positivity, culture, and communication norms

  • People criticize both “toxic positivity” (never acknowledging real problems) and “toxic negativity” (constant complaining).
  • There’s broad agreement that what you choose to be negative about is a strong signal; tone and focus matter more than whether you ever criticize.
  • Several point out that norms around white lies, “reading between the lines,” and direct criticism are highly culture-dependent.

Stress, competence, and coworkers

  • One camp says most stress comes from incompetent people offloading their problems, especially in leadership; they advocate minimizing contact.
  • Others push back: lack of vision is often structural (bad management, compartmentalization), and writing people off instead of mentoring can be toxic.
  • There’s discomfort with “rock star” language; many prefer solid, kind team players over divas, and say great engineers help others improve.

Remote work and introverts

  • For remote teams, suggestions include cameras on, daily greetings, and light chat; basic kindness and non-jerk behavior still go far.
  • Some introverts say relationship maintenance itself is stressful; being minimally social but reliably non-difficult has worked fine for them.

Well-being and identity

  • Several highlight that chronic stress causes lasting damage; prioritizing sleep, health, and timely exits can matter more than any job.
  • A recurring theme: don’t reduce yourself to someone who must constantly manage impressions. Staying broadly kind, competent, and self-respecting is framed as both sustainable and valuable.

Launch HN: Miyagi (YC W25) turns YouTube videos into online, interactive courses

Overall Reception and Concept

  • Many commenters find the idea “magical,” intuitive in hindsight, and a natural fit for LLMs (auto‑generating quizzes, tutors, and structure around existing videos).
  • Several educators and edtech practitioners say this solves a real pain point: turning scattered YouTube learning into something structured and interactive.
  • Others are more skeptical, noting users can already feed transcripts into general-purpose LLMs and asking what value Miyagi adds beyond convenience and UI.

Content Quality, Pedagogy, and Trust

  • Big concern: how to ensure generated courses aren’t “teaching nonsense,” especially for learners who can’t detect subtle errors.
  • For “official” courses, creators review and edit content; this is currently treated as the main quality signal.
  • For user-generated courses, there is no systematic human review; future plans include user feedback on questions, learning paths, and possibly more formal evaluation.
  • Edtech professionals raise questions about deeper methodologies (knowledge tracing, skills taxonomies, learning outcomes, retention), which are not yet clearly implemented.
  • Some argue assessments are overemphasized; for self-directed adult learners, quizzes may be more “demo” than true value unless backed by tracking, validation, and credentials.

Experience, Features, and UX Feedback

  • Positive feedback on the general interface and idea of integrated tutor, quizzes, and flashcards, but multiple users report bugs, long generation times, and login issues.
  • Suggestions:
    • Clean up long lecture lists and better sectioning.
    • Integrate bottom tools into a single, more agent-like tutor.
    • More gamification, cohorts/social learning, and real-world artifact-based tasks, not just trivia.
    • Smarter “watch” links that jump to the relevant video segment.
  • Debate over giving the AI tutor a “persona”: some like the idea; others find overly humanized AI off-putting.

Use Cases and Scope

  • Interest in:
    • Language learning (with repetition and grammar focus).
    • Child/elementary content, though the current tutor safety is not trusted for unsupervised young kids.
    • Poker, chess, and other domains that may need custom tooling (solvers, interactive boards).
    • Niche “how-to” content (e.g., home improvement), including the potential use of comments as errata.
  • Currently relies mostly on transcripts; no video-frame understanding yet, though that’s on the roadmap. Supports PDFs, slides, and other file types as inputs.

Copyright, Ethics, and Creator Relations

  • This is the most contentious theme.
  • Miyagi says:
    • Any monetized course includes revenue sharing with creators, with some signed partnerships.
    • Embedded videos are used, and creators can request takedowns.
    • Non-partner content can still be summarized and augmented, with opt-out rather than opt-in.
  • Critics argue:
    • Opt-out with ambiguous revenue terms is ethically weak; should be explicit opt-in and generous sharing since creators supply most of the value.
    • Even if embedding is allowed, generating derivative educational products without consent feels exploitative to many.
    • “Extra views” are not sufficient compensation if a platform monetizes AI-based derivatives.
  • Comparisons are drawn to:
    • YouTube’s own AI summaries, accused of reducing watch time while monetizing summaries without adequate creator compensation.
    • Earlier controversies where companies enrolled creators into monetized systems without clear opt-in, which produced severe backlash.
  • Legal status (derivative work vs. fair use) is left unresolved in the thread; multiple commenters flag this as a major long-term risk and perception issue.

Platform Risk and Future Direction

  • Some discuss dependence on YouTube APIs and subtitles; founders state they already support direct uploads and could expand to other subtitle sources (a minimal transcript-fetch sketch follows this list).
  • Commenters from traditional edtech note this idea has been floated before but ran into pedagogy, licensing, and “no-ads” constraints at established platforms, hinting that a startup can move faster but must still address those same issues over time.
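
As context for the subtitles dependence, a minimal sketch of pulling a video’s transcript with the third-party youtube_transcript_api package (unofficial with respect to YouTube and unaffiliated with Miyagi; the classic get_transcript interface shown here has been superseded by an instance-based API in newer releases of the library):

```python
# Minimal sketch of the raw material such a product starts from: a
# video's subtitles, fetched via the third-party youtube_transcript_api
# package. Classic get_transcript interface; newer releases of the
# library expose an instance-based fetch API instead.
from youtube_transcript_api import YouTubeTranscriptApi

VIDEO_ID = "dQw4w9WgXcQ"  # illustrative video ID

segments = YouTubeTranscriptApi.get_transcript(VIDEO_ID)
transcript = " ".join(seg["text"] for seg in segments)
print(transcript[:500])  # transcript text, ready to feed into an LLM prompt
```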

AI Is Like a Crappy Consultant

Perceived Productivity Gains

  • Many describe a large jump in usefulness: from “idiot intern” to something like a junior/crappy consultant.
  • Strong praise for AI code completion and refactoring: described as “god mode,” faster than static typing + IDE refactors + Vim/Emacs for large, mechanical edits.
  • Helpful for learning unfamiliar APIs and replacing much Stack Overflow–style searching.
  • Some say AI is far better than Google search and expect it to displace traditional search; others prefer curated search engines (e.g., Kagi) for deterministic, non‑hallucinated results.

Code Quality, Architecture, and “Vibe Coding”

  • Common theme: AI is poor at architecture and data-structure design, tends to force new problems into existing, suboptimal patterns and assumes current code and user instructions are correct.
  • “Vibe coding” (letting the AI build systems end‑to‑end) is seen as risky; multiple anecdotes of silent failures (e.g., broken file migration scripts) and chaotic student projects.
  • Several argue good engineering practice hasn’t changed: tests (ideally TDD), specs, and understanding the code are still critical.

Search, Fact-Checking, and Hallucinations

  • Disagreement over whether modern models meaningfully “fact check” or “cite” versus just wrapping search tools.
  • Critics stress that the core network lacks source attribution, so it can’t explain where a given code fragment or fact came from, which underlies hallucinations and licensing issues.
  • Supporters counter that tool-augmented LLMs already behave like fact‑checkers for many tasks.

Ethics, Training Data, and Citation

  • Significant concern that models are built on unconsented training data, can’t track provenance, and hallucinate licenses; contrasted with human expectations around attribution.
  • Others downplay this, focusing more on practical utility than on data origin.

Roles, Metaphors, and Anthropomorphism

  • AI compared variously to: crappy consultant, junior engineer, fast intern, or dangerous tool.
  • Some warn against anthropomorphizing; others argue it does exhibit rudimentary reasoning and produces genuinely novel, useful “knowledge,” disputing the “stochastic parrot” label.
  • Heated subthread over what “knowledge” means and whether LLMs “understand” anything.

Prompting Strategies and Tools

  • Multiple workflow tips: use AI for tests/docs and repetitive edits; constrain tasks tightly; ask for alternative solutions; force it to ask clarifying questions (sketched after this list); reset context often.
  • Tools like Aider + advanced models are reported to outperform basic IDE integrations, though they introduce complexity (diff formats, configuration).
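
A minimal sketch of the “force clarifying questions” tip, using the OpenAI Python SDK (model name and prompt wording are illustrative, not from the thread):

```python
# Minimal sketch of one tip from the thread: instruct the model to ask
# clarifying questions before writing any code. Model name and prompt
# wording are illustrative; requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Before writing any code, ask up to three clarifying questions "
    "about requirements, constraints, and edge cases. Only produce "
    "code after the questions are answered."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Write a file-migration script."},
    ],
)
print(resp.choices[0].message.content)
```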

Socioeconomic and Cultural Notes

  • Some see AI as aligning with executive incentives: cheap, confident answers, fueling interest in replacing devs.
  • Calls for unionization and worries about low-quality, AI‑driven software becoming widespread.

Internet Artifacts

Overall Reaction & Nostalgia

  • Many commenters describe the site as a “pure nostalgia hit,” likening it to flipping through an old photo album.
  • Several recall specific formative memories: early blogging, first MP3s, Netscape vs IE wars, dial‑up, early flash games, and the “helicopter game” as a proto–Flappy Bird.
  • A few mention feeling old when they suddenly understand their parents’ nostalgia for the 50s/60s aesthetic.

Specific Artifacts & Notable Omissions

  • Strong reactions to seeing Netscape Navigator’s “meteors,” Homestar Runner, Line Rider, Million Dollar Homepage, Heaven’s Gate, etc.
  • People list many “missing” artifacts: Newgrounds, AltaVista, Yahoo Answers, LiveJournal, Something Awful, xkcd, Slashdot, digg/fark ecosystems, RuneScape/Ultima Online, toolbars, Clippy, WinRAR, early IM (ICQ), goatse and other shock sites, etc.
  • Debate over whether things like Bad Apple!! or scaruffi.com are important enough historically.
  • Some praise and some dislike the modern version of Ishkur’s Guide to Electronic Music, calling it overengineered compared to the original.

Space Jam Website & Redirect Rabbit Hole

  • Appreciation that the original 1996 Space Jam site is still accessible, and that the 2021 site kept a retro style.
  • Others note the main domain is now partially broken, with conflicting reports about redirect loops and certificate problems.
  • The thread dives into detailed HTTP/HTTPS, 301/302, and Upgrade: h2,h2c behavior, with people trading full curl -vvv traces and debating whether there is truly a loop.
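
For anyone who wants to replay the rabbit hole, a minimal sketch that follows redirects hop by hop with the requests library, so a 301/302 loop shows up explicitly (the live spacejam.com behavior may have changed since the thread):

```python
# Minimal sketch: follow HTTP redirects one hop at a time and print the
# chain, so a 301/302 loop becomes visible instead of being silently
# bounced through. Capped at 10 hops; live behavior may differ from the
# thread's reports.
import requests

def trace_redirects(url: str, max_hops: int = 10) -> None:
    for _ in range(max_hops):
        resp = requests.get(url, allow_redirects=False, timeout=10)
        print(resp.status_code, url)
        location = resp.headers.get("Location")
        if location is None:
            return  # terminal response, no further redirect
        url = requests.compat.urljoin(url, location)
    print("stopped: possible redirect loop")

trace_redirects("http://www.spacejam.com/")
```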

Old vs Modern Internet

  • Strong sentiment that pre‑social, pre‑smartphone web felt more sincere, experimental, and less commercial.
  • Several blame the iPhone/app era and today’s “hypercommercial, grifty” environment for killing that spirit, though some note pockets of genuine self‑expression still exist.
  • Alternatives like the Gemini protocol are suggested as a way to recapture a simpler, text‑centric net.

Geography & Cultural Scope

  • Multiple commenters criticize the site as very US/Western‑centric and wish for parallel “artifact timelines” for German, Russian, Chinese, and other language communities.
  • Examples from Runet (bash.org.ru, Masyanya, “padonki” slang) and early German sites are cited as parallel but largely unknown histories.

Design & Interaction

  • Widespread praise for the polish and interactivity (scrollable “live” artifacts, playful details like Zombo.com audio persisting).
  • Minor UX complaints: some swipes jump multiple items; simulated slow image loading gets annoying over time.

Conspiracy theorists can be deprogrammed

AI “Deprogramming” as Tool and Threat

  • Many see using AI to “deprogram” conspiracy theorists as inherently political: deprogramming is just re-programming to someone else’s agenda.
  • Concern that such systems could easily become government or corporate propaganda tools, worse than a minority of conspiracy believers.
  • Others counter that this “tool” already exists and is being used; better to do it transparently than leave the field to opaque actors.

Can AI Also Radicalize?

  • Multiple comments argue the reverse is not only possible but already happening: social media algorithms bombard people with “alternative facts” for engagement.
  • Several say it’s easier and more profitable to create new believers than to deprogram.
  • Some think the real asymmetry is that extremist and Nazi-style content has many promoters and few effective deprogrammers.

Conspiracies: Noise vs Signal

  • One camp: conspiracy theories mostly add noise, making genuine conspiracies harder to detect (e.g., QAnon obscuring real trafficking).
  • Another: skepticism toward authority is rational; some “theories” later proved true (Watergate, industry cover-ups, etc.), so blanket pathologizing is wrong.
  • Debate over whether elite coordination is mostly “just incentives” or effectively a conspiracy in all but name.

Trust, Authority, and Epistemology

  • Repeated theme: conspiracists don’t actually reject authority; they just relocate it—from institutions to podcasts, influencers, and anonymous accounts.
  • Split between those who see conspiracists as curious but under-informed vs. those who see them as preferring emotionally satisfying narratives over primary sources.
  • Once institutional trust is broken, some say it is nearly impossible to restore; LLMs that follow “official lines” only deepen suspicion.

Study Design and Definition Issues

  • Criticism that the underlying research redefines “conspiracy theory” as “untrue conspiracy,” excluding widely accepted real conspiracies (e.g., lobbying, corporate cover-ups) and thereby baking in ideological bias.
  • Objections to heavy reliance on GPT-4 for screening without human validation.

Social Media, Addiction, and Regulation

  • Some argue treating social media addiction and regulating recommendation systems would be more effective than AI deprogramming.
  • Proposals include legal penalties for knowingly spreading political misinformation, but others warn this quickly becomes censorship.

LLMs as Socratic Partners

  • Several see value in AI’s patience and Socratic questioning to foster self-reflection without human fatigue.
  • Anecdotes show LLMs can generate strong counter-arguments, but may need human “shepherding” and risk being dismissed as partisan or censored.

One hundred and one rules of effective living

Ambiguity and Interpretation of Specific Rules

  • Several rules are criticized as vague or poorly phrased, e.g. “uncanny congruity between thought and experience,” which readers parse both as “experiences shape thoughts” and “attitude shapes experiences,” noting the latter can fuel both misery and happiness.
  • The rule about giving no second chances to disrespectful people is seen by some as protective and empowering (e.g., never returning to a disrespectful business, leaving a toxic boss), and by others as cruel, inflexible, and life‑shrinking.
  • The line about hourly billing is disputed: some argue hourly workers still serve the client and that billing mode is orthogonal to integrity; others note hourly is appropriate when scope is unknown, while flat‑rate work has its own pathologies.
  • The primacy of “do your work” as rule #1 is criticized as hollow and work‑obsessed; others reinterpret “work” broadly as meaningful projects, not just jobs.

Too Many Rules vs. Simpler Principles

  • Multiple commenters argue 101 rules are overengineered and self‑contradictory, preferring compact frameworks: the Daoist “three treasures,” the Golden Rule, short personal lists, or a single existential challenge.
  • Some note direct contradictions (e.g., avoid cruel people vs. endorsement of being feared) and see the list as a grab‑bag of quotes rather than a coherent ethic.
  • Others say the real value is not correctness but provoking reflection: noticing which rules trigger agreement, anger, or “that’s bullshit” becomes self‑knowledge.

Comparisons to Other Moral Programs

  • Benjamin Franklin’s 13 virtues receive attention: some admire their clarity and discipline, others find them joyless or hypocritical given Franklin’s reputed lifestyle.
  • A nuanced view emerges: strict ideals are aspirational, not fully attainable—like training a dog that will always still want to bark.

Limits of Inspirational Literature

  • Skepticism is expressed toward people deeply immersed in “inspirational” texts, which can become mind‑numbing platitudes detached from real trade‑offs.
  • Several note that many lessons on the list can only be truly learned through lived experience, but that a phrase can “hit” at the right psychological moment and catalyze change.

Meta: Speech, Knowledge, and Online Culture

  • The rule to “speak only of what you know well” is seen as incompatible with most internet discourse.
  • Some lament that online spaces often punish beginner questions and reward overconfident ignorance, pushing people toward tools like ChatGPT for nonjudgmental exploration.

Kindness, Cynicism, and Paranoia

  • A minimalist counterproposal is to boil everything down to kindness or “don’t be a jerk.”
  • Some rules in the list read as paranoid (e.g., expecting betrayal), while others explicitly condemn cynicism, creating tension that readers notice and debate.

Multiple security issues in GNU Screen

Setuid-root design and security issues

  • Core concern is Screen’s multi-user mode requiring a setuid-root binary, massively increasing attack surface for complex code.
  • Several commenters were unaware this feature existed; others used it heavily for shared debugging, training, and pair programming.
  • Some argue using setuid without fully understanding the codebase is especially bad; others note Screen’s design dates from the 1980s/1990s when setuid was common.
  • Multi-user mode is praised as powerful but now widely seen as a serious security liability.

Distro configurations and affectedness

  • Impact varies by distribution and build:
    • Some distros ship Screen setuid-root (or setgid to a special group), enabling multi-user mode and related vulnerabilities.
    • Others (Debian, Slackware, some Gentoo/Arch setups) ship Screen without setuid and are unaffected by the worst bugs, though specific CVEs differ by version/build (a quick local check is sketched after this list).
  • Links to an openSUSE matrix clarify which versions and builds are affected.
  • Some distros (e.g., RHEL) have effectively deprecated Screen in favor of tmux, though it may linger in extra repositories.
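
The local check mentioned above is simple; a minimal sketch inspecting whether the installed screen binary carries the setuid or setgid bit:

```python
# Minimal sketch: check whether the installed `screen` binary is
# setuid/setgid, which determines exposure to the multi-user-mode bugs.
# Path resolution via shutil.which; adjust if screen lives elsewhere.
import os
import shutil
import stat

path = shutil.which("screen")
if path is None:
    print("screen not found on PATH")
else:
    mode = os.stat(path).st_mode
    print(path)
    print("setuid:", bool(mode & stat.S_ISUID))
    print("setgid:", bool(mode & stat.S_ISGID))
```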

Alternatives: tmux, zellij, and minimal tools

  • Many commenters long ago switched to tmux, noting:
    • No need for setuid; uses Unix domain sockets.
    • Audited in some OS bases; considered more modern and maintainable.
    • Some still prefer Screen for serial-port support and specific behavior; tmux’s server/session/window/pane model confuses a few users.
  • zellij is highlighted as a modern, more discoverable Rust-based alternative, though some report past latency issues and incomplete keyboard-driven copy/paste.
  • Minimal tools (dtach, abduco+dvtm, mtm) are recommended by those who equate security with very small codebases, though they have terminal capability limitations.

Project health, tech debt, and migration

  • Upstream reportedly requested the security review but appears understaffed and hard to reach; Screen is seen as “bitrotting” yet still widely used.
  • Some argue Screen’s architecture is so dated/complex that it’s effectively doomed; others warn against “rewrite from scratch” thinking.
  • Suggestions include:
    • Distros replacing Screen with tmux or a “tmux-as-screen” compatibility wrapper.
    • Keeping Screen only for niche features like serial console support.

Mailing lists and tooling / usability debates

  • Side discussion on mailing lists: praised for openness, federation, and longevity; criticized for poor web UX, accessibility issues, and difficult search.
  • Several note older projects (including GNU tools) bury issues on lists instead of using modern forge-based issue trackers, hurting discoverability and maintenance.

Git Bug: Distributed, Offline-First Bug Tracker Embedded in Git, with Bridges

Role of a Bug Tracker and Intended Audience

  • Several commenters note that real-world bug trackers are used by support, QA, design, and management, not just engineers.
  • Concern that a Git-embedded tracker may be too engineering-centric and awkward for non-developers, especially if access is via Git or terminal tools.
  • Others argue this is fine: the tool seems aimed more at OSS / engineering-heavy workflows than full corporate project management.

Why Put Issues in Git at All?

  • Proponents see clear benefits:
    • Offline, local-first workflows (file bugs, read history, etc. without network access).
    • All project knowledge travels with the repo (easier backups, mirroring, migration, and forge independence).
    • Potential for cross-forge interoperability if multiple platforms read/write the same Git-based issue data.
  • Some dislike that current forges bolt centralized SQL-backed trackers, CI/CD, and AI tightly onto Git, increasing lock-in.

Technical and Design Challenges

  • Doubts about representing inherently relational issue data in Git objects, and losing SQL-style querying and integrity.
  • Counterpoint: you can index Git-stored data into SQLite locally; ACID isn’t as critical in an offline, async, single-user context.
  • Open questions: schemas and workflows standardization, multi-repo or cross-project issues, avoiding confusing extra refs, and making this usable for non-devs.

Distributed Semantics and Conflict Handling

  • Skeptics worry about merging issue conversations and “weird” ordering when multiple people edit offline.
  • Maintainers explain they use Lamport timestamps and an operation log over a DAG, so merges don’t produce conflicts; events are replayed in logical order.
  • Late-pushed comments can appear earlier in the timeline, but that’s framed as reflecting the true creation order.
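
The thread doesn’t spell out git-bug’s exact wire format, but the Lamport-clock idea itself is compact. A minimal sketch (illustrative types, not the project’s real schema) of how divergent operation logs merge into one deterministic order without conflicts:

```python
# Minimal sketch of Lamport-ordered operation merging: the general
# technique git-bug describes, not its actual schema. Each operation
# carries a logical timestamp; ties break on a stable ID, so every
# replica replays the same deterministic sequence.
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    lamport: int   # logical clock value at creation
    author: str    # stable tiebreaker (real systems use a hash/ID)
    action: str

def merge(*histories: list) -> list:
    all_ops = {op for h in histories for op in h}  # dedupe the shared prefix
    return sorted(all_ops, key=lambda op: (op.lamport, op.author))

alice = [Op(1, "alice", "open issue"), Op(2, "alice", "comment: repro found")]
bob   = [Op(1, "alice", "open issue"), Op(2, "bob", "add label: bug")]

for op in merge(alice, bob):
    print(op.lamport, op.author, op.action)
```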

Ecosystem, Extensions, and Comparisons

  • Multiple references to Fossil and other Git-based issue tools; this project is seen as part of a broader “everything in the VCS” trend.
  • Git namespaces (e.g., refs/bugs) are praised as a flexible mechanism; maintainers position git-bug as a generic framework for storing entities in Git (issues, project boards, reviews, etc.).
  • Bridges exist for GitHub/GitLab/Jira; long-term hope is direct forge integration, though some doubt big platforms will adopt it.

UX, Documentation, and Adoption Concerns

  • Many find the README and docs fragmented and missing a clear “Why?” and basic “How do I add a bug?” path; screenshots and higher-level overviews are requested.
  • There is enthusiasm for the TUI and web UI, but also calls for better frontends (including editor integration).
  • Some see strong potential as a personal or small-team TODO/bug system; others think large companies will stick with centralized tools like Jira.

What were the MS-DOS programs that the moricons.dll icons were intended for?

Nostalgia for Windows 3.x and moricons.dll

  • Many recall discovering moricons.dll as a kid and treating it as a hidden treasure trove to decorate Program Manager and PIFs.
  • Several remember misinterpreting “moricons” as a nonsense word rather than “more icons.”
  • The icons trigger strong sensory memories: Windows startup sounds, hard-drive and floppy noises, and general early-90s PC vibes.
  • Some note using moricons not for the “intended” apps but as a general-purpose icon set, often just picking anything vaguely related and making sure no two apps shared the same icon.

WordPerfect, DOS Productivity Software, and Icon History

  • WordPerfect evokes intense nostalgia, especially its DOS-era dominance, function-key cheat strips, and “Reveal Codes,” which are compared to LaTeX-like explicit markup and praised for precise formatting (notably in legal work).
  • Discussion contrasts WordPerfect’s DOS excellence with a disastrous transition to Windows (crashes, bloat, interface and macro changes) that pushed users to Word.
  • Other canonical DOS apps mentioned: Lotus 1-2-3, dBase, Harvard Graphics, Flight Simulator, Crosstalk XVI, Q&A, DesignCAD, etc.
  • Clarification that “Access for DOS” was a communications tool, not a database.
  • Several comments answer “why ship third‑party icons?”: Windows 3.1’s setup scanned for existing DOS apps (via APPS.INF) and assigned them appropriate icons so the new GUI would feel polished and familiar.

Learning to Program: QBasic, Turbo Pascal, Borland C++

  • Many share childhood misunderstandings: thinking QBasic/Turbo Pascal were text editors that “ran games,” not compilers; not realizing .BAS files could be edited in any editor; odd confusions about strings, numeric types, or error messages.
  • QBasic, NIBBLES.BAS, and GORILLAS.BAS feature heavily as first programming experiences and bug-fix memories.
  • Turbo Pascal is praised for speed and a great debugger; Borland C++ is remembered as powerful but confusing for beginners.
  • Modern fantasy consoles (PICO-8 vs TIC‑80) are mentioned as ways to recapture that exploratory feeling, with some preferring polished constraints (PICO‑8) and others favoring open-source flexibility (TIC‑80).

PIF Files, Icons, and Technical Tidbits

  • PIF files are recalled as configuration for DOS apps under Windows (memory, options), later treated as special shortcuts whose extensions are hidden and which are effectively executable, contributing to past exploits.
  • Links are shared documenting the PIF format; some lament such lore disappearing without archives.
  • Icon craft is admired: low-res icons seen as surprisingly expressive; some criticize the “icon on lined paper” style as visually busy on modern displays.
  • A CSS snippet using image-rendering: pixelated is shared to view icons crisply (see the sketch after this list), with side discussion about how CRT-era appearance differed from modern pixel-perfect scaling.
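
The snippet shared in the thread is CSS; the same nearest-neighbour idea can be sketched in Python with Pillow (the Pillow approach and the file name are our illustration, not what the thread shared):

```python
from PIL import Image  # pip install Pillow

# Nearest-neighbour resampling keeps every source pixel a crisp square,
# the same idea as the CSS image-rendering: pixelated snippet.
# "icon.png" is a placeholder for a 32x32 icon extracted from moricons.dll.
icon = Image.open("icon.png")
big = icon.resize((icon.width * 8, icon.height * 8),
                  resample=Image.Resampling.NEAREST)
big.save("icon_8x.png")
```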

Broader Reflections on Computing Then vs Now

  • Several contrast the early PC era’s sense of user control and discovery with today’s locked-down, ad-driven, or subscription-based ecosystems.
  • Comments note that past constraints were mostly hardware or skill limits, whereas now they’re often manufacturer- or policy-imposed.
  • Others argue constraints and artificial segmentation existed even then (e.g., mainframes, 486 variants), but agree modern firmware locks and SoC keys raise the stakes.
  • There are calls for stronger consumer protection, clearer distinctions between sold vs. rented devices, and pushback against ownership-eroding practices.

Changes since congestion pricing started in New York

Perceived Early Outcomes in NYC

  • Many commenters see congestion pricing as “working”: fewer cars, faster traffic, better bus reliability, less noise, especially in Manhattan’s core.
  • Graphs showing cars averaging ~9 mph are viewed as stark evidence that pre‑policy car infrastructure was highly inefficient.
  • Some on-the-ground reports describe more foot traffic in lower Manhattan and easier bus travel from New Jersey/Queens; others anecdotally report eerie midweek emptiness and more vacant storefronts.
  • Business impacts are contested: article stats show visitors and restaurant bookings up, but several people are skeptical, noting business self‑diagnoses are unreliable and early data is thin.

Measurement, Causality, and “A/B Testing”

  • Several participants want policy changes bundled with explicit evaluation plans and success metrics (e.g., commute times, bus speeds, business revenue).
  • Others stress that true randomized A/B tests at city scale are impossible; before/after comparisons are confounded by macro trends (economy, seasons, other policies); a difference-in-differences sketch follows this list.
  • Some call these “fundamentally unanswerable questions” where we can only collect suggestive evidence rather than clean causality.
  • There’s criticism that the NYT piece feels like advocacy: positives are quantified, while sections on pollution, low‑income commuters, and public opinion are “too soon to tell.”
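
One middle ground between a true A/B test and a naive before/after comparison is a difference-in-differences estimate against a control area outside the zone; a toy sketch with invented numbers (not figures from the article):

```python
# Compare the change inside the pricing zone to the change in a control
# area, netting out citywide trends. All numbers are made up.
avg_speed = {                      # average bus speed, mph
    ("zone",    "before"): 6.0,
    ("zone",    "after"):  8.1,
    ("control", "before"): 9.0,
    ("control", "after"):  9.3,
}

zone_change    = avg_speed[("zone", "after")]    - avg_speed[("zone", "before")]     # +2.1
control_change = avg_speed[("control", "after")] - avg_speed[("control", "before")]  # +0.3

# The control's change proxies for seasonal/macro trends; the remainder is
# the (still assumption-laden) estimated policy effect.
print(f"estimated effect: {zone_change - control_change:+.1f} mph")
```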

Equity, Fairness, and Who Pays

  • Critics argue congestion fees are inherently regressive, turning Manhattan into a “playground for the rich” and burdening low‑income drivers who lack viable transit.
  • Others counter that:
    • Most commuters into lower Manhattan already use transit.
    • Car commuters into the zone skew higher income.
    • A very small share of working poor drive into the zone and some receive waivers or discounts.
  • Debate over fairness: some see a flat fee as fair (same price for same service); others see any flat access charge as exclusionary when it changes behavior mainly for poorer users.
  • Alternative ideas (lottery/plate bans, income-based pricing) are discussed but criticized as harder to tune, more intrusive, and more disruptive to people with fixed-time obligations.

Cars vs. Transit, Bikes, and Urban Form

  • Strong anti‑car contingent: cars dominate space, generate noise, pollution, injuries, and sprawl; they argue many “needs” for cars are artifacts of car‑centric planning.
  • Defenders emphasize convenience, carrying capacity, weather protection, perceived personal safety, and US low density; they resist policies that intentionally raise “at‑use” costs.
  • Congestion pricing is framed by some as a bootstrapping tool: shifting marginal trips to transit, justifying better service and eventually supporting denser, more walkable neighborhoods.
  • Bikes and e‑bikes feature heavily:
    • Advocates highlight huge capacity and space gains, plus evidence that bike and foot traffic spend more locally.
    • Others describe “lawless” delivery e‑bikes, serious crashes, and call for registration and camera enforcement; replies stress cars remain orders of magnitude more dangerous.

Families, Accessibility, and Quality of Life

  • Some parents describe car‑centric suburbs as “hell” with kids and see dense, transit‑rich neighborhoods (Manhattan, Brooklyn, European cities) as ideal.
  • Others highlight the practical difficulties: double strollers on stairs, limited elevators, crowded trains, expensive family housing, and the need for cars in outer boroughs or US regions without good transit.
  • There’s broad agreement that improved transit accessibility (elevators, better buses) is a necessary complement to congestion pricing.

Politics, Slippery Slopes, and Scaling

  • NYC is seen as a special case: uniquely dense, pre‑car transit skeleton, and massive existing demand; many doubt simple transferability to car‑dependent US metros.
  • Some fear a “slippery slope” from pricing to broader driving restrictions or de facto bans on internal combustion vehicles; others call this normal policy diffusion (successful ideas spreading) rather than escalation.
  • The governor’s earlier delay and fee reduction are criticized as short‑term political maneuvering that risked underfunding the transit improvements pricing is meant to support.

The world could run on older hardware if software optimization was a priority

Economic incentives vs. optimization

  • Many comments frame this as a resource-allocation problem: developer time vs. hardware.
  • Hardware, cloud compute, and storage are seen as “dirt cheap” compared to engineers; it’s usually rational to ship features on slower code rather than spend months optimizing.
  • Several argue this creates negative externalities: e‑waste, higher energy use, constant forced upgrades, and time wasted by end‑users staring at sluggish apps. Others push back that this is just the tradeoff users and buyers implicitly choose.
  • Some connect this to “enshittification” and “market for lemons”: buyers can’t easily judge software quality/performance up front, so vendors optimize for features and sales decks, not efficiency.

Bloat, UX, and real hardware

  • A long thread of examples (Slack, Teams, VS Code, Outlook, web apps, Windows 11 UI, Electron generally) describes multi‑second startup times and janky interactions even on high‑end machines.
  • Developers often build and test on fast Macs with fiber; typical users run mid‑range Windows laptops or old phones on slow networks, where the same apps feel painful.
  • Some note that older native apps (Win9x/XP era, early Photoshop/Office, simple IDEs) felt instant on far weaker hardware; “modern” equivalents often feel slower on 100–1000× more capable machines.

Where performance went: abstractions, features, safety

  • Several claim most of the 1000× raw CPU gain since ~1980 has been spent on layers of abstraction (high‑level languages, VMs, ORMs, web stacks, Electron) and richer UIs, not on bounds checks or security.
  • Others counter that abstractions bought enormous developer productivity and software abundance; the marginal user often values “software exists at all” more than 2×–5× speed.
  • Heated subthreads debate whether writing “fast” C++/Rust is actually that costly, or whether huge speedups (10–1000×) are often low‑hanging fruit lost to ignorance, deadlines, and cargo‑cult patterns (N+1 queries, over‑network calls, JSON‑everywhere); a minimal N+1 sketch follows this list.
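
The N+1 query pattern named above is a concrete example of that low‑hanging fruit; a minimal sqlite3 sketch with hypothetical tables:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 9.5), (2, 1, 3.0), (3, 2, 7.25);
""")

# N+1 pattern: one query for the users, then one more query per user.
# Harmless for 2 rows; a disaster for 100k rows over a network hop.
for user_id, name in db.execute("SELECT id, name FROM users"):
    total = db.execute("SELECT SUM(total) FROM orders WHERE user_id = ?",
                       (user_id,)).fetchone()[0]
    print(name, total)

# Same result in a single round trip via a join plus aggregate.
for name, total in db.execute("""
        SELECT u.name, SUM(o.total)
        FROM users u JOIN orders o ON o.user_id = u.id
        GROUP BY u.id
        """):
    print(name, total)
```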

Servers vs clients; I/O, energy, and scale

  • Platform/infra engineers note that in large fleets, most CPU sits idle; the true bottlenecks are I/O (disks, networks, databases), contention, and bad schema/query design.
  • Even small inefficiencies in widely deployed code or libraries multiply to large global energy and hardware costs, but those costs are diffuse and rarely drive product decisions.
  • Others point out that in data centers newer hardware is upgraded mainly for power density and efficiency, not raw speed—older gear really can be too wasteful to keep.

Old hardware already in use

  • Several remind that much of the world already runs on “old” or low‑end chips: embedded controllers, industrial systems, cars, ATMs, power plants, even space hardware with decades‑old rad‑hard CPUs.
  • At home, many report perfectly usable experiences on 10–15‑year‑old desktops and laptops, provided they avoid heavy modern web apps or switch to lighter OSes and software.

Performance vs. safety and correctness

  • A substantial side discussion: if dynamic bounds checking and memory‑safe languages cost ~5% performance, would that be a good trade for drastically fewer vulnerabilities and easier debugging?
  • Some argue we collectively chose “1000× speed with same bugs” over “950× speed with far fewer memory issues”, and that this was a mistake; others reply that even safety competes with features, time‑to‑market, and other forms of value.

Cultural and tooling questions

  • Older developers lament a lost “craft” of efficiency, replaced by “get it working” on fast machines and frameworks.
  • Others note that where users or money demand speed (HFT, exchange engines, game engines, core infra, some big web properties), optimization still happens aggressively.
  • There’s recurring speculation that future AI tooling might eventually help auto‑optimize bloated codebases—but also concern that current AI waves mostly amplify compute waste rather than reduce it.

Working on complex systems: What I learned working at Google

Scale vs. Complexity

  • Several commenters stress that most projects are not at Google scale but can still be genuinely complex (e.g., game engines, logistics, enterprise integrations).
  • Others argue “complex” is orthogonal to “large”: small systems can be conceptually hard; large systems can do relatively simple things.
  • Some push back on developers copying FAANG-style architectures for tiny workloads (“cargo culting”), noting Google’s architecture is a response to scale, not a recipe for success.

Complex vs. Complicated / Types of Complexity

  • Multiple threads debate the article’s complex/complicated distinction and note overlaps with established notions: essential vs. incidental complexity, domain vs. accidental/environmental complexity.
  • Critics say the article doesn’t clearly separate:
    • complexity you choose (architecture, tools),
    • complexity imposed by the domain (logistics, manufacturing, regulations, human behavior).
  • Examples like electronic proof-of-delivery in logistics highlight socio-technical, behavioral, and infrastructural constraints as true “complexity,” not just big codebases.

Life Inside Google: Organizational Complexity

  • Commenters describe “collaboration headwind”: many approvals, domain owners, and process layers turning trivial changes (e.g., button text) into months of work.
  • A common claim is that much effort goes into fighting incidental organizational complexity rather than pure technical difficulty.
  • One perspective defends stricter process and testing as rational in a large org with high downside risk; others see it as demoralizing and leading to workarounds instead of improvements.

Testing, Code Ownership, and Friction

  • There is an extended back-and-forth on code review expectations:
    • Some reviewers insist on adding or backfilling tests before accepting even obvious fixes.
    • Contributors find it unreasonable when asked to introduce full test suites into untested code just to land a small change.
    • Others argue this is exactly what professional engineering should require; untested changes are seen as liabilities.

Incentives and Peer Bonuses

  • Peer bonus systems are described as “tips” funded by the company, intended to reward extra help.
  • Some see them as positive and motivating (especially when paid in cash); others view them as gamified, low-value recognition that substitutes for real raises or PTO.

Google, Games, and Product Judgment

  • Discussion notes Google’s failed or shuttered gaming efforts (e.g., Stadia) and past work via Niantic.
  • Several argue that Google, and mobile platform owners more broadly, struggle to understand the games industry and tend to push complexity onto developers.

Meta and Miscellaneous

  • Multiple people point out the irony of a buggy, reappearing cookie banner on an article about complex systems.
  • Others reference systems thinking (Meadows, Cynefin, complex systems literature) and appreciate the topic but criticize redefining established terms or glossing over long-standing research.