Hacker News, Distilled

AI powered summaries for selected HN discussions.

Brain Hyperconnectivity in Children with Autism and Its Links to Social Deficits (2013)

Evolution, Fitness, and “Bigger Brains”

  • Some speculate autism‑linked hyperconnectivity could be an “evolutionary step” toward more powerful brains, noting high representation of mild autism in tech hubs and associated wealth.
  • Others push back: evolution only favors traits that increase reproduction in current environments; higher energy cost of “bigger, more connected” brains is a liability if societies can’t support them.
  • Several commenters argue autism often reduces reproductive success and social functioning, questioning any fitness advantage.
  • Others note female carriers, under‑diagnosis in women, and gene spread via a few “successful” carriers complicate simplistic reproduction arguments.

“More Connectivity” ≠ “Better Brain”

  • Multiple analogies (cancer, computers running all programs at once) emphasize that more cells or more connections are not automatically beneficial.
  • Developmental neuroscience examples (visual cortex pruning, reduced connections with maturation) are cited to argue that optimal, not maximal, connectivity matters.
  • Over‑excitation and poor excitation/inhibition balance are framed as reducing efficiency and flexibility, potentially explaining rigidity and sensory overload in ASD.

Methodological Skepticism and Conflicting Findings

  • The 2013 study is criticized for vague, “sexy” methods write‑up, thin detail on preprocessing and motion/noise correction, and multi‑site issues that modern harmonization techniques try to address.
  • Some suggest any decade‑old “brain–behavior” fMRI correlation study should be treated cautiously.
  • A newer study reporting lower synaptic density in autistic adults is raised; commenters suggest:
    • Different developmental stages (hyperconnectivity in children, later over‑pruning in adults).
    • Distinct etiologies leading to similar behavioral syndromes.
    • Extreme heterogeneity of “ASD” as a diagnostic catch‑all, making replication difficult.

Autism, Communication, and Social Friction

  • Several autistic commenters describe chronic confusion around implicit social rules (e.g., job‑interview questions), needing to consciously learn situation‑specific “scripts.”
  • The “double empathy problem” is referenced: miscommunication runs both ways, not solely as autistic deficit.
  • Others counter that, from the majority’s perspective, there is a functional deficit in typical social contexts, even without tissue “damage.”
  • Some argue much suffering comes from hostile or inflexible societies rather than intrinsic brain “wrongness,” while others describe intense stigma, slurs, and dehumanization.

Tech, Drugs, and Speculative Mechanisms

  • AI is already used informally as a “social coprocessor” to rephrase messages; some imagine autism + AI as a powerful combination.
  • Psychedelic‑induced hyperconnectivity and immunological pathways (e.g., IL‑17, Th17, thermoregulation) are mentioned as intriguing but very speculative links.
  • Lay hypotheses tie hyperconnectivity, myelination issues, ADHD comorbidity, and sensory hypersensitivity into unified but unproven models.

Is software abstraction killing civilization? (2021)

Overall view on “abstraction killing civilization”

  • Many commenters reject the headline claim outright, invoking Betteridge's law: sensational question headlines are usually answered “no.”
  • Abstraction is framed by several as a core enabler of progress, analogous to clean water or infrastructure: without it, most modern systems would be impossible to build or maintain.
  • A minority argue that overreliance on high-level layers in one region (especially the US) might hollow out practical capability there, but not globally.

Jonathan Blow and Casey Muratori debate

  • Blow’s “collapse” talk is widely seen as containing some valid critiques (loss of low‑level skills, poor performance) mixed with hyperbole and cherry‑picked examples.
  • His actual software output is debated: some see Braid/The Witness as landmark, carefully crafted works; others see “just puzzles” and question his authority to criticize mainstream software.
  • Muratori’s performance rants (slow debuggers, terminals, editors) resonate strongly; his fast terminal example is cited as proof that much slowness is avoidable.
  • Disagreement centers on root cause: cultural (“we stopped caring about performance”) vs systemic/market (“features sell, bloat wins”) vs technical (“we need better default tools”). Some think his “just learn more” message can’t fix things at scale.

Performance, bloat, and abstraction

  • Widespread frustration with slow mainstream tools: Jira, Slack, VS Code, Notepad, web UIs, mobile apps. Many argue hardware gains are squandered.
  • Others counter that software engineering rigor (testing, CI, fuzzing, safer languages) is far higher than in the 80s/90s; regressions are mostly in responsiveness, not discipline.
  • Distinction is drawn between abstraction and overabstraction: layers that don’t pay their way, leak badly, or are shipped as first drafts then widely copied.

Education and loss of fundamentals

  • Instructors report students who don’t grasp filesystems or basic architecture; advocates push NAND‑to‑Tetris–style, bottom‑up curricula in high school and early university.
  • Others note that CS isn’t supposed to be about computers per se, and most working programmers are “craft” rather than “science” practitioners.
  • There’s concern that hiding fundamentals in consumer systems (mobile OSs, cloud files) erodes baseline literacy, forcing universities to spend time on what used to be assumed.

Web, React, and front‑end stacks

  • Strong criticism of the modern web stack: React, Next, Vercel, server‑side JS are seen as massive, underperforming abstractions for relatively simple tasks.
  • Some younger developers reportedly think browsers “render React,” not HTML, which older commenters see as symptomatic of detached abstractions.
  • There’s pushback: JSX still requires understanding HTML/DOM; React can be used sanely; real villains are business incentives and tooling ecosystems that reward complexity and lock‑in.
  • Ideas like rendering React UIs purely via canvas/WebGL are attacked as accessibility‑hostile and oblivious to decades of interaction-design knowledge.

Files, OS design, and user abstractions

  • Long subthread on why mobile and cloud platforms de‑emphasize visible filesystems: usability for non‑experts, security sandboxing, sync convenience, and support costs.
  • Critics argue that per‑app silos and hidden file extensions damage user power and cross‑app interoperability; defenders say tree‑structured files confuse many users and aren’t the only viable model.
  • Historical notes mention richer file abstractions (record‑oriented files, ISAM, bundles) that lost out to simpler Unix‑style byte streams and today’s object stores.

Low‑level programming and graphics APIs

  • Some nostalgia for assembler and tight RAM constraints; others note low‑level work still thrives in areas like ffmpeg or embedded systems.
  • A concrete counterexample to “everything is overabstracted” is modern graphics: DirectX 12/Vulkan are less abstract and much harder to teach and staff for than older APIs.
  • Commenters worry that raising the difficulty bar there shrinks the pool of people who can “just draw pixels to the screen” in modern engines.

Geopolitics and who maintains the bottom of the stack

  • A thread reframes the issue as regional: the US drifting toward high‑level software while offshoring manufacturing and low‑level expertise, with China and others increasingly capable across the stack.
  • Some predict Western industrial decline but note that civilization overall won’t collapse; another wave of countries will “pick up the pieces.”

Jacksonpollock.org (2003)

Initial Reactions & UI Discoveries

  • Many users initially thought the page was broken or suffering from “HN hug of death” before discovering you must move or click to draw.
  • Interaction details emerged collaboratively:
    • Mouse movement paints; click cycles colors.
    • Number keys set grayscale; letter keys set colors; Shift+letter/number changes background.
    • Spacebar or double-click clears; some find double‑click-to-clear too sensitive.
  • People liked the lack of visible instructions, seeing it as encouraging exploration, though this confused touch and iPad users.
  • Overall sentiment: “fun for a few minutes,” but shallow in longevity; still praised as a joyful, ultra-simple UI with high “fun per byte.”

Pollock, Steadman, and Stylistic Fidelity

  • Several commenters say the results feel more like Ralph Steadman “ink explosions” than Jackson Pollock.
  • Others push back, distinguishing Pollock’s connected splotches and intent from Steadman’s illustrative, whitespace-heavy work.
  • Some treat it as a playful “background generator” rather than an attempt at faithful emulation of either artist.

What Counts as Art and Who Gets Recognized

  • A major thread debates whether Pollock’s work is “low-effort slop” anyone could do vs serious, intentional art.
  • One view: being first with a new idea is crucial; execution is often easy once the concept exists.
  • Counterview: ease or imitability doesn’t negate artistic worth or personal meaning; calling it “not worth doing” is seen as needlessly hostile.
  • Another theme: access to “the art world” (galleries, patrons, circles) strongly shapes who gets taken seriously, though several argue art remains art regardless of recognition.
  • Comparisons to Duchamp, jazz, popular music, and computer art underline that concept, context, and selection—not just skill—drive fame.

Politics, Propaganda, and the CIA

  • Multiple comments reference the CIA’s Cold War promotion of abstract expressionism as a symbol of American freedom.
  • Reactions diverge: some say this knowledge cheapens Pollock’s work; others see funding context as irrelevant to aesthetic value.
  • More conspiratorial claims (e.g., murder, money laundering) are present but challenged as baseless.

Technical Implementation & History

  • Original 2003 version was confirmed via archive.org to be Flash-based; the current site uses Canvas/JavaScript.
  • The piece is linked to Stamen Design’s earlier “Splatter” Flash work; there was past drama over rehosting without credit, later resolved.
  • Commenters reminisce about early-2000s web “gimmicks,” KidPix, chalk simulations, Clojure painting, and custom brush/Bezier experiments.
  • Libraries and tools like Paper.js and related art toys (e.g., Mondrian generator) are shared as follow-ons.

Tips for mathematical handwriting (2007)

Character Distinctions and Symbol Variants

  • Many commenters already follow similar “disambiguation” habits as the article:
    • Cross Z (and sometimes 7, 0) to avoid confusion with 2 and O.
    • Add tails/hooks to u vs v, and alter a so it’s distinct from u/v and 2.
    • Add a deliberate bottom swoop on i so j can be a straight descender.
    • Use dotted or slashed 0 vs plain O; some prefer a dot inside 0 to avoid clash with ∅ or φ.
    • Make l clearly different from 1 and I (loop, hook, or use ℓ); some argue this is essential, others say l is fine if written carefully.
    • Distinguish p from ρ, often by using \varrho, and φ vs ∅ vs 0 via \varphi and stroke orientation.
    • x vs × vs χ: hooks, curved “cc” style, centered ×, or just relying on · / inner-product notation instead of ×.
  • Greek letters: links to handwriting charts; some note common confusions (ξ vs ζ, cursive θ). One person jokingly bans ξ; another writes Ω as an underlined O for ease.
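The variant symbols mentioned above all have standard LaTeX commands; for reference (the \varepsilon pair is my addition, as a common companion to the others):

```latex
% Distinguishing look-alike symbols, per the thread:
\rho \quad \varrho                    % p vs. rho: \varrho adds a distinctive tail
\phi \quad \varphi \quad \emptyset    % phi variants vs. the empty set
\ell                                  % script l, clearly distinct from 1 and I
\times \quad \chi \quad x             % multiplication sign vs. chi vs. the variable x
\epsilon \quad \varepsilon            % a common companion pair (my addition)
```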

Paper, Tools, and Physical Setup

  • Strong opinions on paper:
    • Blank white praised for lack of visual clutter; others find it too open and prefer faint lines or dot grid.
    • Graph/engineering paper liked for alignment, tables, indentation; disliked by some as “busy.”
    • Mixed‑media / art sketchbooks (large, thick, rough paper) considered very pleasant and possibly cognitively helpful.
  • Suggestions for structure without clutter: pencil boards under blank paper, printable dot grids.
  • Digital math writing: consensus that a screen under the pen (iPad, Surface, Samsung tablet with Wacom) beats display‑less tablets; note on active vs passive pens and battery issues.

Teaching, Legibility, and Student Habits

  • Teachers report students producing maximally ambiguous glyphs (1 vs 7, 4 vs 8, T vs F) and even trying to game grading; countermeasures include “round toward wrong” policies or requiring students to circle TRUE/FALSE rather than write a single letter.
  • Several instructors explicitly teach handwriting of symbols and multiple Greek pronunciations; biologists and non‑math majors often struggle with notation reuse.
  • One view: many students are “derailed” very early by phrases like “let x be the unknown” and by a broader social attitude that being bad at math is normal.
  • Some people simply have chronically bad handwriting despite heavy practice, and find these tricks necessary rather than optional.

Notation Style and Greek vs Words

  • One thread argues for programmer‑style descriptive names instead of single‑letter (often Greek) symbols; others push back that:
    • Math notation is essentially dense, handwritten shorthand; longer names would explode expressions (e.g., quotient rule) and slow manipulation.
    • Symbols are only meaningful once defined; once internalized, longer names add little.
    • Historical experience with prose‑only math was far worse for comprehension.
  • Analogy drawn to short Unix command names: cryptic at first, efficient once learned.
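The quotient-rule example makes the density argument concrete; compare the symbolic shorthand with a spelled-out version using programmer-style names (the long-name rendering is my illustration):

```latex
% Symbolic shorthand:
\left(\frac{f}{g}\right)' = \frac{f'g - fg'}{g^{2}}

% The same statement with descriptive names (as prose-style pseudocode):
% derivative(numerator / denominator)
%   = (derivative(numerator) * denominator
%      - numerator * derivative(denominator)) / denominator^2
```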

Historical Scripts and Typography

  • Detailed discussion of Carolingian minuscule and how the original hooked lowercase l differed from I; disagreement over how “easily distinguishable” it really was in manuscripts.
  • Critique of modern serif and sans‑serif fonts (especially some sans) for making 1/I/l and similar glyphs overly similar; praise for typefaces that restore a hooked l and for programmer‑oriented monospace fonts that emphasize these distinctions.

Show HN: FlashSpace – fast, open-source, macOS Spaces replacement

Overall Reception

  • Many commenters are excited to try FlashSpace, especially those frustrated with macOS Spaces lag and animation.
  • The “no SIP, no tiling, no deep OS takeover” design is widely appreciated as a safer, less glitchy approach than typical tiling managers.
  • A subset of users say they won’t adopt it due to workflow mismatches, mainly around per-window handling and need for visible transitions.

FlashSpace Design & Features

  • Uses show/hide of applications rather than the native Spaces API; avoids heavy tiling logic and associated glitches.
  • Does not require disabling SIP.
  • Supports:
    • Unlimited workspaces.
    • Fast switching, with hotkeys to change spaces and move apps between them.
    • JSON-based configuration in ~/.config/flashspace, suitable for dotfile syncing.
    • Recently added grid view of workspaces.
  • Intentionally does not manage window layout; users are expected to pair it with tools like Rectangle Pro or similar.
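To give a feel for the dotfile-friendly setup, here is a sketch of what an app-to-workspace assignment in ~/.config/flashspace might look like. The key names below are illustrative guesses, not FlashSpace's documented schema; consult the project's README for the real format.

```json
{
  "workspaces": [
    { "name": "code", "apps": ["Terminal", "Xcode"], "hotkey": "alt-1" },
    { "name": "chat", "apps": ["Slack", "Messages"], "hotkey": "alt-2" }
  ]
}
```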

Key Limitations & Pain Points

  • Space membership is app-level, not window-level:
    • You cannot keep different windows of the same app in different workspaces.
    • This is a dealbreaker for many who separate work/personal browser windows, per-project editor windows, multiple Screen Sharing hosts, etc.
    • Developer expresses interest in per-window support but notes possible macOS performance limits.
  • Some users dislike that unassigned apps get hidden when switching to a space, seeing this as requiring too much micromanagement.
  • Reports that performance under heavy load (e.g., switching while a game loads) still hits OS-level show/hide lag similar to AeroSpace.

Comparisons to Other Tools

  • Versus AeroSpace: FlashSpace focuses on fast workspaces without tiling; it avoids AeroSpace’s chronic lag but can still hit OS limits. AeroSpace’s “every key is a workspace” model is seen as more convenient by some.
  • Versus yabai/TotalSpaces: avoiding SIP is a major advantage; TotalSpaces is effectively abandoned and broken on newer macOS.
  • Designed to coexist with tools like Amethyst (tiling) since FlashSpace only shows/hides apps rather than enforcing layout.

Wider macOS Window Management Discussion

  • Strong frustration with macOS Spaces animations, fixed delays, and poor multi-window/app behavior (especially browsers).
  • Mixed feelings on animations: some need instant switches; others rely on the animation for mental context switching.
  • Broader complaints about the Dock, Stage Manager, multi-monitor quirks, and reliance on third-party utilities; others defend the existing macOS model and auto-hiding Dock.

The LLMentalist Effect

Critique of the “LLMentalist / con-artist” analogy

  • Many commenters argue the psychic comparison is overstated: LLMs are continuously checked against ground truth (e.g., running code), while psychics cherry-pick safe claims.
  • Others accept a partial analogy: RLHF can reward answers that sound confident and emotionally satisfying even when unsupported, akin to cold reading.
  • Several think the article is internally inconsistent (both “it’s all an illusion” and “it stole all our work”) and light on technical detail; some see it as motivated by dislike of LLMs and note it’s dated (2023, pre-o1/o3).

What is “intelligence”?

  • Thread repeatedly notes that the article never pins this down.
  • Competing definitions:
    • Narrow/functional: ability to solve problems or use information effectively.
    • Richer: requires awareness, introspection, or “knowing with conscience.”
  • People point out you can’t resolve “are LLMs intelligent?” without first agreeing on a definition; otherwise the debate becomes circular.

Similarities and differences to human cognition

  • Some argue a large part of human cognition is statistical pattern-matching (especially grammar, conversational wandering), so LLMs plausibly mirror an aspect of mind.
  • Others emphasize missing facets: introspection, long-term memory, embodiment, consciousness, non-verbal reasoning, and the ability to notice and correct one’s own failures.
  • A minority worry about “religious” attitudes that insist human thinking must be fundamentally non-mechanical.

Capabilities, generalization, and benchmarks

  • Dispute over whether solving Olympiad/ARC-AGI/logic tasks shows real reasoning or just sophisticated pattern reuse/overfitting.
  • Some highlight LLM weaknesses on basic compositional tasks (like reliably counting letters) to argue limits of next-token prediction.
  • A custom NanoGPT sorting experiment is cited to counter “pure parroting,” sparking a technical subthread on what counts as genuine generalization.
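The letter-counting weakness mentioned above is striking precisely because the task is trivial as a procedure; a one-liner does what next-token prediction often fumbles (the example word is my choice):

```python
# Counting letter occurrences: trivial algorithmically, yet a classic
# stumbling block for token-based LLMs, since tokenization hides
# individual characters from the model.
word = "strawberry"
count = word.count("r")
print(count)  # → 3
```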

Illusion, ELIZA effect, and world models

  • Several draw parallels to the ELIZA effect: humans over-attribute understanding to fluent text.
  • Others insist LLMs do build internal world models; the “illusion” is partly RLHF pushing them toward persuasive personas.
  • One framing: the chat persona is a fictional character users bond with, not the underlying system.

Usefulness and economics

  • Some say “do they think?” is secondary to “are they useful and cost-effective?”
  • Others see a gap between hype and current utility; examples like Copilot provoke disagreement over whether aggressive promotion reflects real value or a search for one.

We are destroying software

Overall reception & tone

  • Many see the essay as emotionally resonant but rhetorically over‑the‑top: more a crafted rant than an argument with clear solutions.
  • Some read it as a useful “wake‑up call” about culture and long‑term consequences; others call it nostalgia, cynicism, or “old man yells at cloud.”
  • Several argue the critique is heavily skewed toward Silicon Valley–style web development and doesn’t reflect the entire industry.

Reinventing the wheel, rewrites & backward compatibility

  • Commenters highlight tension between:
    • “Don’t reinvent the wheel” vs encouraging learning by re‑implementing things.
    • Preserving backward compatibility vs avoiding ever‑growing complexity and lock‑in.
  • Some say the industry has over‑indexed on “never rewrite,” others that SemVer culture normalized breaking APIs and forces constant, painful upgrades.
  • Consensus: both rewrites and reinvention are sometimes right; the damage comes from applying any simple rule universally.

Complexity, dependencies, and the modern stack

  • Many agree software systems have become bloated: deep dependency trees, fragile build systems, containers everywhere just to run “simple” services.
  • Web and npm/Electron stacks are frequent examples of accidental complexity that’s easy short‑term but hard to keep running for 10–20 years.
  • Pushback: abstractions are how we scale; lower‑level isn’t automatically “better,” and demand for distributed, mobile, secure, integrated systems really has grown.

Business incentives & engineering culture

  • A recurring theme: it’s less “engineers destroying software” and more business models that reward speed, feature count, and “impact” over robustness, simplicity, and maintainability.
  • Resume‑driven development, job‑hopping, metrics gaming, “move fast and break things,” and under‑valued documentation are all seen as systemic symptoms.
  • Some note that quality is often rationally traded away when products or companies may not exist in a few years.

Longevity, quality, and performance

  • Several argue we write too much short‑lived, hard‑to‑maintain code, and too few people ever see the long‑term consequences of their design decisions.
  • Others counter that not all systems need to last 30 years; many business domains change faster than that.
  • Strong minority concern about loss of performance‑minded craft and the normalization of slow, buggy software as “good enough.”

AI & the future of programming

  • The post’s request to “remove AI from the ledger” is contested. Some say recent life improvements (delivery, streaming, digital services) don’t require today’s complexity; others insist overall welfare has clearly improved.
  • On AI tools: some see them as accelerating the same cultural problems (less understanding, more code churn); others see them as a way to strip away tedium and refocus on design and outcomes.

‘The Licensing Racket’ Review: There's a Board for That

Licensing as Protectionism

  • Many commenters argue that modern licensing often functions as cartel protection: incumbent trade groups lobby for rules that block new entrants and preserve profits (e.g., funeral homes controlling coffin sales, realtors embedded in law).
  • Historical parallels are drawn to medieval guilds and quota systems (e.g., dairy in Canada) as structurally similar restriction-of-trade mechanisms.
  • “Continuing education” requirements are framed as a secondary racket that adds cost with dubious public benefit.

Racial and Class Dimensions

  • Several posts link licensing to a history of race-based economic exclusion (post–Civil War Black Codes, hair-braiding rules disproportionately hurting Black women).
  • There’s debate over whether minimum wage laws originally functioned as a racial exclusion tool; some insist they were motivated by anti-“sweatshop” concerns, others emphasize racist unions and wage equalization.

Law, Private Equity, and AI

  • The legal profession is highlighted as a rare case where licensing has successfully blocked private equity ownership; big consulting firms provide “legal-adjacent” services in the U.S. but cannot put their own lawyers in court.
  • Motivations for capital to penetrate law (profit from rate–wage deltas) are weighed against the value of exclusive access to the legal system.
  • In medicine and law, AI is seen by some as potentially transformative, but others argue politics and entrenched interests will blunt its impact.

Healthcare Licensing and Mid‑Level Providers

  • Thread contains a long, detailed dispute over NPs and PAs vs MD/DOs:
    • Pro‑NP/PA side: their training plus supervised practice can approximate physicians for many tasks; MD training lengths may be excessive; expanding roles could reduce bottlenecks and costs.
    • Skeptical side: massive differences in supervised clinical hours, rigor, and oversight; NP education described as “wild west”; independent NP prescribing linked to overuse of antibiotics, benzos, and TikTok-driven misdiagnoses.
    • Consensus point: physicians’ salaries are only ~8–14% of healthcare spend, so even big cuts to MD costs barely move total system costs.
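The ~8–14% figure implies that even aggressive cuts to physician pay barely move total spending; a quick back-of-envelope check (the 30% cut is a hypothetical for illustration):

```python
# If physician salaries are 8-14% of total healthcare spend, a drastic
# 30% pay cut saves only about 2.4-4.2% of total system cost.
pay_cut = 0.30
for share in (0.08, 0.14):
    savings = share * pay_cut
    print(f"{share:.0%} salary share, 30% cut -> {savings:.1%} of total spend")
```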

Licensing, DIY, and Building Codes

  • Anecdotes show large savings when individuals bypass licensed trades to build houses, but others describe dangerous unpermitted work and praise inspections for preventing failures.
  • Some jurisdictions have relaxed codes for owner‑builders with no apparent disaster; others recount regulators blocking sensible emergency fixes that later increased damage.

Cosmetology and “Everyday” Licenses

  • Multiple examples of extreme requirements (e.g., ~1000 hours classroom for barbers, 100+ hours to re‑activate an experienced stylist).
  • Many see cosmetology licensing as pure protectionism that blocks low‑income workers; opponents worry deregulation would push wages to minimum levels.
  • There is tension between viewing licensing as legitimate consumer protection (hygiene, safety) vs. as wage‑inflating supply restriction.

Economic Ideology and Deregulation

  • Milton Friedman’s anti‑licensing views are invoked; some call them outdated for a global, corporate-dominated economy, others defend their continuing relevance.
  • Debate over whether large modern problems require large regulatory institutions or whether deregulation experiments (e.g., Argentina, “default gone” regulations rhetoric) are promising or reckless.
  • Chesterton’s fence is cited to defend existing rules; others call that a “thought-terminating cliché” when empirical harms (e.g., blocking hair stylists) are visible.

International and Federalism Angles

  • Canada: interprovincial licensing barriers in services are flagged as a major remaining trade friction; there’s hope that external tariff threats will push rationalization.
  • Quebec is cited as an extreme case where ~80% of occupations reportedly require licensing, seen as stifling work opportunities.
  • Commenters note most U.S. licensing is state-level, so shrinking federal agencies wouldn’t touch many problematic licenses.

Costs, Enforcement, and Alternatives

  • Beyond provider pay, commenters highlight cost drivers: regulatory overhead on equipment and drugs, malpractice, admin staff, PE and insurer profit extraction, billing friction.
  • Suggestions include simplifying insurance interactions, real-time price visibility for prescriptions, and better pharmacist substitution rules.
  • A counterexample: food service is relatively easy to enter despite real health risks, suggesting some sectors manage safety without heavy occupational licensing.

Teen on Musk's DOGE team graduated from 'The Com'

DOGE staffing and cybercrime ties

  • The Krebs piece reporting that a key DOGE teen came from “The Com” (SIM‑swapping, swatting, violent/CSAM‑adjacent networks) alarms many: they see him as a classic blackmail/extortion target now sitting near “the keys to the kingdom.”
  • Others note that ex‑hackers often end up in security work, but point out this activity was very recent, not youthful mischief decades ago.
  • There’s broad concern that a loose, crime‑adjacent online culture is now plugged directly into US government systems without normal guardrails.

Security clearances, data access, and LLMs

  • A major thread argues that security clearance and vetting processes exist precisely to weed out exploitable people and constrain damage (logging, least privilege, on‑network devices).
  • Defenders counter that the president has wide constitutional authority over classification and can grant access at will; they say “clearance” is being fetishized.
  • Critics reply that even if technically legal, bypassing standard processes massively increases risk: foreign intelligence can recruit, data can be exfiltrated invisibly, and normal audit trails may not exist.
  • Multiple comments note reporting that DOGE staff have fed large troves of sensitive data into Microsoft‑hosted LLMs, which would erase many existing access‑control and logging protections.

Audit vs. purge: what is DOGE actually doing?

  • Supporters frame DOGE as long‑overdue, aggressive audits of bloated, opaque agencies (especially USAID), arguing that “billions in fraud and waste” dwarf any process niceties.
  • They invoke examples like “shrimp treadmills,” “Iraqi Sesame Street,” transgender studies, and foreign grants as emblematic misuse of taxpayer money.
  • Critics scrutinize those talking points and find many based on old, cherry‑picked, or mischaracterized items (e.g., anti‑opium projects during the Afghanistan war, routine subscriptions to media).
  • They stress that USAID‑type programs are a tiny fraction of federal spending and that abruptly freezing them causes immediate real‑world harm (lost HIV meds, food aid, clinical trials, soft power).

Norms, law, and constitutional stakes

  • Several participants see a deeper crisis: Congress’s power of the purse being bypassed, courts publicly threatened (e.g., calls to impeach a judge who blocked access), and long‑standing norms around oversight and record‑keeping discarded.
  • Others insist this is just a hard‑nosed exercise of executive authority that prior administrations lacked the will to use, and that outrage is politically selective.

Twitter/X as a template

  • Pro‑Musk voices argue his “slash staff, keep product running” playbook at Twitter proves government can be similarly “leaned out.”
  • Opponents counter with X’s valuation collapse, broken features, brand‑safety problems, and say running a social network is not analogous to running essential public services where “rollback” isn’t possible.

LINUX is obsolete (1992)

Long-term predictions, hindsight, and humility

  • Many comments riff on how confident 1990s/2000s predictions (about Linux, iPhone, Flash, mobile web, etc.) look wrong now.
  • People describe cringing at their own old posts but frame that as evidence of growth; stagnation in opinions is seen as the real failure.
  • Several recall dismissing cameras in phones, web apps on phones, or app stores, using these as reminders that actual adoption routinely defies “expert” forecasts.

Archiving and future readers

  • Speculation about someone in 2058 reading today’s HN, perhaps from a parchment or cave archive.
  • Desire to create durable, book-like archives of HN for historians.
  • Side thread on Reddit’s “soft delete”: users’ deletions hurt public archives more than the company, which still retains and can monetize data.

Microkernel vs monolithic: theory vs practice

  • Tanenbaum’s 1992 claim that “microkernels have won” is challenged with historical counterexamples: BSD, SunOS, Windows NT, Mach/NeXTSTEP/macOS, OSF/1, all essentially monolithic or hybrid for performance reasons.
  • Microkernels are noted as common mainly in embedded and special-purpose systems (e.g., Intel ME’s Minix, L4/seL4), not as general-purpose desktop/server bases.
  • Several argue that early microkernel efforts (Mach, NT in strict form) were too slow, driving systems back toward monolithic or hybrid designs, especially for graphics and filesystems.
  • Others counter that shared-memory IPC can match intra-process performance and that the real problem was kernel-mediated IPC in first-generation microkernels.
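The shared-memory point can be made concrete with a toy example: two processes exchange data through a single mapped buffer, so nothing is copied through the kernel per message. This illustrates the general technique only, not any specific microkernel's IPC design.

```python
from multiprocessing import Process, shared_memory

def writer(name):
    # Attach to the existing segment and write into it directly:
    # both processes see the same physical pages, so no per-message
    # copy through the kernel is needed.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=64)
    p = Process(target=writer, args=(shm.name,))
    p.start()
    p.join()
    print(bytes(shm.buf[:5]))  # → b'hello'
    shm.close()
    shm.unlink()
```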

Why Linux actually “won”

  • Consensus that success was driven more by pragmatics than purity:
    • Free as in beer and early GPL licensing enabling redistribution and collaboration.
    • Minix’s non-free status until 2000 and BSD’s lawsuit troubles left Linux as the obvious free Unix-like for hobbyists and students.
    • Rapid hardware support and willingness to work around cheap PC quirks made Linux attractive vs. more rigid BSDs and commercial Unixes.
    • Timing: Linux matured just as Intel PCs exploded in popularity.
  • Some stress the GPL’s “you must share improvements” as a unifying force, avoiding the fragmentation that plagued BSD-style licensed Unixes.

Is Linux “obsolete” today?

  • One camp: academically obsolete but commercially central; like a toaster, it’s not research-fresh but still indispensable.
  • Another: Linux/Unix are fundamentally outdated; microkernels, safer languages, and new designs (e.g., NixOS-like ideas, Rust microkernels) are where OS research “ought” to go.
  • Counterpoint: in practice, containers, microservices, JSON RPC, and serverless already squander any kernel-level performance advantage; for many workloads, “which kernel” barely matters.
  • “Worse is Better” is invoked: simpler, familiar, and incrementally improved monolithic Unix outcompeted more elegant designs.

Judging the original Tanenbaum–Torvalds debate

  • Some see Tanenbaum’s “you’d fail my course” and “debate is over” lines as argument-from-authority that aged poorly and colored perceptions of him.
  • Others maintain his theoretical points still hold: long-lived systems need clean design; Linux survives only because thousands are paid to maintain an increasingly unwieldy codebase.
  • Linus’s responses are remembered as surprisingly civil, especially for an unknown student addressing a famous professor; some suggest this restraint was strategically necessary.

Minix, secure OS research, and funding

  • Discussion of an EU-funded “secure Minix3” effort; blog activity appears to end around 2016, leading some to conclude the project is effectively dead.
  • Frustration that Intel benefited massively from using Minix in its management engine yet seemingly invested nothing back into Minix as a community OS.

Licensing, FOSS culture, and network effects

  • Multiple commenters argue Linux’s GPL license and open development model were decisive: it let companies and individuals collaborate in one codebase rather than quietly forking.
  • Stories of early Linux installs (from magazine CDs or floppies) emphasize the impact of “a full Unix-like OS, with compiler and server stack, for free” compared to proprietary, shareware-filled Windows ecosystems.
  • GNU Hurd is cited as an example of a theoretically appealing microkernel that stalled due to complexity and project management, reinforcing the “working code wins” narrative.

Industry–academia gap

  • The thread repeatedly returns to how academic certainty about “where OSes are going” diverged from reality.
  • Suggested reasons: researchers chase novelty and publishable originality; industry optimizes for profit, risk reduction, and incremental improvement.
  • Some liken decisions in OS and language adoption more to politics and organizational incentives than to pure technical merit.

Starlink in the Falkland Islands – A national emergency situation?

Monopoly ISP and Small-Island Economics

  • Many comments frame the Falklands’ situation as classic “tiny, remote market” economics: with ~3,500 residents, a monopoly licence was likely the only way to make upfront satellite infrastructure investment viable.
  • Some argue the incumbent’s exclusivity is essentially a long-dated bond: the government traded a legal monopoly for capital expenditure and now faces “default” due to technological disruption (Starlink).
  • Others counter that the ISP’s prices and performance (£100+ for 5 Mbps and low caps, frequent outages) are predatory given heavy subsidies and that islanders justifiably resent it.

Starlink: Price, Legality, and ‘Emergency’ Framing

  • Starlink offers far better speeds and pricing, prompting widespread gray-market use and a petition reportedly backed by ~70% of residents.
  • Using Starlink is currently illegal due to the monopoly licence; law changes to allow it have passed but are delayed for months.
  • Some see calling a hypothetical Starlink shutdown a “national emergency” as overblown given it is explicitly illegal today; others say connectivity is now so essential that abrupt loss would be an emergency.

Contracts, Remedies, and Who Pays

  • One side insists the monopoly must be compensated if exclusivity is removed; reneging would damage government credibility and invite lawsuits.
  • Others note the contract reportedly has a 5‑year notice clause, so the government can legally unwind it without “payoffs,” just patience.
  • Compromise proposals:
    • Remove exclusivity but subsidize the incumbent to maintain local autonomy and redundancy.
    • Tax Starlink users or levy a one-time fee to fund transition.
    • Let the UK (which benefits geopolitically) underwrite the cost.

Reliance on Starlink and Musk

  • Some are deeply wary of making the islands’ lifeline dependent on a single private company controlled by an unpredictable owner, citing Ukraine/Starlink controversies.
  • Others respond that Starlink is already transformative for remote regions and that fears, while understandable, haven’t materialized into systematic cutoffs.

Law, Enforcement, and Radio Regulation

  • Several note that unlicensed satellite terminals are criminal in many countries; Falklands’ rules are not unique.
  • Practically, with only thousands of residents, enforcement would likely target individual users (raids/confiscation), making “just ignore the law” unrealistic.

Wider Analogies and Comparisons

  • Commenters compare this to:
    • Saint Helena’s and cruise-ship satellite monopolies.
    • Grid/solar and EV/gas-tax debates: when users bypass centralized infrastructure, cost recovery and monopoly structures break down.

VSCode’s SSH agent is bananas

VSCode Remote SSH Architecture & Capabilities

  • VSCode’s “Remote - SSH” installs a substantial agent/server on the remote host (Node.js, many processes, persistent files).
  • The agent communicates with the local VSCode via port-forwarded WebSockets, and can browse the filesystem, edit files, spawn PTYs, and persist itself.
  • Crucially, Microsoft documents that a compromised remote can use this channel to execute code on the local machine; client and server are in the same trust boundary.
  • Several commenters note this is very different from a traditional SSH session, where the server cannot normally run arbitrary code on the client.

Security Concerns vs. “You Already Trust SSH”

  • Some argue there’s no new risk: if you can SSH and run arbitrary commands, you already have those powers; VSCode just automates it.
  • Others emphasize:
    • The agent is a long-lived, complex, partially closed-source service with a large attack surface.
    • It creates an unexpected reverse trust channel back into the client.
    • It leaves binaries and state behind, which is attractive for persistence and supply-chain attacks.
  • Comparisons are made to “curl | bash”: not inherently a bug, but a powerful pattern that normalizes risky behavior.

Comparison to TRAMP, sshfs, and Other Models

  • Emacs TRAMP and sshfs are held up as “living off the land”: no custom agent, just SSH/scp/sftp, and remote code stays on the server.
  • TRAMP/sshfs approaches are seen as safer but often slower, less featureful, and worse over high latency than VSCode/Zed-style smart remoting.
  • Some prefer explicit sync (rsync + watchexec) or tmux+vim/emacs on the server for predictability and simplicity.

Plugins, LLMs, and Capability Models

  • Unvetted extensions are a major concern: they get broad access locally and remotely, with no granular permission model.
  • VSCode’s “restricted mode” is seen as better than nothing but largely useless for real development because it disables most extensions.
  • There’s a long-standing request for per-extension capabilities (e.g., denying network/file access) with little visible progress.
  • LLM “agent” workflows that run and iterate on code automatically make these powers even more sensitive.

Microsoft Strategy, Openness, and Alternatives

  • Debate over whether VSCode is an “embrace, extend, extinguish” play:
    • Core is MIT-licensed, but key pieces (Remote SSH, Pylance, marketplace, some AI tooling) are proprietary or TOS-encumbered.
    • VSCodium and openvscode-server exist but can’t access all official extensions; some OSS reimplementations of remote exist.
  • Others counter that VSCode set a high bar, drove LSP adoption, and competitors had decades to improve.
  • Alternatives discussed: Eclipse Theia, JetBrains remote, Zed, Helix, Lapce (WASI plugins), Sublime, classic Vim/Emacs, sshfs-based workflows.

Real-World Impact & Admin Experiences

  • In teaching environments and multi-user servers, .vscode-server is blamed for:
    • Dozens of Node processes per user, heavy RAM/CPU usage, large disk footprints.
    • Students not learning basic SSH/CLI because VSCode hides it.
  • Some admins respond by killing VSCode server processes or banning its use; others say extra resources are trivial and the UX gains justify it.

Editor Culture & Pragmatism

  • Vim/Emacs users often see VSCode’s remote model as overcomplicated and fragile; VSCode users describe it as the first truly usable remote IDE.
  • Many commenters accept the risk: they run VSCode Remote only against dev VMs/containers, not production, and rely on isolation (VMs, Docker, bubblewrap) rather than trusting the IDE.

Obscure islands I find interesting

Site design and interaction

  • Commenters praise the site’s mobile experience, smooth “cosmic zoom” animations, and use of maps to tell stories without overcomplicating the UI.
  • Minor UX wishes: a clear “back to all islands” / full zoom-out button, and direct Wikipedia links from each island entry. One user reports a possible Firefox issue with navigation.

Additional obscure islands and curiosities

  • Many suggest additions: Pitcairn (debated as “too famous”), Johnston Atoll, Fakaofo, Deception Island, Socotra, Tokelau, Niʻihau, Middle Percy, Ball’s Pyramid, Cabrera, Ailsa Craig, and the snake-infested Ilha da Queimada Grande.
  • Recursive islands and island-in-lake-in-island structures fascinate people (Canadian “world’s most recursive island,” Vulcan Point in the Philippines, Moose Boulder lore).
  • Others note fun or oddities: Null Island, Kiritimati/“Christmas” pronunciation, Diomede islands vs Samoa/American Samoa time difference, and various personal favorite atolls.

North Sentinel Island and ethics of contact

  • A large subthread challenges the label “uncontacted,” arguing “isolated and vulnerable” is more accurate: shipwrecks, historical British incursions, and late‑20th‑century contact/gift expeditions are cited.
  • Satellite imagery reveals trails and probable fish traps; dense canopy likely hides dwellings.
  • One line of argument stresses they became hostile after lethal disease introductions and traumatic colonial encounters; therefore they deserve legal and practical protection, including being left alone.
  • Counter‑arguments push back against “noble savage” romanticism, speculating their society may include violence and suffering like any other; others criticize that as baseless demonization without evidence.
  • Debate extends to whether every human society “should” pursue innovation and be assimilated, versus respecting sustainable, low‑tech cultures and acknowledging the harms of forced “civilizing” projects (e.g., Stolen Generations, missionary deaths, OLPC‑style tech interventions).

Life on remote islands: health, happiness, and self‑sufficiency

  • Tristan da Cunha draws interest for its tiny population, evacuation after a volcanic eruption, unique English accent, and high incidence of genetic diseases (e.g., asthma, glaucoma) due to endogamy.
  • Devon Island is discussed as a Mars analogue; commenters question whether anyone has lived there truly self‑sufficiently and outline challenges: energy storage through polar night, greenhouses, spare parts.
  • There’s a broader philosophical exchange: some idealize simple, subsistence lives; others note that many “aboriginal” communities eagerly adopt modern goods and that isolation can feel like a “life sentence” for curious kids.

Space elevators and Ascension Island

  • One commenter imagines Ascension Island as a base for a space elevator: near the equator, sparsely populated, and symbolically named.
  • Others dissect feasibility: lack of a major harbor, volcanic risk, negligible benefit of mountains vs total cable length, and preference for equatorial continental sites (e.g., Ecuador, Kenya, Indonesia).
  • There’s side discussion on anchoring offshore, marine corrosion, and the weak economic case for large‑scale space infrastructure and asteroid mining.

Books, media, and long‑term island obsessions

  • Multiple people recommend island books: Atlas of Remote Islands, Fifty Islands I Have Not Visited, Palmyra‑Atoll murder narrative And the Sea Will Tell, early 1900s diaries from Tristan da Cunha, and assorted documentaries.
  • Several recount lifelong fascinations: poring over old atlases, hunting “lost” islands pre‑internet, virtually island‑hopping on Google Earth, or even working on cheap submarine fiber links to remote islands.

Do-nothing scripting: the key to gradual automation (2019)

Concept and perceived benefits

  • Many commenters like the approach as a low-friction way to encode processes, similar to SOPs or checklists but in executable form.
  • It lowers “activation energy”: you get something useful immediately (guided checklist), then gradually replace printed instructions with automation.
  • Helpful for rare, complex, or stressful tasks (deploys, hotfixes, yearly taxes, lab procedures, homelab ops) where remembering exact steps is hard.
  • Encourages clearer thinking: forces you to define ordered steps, inputs/outputs, and potential failure handling.
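The pattern can be sketched in a few lines of Python; the step names, context keys, and helper below are hypothetical illustrations, not taken from the article:

```python
# A "do-nothing" script: every step starts as printed instructions plus
# a pause, and steps are swapped for real automation one at a time.

class ManualStep:
    def __init__(self, instructions):
        self.instructions = instructions

    def run(self, context):
        # Show the operator what to do, then wait for confirmation.
        print(self.instructions.format(**context))
        input("Press Enter when done...")

class AutomatedStep:
    """A step that has since been replaced with real automation."""
    def __init__(self, fn):
        self.fn = fn

    def run(self, context):
        self.fn(context)

def create_backup_dir(context):  # formerly a ManualStep (hypothetical)
    print(f"(automated) mkdir -p /backups/{context['user']}")

STEPS = [
    ManualStep("Email {user} to announce the maintenance window."),
    AutomatedStep(create_backup_dir),
    ManualStep("Verify the backup for {user} and close the ticket."),
]

def run_procedure(steps, context):
    for step in steps:
        step.run(context)

# Usage: run_procedure(STEPS, {"user": "alice"})
```

Even before any step is automated, the script already enforces ordering and carries shared context between steps, which a printed checklist cannot.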

Implementation patterns and tools

  • Common implementations: shell scripts, Makefiles with .done markers, Python scripts (sometimes with simple classes), Ansible playbooks with pause, Invoke, PowerShell, Jupyter notebooks, Streamlit mini-apps.
  • Some prefer richer TUI/CLI libraries or OneNote/Obsidian/Confluence with checklists; others argue those aren’t as directly on the path to automation.
  • Techniques for resumability: touching marker files, using per-run directories, Make targets, or DAG-like orchestrators (e.g., Airflow with “sleep forever” steps).
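The marker-file technique mentioned above is a few lines in any language; a Python sketch (the marker directory and step name are hypothetical):

```python
# Resumable steps via marker files: a completed step leaves a ".done"
# marker, so re-running the script skips straight past finished work --
# the same idea as Make targets or per-run directories.
import os

MARKER_DIR = "/tmp/runbook-markers"   # assumed location, illustrative

def run_once(name, fn):
    os.makedirs(MARKER_DIR, exist_ok=True)
    marker = os.path.join(MARKER_DIR, name + ".done")
    if os.path.exists(marker):
        print(f"skipping {name} (already done)")
        return
    fn()
    open(marker, "w").close()   # "touch" the marker only after success

def step_one():
    print("doing step one")

run_once("step-one", step_one)   # runs the step
run_once("step-one", step_one)   # skipped on the second invocation
```

Because the marker is written only after the step returns, a crash mid-step leaves no marker and the step re-runs on the next attempt.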

Incremental automation & culture

  • Viewed as a way to get to “functional but not optimal” quickly and iteratively improve.
  • Supports culture change: shows “anyone can do this,” encourages contributions to automation, and can be repurposed into chatbots or self-service tools.
  • Parallels drawn with pseudocode and typed “holes” in functional programming: scaffolding that guides design and refactoring.

Concerns and criticisms

  • Some see it as overkill vs. a plain checklist or runbook; worry it becomes a process-centric spaghetti of scripts or tech debt if wired into production services.
  • OO-heavy implementations (one-method classes) are criticized as needless abstraction; others defend them as a light, extensible interface.
  • Practical doubts: people may stop using a script that “does nothing,” skip manual steps, or revert to old habits; interruptions and error handling can be awkward.

Edge cases, security, and GUIs

  • SSH key example drew security criticism: admins shouldn’t handle user private keys; cert-based SSH and better key flows are suggested.
  • GUI-only steps remain a major blocker to full automation; GUI testing tools are seen as brittle.
  • Several note the importance of documenting expected outputs and error conditions, or else users/scripts may blindly continue.

LLMs and evolution

  • Some argue that today LLMs can take a do-nothing script and generate partial automation quickly.
  • Others caution: LLM-produced code may encode unsafe assumptions (e.g., reusing keys, no passphrase), so human review and judgment remain essential.

Cities can cost effectively start their own utilities

Urban–Rural Cross-Subsidies and Wildfire Risk

  • Many argue PG&E’s core problem is California’s liability regime and the cost of serving wildfire-prone and rural zones, which is effectively socialized through very high urban rates.
  • Several commenters support cities carving out their own utilities so dense, low-fire-risk customers stop subsidizing undergrounding and line maintenance in hills and forests.
  • Others warn this “cherry picking” would strand poorer/rural users with unaffordable power and effectively unwind rural electrification; some explicitly say that forcing people out of high-risk fire zones is a feature, not a bug.
  • Alternatives proposed for remote areas: off‑grid solar plus batteries, local microgrids, or community-scale storage at the edge of transmission lines.

Legal, Regulatory, and Political Constraints

  • Debate over whether California could use liability to bankrupt PG&E and “scoop up” assets runs into the federal takings clause; most think outright confiscation is unconstitutional.
  • The CPUC and other state bodies already micromanage PG&E’s capex and executive pay, giving the state de facto control without ownership—and without direct political blame.
  • For cities to municipalize, they must either buy PG&E’s distribution assets (likely at a high, regulator-mediated price) or build a parallel grid; SF’s failed bid and Boulder’s decade-long fight with Xcel are cited as cautionary tales.

Rates, Costs, and Efficiency

  • Commenters note stark contrasts: municipal utilities in Santa Clara, Palo Alto, Alameda, Sacramento, Austin, Chattanooga, etc. charge roughly half or less of PG&E’s ~40–50¢/kWh retail rates.
  • Skeptics point to PG&E’s ~11% profit margin and argue you can’t get 30–50% savings just by removing profit; they suspect the article’s numbers ignore major capex, wildfire liabilities, and hidden costs that cities would still face.
  • Others counter that PG&E’s distribution charges and wildfire/legal overhead are inflated by decades of mismanagement and perverse “cost-plus” regulation that rewards spending, tree trimming, and undergrounding over smarter protection tech.
  • There’s debate over rate design: fixed vs per‑kWh charges, rooftop solar cost shifting, income-based fees, and whether cross-subsidies should be explicit taxes instead of buried in tariffs.

Governance, Privatization, and Ideology

  • Supporters of municipal utilities frame this as a classic natural-monopoly case: public or cooperative ownership avoids shareholder extraction and can reinvest surpluses in undergrounding and reliability.
  • Critics worry city governments will raid utility surpluses, under-maintain infrastructure for short-term political gain, or lack technical competence.
  • Thread repeatedly veers into broader arguments about socialism vs capitalism, neoliberal privatization of public assets, regulatory capture, and whether public or private entities have actually delivered cheaper, more reliable power in practice.

German civil activists win victory in election case against X

Legal Basis and Obligations under EU Law

  • Multiple commenters identify Article 40(12) of the EU Digital Services Act (DSA) as the key legal basis: very large platforms must provide researchers access to publicly available data to study “systemic risks” (including election interference).
  • The German case is framed as clarifying that this DSA right is judicially enforceable nationally.
  • The 6,000€ cost order is seen as routine court costs, not the main sanction; non‑compliance could trigger much larger EU‑level DSA fines (up to 6% global turnover) and/or further German court measures (injunctions, daily penalties).

Enforcement and Practical Consequences

  • Debate on how Germany/EU can enforce against a foreign platform: options mentioned include blocking domains/apps at ISP/DNS level, cutting off payments and ad business, or broader EU asset and service restrictions.
  • Examples cited: prior blocking of The Pirate Bay, illegal gambling sites, and X’s conflict with Brazilian courts.
  • Some doubt X will ever face serious personal consequences at the executive level; others note legal exposure could at least limit travel and operations.

Transparency vs Privacy and Cambridge Analytica Comparisons

  • Critics argue EU privacy policy is inconsistent (“privacy for me, not for thee”) and question forcing a private company to provide data “for free.”
  • Supporters respond that the law covers only publicly visible content and engagement data, not private messages, and that access would be for vetted researchers under strict conditions.
  • Cambridge Analytica is contrasted: that scandal involved private, identifiable data, secret sharing, and data brokerage; here the intent is regulated research transparency. Some push back that true anonymization of social data is hard.

Democratic Rationale and Election Integrity

  • Many see research access as essential to monitor disinformation, bots, and foreign interference in elections, recalling the Mueller/Russia investigations.
  • The DSA is praised for recognizing that at X/Facebook scale, “innocuous” features can create systemic risks, implying extra obligations for very large platforms.
  • One concern raised: hostile or illiberal future governments could weaponize vague notions of “researchers” and “systemic risk” to selectively scrutinize opponents; some argue the data should be broadly accessible to all, not just approved researchers.

X’s Political Direction and Need for Evidence

  • Several commenters claim it is “obvious” that X now amplifies right‑wing discourse, linking this to ownership, monetization changes, and influencer incentives to align with the owner’s views.
  • Others dispute that the directional bias is “clear,” or argue that shifts rightward are more about broader political trends or left‑wing alienation of former supporters.
  • A recurring point: precisely because perceptions diverge, systematic, data‑driven research is needed to characterize how discourse and reach have changed over time.

Sovereignty, Markets, and Normative Disagreements

  • One thread stresses that operating in Germany/EU means obeying local law; a company can exit the market if it dislikes the rules.
  • Some US‑leaning commenters frame this as European bureaucratic overreach or harassment of an American company; others counter that the US itself demands far more from foreign platforms (e.g., TikTok).
  • Moral objections surface against compelling a company to “work for free” for researchers; counter‑arguments emphasize that firms don’t have a right to unregulated operation, especially when their scale can impact democratic processes.

Ketamine for Depression: How It Works (2024) [video]

Personal experiences with ketamine & psychedelics

  • Several posters describe clinical ketamine (IV or esketamine) as unpleasant acutely but “miraculous” or strongly beneficial for depression/anxiety afterward, often after multiple sessions.
  • Others report strong but different effects from psilocybin “hero doses,” including identity dissolution, vivid synesthesia, confronting childhood/parental anxiety, and lasting relief from existential depression plus better emotional self-observation.
  • Some find psychedelics (LSD/shrooms/DMT) spiritually intense but not lasting; others say lasting change depends on dose, set/setting, integration, and sometimes microdosing.
  • People note that ketamine’s subjective “trip” is distinct from classic psychedelics and can feel more like being steered or dissociated than guided.

Risks, addiction, and unsafe combinations

  • Multiple comments warn about ketamine addiction and long-term personality/mind changes, citing real-world examples and historical figures.
  • Strong debate over combining ketamine with MDMA: some describe dangerous “overdoses” (psychologically overwhelming, not necessarily medically critical), others demand citations and emphasize careful dosing.
  • DXM is discussed as “baby ketamine,” with some users praising its mood “afterglow,” others criticizing casual recommendations of very high OTC doses and dangerous combos (e.g., with diphenhydramine).
  • Warnings against self-medicating: suggestions to read a widely-circulated suicide note related to long-term psychedelic use; concerns about MDMA every weekend leading to prolonged depression.

Ketamine, SSRIs, and other treatments

  • Comparisons between ketamine/psychedelics and SSRIs get heated:
    • One side stresses severe, sometimes persistent SSRI side effects and argues psychedelics can be safer when properly supervised.
    • Others push back, arguing risks for both are nontrivial and data is incomplete; emphasize matching patients to treatments with lowest tail-risk.
  • TMS and combined ketamine+TMS are reported as helpful for some, ineffective for others.
  • Treatment-resistant depression and hippocampal atrophy are mentioned; debate over whether neuroplasticity drugs can compensate for structural loss.

Access, legality, and clinical vs DIY use

  • Clinical psilocybin in Oregon is reported around $3,500/session; some see that as absurd vs cheap “street” shrooms, others note you’re paying for specialized therapists, overhead, and legal safety.
  • Some prefer home/retreat “set and setting” with trusted sitters over sterile clinics; others stress finding cautious medical professionals who don’t over-sell treatments.
  • Confusion about drug scheduling appears; commenters clarify ketamine is Schedule III (in the U.S.) and legally prescribed.

Culture & advocacy debates

  • One participant criticizes “psychedelic advocates” for allegedly framing use as a mark of enlightenment, making social pressure likely; others say most scenes are explicitly non-coercive and emphasize consent and safety.
  • There’s broad agreement that psychedelics are powerful tools, not magic cures, and can be very harmful for some people or in the wrong context.

Adjunct & alternative ideas

  • Suggestions for resistant cases include: psychedelic-assisted therapy, MDMA in loving group settings, re-checking diagnoses (e.g., ADHD, autism, medical causes), and even MRI to rule out structural issues.
  • Non-drug approaches raised: cold exposure, engaging work/social obligation, gut microbiome support via fermented foods, and standard psychotherapy.

Mechanism & future directions

  • A paper is cited suggesting many antidepressants (including ketamine and classic psychedelics) act via TrkB, the BDNF receptor.
  • This raises hope for future drugs that boost neuroplasticity like psychedelics but without hallucinations.

The origins of 60-Hz as a power frequency (1997)

Historical origins and legacy frequencies

  • Summary of paper as discussed: early systems experimented with ~130 Hz (resonance, motor issues) and ~25–30 Hz (severe light flicker).
  • 50–60 Hz emerged as a compromise between motor performance and lighting.
  • Westinghouse pushed 60 Hz (better for flicker) and won the US market over GE’s 50 Hz, which aligned with its European affiliate that had moved from 40 to 50 Hz.
  • Niagara Falls and rail systems used 25 Hz; parts of Amtrak and Ontario industry ran on 25 Hz well into recent decades. Central European rail still uses ~16.7 Hz for traction.

Modern relevance of mains frequency

  • Several argue that for most consumers, frequency is now “implementation detail”: electronics use SMPS, many motors are variable-speed or EC, and motor clocks are rare.
  • Others note frequency still affects certain motors (fans, dryers, some industrial loads) and especially transformers: 60 Hz transformers may overheat at 50 Hz; 50 Hz designs must be physically larger.

Grid behavior, timing, and clocks

  • Classic mains-synchronous clocks rely on long-term average frequency; grids are deliberately corrected so that the long-term average is exact (e.g., 60 Hz in the US).
  • Frequency drifts with supply–demand imbalance; deviations are used as a control signal for generators.
  • Example: the European grid once accumulated a 6‑minute deficit due to a regional dispute, later corrected.
  • Recorded mains hum can be correlated with logged frequency to timestamp audio/video forensically.
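The cycle-counting arithmetic behind these corrections is simple; a sketch (function name and the example numbers are illustrative, not from the thread):

```python
# A mains-synchronous clock advances one "tick" per AC cycle, so its
# error is just (actual cycles - nominal cycles) / nominal frequency.

def clock_error_seconds(nominal_hz, actual_hz, duration_s):
    """Seconds gained (+) or lost (-) by a cycle-counting clock."""
    cycles = actual_hz * duration_s
    return cycles / nominal_hz - duration_s

day = 24 * 3600
# A 50 Hz grid running 0.01 Hz slow for one day:
err = clock_error_seconds(50.0, 49.99, day)   # about -17.3 seconds

# The 6-minute European deficit corresponds to roughly this many
# "missing" cycles that later had to be run back in:
missing_cycles = 6 * 60 * 50                  # 18,000 cycles
```

This is why grid operators schedule deliberate over- or under-frequency periods to zero out the accumulated cycle count, rather than just holding the instantaneous frequency near nominal.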

Regional grid differences and interconnection

  • Japan famously has both 50 and 60 Hz regions due to historical generator purchases, complicating power sharing; some speculate this fostered early inverter development.
  • North America has multiple synchronous “interconnections” (East, West, Texas, others) joined mostly by HVDC or variable-frequency ties.
  • Rail and other special systems use their own frequencies with rotary or electronic converters.

Hypothetical optimal frequency/voltage

  • No consensus: trade-offs between transmission loss (favoring low frequency or DC, high voltage), transformer/motor size (favoring higher frequency), safety, and converter cost.
  • Strong thread arguing “0 Hz” (HVDC) is best for long-distance transmission given modern power electronics, but AC remains simple and robust to transform.
  • Some propose mixed systems: HVDC long-distance, then AC or low-voltage DC locally; others emphasize that multiple voltages/frequencies add complexity.

Voltage levels, wiring, and appliances

  • 240 V allows more power on typical household circuit currents than 120 V, enabling faster kettles/toasters; equivalent power at 120 V requires higher current and thicker copper.
  • Several detailed exchanges on I²R losses, wire gauge, copper vs aluminum, and why higher voltage transmission is cheaper but more dangerous.
  • Debate over safety statistics: some claim lower nominal US voltage reduces electrocution risk; others question comparability of data and note many deaths are from non-outlet sources.
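The I²R argument in the exchanges above can be shown with a few lines of arithmetic; the cord resistance here is a hypothetical round number, not a measured value:

```python
# For a fixed load power, halving the supply voltage doubles the
# current, and resistive loss in the wiring scales with I^2 --
# so the same cord wastes 4x as much power at 120 V as at 240 V.

def line_loss_watts(load_w, volts, wire_ohms):
    current = load_w / volts          # I = P / V
    return current**2 * wire_ohms     # P_loss = I^2 * R

R = 0.1  # assumed round-trip cord resistance in ohms (illustrative)
loss_120 = line_loss_watts(2000, 120, R)   # ~27.8 W lost for a 2 kW load
loss_240 = line_loss_watts(2000, 240, R)   # ~6.9 W for the same load
```

The same scaling, taken to hundreds of kilovolts, is why transmission favors high voltage despite the added insulation and safety cost.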

Frequency in devices and signals

  • 60 Hz influenced:
    • Synchronous clocks (gear ratios align with 60 s/min).
    • Early TV frame rates and regional video standards.
    • Common motor speeds, which then influenced HDD RPMs (3600, 5400, 7200 rpm).
  • Anecdotes about audible hum around B1 (~61.7 Hz) from power systems.
  • Some mathematical niceties: 60 as highly composite; 2π·60 ≈ 377 rad/s, close to free-space impedance in ohms—seen as a neat but likely coincidental convenience.
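The motor-speed and impedance points above reduce to two small formulas; a sketch:

```python
import math

# Synchronous AC motor speed: RPM = 120 * f / poles. This is where the
# familiar 3600/1800/1200 rpm figures -- and hence HDD spindle speeds
# like 3600 and 7200 rpm -- ultimately come from.

def sync_rpm(freq_hz, poles):
    return 120 * freq_hz / poles

rpm_2pole = sync_rpm(60, 2)    # 3600 rpm
rpm_4pole = sync_rpm(60, 4)    # 1800 rpm

# The angular-frequency "niceity": 2*pi*60 rad/s is numerically close
# to the ~377-ohm impedance of free space (a coincidence of units).
omega = 2 * math.pi * 60       # ~376.99 rad/s
```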

Practical annoyances and path dependence

  • Travelers and movers complain about incompatible appliances despite “universal” power supplies; heavy or motorized gear, clippers, and some kitchen equipment still care about frequency.
  • LED flicker (lighting, car taillights) is now more tied to PWM design than mains frequency, but users find low PWM rates distracting, especially in peripheral vision or during eye movements.
  • Multiple commenters note that with modern wide‑range switchmode supplies, most small electronics are now agnostic to 50 vs 60 Hz and local voltage, even though legacy choices continue to ripple through infrastructure and some appliance categories.

Stop using zip codes for geospatial analysis (2019)

What ZIP codes actually are

  • Described as mail-sorting constructs, not geographic polygons: abstract sets of delivery points along routes.
  • Can be:
    • Non-contiguous areas.
    • A single point (e.g., a large company).
    • A single line (highway routes).
    • Overlapping or ambiguous when forced into polygons.
  • They reflect USPS operational structure and logistics, not geography or political boundaries; they also change over time as routes change.

Why ZIP codes are problematic for spatial analysis

  • Treating ZIPs as polygons is called a “category error” and can produce misleading results, especially for:
    • Rural vs urban comparisons, where a single ZIP can mix dense town centers with large rural surroundings.
    • Demographic or socioeconomic analysis where internal variation is large.
  • Census “ZIP Code Tabulation Areas” (ZCTAs) are only approximations: they may overlap, leave areas uncovered, and drift out of sync over time as USPS routes change.
  • Relates to the Modifiable Areal Unit Problem: statistics and patterns change with how you draw boundaries.
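The Modifiable Areal Unit Problem can be shown with a toy example: the same point data aggregated under two different zonings yields very different per-zone statistics. The households and values below are entirely made up for illustration:

```python
# Toy MAUP illustration: (x-coordinate, income) for ten households along a road
households = [(0, 30), (1, 32), (2, 90), (3, 95), (4, 31),
              (5, 33), (6, 97), (7, 92), (8, 30), (9, 34)]

def zone_means(points, boundaries):
    """Average the value in each half-open interval [lo, hi)."""
    means = []
    for lo, hi in boundaries:
        vals = [v for x, v in points if lo <= x < hi]
        means.append(round(sum(vals) / len(vals), 1))
    return means

# Zoning A: one split at x = 5 -> zones look nearly identical
print(zone_means(households, [(0, 5), (5, 10)]))        # [55.6, 57.2]

# Zoning B: splits at x = 2 and x = 8 -> a high-income middle zone appears
print(zone_means(households, [(0, 2), (2, 8), (8, 10)]))  # [31.0, 73.0, 32.0]
```

Nothing about the underlying data changed; only the boundaries did, which is exactly the risk of treating ZIP-shaped polygons as meaningful analysis units.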

Arguments that ZIPs are “good enough”

  • Widely known by the public, easy to collect on forms, and embedded in addresses.
  • Often “uniform enough” and “contiguous enough by travel time” for coarse analyses, marketing, bulk mail, sales-tax lookup, and sports blackouts.
  • Seen as a practical first step when you need aggregation but lack more precise spatial data.

Alternatives proposed

  • Census units (blocks, tracts, counties, CSAs): better-defined geographies and often population-normalized, but harder to collect from users and to explain.
  • Spatial grids like H3:
    • Hexagonal cells with consistent neighbor relationships and tunable resolution.
    • Good for counting people/phenomena in areas and joining disparate datasets.
    • Rectangular lat/long bins are possible but bring design choices and edge issues.
  • Exact addresses + geocoding, then aggregating to chosen units.
  • Custom regions (e.g., DMAs, custom market areas) built from population + internal data.
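The rectangular lat/long binning mentioned above can be sketched in a few lines of pure Python (cell size is an arbitrary design choice here; the coordinates are hypothetical):

```python
import math
from collections import Counter

def latlng_to_bin(lat, lng, cell_deg=0.1):
    """Map a coordinate to a rectangular grid cell id of cell_deg degrees.

    This shows the edge issues commenters raised: cells shrink in ground
    area toward the poles, and nearby points can land in different cells
    when they straddle a boundary.
    """
    row = math.floor((lat + 90) / cell_deg)
    col = math.floor((lng + 180) / cell_deg)
    return (row, col)

# Aggregate some (hypothetical) event coordinates into cells
events = [(40.7128, -74.0060), (40.7130, -74.0050), (34.0522, -118.2437)]
counts = Counter(latlng_to_bin(lat, lng) for lat, lng in events)
```

Hexagonal systems like H3 avoid some of these problems (consistent neighbor distances, hierarchical resolutions), at the cost of a library dependency.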

Data, privacy, and international issues

  • ZIP+4 and Canadian postal codes can be extremely granular, verging on household or building identifiers, raising re-identification risk.
  • Postal code concepts differ by country; some boundaries are proprietary or not widely known, and user familiarity varies.
  • Address and postal-code validation is messy, with numerous edge cases and conflicting datasets.

Show HN: Transductive regular expressions for text editing

Clarifying semantics & examples

  • Several readers were confused by README examples (especially the c:da:ot:g and cat/dog transformations) and initial typos/inconsistent outputs (cats vs dogs, infinite-loop examples).
  • Discussion revealed that:
    • Concatenation is implicit (like regex), with an “invisible” operator between characters.
    • : is a transduction operator with higher precedence than concatenation, so c:da:ot:g currently parses as c:d ~ a:o ~ t:g, not (cat):(dog).
    • Empty (epsilon) operands are implicitly injected on one side of : when omitted, which also affects parsing.
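The parse described above can be modeled with a toy interpreter (this is not the trre implementation, just an illustration of the precedence): with `:` binding tighter than concatenation, `c:da:ot:g` is the concatenation of three one-character transductions `c:d`, `a:o`, `t:g`.

```python
def apply_transduction(pairs, text):
    """Match each pair's input side in sequence and emit its output side."""
    out = []
    i = 0
    for src, dst in pairs:
        if text[i:i + len(src)] != src:
            return None  # input does not match the pattern
        out.append(dst)
        i += len(src)
    return "".join(out) if i == len(text) else None

# c:da:ot:g under the current precedence:
print(apply_transduction([("c", "d"), ("a", "o"), ("t", "g")], "cat"))  # dog

# The (cat):(dog) reading many readers expected happens to give the
# same result here, which is part of why the examples were confusing:
print(apply_transduction([("cat", "dog")], "cat"))  # dog
```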

Operator precedence, grammar, and ambiguity

  • Multiple commenters found the current precedence (colon stronger than concatenation) unintuitive; many expect cat:dog to mean (cat):(dog), not ca(t:d)og.
  • Grammar in the docs is acknowledged as underspecified and potentially misleading, especially around:
    • Where epsilon can appear.
    • How multiple : operators (e.g. :a:) are parsed.
  • Suggestions included:
    • Making : lower precedence than concatenation.
    • Making epsilon explicit in the grammar rather than “injected”.
    • Possible alternative syntaxes (e.g. <regex>generator, or character-class mapping styles).

Capabilities vs. traditional regex/sed

  • Proponents see trre as:
    • A more “literal” search/replace syntax, particularly when doing contextual replacements without backreferences.
    • A small, direct implementation of finite-state transducers, enabling deterministic compilation, generation of matching strings, and tricks like Levenshtein-1 edits and simple spell-checking.
  • Skeptics argue:
    • It doesn’t provide fundamentally new capabilities beyond sed/regex substitutions and sometimes looks more verbose.
    • Lack of backreferences and structural transformations makes it weaker for some complex substitutions (reordering captured parts, more tree-like rewrites).
    • Adding : and extra escaping may make easy tasks (simple replacements) harder.
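The Levenshtein-1 trick mentioned above can be approximated by brute force; this sketch enumerates the same set of strings an edit-distance-1 transducer would generate (it is not how trre's FST does it, and it skips transpositions, which plain Levenshtein distance does not count as single edits):

```python
def levenshtein1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings within Levenshtein distance 1 of `word`, by brute force."""
    edits = {word}
    for i in range(len(word)):
        edits.add(word[:i] + word[i + 1:])            # deletions
        for ch in alphabet:
            edits.add(word[:i] + ch + word[i + 1:])   # substitutions
    for i in range(len(word) + 1):
        for ch in alphabet:
            edits.add(word[:i] + ch + word[i:])       # insertions
    return edits

candidates = levenshtein1("cat")
```

An FST representation keeps this compact and composable (intersect with a dictionary automaton to get a spell-checker), whereas the brute-force set grows quickly with word length and alphabet size.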

Use cases, limitations, and infinite generators

  • Current model is mostly “replace in place”; some worry it’s insufficient for more structural edits.
  • Right-side repetition (*, +) can cause infinite loops; author now leans toward disabling this, though infinite generators are seen as potentially interesting.
  • trre can also be used as a generator: given :(regex) it can enumerate strings in the regular language (with flags like -m -a), which some find compelling.

Implementation, ecosystem, and related work

  • Tool is praised for being small and readable C, with a clear automata-theoretic foundation (FSTs).
  • Many pointers to related FST toolkits and languages (XFST, FOMA, HFST, OpenFST, Pynini, Carmel, Rosie Pattern Language), and to prior work in morphology, speech recognition, and linguistics.
  • Some suggest this could fit well into editors lacking good regex-based replacement, though it’s still “raw” and evolving.