Hacker News, Distilled

AI-powered summaries for selected HN discussions.

Building a 2.5kWh battery from disposable vapes to power my workshop [video]

Disposable vapes, regulation, and visible waste

  • Many commenters are appalled that “disposable” vapes exist at all, noting they’re now ubiquitous litter, effectively “e‑waste packages” scattered like cigarette butts.
  • Legal status varies: banned or restricted in some countries (Australia, UK, some EU states), but enforcement is weak and black markets are common.
  • People are disturbed that devices contain not just batteries but also surprisingly capable microcontrollers, all treated as trash.

Safety of DIY vape‑cell battery packs

  • Multiple comments warn strongly against building large NMC packs at home, calling it a serious fire risk and recommending only outdoor, separated structures if attempted at all.
  • Suggested protections: high‑quality BMS with current limits and thermal probes, thermal fuses, spacing and airflow between cells, strict QA (capacity, internal resistance, self‑discharge), welding instead of soldering.
  • Several note that once thermal runaway starts, there’s little you can do except isolate the pack; specialized containers and dedicated sheds are standard in professional setups.
  • One detailed critique argues the video’s pack layout, wiring, imbalance between parallel groups, and lack of proper transfer switching are all unsafe; concludes that commercial packs are usually safer and more economical.

Battery chemistries and future directions

  • Strong preference for LiFePO₄ (LFP) over NMC for home storage: less prone to combustion, “good enough” energy density.
  • Some believe large NMC packs will eventually be pushed outside buildings by regulation, or replaced over time by LFP, sodium‑ion, and possibly solid‑state.
  • Debate on whether solid‑state can ever fully replace liquid‑electrolyte chemistries, especially where very high power (current) is needed.
  • Lead‑acid is discussed as still useful for specific high‑current, stationary roles, with good recyclability but poor energy density.

Broader e‑waste and “disposable” culture

  • Disposable vapes are seen as a symptom of a wider e‑waste problem: working but “obsolete” PCs, forced OS upgrades, and sealed‑battery devices.
  • Some argue that reusing old hardware can be environmentally better than building new “efficient” systems; others counter that datacenter‑class hardware is far more energy‑efficient, and labor and space costs dominate.
  • There’s skepticism that old desktops could economically replace modern data centers at scale.

Policy ideas: regulation, deposits, and design

  • Several point to EU‑style WEEE rules that require recyclability, but note enforcement is weak and “recyclable” often doesn’t mean “recycled.”
  • Proposed fixes:
    • Deposit schemes for vapes and small electronics, modeled on bottle/can returns, to virtually eliminate litter.
    • Design standards: standardized vape bases, user‑replaceable cells (AA/AAA/18650‑style), and mandatory take‑back obligations for manufacturers.
  • Others suggest disposable products in general should face much stricter regulation.

Hacker culture and scavenging

  • There’s tension between “never ever do this” safety warnings and encouragement for hackers to experiment carefully and learn from projects like the video.
  • Some enjoy the idea of harvesting microcontrollers from vapes and other devices for post‑collapse or censorship‑resistant systems, referencing projects like Collapse OS and fictional “FreeNet”‑style networks.

Htmx – The Fetch()ening

Versioning, Stability, and “htmx 4.0” Naming

  • Strong approval for the promise that htmx 1/2 will be supported indefinitely; seen as rare in a churn-heavy ecosystem.
  • Mixed reactions to skipping 3.0 and jumping to 4.0 to remain “technically correct”: many find it funny, others think it may cause needless confusion (PHP 6 analogy); some say a simple mea culpa 3.0 would be clearer.
  • A few argue that batching many breaking changes into one major release is the wrong way to achieve stability, comparing it to Python 3; they advocate Django-style gradual deprecations and more frequent majors.
  • At least one commenter says this big-bang major bump makes them avoid adopting htmx for new work, failing their “stability test”. Others are happy as long as previous versions remain usable.

Core Philosophy: HTML-First, Not JSON

  • Complaint: htmx doesn’t auto-parse JSON in hx-on::after-request callbacks.
  • Response: this is by design—htmx is meant for HTML hypermedia responses, not JSON APIs. Many emphasize that this HTML-centricity is a core design choice, not an omission; callers who do want JSON can parse it themselves (see the sketch after this list).
  • 4.0 will expose the full request/response/swap pipeline and allow custom fetch implementations per-trigger, which should make JSON or other custom flows possible without changing the HTML-first philosophy.
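
A minimal sketch of that opt-in approach, assuming htmx 1/2's documented htmx:afterRequest event and its XHR-based transport (the listener target and logging are illustrative, not from the thread):

```typescript
// Parse JSON by hand after an htmx request; htmx deliberately leaves the body as text.
document.body.addEventListener('htmx:afterRequest', (evt) => {
  const detail = (evt as CustomEvent).detail;                 // htmx attaches request info to event.detail
  if (!detail.successful) return;                             // ignore failed requests
  const contentType: string = detail.xhr.getResponseHeader('Content-Type') ?? '';
  if (contentType.includes('application/json')) {
    const data = JSON.parse(detail.xhr.responseText);         // explicit, opt-in parsing
    console.log('parsed payload', data);
  }
});
```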

fetch(), Streams, and Morphing

  • Move from XMLHttpRequest to fetch() is welcomed; it simplifies internals and enables readable streams for SSE/streaming partial updates (a generic sketch of the underlying primitive follows this list).
  • 4.0 will integrate morph-based swaps (inner/outer) into core, made feasible by the fetch refactor; some question whether this belongs in core vs extension.
  • Example patterns show per-request fetch overrides via hx-on:htmx:config:request, enabling mocking or custom transport without global monkey-patching.
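
For context on the streaming point, a generic sketch of the fetch() + ReadableStream primitive itself; this is plain platform API, not htmx 4's actual surface, and the function name and append-only swap are made up for illustration:

```typescript
// Read a response incrementally and append each chunk of HTML as it arrives.
async function streamInto(url: string, target: HTMLElement): Promise<void> {
  const res = await fetch(url);
  if (!res.body) return;                             // no readable stream in this environment
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { value, done } = await reader.read();     // resolves as the server flushes chunks
    if (done) break;
    target.insertAdjacentHTML('beforeend', decoder.decode(value, { stream: true }));
  }
}
```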

htmx vs Datastar: Overlap and Tradeoffs

  • Datastar is repeatedly mentioned as a more general, plugin-driven alternative with built-in SSE, signals, DOM morphing, and smaller core; some describe htmx 4 as “Datastar-lite”.
  • Datastar skeptics worry about its non-FOSS “pro” features and potential rug pulls; defenders point to nonprofit ownership, the MIT-licensed core, and its pluggable architecture.
  • Technical debate:
    • Pro-htmx side: htmx’s constrained request/response model and HTML fragments are simpler for typical CRUD apps, easier to reason about and debug, and integrate well with URL/history semantics.
    • Pro-Datastar side: a more generalized, event/signal/stream-driven model handles both simple and complex use cases; they argue htmx’s attribute surface is actually larger and less composable.
  • Significant disagreement over URL/history: htmx users see “URL as state” and history updates as central to hypermedia; Datastar’s authors reportedly treat history-pushing as an antipattern, which several commenters strongly reject.

Attribute Inheritance and Naming Bikeshedding

  • 4.0 flips inheritance to opt-in; prior implicit inheritance caused confusion and support load.
  • Long subthread bikesheds the hx-inherited name; alternatives like inherit, inheritable, propagate, and cascade are suggested, with various jokes attached.

General Reception

  • Many praise the essay’s clarity and the direction of htmx 4.0, especially open request cycle and streaming.
  • Others express concern that the library is growing more complex, drifting away from its original minimalism.

Wikipedia row erupts as Jimmy Wales intervenes on 'Gaza genocide' page

Wikipedia’s Neutrality and Governance

  • Several commenters say the talk-page dispute shows Wikipedia’s normal consensus process working: Wales voiced a view; editors are debating it against policy and prior consensus.
  • Others argue Wales’ “Statement from Jimbo Wales” is effectively an exercise of power, backed by an NPOV working group and media interviews, so not “just another comment.”
  • It’s noted that he is not an administrator and cannot lock pages; many editors appear willing to push back and demand policy-based arguments.

Content and Balance of the “Gaza genocide” Article

  • One side claims the article reflects the near-consensus of genocide scholars and major human-rights organizations, which now label Israel’s actions in Gaza as genocide; by Wikipedia standards, siding with such sources is normal, as with the Holocaust or pseudoscience pages.
  • Critics call the article an extreme, one-sided “rant”:
    • They say it presents only one viewpoint, minimizes or omits Hamas’ October 7 atrocities and potential Palestinian genocidal acts, and closely tracks Hamas narratives on casualties and hospitals.
    • They highlight asymmetry with articles like “Allegations of genocide in the October 7 attacks,” which use more cautious titles and extensively air doubts and counterarguments.
  • There is disagreement about whether neutrality requires representing denial or minimization of genocide claims at all, especially while events are ongoing.

Neutral Tone vs. Substantive Claims

  • Some readers think Wales only asked for a more neutral tone; others stress he is pushing to remove or dilute the assertion that Israel is committing genocide, which they see as contradicting sourcing policy by elevating government denials to parity with academic work.
  • Commenters argue that “both sides” is not always neutral when one side lacks high-quality sources, drawing analogies to vaccine conspiracies and election denial.

Definitions, Logic, and AI Proposals

  • A thread debates whether concepts like “genocide” can be cleanly defined by rules: one camp wants rule-based, model-generated, fully symmetric treatment of claims; opponents say social constructs depend on human consensus, not pure logic.
  • LLMs are criticized as non-deterministic, easily manipulated, and the opposite of Wikipedia’s curated model.

External Pressure and Free Speech

  • Commenters discuss a US congressional inquiry into alleged anti-Israel bias on Wikipedia, including requests for editor-identifying data, seeing it as chilling and inconsistent with professed US free-speech ideals.
  • Broader worries surface about governments, propaganda, and biased casualty reporting in wartime.

Meta-Observations on Wikipedia

  • Some liken contentious topic areas to a game dominated by zealots and rule-obsessives, which can drive away subject-matter experts despite still producing a better resource than traditional encyclopedias.
  • Others note Wikipedia is structurally ill-suited to fast-moving conflicts and is “almost built” to avoid being the platform of record while events are unfolding.

The Case That A.I. Is Thinking

Access and meta-notes

  • Many comments focus on getting around the New Yorker paywall (archive links, Libby, Lynx).
  • Some note the author is a long-time HN participant, which colors how the piece is received but doesn’t change the arguments.

What does “thinking” even mean?

  • A recurring theme is that “thinking,” “intelligence,” “consciousness,” “sentience,” etc. are ill‑defined; people admit we lack agreed, testable definitions.
  • Several argue that debates quickly become semantic: like Dijkstra’s “can submarines swim?” – if you define “thinking” to require human-style consciousness, computers lose by definition.
  • Others say the term should track observable capabilities: if something solves problems, reasons, and adapts, calling it “thinking” is meaningful enough.

Arguments that LLMs are not thinking

  • Strong camp claiming LLMs are glorified autocomplete: probabilistic next‑token machines, closer to a huge if/else tree or database than a mind.
  • Points cited:
    • No agency or intrinsic goals; they never act without being prompted.
    • No persistent self-modification post‑training (no real learning, just context).
    • Hallucinations, fragile logic, and basic failures (classic “how many r’s are in ‘strawberry’”‑type tasks).
    • Same transformer trick works poorly in other domains (video, physics), suggesting the magic is in human language, not general cognition.
  • Some liken them to “stochastic parrots” or mirrors: powerful tools reflecting human text and biases, not genuine thinkers.

Arguments that LLMs are thinking (in some sense)

  • Others point to chain-of-thought traces, multi-step debugging, writing and running tests, revising assumptions, and solving novel coding/math tasks as evidence of genuine reasoning.
  • Emphasis that we didn’t “program the model,” we programmed the learning process; the internal circuits are discovered, not designed, and largely opaque even to creators.
  • Intelligence is framed by some as substrate‑independent computation; if a Turing‑complete system can emulate a human’s behavior arbitrarily well, calling it “thinking” and “sentient” is seen as reasonable.
  • Some suggest LLMs may approximate a subsystem of human cognition (pattern recognition, compression, concept mapping), without a full self-model or sustained goals.

Consciousness, sentience, and qualia

  • Long side threads on the “hard problem of consciousness,” qualia, identity of copies, brain simulations, and panpsychism.
  • Several note we cannot directly measure others’ subjective experience—human or machine—and in practice rely on self-report plus behavioral analogy.
  • Some propose graded or “fish-level” consciousness for current LLMs; others insist we’re nowhere near justifying that, and that new physics or at least new theory might be required.
  • There’s widespread acknowledgment that current science has no solid criterion to say an LLM is or isn’t conscious.

Capabilities, limits, and benchmarks

  • Participants highlight impressive performance on coding, debugging, and some reasoning puzzles, but also obvious brittleness and shallow world models.
  • Suggested “real” tests of human-like thinking include: independent frontier math research, robust ARC-style tasks, long-horizon interaction (months/years) without context collapse, and autonomous problem formulation.
  • Many expect LLMs to plateau on AGI-like metrics while remaining extremely useful as “stochastic parrots” paired with tools.

Architecture, learning, embodiment, and memory

  • Critical limitations identified: lack of continuous online learning, lack of embodiment and rich sensory input, no durable long-term memory integrated into the model.
  • Some see hope in agentic wrappers, tool use, external memory (RAG, vector DBs), and self-adapting or reasoning models; others see this as scaffolding around the same core autocomplete engine.
  • Comparisons are drawn to brains as pretrained, constantly fine‑tuned models tightly coupled to bodies; LLMs currently resemble a frozen policy with short-term working memory.

Ethics, personhood, and social effects

  • A few bring up pending legislation (e.g., Ohio bill against AI legal personhood) and worry about a future of “AI slaves” if we ever do create sentient systems.
  • Others stress that anthropomorphizing may be harmful or manipulative: it benefits vendors, and confuses users about reliability and moral status.
  • Some argue that even if AIs are not conscious, the way we treat them trains our habits toward living beings (e.g., being cruel to chatbots vs. tools as empathy practice).

Overall tone

  • The thread is sharply divided: one side sees current LLMs as a profound demystification of human thought; the other sees them as extremely powerful language tools plus 2022‑style hype, with “thinking” talk mostly rhetorical or philosophical rather than empirically grounded.

Israel's top military lawyer arrested after she admitted leaking video of abuse

Free criticism and changing norms

  • Several comments praise societies where media can expose military or political misconduct even if it “makes the nation look bad” abroad, calling this a core pillar of a free society.
  • Others argue that in Israel this norm is eroding, with growing public hostility toward investigators, prosecutors, and whistleblowers.
  • Some counter that this isn’t a new change but a continuation of long‑standing attitudes.

Abuse, investigation, and “PR damage”

  • Commenters highlight the severity of the alleged assault (including anal rape and serious physical injuries) and contrast it with leaders framing the incident as primarily a “public relations attack” on Israel and the IDF.
  • The leaker’s rationale is seen as trying to protect investigators and prosecutors under attack, not to attack Israel itself.

Accountability vs. presumption of innocence

  • One side stresses “innocent until proven guilty” and notes the footage is said to be doctored and not fully conclusive.
  • Others respond by listing broader patterns of alleged violations and argue that “just allegations” is used to dismiss systemic abuse.
  • Some say Israel has “little or no accountability” now; others claim there was never real accountability, it’s just more visible due to social media.

Public support, protests, and societal values

  • The existence of protests in defense of the accused soldiers, including participation by lawmakers, is seen as particularly disturbing.
  • There is debate over whether these protesters represent a fringe or the “average” Israeli, with references to polling (details not given) suggesting high support for current military actions.
  • A few comments generalize to global trends of celebrating cruelty and pessimism in modern democracies.

US parallels and leverage

  • Some compare Israel’s handling of legal officers to US purges or sidelining of military lawyers and intelligence counsels.
  • Others argue the US could heavily constrain Israeli behavior through sanctions or aid cuts; this is contested by those emphasizing Israel’s domestic arms industry and desire to reduce reliance on US aid.

Language, framing, and media bias

  • The term “blood libel” is criticized as a disingenuous way to describe the leak, with explanations of its specific historical meaning and why it doesn’t apply to exposing documented abuse.
  • Media choices—gendered pronouns for anonymous sources, the word “abandoned” for a car at a beach—are scrutinized as potentially identifying or misleading.
  • Some see asymmetric treatment of Israel (“scandal” vs. “exposure”) compared to other countries.

Tech, politics, and HN

  • A minority calls for excluding political news from the forum.
  • Others argue this is impossible when tech companies are deeply entangled with state power, surveillance, warfare, and human‑rights issues.

Why engineers can't be rational about programming languages

Scope of the debate

  • Several commenters note many responses fixate on “rewrites” instead of the article’s main claim: engineers’ language choices are strongly driven by identity and emotion, then rationalized after the fact.
  • Others push back on the charged title, saying they’ve seen plenty of rational, pragmatic language decisions, and that the article overgeneralizes from a few anecdotes.

How important is language choice, economically?

  • The claim that “a programming language is the single most expensive choice a company makes” is widely disputed.
  • Many argue leadership quality, team composition, architecture, and infrastructure decisions dwarf language choice in impact.
  • Some concede language can be costly at the extremes (e.g., dead ecosystems, very mismatched performance needs, exotic databases), but say most mainstream languages are “good enough” for typical business software.
  • A minority argue tool choice can indeed make or break some projects (e.g., high‑performance DBs, real‑time systems), and treating language as interchangeable is itself bad engineering.

Rewrites and migrations

  • Strong consensus: rewriting solely to change language is usually a terrible idea; only justified when the platform/ecosystem is clearly dead, fundamentally wrong for the domain, or non‑maintainable.
  • Success stories exist but are treated as rare, special‑circumstance exceptions, often done incrementally.
  • Many emphasize that rewrites mostly re-learn hard‑won lessons and risk years of delay.

Identity, tribalism, and cognitive bias

  • Many endorse the article’s idea that language allegiance is tied to professional identity and fragile expertise; unfamiliar languages temporarily make people feel less competent.
  • Others liken languages and their communities to cults with charismatic leaders, dogma, aesthetics, and “holy texts.”
  • There’s discussion of Dunning–Kruger and emotional reasoning, but also criticism that the article pathologizes disagreement without rigorously proving its claims.

Who should choose, and based on what?

  • Tension between:
    • “Use what the team already knows” / minimize cognitive and hiring cost.
    • “Pick the right tool for the job” / avoid forcing everything into one stack.
  • Several note management often drives hype‑based choices (Rust, React, Rails, etc.), or imports biases from peers (“language X doesn’t scale”).
  • Some stress the real failure is hiring “language technicians” instead of language‑agnostic engineers who can move across stacks and treat tools instrumentally.

No Socials November

YouTube, Algorithms, and Short-Form “Holes”

  • Several commenters say YouTube’s recommendations feel increasingly aggressive, especially Shorts, which they find uniquely “doomscrollable.”
  • Common coping strategies:
    • Use only the Subscriptions tab and ignore the homepage.
    • Clear or disable watch history to reset or blunt personalization.
    • Browser extensions (Unhook, Focused YouTube) to hide recommendations/Shorts and turn YouTube into “search-only.”
    • Some pay for YouTube Premium to avoid ads; others refuse, preferring ad blockers or just not watching.

Is Hacker News Social Media?

  • Large disagreement:
    • “Yes” camp: upvotes, comments, dopamine hits, and addictive checking make it clearly social media, even if less optimized for engagement.
    • “No” camp: sees HN as a forum or “social news” site—text-only, no DMs, no personalized feed, no follower graph, topic- not identity-centric.
  • Some define social media by mechanics (user voting, algorithmic feeds, profiles, followers); others by how you use it (social interaction vs just reading).
  • Multiple people say HN is their “last remaining social” and also their hardest to quit.

Psychological Effects: Envy, FOMO, Addiction

  • Several note social media triggers envy and feelings of inadequacy; others say HN can feel just as status-laden and jealousy-inducing, especially around careers and money.
  • Others push back: much content is exaggerated or fake “lifestyle porn,” and these reactions are common enough to be considered normal (FOMO), not individual pathology.
  • HN is often described as lower-toxicity and better signal/noise than mainstream platforms, but still a dopamine source.

Alternatives, Boundaries, and Tools

  • Many advocate permanent structure over one-month fasts:
    • Use RSS, or email as a central hub with batching and filters.
    • Script or extension-based interventions to kill feeds, thumbnails, comments, or entire sites; avoid apps, use only browsers (a generic sketch follows this list).
    • Move toward smaller, topic-focused communities (forums, Fediverse, custom microblogs, self-hosted “mini-Twitter” blogs).
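
As a hedged example of the script-based approach mentioned above, a userscript-style sketch; the selector is a placeholder to adapt per site, not a documented one:

```typescript
// Hide feed containers and keep hiding them as the page mutates.
const FEED_SELECTOR = '[data-example-feed]';   // placeholder; real sites need their own selectors
function hideFeeds(): void {
  document.querySelectorAll<HTMLElement>(FEED_SELECTOR)
    .forEach((el) => { el.style.display = 'none'; });
}
new MutationObserver(hideFeeds).observe(document.body, { childList: true, subtree: true });
hideFeeds();
```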

Broader Critiques of Social Media & “Simulacrum”

  • Several comments frame social media and news as simplified, distorted models of reality that increasingly reshape the offline world.
  • Some argue corporate, ad- and engagement-driven platforms systematically promote outrage and anger; non-corporate or decentralized networks are seen by some as a partial antidote, by others as still fundamentally social-first distractions.

Why we migrated from Python to Node.js

Overall reaction to the rewrite

  • Many readers see this less as “Python vs Node” and more as “Django async mismatch” plus “use what you know.”
  • Several think a 3‑day rewrite early on is fine if it makes the team happier and faster; others see it as premature optimization and weak technical planning.
  • Some view the post as thinly veiled marketing / clickbait (“Python to Node” flamebait, “week 1 pivot” narrative).

Python, Django, and async

  • Strong consensus that Django is ill-suited for async-heavy, streaming, or high-concurrency workloads; its async story is described as awkward, “a parallel universe API,” and full of footguns.
  • Multiple commenters stress this is more about Django than Python: alternatives like FastAPI + SQLAlchemy, green threads (gevent), Celery, or Channels could have addressed many pain points.
  • Others argue Python async itself is messy: late arrival, multiple competing async models, confusing integration with libraries, and tricky interactions between anyio/asyncio.
  • Some defend Python: async can work well with proper patterns (processes/threads/greenlets, Celery for background work) and Django’s ORM/ecosystem are still huge productivity wins.

Node.js and TypeScript perspectives

  • Supporters say Node/TS is a natural fit for IO-bound, LLM/HTTP-heavy workloads: async is first-class, concurrency is straightforward, and TypeScript’s type system and tooling feel more mature than Python’s typing (a minimal concurrency sketch follows this list).
  • Several practitioners report that large Node+TS codebases have been easier to maintain than comparable Python+async+types setups.
  • Skeptics highlight npm dependency bloat, deprecation churn, security/supply-chain risks, and argue that Node is far from the “clean” choice; some would have preferred Go, C#, Java, or Rust if you’re rewriting anyway.
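
A minimal sketch of the “concurrency is straightforward” claim, fanning out IO-bound calls with nothing beyond the platform (URLs and result shape are placeholders):

```typescript
// Fan out several HTTP calls concurrently; async/await plus Promise.all is all that's needed.
async function fanOut(urls: string[]): Promise<unknown[]> {
  return Promise.all(urls.map(async (url) => {
    const res = await fetch(url);                 // each request runs concurrently
    return res.json();                            // parse each body as its response arrives
  }));
}
```

Python can express much the same thing with asyncio.gather, which is part of why commenters frame the pain as a Django issue rather than a Python one.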

Alternative stacks & async models

  • FastAPI is widely mentioned as the more appropriate Python choice for this use case; some say the team should have “just learned async FastAPI.”
  • Others propose modern TS stacks (Hono, Elysia, Drizzle, Kysely), or completely different runtimes (Deno, Bun).
  • Elixir/BEAM is heavily praised as purpose-built for massive concurrency and async, with Phoenix/LiveView and OTP eliminating a lot of DIY scheduling/queueing—though hiring and ecosystem concerns are noted.
  • A few suggest that instead of a wholesale rewrite, they could have introduced Node microservices alongside Django and migrated incrementally.

ORMs and database access

  • Divisive topic: some would have dropped ORMs entirely for SQL; others emphasize ORMs’ value (migrations, relations, admin UIs, ecosystem integration).
  • Node ORM landscape is seen as fragmented; Django’s ORM is viewed as “complete and proven,” but can become painful off the “happy path.”

Learning to read Arthur Whitney's C to become smart (2024)

Reactions to Whitney’s C Style

  • Many describe the code as “obfuscated”, “psychotic”, or IOCCC‑like; several say even machine‑obfuscated C they’ve seen was clearer.
  • Others find it beautiful, dense, and “satisfying” once understood, praising how much functionality fits in one page.
  • A recurring joke: reading it causes “madness” or “mental illness”, and will make you “crazy and unemployable”, not smart.

APL / J / K Influence and How to Read It

  • Some argue the right way to understand this C is to first learn APL/J/K: Whitney is writing C as if it were an array language, with single‑character names, ultra‑dense one‑line functions, and a tight set of primitives.
  • Others counter that the C itself isn’t especially “APL‑like”: APL’s array primitives, immutability, and lack of macros don’t map directly to this style; it’s more Whitney‑specific.
  • There’s disagreement whether the article actually claims the C is APL‑inspired or just written by “the APL guy”.
  • Several note that after learning APL/J, Whitney’s interpreters become surprisingly readable and expressive, communicating design trade‑offs clearly.

Macros, Preprocessor Tricks, and DSLs

  • Core macros like _(e...), x(a,e...), $(a,b), i(n,e) are seen by some as elegant tools to build a tiny DSL on top of C and climb an “abstraction ladder” quickly.
  • Others call them “war crimes”: ad‑hoc, unreadable, sometimes unsafe (e.g. missing braces in $(a,b)), and dependent on non‑standard extensions like GCC’s a ?: b.
  • Broader debate: C macros can define useful DSLs (compared to SystemC, JSON‑in‑C configs, etc.) versus the view that they’re almost always a maintenance hazard.
  • One thread explores using preprocessors to selectively expand macros to “de‑DSL” a codebase.

Readability, Maintainability, and “Best Practices”

  • Strong camp: this is exactly how you should not write C—most teams need descriptive names, comments, and conventional control structures to debug and onboard people.
  • Counterpoint: “best practices” are really “mediocre practices” optimized for large, high‑turnover teams; for a single expert maintaining tiny interpreters over decades, extreme terseness can be rational.
  • Several emphasize that code is communication; deviating too far from familiar idioms is like inventing your own grammar and expecting others to cope.

Learning Value

  • Some readers find the blog post an excellent guided tour: a rare chance to see a compact, complete interpreter and think about density, abstraction, and DSL design.
  • Others argue that studying this style is interesting but not a path to being “smarter” at everyday C; better to read good conventional code in many styles.

Ask HN: Who is hiring? (November 2025)

Overall hiring landscape

  • Dozens of companies across stages (bootstrapped, seed, Series A–C, and big tech) are hiring, with many in AI, dev tools, security, fintech, and healthcare.
  • Roles skew strongly toward senior/staff-level: full-stack, backend, infra/SRE, ML/AI, data, and some product design/PM, with a few junior and intern opportunities.
  • Many firms are small (sub‑20 headcount) and explicitly want “founding” or early engineers with high ownership.

Technologies & domains

  • Common stacks:
    • Backend: Python, TypeScript/Node, Go, Java, Rust; Postgres, MySQL, ClickHouse, Redis.
    • Frontend: React, Next.js, Vue, Svelte, TypeScript.
    • Infra: AWS, GCP, Azure, Kubernetes, Terraform, Docker.
  • Heavy emphasis on AI: LLM agents, RAG, multimodal models, evaluation frameworks, MLOps, and AI-powered products in finance, healthcare, legal, devtools, and productivity.
  • Other notable verticals: robotics/physical AI, climate/energy, security, blockchains, edtech, gov/regtech, and creative tools (design, gaming, audio/video).

Location & work style

  • Remote‑first is common (US, Canada, EU/UK, LATAM), often with time‑zone constraints; some roles are explicitly hybrid or fully onsite (SF Bay Area, NYC, London, Berlin, Stockholm, Zurich, etc.).
  • Several posts highlight in‑person culture as a selling point (small teams, high bandwidth, early-stage execution), while others emphasize async, low‑meeting environments.

Compensation & equity

  • Many listings provide explicit ranges, especially in the US: senior engineering often in the $150k–$250k base range plus equity; some EU roles list €60k–€140k, others note “competitive + meaningful equity.”
  • A few posters question compensation practices:
    • One company is challenged for promoting pay transparency while not posting ranges; they respond that location‑based pay makes ranges very wide and promise to improve disclosure.
    • Another is accused of advertising a higher salary band publicly than recruiters state on calls.
    • One commenter criticizes location‑based pay as exploitative; the company invites further discussion rather than debating in‑thread.

Process, expectations & culture

  • Common expectations: strong fundamentals, prior startup or early‑stage experience, end‑to‑end ownership, ability to move fast, and comfort working with or alongside AI tools (Cursor, Claude Code, Copilot, etc.).
  • Several posts stress product sense and direct customer contact, not just coding.
  • A few meta comments praise some products (e.g., analytics, editors, infra tools) and raise minor issues (e.g., broken forms, or missing salary ranges that make California postings non‑compliant).

Ask HN: Who wants to be hired? (November 2025)

Overall Shape of the Thread

  • The thread is almost entirely people advertising themselves for hire rather than debating anything.
  • Profiles span senior principal engineers, mid-level devs, juniors, interns, designers, PMs, data people, and a few non-technical roles (talent, customer success, product marketing).
  • A handful of short reply chains appear (e.g., “wrong thread”, follow-ups on contact attempts, light banter), but there’s very little argument or extended discussion.

Candidate Profiles & Experience

  • Many candidates have 10–20+ years’ experience, often ex-CTO, staff/principal, or founders with exits.
  • Strong presence of backend/platform engineers, infrastructure/DevOps, security engineers, embedded/low-level systems, graphics/game dev, and data/ML specialists.
  • Also represented: product managers, technical product leaders, UX/UI and brand designers, DevRel, technical writers, project/program managers, and recruiting/people leaders.
  • A noticeable number of students and new grads explicitly seeking internships, co-ops, or junior roles.

Technologies & Domains

  • Dominant stacks: Python, TypeScript/JavaScript (React/Next/Node), Go, Rust, C/C++, Java, C#, and Ruby/Rails.
  • Many mention cloud and infra: AWS, GCP, Azure, Kubernetes, Terraform, Docker, observability stacks.
  • Domains frequently cited: fintech and payments, healthcare and bioinformatics, security, data platforms/ETL, DevTools, gaming/graphics, IoT, robotics, and geospatial.

AI/ML and LLM Work

  • Very large subset focus on AI/ML, especially LLMs, RAG, agents, MLOps, and applied ML in production.
  • Several describe concrete systems: LLM-assisted pipelines, interview simulators, fraud detection, AI copilots, voice AI, diarization, and agentic orchestration frameworks.
  • A minority explicitly say they are not interested in AI or blockchain, pushing back against hype and “AI cycle” roles.

Work Preferences & Geography

  • Remote work is overwhelmingly preferred; many are “remote only”, some OK with hybrid/on-site locally.
  • Time zones: heavy representation from US, Canada, and Europe, with additional candidates from Latin America, Africa, Middle East, and Asia; many are flexible on overlap.
  • Relocation stances vary: some very open (especially within EU/US), others firmly no-relocate.

Meta & Miscellany

  • Occasional corrections (e.g., someone posting in the wrong monthly thread; freelancer-thread question).
  • One stray remark about a Postgres article and PG vectors appears off-topic relative to the hiring focus.

State of Terminal Emulators in 2025: The Errant Champions

Scope and test data concerns

  • Several comments note that ucs-detect’s results use outdated versions of some terminals (e.g., VTE, Konsole, Foot, Kitty, Zutty, xterm Sixel support), making 2025 rankings somewhat misleading.
  • Multiple people submit or suggest pull requests to refresh results, emphasizing that the test suite is easy to run and the ranking is only about Unicode behavior, not overall quality or performance.
  • Some feel the article title overpromises (“state of terminals”) given it only measures Unicode conformance, which may be irrelevant for users who rarely hit complex Unicode cases.

WezTerm, Kitty, Ghostty, and preferences

  • WezTerm fans cite strong Lua programmability, multiplexing, native SSH, configurability, and speed to open a window as key reasons to choose it over Kitty or Ghostty.
  • Others report font rendering quirks in WezTerm but say they can be configured away.
  • Kitty is liked for graphics protocol and features, but some are moving away due to deliberate limits (e.g., tmux support) or deprecations.
  • Ghostty gets praise for polish, performance, UI, and especially its advanced built‑in theme picker; some users say it feels like a “better Alacritty.”

Images and graphics protocols (Sixel, Kitty, etc.)

  • There is active debate over Sixel vs Kitty image protocols and other schemes (e.g., iTerm2’s inline images protocol).
  • Pro‑image users rely on inline images for:
    • Remote image viewing over SSH,
    • File manager previews,
    • Plots from Python/Julia/ML tooling,
    • Debugging graphics-heavy pipelines.
  • Skeptics see all image protocols as inefficient or “gimmicky,” arguing that proper remote graphics should use dedicated protocols (X forwarding, Wayland tools, RDP, web apps) instead of overloading terminals.
  • Historical and technical notes: Sixel is older and simpler; Kitty protocol is more powerful but larger and harder to implement, contributing to fragmentation.

Platform defaults and ergonomics

  • macOS Terminal ranks low in tests; some say it’s stagnant but “good enough” and lighter than alternatives, others strongly prefer richer tools like iTerm2/Ghostty.
  • Windows Terminal ranks surprisingly high and is praised for tabs, theming, and “smart” Ctrl+C/Ctrl+V (copy when selected, interrupt otherwise), which many find ergonomically superior.
  • Linux/X11/Wayland users mention primary selection (select + middle‑click) as very convenient, but keyboard‑only users push back.

Ghostty strengths and missing pieces

  • Strong enthusiasm for Ghostty’s theme picker and overall UX; some call it mind‑blowing compared to Alacritty.
  • Major commonly cited gap: no native scrollback search yet. It’s on the roadmap but not “immediate,” which is a deal‑breaker for some and irrelevant for others (especially heavy tmux users).
  • Some note Ghostty’s higher memory usage vs minimalist terminals like Foot, likely due to its GTK + GPU approach.

Other notable terminals

  • Foot (Wayland-only) is lauded as extremely fast, lightweight, and responsive; users mention very low launch time and nice link‑opening UX.
  • Konsole’s high ranking is welcomed; users like its KDE integration, configurability, infinite scrollback backed by files, and Dolphin “open terminal here” integration.
  • Alacritty is still appreciated for speed and simplicity, but lack of images and long‑standing ligature issues push some to newer options.
  • xterm, st and forks are mentioned as having strong legacy features (Sixel, ReGIS, Tek 4010, patches), but are hard to compare because of patch culture and defaults.

Unicode, emojis, and TUI pain points

  • Some say they “never use non‑ASCII,” but others argue Unicode is unavoidable in filenames, logs, non‑English text, and tools that output emojis.
  • Several users actively dislike emojis in terminals and CLIs, viewing them as noisy, hard to grep, and visually unclear; they prefer making them optional.
  • Developers building TUIs describe Unicode as a minefield: different terminals cluster graphemes differently, ambiguous widths vary, and there is no reliable way to know how many columns a sequence will occupy (see the illustration after this list).
  • Debate over “correct” handling of variation selectors (especially VS‑15 for text presentation of emoji): some claim only a couple of terminals “get it right”; others counter that these behaviors don’t match the Unicode spec and can create layout bugs.
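
An illustration of the grapheme-vs-columns gap using the standard Intl.Segmenter API; the samples are a combining accent, an emoji ZWJ sequence, and a VS‑15 text-presentation selector like those debated above:

```typescript
// Grapheme clustering is well-defined; how many terminal columns each cluster occupies is not.
const seg = new Intl.Segmenter('en', { granularity: 'grapheme' });
const samples = ['e\u0301', '\u{1F469}\u200D\u{1F469}\u200D\u{1F467}\u200D\u{1F466}', '\u2602\uFE0E'];
for (const s of samples) {
  const graphemes = [...seg.segment(s)].length;    // one user-perceived character per sample
  console.log(JSON.stringify(s), 'code points:', [...s].length, 'graphemes:', graphemes);
  // The column width an emulator assigns to each of these still varies in practice.
}
```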

Security, trust, and conservatism

  • A few users are reluctant to adopt “non‑standard” or newer terminals because they type passwords into them and want maximum trust, sticking to system defaults like Konsole or macOS Terminal if they are “good enough.”

Philosophy and future directions

  • Some are excited about richer terminal capabilities: images, variable‑sized text, even embedded GUI/Wayland compositors; others worry about turning terminals into half‑browsers and prefer strict, simple, VT‑style behavior.
  • There’s nostalgia for “real” terminals and classic scripting‑heavy tools (e.g., vintage BBS clients, DEC hardware), contrasted with frustration that modern terminals still emulate decades‑old models instead of adopting a clean, modern text/graphics protocol.

OpenAI signs $38B cloud computing deal with Amazon

Deal scope, Azure “exclusivity,” and AWS/Bedrock

  • Confusion over what the $38B actually means: is it firm spend, an upper-bound “option,” or largely PR?
  • Thread notes OpenAI’s recent statement that “API products” remain Azure‑exclusive, while “non‑API products” can run on any cloud, and people debate whether Bedrock counts as “API.”
  • Some think OpenAI models won’t appear as native Bedrock endpoints; AWS is likely just hosting training/inference for OpenAI’s own products.
  • Others note two OpenAI open‑weight models already on AWS and see the wording as intentionally fuzzy.

Financial realism and risk

  • Major concern: OpenAI reportedly has ~$13–20B annual revenue but has committed to compute with a stated total cost of ~$1.4T over several years.
  • Skeptics see a looming cash crunch and compare this to WeWork’s long‑term lease commitments and creative metrics.
  • Others argue these are multi‑year, cancellable deals, partly paid in equity, and sized assuming steep revenue growth, not immediate cash.
  • Some think AWS itself is stretching its balance sheet and power capacity for AI buildouts, which could be painful if demand stalls.

Bubble vs. rational bet

  • Many frame this as peak‑bubble behavior: circular financing, “monopoly money” figures, and explicit fears of a crash larger than dot‑com that could drag the broader tech market and retirement funds.
  • Counter‑view: huge infra bets (like early Google storage) built lasting moats; OpenAI or its peers could similarly dominate compute or ad‑driven information services.
  • Others argue this isn’t just an “AI bubble” but a symptom of excess global liquidity needing somewhere to go.

Strategic motives and competitive landscape

  • Some see Amazon as late to the hype cycle but using its power and datacenter footprint to fill gaps Microsoft can’t (power constraints, capacity).
  • Speculation that the real aim is to lock Anthropic and other rivals out of AWS capacity or at least prioritize OpenAI on Nvidia GPUs.
  • The omission of Trainium in the announcement is read by some as a signal OpenAI didn’t like AWS’s custom chips; others think it’s just capacity juggling (Anthropic -> Trainium, freeing Nvidia for OpenAI).

Business model and monetization

  • Recurrent worry: how do you pay for all this? Current revenue, even if fast‑growing, seems small vs. capex.
  • One camp is convinced the endgame is Google‑style ad monetization: product recommendations, ad slots embedded in AI answers, and a search‑replacement business.
  • Others argue ads would undermine trust (“sponsored answers”), turning AI into another enshittified channel and eroding its core value.
  • Debate whether OpenAI can realistically displace Google in search/ads, given Google’s ecosystem, data, and first‑party hardware.

Impact on Google, Anthropic, and the rest of tech

  • Some insist OpenAI is an existential threat to Google’s search+ads; others reverse it, calling Google the real existential threat to OpenAI via token pricing and its own LLMs.
  • Anthropic is variously described as “enterprise leader” or an “also‑ran,” with disagreement over whose usage metrics matter (API vs. consumer vs. Office integrations).
  • Concern that mid‑tier SaaS (CRMs, HR, productivity tools) and smaller tech firms that over‑leveraged into “AI” could be wiped out if the cycle turns.

User‑level value vs. macro skepticism

  • Several commenters report large personal productivity gains from LLMs (especially coding assistants), claiming multiples of ROI on $20–200/month spend.
  • Others push back that this is subjective, hard to measure, and doesn’t automatically translate into sustainable profits for providers.
  • There’s tension between genuine local usefulness and doubts that this usefulness scales to justify trillion‑dollar infra and valuations.

Financing structures and systemic concerns

  • Discussion of datacenters being financed via special‑purpose vehicles and off‑balance‑sheet debt, drawing parallels to pre‑crisis financial engineering.
  • Some note that as long as public markets and institutional investors keep buying the story (and shares), the circular machine can run; if sentiment flips, the unwind could be brutal.

Türkiye will not sell rare earth elements to the USA

What the statement about rare earths actually means

  • Several commenters argue the article misrepresents the minister’s quote.
  • The claim is that the minister was responding to rumors that rare earth elements are already being sold to the US, saying that no such sale or agreement exists.
  • Interpretation in the thread: it’s about not having made a rare-earth-related agreement with the US yet, not a blanket future export ban.

Turkish domestic politics and natural resources

  • Commenters familiar with Turkish politics describe recurring pre‑election “discoveries” of big resource deposits (gas, minerals) as a political tool.
  • These announcements are seen as overhyped but usually based on some real deposits.
  • There is skepticism that promises not to involve foreign companies or protect the environment will actually be kept, citing previous mining disasters and deforestation protests.

US–Turkey relations and alliances

  • Some see Turkey’s stance as evidence that it is an unreliable ally or “playing both sides.”
  • Others counter that the US has treated Turkey poorly: blocking F‑35 sales after the S‑400 deal with Russia, and not supporting Turkey in Syria.
  • The US–Kurdish relationship and the absence of a Kurdish state are mentioned as part of this fraught triangle, with disagreement over whether the US ever “owned” the right to partition Iraqi territory.

Rare earth deposits vs processing

  • Multiple comments stress that many countries have rare earth ores; the bottleneck is refining and processing capacity.
  • China dominates processing because it has invested heavily and tolerates severe environmental damage.
  • Others note that not all countries have economically viable heavy rare earth deposits; resources are not the same as reserves.
  • The US has significant ore but limited refining, largely due to cost, pollution, and political resistance.

Why this matters for “hacker”/tech audiences

  • Rare earths are described as critical inputs for electronics and high‑tech manufacturing; supply disruptions could severely impact the US tech industry.

Debate over the name “Türkiye” vs “Turkey”

  • A long tangent debates whether English‑language media should adopt “Türkiye.”
  • One side: countries have the right to define their own names; using “Türkiye” is respectful and aligns with UN usage.
  • Other side: English speakers aren’t obliged to change; diacritics are hard to type; this is seen as nationalist posturing driven by the current Turkish leadership.
  • Comparisons are made to Germany/Deutschland, Belarus/White Russia, India/Bharat, etc., with no consensus reached.

Tech workers' fight for living wages and a 32-hour workweek is a battle for all

Scope and fairness of a 32‑hour workweek

  • Some see 32 hours with no off‑hours work as “very short”; others argue tech is a good starting point because workers have leverage.
  • Counterpoint: push should begin in harsher, lower‑paid sectors (manufacturing, trades), not relatively cushy tech.
  • Several commenters already work 4×8 or 32h for reduced pay and say it dramatically improves mental health and life quality.
  • Key divide: advocates assume “same pay for fewer hours”; critics insist less work should mean less total compensation.

Pay, overtime, and inequality

  • Manufacturing examples: overtime is culturally entrenched and used to “inflate” paychecks; some argue this masks low base wages and doesn’t increase real output.
  • Others push back: overtime and investing are legitimate paths to improving net worth; stories of intergenerational upward mobility challenge “locked into your class” claims.
  • Disagreement over whether investing is “snake oil” for the middle class versus a realistic path to a comfortable retirement.

Productivity, hours, and automation

  • Debate whether productivity gains should translate into shorter weeks or more “stuff.”
  • Some argue diminishing returns: 32 hours of knowledge work may equal 40 in output; extreme hours (e.g., 80) can reduce productivity.
  • Others emphasize infinite demand (“there is always more work”) and note pre‑industrial or 19th‑century work patterns are not straightforward guides.
  • On automation/AI, some propose systematically converting efficiency gains into reduced hours for all; critics cite competition, investor incentives, and enforcement challenges.

Global competition and offshoring

  • Concern that unilateral 32‑hour standards will repeat manufacturing’s offshoring: cheaper, longer‑hour labor abroad undercuts domestic workers.
  • Supporters note many countries have <40h averages and still “thrive,” often citing Northern Europe; skeptics question comparability and point to debt, inequality, and U.S. security umbrella.
  • WFH is seen by some as an own‑goal: easier global hiring weakens domestic tech workers’ bargaining power.

Unions, policy, and perceptions of tech workers

  • Several insist market forces alone won’t deliver shorter weeks; unions or legislation are needed. Others argue mandated caps harm competitiveness and individual choice to work more.
  • Skepticism that U.S. tech workers are truly struggling, especially when demanding both “living wages” (e.g., $85k in NYC) and 32 hours; some fear this will be seen as entitlement and alienate other workers.

The problem with farmed seafood

Alternative Feeds for Aquaculture

  • Black soldier fly larvae repeatedly cited as a promising fish feed: turn agricultural waste into protein and fertilizer, can be farmed at scale, and are already used commercially.
  • Duckweed and algae suggested as fast-growing, high‑density protein sources that could replace fishmeal and provide omega‑3s, especially if production is automated.
  • Some confusion and correction around “algae oil”: industrial omega‑3 oil often comes from Schizochytrium (a non‑photosynthetic microbe fed on sugars/waste), so its climate benefit is mainly replacing fish oil, not carbon capture.
  • Concern that algae’s “100% consumable mass” is overstated and that nutrient content, taste, and digestibility also matter.

Carbon, Climate, and Food Systems

  • Debate over whether algae‑based feeds materially help with CO₂: most ingested carbon is exhaled as CO₂; real gains come from not burning fossil carbon and from protecting natural sinks.
  • Some argue marine systems (sinking biomass, shells) are important long-term carbon sinks that fishing can disrupt.
  • Side discussion on livestock emissions, especially cattle and grass‑fed vs grain‑fed beef, and how land-use change and methane dominate their climate impact.
  • Broader point: greenhouse gas metrics alone don’t capture all sustainability issues (e.g., habitat loss, runoff, pesticides).

Wild vs Farmed Fish and What to Eat

  • Several commenters argue we should “just eat forage fish” (sardines, anchovies, sprats) directly instead of feeding them to farmed salmon, with the bonus of lower mercury.
  • Taste and texture are a major criticism of farmed salmon: often described as paler, mushier, fattier in the “wrong” way, and less flavorful than wild. Others report the opposite experience or prefer consistent premium farmed lines.
  • Farmed shrimp from low‑regulation countries described as heavily polluting and poorly managed; one inland Spanish shrimp farm is discussed as a technical curiosity.
  • Some see farmed fish (especially in regulated regions) as relatively low‑GHG protein; others call it “garbage” in practice due to antibiotics, feed quality, and ecosystem impacts.

Aquaculture Practices, Policy, and Certification

  • Core problem framed as “how we farm” more than the concept itself: sea‑lice spread, chemical treatments, pollution, and feedback loops harming wild stocks.
  • Chinese distant‑water fleets provoke a heated subthread: one side calls them de facto “acts of war”; the other stresses they mostly fish in international waters under existing law and are not uniquely bad compared to other nations’ fleets.
  • Best Aquaculture Practices (BAP) mentioned as a certification system trying to tighten standards for farms and feed mills over time.

Alternatives and Future Directions

  • Environmental NGOs criticized for saying only what not to do; defenders list actions like documenting illegal fishing and promoting gear/method changes.
  • Some advocate drastically reducing or eliminating seafood consumption; others say this is unrealistic and argue for “harm reduction” via better species and production choices.
  • Lab‑grown/cultivated salmon is described as promising but currently expensive, with open questions about feed inputs and scalability.
  • Insects for feed (especially black soldier fly larvae) seen as low‑tech, already‑workable for certain species (trout, chickens).
  • Oysters and other farmed shellfish praised as net water cleaners and ecologically positive; others argue they’re overpriced, overhyped “poor man’s food turned luxury.”

Is Health Insurance Even Worth It Anymore?

ACA, HSAs, and Pre-Existing Conditions

  • Some argue “repeal Obamacare, go back to HSAs,” claiming pre‑ACA individual plans were cheaper and regulation drove costs up.
  • Others counter that pre‑ACA was only “working” for the healthy; denial and pricing of pre‑existing conditions were described as a moral catastrophe.
  • Several note ACA did not kill HSAs, but Bronze/Catastrophic plan design often made them incompatible until very recently.
  • A middle view: ACA’s guarantee of coverage for pre‑existing conditions is its main success; much of the surrounding regulatory machinery is seen as overcomplicated and cost‑inflating.

Is Going Uninsured Rational?

  • Young, healthy people increasingly consider skipping insurance, paying cash for routine care, and relying on bankruptcy or debt settlement after catastrophes.
  • Commenters warn this only “works” if you stay lucky; many share stories of sudden cancer, surgeries, or chronic disease that would have instantly destroyed savings.
  • Some suggest leaving the U.S. or using medical tourism; others are tied to family or point out the complexity and risk of foreign systems.

Catastrophic vs Routine Coverage

  • Strong sentiment that U.S. “insurance” is really prepayment for routine care plus catastrophic coverage, which bloats costs and bureaucracy.
  • Many want true catastrophic-only plans with high deductibles and HSAs; others note Bronze plans are already close to that but still very expensive because underlying care is expensive and risk pooling is broad.
  • Several emphasize that insurance is inherently a wealth transfer from young/healthy to old/sick; you can’t avoid that if you want a functional system.

Direct Primary Care and Partial Workarounds

  • Direct Primary Care (subscription primary care) is widely praised: more time with doctors, dramatically lower prices for labs, and no insurance games.
  • However, commenters stress DPC doesn’t address big-ticket items (surgeries, hospitalizations, biologic drugs), so it must be paired with some form of catastrophic insurance.

Incentives, Pricing, and Profit

  • Many describe U.S. healthcare as a “capital extraction” system: fragmented billing, inflated list prices, coding games, and overuse of marginal or unnecessary procedures.
  • Others note major insurers’ profit margins are modest and argue most excess money flows to providers, hospitals, pharma, and system-wide overhead, not just insurers.
  • Negotiated rates are viewed as one real value insurers provide; without them, cash payers often face absurd “retail” prices.

International Comparisons and Universal Care

  • Multiple comments contrast U.S. outcomes and costs with universal or single‑payer systems, arguing those countries spend less per person and get better life expectancy.
  • Skeptics raise concerns about wait times and rationing, but data-linked replies say delays are mainly for elective procedures, while Americans often get no care due to cost.
  • Political resistance to “socialism,” lobbying, and generational interests (e.g., Medicare vs working-age costs) are blamed for blocking systemic reform.

Moral and Social Dimensions

  • Debate over “paying for others’ bad choices” (obesity, smoking, etc.) runs into pushback: many illnesses are genetic, environmental, or structurally driven, and moralizing is seen as both inaccurate and cruel.
  • Several highlight how fear of losing insurance locks people into jobs and likely suppresses entrepreneurship.
  • Personal stories—medical bankruptcy despite “good” insurance, constant battles over approvals, or effortless Canadian hospital discharges with no billing—underscore both the financial and psychological burden of the U.S. model.

Google suspended my company's Google cloud account for the third time

Blame, risk tolerance, and “why not just leave GCP?”

  • Many commenters argue that after the second and third suspensions, staying on GCP is the company’s responsibility: they’re prioritizing convenience over reliability.
  • Others push back that most of their customers are on GCP, and alternatives (OIDC, API keys, per-customer service accounts) add significant setup or usability burden for customers.
  • There’s disagreement over how “cumbersome” OIDC really is: some say it’s scriptable and manageable; others say a 7‑step setup is guaranteed to be misconfigured by customers.
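
As a rough illustration of what "scriptable" could mean here, the sketch below automates a customer-side OIDC trust setup with gcloud. It assumes the access pattern under discussion is GCP workload identity federation, which the thread summary does not confirm, and every project, pool, issuer, and service-account name is a hypothetical placeholder.

```python
"""Hypothetical sketch: scripting a customer-side OIDC trust setup on GCP
(workload identity federation) by shelling out to gcloud. All names below
are placeholders, not details from the linked post."""
import subprocess

PROJECT = "customer-project"              # placeholder customer project ID
POOL, PROVIDER = "vendor-pool", "vendor-oidc"
ISSUER = "https://tokens.vendor.example"  # placeholder vendor token issuer
SA = f"vendor-access@{PROJECT}.iam.gserviceaccount.com"  # pre-created, least-privilege SA

def gcloud(*args: str) -> str:
    """Run a gcloud command and return its trimmed stdout."""
    out = subprocess.run(["gcloud", *args], check=True,
                         capture_output=True, text=True)
    return out.stdout.strip()

# IAM bindings on the pool need the numeric project number, not the project ID.
project_number = gcloud("projects", "describe", PROJECT,
                        "--format=value(projectNumber)")

# 1. Create a workload identity pool to hold the vendor's external identities.
gcloud("iam", "workload-identity-pools", "create", POOL,
       f"--project={PROJECT}", "--location=global")

# 2. Register an OIDC provider that trusts tokens from the vendor's issuer.
gcloud("iam", "workload-identity-pools", "providers", "create-oidc", PROVIDER,
       f"--project={PROJECT}", "--location=global",
       f"--workload-identity-pool={POOL}",
       f"--issuer-uri={ISSUER}",
       "--attribute-mapping=google.subject=assertion.sub")

# 3. Let identities from that pool impersonate the scoped service account.
gcloud("iam", "service-accounts", "add-iam-policy-binding", SA,
       f"--project={PROJECT}",
       "--role=roles/iam.workloadIdentityUser",
       "--member=principalSet://iam.googleapis.com/projects/"
       f"{project_number}/locations/global/workloadIdentityPools/{POOL}/*")
```

Whether "a few scripted commands" or "a seven-step walkthrough customers will get wrong" is the fairer framing is exactly the disagreement in the thread.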

Google Cloud as an unreliable business partner

  • Strong consensus that GCP (and Google generally) is risky for anything critical unless you’re a very large customer with named support contacts.
  • Multiple anecdotes: accounts locked over trivial billing issues, opaque suspensions for ads or app submissions, “Login with Google” suddenly disabled, problems changing verified addresses, and long outages of Workspace with no effective recourse.
  • People note the fear of losing not just infrastructure, but also Gmail, Google Fi, Android dev access, or YouTube income if an automated system flags you.

Automation, scale, and support failures

  • Discussion centers on Google’s heavy reliance on automated abuse detection: if the system flags you, you’re out, often with only vague ToS language.
  • Some see this as an inevitable consequence of massive scale and fraud pressure; others say it’s a choice—Google could afford meaningful human review but optimizes margin and liability instead.
  • Several note that Google’s own docs recommend patterns (like shared service accounts) that appear to be punished by internal anti‑abuse systems, implying deep organizational disconnect.

Legal, regulatory, and structural responses

  • Commenters debate whether affected businesses should sue (breach of contract, tortious interference), or at least use small-claims court to force escalation beyond tier‑1 support.
  • Others call for regulation of “critical” identity/email providers and limits on purely automated decisions (citing GDPR as an example).

Broader lessons: cloud and dependency

  • Repeated advice: don’t rely on any hyperscaler or single platform for irreplaceable data or core identity.
  • Suggestions include owning your domain, using smaller or multi-vendor email/infra providers, and avoiding social logins where business continuity matters.

Why Nextcloud feels slow to use

Overall sentiment

  • Many agree with the article’s premise: Nextcloud “feels slow,” especially via the web UI, despite being feature‑rich and widely useful.
  • People still value it as one of the few full, self‑hostable Google Drive / MS365–style suites, especially for files, calendars, contacts, and basic collaboration, but often describe a love–hate relationship.

Frontend performance & JavaScript bloat

  • The large JS payload (15–20 MB, ~4–5 MB compressed) is heavily criticized; some call it “outrageous” for a calendar/files UI and note that Google Calendar uses significantly less.
  • Others argue size alone isn’t the main problem: the real issue is the many small requests and waterfall loading patterns (e.g., ~120+ requests for the calendar view, with lots of per‑calendar and per‑feature calls); a sketch of why that pattern hurts follows this list.
  • Complaints include: each app as its own SPA with duplicated dependencies, poor bundling/minification, loading everything on every page, and excessive client‑side work for simple CRUD UIs.
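
To make the "waterfall" complaint concrete, here is a generic sketch (not Nextcloud's actual API or code) of fetching N per-calendar resources sequentially versus concurrently; the endpoint shape and names are invented for illustration.

```python
"""Generic illustration of request waterfalls vs. batching; the endpoint
shape is made up and is not Nextcloud's real API."""
import asyncio
import aiohttp

async def load_waterfall(session: aiohttp.ClientSession,
                         base: str, ids: list[str]) -> dict:
    """Each request waits for the previous one: total time ~ N * round-trip."""
    out = {}
    for cal_id in ids:
        async with session.get(f"{base}/calendars/{cal_id}/events") as resp:
            out[cal_id] = await resp.json()
    return out

async def load_batched(session: aiohttp.ClientSession,
                       base: str, ids: list[str]) -> dict:
    """All requests in flight at once: total time ~ slowest single round-trip."""
    async def one(cal_id: str):
        async with session.get(f"{base}/calendars/{cal_id}/events") as resp:
            return cal_id, await resp.json()
    return dict(await asyncio.gather(*(one(c) for c in ids)))
```

With, say, 100 calls at 50 ms of round-trip latency each (illustrative numbers), the waterfall path spends roughly five seconds waiting on latency alone, no matter how small each response body is; batching or bundling endpoints attacks that, while shrinking the JS payload does not.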

Backend / architecture concerns

  • Several describe the core as “encrusted layers” of historical PHP/Owncloud code: lots of DB touches for trivial actions, heavy reliance on Redis/cron to paper over design issues, and fragile performance that needs careful tuning (DB on separate disk, Redis, PHP‑FPM).
  • Some see the modular “app” system and 350+ repos as a source of incoherence and over‑engineering; others defend it as the reason Nextcloud can replace many services at once.

Client apps & reliability

  • Mobile clients, especially for photo backup, draw strong criticism: reports of WebDAV lockups, stalled or duplicate uploads, confusing behavior when deleting local photos, and even data loss.
  • Many abandon the official clients and use generic WebDAV, Syncthing, or FolderSync for sync instead; WebDAV itself is described as brittle for large transfers.
  • Desktop sync is generally liked and used heavily; many treat Nextcloud more as a NAS + sync engine and avoid the web apps.

Maintenance & “production” use

  • Experiences range from “rock solid for years” (especially for small businesses with a few users on the AIO/Docker images) to “every upgrade breaks something,” leading some to freeze versions or abandon it entirely.
  • It’s seen as “good enough” for family or small‑company file/groupware use, but not at the polish or reliability level of big‑tech clouds.

Alternatives & specialized stacks

  • Many commenters now prefer “one tool per job” over an all‑in‑one:
    • Files/sync: Syncthing, Seafile, Resilio, OpenCloud/OCIS, Filebrowser, Copyparty, BewCloud, SMB/rsync.
    • Photos: Immich, Ente, Nextcloud Memories.
    • Calendar/contacts: Radicale, DAV servers.
    • Tasks: Vikunja.
  • Tradeoff noted: lighter, faster, simpler tools vs. Nextcloud’s convenience of a single integrated, SSO‑backed platform.

I analyzed 180M jobs to see what jobs AI is replacing today

Software engineering job security and demand

  • Several commenters agree with the article that software engineering remains relatively secure versus other white‑collar jobs, at least for the next 10–15 years.
  • Others note that many engineers are currently employed on the “AI boom” thesis; if those expectations cool, cascading layoffs and harder job searches could happen even without full automation.
  • There’s concern that IT headcount growth has slowed, especially in large offshore markets, leaving many new grads under- or unemployed.

AI tools vs programmers and compilers

  • Some compare LLM coding tools to compilers or higher-level languages: historically these increased programmer productivity rather than eliminating programmers.
  • Others strongly disagree, arguing LLMs let non-programmers produce working software in ways compilers never did, citing degrees or projects largely completed via ChatGPT.

Who is an “engineer”?

  • Long subthread debates whether using AI to build things makes users “software engineers.”
  • One side uses a broad dictionary definition (design/build/maintain systems, no credential needed), treating prompt-writing and LLM orchestration as software design.
  • The other side stresses profession, training, responsibility, and outcomes—likening “AI users” to flight-sim hobbyists vs licensed pilots, or to mechanics vs engineers.

Methodology and data-quality criticism

  • Multiple commenters argue that job postings ≠ jobs: ghost listings, duplicates across sites, reposting, and unknown fill rates heavily pollute the data.
  • Critics say the analysis conflates changes in posting counts with “jobs AI is replacing,” without causality or adjustment for layoffs/attrition.
  • Short time window (2024→2025) and post‑pandemic volatility make trend attribution to AI especially questionable.
  • Lack of absolute counts (only percentages) and missing categories (e.g., sales roles) are flagged as major gaps.
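
To make the "only percentages" complaint concrete, here is a tiny worked example with invented numbers (not figures from the article): a small category can show a dramatic percentage drop while a large category's modest drop removes far more actual listings.

```python
"""Illustrative arithmetic only; categories and counts are invented,
not taken from the article's dataset."""
postings = {
    # category: (count last year, count this year)
    "niche_role": (2_000, 1_200),      # -40%, but only 800 fewer postings
    "large_role": (500_000, 485_000),  # -3%, but 15,000 fewer postings
}
for name, (before, after) in postings.items():
    pct_change = (after - before) / before * 100
    print(f"{name}: {pct_change:+.1f}% ({after - before:+,} postings)")
```

Ranked by percentage change alone, the niche role looks like the bigger story, even though the large category lost nearly twenty times as many postings.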

Sector-specific observations

  • Frontend roles: many report LLMs are very strong at UI/React work; smaller firms can “vibe code” UIs, larger ones boost FE productivity and hire less.
  • Mobile: decline may reflect offshore shift and cross-platform tools; LLMs seem particularly competent at React Native.
  • Creative roles: demand for “executors” falls while director-level creative work rises, which commenters read as induced demand at the top combined with cost-cutting in rank-and-file roles.
  • Security: mixed views—some see declining postings, commoditization, and “snake oil”; others report booming consulting work and argue engineers are being pushed to own more security themselves.
  • Declining postings for nursing and other jobs AI can’t plausibly replace are cited as evidence that broader economic factors, not AI, drive many of the changes.

AI as productivity multiplier vs headcount reducer

  • Several practitioners say AI makes them far more productive and shifts work toward “babysitting” or supervising models, leading management to attempt more projects rather than cut staff.
  • Others argue that when firms must choose between guaranteed cost reduction (fewer people) and speculative growth (more projects), they’ll often cut headcount and use AI as the justification.

Offshoring and regional shifts

  • Some believe big tech is simultaneously cutting Western headcount and expanding AI and engineering hubs in India, pointing to recent investment announcements and headcount growth there.
  • It’s unclear from the discussed dataset whether declines in US postings reflect automation, offshoring, or general belt‑tightening.