Sam Altman's response to Molotov cocktail incident

Violence against Altman and his family

  • Almost everyone agrees that throwing a Molotov cocktail at a home (even “just” at the gate) is wrong and dangerous, especially with a child present.
  • Some still argue that when elites profit from large‑scale harm (war, economic dispossession), they shouldn’t be surprised by violent blowback, even if that blowback isn’t morally justified.
  • A minority explicitly flirts with or endorses political violence, while others push back hard that this is unethical and corrosive.

Altman’s blog post: authenticity vs PR

  • Many see the post as a calculated attempt to humanize Altman, exploit the incident for sympathy, and reframe criticism as dangerous “incendiary” rhetoric.
  • The family photo is widely viewed as manipulative “baby‑on‑board” framing; some call it using his child as a shield.
  • Several note that he doesn’t contest factual claims in the recent New Yorker profile, only its tone, which they see as telling.
  • A minority finds the post well‑written and appreciates that a powerful CEO publicly affirms limits on concentration of power, even if they doubt he believes it.

AI, power, democracy, and OpenAI’s actions

  • There is deep skepticism toward statements like “AI must be democratized” and “prosperity for everyone,” given OpenAI’s closed models, abandonment of open‑source roots, concentration of wealth, and aggressive lobbying.
  • Many contrast his rhetoric about avoiding concentrated power with:
    • Pursuit of US DoD contracts to weaponize AI and assist in kill chains.
    • Support for liability regimes that could shield AI developers from mass‑harm consequences.
  • Some argue he genuinely wants guardrails and new policy; others say the public position is a “permission structure” while behind‑the‑scenes lobbying preserves control.

AGI narrative and existential framing

  • Critics mock references to “once you see AGI you can’t unsee it,” arguing that current systems remain autocomplete‑like and that claiming to “have seen AGI” is either marketing or delusion.
  • Some fear that apocalyptic AGI talk both stokes public anxiety and serves to justify elite control (“ring of power” rhetoric) while continuing rapid deployment.

AI, jobs, and social unrest

  • Many connect the attack to broader anger: layoffs, stagnant living standards, and CEOs openly boasting that AI will soon replace many jobs.
  • There’s pessimism that governments will build safety nets (e.g., UBI) in time; several foresee mass unemployment, harsher policing and surveillance, and eventual unrest or “AI wars.”
  • Others argue that past technology shocks didn’t cause permanent mass unemployment and expect society to adapt through new job categories.

Open vs closed models and who “controls the future”

  • Some question why Altman/Anthropic talk as if they “control humanity’s future” when strong Chinese and open‑weight models exist, and gaps may be months, not years.
  • Others stress frontier labs’ compute, data, and regulatory leverage as real moats, and note that open models often distill from proprietary ones (though this claim is contested).

Media, criticism, and blame

  • Altman’s description of the New Yorker piece as “incendiary” is widely disputed; many found it careful, sourced, and merely unflattering.
  • Commenters object to any insinuation that critical journalism caused the attack, seeing it as an attempt to equate scrutiny with “stochastic terrorism” and chill legitimate reporting.

Meta: HN, norms, and moderation

  • Several are alarmed by the level of vitriol and quasi‑justifications of violence in the thread; at least one long‑time participant says it makes them consider leaving.
  • The moderator intervenes repeatedly, flags explicit calls for violence as unacceptable, and describes the thread as a “mob” moment that violates site norms.