OpenAI – How to delete your account

Context: OpenAI, Anthropic, and the DoD/DoW

  • The thread is driven by OpenAI’s new Defense Dept. contract, which came immediately after Anthropic was labeled a “supply chain risk” and dropped for refusing to allow its models to be used for mass surveillance of Americans or in autonomous weapons.
  • Some commenters believe OpenAI accepted essentially the same terms Anthropic rejected, thereby legitimizing coercive government behavior toward private AI labs. Others cite reporting claiming OpenAI’s contract contains the same formal prohibitions Anthropic wanted, and say the story “makes no sense” or is political theater.

Why People Are Deleting / Boycotting

  • Many are deleting or cancelling as a protest against:
    • Military integration of AI (surveillance, targeting, “AI in the kill chain”).
    • Government pressure on labs and perceived “fascistic” use of supply‑chain rules to punish dissent.
    • OpenAI’s leadership, seen as untrustworthy, ad-driven, and power-seeking, having abandoned the original “AI for science” mission.
  • For some it’s symbolic (“like voting”), about signaling norms and not personally funding what they see as warmongering or surveillance capitalism, even if the practical impact is small.

Skepticism and Pushback

  • Others call this “performative virtue signaling” with negligible effect, arguing:
    • All major powers and labs will weaponize AI anyway; the only question is who does it.
    • If OpenAI refused, a worse actor (e.g., ideologically aligned competitors) would step in.
    • Anthropic is also ethically compromised (copyright training data, user data collection, prior participation in controversial operations), so its moral high ground is questioned.
  • Several argue systemic change should come via regulation or political action, not consumer boycotts of individual labs.

Alternatives, Migration, and Usage Strategies

  • Many report cancelling OpenAI subscriptions and moving spend to Anthropic; some to Gemini or open‑weight models. A minority advocate boycotting all frontier labs with government contracts and using only local/open models.
  • Others prefer to keep using OpenAI’s free tier to “cost them money,” but critics note this still feeds metrics, training data, and ad inventory.
  • There’s active discussion of exporting ChatGPT history (and of tools that convert the export to markdown), and of the lack of any straightforward way to recreate those chats in Claude or other systems.
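The export-to-markdown tools mentioned above generally parse the `conversations.json` file inside ChatGPT’s data export. A minimal sketch of that conversion is below; the field names (`mapping`, `author.role`, `content.parts`, `create_time`) reflect the export layout as commonly documented, but OpenAI may change the format, so treat them as assumptions to verify against your own export.

```python
import json
from pathlib import Path


def conversation_to_markdown(conv: dict) -> str:
    """Render one conversation from a ChatGPT export as Markdown.

    Assumes the export's conversations.json layout: each conversation
    holds an unordered "mapping" of message nodes. Field names may
    differ across export versions.
    """
    lines = [f"# {conv.get('title', 'Untitled')}", ""]
    # Pull the message object out of each mapping node (some nodes are empty).
    nodes = [n.get("message") for n in conv.get("mapping", {}).values()]
    messages = [m for m in nodes if m and m.get("content", {}).get("parts")]
    # The mapping is unordered, so sort by creation timestamp.
    messages.sort(key=lambda m: m.get("create_time") or 0)
    for m in messages:
        role = m.get("author", {}).get("role", "unknown")
        # Some parts are non-text objects (e.g. images); keep strings only.
        text = "\n".join(p for p in m["content"]["parts"] if isinstance(p, str))
        if text.strip():
            lines += [f"**{role}:**", "", text, ""]
    return "\n".join(lines)


def export_to_markdown(export_path: str, out_dir: str) -> None:
    """Write one .md file per conversation in a conversations.json export."""
    conversations = json.loads(Path(export_path).read_text(encoding="utf-8"))
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, conv in enumerate(conversations):
        title = conv.get("title") or f"chat_{i}"
        safe = "".join(c if c.isalnum() or c in " -_" else "_" for c in title)
        path = out / f"{i:03d}_{safe}.md"
        path.write_text(conversation_to_markdown(conv), encoding="utf-8")
```

This only recovers the text; as commenters note, there is no supported way to re-import the result as live chat history in Claude or elsewhere, so the markdown files are best treated as an archive or as context to paste manually.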

Account Deletion Experience and Data Concerns

  • Multiple users hit errors and “too many attempts” blocks when trying to delete accounts; some see this as accidental load, others as a “dark pattern.”
  • App‑store subscriptions and phone‑number limits are noted gotchas: a phone number can only ever be used for three accounts, even after deletion.
  • The deletion policy’s carve‑out (“we may retain a limited set of data where required or permitted by law”) is widely distrusted; some interpret it as effectively “we’ll keep everything unless forced not to.”

Broader Ethics, Power, and HN Meta

  • Commenters debate whether the US already functions as an oligarchy, the role of the military in preserving vs. threatening freedom, and the inevitability of AI‑driven weapons and surveillance.
  • There’s discussion of AI‑worker organizing as a more effective lever than consumer action.
  • Some think the thread is overtly political and contrary to HN guidelines; others see it as a legitimate technical‑ethics topic where “voting with your wallet” is appropriate.