OpenAI, the US government and Persona built an identity surveillance machine

User reactions and data‑deletion attempts

  • Several commenters verified with Persona via LinkedIn or Discord and now seek remedies.
  • Suggested steps (especially for EU residents): request access to one’s data, demand deletion, object to AI‑training uses, and submit Persona’s DSAR form.
  • Skepticism that GDPR/DPAs will enforce anything; expectation that companies may ignore or stall requests.
  • Persona’s canned reply shifts responsibility to “controllers” (e.g. LinkedIn) for most data, leading to frustration (“TL;DR we’re not responsible”).

Persona–OpenAI–government stack & security incident

  • Thread discusses a leaked Persona source map from a FedRAMP‑authorized environment, revealing architecture of an ID/surveillance stack used by US agencies and OpenAI.
  • One view: this is a concrete compliance failure on a sensitive government system.
  • Counter‑view: it was a non‑production endpoint, and source‑map exposure by itself isn’t a serious security issue; framing it as a “hack” is overblown.
  • Persona’s incident write‑up and “damage control” are linked; some think it’s solid, others see classic PR semantics (“we weren’t hacked”).

Normal KYC or something more?

  • Some argue this is just standard KYC/AML infrastructure: passports, face scans, fraud checks.
  • Others say there is “nothing normal” about extending this level of identity surveillance to AI access and general web services, with risk scoring, long‑term biometric retention, and law‑enforcement tooling.
  • Confusion over why AML‑style checks are needed for AI use at all.

Surveillance, dystopia, and broken social contracts

  • Many see this as another step toward a panopticon: global platforms, state security, and corporate vendors forming a “Super Leviathan”.
  • Debate over whether collapse/revolt is still possible or whether technology and automation have removed previous checks on elite power.
  • Concern that AI reduces the need for large workforces, removing human “friction” that used to block the worst ideas.

Ethics and responsibility of engineers

  • Repeated question: why do engineers build harmful systems?
  • Answers: money, lack of empathy, belief they’re “stopping bad guys”, compartmentalization, and replaceability (H‑1B/outsourcing used as leverage).
  • Some call for tracking and socially shunning individuals who build surveillance tools; others warn this risks scapegoating and fatalism.

Politics, censorship, and social media surveillance

  • Disagreement on whether surveillance creep is primarily a right‑wing project, bipartisan, or simply state‑driven regardless of party.
  • Fivecast ONYX is cited as an example of mass social‑media scraping and risk scoring; concern that lacking social media could itself become suspicious.
  • EU commenters doubt their governments will meaningfully resist US surveillance vendors despite prior NSA scandals.

Convenience, agency, and resistance

  • One camp blames user appetite for convenience (cars, delivery, seamless onboarding) as the fuel for surveillance capitalism.
  • Others argue the real problem is unregulated corporations and intelligence agencies exploiting that desire.
  • Suggested responses: local organizing, pushing for deletion “jubilees,” preferring homegrown or open‑source solutions, and refusing voluntary ID verification where possible.

Meta: article style and site UX

  • Strong reactions to the site’s retro UI: autoplay music, animated cat cursor, OS emulation. Some love the “old web” vibe; others find it unusable on mobile or in public.
  • A few think parts of the text feel like LLM‑generated prose or conspiratorial in tone; others find it dense but clear and information‑rich.