They stole my voice with AI

Legal and Rights Questions

  • Many argue existing law already covers AI voice impersonation via the “right of publicity” and cases like Midler v. Ford and Waits v. Frito-Lay: you can’t deliberately imitate a distinctive, well-known voice to sell products.
  • Others note coverage is patchy: limited to some U.S. circuits or states; legal standards for voice likeness, AI training, and “confusion in the marketplace” are still emerging.
  • Debate over whether a voice can be “owned” like a visual likeness or other personality rights; some see this as coherent IP/personality protection, while others find it conceptually absurd because sound patterns aren’t scarce.
  • Disagreement on First Amendment limits: commenters distinguish civil suits between private parties from government censorship, but some fear new laws could still overreach into protected speech.

Ethical Concerns and Power Imbalances

  • Strong sentiment that cloning a creator’s voice to pitch products—especially in their own niche—fraudulently trades on their reputation and implied endorsement.
  • Fears that studios and platforms will use contracts with unknown actors/creators to lock in perpetual AI likeness rights, then avoid paying once they’re famous.
  • Concerns about exploitation of underdogs vs protection of entrenched stars; some see residuals/royalties for AI use as a fairer model.

Technological Capabilities and Scale

  • Current tools can make convincing clones from seconds of audio; future systems expected to produce bespoke, highly optimized synthetic voices.
  • Key shift is scale and accessibility: what once required skill and effort (Photoshop, voice acting) is now cheap and mass-producible, changing risk profiles.

Impacts on Trust, Evidence, and Society

  • Widespread anxiety about deepfakes in politics, scams, pornography, and personal vendettas (e.g., fake racist or blasphemous statements, fabricated evidence in disputes).
  • Expectation that audio/video evidence will become less trusted in courts and public opinion; some jurisdictions already treat such media cautiously.
  • Broader “post-truth” worries: easier propaganda, mob justice, and long-term erosion of basic trust in digital media.

Proposed Regulations and Technical Fixes

  • Mention of new and proposed laws (e.g., California statutes, the U.S. “No AI FRAUD” Act) targeting unauthorized replicas, especially in porn, politics, and employment.
  • Ideas include: regulating uses (deceptive/commercial cloning) rather than tools; stronger libel/slander enforcement; mandatory provenance/cryptographic signing for media; platform-level detection and takedowns.
  • Skepticism about enforceability, especially with foreign providers and anonymous actors; some argue openness and broad familiarity with fakes may be the only realistic defense.
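The provenance/cryptographic-signing idea above amounts to attaching a verifiable tag to media at capture time, so later tampering can be detected. A minimal sketch of the concept, using Python's standard-library `hmac` as a stand-in (real provenance schemes such as C2PA use asymmetric signatures embedded in file metadata, not a shared secret; function names here are illustrative, not from any real standard):

```python
import hashlib
import hmac
import secrets

def sign_media(media_bytes: bytes, key: bytes) -> str:
    """Compute an HMAC-SHA256 tag over raw media bytes.

    The tag would travel alongside the file (e.g., in metadata),
    letting anyone with the key confirm the bytes are unchanged.
    """
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = sign_media(media_bytes, key)
    return hmac.compare_digest(expected, tag)

# Toy usage: a stand-in byte string plays the role of an audio clip.
key = secrets.token_bytes(32)
clip = b"\x00\x01pretend-this-is-audio"
tag = sign_media(clip, key)

assert verify_media(clip, key, tag)                 # untouched clip verifies
assert not verify_media(clip + b"edit", key, tag)   # any edit breaks the tag
```

Note the limitation that motivates the skepticism in the thread: a tag only proves the bytes match what the key-holder signed, not that the recording was authentic when signed, and it does nothing against providers or actors who simply never sign their output.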