I tried to prove I'm not AI. My aunt wasn't convinced

Shared secrets, shibboleths & social defenses

  • Many argue families should share offline “codewords” or shibboleths to authenticate unusual calls (e.g., emergencies, ransom scams).
  • Others prefer shared private memories instead of new passphrases, doubting people will reliably remember codes.
  • Several note scammers create urgency and emotional pressure so victims ignore or can’t recall codewords.
  • One concern: once a phrase is used over a network, it can be captured in breaches; the secret degrades over time.
  • Some already use shibboleth/duress words with alarm companies or joke about spy‑style countersigns.
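One way around the concern that a codeword "degrades" once it crosses a network is to never send the codeword at all, only a response derived from it. A minimal challenge‑response sketch with Python's standard `hmac` module is below; all names are illustrative, and of course no human can compute an HMAC over a phone call, so this only applies where devices mediate the exchange:

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret, agreed offline in person.
SHARED_CODEWORD = b"correct horse battery staple"

def make_challenge() -> bytes:
    # Verifier sends a fresh random nonce instead of asking for the codeword.
    return secrets.token_bytes(16)

def respond(challenge: bytes, codeword: bytes) -> str:
    # Prover answers with HMAC(codeword, challenge); the codeword itself
    # never crosses the wire, so eavesdropping or a breach of one exchange
    # does not reveal it.
    return hmac.new(codeword, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, codeword: bytes) -> bool:
    expected = hmac.new(codeword, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = respond(challenge, SHARED_CODEWORD)
assert verify(challenge, response, SHARED_CODEWORD)
```

Note that this does nothing against the urgency and emotional-pressure tactics mentioned above; it only removes the replay/breach weakness of a plaintext passphrase.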

Cryptography, cameras & technical fixes

  • Strong sentiment that society failed to adopt widespread cryptographic signing early enough (e.g., for email and VoIP).
  • Proposals: signed emails, signed media, tamper‑proof or authenticated cameras, and provenance chains from capture device to viewer (possibly blockchain‑backed).
  • Objections:
    • Device or key compromise and social engineering remain weak points.
    • People may let their own AI agents send signed messages.
    • Editing photos/video breaks simple hash‑based verification, and professional workflows rarely use straight‑from‑camera output.
    • Centralized signing by major platforms could further concentrate power.
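The "provenance chain" idea can be sketched with a toy hash chain, where each processing step records a hash of the previous record so tampering upstream invalidates everything downstream. This is only an illustration under assumed record fields, not any real standard (real proposals such as C2PA additionally sign records with device or editor keys, which this sketch omits):

```python
import hashlib
import json

def make_record(step: str, payload: bytes, prev_hash: str) -> dict:
    # One link in the chain: what step ran, a hash of its output,
    # and a pointer to the previous link's hash.
    record = {
        "step": step,
        "payload_sha256": hashlib.sha256(payload).hexdigest(),
        "prev": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

def verify_chain(records: list) -> bool:
    prev = "genesis"
    for r in records:
        if r["prev"] != prev:
            return False  # broken link
        body = {k: v for k, v in r.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if recomputed != r["hash"]:
            return False  # record was altered after the fact
        prev = r["hash"]
    return True

raw = b"raw sensor bytes"
r1 = make_record("capture", raw, "genesis")
r2 = make_record("crop", b"cropped bytes", r1["hash"])
assert verify_chain([r1, r2])
```

The objection above falls straight out of the sketch: any legitimate edit produces a new payload hash, so verification against the original capture fails unless every editor is itself a trusted, signing participant in the chain.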

Trust, scams & economic / social impact

  • Multiple anecdotes of hijacked email accounts and AI‑crafted, highly personalized scams.
  • Call‑center and “grandparent” scams now potentially enhanced with AI voice and video.
  • Some foresee shrinking trust spheres: only in‑person or local, verified relationships are considered reliable.
  • Predicted impacts include more in‑person interviews and travel, difficulty trusting online information, and heavier economic and psychological costs.
  • Others say misinformation, doctored media, and spam are old problems; AI mainly lowers cost and scales them.

Detection limits & human psychology

  • Many feel “spotting AI” visually is already unreliable; context and provenance matter more than artifacts like extra fingers.
  • Discussion of phenomena like analysis paralysis, flip‑flopping under doubt, and how manufactured urgency suppresses skepticism.
  • Some suggest “reverse captchas” using taboo or disallowed topics to distinguish humans from safety‑constrained corporate models, though this fails for uncensored/local models.

Regulation, watermarking & policy

  • Suggestions include legal requirements for AI watermarking and platform‑level labeling of synthetic media.
  • Critics argue bad actors will ignore rules, watermarking is technically fragile, and laws may only burden compliant companies.
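The fragility argument is easy to demonstrate with a toy text watermark that hides bits as zero‑width characters. Real schemes (e.g., statistical token biasing in LLM output) are far more sophisticated, but this purely illustrative sketch shows the failure mode critics point to: a routine normalization pass strips the mark entirely.

```python
import unicodedata

# Zero-width space and zero-width non-joiner encode 0 and 1 bits.
ZW0, ZW1 = "\u200b", "\u200c"

def embed(text: str, bits: str) -> str:
    # Append one invisible character per bit.
    marks = "".join(ZW0 if b == "0" else ZW1 for b in bits)
    return text + marks

def extract(text: str) -> str:
    return "".join("0" if c == ZW0 else "1"
                   for c in text if c in (ZW0, ZW1))

marked = embed("hello world", "1011")
assert extract(marked) == "1011"

# A single cleanup pass that drops Unicode "format" characters
# (category Cf) removes the watermark without visibly changing the text.
cleaned = "".join(c for c in marked if unicodedata.category(c) != "Cf")
assert extract(cleaned) == ""
assert cleaned == "hello world"
```

A bad actor need not even know a watermark is present; paraphrasing, re-encoding, or copy-paste through a sanitizing pipeline has the same effect, which is why the burden tends to fall on compliant producers rather than on abusers.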

Cultural, political & philosophical angles

  • Worries that deepfakes erode courtroom evidence and public accountability (e.g., plausible deniability for incriminating footage).
  • Some see AI as accelerating a “post‑truth” or “spam‑saturated” world; others push for “touching grass” and prioritizing offline community.
  • A side debate questions whether human–human interaction is intrinsically more valuable than interaction with convincing AI facsimiles.