What Happens When AI-Generated Lies Are More Compelling Than the Truth?
Role of Images and the Return to Source-Trust
- Many see generative AI as ending the brief era when photos/video could function as strong evidence; we’re “back” to asking who published something, not what it shows.
- Others argue fakery has always existed; what’s new is cost and scale. Cheap, mass-produced forgeries transform the information landscape in a way that “nothing has changed” rhetoric ignores.
- Several commenters stress that “scale and degree” can make an old problem qualitatively different.
Watermarks, Logging, and Cryptographic Signing
- Proposals:
  - Log all generated images.
  - Invisible watermarks / hashes for AI output.
  - Cryptographically signed images straight from the camera, with provenance chains (a minimal signing sketch follows this list).
- Objections:
  - Watermarks can be algorithmically removed, or bypassed by photographing a screen or print.
  - Full logging is costly and incompatible with self-hosted models.
  - Camera signing relies on trusting hardware vendors, secure enclaves, and key management; signing keys have been extracted from shipping hardware before.
  - Any "must-be-signed" regime risks DRM-like control and abuse (e.g., framing people, surveillance).
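
For concreteness, here is a minimal sketch of the camera-signing idea, assuming Python's `cryptography` package and an in-memory Ed25519 keypair standing in for a device's enclave key; the `verify` helper and all names are illustrative, not the API of any particular standard (real provenance efforts such as C2PA add signed manifests and certificate chains on top of this):

```python
# A minimal sketch of the camera-signing idea, using Ed25519 from the
# "cryptography" package. Everything here is illustrative: in the actual
# proposal the private key would live in the camera's secure enclave.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()   # stand-in for an enclave key
camera_pub = camera_key.public_key()        # published by the vendor

image_bytes = b"...raw captured frame..."   # stand-in for real sensor data
signature = camera_key.sign(image_bytes)    # shipped alongside the image

def verify(pub, data: bytes, sig: bytes) -> bool:
    """True iff `data` is byte-identical to what the key signed."""
    try:
        pub.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(verify(camera_pub, image_bytes, signature))            # True
# Any edit, re-encode, or photograph-of-a-screen changes the bytes,
# so the original signature no longer verifies:
print(verify(camera_pub, image_bytes + b"\x00", signature))  # False
```

The brittleness commenters point out is visible even in this toy: re-photographing a screen yields new, unsigned bytes, but so does any legitimate crop or re-encode, which is why the proposals pair signing with provenance chains of signed edit records rather than a single signature.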
Institutions vs Technology as the Anchor of Truth
- A recurring view: technical solutions can at best attest that "this outlet stands by this content," not that the content is true (a toy sketch follows this list).
- Trust must ultimately rest in people and institutions (news orgs, reputational systems), with cryptography as a support, not a substitute.
- Social media complicates this: most people get information from diffuse, weakly vetted sources.
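
To make that limit concrete, a toy sketch under the same Ed25519 assumptions as above; the outlet name and the `OUTLET_KEYS` directory are hypothetical. Verification can answer "which outlet signed this?" and nothing more:

```python
# Toy illustration that verification answers "who signed this?", not
# "is this true?". The outlet name and key directory are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

_outlet_key = Ed25519PrivateKey.generate()  # held by the news organization
OUTLET_KEYS: dict[str, Ed25519PublicKey] = {
    "Example Daily": _outlet_key.public_key(),  # distributed out of band
}

def attributed_to(data: bytes, sig: bytes) -> str | None:
    """Return the outlet that stands by `data`, or None if no key matches.
    A match means "this outlet published this", nothing about accuracy."""
    for outlet, pub in OUTLET_KEYS.items():
        try:
            pub.verify(sig, data)
            return outlet
        except InvalidSignature:
            continue
    return None

story = b"Report: event X happened."
print(attributed_to(story, _outlet_key.sign(story)))         # Example Daily
print(attributed_to(b"tampered", _outlet_key.sign(story)))   # None
```

A match identifies a publisher willing to stake its reputation on the content; whether the content is accurate remains a question for institutions, not keys.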
Psychology of Lies, Cynicism, and Demand for Misinformation
- Lies have long been more compelling than truth because they flatter desires, fears, and faith; evidence often plays a secondary role.
- Some worry AI will not just increase gullibility but deepen cynicism: if everything might be fake, people may dismiss inconvenient truths as “AI.”
- Others note misinformation is monetized and amplified by platforms and capitalism; AI just lowers production cost and raises polish.
Adaptation, Countermeasures, and Norms
- Historical analogies (printing press, telephone, photography) suggest societies adapt, but often after real damage.
- Some propose assuming all content and participants are “bots” and instead focusing on transparent processes and norms.
- AI may also help debunk at scale (e.g., tailored dialogues reducing conspiracy beliefs), partially rebalancing the cost asymmetry between lying and fact‑checking.