When ChatGPT turns informant
Scope of the Risk: Local Snooping vs Institutional Surveillance
- Many see the “jealous partner / nosy colleague” scenarios as real but small compared with platform and government access to full chat histories.
- Several comments assume any data stored unencrypted on third‑party servers (ChatGPT, email, social media) is ultimately accessible to law enforcement or others with enough power or budget.
- Some argue this isn’t new—email, cloud storage, and search logs are already discoverable—but others stress that AI changes the scale and ease of exploiting that data.
ChatGPT vs Search Engines and Other Logs
- One side: search history plus browser data already reveals as much or more; you can always feed that into an LLM for analysis.
- Other side: conversational prompts to LLMs are longer, more explicit, and often include motives, emotions, and confessions—closer to a diary than keyword search; LLMs make it trivial to summarize “what this person believes / feels” in seconds.
- Reduced friction and automation are repeatedly cited as the key new danger.
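The "reduced friction" point above can be made concrete: once a chat archive is exported, a few lines of code turn it into a diary-like corpus ready to hand to any model. A minimal sketch, assuming a simplified export layout (real export files such as ChatGPT's `conversations.json` are more deeply nested, but the point stands):

```python
import json

def extract_user_messages(export: dict) -> list[str]:
    """Pull every user-authored message out of a chat-export dict.

    Assumes the export groups conversations under "conversations",
    each holding a flat list of {"role", "text"} messages -- a
    simplified stand-in for real, more nested export formats.
    """
    messages = []
    for convo in export.get("conversations", []):
        for msg in convo.get("messages", []):
            if msg.get("role") == "user":
                messages.append(msg["text"])
    return messages

# A toy export illustrating how little work the "analysis" step takes:
sample = {
    "conversations": [
        {"messages": [
            {"role": "user", "text": "I think I might quit my job."},
            {"role": "assistant", "text": "What's driving that?"},
            {"role": "user", "text": "My manager reads my messages."},
        ]}
    ]
}

# `corpus` is now a confession-dense text any LLM can be asked to profile.
corpus = "\n".join(extract_user_messages(sample))
print(corpus)
```

The asymmetry the thread describes lives in the next step, not shown here: prompting a model with "what does this person believe, fear, and plan?" over `corpus` takes seconds, where manually reading years of logs would not.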
Memory, Data Retention, and Technical Uncertainty
- There is confusion about how ChatGPT “memory” works: “Saved memories” in settings vs apparent semantic search over all conversations.
- Users report prompts like “What user knowledge memories do you have?” producing surprisingly detailed, structured profiles, even with empty “Saved memories.”
- Some believe disabling memory mainly affects personalization, not backend retention; commenters mention long‑term storage persisting even after deletion, but the thread leaves the details unclear.
Manipulation, Hallucinations, and Evidence
- Concern that LLMs can both:
  - Accurately synthesize a persuasive profile from long chat histories, and
  - Hallucinate or be prompted into biased answers that make someone look dangerous, unfaithful, etc.
- People worry about dragnet scripts over all users (e.g., “which users would likely do X illegal thing?”) and about use in border control or criminal trials as circumstantial evidence of intent.
Coping Strategies and Norms
- Suggested mitigations: treat all prompts as if they may be read in court, disable memory, delete histories before travel, or use local models for sensitive topics.
- Others argue that the deeper issue is users treating LLMs as therapists or confidants and oversharing in ways they’d never do in email or public forums.
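Of the mitigations listed above, running a local model is the most technical. A minimal sketch of what it looks like in practice, assuming an Ollama server running on its default local port with the `llama3` model pulled (any local runner with an HTTP API would work the same way):

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing in this script targets a remote host.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Package a prompt as a POST request to the local Ollama server.

    Because the request goes to localhost, the prompt never lands on a
    third-party server where it could be retained or subpoenaed.
    """
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask_locally(prompt: str) -> str:
    """Send the prompt to the local model and return its reply text.

    Requires `ollama serve` to be running; raises URLError otherwise.
    """
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]
```

The trade-off, as several commenters note, is capability: local models lag the hosted ones, so this only covers the subset of conversations sensitive enough to justify it.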