AI's real superpower: consuming, not creating

Using AI as a “second brain” over personal archives

  • Many describe feeding AI large personal corpora (Obsidian vaults, exported Evernote notes, life stories, codebases) and querying it for recall, synthesis, and career guidance; a minimal sketch of the pattern follows this list.
  • Users report value in: quick recall of past meetings and writings, pattern-spotting across 1:1s, tailoring résumés, and getting “rubber-duck” style reflections.
  • Others note the payoff depends heavily on already having a rich, well-structured archive; the “superpower” is partly the human diligence invested in capturing notes in the first place.
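
To make the pattern concrete, here is a minimal sketch of such a “second brain” query: crude keyword retrieval over a local notes folder, with the matching snippets stuffed into a grounded prompt. The notes path, model name, and scoring heuristic are illustrative assumptions, not details from the discussion.

```python
# Sketch only: naive keyword retrieval over a Markdown vault, then a grounded
# question to a hosted model. Replace paths and model names to taste.
from pathlib import Path

from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

NOTES_DIR = Path("~/notes").expanduser()  # assumed Obsidian-style vault

def top_notes(question: str, k: int = 3) -> list[str]:
    """Rank notes by crude keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = []
    for path in NOTES_DIR.rglob("*.md"):
        text = path.read_text(errors="ignore")
        score = sum(text.lower().count(t) for t in terms)
        if score:
            scored.append((score, f"## {path.name}\n{text[:2000]}"))
    return [snippet for _, snippet in sorted(scored, reverse=True)[:k]]

def ask(question: str) -> str:
    context = "\n\n".join(top_notes(question))
    reply = OpenAI().chat.completions.create(
        model="gpt-4o-mini",  # any chat model works here
        messages=[
            {"role": "system", "content": "Answer only from the provided notes."},
            {"role": "user", "content": f"Notes:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

print(ask("What did I decide in my last 1:1 about the migration?"))
```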

Quality, understanding, and epistemic risk

  • Positive accounts: AI summaries help with quick definitions during technical lectures and with triaging whether material is worth a deeper read.
  • Negative accounts: models frequently hallucinate, miss key nuances, or blend correct and incorrect sources, especially for medicine, news, ambiguous terms, and multi-use medications.
  • Several argue LLMs often “abridge” rather than truly summarize, failing to capture higher-level abstractions and overemphasizing trivia.
  • There’s concern that people will over-consume low-quality summaries, becoming unable to verify claims or engage deeply, while believing they’re well-informed.

Privacy, data ownership, and local models

  • Strong unease about uploading highly personal notes to cloud LLMs; people fear profiling, reuse for training, and future misuse (e.g., by immigration or law enforcement agencies, or for targeted manipulation).
  • Coping strategies: upload only documents one could accept being leaked; use local or rented-GPU models (see the sketch after this list); or wait until local models are good enough and properly sandboxed.
  • Others are dismissive of privacy worries, arguing “nothing online is private” and that benefits (better tools, ads, search) outweigh risks.
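
As a sketch of the “keep it local” strategy, the query below goes to a model served on one’s own machine (an assumed Ollama instance on its default port), so the notes never leave the host; the model name and diary filename are placeholders.

```python
# Sketch only: query a locally served model over Ollama's documented
# /api/generate endpoint; nothing is sent to a third party.
import json
import urllib.request

def ask_local(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default address
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

diary = open("diary-2024.md").read()  # placeholder file; stays on disk
print(ask_local(f"Summarize recurring themes in this journal:\n{diary}"))
```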

Capabilities, limits, and hype

  • Some see the article’s “consumption, not creation” framing as accurate but not new: enterprises have long wanted AI that consumes internal docs and answers questions.
  • Others think the piece overstates AI’s ability to find genuine patterns in personal data; current models are seen as superficial, mediocre at long-context reasoning, and easily steered into plausible but wrong “insights.”
  • There’s ongoing dispute over whether LLMs are already superior to average humans on many cognitive tasks, or still clearly inferior and dangerously oversold.

Workflows and guardrails

  • Suggested best practices:
    • Force models to surface the underlying notes and sources, not just conclusions (a verification sketch follows this list).
    • Use iterative loops, subagents, tests, and verification to reduce cherry-picking.
    • Treat AI outputs as hypotheses or prompts for human reasoning, not authoritative answers.
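
A minimal sketch of the first and third practices combined: demand verbatim excerpts alongside the conclusion, check them against the corpus, and downgrade anything unverified to a hypothesis. The answer format and filenames are assumptions for illustration.

```python
# Sketch only: accept a model's conclusion only when every quoted excerpt
# actually appears verbatim in the source notes.

def verify_answer(answer: dict, corpus: str) -> bool:
    excerpts = answer.get("excerpts", [])
    if not excerpts:
        return False  # a conclusion with no surfaced sources is rejected
    return all(quote in corpus for quote in excerpts)

corpus = open("notes.md").read()  # placeholder corpus
candidate = {  # would normally be parsed from structured model output
    "conclusion": "You flagged burnout twice in Q3 1:1s.",
    "excerpts": ["feeling stretched thin again", "third week of late nights"],
}

if verify_answer(candidate, corpus):
    print("Grounded:", candidate["conclusion"])
else:
    print("Unverified: treat as a hypothesis and reread the notes yourself.")
```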