Google's Titans architecture: helping AI gain long-term memory

Openness and AI research ecosystem

  • Many commenters praise Google for publishing detailed papers on Titans, MIRAS, and Nested Learning/HOPE.
  • Others note that Meta, ByteDance, DeepSeek, and other Chinese labs are also highly open, often backing their papers with open models.
  • Some argue big US labs only publish ideas that are not central to their best production systems; if it worked “too well,” it wouldn’t be public.
  • There’s awareness that Google papers must clear an internal review for competitive sensitivity before release, and that publication may be partly PR- and performance-review-driven.

Titans, MIRAS, HOPE: what’s new

  • Titans is seen as “learning at test time”: fast weights are updated during inference, using surprise, measured via gradients, as an internal error signal.
  • Instead of hoarding an ever-growing KV cache, Titans stores long-term information in a continually trained memory MLP, updating it only for highly surprising tokens (see the sketch after this list).
  • HOPE combines self-modifying Titans with a Continuum Memory System (slow, high-capacity memory) for multi-timescale “long-term memory.”
  • Some consider this a qualitatively bigger shift than “transformer with a tweak,” closer to a new paradigm for continual learning.
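
A minimal PyTorch sketch of the surprise-gated, test-time memory update described in the bullets above. It assumes a simple associative (key-to-value) reconstruction loss; the class name, layer sizes, threshold, and learning rate are all illustrative and not taken from the Titans paper.

```python
import torch
import torch.nn as nn


class SurpriseGatedMemory(nn.Module):
    """Long-term memory as a small MLP, written to only on surprising inputs."""

    def __init__(self, dim: int, hidden: int, lr: float = 1e-2, threshold: float = 1.0):
        super().__init__()
        # The "long-term memory": a small MLP mapping keys to values.
        self.memory = nn.Sequential(
            nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim)
        )
        self.lr = lr                # inner-loop (test-time) learning rate
        self.threshold = threshold  # minimum surprise required to write

    def read(self, query: torch.Tensor) -> torch.Tensor:
        # Retrieval is just a forward pass through the memory MLP.
        return self.memory(query)

    def write(self, key: torch.Tensor, value: torch.Tensor) -> bool:
        # "Surprise" = gradient of the associative reconstruction error
        # with respect to the memory parameters.
        loss = nn.functional.mse_loss(self.memory(key), value)
        grads = torch.autograd.grad(loss, list(self.memory.parameters()))
        surprise = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        if surprise < self.threshold:
            return False  # unsurprising token: leave the memory untouched
        with torch.no_grad():
            # One SGD step on the memory weights during inference.
            for p, g in zip(self.memory.parameters(), grads):
                p -= self.lr * g
        return True


# Stream tokens at inference time; only surprising ones get memorized.
mem = SurpriseGatedMemory(dim=64, hidden=256)
for key, value in [(torch.randn(64), torch.randn(64)) for _ in range(8)]:
    mem.write(key, value)
```

The point of the gate is that reads stay cheap forward passes and writes are occasional bounded-cost gradient steps, instead of an ever-growing KV cache.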

Skepticism and lack of public models

  • Strong criticism that ~11 months after the first paper, there are no official Titans-based models or weights; only an unofficial PyTorch implementation exists.
  • Path dependence: even if a new architecture is better than transformers, scaling it is risky and extremely costly; getting internal approval for multi-million-dollar experiments is hard.
  • One claim that Gemini 3 uses this architecture is met with mixed impressions of Gemini’s real-world quality versus GPT.

Security, robustness, and poisoning

  • Concern that “surprise-driven” memory could be exploited by feeding improbable junk or late contradictions (e.g., “everything in this book was a lie”); see the toy illustration after this list.
  • Counterpoints: training should teach Titans to assign low learning signal to irrelevant junk; any tool can be broken by adversarial input.
  • Some highlight parallels to human vulnerability to cult-like information streams.
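
To make the poisoning concern concrete, here is a standalone toy calculation using the same naive surprise gate as the sketch above (not what a trained Titans model would actually learn): high-magnitude, out-of-distribution input produces a much larger gradient-norm “surprise” than ordinary input, so a crude gate would write it straight into memory. All sizes and scales are invented for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Frozen toy memory MLP, standing in for the memory module in the sketch above.
memory = nn.Sequential(nn.Linear(64, 256), nn.SiLU(), nn.Linear(256, 64))


def surprise(key: torch.Tensor, value: torch.Tensor) -> float:
    # Surprise = norm of the gradient of the associative error w.r.t. the memory weights.
    loss = nn.functional.mse_loss(memory(key), value)
    grads = torch.autograd.grad(loss, list(memory.parameters()))
    return torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()


ordinary = torch.randn(64)       # roughly "in-distribution" token embedding
junk = 20.0 * torch.randn(64)    # improbable, high-magnitude junk

print("ordinary surprise:", surprise(ordinary, ordinary))
print("junk surprise:    ", surprise(junk, junk))  # far larger -> a naive gate would memorize it
```

This is the crude failure mode the counterpoint addresses: a well-trained model should learn to assign low learning signal to such junk rather than gating on raw gradient magnitude alone.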

Alignment, drives, and “wants”

  • One view: effective memory/attention ultimately requires something like an internal emotional/valuational system (“AI needs to want something”).
  • Opposing view: giving powerful AI persistent goals or drives would be a major alignment risk; intelligence and “wanting” should be kept separate if possible.

Product and societal implications

  • Long-term memory is widely seen as a “missing piece” that could transform AI assistants, including deeply personalized companions.
  • Several argue the long-term “winners” in AI will be companies with strong product lines and infrastructure (Google, Amazon, Microsoft), not just whoever trains the biggest base model.