Unpowered SSDs slowly lose data

Practical implications for backups

  • Many commenters realized their “cold” SSDs (laptop pulls, shelf backups, unused game/arcade systems, etc.) are risky: several report drives that were fine going into storage but turned out dead or badly corrupted after a couple of years unpowered.
  • HDDs also fail, but tend to degrade mechanically or show bad sectors while still allowing partial recovery; SSD failures are more often sudden and total.
  • Several people store backups only on HDDs or ZFS/Btrfs NASes, and treat SSDs strictly as “in-use” storage. Others prefer paying cloud providers rather than managing media aging.

How and why SSD data fades

  • Explanations center on charge leakage from flash cells: programming and erasing place charge only approximately (the process is probabilistic), and the stored voltage drifts over time until it crosses a read threshold.
  • Higher‑density modes (MLC/TLC/QLC) pack more bits, and therefore more voltage levels, into each cell, so thresholds are closer together, retention is worse, and endurance is lower; 3D NAND now uses charge trapping rather than classic floating gates, but the basic problem remains (a rough illustration of the level counts follows this list).
  • Retention strongly depends on program/erase cycles and temperature: more wear and higher temps shorten safe unpowered time.
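
A rough back‑of‑the‑envelope illustration (an assumption about the mechanism, not a figure from the thread): with n bits per cell the controller must distinguish 2^n charge levels inside roughly the same voltage window, so each level gets a smaller margin and less leaked charge is enough to push a cell across a threshold.

    # Levels per cell for SLC/MLC/TLC/QLC (1..4 bits per cell)
    for bits in 1 2 3 4; do
        printf 'bits/cell=%d  levels=%d\n' "$bits" "$((2 ** bits))"
    done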

Specs, standards, and uncertainty

  • Discussion of JEDEC standards (JESD218/219):
    • “Client” vs “Enterprise” drives have different power‑off retention requirements (≈1 year vs ≈3 months), but those specs apply at the end of rated life (after TBW/DWPD endurance testing); a way to check how much of that endurance a drive has consumed is sketched after this list.
  • Consumer SSDs often don’t publish clear retention specs; commenters question the concrete numbers in the article and note manufacturers rarely talk about unpowered use.
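
Not from the thread, but since the JEDEC figures are defined at end of rated life, it helps to know where a drive sits relative to its endurance rating. A minimal sketch using smartctl; device names are placeholders and the reported attribute names vary by vendor and interface:

    # NVMe: the SMART/Health log reports "Percentage Used"
    sudo smartctl -a /dev/nvme0
    # SATA: look for vendor wear attributes such as
    # Media_Wearout_Indicator or Wear_Leveling_Count
    sudo smartctl -A /dev/sda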

Refreshing / “recharging” SSDs

  • Consensus: merely powering on is not enough; blocks must be read so the controller’s ECC can detect weak cells and rewrite/relocate data.
  • Firmware behavior is opaque and model‑dependent. Enterprise firmware often performs background refresh when powered and idle; consumer drives may do less.
  • Suggested user tactics: periodic full‑device reads (dd if=/dev/sdX of=/dev/null, pv, ZFS/Btrfs scrubs) or regular fsck/scrub schedules on always‑on systems; example commands follow this list. For truly cold drives, some recommend periodically rewriting the data in full.
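
A minimal sketch of the read‑everything tactics above; /dev/sdX, the pool name “tank”, and the mount point /mnt/data are placeholders for your own setup:

    # Force every block through the controller's ECC path (read-only)
    sudo dd if=/dev/sdX of=/dev/null bs=1M status=progress
    # Same idea with a progress bar via pv
    sudo pv /dev/sdX > /dev/null
    # On checksumming filesystems, a scrub reads and verifies everything
    sudo zpool scrub tank
    sudo btrfs scrub start /mnt/data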

File systems, tools, and strategies

  • Strong support for filesystems with checksums and scrubs (ZFS, Btrfs, UBIFS/ubihealthd) to detect and auto‑repair bitrot when redundancy exists.
  • Others augment backups with hash databases, parity tools (par2), and 3‑2‑1 strategies (three copies, on two different media, one of them offsite); example commands follow this list.
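
A minimal sketch of the hash‑database and parity approach; the /backup paths are placeholders:

    # Build a checksum manifest, and verify against it later
    find /backup -type f -exec sha256sum {} + > /backup/manifest.sha256
    sha256sum -c /backup/manifest.sha256
    # Add ~10% parity data so a few damaged blocks can be repaired
    par2 create -r10 /backup/archive.par2 /backup/archive.tar
    par2 verify /backup/archive.par2
    par2 repair /backup/archive.par2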

Media choices for long‑term storage

  • For long‑term archives, commenters lean toward:
    • Spinning disks (with periodic spins and checks).
    • Tape (LTO) for serious archival, despite cost/complexity.
    • Industrial/SLC or NOR flash for niche, high‑retention needs.
  • Several stress that flash of all kinds (SSDs, USB sticks, SD cards, even console cartridges) should not be treated as “stone tablets” for decade‑scale cold storage.