How DRAM changed the world

DRAM Scaling Limits and Economics

  • Several comments argue DRAM scaling has effectively stalled: stuck in the “10 nm-class” process nodes, with very slow cost/GB improvement over the last ~15 years.
  • One view: DRAM cell capacitors hit a practical speed limit around 400 MHz and a charge limit of tens of thousands of electrons, making further shrinking and faster access extremely hard.
  • Another thread disputes “flat prices,” citing DRAM dropping from ~$10/GB (2009) to around $1–2/GB recently, but agrees the price curve has flattened relative to the 1990s–2000s.
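The disputed price figures can be turned into an annualized rate. The sketch below uses only the rough numbers cited in the thread ($10/GB in 2009, $1–2/GB today, taking $1.50 as a midpoint); they are discussion estimates, not authoritative market data.

```python
# Annualized DRAM price decline implied by the thread's rough figures.
# Inputs are discussion estimates, not authoritative market data.

def annualized_change(start_price, end_price, years):
    """Compound annual rate of change, e.g. -0.12 for a ~12%/yr decline."""
    return (end_price / start_price) ** (1 / years) - 1

# ~$10/GB in 2009 vs. ~$1.50/GB roughly 15 years later
rate = annualized_change(10.0, 1.50, 15)
print(f"{rate:+.1%} per year")  # about -11.9% per year
```

A ~12%/yr decline is real progress, but far below the multi-fold annual drops of the 1990s, which is consistent with both sides of the “flat prices” argument.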

SRAM as DRAM Replacement / Large Caches

  • Idea raised: once advanced nodes (e.g., 5 nm) are cheaper, put ~GB of SRAM on-die as an L4 cache and potentially replace DRAM.
  • Pushback: SRAM is many times more expensive per bit, uses more power, and very large dies face latency limits from signal propagation and energy cost of data movement.
  • Some suggest partial solutions (e.g., hundreds of MB of SRAM alongside DRAM), but others question market demand and note that hardware-managed caches already serve this role.
  • Historical hybrid designs (e.g., DRAM with embedded SRAM cache) existed but saw little adoption.
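The latency objection to giant SRAM dies can be bounded from below with a back-of-envelope calculation. The numbers here are illustrative assumptions (a hypothetical 30 mm die edge, signal propagation at roughly half the speed of light), and the bound is optimistic: real on-chip wires are RC-dominated and much slower, needing repeaters every millimeter or so.

```python
# Optimistic lower bound on cross-die signal time for a large SRAM die.
# Illustrative assumptions only: real on-chip wires are RC-dominated and
# far slower than this propagation-speed bound.

C = 3.0e8             # speed of light in vacuum, m/s
signal_speed = C / 2  # rough on-die propagation speed (~0.5c), assumed

die_edge_mm = 30      # hypothetical large-reticle die edge
one_way_ns = (die_edge_mm / 1000) / signal_speed * 1e9
print(f"edge-to-edge lower bound: ~{one_way_ns:.2f} ns one way")
```

Even this best case adds fractions of a nanosecond per traversal before any RC delay, decode, or sensing, which is why distant slices of a very large cache cannot match near-slice latency.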

DRAM Operation, Refresh, and Reliability

  • Discussion contrasts DRAM vs SRAM: DRAM stores each bit in a transistor+capacitor cell that must be periodically refreshed; SRAM uses multi-transistor flip-flops that hold state without refresh at typical system timescales but at much higher area and power.
  • Reading DRAM is “destructive”: sensing a row drains its cell capacitors, so the entire row is latched into sense-amplifier row buffers and then written back; refresh can therefore be implemented as periodic row reads.
  • Early systems sometimes needed explicit software refresh loops; later, controllers automated it.
  • Shrinking cells, reduced charge margins, and long refresh intervals enable disturbance errors such as Rowhammer; some commenters call modern DDR3/DDR4 “defective by design” from a correctness standpoint.
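The refresh cadence the controller must sustain follows from two typical DDR3/DDR4 parameters: every row must be refreshed within a ~64 ms retention window, and the controller spreads 8192 REFRESH commands across that window. A minimal sketch of the arithmetic, assuming those common values:

```python
# Typical DDR3/DDR4 refresh cadence (common values, not device-specific):
# all rows refreshed within a 64 ms window, via 8192 REFRESH commands.
retention_window_ms = 64
refresh_commands = 8192

# Average interval between REFRESH commands (the tREFI parameter).
tREFI_us = retention_window_ms * 1000 / refresh_commands
print(f"average refresh interval (tREFI): {tREFI_us:.4f} us")  # 7.8125 us
```

A refresh command roughly every 7.8 µs is cheap for a hardware controller but explains why early software-driven refresh loops were such a burden.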

Debate on Memory Latency Across DDR Generations

  • One line claims DRAM latency has been roughly stuck around ~13–17 ns since early DDR, limited by capacitor physics.
  • Others note specific modules (e.g., fast-binned DDR2, DDR4, DDR5 parts) achieving ~7.5–10 ns first-word latency, arguing there has been some progress, though acknowledged as modest.
  • Consensus: bandwidth has risen dramatically, while latency in nanoseconds has improved only slightly; higher clocks are largely offset by proportionally higher CAS latencies measured in cycles.
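The “stuck latency” claim is easy to check with the standard first-word formula: CAS latency in cycles divided by the I/O clock, which is half the MT/s transfer rate. The example parts below are common retail timings chosen for illustration, not a survey.

```python
# First-word latency in ns = CAS cycles * clock period,
# where the I/O clock (MHz) is half the MT/s transfer rate,
# so the period in ns is 2000 / MT/s.

def first_word_latency_ns(cas_cycles, transfer_rate_mts):
    return cas_cycles * 2000 / transfer_rate_mts

for name, cl, mts in [("DDR2-800 CL4", 4, 800),
                      ("DDR4-3200 CL16", 16, 3200),
                      ("DDR5-6000 CL30", 30, 6000)]:
    print(f"{name}: {first_word_latency_ns(cl, mts):.1f} ns")
# each prints 10.0 ns
```

Three generations, one unchanged answer: CAS latency in cycles has grown almost exactly in step with the clock, which is the arithmetic behind the consensus above.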

User Experience and Nostalgia Around RAM

  • Many reminisce about transformative 1990s upgrades (e.g., 4→16 MB, 8→32 MB), which eliminated swapping and enabled new software classes.
  • In contrast, 8→32 GB today is seen as incremental for typical use (more tabs, VMs) rather than life-changing, partly because SSDs have narrowed the penalty of not fitting entirely in RAM.
  • Stories highlight past RAM costs (SIMM prices rivaling CPUs), elaborate upgrade hacks, and how the rapid hardware turnover of that era contrasts with today’s long-lived machines.

8K Video and High-Resolution Capture

  • Skepticism: for casual users, 4K and especially 8K impose heavy storage/battery costs with limited visible benefit on typical screens and streams.
  • Supporters point to professional and niche uses:
    • Post-production flexibility (crop, reframe, stabilize while still outputting 4K).
    • VR video, where 8K+ is described as clearly better than 4K.
    • Scientific/industrial imaging (e.g., mineral studies under high magnification).
  • Some emphasize that streaming services often under-deliver bitrate, so “4K” streams can look worse than high-bitrate 1080p; local playback can better exploit high resolution.