Cray versus Raspberry Pi

Design, Nostalgia, and Retro Builds

  • The Cray-1’s iconic cylindrical look is compared to Apple’s “trash can” Mac Pro; people speculate about subconscious design influence.
  • Several commenters fantasize about Pi or Pico clusters housed in Cray-style cases and reference existing Cray-shaped DIY builds and 3D-printed Y-MP cases.
  • There’s broader nostalgia for 70s–80s sci-fi props (Knight Rider, Blake’s 7, Space: 1999) that could now be almost trivially replicated with modern SBCs.

Sci-Fi Expectations vs Today’s Reality

  • Commenters note that a 1970s person shown an RPi5 or modern phone would find it “impossible,” echoing how sci-fi imagined talking computers and cars.
  • Early text-to-speech (C64, Atari, car voice warnings) is contrasted with current LLM-based conversational systems; consensus is that KITT-level dialogue is only now becoming plausible.
  • There’s sharp disagreement over whether self-driving is “already mundane”: some argue tech is effectively ready but blocked by law; others say current systems are still “sparkling lane assist” and nowhere near safe, unattended autonomy.

Could Old Supercomputers Have Run LLMs?

  • One line of discussion claims Cray-era machines could have run small neural models useful for autocomplete, linting, or summarization; the bottleneck was concepts and datasets, not hardware.
  • Others push back, arguing that even tiny models require far more parameters, data, and training compute than those systems could feasibly support.
  • The debate dives into parameter counts, FLOP estimates, historical systems like LeNet-5, and whether a 300K-parameter toy model proves anything beyond “it technically runs.”
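The debate above can be sanity-checked with a back-of-envelope calculation. Everything here is a rough assumption, not a benchmark: the Cray-1’s commonly cited peak of ~160 MFLOPS, ~2 FLOPs per parameter per token for inference, and ~6 FLOPs per parameter per training token (the standard transformer-era rules of thumb, applied loosely to the thread’s 300K-parameter “toy model”):

```python
# Back-of-envelope check of the "could a Cray run a tiny model?" debate.
# All figures are rough assumptions: ~160 MFLOPS peak for the Cray-1
# (sustained throughput was lower), ~2 FLOPs/param/token for inference,
# ~6 FLOPs/param/token for training.

CRAY1_PEAK_FLOPS = 160e6   # ~160 MFLOPS peak
PARAMS = 300_000           # the thread's 300K-parameter toy model

def inference_tokens_per_sec(params, flops_budget):
    """Tokens/sec at peak, assuming ~2 FLOPs per parameter per token."""
    return flops_budget / (2 * params)

def training_days(params, train_tokens, flops_budget):
    """Wall-clock training time in days, assuming ~6 FLOPs per
    parameter per training token and perfect utilization."""
    return (6 * params * train_tokens) / flops_budget / 86_400

print(f"~{inference_tokens_per_sec(PARAMS, CRAY1_PEAK_FLOPS):,.0f} tokens/sec at peak")
print(f"~{training_days(PARAMS, 100e6, CRAY1_PEAK_FLOPS):.0f} days to train on 100M tokens")
```

On these assumptions the toy model runs comfortably (hundreds of tokens per second), which supports “it technically runs” — while the weeks-scale training time for even a modest token count illustrates the counterargument about data and training compute.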

What Happened to Cray-Class Workloads?

  • Original Cray workloads (weather forecasting, CFD, nuclear simulations, fusion coil design, CGI like “2010”) are still done, but at higher resolution, in 3D, or inside optimization loops.
  • Several note that many scientific and engineering problems remain compute-bound; better hardware mostly buys finer meshes, more physics, and higher accuracy, not “instant solutions.”
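The “finer meshes, not instant solutions” point follows from how solver cost scales. A minimal sketch, assuming an explicit 3D time-stepping scheme where the time step must shrink with the grid spacing (the CFL condition):

```python
# Why better hardware buys resolution rather than "instant solutions":
# for an explicit time-stepping solver, refining grid spacing by a
# factor r multiplies cell count by r^dims and (via the CFL stability
# limit) the number of time steps by r, so cost grows as r^(dims+1).

def cost_multiplier(refinement, dims=3):
    """Compute-cost growth factor when grid spacing shrinks by
    `refinement` in `dims` spatial dimensions, with the time step
    shrinking proportionally."""
    return refinement ** (dims + 1)

print(cost_multiplier(2))          # halve the spacing in 3D -> 16x compute
print(cost_multiplier(10))         # 10x finer mesh -> 10,000x compute
```

So a thousand-fold hardware improvement buys roughly one decimal order of magnitude in 3D resolution, which is why these workloads stay compute-bound.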

Hardware Progress, Moore’s Law, and Cost

  • Multiple comparisons: Cray-1 vs Pi, Pico 2 / RP2350, Pi Zero 2, and consumer GPUs (e.g., ray tracing “1 Cray per pixel” vs a single RTX 4080).
  • Discussion highlights that Moore’s law is about transistor counts, not FLOPS, and that real systems (including TOP500 supercomputers) don’t track the idealized curve.
  • Some stress that the miracle isn’t just performance but economics and infrastructure: supercomputer-class capability in sub-$20 boards or essentially free microcontrollers.
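The economics point can be made concrete with a price/performance ratio. Every number below is an approximation from commonly cited figures (a Cray-1 at ~160 MFLOPS for roughly $8M in 1977 dollars; a Pi Zero 2 W treated as GFLOPS-class at $15), not measured benchmarks:

```python
# Rough FLOPS-per-dollar comparison, using approximate commonly cited
# figures (not benchmark results, and ignoring inflation).

systems = {
    # name: (approx peak FLOPS, approx price in then-current USD)
    "Cray-1 (1977)":      (160e6, 8_000_000),
    "Pi Zero 2 W (2021)": (5e9,   15),   # assumed order-of-magnitude figure
}

for name, (flops, price) in systems.items():
    print(f"{name}: ~{flops / price:,.0f} FLOPS per dollar")
```

Even with generous error bars, the ratio between the two lines spans seven orders of magnitude, which is the commenters’ real point: the miracle is less raw speed than what a dollar now buys.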

Software Bloat and Use of Compute

  • Several lament that vast gains in hardware are “spent” on bloated web stacks, JavaScript-heavy sites, Electron-style apps, and tracking/ads instead of pure computation.
  • Others counter with examples where massive compute has quietly enabled whole fields (modern CAE, improved forecasts, stealth design, etc.).

Time-Travel Thought Experiments

  • People speculate what 70s–80s scientists might have done if each had an RPi-class machine instead of queuing for shared Crays; ideas focus on higher-dimensional simulations and more ambitious experiments.