For algorithms, a little memory outweighs a lot of time
Clarifying the Result and Complexity Classes
- Thread starts with a precise statement: any multitape Turing machine running in time t can be simulated in O(√(t log t)) space, at the cost of more time (a formal restatement follows this list).
- Several commenters initially misread this as implying P = PSPACE; others quickly correct that this would also imply P = NP and would be vastly bigger news than the article.
- People reiterate the standard beliefs: PSPACE is thought strictly larger than P, but that separation is unproven; by contrast, P ≠ EXPTIME is actually proven, via the time hierarchy theorem, which comes up in the thread.
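A compact restatement of the headline result in standard complexity-class notation (a sketch; the t(n) ≥ n proviso is the usual one for such simulation theorems, and the exact technical conditions are in the paper):

```latex
% Anything decidable in time t(n) is decidable in space O(sqrt(t(n) log t(n))),
% for reasonable time bounds t(n) >= n:
\[
  \mathsf{TIME}\big[t(n)\big] \;\subseteq\; \mathsf{SPACE}\Big[\,O\big(\sqrt{t(n)\,\log t(n)}\,\big)\Big]
\]
% Note the inclusion runs from time to space. P = PSPACE would need the
% reverse direction (PSPACE contained in P), which this does not provide.
```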
Intuitions About Space vs Time
- Many find it “intuitive” that memory is more powerful than time: O(n) time only touches O(n) cells, but O(n) space holds 2ⁿ configurations.
- Others push back that you still need time to update cells, so the intuition isn’t trivial.
- A clear framing emerges: "O(n) space with unbounded time" can do more than "O(n) time with any space," because time-bounded machines are automatically space-bounded, but not vice versa (see the counting sketch after this list).
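The counting argument behind that framing, spelled out (Q is the machine's state set and Γ its tape alphabet, both fixed constants for a given machine):

```latex
% A machine confined to s tape cells has at most
%   (states) x (head positions) x (tape contents)
% distinct configurations:
\[
  \#\mathrm{configs}(s) \;\le\; |Q| \cdot s \cdot |\Gamma|^{s} \;=\; 2^{O(s)}
\]
% A run of t steps visits at most t + 1 configurations and touches at most
% t cells, so a time-O(n) machine is automatically a space-O(n) machine,
% while a space-O(n) machine may range over exponentially many configurations.
```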
Turing Machines, Decision Problems, and Formalities
- A confused example about writing N ones leads to clarifications:
  - Space in the theorem counts working space, not input/output.
  - The paper focuses on decision problems (single-bit output), so huge outputs are outside its scope.
- Multitape vs single-tape machines are distinguished: extra tapes make a machine only polynomially faster, not more powerful in what it can compute (textbook bound below).
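For concreteness, the textbook simulation bound behind that distinction (a standard result, not part of the new paper):

```latex
% Any k-tape Turing machine running in time t(n) can be simulated by a
% single-tape machine in time O(t(n)^2), so extra tapes buy at most a
% polynomial speedup and change nothing about computability.
\[
  \mathsf{TIME}_{k\text{-}\mathrm{tape}}\big[t(n)\big] \;\subseteq\; \mathsf{TIME}_{1\text{-}\mathrm{tape}}\big[\,O\big(t(n)^{2}\big)\big]
\]
```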
Lookup Tables, Caching, and Real-World Trade-offs
- Many connect the theorem to everyday patterns: lookup tables, memoization, dynamic programming, LLM weights, GPU-side tables in games.
- Repeated theme of "space as time compression": storing intermediate results to avoid recomputation (a minimal sketch follows this list).
- Others note the opposing regime: when storage/bandwidth is expensive and compute is cheap, recomputation can win.
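A minimal sketch of that trade in Python, using the standard-library functools.lru_cache (the Fibonacci example is illustrative, not from the thread):

```python
from functools import lru_cache

def fib_slow(n: int) -> int:
    """Naive recursion: exponentially many calls, O(n) stack space."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)  # spend O(n) extra memory on a results table...
def fib_fast(n: int) -> int:
    """...and each subproblem is computed once, so only O(n) calls remain."""
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

if __name__ == "__main__":
    print(fib_fast(400))  # instant; fib_slow(35) already takes seconds
```

Dropping the decorator recovers the opposing regime from the last bullet: no table, minimal memory, and all the cost moves back into recomputation time.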
Hardware Scaling and Non-constant Access
- O(1) RAM access is criticized as unrealistic at massive scales; people discuss effective access costs growing like √n or n^(1/3) (made precise after this list), with tangents on data center geometry and even holographic bounds.
- Parallelism is noted as another gap between Turing models and real machines.
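One way to make the √n and n^(1/3) figures precise, as a back-of-envelope geometric argument (the constant-density and finite-signal-speed assumptions are mine): packing n bits at bounded density in d dimensions forces some bits out to radius Θ(n^(1/d)), so

```latex
% Worst-case access latency when n bits are packed at constant density
% in d dimensions and signals travel at bounded speed:
\[
  T_{\mathrm{access}}(n) = \Omega\big(n^{1/d}\big):
  \qquad \Omega(\sqrt{n}) \ \ (d = 2,\ \text{a planar chip}),
  \qquad \Omega(n^{1/3}) \ \ (d = 3,\ \text{a volume}).
\]
```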
Hashing, Deduplication, and Combinatorial Explosion
- A humorous "precompute every 4KB block" idea leads into deduplication, hash collisions, cryptographic hashes, and birthday bounds (a toy version appears after this list).
- Several explain why the space of possible blocks or images is unimaginably huge, using combinatorial reasoning (e.g., 256^307200 possible 640×480 images at one byte per pixel).
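A toy version of the content-addressed idea in Python, using hashlib from the standard library (the in-memory dict stands in for a real block store):

```python
import hashlib

BLOCK_SIZE = 4096                # the thread's 4 KB blocks
store: dict[bytes, bytes] = {}   # digest -> block; a toy block store

def dedup_write(data: bytes) -> list[bytes]:
    """Split data into 4 KB blocks, storing each distinct block only once."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).digest()
        store.setdefault(digest, block)  # skip the write if already stored
        refs.append(digest)
    return refs

def read_back(refs: list[bytes]) -> bytes:
    """Reassemble the original data from stored blocks."""
    return b"".join(store[d] for d in refs)

# Birthday bound: hashing k distinct blocks into SHA-256's 2^256 outputs
# collides with probability about k^2 / 2^257, negligible for any feasible k,
# which is why dedup systems treat equal digests as equal blocks.
```

The combinatorial point from the list is the same arithmetic in reverse: the input space dwarfs any feasible k, so precomputing "every block" is hopeless even though collisions among blocks actually seen are not a practical worry.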
On Quanta’s Style and Accessibility
- Some criticize Quanta for poetic framing and for avoiding explicit terms like “polynomial,” arguing this breeds misconceptions.
- Others defend the article’s accuracy and accessibility; the author appears in-thread to explain the intended nuance and audience trade-offs.
Practical Takeaways
- Consensus: the new theorem is mainly a theoretical breakthrough in time–space complexity, not an immediate systems recipe.
- At an intuitive level, it reinforces a familiar engineering lesson: extra memory, used well, can often buy enormous reductions in time.