The Framework Desktop is a beast
Soldered RAM, Physics, and Repairability
- Big debate around soldered LPDDR5X: critics say it contradicts the brand’s DIY/repair ethos and makes the desktop less repairable than typical PCs (and even their own laptops).
- Defenders argue it’s a hard technical constraint: at these frequencies, sockets hurt signal integrity (impedance changes, reflections, crosstalk), and this AMD Strix Halo platform only supports high-bandwidth LPDDR5X in soldered form.
- CAMM/LPCAMM is mentioned as a possible future middle ground, but attempts to qualify modular memory for this CPU reportedly failed to meet the required signal-integrity/speed targets.
- Some see soldered RAM as making the system “throwaway” if RAM fails or needs upgrading; others note many users never upgrade RAM and have almost never seen RAM failures.
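The ~256 GB/s figure quoted for this platform follows directly from the bus geometry. A minimal back-of-envelope sketch, assuming the commonly reported configuration (256-bit bus, LPDDR5X-8000) rather than any official spec sheet:

```python
# Peak memory bandwidth from bus width x transfer rate.
# Assumed config: 256-bit LPDDR5X bus at 8000 MT/s, as widely reported
# for Strix Halo; treat these numbers as illustrative, not authoritative.
bus_width_bits = 256
transfer_rate_mts = 8000                     # mega-transfers per second
bytes_per_transfer = bus_width_bits // 8     # 32 bytes move per transfer

peak_gb_s = transfer_rate_mts * 1e6 * bytes_per_transfer / 1e9
print(f"{peak_gb_s:.0f} GB/s")               # → 256 GB/s
```

This wide, fast bus is exactly the part that is hard to route through a socket, which is the physics argument behind soldering.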
Framework’s Mission vs This Product
- One camp feels this desktop undermines the company’s core promise of repairability and upgradability, suggesting a different sub-brand for less-modular products.
- Another camp sees it as a pragmatic one-off: everything except RAM remains modular/standard (Mini-ITX board, FlexATX PSU, storage, case), and future mainboard swaps can still extend life.
Strix Halo, Unified Memory, and AI/LLM Workloads
- Core appeal is the Ryzen AI Max+ 395 APU: 16 cores plus a large iGPU sharing up to 128GB of unified memory at ~256 GB/s, similar in concept to Apple's unified memory.
- This makes big local models possible (especially ~100B MoE) with GPU access to essentially “128GB VRAM,” but token speeds are much lower than big Nvidia cards; some benchmarks show ~5 tok/s on 70B models and slow prompt processing for long contexts.
- NPU/“AI” block exists but is seen as weak, under-documented, or hard to use today; most real work lands on the iGPU via Vulkan/HIP.
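The ~5 tok/s figure cited for 70B models is roughly what a bandwidth-bound estimate predicts: at batch size 1, decoding streams every active weight through memory once per token, so tokens/s is capped at bandwidth divided by model size in bytes. A rough sketch, with the quantization density an assumption for illustration:

```python
# Bandwidth-bound ceiling on decode speed for a dense LLM at batch size 1.
# Assumed numbers: 256 GB/s theoretical peak, ~4.4 bits/weight (Q4-class
# quantization); real throughput lands below this ceiling due to overheads.
bandwidth_gb_s = 256          # Strix Halo unified memory, theoretical peak
params_billion = 70           # dense 70B model
bytes_per_param = 0.55        # ~4.4 bits per weight

weights_gb = params_billion * bytes_per_param      # bytes read per token
ceiling_tok_s = bandwidth_gb_s / weights_gb
print(f"~{ceiling_tok_s:.1f} tok/s ceiling")       # ≈ 6.6 tok/s
```

An observed ~5 tok/s sits plausibly under that ~6.6 tok/s ceiling. The same arithmetic explains why MoE models (~100B total but far fewer *active* parameters per token) are the sweet spot here, and why prompt processing, which is compute-bound rather than bandwidth-bound, remains slow on the iGPU.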
Comparisons to Macs and Traditional Desktops
- Many compare directly to M4 Pro/Max and Mac Studio/Mini:
  - Apple has higher memory bandwidth on high-end parts (up to ~2×) and, in some tests, much better AI performance, but at much higher prices and with macOS lock-in and poor repairability.
  - Price comparisons are contested: depending on config, Framework is cheaper than a Studio but close to an M4 Pro Mini; Apple's RAM/SSD pricing is widely called predatory.
- Against classic PC builds: for pure gaming or maxed LLM inference, people still recommend 9800X3D/9950X + large Nvidia GPU or Threadripper/EPYC, at the cost of size, power, and noise.
Software Ecosystem and Alternatives
- CUDA still dominates many AI workflows; AMD’s ROCm works on these chips but support and tooling are viewed as immature and fragmented. llama.cpp + Vulkan/HIP works but optimal backends differ per model.
- SCALE and ZLUDA are cited as emerging bridges for CUDA code on AMD.
- Several commenters opt instead for:
  - Used EPYC servers for huge but slower RAM,
  - Minisforum/Beelink/GMKtec Strix Halo boxes,
  - HP Z2 Mini (with limited "link ECC" only),
  - Or simply sticking with Mac or conventional SFF PCs.