Sony's Mark Cerny Has Worked on "Big Chunks of RDNA 5" with AMD
Mark Cerny, RDNA5, and AMD Collaboration
- Commenters note Cerny appears to work as a consultant rather than a Sony employee, influencing both PlayStation SoCs and AMD GPUs.
- The article’s “RDNA5” branding is questioned: Cerny himself is quoted as saying “RDNA 5, or whatever AMD ends up calling it,” suggesting a name in flux.
RDNA, CDNA, and UDNA Convergence
- Several posts argue that AMD’s public roadmap ends RDNA at 4, shifting to “UDNA1,” a unified architecture that merges the RDNA (gaming) and CDNA (HPC/datacenter) lines.
- There’s disagreement on how similar RDNA and CDNA already are: some claim broad commonality, others detail substantial differences (wavefront width, execution model, latency, feature sets).
- UDNA is seen as both an architectural and organizational consolidation, potentially merging teams and long-term strategy.
Console Custom Silicon and Semi-Custom Paths
- Sony could theoretically request an RDNA4-derived design instead of adopting early UDNA, resulting in an “RDNA5” that remains semi-custom and never ships as a retail GPU.
- Past semi-custom work (e.g., console/APU overlap, Steam Deck–style chips) is cited as precedent, with console learnings later feeding into APUs.
Generational Leap: PS4 → PS5
- Many feel the visual jump from PS4 Pro to PS5 is small relative to the FLOPS increase, especially compared with earlier eras (PS1→PS3).
- Explanations offered:
  - More pixels (4K vs 1080p) consume much of the extra compute.
  - Hardware progress has slowed (Dennard scaling and Moore’s law weakening, rising node costs).
  - Huge performance gains now often go into higher FPS and faster loading instead of visibly new effects.
- The biggest PS5 leap is widely credited to NVMe SSD + hardware decompression, not raw GPU power.
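The pixel-count explanation can be checked with simple arithmetic. A minimal sketch, using the published peak-TFLOPS figures for both consoles (everything else is back-of-envelope):

```python
# Back-of-envelope: how much of the PS4 -> PS5 compute increase
# is absorbed simply by rendering at 4K instead of 1080p?

ps4_tflops = 1.84   # PS4 GPU peak, published figure
ps5_tflops = 10.28  # PS5 GPU peak, published figure

pixels_1080p = 1920 * 1080
pixels_4k = 3840 * 2160

compute_gain = ps5_tflops / ps4_tflops        # ~5.6x more compute
pixel_growth = pixels_4k / pixels_1080p       # exactly 4x more pixels
per_pixel_gain = compute_gain / pixel_growth  # ~1.4x left per pixel

print(f"compute gain:   {compute_gain:.2f}x")
print(f"pixel growth:   {pixel_growth:.1f}x")
print(f"per-pixel gain: {per_pixel_gain:.2f}x")
```

A ~5.6x compute increase shrinks to roughly 1.4x per pixel at 4K, which is consistent with the thread's sense that the visual jump looks smaller than the FLOPS numbers suggest.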
Engines, Bloat, and Unreal
- Some argue engine practices, especially Unreal’s default pipeline optimized for highly dynamic scenes, waste potential for many game types.
- Others respond that fully dynamic lighting and environments dramatically improve workflows and design freedom, even if they cost performance.
Anti-Aliasing and Image Quality Tradeoffs
- TAA is debated:
  - One side says it’s an efficient, necessary replacement for supersampling/MSAA and has improved significantly.
  - Critics argue temporal and upscaling techniques sacrifice clarity, introduce ghosting/blur, and inflate FPS metrics while degrading real image quality.
- There’s agreement that objective metrics for temporal artifacts are poor, making evaluation hard.
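Both sides of the debate follow from the same core mechanism. A minimal single-pixel sketch of temporal accumulation (real TAA also reprojects history via motion vectors and clamps it against the current frame's neighborhood; the 0.1 blend factor is an illustrative value, not any engine's actual default):

```python
# Minimal sketch of TAA-style temporal accumulation on one pixel.

def taa_blend(history, current, alpha=0.1):
    """Exponential moving average: small alpha keeps more history,
    giving more smoothing but also more ghosting when the scene changes."""
    return (1.0 - alpha) * history + alpha * current

# A pixel that flickers between 0 and 1 every frame (aliasing shimmer):
history = 0.0
for frame in range(100):
    sample = float(frame % 2)
    history = taa_blend(history, sample)

# The accumulated value settles near the true average (0.5), which is
# why TAA suppresses shimmer -- and why stale history causes ghosting.
print(f"{history:.2f}")
```

The same blend that averages away flicker also drags old frames into the new one, which is the clarity/ghosting tradeoff the critics describe.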
Performance vs Fidelity and Player Preferences
- Multiple comments say most players choose performance modes when presented with a choice; a cited Sony stat claims ~75% pick performance.
- Others question how representative this is across genres, noting fast competitive games may skew the data.
- Some users report being FPS-tolerant (e.g., 25–30fps is acceptable if stable); others insist modern displays make low framerates intolerably blurry.
Optimization Culture and Rising Costs
- Several posts lament that modern games are less optimized, with studios relying on hardware advances and middleware.
- A few argue the bottleneck has shifted:
  - Asset production (high-res models/textures) dominates cost, leading to teams with far more artists than programmers.
  - AAA engines increasingly optimize for artist workflows rather than peak runtime performance.
- Others counter that even in earlier eras, many console games already used C and higher-level tooling; the “all hand-tuned asm” narrative is overstated.
Storage, Streaming, and New Techniques
- PS5’s SSD and streaming capabilities are highlighted as enabling design changes (fewer fake loading corridors, highly detailed continuous worlds).
- Examples mentioned include Cyberpunk’s serialization bottlenecks on PS4 and newer techniques like Nanite:
  - Supporters say Nanite shines in extremely complex scenes and is optional.
  - Critics say it adds overhead and can hurt performance in simpler content.
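The scale of the I/O change is easy to quantify. A back-of-envelope sketch using Sony's published bandwidth figures; the HDD rate and the game-visible RAM figure are assumptions for illustration:

```python
# Back-of-envelope: time to refill game memory from storage,
# which bounds how aggressively a game can stream in new content.

ps4_hdd_gbps = 0.1   # ~100 MB/s sequential; optimistic estimate, not an official spec
ps5_raw_gbps = 5.5   # PS5 NVMe raw read bandwidth (published figure)
ps5_eff_gbps = 9.0   # with Kraken hardware decompression (Sony's "typical" upper figure)

ram_gb = 13.0        # assumed game-visible portion of the 16 GB of RAM

print(f"PS4 HDD : {ram_gb / ps4_hdd_gbps:6.1f} s to refill game RAM")
print(f"PS5 raw : {ram_gb / ps5_raw_gbps:6.1f} s")
print(f"PS5 eff : {ram_gb / ps5_eff_gbps:6.1f} s")
```

Going from minutes to under two seconds is what removes the need for loading corridors and lets worlds stream continuously, as the bullets above describe.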
Hardware Progress, GPUs, and AI/Crypto
- Some posters attribute weaker generational jumps to fundamental tech limits (SRAM/IO scaling stalling, expensive shrinks).
- Others note that GPU vendors now prioritize datacenter/AI features (high VRAM, interconnects), potentially slowing pure gaming advances; there’s disagreement on how much this affects consoles.
- A side thread argues that software has also grown more capable at leveraging parallelism, while another stresses that fully saturating modern multi-core + GPU + NPU systems remains rare.
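The saturation point can be framed with Amdahl's law: any serial fraction of a workload caps the achievable speedup no matter how many cores, GPU lanes, or NPU units are available. A quick worked illustration (the 5% serial fraction is an arbitrary example, not a measurement of any game):

```python
# Amdahl's law: speedup = 1 / (s + (1 - s) / n), where s is the
# serial fraction of the work and n is the number of parallel units.

def amdahl_speedup(serial_fraction, n_units):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)

# Even a modest 5% serial portion caps the benefit of huge parallelism:
for n in (8, 64, 1024):
    print(f"n={n:5d}: {amdahl_speedup(0.05, n):5.2f}x")

# The limit as n -> infinity is 1/s = 20x, regardless of hardware.
```

This is one way to read the thread's point: adding more parallel silicon stops paying off long before the hardware is saturated, unless the software's serial fraction also shrinks.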
APIs and Vulkan-on-PlayStation Debate
- One question asks why Sony doesn’t support Vulkan on PS5.
- Defenders of Sony’s proprietary APIs (GNM/GNMX) say consoles benefit from ultra-low-level, hardware-specific interfaces and avoid Khronos politics.
- Pro-Vulkan voices argue that standards reduce developer burden and avoid not-invented-here syndrome and vendor lock-in; they criticize Vulkan’s extension “spaghetti” but still see it as the best collaborative option.
- There’s nuanced discussion of Vulkan’s strengths (barrier model, SPIR-V) and weaknesses (complex extensions, OpenGL legacy).
AMD Software Stack and ROCm
- A commenter reports that recent ROCm releases now “just work” with tools like llama.cpp on AMD GPUs, contrasting with years of painful setup.
- Others note llama.cpp can bypass ROCm entirely via Vulkan, but ROCm compatibility is treated as a useful barometer of AMD’s software maturity.
Miscellaneous
- Some express excitement that UDNA and Cerny’s work could improve AMD’s datacenter competitiveness against Nvidia, with the caveat that poor drivers/support could again damage trust.
- There’s skepticism about current-gen consoles’ limited exclusive library, but anticipation that titles like GTA VI may finally showcase the hardware.