AMD: Microcode Signature Verification Vulnerability
Disclosure, timelines, and technical details
- Some commenters object to Google’s partial disclosure and framing of “re‑establishing trust,” arguing that trust is earned, not restored by PR.
- Others note that Google promised fuller details in March and disclosed early only because ASUS leaked the fix in beta BIOS release notes.
- The advisory’s reference to an “insecure hash function” for validating microcode sparked guesses: CRC32, a weak SHA variant, or more likely an implementation bug (e.g., comparing hashes incorrectly).
- Evidence that newer microcode is rejected by older AGESA suggests AMD also changed the trust/validation chain for runtime patches.
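The "implementation bug" guess is easy to make concrete. A hypothetical sketch (not AMD's actual code; the truncated comparison is an invented example of the class of bug commenters speculated about), where the hash itself is strong but the comparison is wrong:

```python
import hashlib

def verify_patch_buggy(patch: bytes, expected_digest: bytes) -> bool:
    """Hypothetical broken verifier: hashes correctly, compares wrongly."""
    actual = hashlib.sha256(patch).digest()
    # Bug: only the first 4 of 32 digest bytes are compared, so forging
    # a "valid" patch needs roughly 2^32 attempts instead of 2^256.
    return actual[:4] == expected_digest[:4]
```

A bug of this shape would be consistent with the advisory's vague "insecure hash function" wording even if the hash primitive itself were fine.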
RDRAND payload and RNG implications
- The demo that forces RDRAND to always return 4 is viewed as a humorous but powerful proof that arbitrary microcode was loaded, not a claim that RDRAND itself is generically broken.
- Long discussion of OS RNG design: Linux and Windows treat RDRAND/RDSEED as one of many entropy sources, not the only one, and mix outputs via hash functions.
- Some argue mixed entropy protects against faulty hardware RNG; others point out a malicious microcode implementation can observe state and manipulate outputs so that mixing still yields attacker‑chosen values, and such subversion may be very hard to detect.
- There’s debate over how much attack logic can realistically fit in a microcode payload.
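The mixing argument can be made concrete. A minimal illustrative sketch (not the actual Linux or Windows mixer) that hashes several labeled sources into one output:

```python
import hashlib

def mix(sources: list[tuple[bytes, bytes]]) -> bytes:
    """Hash all labeled entropy inputs into one 32-byte output."""
    h = hashlib.sha256()
    for label, value in sources:
        h.update(label + b"\x00" + value)  # label prevents boundary ambiguity
    return h.digest()

# A stuck hardware RNG (here: RDRAND always "returning 4") does not make
# the mixed output predictable as long as some other source is good.
stuck_rdrand = (b"rdrand", (4).to_bytes(8, "little"))
out = mix([stuck_rdrand, (b"jitter", b"\x13\x37\xbe\xef\x00\x01\x02\x03")])
```

This only defends against a *faulty* source, which is the crux of the debate: malicious microcode sits below the mixer and could in principle observe every input and tamper with the final value after mixing.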
Threat model, severity, and exploitability
- High severity is defended on the grounds that confidential computing (SEV‑SNP, DRTM) explicitly assumes ring‑0 outside the VM cannot break guest isolation; this bug invalidates that assumption.
- Several commenters initially respond that “if you have ring 0 you’ve already lost,” but others emphasize that under these models, host root is explicitly not supposed to be able to read guest memory.
- Clarifications: microcode runs at a higher privilege than OS/VMM; microcode updates can be applied at boot by firmware or later by the OS; they are not persistent across power cycles.
Cloud, attestation, and verifying fixes
- Users wonder how to know a cloud provider is running genuine patched microcode rather than a malicious patch that claims to be fixed.
- Answer: for SEV‑SNP, guests can verify TCB values via attestation reports; what exact state is attested (just a revision ID vs full configuration) is unclear from public docs.
- Without SEV‑SNP/attestation, you already fully trust the hypervisor, so microcode patch level is largely moot.
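To make the attestation point concrete: AMD’s public SEV-SNP ABI documentation describes a packed 64-bit TCB_VERSION in the attestation report whose top byte is the microcode security patch level (SPL). A decoding sketch (byte offsets taken from the public spec; verify them against the ABI revision actually in use):

```python
def decode_tcb_version(tcb: int) -> dict[str, int]:
    """Split a 64-bit SEV-SNP TCB_VERSION into its SPL fields."""
    return {
        "boot_loader": tcb & 0xFF,          # byte 0
        "tee":         (tcb >> 8) & 0xFF,   # byte 1
        # bytes 2-5 are reserved in the public layout
        "snp":         (tcb >> 48) & 0xFF,  # byte 6
        "microcode":   (tcb >> 56) & 0xFF,  # byte 7
    }

# Example with a made-up TCB value:
fields = decode_tcb_version(0xDB18000000000002)
```

A guest would compare these reported SPLs against the minimum values it is willing to trust; as noted above, public docs leave unclear how much state beyond these revision IDs is actually attested.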
Owner control vs “vulnerability” framing
- Some commenters argue this is only a “vulnerability” from the vendor’s/remote‑attestor’s perspective; from an owner’s perspective, the ability to load arbitrary microcode restores control over their own hardware.
- Others push back that DRTM/remote attestation are also used to defend against bootkits and that most users want vendor‑managed security, not full hardware programmability.
- There is concern that widespread, reliable attestation will eventually enable coercive requirements on what software users are allowed to run.
Hobbyist microcode and firmware distribution
- The possibility of custom microcode excites people interested in reverse engineering, performance tweaks (e.g., undoing mitigations), or alternative behavior, though practical limits (microcode size, compatibility) are acknowledged.
- AMD’s reduced microcode distribution via linux‑firmware is criticized: many consumer CPUs rely on BIOS vendors for updates, and under the new AGESA restrictions, older boards that never receive new firmware may miss future microcode fixes entirely.