AWS adds support for nested virtualization
Feature, scope, and rollout
- AWS is adding nested virtualization support to non–bare metal EC2, starting with specific 8th‑gen Intel instance families (m8id/c8id/r8id and the related c8i/m8i/r8i lines), initially in at least us‑west‑2.
- Documentation hints that enabling nested virtualization disables Virtual Secure Mode (VSM).
- The feature surfaces through the standard EC2 APIs/SDKs and is expected to appear across SDKs as their autogenerated models update.
Why people care (use cases)
- Run Firecracker and other microVM-based sandboxing or multi-tenant services (e.g., per-tenant databases, AI sandboxes) on regular EC2 instead of expensive bare metal.
- Stronger isolation for “containers in VMs” stacks (Kata Containers, gVisor, etc.) and potential support for live migration and easier maintenance of stateful workloads.
- CI/CD and testing workflows: Android emulators, OS image building, build systems that spin up their own VMs, network simulators (e.g., GNS3), Hyper-V labs, and other third‑party virtual appliances.
- Lets customers subdivide large instances into their own VMs when they don’t have enough load to justify entire bare‑metal hosts.
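As a concrete illustration of the microVM use case, a preflight check for Firecracker-style workloads would verify that the guest actually sees hardware virtualization extensions and can open `/dev/kvm`. The sketch below is an assumption-laden illustration, not AWS or Firecracker code: flag names differ by CPU vendor (`vmx` for Intel, `svm` for AMD), and `/dev/kvm` permissions depend on the distro.

```python
import os

def virt_flags(cpuinfo_text: str) -> set[str]:
    """Return the hardware-virtualization flags (vmx/svm) present in
    /proc/cpuinfo-style text."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            tokens = line.split(":", 1)[1].split()
            found |= {"vmx", "svm"} & set(tokens)
    return found

def preflight(cpuinfo_path: str = "/proc/cpuinfo",
              kvm_path: str = "/dev/kvm") -> list[str]:
    """List the problems that would prevent launching a microVM here.

    An empty list means the guest appears ready for KVM-based microVMs.
    """
    problems = []
    try:
        with open(cpuinfo_path) as f:
            if not virt_flags(f.read()):
                problems.append("CPU does not expose vmx/svm to this guest")
    except OSError:
        problems.append(f"cannot read {cpuinfo_path}")
    # Firecracker and similar VMMs need read/write access to /dev/kvm.
    if not os.access(kvm_path, os.R_OK | os.W_OK):
        problems.append(f"{kvm_path} missing or not accessible")
    return problems
```

On a nested-virt-enabled instance, `preflight()` should return an empty list; on a standard virtualized instance it will report the missing `vmx`/`svm` flag.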
Performance and technical concerns
- Reported overhead estimates cluster around ~5–15% in practice, but are highly workload-dependent.
- CPU‑bound work can be near-native when hardware nesting is used; I/O performance can range from a “barely measurable hit” to “absolutely horrible” depending on the implementation.
- Some worry about the complexity and maturity of nested VMX in Linux; others counter that major clouds have run this in production for years.
- Clarification that nested virt isn’t just “another VM layer”: modern CPUs support cooperative nesting (e.g., Intel’s VMCS shadowing), where the guest hypervisor manages its own virtualization structures rather than trapping every operation to the host hypervisor.
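The overhead figures above are claims from the discussion, not benchmarks; the only reliable answer for a given workload is to measure it in both environments. A minimal sketch of such a comparison, timing a CPU-bound loop (the case the thread expects to be near-native under hardware nesting), which could be run unchanged on a bare guest and on a nested guest:

```python
import time

def time_cpu_bound(n: int = 2_000_000) -> float:
    """Seconds to run a tight integer-arithmetic loop.

    CPU-bound work like this should show little overhead under
    hardware-assisted nested virtualization.
    """
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += i * i
    return time.perf_counter() - start

def overhead_pct(baseline: float, nested: float) -> float:
    """Relative slowdown of the nested measurement vs. the baseline,
    as a percentage (e.g., 10.0 means 10% slower)."""
    return (nested / baseline - 1.0) * 100.0
```

Run `time_cpu_bound()` in each environment (best of several runs, to reduce noise), then feed the two timings into `overhead_pct`. I/O-bound workloads need their own tests, since that is where the thread reports the widest variance.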
AWS “late to the game” vs engineering constraints
- Many commenters note GCP, Azure, OCI, DigitalOcean and others have exposed nested virt for years and see AWS as lagging.
- A contrasting view emphasizes AWS’s stricter security and isolation bar (Nitro, custom hardware, non‑stock KVM stack), plus the need to integrate VPC networking, hardening, performance, and control plane — not just flip a bit in KVM.
- Debate over whether AWS’s custom stack slowed delivery versus being necessary to meet its internal standards.
Costs and ecosystem tangents
- Some see this as “expensive VM instead of expensive bare metal,” while others stress operational simplicity and avoiding having to build cloud‑like primitives on cheaper hosts.
- Side discussions compare Hetzner/OVH bare metal pricing and setup fees, and whether avoiding deep AWS dependence can simplify architectures.