Docker limits unauthenticated Docker Hub pulls to 10/hr per IP, starting March 1
Scope and mechanics of the new limits
- New policy: 10 unauthenticated pulls per hour per IPv4 address (or per IPv6 /64); free authenticated “Personal” accounts get 40/hr; paid tiers get higher “consumption-based” limits.
- Several note this is numerically similar to the existing 100-per-6-hours limit but far less burst‑friendly, which matters for cluster rebuilds and “update everything at once” workflows.
- Some report already seeing rate-limit behavior, and others point out that Docker quietly updated its docs and FAQs months ago; either way, communication is widely viewed as confusing or buried.
Practical impact: NAT, CI, k8s, homelabs, universities
- Under CGNAT or campus NAT, many users share one IPv4, so 10/hr can break classes, shared labs, and hobbyist setups.
- CI/CD (especially GitHub Actions and other cloud runners) may hit limits on pull requests from forks, where repository secrets (and thus registry auth) aren’t available; Kubernetes node joins and autoscaling events can easily exceed 10 pulls in an hour.
- Self-hosted NAS GUIs and “click to deploy” stacks that don’t expose Docker login are called out as likely to break.
- Some confusion about caches: a pull‑through cache is still subject to the 10/hr limit while populating from Docker Hub, but dramatically reduces repeat traffic once primed.
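To see how quickly an autoscaling event burns through the cap, a back-of-the-envelope sketch (the node and image counts are illustrative assumptions, not figures from the thread):

```python
# Rough estimate of Docker Hub pulls during a scale-up event,
# assuming every image must be fetched fresh (no node-local cache).
UNAUTH_LIMIT_PER_HOUR = 10  # new unauthenticated per-IP limit


def pulls_needed(new_nodes: int, images_per_node: int) -> int:
    """Each joining node pulls its full image set once."""
    return new_nodes * images_per_node


# Hypothetical small cluster behind one NAT'd IPv4: 3 new nodes, 4 images each.
total = pulls_needed(new_nodes=3, images_per_node=4)
print(total, total > UNAUTH_LIMIT_PER_HOUR)  # 12 True -> over the hourly cap
```

Even this modest scale-up exceeds the unauthenticated budget for the whole NAT'd network in one shot.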
Mitigations and alternatives
- Common advice:
- Create a free Docker account and use auth everywhere possible.
- Run an internal registry or pull‑through cache (Harbor, Artifactory, Nexus, GitLab Registry/Dependency Proxy, ECR pull-through, K3s embedded mirror, Docker’s own registry image).
- Republish important images to GHCR, ECR Public, Quay, or a self-hosted registry and update image references.
- Friction points: the Docker client’s hard‑coded docker.io default; the lack of an easy, authenticated registry-mirrors mechanism; rejected patches to override the default registry. Podman’s configurable registries are cited as a better model.
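A minimal pull-through-cache sketch using Docker’s own registry image; the hostname and credentials are placeholders, while `REGISTRY_PROXY_*` and `registry-mirrors` are the documented knobs:

```shell
# Run registry:2 as a caching proxy for Docker Hub, authenticating
# upstream with a (free) Docker account so cache fills count against
# the higher authenticated limit.
docker run -d --name hub-cache -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  -e REGISTRY_PROXY_USERNAME=youruser \
  -e REGISTRY_PROXY_PASSWORD=yourtoken \
  registry:2

# Point the local daemon at the mirror, then restart dockerd;
# docker.io pulls now go through the cache first.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["http://hub-cache.internal:5000"]
}
EOF
sudo systemctl restart docker
```

Once primed, repeat pulls of the same tags are served locally and never touch Hub.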
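Republishing a dependency to another registry is a one-time retag-and-push; the `ghcr.io/your-org` namespace below is a placeholder:

```shell
# Mirror an upstream image to GHCR once, then reference the copy.
docker pull docker.io/library/nginx:1.27
docker tag docker.io/library/nginx:1.27 ghcr.io/your-org/nginx:1.27
docker login ghcr.io   # GitHub PAT with write:packages scope
docker push ghcr.io/your-org/nginx:1.27

# Then update Compose files / k8s manifests:
#   image: ghcr.io/your-org/nginx:1.27
```

The cost is remembering to re-mirror when you want newer upstream tags.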
Business model, bandwidth costs, and “bait and switch”
- One camp: bandwidth, storage, and infra at Docker’s scale are genuinely costly; free unlimited pulls were never sustainable; businesses should pay or run their own infra; current free limits are still generous for hobby use.
- Opposing view: transit bandwidth is cheap outside hyperscalers; this is primarily a monetization/lock‑in move after years of conditioning the ecosystem to rely on a centralized, “free” default registry and volunteer‑produced images.
- Many describe this as a classic “enshittification” pattern and a rug pull that will push projects and users toward other registries or away from Docker entirely.
Operational best practices and security arguments
- Several argue that any “serious” Docker/Kubernetes user should already:
- Mirror/vendor all dependencies (containers, packages, language registries).
- Avoid pulling directly from Docker Hub in production.
- Use internal caching for reliability, performance, and supply‑chain security.
- Others counter that for small teams these are non-trivial overheads, and Docker’s previous behavior reasonably led people to treat Hub like apt/npm-style infrastructure.
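The “mirror everything” advice can be as small as a list file plus a loop; `images.txt` and the `registry.internal` host are assumptions for the sketch:

```shell
#!/bin/sh
# Vendor every image listed in images.txt (one "name:tag" per line)
# into an internal registry, so production never pulls Hub directly.
MIRROR=registry.internal:5000
while read -r image; do
  docker pull "docker.io/${image}"
  docker tag  "docker.io/${image}" "${MIRROR}/${image}"
  docker push "${MIRROR}/${image}"
done < images.txt
```

Even for a small team, a script like this plus a weekly cron job covers most of the supply-chain and rate-limit advice above.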
Storage pricing and future uncertainty
- New storage fees for private repos (e.g., $10/100GB/month) alarm organizations with TBs of historical images.
- Docker employees in the thread say storage enforcement is delayed to 2026 and pull-limit enforcement by at least a month, with better deletion and policy tooling promised, but the public comms are seen as late and unclear.
- This, plus the rate limits, drives calls to move images off Docker Hub, treat docker.io as “just another registry,” or adopt alternatives like Podman and non-Docker registries as the new default.
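Treating docker.io as “just another registry” is what Podman’s registries.conf already models; a config sketch that routes docker.io through an internal mirror (the mirror hostname is an assumption):

```toml
# /etc/containers/registries.conf
unqualified-search-registries = ["docker.io"]

[[registry]]
prefix = "docker.io"
location = "docker.io"

[[registry.mirror]]
location = "registry.internal:5000"
```

Podman tries the mirror first and falls back to Docker Hub only if the mirror misses, with no hard-coded default to patch around.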