Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised
Nature of the compromise
- PyPI releases litellm 1.82.7 and 1.82.8 were malicious.
- 1.82.7 hid a payload in litellm/proxy/proxy_server.py that ran on import.
- 1.82.8 added a litellm_init.pth file so arbitrary code ran at Python startup; simply installing it was enough.
- The malware spawned Python processes, searched for credentials (e.g., ~/.git-credentials, crypto wallet info), and encrypted and exfiltrated data to attacker-controlled URLs.
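The .pth autorun trick the 1.82.8 payload abused can be demonstrated harmlessly. A minimal sketch, relying on CPython's documented site-module behavior: any line in a *.pth file that starts with `import` is executed when the directory is processed, which happens automatically for site-packages at interpreter startup.

```python
import os
import site
import tempfile

# Benign demo of the .pth autorun mechanism: site-packages directories get
# this same processing at interpreter startup, and any *.pth line beginning
# with "import" is exec()'d -- which is why merely installing 1.82.8 ran code.
sitedir = tempfile.mkdtemp()
with open(os.path.join(sitedir, "demo_init.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")

site.addsitedir(sitedir)  # triggers .pth processing for this directory
print(os.environ.get("PTH_DEMO_RAN"))  # -> 1
```

No import of the package itself is needed; the interpreter executes the line before any user code runs, which is what made "installing it was enough" true.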
Attack chain and actors
- Maintainers say the initial compromise came via a malicious Trivy scanner in CI/CD, which exfiltrated CircleCI secrets.
- Stolen tokens reportedly included PyPI publish credentials and a GitHub personal access token.
- Attacker then uploaded compromised versions to PyPI and appears to have taken over a maintainer’s GitHub account, defacing repos and closing issues.
- The same attacker group is linked in the thread to earlier Trivy compromises; timeline writeups and “TeamPCP” references are shared.
Impact and ecosystem blast radius
- LiteLLM is widely used as an LLM gateway and as a direct dependency (DSPy, CrewAI, browser-use, nanobot, others).
- Many projects had unpinned litellm in requirements.txt/pyproject.toml, increasing exposure.
- Users report systems freezing or being fork-bombed after indirect installs via other tools.
- Official Docker images pinned to older versions are repeatedly described as unaffected.
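Exact pinning is what kept those images safe; a minimal requirements.txt sketch of the same idea (the version and hash below are illustrative placeholders, not a release verified safe here):

```
# Install with: pip install --require-hashes -r requirements.txt
# An exact pin plus a hash stops a newly published release from being
# pulled in silently, even if the index is serving a malicious version.
litellm==1.82.6 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
```

With hash-checking mode enabled, pip refuses any artifact whose hash does not match, so a re-uploaded or newly published malicious file fails the install rather than running.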
Mitigations and detection
- PyPI quarantined the project, then removed 1.82.7 and 1.82.8.
- Maintainers say all tokens/accounts have been rotated, publishing is paused, and an external incident-response team is engaged.
- Suggested local checks include searching for litellm_init.pth and for installed litellm 1.82.7/1.82.8 across environments.
- Advice: build deployable artifacts instead of live pip/uv installs; pin versions and hashes; use lockfiles and "exclude newer than X days" features; mirror dependencies; rotate any possibly exposed credentials.
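The suggested local checks can be scripted per environment; a sketch (the compromised-version set and the litellm_init.pth indicator come from the incident details above, the helper name is my own):

```python
import os
import sys
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}  # malicious PyPI releases per the report

def check_litellm():
    """Report the installed litellm version (if any), whether it is one of
    the compromised releases, and any leftover autorun .pth file on sys.path."""
    try:
        version = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        version = None
    pth_hits = [
        os.path.join(p, "litellm_init.pth")
        for p in sys.path
        if p and os.path.isfile(os.path.join(p, "litellm_init.pth"))
    ]
    return version, version in COMPROMISED, pth_hits

version, is_bad, pth_hits = check_litellm()
print(f"litellm={version} compromised={is_bad} pth_files={pth_hits}")
```

Note this inspects only the interpreter it runs under; repeat it in every virtualenv, container, and system Python you use.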
Debate on wrappers and dependency hygiene
- Some argue LiteLLM-like wrappers are not worth the extra supply-chain risk given most providers expose OpenAI-compatible APIs.
- Others note such gateways add real value (API unification, fallbacks, guardrails, key management, spend tracking), which is why they’re so widely embedded.
- There is broad criticism of large, messy dependency trees and of LiteLLM’s code quality; several users say they’re switching to alternatives or writing minimal shims.
Broader supply-chain and tooling concerns
- Strong calls for:
- Better isolation (VMs, containers, sandboxes, OS-level controls) for dev tools and CI scanners.
- Static analysis and malware scanning of new dependencies.
- More conservative update policies (delays, age thresholds).
- Stronger CI credential scoping and trusted publishing via OIDC.
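The OIDC point maps to PyPI's "trusted publishing" flow, which replaces long-lived upload tokens with short-lived, workflow-scoped credentials; a sketch of a GitHub Actions job (names and steps are illustrative assumptions, see PyPI's docs for the authoritative setup):

```yaml
name: release
on:
  release:
    types: [published]
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # short-lived OIDC token; no stored PyPI API token to steal
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
```

Under this model, a CI secret exfiltration like the one described above yields no reusable PyPI publish credential.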
- GitHub’s weak spam controls are criticized; flooding the issue thread with low-effort “thanks” comments is seen as deliberate suppression of discussion.
- Many expect such AI-tooling supply-chain attacks to become routine rather than exceptional.