GitHub Git Operations Are Down

Immediate impact of the outage

  • All Git operations over both HTTPS and SSH returned errors; users couldn’t pull or push, and GitHub Actions runs failed.
  • CI/CD pipelines halted, breaking builds and deployments; some releases and Yocto/embedded builds were blocked.
  • Many initially assumed local misconfiguration (SSH keys, a recent OS upgrade) and lost time debugging their own machines before checking GitHub’s status page.

GitHub vs. Git; centralization debate

  • Several point out that Git is distributed, but workflows have re‑centralized around GitHub for convenience (PRs, issues, CI, releases).
  • Some argue users clearly prefer professionally run centralized services over complex decentralized setups.
  • Others counter that this creates fragility and lock‑in; outages should motivate more resilient, distributed approaches.
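The distributed-design point above is easy to act on: a Git remote can carry multiple push URLs, so every `git push` lands on two forges at once. A minimal, locally runnable sketch; both remote URLs are placeholders for your own repositories:

```shell
# Sketch: make `git push origin` update two hosts at once.
cd "$(mktemp -d)" && git init -q demo && cd demo
git remote add origin git@github.com:org/repo.git                        # placeholder
git remote set-url --add --push origin git@github.com:org/repo.git       # keep original as push target
git remote set-url --add --push origin git@git.example.com:org/repo.git  # placeholder mirror host
git remote -v   # one fetch URL, two push URLs
```

The first `--add --push` line matters: once any explicit push URL is set, Git stops falling back to the fetch URL, so the primary host must be re-added as a push target alongside the mirror.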

Self‑hosting and alternatives

  • Alternatives mentioned: GitLab, Gitea, Forgejo, GitHub Enterprise Server, on‑prem Bitbucket.
  • Experiences vary: some report self‑hosted instances with better uptime than github.com; others say self‑hosting is heavier, more complex, and usually less reliable at scale.
  • Mirrors and backups are discussed: useful for resilience, but can cause complexity/conflicts around PRs, CI, and integrations.
  • Some are actively planning or executing moves away from GitHub; others see the outage as too minor to justify migration.
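One low-effort resilience pattern along these lines is a scheduled mirror clone: `--mirror` copies every ref (branches, tags, notes), and re-running `remote update` keeps it fresh. A runnable sketch, with a local `upstream` repo standing in for a real `https://github.com/org/repo.git` URL:

```shell
cd "$(mktemp -d)"
# Stand-in for the hosted repository in this self-contained sketch:
git init -q upstream
git -C upstream -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"

git clone -q --mirror upstream repo-backup.git        # first run: full bare mirror
git --git-dir=repo-backup.git remote update --prune   # later runs (e.g. cron): refresh all refs
```

Because the mirror is an ordinary bare repository, it can itself be served or pushed elsewhere during an outage; the PR/CI conflicts mentioned above concern only metadata that lives outside Git.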

Reliability, uptime, and status communication

  • GitHub’s availability is perceived by some as “not great lately,” with recurring outages or jankiness (slow UI, flaky Actions).
  • Discussion of SLAs: published targets like 99.9% vs. the observed cadence of incidents; concern that because availability is measured per feature, individual features can each go down without technically violating the SLA.
  • Status‑page wording (“degraded performance”) is seen as downplaying what was effectively a full failure of Git operations; several are skeptical that status pages are fully honest.
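To put the SLA figures in perspective, the monthly error budget for a given availability target is simple arithmetic (a 30‑day month, 43,200 minutes, is assumed here):

```shell
# Allowed downtime per 30-day month at common availability targets.
for sla in 99.9 99.95 99.99; do
  awk -v s="$sla" 'BEGIN { printf "%s%%  ->  %.1f min/month\n", s, (100 - s) / 100 * 43200 }'
done
```

A 99.9% target thus still permits roughly 43 minutes of downtime per month, and measuring availability per feature multiplies that budget.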

CI/CD lock‑in and disaster recovery

  • Heavy reliance on GitHub Actions raises questions: how to deploy hotfixes if Actions or git hosting are down?
  • Some advocate keeping build logic in scripts/Makefiles and using CI only as a runner to ease migration and local execution.
  • Suggested DR patterns: secondary self‑hosted CI/VCS, periodic testing of backups, “escape hatches” for manual deployments—though many suspect such fallbacks are rarely battle‑tested.
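The “CI as a dumb runner” idea above can be sketched as a single entry-point script; the target names and echoed commands below are placeholders for a real project’s build logic:

```shell
#!/bin/sh
# build.sh -- all pipeline logic lives here; CI only invokes `./build.sh <target>`,
# so the identical commands work locally or on any fallback runner.
set -eu

case "${1:-build}" in
  build)  echo "building..."  ;;  # placeholder: e.g. your compile step
  test)   echo "testing..."   ;;  # placeholder: your test suite
  deploy) echo "deploying..." ;;  # placeholder: your deploy / escape-hatch step
  *)      echo "usage: $0 {build|test|deploy}" >&2; exit 2 ;;
esac
```

A GitHub Actions step then reduces to `run: ./build.sh test`, and the same command is the manual escape hatch when Actions is down.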

Regulation, risk management, and audits

  • In regulated industries, auditors now ask for concrete plans for extended outages or forced vendor exits.
  • Self‑hosting is one answer but adds overhead; others propose external backups, standards‑based tooling, and documented runbooks for reconstructing services from distributed git clones.
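A runbook step for the “reconstruct from distributed clones” approach can be as small as re-seeding an empty server from any developer’s up-to-date clone. A runnable sketch, with a local bare repo standing in for a fallback host such as `ssh://git.internal.example/srv/repo.git` (a hypothetical URL):

```shell
cd "$(mktemp -d)"
# Stand-ins for a developer clone and an empty fallback server:
git init -q clone
git -C clone -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
git init -q --bare fallback.git

git -C clone remote add fallback ../fallback.git  # would be the fallback host URL
git -C clone push -q fallback --all               # every local branch
git -C clone push -q fallback --tags              # every tag
```

What this does not recover is the metadata living outside Git — issues, PR discussion, CI history — which is precisely why auditors want documented plans rather than an assumption that clones suffice.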

Decentralized/federated tooling and issue tracking

  • Various decentralized/federated options appear: radicle, ForgeFed/ActivityPub support in Forgejo/Gitea, and git‑native issue tools like git-bug.
  • Debate over whether email/mailing lists count as a “distributed bug tracker”; consensus that Git itself lacks a built‑in issue tracker, but its ecosystem can approximate one.
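git-bug stores issues as native Git objects so they replicate with ordinary fetch/push. As a rough illustration of that idea using only stock Git — this is not git-bug’s actual data model — issue-like metadata can ride along in a notes ref:

```shell
# Sketch: issue-like metadata stored inside the repository via `git notes`,
# so it travels with clones instead of living in a central tracker.
cd "$(mktemp -d)" && git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"

git -c user.name=demo -c user.email=demo@example.com \
    notes --ref=issues add -m "BUG: login fails after token expiry" HEAD
git notes --ref=issues show HEAD   # prints the issue text

# Notes refs replicate like any other ref:
# git push origin refs/notes/issues
```

The point mirrors the thread’s consensus: Git has no built-in issue tracker, but its object store is general enough for the ecosystem to approximate one.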

Human reactions and culture

  • Many treat it as a “developer snow day” and joke about going outside or playing games.
  • Others report genuine anxiety (e.g., briefly thinking they’d been fired when SSH access failed).
  • The outage reinforces how central GitHub has become to daily development workflows.