GitHub was having issues

Outage specifics and immediate reactions

  • The core problem was issues and pull requests failing to load; many saw “zero issues” and joked about enjoying briefly empty backlogs.
  • Some framed it as “day one” under new management and mocked the timing; others noted outages have felt frequent for weeks.
  • A few said they were barely impacted because they could keep coding locally; others said outages block critical workflows like hotfix deployments tied to PRs and CI.

Reliability, pattern of incidents, and transparency

  • Several commenters described GitHub reliability as “abysmal” lately and linked to the status history, noting not all incidents are listed.
  • Others pushed back that while reliability is worse than they’d like, calling it “easily the most unreliable SaaS” is exaggerated, and pointed to worse experiences with Atlassian / Bitbucket or GitLab.
  • Some enterprise users encouraged demanding SLA reports and credits to create internal pressure at GitHub/Microsoft.

Centralization vs distributed git and SPOF risk

  • Many criticized the irony of centralizing on a single forge while using a distributed VCS.
  • Suggested mitigations: mirroring repos (e.g., to GitLab or a bare git+ssh server), running secondary “upstream” remotes, and regular exports.
  • GitHub’s broader role—issues, PRs, CI/CD, releases, docs, project boards—means outages are more serious than “just” git hosting.
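
The mirroring mitigation above can be sketched with plain git. This is a minimal, hedged example: a local bare repository stands in for the hypothetical backup host (e.g. a GitLab mirror or a bare git+ssh server), and the remote name `mirror` is an arbitrary choice, not a convention from the thread.

```shell
set -e
tmp=$(mktemp -d)

# A working repo with one commit, standing in for your real project.
git init -q "$tmp/work"
cd "$tmp/work"
git -c user.email=dev@example.invalid -c user.name=dev \
    commit -q --allow-empty -m "initial"

# A local bare repo standing in for a secondary host (GitLab, git+ssh box, etc.).
git init -q --bare "$tmp/backup.git"

# Register it as an extra remote and push a full mirror (all branches and tags).
git remote add mirror "$tmp/backup.git"
git push -q --mirror mirror
```

In practice this would be a real URL (e.g. `git remote add mirror git@backup-host:project.git`) and the `git push --mirror mirror` step run periodically or from a post-push hook, so the secondary remote stays usable when the primary forge is down.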

Alternatives and self‑hosting

  • Popular self‑hosted options: Forgejo, Gitea, GitLab, plus hosted Forgejo via Codeberg; some also mentioned Tangled, Radicle, and Phorge.
  • Self‑hosting experiences ranged from “months of uptime, minutes per month of admin” (e.g., Gitea/Forgejo, GitLab) to warnings that GitLab is heavy and painful to run at scale.
  • Network effects and social/discoverability features were repeatedly cited as GitHub’s main moat, not unique features.

IPv6 and Azure concerns

  • Lack of IPv6 support was called embarrassing, forcing some IPv6-only users to pay for IPv4 connectivity.
  • One thread blamed Azure’s problematic IPv6 implementation (NATed v6, many limitations) as a likely factor.

Culture, tech stack, and AI

  • Speculation that internal pressure to ship features (including AI) on top of a large Ruby on Rails codebase contributes to fragility.
  • Some connected repeated incidents with executive churn and “vibe-coded” changes.