GitHub cuts AI deals with Google, Anthropic

New Copilot capabilities

  • GitHub Copilot will let users choose between multiple LLMs (OpenAI, Anthropic/Claude via AWS Bedrock, Google/Gemini; Llama/Mistral mentioned as future/partial options).
  • Multi‑model support applies mostly to chat and code editing; its impact on inline autocomplete latency is unclear.
  • Copilot is expanding IDE support (e.g., Xcode) and integrating with external sources like Stack Overflow.

Motives and strategy

  • Many commenters read this as Microsoft:
    • Hedging against over‑dependence on OpenAI after governance drama.
    • Turning Copilot into a model‑agnostic platform and “commoditizing the complement” (models) to keep strategic power at the IDE/DevOps layer.
    • Potentially helping antitrust optics by not being tied to a single provider.

Model comparisons and tool ecosystem

  • Several commenters prefer Claude 3.5 Sonnet for code quality and reasoning; others find GPT‑4o/o1 better for some tasks, especially with web tools.
  • ChatGPT app is praised for polish (code interpreter, search, voice, custom GPTs), while Claude is praised for raw coding ability and artifacts.
  • Many alternative frontends and IDE tools mentioned (Cursor, Aider, Cody, Continue, local LLM frontends), often valued for multi‑model support and deep project context.
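The multi‑model flexibility these frontends are valued for typically comes down to a thin provider abstraction: one routing layer, many interchangeable backends. A minimal sketch (all names are hypothetical; in a real tool the lambdas would be replaced by vendor SDK calls such as the OpenAI or Anthropic clients):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

# Hypothetical model router: each backend is just a function from
# prompt -> completion. Real tools wire vendor SDKs in here.
@dataclass
class ModelRouter:
    backends: Dict[str, Callable[[str], str]] = field(default_factory=dict)
    default: str = "claude"

    def complete(self, prompt: str, model: Optional[str] = None) -> str:
        # Fall back to the default backend when no model is requested.
        backend = self.backends[model or self.default]
        return backend(prompt)

# Stub backends standing in for real API calls.
router = ModelRouter(backends={
    "claude": lambda p: f"[claude] {p}",
    "gpt-4o": lambda p: f"[gpt-4o] {p}",
})

print(router.complete("Refactor this function"))             # uses the default backend
print(router.complete("Write a SQL query", model="gpt-4o"))  # explicit model choice
```

The design point is that the editor/IDE layer owns the routing table, which is exactly the "commoditize the models, keep the platform" dynamic commenters describe.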

Productivity vs. reliability

  • Strong split:
    • Some report 2–5× productivity gains, using LLMs for boilerplate, one‑off scripts, refactors, and cross‑library “glue” code.
    • Others see little or negative net gain due to hallucinated APIs, subtle bugs, repetitive error cycles, and time spent verifying.
  • Common “sweet spots”: bash/scripts, SQL, poorly documented libs, initial scaffolding, and test boilerplate.
  • Common failure modes: short prompts, complex or novel problems, large refactors, domain‑specific logic, and over‑trusting generated code.

Open source, licensing, and GitHub data

  • Strong concern that Copilot and other tools are trained on OSS (including copyleft like GPL/AGPL) without attribution or compensation; some call this IP “laundering”.
  • Others argue it’s analogous to humans learning from code; legality and “derivative work” status are seen as unsettled.
  • Some developers are considering or already executing migrations away from GitHub, though network effects and convenience keep switching costs high.

Perceptions of AI progress

  • Many see rapid capability gains; others perceive diminishing returns and predict an eventual “AI winter” or bubble correction.
  • Debate over whether LLMs show “intelligence” or only powerful pattern prediction; standardized test performance is contested as a metric.
  • Consensus that LLMs are already changing how people search, learn APIs, and approach coding—even if they’re far from trustworthy autonomous programmers.