DeepSeek Open Infra: Open-Sourcing 5 AI Repos in 5 Days

Excitement and comparison to OpenAI

  • Many commenters find this more exciting than OpenAI’s “12 days” marketing, framing DeepSeek as closer to the original spirit of “open AI”.
  • Some push back: they see OpenAI’s o1 as a genuine paradigm shift in reasoning, whereas DeepSeek’s contribution is seen as a shift in economics and openness rather than in raw capability.

Moats, economics, and Nvidia

  • Ongoing debate on where the “moat” in AI lies:
    • Some argue hardware and GPU farms are the real moat; models, prompts, and UX are copyable.
    • Others say the real moat is products and owning user data, with LLMs as infrastructure like databases.
  • Open models may not hurt Nvidia; cheaper, better models can increase overall GPU demand (Jevons paradox).
  • There’s discussion of how shifting from per-request opex (API calls) to capex (self-hosted hardware) could make new classes of applications economically viable.
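The opex-to-capex argument above can be sketched as a toy break-even calculation. All prices, capacities, and amortization periods below are hypothetical placeholders for illustration, not figures from the discussion:

```python
# Toy break-even sketch: per-request API pricing (opex) vs. a self-hosted
# GPU server (capex). Every number here is an assumed placeholder.

API_PRICE_PER_M_TOKENS = 2.00       # USD per million tokens (assumed)
SERVER_COST = 250_000.0             # USD up-front hardware cost (assumed)
AMORTIZATION_MONTHS = 36            # amortize capex over 3 years (assumed)
POWER_AND_OPS_PER_MONTH = 3_000.0   # USD fixed running cost (assumed)

def monthly_api_cost(tokens: float) -> float:
    """Opex: scales linearly with usage."""
    return tokens / 1e6 * API_PRICE_PER_M_TOKENS

def monthly_selfhost_cost() -> float:
    """Amortized capex plus fixed costs: flat regardless of usage,
    up to the server's capacity."""
    return SERVER_COST / AMORTIZATION_MONTHS + POWER_AND_OPS_PER_MONTH

def breakeven_tokens_per_month() -> float:
    """Usage level above which self-hosting becomes cheaper than the API."""
    return monthly_selfhost_cost() / API_PRICE_PER_M_TOKENS * 1e6

print(f"Self-host fixed cost: ${monthly_selfhost_cost():,.0f}/month")
print(f"Break-even usage: {breakeven_tokens_per_month() / 1e6:,.0f}M tokens/month")
```

Under these placeholder numbers, self-hosting wins above roughly 5 billion tokens per month; below that, per-request pricing is cheaper. The same arithmetic also illustrates the Jevons-paradox point: if model costs fall and demand grows more than proportionally, total hardware spend can rise even as per-token prices drop.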

Open source, AGI, and digital commons

  • Several see DeepSeek’s openness (weights, infra tools) as closer to a “real AGI for everyone” vision: powerful models that are free, modifiable, and not gatekept.
  • Others caution that open weights don’t solve harms: job displacement, disinformation, and psychological ops could be accelerated.
  • Some frame foundation models as part of a “digital commons,” analogous to Linux or databases, with value created at the application layer.

Geopolitics, trust, and China-specific concerns

  • Strong skepticism about Chinese firms’ claims:
    • Suspicions of state subsidies, sanction evasion, and strategic IP theft.
    • Concerns about data sharing, censorship, and embedded propaganda in training data.
  • Others counter that China is a leading tech power, that governments worldwide align tech with national interests, and that fears can be exaggerated or tribal.

DeepSeek’s resources and technical stack

  • Interest in what will be released: especially distributed training, inference stack, and how they optimized under China-specific GPU constraints (A100/H800/H20, large clusters, MoE inference).
  • Some think open-sourcing infra partly crowdsources development and maintenance of their platform; others note that open-sourcing often increases, rather than reduces, support costs.

Motivations, PR, and bubble implications

  • Commenters split between viewing DeepSeek as altruistic and as executing a savvy PR and competitive play to erode closed-source moats (possibly even “popping the US AI bubble”).
  • Several expect an AI valuation bubble to burst while underlying AI usage persists, analogous to the dot-com era.