DeepSeek gives Europe's tech firms a chance to catch up

DeepSeek’s impact and who benefits

  • Many see DeepSeek as a global equalizer, not just a European opportunity: usable by firms in the US, EU, Asia, Africa, and the Middle East due to its relatively open licensing.
  • Commenters note it’s one of the first “frontier-grade” models with a relatively friendly license, in contrast to prior highly capable but restricted models.

Licensing, sanctions, and regulation

  • Some expect US sanctions or policy moves to discourage or block use of Chinese models like DeepSeek, with potential knock-on effects for European infrastructure providers and GPU access.
  • Italy has already blocked DeepSeek’s service (as it previously did with ChatGPT); commenters expect the ban to be lifted once privacy requirements are met.
  • There is debate over the EU AI Act: critics say EU bureaucracy and compliance burdens will stifle innovation; defenders argue the rules mostly target high‑risk uses (e.g., social scoring) and require quality systems similar to other regulated industries.

Model quality, distillation, and tooling confusion

  • A large subthread covers disappointing results from “deepseek-r1:8b/32b” via Ollama, especially for Verilog code generation.
  • Others explain these are distilled models based on Qwen/Llama, not the full 671B R1, and that Ollama’s naming is misleading: the headline “deepseek-r1” tags default to small distilled variants rather than the real model.
  • The distills, especially those under 32B or at heavy quantization levels, are widely reported as weak and hallucination‑prone; the full 671B model is described as slow but roughly in o1’s class.
  • Ollama is criticized for its custom weight format, sloppy chat templates, and limited hardware support; llama.cpp, vLLM, and LM Studio are suggested as alternatives.
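The tag confusion above can be resolved by name alone; here is a minimal sketch mapping Ollama’s `deepseek-r1` tags to their reported base models (the mapping follows DeepSeek’s published distill lineup; the helper function is our own, not part of Ollama):

```shell
#!/bin/sh
# Map an Ollama deepseek-r1 tag to its underlying base model.
# Reported lineup: 1.5b/7b/14b/32b are Qwen2.5 distills, 8b/70b are
# Llama-3 distills, and only 671b is the full DeepSeek-R1 MoE model.
base_model_for_tag() {
  case "$1" in
    deepseek-r1:671b) echo "full DeepSeek-R1 (671B MoE)" ;;
    deepseek-r1:1.5b|deepseek-r1:7b|deepseek-r1:14b|deepseek-r1:32b)
      echo "Qwen2.5 distill" ;;
    deepseek-r1:8b|deepseek-r1:70b)
      echo "Llama-3 distill" ;;
    *) echo "unknown tag" ;;
  esac
}

base_model_for_tag "deepseek-r1:8b"    # a Llama-3 distill, not the full model
base_model_for_tag "deepseek-r1:671b"  # the full model
```

This is why `ollama run deepseek-r1:8b` disappoints on tasks like Verilog generation: it is a small distill wearing the R1 name.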

Use cases, limitations, and expectations

  • Several note that small models and generic training struggle with niche domains like Verilog; specialized coder models or large unquantized versions work better.
  • “Thinking” models emit long reasoning traces before the final answer, so prompting and steering them is a different skill from working with standard chat models.
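R1 wraps its reasoning in `<think>…</think>` tags ahead of the final answer, so downstream tooling typically strips the trace before displaying or chaining output. A minimal sketch (the tag convention matches R1’s output format; the function name is ours):

```python
import re

def strip_reasoning(text: str) -> str:
    """Remove <think>...</think> reasoning traces, keeping only the answer."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

raw = "<think>The user asked for 2+2. That is 4.</think>The answer is 4."
print(strip_reasoning(raw))  # -> The answer is 4.
```

Steering happens in the prompt as usual, but the trace itself is model output you generally should not feed back into the next turn.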

Europe, UK, and AI competitiveness

  • Opinions diverge sharply: some say EU overregulation, “lazy culture,” and lack of a coherent tech strategy doom it to irrelevance; others defend regulation and point to structural issues like housing, demographics, and welfare taking priority over an “AI race.”
  • The UK is viewed as strong in talent and research (e.g., major labs, academic strength) but weak in scaling businesses; Brexit and proposed strict AI/CSAM laws are seen as additional headwinds.
  • There’s skepticism that national “sovereign LLM” projects and broad EU collaborations will produce world‑class models without deeper strategic and industrial changes.

Economics, pricing, and moats

  • A pricing comparison shows DeepSeek’s advertised per‑token rate can be misleading: the headline figure assumes cache hits, while uncached input costs substantially more; its API has also been unstable.
  • Some argue there is “no moat in AI”: Europe can free‑ride on US/Chinese spending and distill top models cheaply.
  • Others note that export controls could change that calculus, though some think aggressive restrictions would also hurt US firms.
  • The natural‑language API paradigm is seen as reducing vendor lock‑in: switching between providers can be as simple as changing endpoints and keys.
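The cached/uncached pricing gap can be made concrete with a blended-rate calculation; the rates below are illustrative placeholders, not DeepSeek’s actual prices:

```python
def blended_input_price(cached_rate: float, uncached_rate: float,
                        cache_hit_ratio: float) -> float:
    """Effective $ per 1M input tokens, given the fraction served from cache."""
    return cache_hit_ratio * cached_rate + (1 - cache_hit_ratio) * uncached_rate

# Hypothetical rates ($ per 1M input tokens): advertised cache-hit rate
# vs. the uncached rate that many real workloads actually pay.
cached, uncached = 0.07, 0.55
for hit in (1.0, 0.5, 0.0):
    rate = blended_input_price(cached, uncached, hit)
    print(f"cache hit {hit:.0%}: ${rate:.3f} / 1M tokens")
```

The point of the comparison in the thread: a provider’s headline price only holds at a 100% cache-hit ratio, so effective costs across providers should be compared at realistic hit ratios.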