The first year of free-threaded Python

Concerns about removing the GIL

  • Many participants express unease that free-threaded Python will expose a huge class of subtle concurrency bugs, especially in dynamically-typed, “fast and loose” code.
  • Some fear existing multithreaded Python code that “accidentally worked” under the GIL will start failing in odd ways, and worry about decades of legacy code and tutorials that implicitly assumed a global lock.
  • Others argue the “must be this tall to write multithreaded code” bar is already high, and that Python without a GIL risks producing more programs that are not sequentially consistent and are hard to reason about.

What the GIL actually does (and doesn’t)

  • Several comments stress the GIL never made user code thread-safe; it just protected CPython’s internal state (e.g., reference counts, object internals).
  • The interpreter can switch threads between bytecode instructions, and C extensions can release the GIL explicitly, so race conditions on Python-level data structures have always been possible.
  • Free-threaded Python replaces the GIL with finer-grained per-object locking while preserving memory safety; races like length-check-then-index can already happen today (see the sketch after this list).
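
A concrete illustration of the length-check-then-index race mentioned above, as a minimal sketch (the producer/consumer split and the iteration counts are arbitrary). Nothing here is specific to free-threading: the same interleaving can fire under the GIL whenever a thread switch lands between the check and the index; it is just far rarer there.

    import threading

    items = []
    failures = 0

    def producer():
        # append() and pop() are individually thread-safe in both GIL and
        # free-threaded builds (per-object locking keeps the list intact).
        for _ in range(200_000):
            items.append(1)
            items.pop()

    def consumer():
        global failures
        for _ in range(200_000):
            if items:                  # check: the list looks non-empty...
                try:
                    _ = items[-1]      # ...but it may be emptied before we index
                except IndexError:
                    failures += 1      # the race fired
        # Fix: hold a lock around the compound check+index, or just attempt
        # the access and handle IndexError, as done here.

    threads = [threading.Thread(target=f) for f in (producer, consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"check-then-index races observed: {failures}")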

Performance tradeoffs and use cases

  • Expected tradeoff: a single-threaded slowdown (cited figures range from the low single digits up to ~20–30% in some benchmarks) in exchange for true multicore parallelism and simpler user code (no more heavyweight multiprocess workarounds).
  • Debate on impact: some argue 99% of code is single-threaded and will only get slower; others reply that many workloads (web servers, data processing, ML “Python-side” bottlenecks) will benefit significantly once threading becomes viable.
  • Free-threaded mode is currently opt-in via a separate build; some expect a “Python 4–style” ecosystem split if/when it becomes the default. (The sketch after this list shows how to check which build, and which GIL state, you are actually running.)
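
A minimal sketch of what the opt-in looks like in practice, assuming a CPython 3.13+ interpreter: sysconfig reports whether the build was compiled without the GIL, sys._is_gil_enabled() (a 3.13 addition) reports the runtime state, and a toy CPU-bound workload (burn, N and WORKERS are arbitrary choices here) shows whether threads actually scale.

    import sys
    import sysconfig
    import time
    from concurrent.futures import ThreadPoolExecutor

    # Build-level flag: 1 on free-threaded builds, 0/None otherwise.
    free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

    # Runtime flag (CPython 3.13+): the GIL can be re-enabled even on a
    # free-threaded build, e.g. by an extension that does not support it yet.
    gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()
    print(f"free-threaded build: {free_threaded_build}, GIL enabled: {gil_enabled}")

    def burn(n: int) -> int:
        # Pure-Python CPU-bound work; with the GIL it cannot run in parallel.
        return sum(i * i for i in range(n))

    N, WORKERS = 2_000_000, 4

    start = time.perf_counter()
    for _ in range(WORKERS):
        burn(N)
    serial = time.perf_counter() - start

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        list(pool.map(burn, [N] * WORKERS))
    threaded = time.perf_counter() - start

    # Roughly WORKERS-times faster on a free-threaded build with enough
    # cores; about the same (or slightly slower) on a GIL build.
    print(f"serial {serial:.2f}s, threaded {threaded:.2f}s, "
          f"speedup {serial / threaded:.1f}x")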

Multiprocessing, shared memory, and async

  • Several suggest sticking with processes for many workloads: this sidesteps shared-state bugs at the cost of serialization and duplicated memory, and multiprocessing.shared_memory (SharedMemory / ShareableList) can claw back much of that cost (see the first sketch after this list).
  • There’s discussion of the real overhead: OS process creation is cheap; Python interpreter startup is not.
  • Async I/O (e.g., asyncio) is widely recommended for network-bound workloads; some see proper threading as complementary rather than a replacement (the asyncio.to_thread sketch below shows one such combination).
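
A minimal sketch of the processes-plus-shared-memory approach, using the documented multiprocessing.shared_memory.SharedMemory API; the worker function and the byte-doubling step are placeholders for real work.

    from multiprocessing import Process
    from multiprocessing.shared_memory import SharedMemory

    def worker(name: str, length: int) -> None:
        # Attach to the existing block by name: no pickling, no copy of the data.
        shm = SharedMemory(name=name)
        try:
            for i in range(length):
                shm.buf[i] = (shm.buf[i] * 2) % 256   # toy in-place "processing"
        finally:
            shm.close()

    if __name__ == "__main__":
        data = bytes(range(10))
        shm = SharedMemory(create=True, size=len(data))
        try:
            shm.buf[:len(data)] = data                # write into the shared block
            p = Process(target=worker, args=(shm.name, len(data)))
            p.start()
            p.join()
            print(bytes(shm.buf[:len(data)]))         # doubled bytes, no IPC copy
        finally:
            shm.close()
            shm.unlink()                              # the creator frees the segment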
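
And a sketch of the “threading complements async” view: the event loop handles the I/O-shaped part while asyncio.to_thread offloads blocking, CPU-bound work (crunch and fetch are stand-ins for real code). On a GIL build this only keeps the loop responsive; on a free-threaded build the offloaded calls can also run on separate cores.

    import asyncio

    def crunch(blob: bytes) -> int:
        # Pure-Python, CPU-bound; running it inline would stall the event loop.
        return sum(b * b for b in blob * 5_000)

    async def fetch(label: str) -> bytes:
        # Stand-in for a real network call (aiohttp, httpx, ...).
        await asyncio.sleep(0.1)
        return label.encode()

    async def main() -> None:
        blobs = await asyncio.gather(*(fetch(f"req-{i}") for i in range(3)))
        # Offload the CPU-bound step to worker threads.
        results = await asyncio.gather(
            *(asyncio.to_thread(crunch, b) for b in blobs)
        )
        print(results)

    asyncio.run(main())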

Impact on ecosystem: C extensions, tooling, LLMs

  • The biggest breakage risk is in C extensions that assumed “GIL = global lock on my state”; free-threading and subinterpreters both complicate those designs (see the sketch after this list).
  • Libraries like NumPy reportedly already support free-threading in principle but are still chasing bugs.
  • Some worry LLMs trained on GIL-era examples will confidently emit unsafe threaded code unless prompted otherwise.
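
One way to watch this interaction from Python, assuming a CPython 3.13+ free-threaded build: importing an extension module that has not declared free-threading support re-enables the GIL at runtime (with a RuntimeWarning), unless overridden with PYTHON_GIL=0 or -X gil=0. numpy below is only an example of a third-party extension you might depend on.

    import sys
    import warnings

    def gil_enabled() -> bool:
        # sys._is_gil_enabled() is a CPython 3.13+ addition.
        return getattr(sys, "_is_gil_enabled", lambda: True)()

    print("GIL before imports:", gil_enabled())

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        import numpy  # or any other extension module in your stack

    print("GIL after imports:", gil_enabled())
    for w in caught:
        if issubclass(w.category, RuntimeWarning):
            # Names the module that forced the GIL back on.
            print("warning:", w.message)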

Governance, sponsorship, and priorities

  • Microsoft’s layoff of its “Faster CPython” team is viewed as a setback for both performance work and free-threading, though Python has multiple corporate sponsors.
  • There’s criticism of CPython governance (claims of overpromising, politics, and alienating strong contributors), but others push back on these claims as unsubstantiated.
  • Some question prioritizing free-threading over a JIT; others reply that most hot paths are already in C extensions and multicore scaling offers larger wins than a JIT for typical Python workloads.

Language design and alternatives

  • Ongoing meta-debate: instead of deep surgery on CPython, why not use languages with safer concurrency models (Rust, Go, Erlang/BEAM, Clojure) or faster runtimes (JS engines)?
  • Counterpoint: Python’s ecosystem and legacy codebase make “just switch languages” unrealistic; working through the technical debt (GIL removal, speedups, better abstractions) is seen as future-proofing the platform.