
Python 3.11 Is Shaping Up to Be Seriously Fast

Osmond van Hemert
Python Evolution - This article is part of a series.
Part : This Article

For as long as I’ve been using Python — and that goes back to the 1.5 days — “Python is slow” has been the criticism that never goes away. It’s the tax you pay for the language’s expressiveness, readability, and enormous ecosystem. You accept the performance tradeoff, reach for C extensions or PyPy when it really matters, and move on with your life.

But the latest alpha releases of Python 3.11 are suggesting that the tradeoff may be getting significantly less painful. The Faster CPython project, led by Mark Shannon and funded by Microsoft (which hired Shannon and Guido van Rossum specifically for this effort), is producing results that have the Python community genuinely excited. Benchmarks on the alpha releases are showing 10-60% speedups across the pyperformance suite compared to Python 3.10, with some individual benchmarks improving even more dramatically.

What’s Actually Changing

The speedups in Python 3.11 come from several complementary optimizations, all targeting the CPython interpreter rather than changing the language itself. This is important — your existing Python code gets faster without any modifications.

Specializing Adaptive Interpreter. This is the big one. Python 3.11 introduces a specializing adaptive interpreter (PEP 659) that optimizes bytecode at runtime based on the types it actually encounters. When the interpreter sees that a particular LOAD_ATTR instruction consistently accesses an attribute on the same type of object, it replaces the generic instruction with a specialized version that skips the general-purpose attribute lookup.

This is conceptually similar to what JIT compilers do, but it’s implemented as bytecode specialization rather than native code generation. Each bytecode instruction has a counter, and after being executed enough times with the same type pattern, it gets “quickened” to a specialized version. If the type assumption later breaks, it reverts to the generic version. This approach avoids the complexity and warmup time of a full JIT while still capturing a significant portion of the benefit.
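The mechanism can be sketched in pure Python. This toy model is my own illustration, not CPython's actual implementation, but it captures the essential loop: profile a generic operation, "quicken" it to a specialized fast path after a warmup threshold, and de-optimize the moment the type assumption breaks:

```python
# Toy sketch of the PEP 659 idea (illustrative only, not CPython internals):
# a generic operation counts how often it sees the same operand type,
# specializes itself after a warmup threshold, and de-optimizes on a miss.

class AdaptiveAdd:
    SPECIALIZE_AFTER = 8  # hypothetical warmup threshold

    def __init__(self):
        self.counter = 0
        self.seen_type = None
        self.specialized = False

    def __call__(self, a, b):
        if self.specialized:
            if type(a) is self.seen_type and type(b) is self.seen_type:
                return a + b  # fast path: assumption holds, skip profiling
            # Type assumption broke: fall back to the generic path.
            self.specialized = False
            self.counter = 0
        # Generic path: full dynamic dispatch, plus type profiling.
        if type(a) is type(b):
            if type(a) is self.seen_type:
                self.counter += 1
                if self.counter >= self.SPECIALIZE_AFTER:
                    self.specialized = True
            else:
                self.seen_type = type(a)
                self.counter = 1
        return a + b

add = AdaptiveAdd()
for _ in range(10):
    add(1, 2)           # warms up on int
print(add.specialized)  # True after enough same-type calls
print(add("x", "y"))    # str arguments trigger de-optimization
print(add.specialized)  # False again
```

CPython does this per bytecode instruction (e.g. `LOAD_ATTR` becomes a specialized variant), not per Python-level object, but the profile-specialize-deoptimize cycle is the same shape.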

Faster Startup. Python 3.11 freezes the bytecode of the core standard-library modules needed at interpreter startup, baking them into the executable and reducing the overhead of importing them. If you’ve ever profiled a Python application’s startup, you know that importing the standard library can account for a surprising chunk of time.
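You can see where that startup time goes yourself: CPython’s `-X importtime` flag (available since 3.7) prints a per-module import-time report to stderr:

```shell
# Print self and cumulative import times (in microseconds) for every
# module pulled in by "import json"; the report goes to stderr.
python3 -X importtime -c "import json" 2>&1 | tail -n 5
```

Running this on 3.10 and a 3.11 alpha side by side is a quick way to check the frozen-import claim on your own machine.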

Cheaper Exceptions. The cost of try/except blocks when no exception is raised has been reduced to near-zero in Python 3.11. Previously, entering a try block had a measurable overhead even in the happy path. This matters because try/except is used extensively in Pythonic code — “ask forgiveness, not permission” is a language idiom, and the performance penalty for following it has always been a quiet friction.
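For concreteness, here is the idiom in question (example and names mine): the try/except wraps the common case, and in 3.11 that wrapper is essentially free whenever no exception is actually raised:

```python
def get_port(config):
    """EAFP: attempt the lookup and handle the miss, rather than
    checking for the key first. In Python 3.11 the try/except costs
    roughly nothing when "port" is present (the common case)."""
    try:
        return int(config["port"])
    except KeyError:
        return 8080  # fall back to a default only on the rare miss

print(get_port({"port": "9000"}))  # 9000 -- the zero-overhead happy path
print(get_port({}))                # 8080 -- the exception path
```

Before 3.11, every call through the happy path still paid a small setup cost for the try block; now only the path that actually raises pays anything.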

Frame Object Laziness. Python 3.11 lazily creates frame objects — the internal data structures that represent function call stack frames. Previously, a frame object was created for every function call. Now, the interpreter uses a more compact internal representation and only creates the full frame object when something actually needs it (like a debugger or sys._getframe()).
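What counts as “actually needs it”? Anything that reaches for the frame through the introspection APIs. A small illustrative example (names mine):

```python
import sys

def who_called_me():
    # sys._getframe(1) asks for the caller's frame object, forcing the
    # interpreter to materialize it; in 3.11 this is the moment the
    # full frame object gets built from the compact internal form.
    caller = sys._getframe(1)
    return caller.f_code.co_name

def outer():
    return who_called_me()

print(outer())  # 'outer'
```

Ordinary calls that never touch `sys._getframe()`, `inspect`, tracebacks, or a debugger skip the allocation entirely, which is the source of the speedup.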

The Benchmark Numbers

The pyperformance benchmark suite is the standard tool for measuring CPython performance across a range of real-world workloads. On the 3.11 alphas, the results are impressive:

  • Overall geometric mean: ~25% faster than Python 3.10
  • Some benchmarks like spectral_norm show 40-60% improvement
  • Startup time for python -c "pass" is measurably faster
  • Exception-heavy code paths show significant gains

These aren’t micro-benchmarks designed to flatter the optimizer. The pyperformance suite includes template rendering, regular expressions, JSON serialization, scientific computing kernels, and other realistic workloads.

It’s worth noting that I/O-bound applications — which is what many web services and data pipelines are — won’t see the full benefit of CPU-level optimizations. If your application spends most of its time waiting on database queries or HTTP responses, a 25% faster interpreter doesn’t translate to a 25% faster application. But for compute-heavy tasks, data processing pipelines, and application startup, the improvements are very real.
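That caveat is just Amdahl’s law. A quick back-of-the-envelope sketch, with made-up workload fractions:

```python
def overall_speedup(cpu_fraction, interpreter_speedup):
    """Amdahl's law: only the CPU-bound fraction of runtime benefits
    from a faster interpreter; time spent waiting on I/O is unchanged."""
    io_fraction = 1.0 - cpu_fraction
    return 1.0 / (io_fraction + cpu_fraction / interpreter_speedup)

# A web service spending 80% of its wall time waiting on I/O:
print(round(overall_speedup(0.2, 1.25), 3))   # 1.042 -> only ~4% faster
# A CPU-bound data pipeline:
print(round(overall_speedup(0.95, 1.25), 3))  # 1.235 -> nearly the full 25%
```

The asymmetry is why the same interpreter upgrade can be a non-event for one service and a meaningful cost reduction for another.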

The Faster CPython Roadmap

What makes this particularly exciting is that Python 3.11 is explicitly positioned as just the first phase. The Faster CPython project has a multi-release roadmap:

  • Python 3.11 (this release): Specializing adaptive interpreter, frame optimizations — targeting 1.25x faster (they’re hitting this)
  • Python 3.12: More aggressive specializations, potential for a basic JIT compiler — targeting 2x faster than 3.10
  • Future releases: Progressively more sophisticated optimization — aspirational target of 5x faster than 3.10

A 5x improvement over the current interpreter would fundamentally change the performance conversation around Python. It wouldn’t match C or Rust, but it would put Python in the same ballpark as Java and JavaScript on V8 for many workloads — languages that nobody dismisses as “too slow for production.”

Why This Matters Beyond Benchmarks

The performance improvements in Python 3.11 matter for reasons beyond raw execution speed:

Lower barrier for Python in new domains. There are projects today where teams choose Go, Java, or even Node.js over Python specifically because of performance requirements. Narrowing that gap expands the range of problems where Python is a viable choice.

Reduced cloud costs. If your Python Lambda functions or container workloads run 25% faster, that translates directly to reduced compute costs. At scale, this is real money. I’ve seen organizations spend significant effort rewriting Python services in Go purely for cost reasons — faster CPython makes that calculus different.

Better developer experience. Faster startup means faster test suites, faster CLI tools, and snappier development workflows. The cumulative effect on developer productivity is hard to measure but very real.

My Take

I’ve watched Python’s performance story evolve over decades, from the early days when nobody cared about speed because scripts were small, through the era of “just use C extensions,” to the current moment where Microsoft is funding a multi-year effort to make CPython itself faster.

The Faster CPython project feels like the most promising thing to happen to Python performance since PyPy. The key difference is that these improvements are landing in CPython itself — the reference implementation that 95%+ of Python users actually run. PyPy’s performance was always excellent, but the compatibility gaps and ecosystem friction limited its adoption. CPython optimizations have no such barrier — you just upgrade Python and your code gets faster.

I’m cautiously optimistic that the targets for 3.12 and beyond are achievable. Mark Shannon’s track record and the team’s methodical approach — focusing on well-understood optimization techniques rather than moonshot redesigns — give me confidence.

If you’re running Python in production, start testing your applications against the 3.11 alphas and betas. The final release is expected in October, and you’ll want to be ready to upgrade quickly. This is the rare Python release where “what’s new” includes a compelling performance pitch alongside the usual language features.

The age of “Python is slow, deal with it” may finally be coming to an end.
