Python 3.11 is currently in beta, with the final release targeted for October, and the benchmarks are turning heads. The Faster CPython project, led by Mark Shannon and backed by Microsoft (who hired Shannon and Guido van Rossum specifically for this), is delivering on its promise: CPython 3.11 is showing 10-60% speedups across the standard benchmark suite, with an average improvement of around 25%. For a language often criticized for its speed, this is significant.
## The Faster CPython Initiative
To understand why this matters, you need some context. Python has always traded raw performance for developer productivity, and for most of its history, the core team accepted that trade-off. If you needed speed, you’d drop to C extensions, use NumPy, or reach for Cython. The language’s interpreted, dynamically-typed nature was considered an inherent performance ceiling.
The Faster CPython project, documented in PEP 659, takes a different approach. Rather than accepting the overhead as inevitable, the team is implementing a specializing adaptive interpreter — a technique that sits between traditional interpretation and full JIT compilation.
The core idea: CPython now monitors the types that flow through bytecode instructions at runtime. When it detects that an operation consistently handles the same types (say, integer addition), it replaces the generic instruction with a specialized version optimized for that specific case. If the assumption breaks, it falls back to the generic path. This is a form of inline caching that’s been used in JavaScript engines like V8 for years, but it’s new for CPython.
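To make the idea concrete, here is a toy sketch of adaptive specialization — not CPython’s actual mechanism, which lives in the bytecode interpreter, but the same shape: a generic operation observes operand types, swaps in a specialized fast path once a pattern stabilizes, and de-optimizes when a type guard fails. The class, the threshold, and the method names are all invented for illustration.

```python
# Toy illustration of a specializing adaptive operation (NOT CPython's
# implementation): a generic "add" that, after seeing the same operand
# types repeatedly, replaces itself with a type-specialized fast path.

class AdaptiveAdd:
    SPECIALIZE_AFTER = 8  # hypothetical warm-up threshold

    def __init__(self):
        self.hits = 0
        self.seen = None          # last observed (type, type) pair
        self.impl = self.generic  # current dispatch target

    def generic(self, a, b):
        # Generic path: full dynamic dispatch on every call,
        # while counting how often the same type pair recurs.
        kinds = (type(a), type(b))
        if kinds == self.seen:
            self.hits += 1
            if self.hits >= self.SPECIALIZE_AFTER and kinds == (int, int):
                self.impl = self.int_int  # specialize for int + int
        else:
            self.seen, self.hits = kinds, 1
        return a + b

    def int_int(self, a, b):
        # Specialized path: a cheap type guard, then the fast case.
        if type(a) is int and type(b) is int:
            return a + b
        # Guard failed: de-optimize back to the generic path.
        self.impl = self.generic
        self.seen, self.hits = None, 0
        return self.generic(a, b)

    def __call__(self, a, b):
        return self.impl(a, b)

add = AdaptiveAdd()
for _ in range(10):
    add(1, 2)            # warms up, then specializes on (int, int)
assert add.impl == add.int_int
add(1.0, 2.0)            # type guard fails -> falls back to generic
assert add.impl == add.generic
```

CPython does this per bytecode instruction with inline caches rather than per object, but the lifecycle — observe, specialize, guard, de-optimize — is the same.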
## What’s Actually Faster
The improvements aren’t evenly distributed, and that’s important to understand. The biggest gains are in:
- **Function calls and returns:** Python’s function call overhead has been significantly reduced. Frame objects are now created lazily, and the call mechanism has been streamlined. This matters enormously because Python code is heavily function-call driven compared to languages with more aggressive inlining.
- **Attribute access:** Looking up attributes on objects — something Python does constantly — is now cached more aggressively. If `obj.method` resolved to the same method last time, the interpreter takes a fast path.
- **Arithmetic operations:** Integer and float operations between common types now hit specialized fast paths instead of going through the generic type-dispatch machinery.
- **Startup time:** Module import and initialization have been sped up, though startup remains an area with room for improvement.
What’s less improved: I/O-bound code (obviously — you can’t make the network faster), code that spends most of its time in C extensions (NumPy-heavy workloads are already fast), and code with extremely dynamic type patterns that prevent specialization.
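If you want to see where the gains land on your own machines, a quick way is to run the same call-heavy and attribute-heavy loops under both interpreters. This is a rough sketch, not a rigorous benchmark — the function names are mine, and the numbers will vary by hardware and build:

```python
import timeit

# Micro-benchmark sketch: run this same script under python3.10 and
# python3.11 and compare the timings. Illustrative only.

def call_heavy(n):
    def inc(x):
        return x + 1          # exercises function-call + int-add paths
    total = 0
    for _ in range(n):
        total = inc(total)
    return total

class Point:
    def __init__(self):
        self.x = 1

def attr_heavy(n, p=Point()):
    total = 0
    for _ in range(n):
        total += p.x          # exercises the attribute-access path
    return total

if __name__ == "__main__":
    for name, fn in [("calls", call_heavy), ("attrs", attr_heavy)]:
        secs = timeit.timeit(lambda: fn(100_000), number=10)
        print(f"{name}: {secs:.3f}s")
```

Expect the gap between 3.10 and 3.11 to be largest on loops like these; the same measurement on a NumPy-dominated workload will barely move, for the reasons above.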
## Better Error Messages
Performance isn’t the only story in 3.11. The error messages have received a substantial upgrade that I think will have an outsized impact on developer experience, especially for newcomers.
Python 3.11 now shows precise error locations within expressions. Instead of pointing at an entire line, tracebacks highlight exactly which part of an expression caused the error:
```
Traceback (most recent call last):
  File "example.py", line 3, in <module>
    result = data["users"][0]["name"].upper()
             ~~~~~~~~~~~~~~~~~~~~^^^^^^
TypeError: 'NoneType' object is not subscriptable
```

That `~~~~` and `^^^^^^` precisely indicating which operation failed? That’s going to save developers countless hours of debugging. I’ve lost track of how many times I’ve stared at a long chained expression in a traceback trying to figure out which part was None. This is a genuinely user-centric improvement.
## Exception Groups and TaskGroups
Python 3.11 also introduces Exception Groups (PEP 654), a new mechanism for raising and handling multiple exceptions simultaneously. This is directly motivated by the async programming model — when you’re running concurrent tasks, multiple failures can occur simultaneously, and the current exception model can only represent one at a time.
The new except* syntax allows handling specific exception types from a group while letting others propagate:
```python
try:
    async with asyncio.TaskGroup() as tg:
        tg.create_task(operation_a())
        tg.create_task(operation_b())
except* ValueError as eg:
    handle_value_errors(eg.exceptions)
except* OSError as eg:
    handle_os_errors(eg.exceptions)
```

TaskGroup itself is a new asyncio primitive that replaces the somewhat clunky `gather()` pattern. It provides structured concurrency — all tasks in the group are properly awaited, and if one fails, the others are cancelled. This is a pattern that Trio popularized, and it’s great to see it making its way into the standard library.
## The Broader Implications
What excites me most about the Faster CPython project isn’t the 3.11 numbers — it’s the trajectory. The team has published a roadmap targeting a 5x speedup over several releases. 3.11 is the first step, and if they maintain this pace, Python’s performance story changes fundamentally.
There’s also a compounding effect. As CPython gets faster, the threshold at which you’d reach for C extensions or an alternative language rises. Code that currently needs Cython for acceptable performance might run fine on pure Python in a few releases. This simplifies deployment, reduces maintenance burden, and makes the ecosystem more accessible.
## My Take
I’ve been writing Python since the 2.x days, and the language has never felt more vital. The combination of performance improvements, better error messages, and structured concurrency support shows a project that’s listening to its users and investing in the areas that matter.
Will Python 3.11 make Python competitive with Go or Rust for performance-sensitive workloads? No, and it shouldn’t try to be. But a 25% average speedup means real cost savings on cloud compute, faster test suites, quicker data processing pipelines, and a better experience for developers who choose Python for its productivity advantages.
If you’re running Python in production, start testing against the 3.11 beta. The compatibility story looks good — the specializing interpreter is an implementation detail that shouldn’t affect correctly-written code. October’s release is going to be worth the upgrade.
