
Redis 6.0 in Production — ACLs, Threading, and What Actually Matters

·945 words·5 mins
Osmond van Hemert
Cloud Operations - This article is part of a series.

Redis 6.0 has been GA for a couple of months now, and I’ve been running it in production across three projects since the RC phase. The headline features — access control lists and I/O threading — sound like incremental improvements, but they represent a significant shift in how Redis positions itself for production workloads. After years of “just put it behind a firewall,” Redis is finally getting serious about security and scalability.

ACLs: Better Late Than Never

For the entire history of Redis up to version 5, security was essentially a single shared password set via requirepass. Every client that connected had full access to every command and every key. If your application needed Redis for caching and your analytics pipeline also needed Redis, they shared the same credentials and the same permission level. One misconfigured analytics job could FLUSHALL your production cache.

The community worked around this with network segmentation, separate Redis instances, and the rename-command directive (which is a hack, not a security model). But these workarounds don’t scale, and they don’t meet the compliance requirements that more enterprises are imposing on their infrastructure.

Redis 6.0’s ACL system changes this fundamentally. You can now create named users with specific permissions:

ACL SETUSER analytics on >analytics_password ~analytics:* +get +set +del -flushall

This creates a user analytics that can only access keys prefixed with analytics: and can only run GET, SET, and DEL — no FLUSHALL, no KEYS *, no DEBUG. This is basic stuff for a database, but for Redis, it’s transformative.

I’ve been using ACLs to separate concerns in a multi-service architecture where different microservices access different key namespaces. The session service gets access to session:* keys. The rate limiter gets ratelimit:*. The cache layer gets cache:*. If a service is compromised, the blast radius is contained.
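To make the blast-radius idea concrete, here is a toy model of how a key-pattern rule like ~analytics:* constrains access, using Python's fnmatch as a stand-in for Redis's glob matcher (the service names and patterns are the illustrative ones from above, not a real ACL implementation — Redis's glob syntax differs from fnmatch in some edge cases):

```python
from fnmatch import fnmatchcase

# Hypothetical per-service key patterns mirroring the namespaces above.
SERVICE_KEY_PATTERNS = {
    "session-service": ["session:*"],
    "rate-limiter":    ["ratelimit:*"],
    "cache-layer":     ["cache:*"],
}

def is_key_allowed(service: str, key: str) -> bool:
    """True if `service` may touch `key` under its key patterns."""
    return any(fnmatchcase(key, p) for p in SERVICE_KEY_PATTERNS.get(service, []))

print(is_key_allowed("session-service", "session:abc123"))  # True
print(is_key_allowed("session-service", "cache:homepage"))  # False
```

A compromised session service can read and write session keys all day, but a request for a cache or ratelimit key is rejected before any command runs.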

The implementation is clean. ACLs can be defined in a file and loaded at startup, or managed dynamically via commands. There’s an ACL LOG that records denied operations, which is invaluable for debugging permission issues during rollout.
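For the file-based route, the same rule goes into an external ACL file (the path and password below are placeholders), and redis.conf points at it with the aclfile directive. Note that Redis requires you to pick one source: users are defined either inline in redis.conf or in the ACL file, not both.

```
# /etc/redis/users.acl
user default on nopass ~* +@all
user analytics on >analytics_password ~analytics:* +get +set +del
```

Then in redis.conf: aclfile /etc/redis/users.acl. The fully open default user shown here is the backward-compatible starting point; lock it down once every client has its own credentials.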

I/O Threading: Understanding What It Actually Does

The threading story in Redis 6.0 is widely misunderstood. Redis is not becoming a multi-threaded database. The core command execution is still single-threaded — one thread processes commands sequentially, which is what gives Redis its consistency guarantees and simplicity.

What is threaded now is I/O: reading data from client sockets and writing responses back. On a busy Redis instance handling tens of thousands of connections, the I/O overhead of parsing incoming commands and serializing responses can become a bottleneck before the CPU-bound command execution does. The I/O threads handle this parsing and serialization work in parallel, then hand off the parsed commands to the main thread for execution.
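A toy model of that hand-off, using Python's stdlib threads and queues (the two-worker setup and the command format are illustrative, not Redis internals):

```python
import queue
import threading

raw_requests = queue.Queue()   # bytes read from client sockets
parsed_cmds = queue.Queue()    # parsed commands awaiting execution
store = {}                     # keyspace; touched only by the main thread

def io_worker():
    """Parse raw requests in parallel; never touch `store` directly."""
    while True:
        raw = raw_requests.get()
        if raw is None:               # shutdown sentinel
            parsed_cmds.put(None)
            return
        parsed_cmds.put(raw.decode().split())

def run_main_thread(n_workers):
    """Execute parsed commands one at a time, like Redis's event loop."""
    finished = 0
    while finished < n_workers:
        cmd = parsed_cmds.get()
        if cmd is None:
            finished += 1
        elif cmd[0] == "SET":
            store[cmd[1]] = cmd[2]

workers = [threading.Thread(target=io_worker) for _ in range(2)]
for w in workers:
    w.start()
for raw in (b"SET k1 v1", b"SET k2 v2", b"SET k3 v3", None, None):
    raw_requests.put(raw)
run_main_thread(len(workers))
for w in workers:
    w.join()
print(store)  # every write applied by the single execution thread
```

The parsing happens concurrently, but the keyspace is mutated by exactly one thread — which is why enabling I/O threads doesn't change any of Redis's consistency behavior.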

In my benchmarks, enabling I/O threads (I’m using 4 threads on an 8-core machine) improved throughput by roughly 40-60% for workloads that are heavy on small commands — think high-frequency GET/SET operations from many concurrent clients. For workloads dominated by large values or complex commands like ZUNIONSTORE, the improvement is smaller because the bottleneck is in command execution, not I/O.

To enable it, add to your redis.conf:

io-threads 4               # number of I/O threads; response writes are threaded by default
io-threads-do-reads yes    # also thread reads and protocol parsing (off by default)

A word of caution: don’t just set io-threads to your total CPU count. The main execution thread still needs a core, and you want headroom for background tasks like RDB persistence and AOF rewriting. I’ve found that setting I/O threads to roughly half your available cores gives the best results.

TLS Native Support

Redis 6.0 also adds native TLS support, which eliminates the need for stunnel or other TLS proxies in front of Redis. This is another “finally” feature. Running Redis without encryption in transit has been a persistent compliance headache, and the stunnel workaround adds latency and operational complexity.

Enabling TLS is straightforward:

tls-port 6380                     # TLS listener; the plaintext port stays open unless you also set "port 0"
tls-cert-file /path/to/redis.crt
tls-key-file /path/to/redis.key
tls-ca-cert-file /path/to/ca.crt
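On the client side, redis-cli gained matching flags in 6.0, so a quick connectivity check might look like this (paths are placeholders mirroring the server config):

```
redis-cli --tls -p 6380 --cacert /path/to/ca.crt ping
```

If mutual TLS is enforced — which is Redis's default, via tls-auth-clients yes — add --cert and --key with a client certificate as well.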

The performance overhead of TLS is measurable — expect roughly 10-15% lower throughput compared to plaintext connections. But in most real-world deployments, the network round-trip time dwarfs the TLS overhead, so the practical impact is small.

Client-Side Caching with Tracking

A less-discussed but potentially impactful feature is client-side caching support via the new CLIENT TRACKING mechanism. Redis can now notify clients when keys they’ve previously read have been modified. This enables clients to maintain a local cache of frequently-read values and invalidate them precisely when they change, rather than polling or using short TTLs.

This is particularly useful for read-heavy workloads where the same keys are read thousands of times per second. Instead of hitting Redis for every read, the client reads from local memory and only queries Redis when notified of a change. In theory, this can reduce Redis load dramatically for certain access patterns.
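The client-side pattern this enables can be sketched in a few lines of plain Python — a local cache that drops an entry when the server pushes an invalidation (the class, the fetch callback, and the key names are all hypothetical; real CLIENT TRACKING delivers invalidations over the connection's push protocol):

```python
class TrackingCache:
    """Toy local cache with server-pushed invalidation, à la CLIENT TRACKING."""

    def __init__(self, fetch):
        self._fetch = fetch   # function that actually queries Redis
        self._local = {}      # in-process cache of previously read values

    def get(self, key):
        # Serve from local memory; fall back to the server on a miss.
        if key not in self._local:
            self._local[key] = self._fetch(key)
        return self._local[key]

    def on_invalidate(self, key):
        # Called when the server reports that `key` has changed.
        self._local.pop(key, None)

server = {"user:42": "alice"}           # stand-in for Redis
calls = []                              # record of server round-trips
cache = TrackingCache(lambda k: (calls.append(k), server[k])[1])

cache.get("user:42")                    # first read hits the "server"
cache.get("user:42")                    # second read served locally
server["user:42"] = "bob"               # a write happens server-side...
cache.on_invalidate("user:42")          # ...and the client is notified
print(cache.get("user:42"), len(calls))
```

Two server round-trips for three reads plus a write — and the local copy is never stale longer than the invalidation delivery takes.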

I haven’t deployed this in production yet — client library support is still catching up — but I’m watching the Lettuce (Java) and redis-py implementations closely.

My Take

Redis 6.0 is the most significant Redis release in years, not because any single feature is groundbreaking, but because the collection of features moves Redis from “fast cache that you protect with network rules” to “production-grade data store with real security and better scalability.”

If you’re still running Redis 5.x, the upgrade path is smooth. The new features are all opt-in — the ACL system boots with a single default user that has full access (so existing requirepass setups keep working), I/O threading is disabled by default, and TLS requires explicit configuration. There’s very little risk in upgrading, and the ACL support alone justifies the effort.

The Redis ecosystem continues to impress me with its balance of simplicity and capability. In an industry that loves to pile on complexity, Redis’s commitment to doing a few things extremely well remains refreshing.
