
AWS re:Invent 2021 Kicks Off — Serverless and the Cloud Keep Evolving

·950 words·5 mins
Osmond van Hemert
Cloud Platform Watch - This article is part of a series.
Part : This Article

AWS re:Invent is back in person this week after last year’s fully virtual event, and Las Vegas is once again filled with cloud engineers comparing badge ribbons. I’ve been following the announcements remotely, and while the full keynote slate is still ahead of us, the pre-conference launches and early sessions are already revealing where Amazon sees cloud infrastructure heading.

This year’s event comes at an interesting moment. Cloud adoption accelerated dramatically during the pandemic, and many organisations are now past the “lift and shift” phase and into the “how do we actually operate efficiently in the cloud” phase. The announcements so far reflect that maturity shift.

The Serverless Story Continues

AWS has been steadily expanding the serverless model beyond Lambda functions, and the direction is clear: they want serverless to be the default deployment model, not the exception.

Lambda itself has seen incremental improvements throughout 2021 — longer execution times, larger ephemeral storage, better cold start performance. But the more interesting story is how serverless thinking is permeating other services.

Amazon Aurora Serverless v2 has been in preview, and general availability is expected soon. The promise of a relational database that scales down when idle and handles burst traffic without pre-provisioning is compelling. I’ve been running Aurora Serverless v1 for low-traffic applications, and while v1 had painful cold start issues (30+ seconds to resume from a pause), the v2 architecture is fundamentally different: it scales in increments of 0.5 ACUs and can go from idle to full capacity in under a second.
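For teams planning ahead, v2 capacity is expressed as a minimum/maximum ACU range rather than v1’s tiers. A rough sketch of what that looks like with the AWS CLI, based on the preview documentation — the cluster name and capacity bounds are illustrative, and the flags may still change before GA:

```shell
# Create an Aurora Serverless v2 cluster that scales between 0.5 and 16 ACUs.
# Identifier and bounds are placeholders, not recommendations.
aws rds create-db-cluster \
  --db-cluster-identifier demo-sls-v2 \
  --engine aurora-mysql \
  --serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=16
```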

AWS App Runner, launched earlier this year, is Amazon’s answer to the “I just want to deploy a container without thinking about infrastructure” use case. It’s not as mature as Google Cloud Run or Azure Container Apps, but it represents AWS acknowledging that not every team wants to configure VPCs, load balancers, and auto-scaling groups just to run a web service.

Graviton and the ARM Transition

One of the more consequential stories at this year’s re:Invent is the continued expansion of AWS Graviton processors. The Graviton2-based instances have been available for over a year now, and the performance-per-dollar advantage is real — up to 40% better price-performance compared to equivalent x86 instances for many workloads.

Graviton3 is expected to be announced this week, and early indicators suggest another significant performance jump. For developers, this means taking ARM compatibility seriously if you haven’t already.

In practice, most modern application stacks work fine on ARM64. If you’re running containerised workloads with interpreted languages (Python, Node.js, Ruby), the switch is often as simple as building multi-architecture Docker images. Compiled languages need ARM64 builds, but Go, Rust, and .NET all have excellent cross-compilation support.
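If your base images are published for both architectures, a multi-architecture build is mostly a tooling exercise. A minimal sketch with Docker Buildx (the registry and image name are placeholders):

```shell
# One-time: create and select a builder that can emit multiple platforms
docker buildx create --name multiarch --use

# Build for x86-64 and ARM64 in one pass and push a multi-arch manifest
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/my-service:latest \
  --push .
```

The same image tag then resolves to the right architecture whether it’s pulled on an x86 instance or a Graviton one.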

Where teams run into issues is with native dependencies — packages that include compiled C/C++ extensions. In the Node.js world, packages like sharp, bcrypt, and node-sass have ARM64 variants, but you occasionally hit a library that doesn’t. It’s worth auditing your dependency tree now.

The cost savings are significant enough that I’d recommend every team at least test their workloads on Graviton instances. For compute-heavy batch processing and containerised microservices, switching to Graviton can reduce your EC2 bill by 20-40%.

Observability and the Operational Gap

A recurring theme in the sessions I’ve been following is observability. AWS has been investing in CloudWatch, X-Ray, and the recently launched CloudWatch Evidently and CloudWatch RUM (Real User Monitoring). The message is clear: as architectures become more distributed, understanding what’s happening in production becomes harder and more important.

The challenge for AWS has always been that their native observability tools lag behind dedicated platforms like Datadog, New Relic, and Grafana Cloud. CloudWatch metrics are essential, but the dashboarding and alerting experience remains clunky compared to third-party alternatives.

What’s encouraging is the OpenTelemetry adoption. AWS has been contributing to OpenTelemetry and supporting the AWS Distro for OpenTelemetry (ADOT) as a first-class option. This is the right approach — standardise on open instrumentation protocols and let teams choose their analysis backend. I’ve been migrating several projects from X-Ray-specific instrumentation to OpenTelemetry, and the flexibility is worth the effort.
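As a sketch of what that decoupling looks like in practice, here is a minimal OpenTelemetry Collector pipeline of the kind ADOT ships: OTLP traces in, X-Ray out. The values are illustrative (the region is an assumption), and the point is that you could swap the exporter for a third-party backend without touching application instrumentation:

```yaml
# Minimal collector config: OTLP in, AWS X-Ray out (illustrative values)
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
  batch: {}
exporters:
  awsxray:
    region: eu-west-1   # assumed region
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [awsxray]
```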

Cost Management: The Unsexy Essential

Among the less headline-grabbing but critically important developments is AWS’s continued investment in cost management tooling. The new Cost Anomaly Detection uses machine learning to identify unexpected spending patterns, and Savings Plans now cover more service categories.

This matters because cloud cost management is the number one operational concern I hear from engineering teams. The pay-as-you-go model that makes cloud attractive also makes it unpredictable. I’ve seen startups get bill shock from a misconfigured auto-scaling group or a forgotten development environment running over a weekend.

If you’re running any significant AWS workload and not using Cost Explorer with budgets and alerts, you’re flying blind. It’s not glamorous work, but it’s the kind of infrastructure discipline that separates mature cloud operations from expensive experiments.
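Setting up a basic guardrail takes minutes. A hedged sketch with the AWS CLI — the account ID, budget amount, and email address are all placeholders:

```shell
# Monthly cost budget with an email alert at 80% of actual spend
# (all identifiers below are illustrative)
aws budgets create-budget \
  --account-id 123456789012 \
  --budget '{"BudgetName": "monthly-cap",
             "BudgetLimit": {"Amount": "500", "Unit": "USD"},
             "TimeUnit": "MONTHLY",
             "BudgetType": "COST"}' \
  --notifications-with-subscribers '[{
      "Notification": {"NotificationType": "ACTUAL",
                       "ComparisonOperator": "GREATER_THAN",
                       "Threshold": 80,
                       "ThresholdType": "PERCENTAGE"},
      "Subscribers": [{"SubscriptionType": "EMAIL",
                       "Address": "team@example.com"}]}]'
```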

My Take

re:Invent has become so large that it’s impossible to absorb everything in real time. Hundreds of announcements across dozens of services, many of them incremental improvements that individually seem minor but collectively reshape how we build software.

The trends I’m watching most closely are the serverless expansion (particularly Aurora Serverless v2), the Graviton processor line, and the OpenTelemetry adoption. These represent genuine improvements in cost, performance, and operational sanity — the things that matter when you’re actually running production workloads, not just building demos.

I’ll be diving deeper into specific announcements as the keynotes roll out over the coming days. For now, if you’re not at re:Invent, the livestreams and session recordings are excellent. And if you are there — stay hydrated, wear comfortable shoes, and remember that the expo hall is not a viable lunch strategy, no matter how many free snacks are on offer.
