
OpenTelemetry Reaches Full Maturity — Observability Finally Has a Standard

946 words · 5 mins
Osmond van Hemert
Developer Tooling - This article is part of a series.

It’s been a long road, but OpenTelemetry has effectively reached full maturity. With the logging signal now generally available across the major language SDKs — joining traces and metrics that have been stable for a while — the project has delivered on its original promise: a single, vendor-neutral standard for all three pillars of observability. For those of us who’ve spent years navigating the fragmented observability tooling landscape, this is a genuinely significant milestone.

Why This Matters

To appreciate why OpenTelemetry’s maturity is a big deal, you need to understand the pain it was designed to solve. Before OTel, if you wanted observability in your application, you were essentially locked into a vendor’s SDK. Want to use Datadog? Import their libraries. Prefer New Relic? Different libraries. Switching vendors meant re-instrumenting your entire codebase.

The OpenTracing and OpenCensus projects tried to solve this with open standards, but having two competing “open” standards was almost worse than having none. The merger of these projects into OpenTelemetry in 2019 was the right move, but it meant years of development to build a comprehensive, production-ready framework.

Now, with all three signals — traces, metrics, and logs — at GA status, teams can instrument once and send telemetry data to any compatible backend. Jaeger, Zipkin, Prometheus, Datadog, New Relic, Grafana Cloud, Honeycomb — they all speak OTel. Your instrumentation code is no longer a vendor commitment.

The Logging Signal Changes Things

Traces and metrics reaching GA was important, but logging completing the picture changes how we should think about observability architectures. Here’s why:

Traditional logging (think: structured JSON logs sent to Elasticsearch or CloudWatch) and distributed tracing have historically been separate systems with separate instrumentation, separate storage, and separate query interfaces. Correlating a log entry with a trace span required manually propagating trace IDs through your logging framework — doable, but tedious and error-prone.
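To make the "tedious and error-prone" part concrete, here is a minimal sketch of what manual correlation looked like pre-OTel: a stdlib `logging.Filter` that injects a trace ID (stored in a hypothetical `contextvars` variable your tracing middleware would set) into every log record. Every service had to carry plumbing like this, and any service that forgot it produced uncorrelatable logs.

```python
import contextvars
import logging

# Hypothetical per-request trace ID, set by tracing middleware at request start.
current_trace_id = contextvars.ContextVar("current_trace_id", default="none")

class TraceIdFilter(logging.Filter):
    """Inject the current trace ID into every log record."""
    def filter(self, record):
        record.trace_id = current_trace_id.get()
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s trace=%(trace_id)s %(message)s"))
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.addFilter(TraceIdFilter())
logger.setLevel(logging.INFO)

current_trace_id.set("4bf92f3577b34da6")  # normally done by middleware
logger.info("charging card")  # emits: INFO trace=4bf92f3577b34da6 charging card
```

This works, but it has to be repeated in every service and kept in sync with whatever the tracing library considers the active context.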

With OpenTelemetry’s logging signal, logs are first-class citizens in the same telemetry pipeline as traces and metrics. Log records can be automatically correlated with trace contexts, meaning you can click from a slow trace span directly into the relevant log entries without manual correlation. This is the kind of integrated observability experience that was previously only available within proprietary platforms.

The OTel log bridge API is particularly well-designed. Rather than asking you to replace your existing logging framework (nobody wants to rewrite every logger.info() call), it bridges your existing logging library — whether that’s SLF4J, Python’s logging module, or Winston in Node.js — into the OTel pipeline. You keep your familiar logging patterns while gaining OTel’s correlation and export capabilities.
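The shape of the bridge idea can be sketched without the OTel SDK itself: a custom `logging.Handler` that converts stdlib `LogRecord`s into pipeline records and hands them to an exporter. The `OTelBridgeHandler` and `ListExporter` names below are illustrative, not the real OTel API, but the key property is the same: existing `logger.info()` call sites are untouched.

```python
import logging

class OTelBridgeHandler(logging.Handler):
    """Toy sketch of a log bridge: convert stdlib LogRecords into
    pipeline records instead of rewriting existing logger calls."""
    def __init__(self, exporter):
        super().__init__()
        self.exporter = exporter  # anything with an export(record_dict) method

    def emit(self, record):
        self.exporter.export({
            "body": record.getMessage(),
            "severity": record.levelname,
            "attributes": {"logger.name": record.name},
            # a real bridge would also attach the active trace/span context here
        })

class ListExporter:
    """Stand-in exporter that just collects records."""
    def __init__(self):
        self.records = []
    def export(self, rec):
        self.records.append(rec)

exporter = ListExporter()
logger = logging.getLogger("orders")
logger.addHandler(OTelBridgeHandler(exporter))
logger.setLevel(logging.INFO)

logger.info("order %s shipped", 42)  # existing call sites stay exactly as they are
```

In the real SDKs, the handler attaches the active trace and span IDs to each record, which is what enables the click-from-span-to-logs workflow described above.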

Practical Adoption Patterns

Having worked on several projects implementing OpenTelemetry over the past year, I’ve developed some opinions about what works and what doesn’t.

Start with auto-instrumentation: Every major OTel SDK offers auto-instrumentation that hooks into common frameworks and libraries without code changes. For a typical web application, auto-instrumentation gives you HTTP request traces, database query spans, and outbound HTTP call spans essentially for free. Start here and add manual instrumentation for your business-critical code paths later.
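For Python, the setup is roughly the following (a config fragment using the `opentelemetry-distro` tooling; treat the service name and endpoint as placeholders for your environment):

```shell
# Install the distro and exporter, then detect and install
# instrumentation packages for the libraries in your environment.
pip install opentelemetry-distro opentelemetry-exporter-otlp
opentelemetry-bootstrap -a install

# Run the app under the auto-instrumentation wrapper -- no code changes.
OTEL_SERVICE_NAME=checkout \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 \
opentelemetry-instrument python app.py
```

Other languages follow the same pattern: a wrapper or agent plus `OTEL_*` environment variables, with no edits to application code.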

Use the OTel Collector: Don’t send telemetry directly from your application to your backend. Deploy the OTel Collector as an intermediary. It handles batching, retrying, and routing, and it lets you change backends without touching your application configuration. I’ve seen teams skip the Collector for simplicity and regret it when they need to add filtering, sampling, or a second backend destination.
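A minimal Collector configuration makes the point about switching backends tangible. This is a sketch: the backend endpoint is a placeholder, and a production config would add authentication and more processors.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  otlphttp/backend-a:
    endpoint: https://otel.example-backend.com
  # Adding a second destination later is a change here,
  # not a code change in every service.

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/backend-a]
```

Applications only ever know the Collector's address; everything downstream becomes a configuration concern.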

Implement tail-based sampling for traces: Head-based sampling (deciding whether to sample at the start of a trace) is simple but wasteful — you’ll miss interesting traces and capture boring ones. Tail-based sampling in the Collector lets you make sampling decisions after seeing the complete trace, keeping error traces and slow traces while dropping routine ones. This can reduce your storage costs by 90% while keeping the data that actually matters.
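The Collector's tail-sampling processor expresses these policies in YAML, but the decision logic itself is simple enough to sketch in a few lines of Python (thresholds and the 5% base rate are illustrative):

```python
import random

def keep_trace(spans, slow_ms=1000, base_rate=0.05, rng=random.random):
    """Toy tail-sampling policy, applied after the whole trace has been
    assembled: always keep error and slow traces, sample the rest."""
    if any(span.get("error") for span in spans):
        return True                     # keep every trace containing an error
    if any(span["duration_ms"] > slow_ms for span in spans):
        return True                     # keep every slow trace
    return rng() < base_rate            # keep ~5% of routine traces

# A failed checkout is always kept; a fast, healthy health check rarely is.
failed = [{"name": "POST /checkout", "duration_ms": 120, "error": True}]
routine = [{"name": "GET /healthz", "duration_ms": 3, "error": False}]
```

The crucial difference from head-based sampling is the input: the decision sees the *complete* trace, so "contains an error" and "was slow end-to-end" are knowable at decision time.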

Define semantic conventions early: OpenTelemetry defines semantic conventions for common attributes like http.method, db.system, and rpc.service. Adopt these consistently from the start. Custom attributes are fine for domain-specific data, but inconsistent naming across services will make your observability data much harder to query.
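In practice this looks like the attribute dictionary below, with one custom, namespaced attribute alongside the standard ones (exact conventional names depend on the semantic-conventions version you pin; `app.order.tier` and the lint helper are my own illustration):

```python
# Conventional attribute names alongside a domain-specific custom attribute.
span_attributes = {
    "http.method": "GET",          # standard convention
    "http.route": "/orders/{id}",  # standard convention
    "db.system": "postgresql",     # standard convention
    "app.order.tier": "premium",   # custom, namespaced to avoid collisions
}

def nonstandard_http_keys(attrs):
    """Flag ad-hoc HTTP attribute names that will fragment queries later."""
    conventional = {"http.method", "http.route", "http.status_code", "http.target"}
    return [k for k in attrs if k.startswith("http") and k not in conventional]

# e.g. "httpMethod" or "http_status" would be flagged; "http.method" would not.
```

A lightweight check like this in CI is a cheap way to stop `httpMethod` vs `http.method` drift before it spreads across services.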

The Vendor Landscape Responds

It’s been interesting watching observability vendors respond to OTel’s maturation. The smart ones — Honeycomb, Grafana Labs, Lightstep (now part of ServiceNow) — embraced OTel early and built their products around it. Others have been slower to adapt, maintaining proprietary agents while adding OTel compatibility as a secondary option.

The market dynamic is shifting. When your instrumentation is vendor-neutral, switching costs drop dramatically. Vendors need to compete on analysis capabilities, user experience, and pricing rather than lock-in. This is good for customers and, ultimately, good for the industry.

My Take

I’ve been following OpenTelemetry since the OpenTracing days, and I’ll admit there were moments when I wondered if the project would ever reach this point. The scope was ambitious, the governance was complex, and the pace of development sometimes felt glacial. But the approach of building a comprehensive, vendor-neutral standard — rather than shipping something quick and incomplete — has paid off.

For teams that haven’t adopted OpenTelemetry yet, now is the time. The “it’s not mature enough” objection is no longer valid. All three signals are GA. The auto-instrumentation libraries cover most common frameworks. The Collector is battle-tested. The ecosystem of compatible backends is broad.

If you’re still using vendor-specific SDKs for your observability, start planning your migration. Not because your current vendor is bad, but because vendor-neutral instrumentation gives you options. And in infrastructure, having options is always better than the alternative.

The observability space has been waiting for a true standard for years. OpenTelemetry has delivered, and it’s time to build on that foundation.
