It’s been a long road, but OpenTelemetry has finally reached general availability for all three pillars of observability: traces, metrics, and now logs. With the logging specification and SDK implementations hitting stable status this fall, the project completes a journey that started back in 2019 when OpenTracing and OpenCensus merged. For those of us who’ve been dealing with observability tooling for years, this is a quiet but significant milestone.
Why This Completion Matters
If you’re wondering why a logging specification reaching GA is newsworthy, you probably haven’t felt the pain of running observability at scale across multiple services, languages, and backend vendors. Let me paint the picture.
In a typical microservices environment, you might have traces going to Jaeger, metrics going to Prometheus, and logs going to Elasticsearch via Fluentd. Three different collection agents, three different data formats, three different correlation mechanisms. When an incident occurs and you need to jump from a trace span to the relevant logs, you’re manually copying trace IDs and searching across systems. It works, but it’s slow and error-prone when you’re debugging at 3 AM.
OpenTelemetry’s promise has always been a unified, vendor-neutral standard for all telemetry data. With logs now GA, you can instrument your application once and have traces, metrics, and logs all using the same context propagation, the same attribute conventions, and the same export pipeline. A log line automatically carries the trace ID and span ID of the operation that produced it. Correlation becomes automatic rather than manual.
The OpenTelemetry Logging Model
The logging approach OpenTelemetry took is pragmatic and worth understanding. Rather than creating yet another logging framework to compete with Log4j, logback, Python’s logging module, or Winston, OpenTelemetry provides a Log Bridge API. This bridges existing logging frameworks into the OpenTelemetry ecosystem.
In practice, this means you keep using whatever logging library you already use. You add an OpenTelemetry log appender/handler, and your existing log statements automatically get enriched with trace context, resource attributes, and semantic conventions. The logs then flow through the same OpenTelemetry Collector pipeline as your traces and metrics.
This was the right design choice. Asking developers to replace their logging framework would have been a non-starter. By bridging instead of replacing, OpenTelemetry can be adopted incrementally — exactly how good infrastructure tools should work.
Here’s what it looks like in a Java application:
// Your existing logging code — no changes needed
logger.info("Processing order {}", orderId);
// The OpenTelemetry log appender automatically adds:
// - trace_id from the current span context
// - span_id from the current span
// - resource attributes (service.name, deployment.environment, etc.)
// - semantic conventions for structured attributes
And in Python:
import logging
from opentelemetry._logs import set_logger_provider
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor, ConsoleLogExporter
# Set up the OTel log provider once at startup
provider = LoggerProvider()
# The console exporter keeps the example self-contained; in production you would export OTLP to the Collector
provider.add_log_record_processor(BatchLogRecordProcessor(ConsoleLogExporter()))
set_logger_provider(provider)
# Bridge the standard library logger into OpenTelemetry
logging.getLogger().addHandler(LoggingHandler(logger_provider=provider))
logging.getLogger().setLevel(logging.INFO)
# Your existing logging continues to work exactly as before
logging.info("Processing order %s", order_id)
The Collector as Universal Pipeline
With all three signal types now stable, the OpenTelemetry Collector becomes an incredibly powerful piece of infrastructure. A single Collector deployment can receive traces, metrics, and logs from your applications, process them (filtering, sampling, enriching, transforming), and export them to any supported backend.
The processor pipeline is where this gets really interesting. You can:
- Sample traces based on error status or latency thresholds, and automatically keep the associated logs
- Derive metrics from trace spans (request duration histograms, error rates) without additional instrumentation
- Enrich logs with Kubernetes metadata by connecting to the K8s API
- Filter sensitive data from all telemetry types in a single processor
- Route different signal types to different backends while maintaining correlation
I’ve been running a Collector setup that sends traces to Jaeger, metrics to Prometheus, and logs to Loki — all through a single pipeline. The operational simplification compared to running separate collection agents for each signal type is substantial. One deployment to manage, one configuration format, one set of health checks.
Migration Strategy
If you’re currently running a traditional observability stack (ELK for logs, Prometheus for metrics, Jaeger for traces), here’s the migration path I’d recommend:
Phase 1 — Traces first: If you haven’t already, instrument your services with OpenTelemetry tracing. The trace SDKs have been GA for over a year and are production-ready. This gives you the foundation for context propagation.
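As a rough sketch of that foundation, here is a minimal Python tracing setup exporting OTLP to a local Collector. The service name, environment, span name, and the default localhost:4317 endpoint are placeholders, not anything prescribed by the spec; adapt them to your deployment.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
# Resource attributes describe this service; they ride along on metrics and logs later, too
resource = Resource.create({"service.name": "orders-service", "deployment.environment": "prod"})
# Export spans over OTLP/gRPC to a Collector (defaults to localhost:4317)
provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("process-order"):
    pass  # your existing business logic runs inside the span
Once this is in place, every span created here becomes the context that later log records can attach to.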
Phase 2 — Metrics alongside Prometheus: OpenTelemetry metrics can export in Prometheus format, so you can run both in parallel. Gradually migrate custom metrics to OpenTelemetry instrumentation while keeping your Prometheus infrastructure.
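For the parallel-run stage, a sketch like the following exposes OpenTelemetry metrics on a scrape endpoint your existing Prometheus can pick up. It assumes the opentelemetry-exporter-prometheus package is installed; the port, metric name, and attributes are placeholders.
from prometheus_client import start_http_server
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.exporter.prometheus import PrometheusMetricReader
# Expose a Prometheus scrape endpoint on :9464 and register the OTel reader behind it
start_http_server(port=9464)
metrics.set_meter_provider(MeterProvider(metric_readers=[PrometheusMetricReader()]))
# New instrumentation goes through the OTel API while the old Prometheus client code keeps working
meter = metrics.get_meter(__name__)
orders_counter = meter.create_counter("orders_processed", description="Orders handled")
orders_counter.add(1, {"region": "eu-west-1"})
Prometheus scrapes both its existing targets and this new endpoint during the transition, so dashboards keep working while custom metrics migrate.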
Phase 3 — Logs bridge: Add the OpenTelemetry log bridge to your existing logging framework. Route logs through the Collector. You’ll immediately get trace-correlated logs without changing any application code.
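Concretely, this phase is the log bridge from the earlier Python snippet with the console exporter swapped for an OTLP exporter pointed at the Collector. The gRPC module path below has moved between releases, so treat it as an assumption and check your installed version; the insecure local endpoint is likewise just an example.
import logging
from opentelemetry._logs import set_logger_provider
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
# Same bridge setup as before, but log records now flow over OTLP to a local Collector
provider = LoggerProvider()
provider.add_log_record_processor(BatchLogRecordProcessor(OTLPLogExporter(insecure=True)))
set_logger_provider(provider)
logging.getLogger().addHandler(LoggingHandler(logger_provider=provider))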
Phase 4 — Consolidate backends: Once all three signals flow through the Collector, you can evaluate whether to consolidate on a single backend (like Grafana’s LGTM stack) or keep specialized backends with the Collector handling routing.
My Take
I’ve lived through the observability evolution from Nagios check scripts to the current ecosystem, and OpenTelemetry is the most important infrastructure standard to emerge in the last decade. Not because it’s technically revolutionary — many of the ideas existed in proprietary form — but because it provides a vendor-neutral foundation that prevents lock-in.
The logging GA specifically matters because logs are still how most developers debug. Traces are powerful but conceptually harder. Metrics are great for dashboards but don’t tell you why something broke. Logs with trace context give you the “why” linked directly to the “where” and “when.”
If your organization hasn’t started adopting OpenTelemetry, now is the time. The standard is stable, the ecosystem is mature, and the major observability vendors — Datadog, New Relic, Dynatrace, Grafana Labs — all support it. You’re no longer an early adopter; you’re following a well-trodden path.
The three pillars are complete. The excuses for not having correlated observability are running out.

