Bridging Prometheus and OpenTelemetry: practical patterns for scalable metrics and Grafana dashboards

Prometheus and Grafana are often the heart of application monitoring, while OpenTelemetry is becoming the lingua franca for instrumenting services. Think of the combination as a band: Prometheus keeps the beat (time-series storage and query), OpenTelemetry writes the sheet music (rich, language-neutral telemetry), and Grafana conducts the ensemble (dashboards and SLOs). This article walks through the increasingly common pattern of using the OpenTelemetry Collector as a bridge between application instrumentation and Prometheus + Grafana, with a focus on scaling and long-term storage via Prometheus remote_write (for example, to Grafana Mimir). The goal is to explain why the pattern exists, how the pieces map together, and to provide practical config snippets that illustrate the flow.

Why bridge OpenTelemetry and Prometheus?

Two common bridging patterns

  1. Collector exposes a Prometheus scrape endpoint (pull model).
    • The OpenTelemetry Collector can host a Prometheus exporter endpoint that exposes translated metrics in the Prometheus text format; Prometheus scrapes that endpoint like any other target. This pattern fits teams that want to keep Prometheus as the collector of record for short-term, high-resolution storage while centralizing instrumentation in OTLP. (opentelemetry.io)
  2. Collector (or Prometheus) pushes to long-term storage via remote_write (push model).
    • Prometheus has a native remote_write mechanism that forwards samples to compatible backends. Grafana Mimir and other long-term stores implement the same receiver API, so Prometheus (or agents) can push metrics for long retention, multi-tenancy, and horizontal scale. The remote-write protocol continues to evolve (Remote-Write 2.0 adds richer metadata and semantics). (prometheus.io) A Collector-side sketch of this push variant appears just below.
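
The Collector itself can also push directly to a remote_write-compatible backend, bypassing a local Prometheus entirely, via the prometheusremotewrite exporter shipped in the Collector contrib distribution. A minimal sketch under that assumption, with `mimir` as a hypothetical backend hostname:

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}   # accept OTLP from instrumented services

exporters:
  prometheusremotewrite:
    endpoint: "http://mimir:9009/api/v1/push"   # hypothetical Mimir address

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheusremotewrite]
```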

How the Collector fits in (practical roles)

Concrete config examples (illustrative)

- Collector: expose a Prometheus scrape endpoint (pull model)
```yaml
receivers:
  otlp:
    protocols:
      grpc: {}   # accept OTLP from instrumented services
      http: {}

exporters:
  prometheus:
    endpoint: "0.0.0.0:9464"   # Prometheus will scrape this

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```

This makes the Collector the scrape target on port 9464; Prometheus scrape jobs point at the Collector pods/services. The OpenTelemetry Prometheus exporter follows the text-format conventions and can be configured to translate OTel metric names to Prometheus naming conventions where needed. (opentelemetry.io)
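
The Prometheus side of this pull pattern is an ordinary scrape job. A minimal sketch, where `otel-collector` is a placeholder for however the Collector service is addressed in your environment:

```yaml
scrape_configs:
  - job_name: "otel-collector"
    static_configs:
      - targets: ["otel-collector:9464"]   # placeholder Collector address
```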

- Prometheus: remote_write to Grafana Mimir (long-term store)
```yaml
remote_write:
  - url: "http://mimir:9009/api/v1/push"
    # optional: bearer_token_file, queue_config, send_exemplars, send_native_histograms
```

Grafana Mimir exposes a push receiver at POST /api/v1/push, making it compatible with Prometheus remote_write clients such as Prometheus itself, the Grafana Agent, or other shippers. (grafana.com)
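
If the backend runs with multi-tenancy enabled, writing clients typically identify their tenant in a request header; for Mimir this is X-Scope-OrgID. A hedged sketch with a made-up tenant name:

```yaml
remote_write:
  - url: "http://mimir:9009/api/v1/push"
    headers:
      X-Scope-OrgID: "team-payments"   # hypothetical tenant ID
```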

Notes on the important trade-offs
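
One recurring trade-off is label cardinality: attributes that are cheap as OTLP metadata can explode the series count once they become Prometheus labels. The Collector is a natural place to prune them; a minimal sketch using the contrib attributes processor, with session_id as a hypothetical high-cardinality attribute:

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}

processors:
  attributes/drop-high-cardinality:
    actions:
      - key: session_id   # hypothetical high-cardinality attribute
        action: delete

exporters:
  prometheus:
    endpoint: "0.0.0.0:9464"

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [attributes/drop-high-cardinality]
      exporters: [prometheus]
```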

SLOs and dashboards in Grafana: why long-term metrics matter
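
To make the retention argument concrete: a 30-day availability SLO is only computable if 30 days of samples remain queryable, which local Prometheus retention often does not guarantee. A hedged recording-rule sketch, assuming a hypothetical http_requests_total metric with a code label:

```yaml
groups:
  - name: slo-availability
    rules:
      - record: job:http_availability:ratio_rate30d
        expr: |
          sum(rate(http_requests_total{code!~"5.."}[30d]))
            /
          sum(rate(http_requests_total[30d]))
```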

Operational realities and recent developments

Practical debugging pointers (conceptual)
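
One concrete tactic when datapoints vanish somewhere in the pipeline: tee the Collector's metrics pipeline into the debug exporter (available in recent Collector builds) so everything it receives is printed to stdout. A sketch under that assumption:

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}

exporters:
  prometheus:
    endpoint: "0.0.0.0:9464"
  debug:
    verbosity: detailed   # print each received datapoint to stdout

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus, debug]   # tee to stdout while debugging
```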

Closing thoughts

Instrumenting with OpenTelemetry and keeping Prometheus + Grafana for querying and dashboards can combine the best of both worlds: standardized, language-agnostic instrumentation with a powerful query engine and visualization layer. The Collector-as-bridge pattern lets teams centralize telemetry pipelines, handle transformations, and forward metrics to scalable long-term stores (e.g., Grafana Mimir) via the well-established Prometheus remote_write path. The design choices center on scrape versus push, label management, histogram semantics, and the desired retention/query-performance trade-offs, each of which affects dashboard fidelity and SLO calculations. (uptrace.dev)
