1 - Using OpenTelemetry Collector to collect traces

How to use Dapr to push trace events through the OpenTelemetry Collector.

Dapr directly writes traces using the OpenTelemetry protocol (OTLP) as the recommended method. For observability tools that support OTLP directly, it is recommended to use the OpenTelemetry Collector, as it allows your application to quickly offload data and includes features such as retries, batching, and encryption. For more information, read the OpenTelemetry Collector documentation.

Dapr can also write traces using the Zipkin protocol. Before OTLP support was added, the Zipkin protocol was used with the OpenTelemetry Collector to send traces to observability tools such as AWS X-Ray, Google Cloud Operations Suite, and Azure Monitor. Both approaches are valid; however, the OpenTelemetry protocol is the recommended choice.

Using the OpenTelemetry Collector to integrate with many backends

Prerequisites

Set up OTEL Collector to push to your trace backend

  1. Check out the open-telemetry-collector-generic.yaml.

  2. Replace the <your-exporter-here> section with the correct settings for your trace exporter.

  3. Apply the configuration with:

    kubectl apply -f open-telemetry-collector-generic.yaml
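As a concrete illustration, the exporter settings for a backend such as Zipkin might look like the following. This is only a sketch: the zipkin exporter and endpoint shown here are assumptions, and the right exporter depends entirely on your backend.

```yaml
# Hypothetical replacement for the <your-exporter-here> section.
# The zipkin exporter and its endpoint are illustrative assumptions;
# substitute the exporter your trace backend requires.
exporters:
  zipkin:
    endpoint: "https://zipkin.example.com/api/v2/spans"
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin]
```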
    

Set up Dapr to send traces to OTEL Collector

Set up a Dapr configuration file to turn on tracing and deploy a tracing exporter component that uses the OpenTelemetry Collector.

  1. Use this collector-config.yaml file to create your own configuration.

  2. Apply the configuration with:

    kubectl apply -f collector-config.yaml
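The collector-config.yaml from step 1 generally follows the shape below. This is a sketch; the collector service address is an assumption and should match where your OpenTelemetry Collector is actually deployed.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    otel:
      # Assumed in-cluster collector address; adjust the service name,
      # namespace, and port to match your deployment
      endpointAddress: "otel-collector.default.svc.cluster.local:4317"
      isSecure: false
      protocol: grpc
```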
    

Deploy your app with tracing

Apply the appconfig configuration by adding a dapr.io/config annotation to the deployment of each application that you want to participate in distributed tracing, as shown in the following example:

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "MyApp"
        dapr.io/app-port: "8080"
        dapr.io/config: "appconfig"

You can register multiple tracing exporters at the same time, and the tracing logs are forwarded to all registered exporters.

That’s it! There’s no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.

View traces

Deploy and run some applications. Wait for the traces to propagate to your tracing backend and view them there.
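If traces don't show up, a couple of checks can help narrow things down. The deployment name below is an assumption; use whatever name your collector manifest defines.

```shell
# Inspect recent collector logs for exporter errors
# (assumes the collector deployment is named "otel-collector")
kubectl logs deployment/otel-collector --tail=50

# Confirm the Dapr tracing configuration was registered
kubectl get configurations.dapr.io appconfig -o yaml
```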

2 - Using Dynatrace OpenTelemetry Collector to collect traces to send to Dynatrace

How to push trace events to Dynatrace, using the Dynatrace OpenTelemetry Collector.

Dapr integrates with the Dynatrace Collector using the OpenTelemetry protocol (OTLP). This guide walks through an example using Dapr to push traces to Dynatrace, using the Dynatrace version of the OpenTelemetry Collector.

Prerequisites

  • Install Dapr on Kubernetes
  • Access to a Dynatrace tenant and an API token with openTelemetryTrace.ingest, metrics.ingest, and logs.ingest scopes
  • Helm

Set up Dynatrace OpenTelemetry Collector to push to your Dynatrace instance

To push traces to your Dynatrace instance, install the Dynatrace OpenTelemetry Collector on your Kubernetes cluster.

  1. Create a Kubernetes secret with your Dynatrace credentials:

    kubectl create secret generic dynatrace-otelcol-dt-api-credentials \
      --from-literal=DT_ENDPOINT=https://YOUR_TENANT.live.dynatrace.com/api/v2/otlp \
      --from-literal=DT_API_TOKEN=dt0s01.YOUR_TOKEN_HERE
    

    Replace YOUR_TENANT with your Dynatrace tenant ID and YOUR_TOKEN_HERE with your Dynatrace API token.

  2. Use the Dynatrace OpenTelemetry Collector distribution for better defaults and support than the open source version. Download and inspect the collector-helm-values.yaml file. This is based on the k8s enrichment demo and includes Kubernetes metadata enrichment for proper pod/namespace/cluster context.

  3. Deploy the Dynatrace Collector with Helm:

    helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
    helm repo update
    helm upgrade -i dynatrace-collector open-telemetry/opentelemetry-collector -f collector-helm-values.yaml
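Before wiring up Dapr, you can check that the collector pods are running. The label selector below assumes the Helm release name dynatrace-collector used above.

```shell
# List pods created by the "dynatrace-collector" Helm release
kubectl get pods -l app.kubernetes.io/instance=dynatrace-collector
```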
    

Set up Dapr to send traces to the Dynatrace Collector

Create a Dapr configuration file to enable tracing and send traces to the OpenTelemetry Collector via OTLP.

  1. Create a collector-config-otel.yaml file from the following, ensuring the endpointAddress points to your Dynatrace OpenTelemetry Collector service in your Kubernetes cluster. If the collector is deployed in the default namespace, the address is typically dynatrace-collector.default.svc.cluster.local.

    Important: Ensure the endpointAddress does NOT include the http:// prefix to avoid URL encoding issues:

     apiVersion: dapr.io/v1alpha1
     kind: Configuration
     metadata:
       name: tracing
     spec:
       tracing:
         samplingRate: "1"
         otel:
           endpointAddress: "dynatrace-collector.default.svc.cluster.local:4318" # Update with your collector's service address
    
  2. Apply the configuration with:

    kubectl apply -f collector-config-otel.yaml
    

Deploy your app with tracing

Apply the tracing configuration by adding a dapr.io/config annotation to the Dapr applications that you want to include in distributed tracing, as shown in the following example:

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "MyApp"
        dapr.io/app-port: "8080"
        dapr.io/config: "tracing"

You can register multiple tracing exporters at the same time, and the tracing logs are forwarded to all registered exporters.

That’s it! There’s no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.

View traces

Deploy and run some applications. After a few minutes, you should see traces appearing in your Dynatrace tenant:

  1. Navigate to Search > Distributed tracing in your Dynatrace UI.
  2. Filter by service names to see your Dapr applications and their associated tracing spans.
Dynatrace showing tracing data.

3 - Using OpenTelemetry Collector to collect traces to send to App Insights

How to push trace events to Azure Application Insights, using the OpenTelemetry Collector.

Dapr integrates with OpenTelemetry (OTEL) Collector using the OpenTelemetry protocol (OTLP). This guide walks through an example using Dapr to push traces to Azure Application Insights, using the OpenTelemetry Collector.

Prerequisites

Set up OTEL Collector to push to your App Insights instance

To push traces to your Application Insights instance, install the OpenTelemetry Collector on your Kubernetes cluster.

  1. Download and inspect the open-telemetry-collector-appinsights.yaml file.

  2. Replace the <CONNECTION_STRING> placeholder with your App Insights connection string.

  3. Deploy the OpenTelemetry Collector into the same namespace where your Dapr-enabled applications are running:

    kubectl apply -f open-telemetry-collector-appinsights.yaml
    

Set up Dapr to send traces to the OpenTelemetry Collector

Create a Dapr configuration file to enable tracing and send traces to the OpenTelemetry Collector via OTLP.

  1. Download and inspect the collector-config-otel.yaml. Update the namespace and otel.endpointAddress values to align with the namespace where your Dapr-enabled applications and OpenTelemetry Collector are deployed.

  2. Apply the configuration with:

    kubectl apply -f collector-config-otel.yaml
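The collector-config-otel.yaml referenced above follows the same shape as the other Dapr tracing configurations. The sketch below is illustrative; the collector service name and namespace are assumptions that must match your own deployment.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default  # match the namespace of your apps and collector
spec:
  tracing:
    samplingRate: "1"
    otel:
      # Assumed collector service address; adjust to your deployment
      endpointAddress: "otel-collector.default.svc.cluster.local:4317"
      isSecure: false
      protocol: grpc
```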
    

Deploy your app with tracing

Apply the tracing configuration by adding a dapr.io/config annotation to the Dapr applications that you want to include in distributed tracing, as shown in the following example:

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "MyApp"
        dapr.io/app-port: "8080"
        dapr.io/config: "tracing"

You can register multiple tracing exporters at the same time, and the tracing logs are forwarded to all registered exporters.

That’s it! There’s no need to include any SDKs or instrument your application code. Dapr automatically handles the distributed tracing for you.

View traces

Deploy and run some applications. After a few minutes, you should see tracing logs appearing in your App Insights resource. You can also use the Application Map to examine the topology of your services, as shown below:

Application map

4 - Using OpenTelemetry to send traces to Jaeger V2

How to push trace events to Jaeger V2 distributed tracing platform using OpenTelemetry protocol.

Dapr supports writing traces using the OpenTelemetry (OTLP) protocol, and Jaeger V2 natively supports OTLP, allowing Dapr to send traces directly to a Jaeger V2 instance. This approach is recommended for production to leverage Jaeger V2’s capabilities for distributed tracing.

Configure Jaeger V2 in self-hosted mode

Local setup

The simplest way to start Jaeger is to run the pre-built, all-in-one Jaeger image published to the Jaeger container registry and expose the OTLP ports:

docker run --rm --name jaeger \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  -p 5778:5778 \
  -p 9411:9411 \
  cr.jaegertracing.io/jaegertracing/jaeger:2.11.0
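Once the container is up, a quick sanity check confirms it is running and the UI is reachable:

```shell
# Confirm the Jaeger container is running
docker ps --filter name=jaeger

# The Jaeger UI should respond on port 16686
curl -sf -o /dev/null http://localhost:16686 && echo "Jaeger UI is reachable"
```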

Next, create the following config.yaml file locally:

Note: Because you are using the OpenTelemetry protocol to talk to Jaeger, you need to fill out the otel section of the tracing configuration and set the endpointAddress to the address of the Jaeger container.

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    stdout: true
    otel:
      endpointAddress: "localhost:4317"
      isSecure: false
      protocol: grpc 

To launch your application with the new YAML configuration file, use the --config option. For example:

dapr run --app-id myapp --app-port 3000 --config config.yaml -- node app.js

View traces

To view traces in your browser, go to http://localhost:16686 to see the Jaeger UI.

Configure Jaeger V2 on Kubernetes

The following steps show you how to configure Dapr to send distributed tracing data directly to a Jaeger V2 instance deployed using the OpenTelemetry Operator with in-memory storage.

Prerequisites

Set up Jaeger V2 with the OpenTelemetry Operator

Jaeger V2 can be deployed using the OpenTelemetry Operator for simplified management and native OTLP support. The following example configures Jaeger V2 with in-memory storage.

Note on Storage Backends: This example uses in-memory storage (memstore) for simplicity, suitable for development or testing environments as it stores up to 100,000 traces in memory. For production environments, consider configuring a persistent storage backend like Cassandra or Elasticsearch to ensure trace data durability.

Installation

  1. Install cert-manager to manage certificates:

    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.19.1/cert-manager.yaml
    

    Verify that all resources in the cert-manager namespace are ready.
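One way to wait for readiness is with kubectl wait:

```shell
# Block until all cert-manager deployments report Available
kubectl wait --for=condition=Available deployment --all \
  -n cert-manager --timeout=120s
```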

  2. Install the OpenTelemetry Operator:

    kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
    

    Confirm that all resources in the opentelemetry-operator-system namespace are ready.

  3. Deploy a Jaeger V2 instance with in-memory storage. Save the following configuration as jaeger-inmemory.yaml:

    apiVersion: opentelemetry.io/v1beta1
    kind: OpenTelemetryCollector
    metadata:
      name: jaeger-inmemory-instance
      namespace: observability
    spec:
      image: jaegertracing/jaeger:latest
      ports:
      - name: jaeger
        port: 16686
      config:
        service:
          extensions: [jaeger_storage, jaeger_query]
          pipelines:
            traces:
              receivers: [otlp]
              exporters: [jaeger_storage_exporter]
        extensions:
          jaeger_query:
            storage:
              traces: memstore
          jaeger_storage:
            backends:
              memstore:
                memory:
                  max_traces: 100000
        receivers:
          otlp:
            protocols:
              grpc:
                endpoint: 0.0.0.0:4317
              http:
                endpoint: 0.0.0.0:4318
        exporters:
          jaeger_storage_exporter:
            trace_storage: memstore
    

    Apply it with:

    kubectl apply -f jaeger-inmemory.yaml -n observability
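After applying, the OpenTelemetry Operator creates a collector deployment and service for the instance. You can confirm they are up; resource names are derived by the operator from the jaeger-inmemory-instance name above.

```shell
# List the pods and services the operator created for the instance
kubectl get pods,svc -n observability
```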
    

Set up Dapr to send traces to Jaeger V2

Create a Dapr configuration file to enable tracing and export the sidecar traces directly to the Jaeger V2 instance.

  1. Create a configuration file (e.g., tracing.yaml) with the following content, updating the namespace and otel.endpointAddress to match your Jaeger V2 instance:

    apiVersion: dapr.io/v1alpha1
    kind: Configuration
    metadata:
      name: tracing
      namespace: order-system
    spec:
      tracing:
        samplingRate: "1"
        otel:
          endpointAddress: "jaeger-inmemory-instance-collector.observability.svc.cluster.local:4317"
          isSecure: false
          protocol: grpc
    
  2. Apply the configuration:

    kubectl apply -f tracing.yaml -n order-system
    

Deploy your app with tracing enabled

Apply the tracing Dapr configuration by adding a dapr.io/config annotation to the application deployment that you want to enable distributed tracing for, as shown in the following example:

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    metadata:
      ...
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "MyApp"
        dapr.io/app-port: "8080"
        dapr.io/config: "tracing"

You can register multiple tracing exporters at the same time, and the tracing logs are forwarded to all registered exporters.

That’s it! There’s no need to include the OpenTelemetry SDK or instrument your application code. Dapr automatically handles the distributed tracing for you.

View traces

To view Dapr sidecar traces, port-forward the Jaeger V2 service and open the UI:

kubectl port-forward svc/jaeger-inmemory-instance-collector 16686 -n observability

In your browser, go to http://localhost:16686 to see the Jaeger V2 UI.

jaeger
