How to ship traces, logs, and metrics to Observability using OpenTelemetry

STACKIT Kubernetes Engine (SKE) delivers CNCF-compliant Kubernetes clusters and makes it easy to deploy standard Kubernetes applications and containerized workloads. Customized Kubernetes clusters can easily be created as a self-service via the STACKIT Cloud Portal.

In this tutorial, we’ll take a look at how to use OpenTelemetry to ship various telemetry data (metrics, logs, and traces) from a managed Kubernetes cluster provided by the STACKIT Kubernetes Engine to the STACKIT Observability Service.

OpenTelemetry is an initiative that provides a collection of APIs, SDKs, and tools. You can use them to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) to help you analyze your software’s performance and behavior. OpenTelemetry provides a demo application, which conveniently covers all the use cases we want to demonstrate in this tutorial. Using the demo application lets us focus on the actual integration of the two STACKIT services instead of first building a custom application and instrumenting it with OpenTelemetry. For your own use case, it might be worth looking at the instrumentations available for many popular programming languages; instrumenting an existing application is out of scope for this tutorial.

The OpenTelemetry Collector is the key component that allows us to send telemetry data to Observability, and configuring it is the main step in this tutorial.

The Collector has four components:

  • Receivers (get data into the Collector)
  • Processors (run on data between receiving and exporting it)
  • Exporters (ship data to different destinations)
  • Connectors (connect two collection pipelines)

The demo application comes with some receivers, processors and exporters preconfigured to demonstrate how they work. We will additionally configure Observability as an exporter. We do not need to configure additional receivers, processors, or connectors for the purposes of this demo. If you’re interested, you can learn more about the behavior of these components in the Collector configuration docs.
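As a rough sketch of how these pieces fit together (placeholder configuration for illustration, not the demo chart's actual setup), a Collector config declares the components and then wires them into pipelines under the service section:

receivers:
  otlp:
    protocols:
      grpc: {}
processors:
  batch: {}
exporters:
  logging: {}
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]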

We will use the Helm chart provided by OpenTelemetry to deploy the demo application in a moment. Before that, we configure the Collector with the additional Observability exporters by overriding some of the default Helm chart values.

All exporters follow the same schema, which looks like this:

exporters:
  <type>/<name>:
    endpoint: <observability url for the respective telemetry type>
    headers:
      authorization: Basic <base64-encoded authorization string, consisting of username and password>
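
The value after Basic is a standard HTTP Basic-Auth credential. One way to generate it, assuming your Observability instance provides a username and password, is:

echo -n "<username>:<password>" | base64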

Let’s see this abstract configuration in action for metrics, logs, and traces, respectively.

prometheusremotewrite/observability:
  endpoint: {instance.pushMetricsUrl}
  headers:
    authorization: Basic {base64ConnectionString}
loki/observability:
  endpoint: {instance.logsPushUrl}
  headers:
    authorization: Basic {base64ConnectionString}
otlp/observability:
  endpoint: {instance.otlpTracesUrl}
  headers:
    authorization: Basic {base64ConnectionString}

We can now place the exporter configs into the overall Collector configuration and amend the existing pipelines. Adding the exporters to the respective pipelines is crucial, as the Collector only takes into account exporters that are part of a pipeline. Save the following to a file named values.yaml, with the correct information from your Observability instance filled in.

values.yaml

opentelemetry-collector:
  config:
    exporters:
      prometheusremotewrite/observability:
        endpoint: {instance.pushMetricsUrl}
        headers:
          authorization: Basic {base64ConnectionString}
      loki/observability:
        endpoint: {instance.logsPushUrl}
        headers:
          authorization: Basic {base64ConnectionString}
      otlp/observability:
        endpoint: {instance.otlpTracesUrl}
        headers:
          authorization: Basic {base64ConnectionString}
    service:
      pipelines:
        metrics:
          receivers: [otlp, spanmetrics]
          processors: [memory_limiter, filter/ottl, transform, batch]
          exporters: [prometheus, logging, prometheusremotewrite/observability]
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp, loki/observability]
        traces:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [otlp, logging, spanmetrics, otlp/observability]

For the service.pipelines section of the config, note that all the listed receivers, processors, and exporters are the chart defaults, except for the added Observability exporters. We are now ready to deploy the demo application.

Install the OpenTelemetry demo application with Helm, passing in the custom values.yaml for Observability.
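
If the open-telemetry chart repository has not been added to your local Helm installation yet, you can register it first (this assumes the upstream repository URL):

helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update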

helm install -n otel --create-namespace otel-demo open-telemetry/opentelemetry-demo -f values.yaml

This command installs a Helm release into the otel namespace, which it creates if it does not exist yet. The release comes from the open-telemetry Helm chart repository.

Additionally, we pass in the values.yaml file created in the previous step.
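
Before exploring the data, you can check that the demo workloads started successfully by listing the pods in the otel namespace:

kubectl get pods -n otel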

Open the Grafana Dashboard associated with your Observability instance and go to the “Explore” tab. You can select different data sources:

  • Thanos to access the metrics exposed by the demo application
  • Loki to access the logs shipped from the demo application
  • Tempo to access the traces recorded by the demo application

Take a look at the available data and play around with the options. Note that advanced interpretation and processing of telemetry data is not part of this tutorial.
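
If you need a starting point in the Thanos data source, a query such as the following shows the request rate per service derived from the span metrics (this assumes the demo exports them under the calls_total name; exact metric and label names can vary between demo versions):

sum by (service_name) (rate(calls_total[5m]))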

When you are done, you can delete the installation:

helm uninstall otel-demo -n otel
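
If you no longer need the namespace created during installation either, you can remove it as well:

kubectl delete namespace otel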