Send log data to Logs instance

STACKIT Logs supports log ingestion via the Grafana Loki API or OpenTelemetry logs over HTTP. The next sections show examples of both protocols with STACKIT Logs authentication configured.

First, collect the following information for STACKIT Logs:

  1. Visit the STACKIT Portal.
  2. On the sidebar click on Logs.
  3. Select the Logs instance you want to connect.
  4. On the sidebar click on API.
  5. Under Info, copy or note the URL from Data source.
  6. Under Access tokens, copy the value of a token with write permissions.

Now you can configure your client to send log data. Below are examples for some popular clients.
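Before configuring a client, you can verify the URL and token by pushing a single test line with curl against the Loki push API. This is a hedged sketch: the placeholder values stand for the data source URL and the write-enabled access token collected above.

```shell
# Placeholders: replace with the data source URL and a write-enabled token.
DATA_SOURCE_URL="<put-data-source-in-logs-instance>"
LOGS_TOKEN="<put-access-token-with-write-permissions>"

# One log line with a nanosecond timestamp, in the Loki push payload format.
TS="$(date +%s)000000000"
PAYLOAD='{"streams":[{"stream":{"job":"smoke-test"},"values":[["'"$TS"'","hello from curl"]]}]}'

curl -sS -X POST \
  -H "Authorization: Bearer $LOGS_TOKEN" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" \
  "$DATA_SOURCE_URL/loki/api/v1/push"
```

An HTTP 204 response indicates the line was accepted.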

Use Grafana Alloy to send data with Loki API

Grafana Alloy is a telemetry collector that supports collecting logs and exporting them to multiple destinations. It has built-in support for Loki as an exporter, which can be configured to target a STACKIT Logs instance.

Grafana Alloy can be configured with the following steps. The resulting configuration file is shown first; you can extend it according to your requirements.

config.alloy
local.file "stackit_logs_secret" {
  // File containing the STACKIT Logs access token
  filename  = "/var/secrets/stackit_logs.token"
  is_secret = true
}

loki.write "stackit_logs" {
  endpoint {
    // set the url of your STACKIT Logs instance
    url = "<put-data-source-in-logs-instance>/loki/api/v1/push"
    // reference the token file
    bearer_token = local.file.stackit_logs_secret.content
    // or read the token from an environment variable
    // bearer_token = sys.env("LOKI_ACCESS_TOKEN")
  }
}

// Example: read logs from a file and forward them to the loki.write receiver
loki.source.file "example_log_files" {
  targets = [{__path__ = "/logs/foo.txt", "color" = "pink"}]
  // reference the loki.write receiver
  forward_to = [loki.write.stackit_logs.receiver]
}
  1. Create or edit the Grafana Alloy configuration file, usually named config.alloy.
  2. You can provide the access token either via an environment variable or via a file containing the access token value for the STACKIT Logs instance.
  3. If the access token is in a file, add a local.file block with a name, e.g. stackit_logs_secret. In the block, set the file path and mark it as secret.
  4. Add a block of type loki.write and give it a name, for example "stackit_logs".
  5. In loki.write "stackit_logs", under endpoint, set the url key to the data source URL of your STACKIT Logs instance with /loki/api/v1/push appended.
  6. If the access token is in a file, reference the local.file content in bearer_token, e.g. local.file.stackit_logs_secret.content.
  7. If the access token is in an environment variable, reference sys.env() in bearer_token, e.g. sys.env("LOKI_ACCESS_TOKEN").
  8. TLS is used automatically because the endpoint URL uses the https scheme; no additional TLS keys are required.
  9. The Grafana Alloy loki.write is now configured to connect to the STACKIT Logs instance.
  10. You can further configure loki.write according to your needs. See the loki.write configuration reference for more details.
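With the configuration from the steps above in place, it can be loaded by pointing Alloy at the file. The token file path matches the local.file example; the config path is an example value.

```shell
# Store the access token where the local.file block expects it,
# readable only by the Alloy user (path from the example above).
install -m 600 /dev/null /var/secrets/stackit_logs.token
printf '%s' "<put-access-token-with-write-permissions>" > /var/secrets/stackit_logs.token

# Run Alloy with the configuration.
alloy run /path/to/config.alloy
```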

Use Fluent Bit to send data with Loki API

Fluent Bit is a popular agent for collecting logs and shipping them to multiple destinations. It has a built-in Loki output plugin, which can be configured to target a STACKIT Logs instance.

Fluent Bit can be configured with the following steps. The resulting configuration file is shown first; you can extend it according to your requirements.

fluent-bit.conf
[OUTPUT]
    name         loki
    host         <put-host-name-from-url-of-data-source-in-logs-instance>
    port         443
    bearer_token <Token value from Logs instance Access Token>
    tls          on
    tls.verify   on
  1. Create or edit the Fluent Bit configuration file, usually named fluent-bit.conf.
  2. Add an output with name loki so that it uses the Loki format.
  3. In the output, set the host key to the host from the data source URL of your STACKIT Logs instance.
  4. Set the port key to 443 (the plugin default is 3100).
  5. Set the bearer_token key to the access token for the STACKIT Logs instance. Make sure the token has write permissions.
  6. Enable TLS for the output by setting the tls and tls.verify keys to on.
  7. The Fluent Bit output is now configured to connect to the STACKIT Logs instance.
  8. You can further configure the output according to your needs. See the plugin configuration for more details.
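The [OUTPUT] section above only ships data; at least one input is needed to see it in action. A minimal pairing with the tail input might look like this (the file path and tag are example values, not required names):

```
[INPUT]
    name tail
    path /logs/foo.txt
    tag  example

[OUTPUT]
    name         loki
    match        example
    host         <put-host-name-from-url-of-data-source-in-logs-instance>
    port         443
    bearer_token <Token value from Logs instance Access Token>
    tls          on
    tls.verify   on
```

The match key routes records with the tag set by the input to this output.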

Use Vector to send data with Loki API

Vector is another popular observability tool that can collect logs and ship them to multiple destinations. It has built-in support for Loki as a sink, which can be configured to target a STACKIT Logs instance.

The Vector sink for Logs can be configured with the following steps. The resulting vector.yaml is shown first; you can extend it according to your requirements.

vector.yaml
sinks:
  stackit_logs_sink: # sink id
    type: loki
    endpoint: <put-data-source-in-logs-instance>
    auth:
      strategy: bearer # the bearer strategy sends the token as a bearer Authorization header
      token: <Token value from Logs instance Access Token>
    healthcheck:
      enabled: false
    encoding:
      codec: json
    labels:
      test_label: test_value
    # ...
  1. Create or edit the Vector configuration file, usually named vector.yaml.
  2. Add a new sink object under sinks with type: loki so that it uses the Loki format.
  3. In the sink, set the endpoint key to the data source URL of your STACKIT Logs instance.
  4. In the sink, set auth.strategy to bearer.
  5. Set the auth.token key to the access token for the STACKIT Logs instance. Make sure the token has write permissions.
  6. Disable the health check for Loki by setting healthcheck.enabled to false.
  7. Set JSON encoding for sending data with encoding.codec: json.
  8. Set appropriate labels under labels; they are attached to the data before it is sent.
  9. You can further configure the sink according to your needs. See the sink configuration for more details.
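Before starting Vector, the configuration from the steps above can be checked without sending any data (the path is an example):

```shell
# Validate the configuration, including the loki sink definition.
vector validate /etc/vector/vector.yaml

# Then run Vector with it.
vector --config /etc/vector/vector.yaml
```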

Use OpenTelemetry collector to send data with OTLP API

STACKIT Logs also supports ingesting OpenTelemetry logs over HTTP. You can configure the OpenTelemetry HTTP exporter to send log data to a STACKIT Logs instance. Additionally, the bearertokenauth extension is needed to enable STACKIT Logs access token authentication. Make sure these components come bundled in the OpenTelemetry Collector distribution you are using.

The OpenTelemetry Collector’s otlphttp exporter for Logs can be configured with the following steps. The resulting configuration file is shown first; you can extend it according to your requirements.

/etc/otelcol/config.yaml
# 1. Define the extension
extensions:
  bearertokenauth/stackit_logs_access_token:
    scheme: "Bearer"
    # either put the token directly here, or put the token value in a file
    # and reference that file via `filename` instead
    token: "<Token value from Logs instance Access Token>"
    # filename: "file-containing-logs-instance-access.token"

# 2. Define the otlphttp exporter
exporters:
  otlphttp:
    endpoint: https://<put-host-name-from-url-of-data-source-in-logs-instance>/otlp
    auth:
      authenticator: bearertokenauth/stackit_logs_access_token

# 3. Define the collector pipeline for logs
service:
  extensions:
    - bearertokenauth/stackit_logs_access_token # reference the extension that adds the STACKIT Logs access token
  pipelines:
    logs:
      exporters:
        - otlphttp # reference the STACKIT Logs exporter
      receivers: [...] # set the receivers according to your use case
      processors: [...] # set the processors according to your use case
  1. Create or edit the OpenTelemetry Collector configuration file, usually named config.yaml.
  2. Configure an extension of type bearertokenauth with scheme: Bearer. Either set the STACKIT Logs access token in the token field, or put it in a file and reference that file with the filename field.
  3. Configure an exporter of type otlphttp to set the endpoint and authentication.
  4. For the otlphttp endpoint key, set the data source URL of your STACKIT Logs instance with /otlp appended.
  5. For the otlphttp auth.authenticator key, set the name of the bearertokenauth extension configured above.
  6. Configure the service section to set up the extensions and the pipeline for logs.
  7. Under the service.extensions array, add the name of the bearertokenauth extension configured above.
  8. Under the service.pipelines.logs.exporters array, add the name of the otlphttp exporter configured above.
  9. You can further configure the otlphttp exporter according to your needs. See the exporter configuration for more details.
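As a quick check of the endpoint and token independent of the collector, a single OTLP/JSON log record can be sent with curl. This sketch assumes OTLP/HTTP log records are accepted on the standard /v1/logs path under the /otlp prefix shown in the exporter configuration; the placeholders are the same as above.

```shell
ENDPOINT="https://<put-host-name-from-url-of-data-source-in-logs-instance>/otlp"
LOGS_TOKEN="<Token value from Logs instance Access Token>"

# Minimal OTLP/JSON payload with one log record (nanosecond timestamp).
TS="$(date +%s)000000000"
PAYLOAD='{"resourceLogs":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"smoke-test"}}]},"scopeLogs":[{"logRecords":[{"timeUnixNano":"'"$TS"'","severityText":"INFO","body":{"stringValue":"hello from curl"}}]}]}]}'

curl -sS -X POST \
  -H "Authorization: Bearer $LOGS_TOKEN" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" \
  "$ENDPOINT/v1/logs"
```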