How to push logs to Observability

First, you have to create logs from your operating system, service, or application. Any logging library can be used. The logs can be handed over in two ways:

  • written to a log file, or
  • written to standard out.
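
For example, a minimal stand-in for an application log (a hypothetical JSON line that matches the parsers used later in this guide) could be produced like this:

Terminal window
$ mkdir -p ./logs
$ echo '{"level":"info","ts":"2024-01-01T12:00:00.000","caller":"main.go:42","msg":"hello"}' >> ./logs/app.log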

Once the logs exist, they have to be collected. We recommend using Grafana Promtail or Fluent Bit with the Loki plugin. Below you can find example configurations for these tools.

In this example, an app writes logs to its internal directory /app/logs.

We mount in a volume, ./logs, on the local file system.

Since Promtail needs access to the log files, this shared volume is required.

In the Promtail container, we mount the logs into its logging directory /var/log and configure Promtail to scrape all files that end with *.log and push them to the Observability Loki instance under the given URL.

Promtail configuration

server:
  http_listen_port: 9080
  grpc_listen_port: 0
  log_level: "debug"

positions:
  filename: /tmp/positions.yaml

clients:
  # To get the URL, make a GET request to the Observability API with the path /v1/projects/{projectId}/instances/{instanceId}. The URL can be found in the "logsPushUrl" key of the "instance" dictionary in the response.
  - url: https://[username]:[password]@logs.stackit.argus.eu01.stackit.cloud/instances/[instanceId]/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*.log
          app: app
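
As mentioned in the comment above, the push URL can be read from the Observability API. A sketch of that call (the API host and the authentication header are placeholders, adjust them to your setup):

Terminal window
# hypothetical call: GET /v1/projects/{projectId}/instances/{instanceId}, then extract the push URL
$ curl -s -H "Authorization: Bearer $TOKEN" \
    "https://<observability-api-host>/v1/projects/<projectId>/instances/<instanceId>" | jq -r '.instance.logsPushUrl'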

Docker Compose

version: "3"
services:
app:
build:
context:./app
dockerfile:./docker/local/Dockerfile
image: app
container_name: app
privileged: true
volumes:
-./logs:/app/logs
promtail:
image: grafana/promtail:2.4.1
volumes:
-./logs:/var/log
-./config::/etc/promtail/
ports:
- "9080:9080"
command: -config.file=/etc/promtail/promtail.yaml
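
To try this out, start the stack and check that Promtail has picked up the files; Promtail serves an overview of its scrape targets on the HTTP port configured above:

Terminal window
$ docker compose up -d
$ curl -s http://localhost:9080/targets   # lists the files Promtail is currently tailing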

In the second example, we want to scrape container logs and use Fluent Bit for that (Promtail could also be used).

Fluent Bit config

[SERVICE]
    Flush 5
    Daemon Off
    Log_Level debug
    Parsers_File /fluent-bit/etc/parsers.conf # Fluent Bit needs a parser; this should point to the parsers file

[INPUT]
    Name forward
    Listen 0.0.0.0
    Port 24224

[FILTER]
    Name parser
    Match *
    Parser docker
    Key_name log

[OUTPUT]
    Name loki
    Match *
    Tls on
    Host logs.stackit[cluster].argus.eu01.stackit.cloud
    Uri /instances/[instanceId]/loki/api/v1/push
    Port 443
    Labels job=fluent-bit,env=${FLUENT_ENV}
    Http_User $FLUENT_USER
    Http_Passwd $FLUENT_PASS
    Line_format json
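
Since the forward input listens on port 24224, any container started with Docker's fluentd log driver can be used to test the pipeline, for example:

Terminal window
$ docker run --rm --log-driver=fluentd --log-opt fluentd-address=localhost:24224 \
    alpine echo '{"level":"info","msg":"test"}'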

Parsers.conf

[PARSER]
    Name docker
    Format json
    Time_Key time
    Time_Format %Y-%m-%dT%H:%M:%S.%L
    Time_Keep On
    Decode_Field_As json log log
    Decode_Field_As json level
    Decode_Field_As json ts
    Decode_Field_As json caller
    Decode_Field_As json msg msg
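
For reference, the records arriving from the fluentd log driver look roughly like this (hypothetical example); the parser then decodes the JSON payload nested in the log field:

{"container_id":"86ab...","container_name":"/app","source":"stdout","log":"{\"level\":\"info\",\"ts\":\"2024-01-01T12:00:00.000\",\"caller\":\"main.go:42\",\"msg\":\"hello\"}"}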

docker-compose.yaml

version: "3"
services:
fluentbit:
image: grafana/fluent-bit-plugin-loki:2.4.1-amd64
container_name: fluentbit_python_local
volumes:
-./logging:/fluent-bit/etc # logging directory contains parsers.conf and fluent-bit.conf
ports:
- "24224:24224"
- "24224:24224/udp"
app:
build:
context:./app
dockerfile:./docker/local/Dockerfile
image: app
container_name: app
privileged: true
volumes:
-./:/app
ports:
- "3000:3000"
command: sh -c 'air'
logging:
driver: fluentd # to make fluentbit work with docker this driver is needed
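
To verify the pipeline end to end, bring the stack up and watch the Fluent Bit container; with Log_Level debug, each push to Loki shows up in its output:

Terminal window
$ docker compose up -d
$ docker compose logs -f fluentbit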

Install Fluent Bit for Loki inside the customer's Kubernetes cluster

We want to send the logs of one or more applications to Loki. For that, Fluent Bit has to be installed in the customer cluster, with its output pointed at the customer's Observability instance.

It is possible to send logs to multiple STACKIT Observability instances; the URL of each Fluent Bit output plugin then has to contain the respective instance ID. Please check your Fluent Bit version: versions older than 2.2.2 do not support URLs with a non-standard path.
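
The running version can be checked directly from the binary, or later from a pod of the DaemonSet shown below:

Terminal window
$ fluent-bit --version
$ kubectl exec -n kube-logging daemonset/fluent-bit -- /fluent-bit/bin/fluent-bit --version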

Deploy the following files to your application cluster in a separate namespace:
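
For example, once the manifests below are saved locally, they can be applied in order:

Terminal window
$ kubectl apply -f namespace.yaml
$ kubectl apply -f service-account.yaml
$ kubectl apply -f role.yaml
$ kubectl apply -f role-binding.yaml
$ kubectl apply -f configmap.yaml
$ kubectl apply -f daemonset.yaml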

  • create a namespace
    namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging
  • create a serviceaccount
    service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: kube-logging
  • create a cluster role
    role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-read
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
    verbs: ["get", "list", "watch"]
  • create a role binding
    role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit-read
subjects:
  - kind: ServiceAccount
    name: fluent-bit
    namespace: kube-logging
  • create fluent-bit daemonset
    daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: kube-logging
  labels:
    app.kubernetes.io/name: fluent-bit
    app.kubernetes.io/instance: fluent-bit-loki
    app.kubernetes.io/version: "2.2.2"
spec:
  selector:
    matchLabels:
      k8s-app: fluent-bit-logging
  template:
    metadata:
      labels:
        k8s-app: fluent-bit-logging
    spec:
      containers:
        - name: fluent-bit
          image: "fluent/fluent-bit:latest"
          imagePullPolicy: Always
          command:
            - /fluent-bit/bin/fluent-bit
          args:
            - --workdir=/fluent-bit/etc
            - --config=/fluent-bit/etc/conf/fluent-bit.conf
          ports:
            - name: http
              containerPort: 2020
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /api/v1/health
              port: http
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: journal
              mountPath: /journal
              readOnly: true
            - name: fluent-bit-config
              mountPath: /fluent-bit/etc/conf
      terminationGracePeriodSeconds: 10
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: journal
          hostPath:
            path: /var/log/journal
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: fluent-bit-config
          configMap:
            name: fluent-bit-config
      serviceAccountName: fluent-bit
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
  • create a configmap containing all fluent-bit config files.
    Create credentials with the following steps:
    - open https://portal.stackit.cloud/,
    - select a project,
    - open Services in the Overview section,
    - if necessary, create an Observability service,
    - click on the instance name,
    - click on Create credentials in the Credentials section,
    - name the credentials,
    - save the credentials using the Copy JSON button,
    - the following fields are required for the next step:
    logsPushUrl:
https://logs.stackit[cluster].argus.eu01.stackit.cloud/instances/[instanceId]/loki/api/v1/push

username: $FLUENT_USER
password: $FLUENT_PASS

You need to customize the following fields in the OUTPUT section of fluent-bit.conf in the configmap.yaml file:
host: logs.stackit[cluster].argus.eu01.stackit.cloud
uri: /instances/[instanceId]/loki/api/v1/push
http_user: $FLUENT_USER
http_passwd: $FLUENT_PASS
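
Note that $FLUENT_USER and $FLUENT_PASS stand for the actual username and password from the credentials JSON. If you prefer not to write them into the ConfigMap literally, one option (a sketch, not part of the original setup) is to keep them in a Secret and inject them into the DaemonSet container as environment variables; Fluent Bit expands environment variables with the ${VAR} syntax, so the config would then reference ${FLUENT_USER} and ${FLUENT_PASS}:

apiVersion: v1
kind: Secret
metadata:
  name: observability-credentials # hypothetical name
  namespace: kube-logging
stringData:
  FLUENT_USER: <username from the credentials JSON>
  FLUENT_PASS: <password from the credentials JSON>

The DaemonSet container would additionally need an envFrom entry with a secretRef pointing to this Secret.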

configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: kube-logging
  labels:
    k8s-app: fluent-bit
data:
  fluent-bit.conf: |
    [SERVICE]
        Daemon Off
        Flush 1
        Log_Level info
        Parsers_File /fluent-bit/etc/parsers.conf
        Parsers_File /fluent-bit/etc/conf/custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port 2020
        Health_Check On

    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        Parser cri
        Tag kube.*
        Mem_Buf_Limit 5MB
        Skip_Long_Lines On

    [INPUT]
        Name systemd
        Tag host.*
        Systemd_Filter _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail On

    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        K8S-Logging.Parser On
        K8S-Logging.Exclude On

    [OUTPUT]
        name loki
        match *
        host logs.stackit[cluster].argus.eu01.stackit.cloud
        uri /instances/[instanceId]/loki/api/v1/push
        port 443
        http_user $FLUENT_USER
        http_passwd $FLUENT_PASS
        tls on
        tls.verify on
        line_format json
        labels job=fluent-bit
        label_map_path /fluent-bit/etc/conf/labelmap.json

  parsers.conf: |
    # CRI Parser
    [PARSER]
        # http://rubular.com/r/tjUt3Awgg4
        Name cri
        Format regex
        Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z

  custom_parsers.conf: |
    [PARSER]
        Name docker
        Format json
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep On
        Decode_Field_As json log log
        Decode_Field_As json level
        Decode_Field_As json ts
        Decode_Field_As json caller
        Decode_Field_As json msg msg

  labelmap.json: |-
    {
      "kubernetes": {
        "container_name": "container",
        "host": "node",
        "labels": {
          "app": "app",
          "release": "release"
        },
        "namespace_name": "namespace",
        "pod_name": "instance"
      },
      "stream": "stream"
    }
  • create a small test application to produce logs
    First, we need a volume for the nginx test application.
    mypvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi # Request 1 gigabyte of storage
  • now we are able to deploy an nginx webserver
    nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          resources:
            limits:
              memory: "256Mi" # Maximum memory allowed
              cpu: "200m" # Maximum CPU allowed (200 milliCPU)
            requests:
              memory: "128Mi" # Initial memory request
              cpu: "100m" # Initial CPU request
          livenessProbe:
            httpGet:
              path: / # The path to check for the liveness probe
              port: 80 # The port to check on
            initialDelaySeconds: 15 # Wait this many seconds before starting the probe
            periodSeconds: 10 # Check the probe every 10 seconds
          readinessProbe:
            httpGet:
              path: / # The path to check for the readiness probe
              port: 80 # The port to check on
            initialDelaySeconds: 5 # Wait this many seconds before starting the probe
            periodSeconds: 5 # Check the probe every 5 seconds
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html # hypothetical mount path; the volume was declared but not mounted in the original
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-pvc # Name of the Persistent Volume Claim
  • Check pods after deployment
Terminal window
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   6d
kube-logging      Active   4d3h
kube-node-lease   Active   6d
kube-public       Active   6d
kube-system       Active   6d
$ kubectl get pods -n kube-logging
NAME               READY   STATUS    RESTARTS   AGE
fluent-bit-c9b8d   1/1     Running   0          23h
$ kubectl get pods -n default
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-58fc999d7b-56jrq   1/1     Running   0          9m2s
nginx-deployment-58fc999d7b-5gkwz   1/1     Running   0          9m2s
nginx-deployment-58fc999d7b-gxbfr   1/1     Running   0          9m2s

Now you can open Grafana and check the logs with the Loki datasource.
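
Based on the labels configured above (the static job label and the label map), a LogQL query like the following should show the nginx container logs in Grafana's Explore view:

{job="fluent-bit", namespace="default", app="nginx"}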