FAQ
We want to give our customers the information they need to get the most out of STACKIT Workflows. This FAQ answers common questions so you can quickly find solutions and improve your experience. Please check these FAQ before contacting our support team, as you might find your answer here.
General information
How can I use images from a private registry with Workflows?
Please open a support ticket for assistance. Important: For security reasons, do not include any credentials or secrets in your initial message. We will request this information through a secure channel if needed.
How do I increase the size of my Workflows cluster?
This question concerns the sizing of the Workflows infrastructure (web servers, schedulers, triggerers), not the maximum resource usage of DAGs (see below). Resizing is currently not possible via the STACKIT Portal. Please open a support ticket and describe your use case so that STACKIT Engineering can help you determine the right cluster size.
DAGs are failing with the error message "[...] forbidden: exceeded quota: airflow-worker-quota [...]". What to do?
The error message indicates that your DAGs are requesting more CPU and/or memory than allowed. Please open a support ticket and request an increase of your quota.
My Identity Provider is not supported. What to do?
STACKIT Workflows uses the OpenID Connect protocol to interact with Identity Providers. If your provider supports OpenID Connect but is not listed as a supported provider, please open a support ticket.
How can I use secrets from STACKIT Secrets Manager in my Workflows instance?
Currently there is no out-of-the-box integration between STACKIT Secrets Manager and STACKIT Workflows. However, you can still use Secrets Manager by integrating it yourself via the Secrets Manager REST API.
A common approach is to implement a custom vault client that calls the Secrets Manager API and use it inside your DAGs. The credentials required to access Secrets Manager can be stored securely as an Airflow Connection in your Workflows instance.
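A minimal sketch of such a client, using only the Python standard library. The base URL, the `v1/instances/…/secrets/…` path layout, and the bearer-token authentication are assumptions for illustration; check the Secrets Manager REST API documentation for the actual endpoints. In a DAG, the base URL and token would typically come from an Airflow Connection rather than being hard-coded.

```python
import json
import urllib.request


class SecretsManagerClient:
    """Tiny client for a secrets REST API.

    The endpoint paths and auth scheme here are assumptions; adapt them
    to the real Secrets Manager REST API.
    """

    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def _url(self, instance_id: str, secret_path: str) -> str:
        # Hypothetical path layout for illustration only.
        return f"{self.base_url}/v1/instances/{instance_id}/secrets/{secret_path}"

    def get_secret(self, instance_id: str, secret_path: str) -> dict:
        req = urllib.request.Request(
            self._url(instance_id, secret_path),
            headers={"Authorization": f"Bearer {self.token}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())
```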
The client can then be used in two typical ways:
1. Runtime retrieval in DAGs
The client fetches secrets directly while a DAG task instance is running, at the moment the data is processed.
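Sketched as a plain callable of the kind you would pass to a PythonOperator, with a hypothetical `fetch_secret` helper standing in for your custom vault client:

```python
def export_task(fetch_secret, secret_path: str) -> bool:
    """Task body for runtime retrieval: the credential is fetched only
    when the task runs and is never stored in the DAG file itself.

    `fetch_secret` is a hypothetical helper (e.g. your custom vault
    client) that reads one secret from the Secrets Manager REST API.
    """
    api_key = fetch_secret(secret_path)
    # ... call the external system with `api_key` while processing data ...
    return api_key is not None
```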
2. Synchronization DAG
A dedicated DAG periodically retrieves secrets from Secrets Manager and synchronizes them, for example by:
- creating Kubernetes Secrets, or
- creating Airflow Connections via the Airflow API
With the second approach, the synchronized secrets can then be used as parameters for operators within your workflows.
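For the Airflow Connection variant, the synchronization DAG can create connections through Airflow's stable REST API (`POST /api/v1/connections`). A sketch of building the request body, assuming the secret arrives as a flat dict with `host`/`user`/`password` keys (the connection type and secret layout are illustrative):

```python
import json


def connection_payload(conn_id: str, secret: dict) -> bytes:
    """Build a request body for Airflow's stable REST API
    (POST /api/v1/connections) from a secret fetched from Secrets Manager.

    The secret layout ({"host": ..., "user": ..., "password": ...}) and
    the connection type are assumptions for illustration.
    """
    return json.dumps(
        {
            "connection_id": conn_id,
            "conn_type": "postgres",  # example connection type
            "host": secret.get("host"),
            "login": secret.get("user"),
            "password": secret.get("password"),
        }
    ).encode()
```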
Creating the secret as a Kubernetes Secret could be implemented similarly to the following:

```python
from kubernetes import client, config
from kubernetes.client.rest import ApiException

# Inside a worker pod; use config.load_kube_config() when running locally.
config.load_incluster_config()
v1 = client.CoreV1Api()

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name=name, labels=labels or {}),
    type="Opaque",
    data=<encoded_secret>,
)
try:
    v1.create_namespaced_secret(namespace=namespace, body=secret)
except ApiException as e:
    if e.status == 409:
        # The secret already exists; update it instead.
        v1.patch_namespaced_secret(name=name, namespace=namespace, body=secret)
    else:
        raise
```

The secret can then be used in worker pods either by mounting it into the KubernetesPodOperator or by reading it using Python functions.
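The `data` field of a Kubernetes Secret expects base64-encoded strings, which is what the `<encoded_secret>` placeholder stands for. A small helper for plain string values, assuming a flat key/value mapping:

```python
import base64


def encode_secret_data(values: dict) -> dict:
    """Base64-encode plain string values for the `data` field of a V1Secret."""
    return {
        key: base64.b64encode(value.encode()).decode()
        for key, value in values.items()
    }
```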
Known issues
DAG import error persists after DAG deletion
This is a known Airflow issue. When a DAG enters the import-error state, that error state is not cleared when the DAG itself is no longer loaded. As a workaround, recreate a valid dummy DAG with the same name/path; this clears the error. Remove the dummy DAG afterwards.
Airflow task logs
When examining logs via the Workflows user interface, you might see an error message similar to the one below. You can safely ignore it: no logs are missing and the error is not caused by your code. This is a known issue and will be fixed in a future release.
```
[2025-05-26, 16:53:59 UTC] {pod_manager.py:505} ERROR - Reading of logs interrupted for container 'base'; will retry.
Traceback (most recent call last):
  File "/home/airflow/.local/lib/python3.12/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 459, in consume_logs
    for raw_line in logs:
                    ^^^^
  File "/home/airflow/.local/lib/python3.12/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 263, in __iter__
    for data_chunk in self.response.stream(amt=None, decode_content=True):
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/airflow/.local/lib/python3.12/site-packages/urllib3/response.py", line 1063, in stream
    yield from self.read_chunked(amt, decode_content=decode_content)
  File "/home/airflow/.local/lib/python3.12/site-packages/urllib3/response.py", line 1219, in read_chunked
    self._update_chunk_length()
  File "/home/airflow/.local/lib/python3.12/site-packages/urllib3/response.py", line 1149, in _update_chunk_length
    raise ProtocolError("Response ended prematurely") from None
urllib3.exceptions.ProtocolError: Response ended prematurely
```