Kubernetes Engine

STACKIT Kubernetes Engine (SKE) is a fully managed Kubernetes service that simplifies the deployment, scaling, and management of containerized applications.

STACKIT Kubernetes Engine offers a range of features to simplify application development and deployment:

  • Self-service cluster creation (Portal): Create Kubernetes clusters quickly and easily via the self-service user interface in the STACKIT Cloud Portal.
  • Infrastructure as code & automation: Provision and manage clusters using the Terraform provider, SKE API, or STACKIT CLI for repeatable automated workflows.
  • Managed control plane: The Kubernetes control plane of each cluster is fully managed and operated in a highly available setup.
  • Auto-updates: Kubernetes and node operating system versions are updated automatically to keep clusters up to date.
  • Automatic repair functions: Problems on the cluster are detected and repaired automatically.
  • Event-driven autoscaling: Pod and node autoscaling elastically adjusts the cluster to the current workload (see the sketch following this list).
  • Temporary cluster shutdown: Clusters can be shut down automatically when the application only needs to be available at certain times of day.

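As a rough illustration of the pod-autoscaling part, the sketch below uses the official Kubernetes Python client against a kubeconfig downloaded for an SKE cluster. The deployment name, namespace, and thresholds are placeholders rather than SKE defaults, and node autoscaling itself is typically configured through the node pool's minimum and maximum size rather than inside Kubernetes.

```python
# Minimal sketch: attach a HorizontalPodAutoscaler to an existing Deployment.
# Assumes a kubeconfig for the SKE cluster and a Deployment named "web"
# in the "default" namespace (both placeholders for illustration).
from kubernetes import client, config

config.load_kube_config()  # uses the SKE cluster's kubeconfig

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # scale out above 70% CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

Once applied, `kubectl get hpa` shows the current and target utilization and the number of replicas the autoscaler has chosen.
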
Popular use cases for STACKIT Kubernetes Engine include:

  • Migration of existing applications: Quickly and easily containerize existing applications and run them on SKE in the European cloud without having to worry about the underlying infrastructure.
  • Operation of cloud-native applications: Create new cloud-native applications in the form of microservices, utilizing the Kubernetes ecosystem for service meshes, serverless applications, and CI/CD pipelines.
  • Creation of stateful applications: Operate stateful applications on SKE clusters using persistent block storage (see the sketch following this list).
  • Development and testing environments: Set up isolated environments for development and testing to ensure consistency and reliability across different stages of application deployment.
  • Scalable web applications: Deploy web applications that can automatically scale based on demand, ensuring high availability and performance.
  • Machine learning workloads: Run machine learning models and data processing tasks efficiently using Kubernetes orchestration and resource management.
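
For the stateful-application case above, a minimal sketch of requesting persistent block storage with the Kubernetes Python client might look as follows. The claim name, size, and namespace are placeholders; omitting storage_class_name simply selects the cluster's default StorageClass.

```python
# Minimal sketch: request persistent block storage for a stateful workload.
# Names and sizes are placeholders for illustration; without an explicit
# storage_class_name the cluster's default StorageClass is used.
from kubernetes import client, config

config.load_kube_config()  # uses the SKE cluster's kubeconfig

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],  # block storage attaches to one node
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

The resulting volume can then be mounted into a StatefulSet or Deployment through a persistentVolumeClaim volume.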