Basics

The following defines some of the terms which are used in the context of STACKIT Kubernetes Engine (SKE):

A container is a standard executable package of software that includes code, runtime, configuration and system libraries so that it can run anywhere. At runtime, containers use their own isolated slice of operating system (OS) resources, such as CPU, RAM, disk and networking. Because containers do not need to include a guest OS in every instance like a virtual machine does, they are small, fast, and portable.

Containerization refers to the process of encapsulating an application with its relevant configuration files, environment variables, libraries and software dependencies into a container image. The image can then be run on a container platform.

Kubernetes is an open source container management system that is used to manage a distributed cluster of hosted containers. Kubernetes provides robust container cluster orchestration capabilities that include provisioning, health monitoring, resource allocation, scaling and load balancing, or failover.

A (Kubernetes) cluster is a set of nodes that run containerized applications.

A node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster.

A master node is a node which controls and manages the workload of the cluster. It consists of multiple components, such as the API Server, etcd, the Controller Manager and the Scheduler.

A worker node (or shortly, worker) is a machine where containers are deployed. Each worker node must run a container runtime (for example, containerd, CRI-O, Docker), as well as the Kubelet and Kube-proxy components needed by Kubernetes.

A worker group is a node pool consisting of worker nodes of the same configuration.

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod is a group of one or more containers which share storage and network resources. A Pod represents a process running on a cluster. Pods are ephemeral: if a Pod that is managed by a controller (for example, a Deployment) fails, Kubernetes automatically creates a replacement Pod to continue operations.
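As a minimal illustration of a Pod with a single container (the names and the image are placeholders, not part of SKE):

```yaml
# Minimal Pod manifest; apply with `kubectl apply -f pod.yaml`.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod           # placeholder name
spec:
  containers:
    - name: hello
      image: nginx:1.25     # any OCI container image
      ports:
        - containerPort: 80
```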

STACKIT Kubernetes Engine, or SKE for short, is the name of the Kubernetes service offering of STACKIT.

The SKE dashboard is the user interface of SKE. It is used to create, update and delete Kubernetes clusters, provides access to the cluster credentials (the kubeconfig), and supports administration tasks such as configuring maintenance windows or hibernation schedules.

SKE is a fully managed, scalable and certified Kubernetes service for the deployment and management of Kubernetes clusters and containerized applications. It delivers upstream conformant Kubernetes and makes it easy to deploy standard Kubernetes applications without any refactoring being required.

Kubernetes splits the control plane of an end-user cluster from its worker nodes. The actual end-user cluster consists only of the worker nodes, while its control plane runs in an SKE infrastructure cluster that cannot be accessed by end users. This makes end-user clusters very robust: accidental damage to the control plane is not possible, even though end users get administrative access to their clusters. It also allows effective management of day-2 operations, i.e. updates and optimizations of running end-user clusters.

  • Create clusters quickly and easily through a self-service with the SKE Dashboard.
  • Support of internet-facing clusters for public workloads.
  • Horizontal Pod and node autoscaling provides elasticity matching the actual workload.
  • Auto-update capabilities keep the Kubernetes and operating system versions up-to-date.
  • Auto-repair capabilities of the cluster reduce operational overhead.
  • Auto-hibernation of clusters allows effective cost reduction.
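The horizontal Pod autoscaling mentioned above is standard upstream Kubernetes behavior. A sketch of a HorizontalPodAutoscaler (the target Deployment name `my-app` is a placeholder) might look like this:

```yaml
# Scale a Deployment between 2 and 10 replicas, targeting 80% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # placeholder; an existing Deployment in the cluster
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```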

A cluster management fee is applied per running or hibernated cluster and started hour, independent of the cluster size and topology. The cluster management fee ends with the deletion of the cluster.

Actual resources used in the clusters (e.g. machine types, storage, network resources) are metered and priced according to their metrics.

The SKE SLA is measured according to the following Service Level Indicator (SLI):

  • Availability of the Kubernetes API server of SKE clusters - SLA: 99.9%.
  • The SLA for an SKE cluster is violated if the API server of the cluster cannot be accessed externally.

Excluded from SLA violations are:

  • Failures of Kubernetes Pods or Kubernetes nodes running in the cluster.

SKE provides several Kubernetes storage classes, ranging from performance class perf-0 to perf-6 (default is perf-1). For further information about storage classes, refer to the Block Storage service plans.
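A PersistentVolumeClaim selects one of these storage classes by name; this sketch uses the default class `perf-1` (the claim name and size are placeholders):

```yaml
# PersistentVolumeClaim against an SKE storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim          # placeholder name
spec:
  storageClassName: perf-1  # SKE performance classes range from perf-0 to perf-6
  accessModes:
    - ReadWriteOnce         # block storage: one node at a time (see below)
  resources:
    requests:
      storage: 10Gi
```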

The ske-csi driver integrates a Kubernetes cluster with the STACKIT Block Storage service. This enables dynamic provisioning of block storage for container applications in Kubernetes. Some applications, however, need to access the same files and application data across multiple Pods (containers).

The Block Storage service from STACKIT cannot bind a single block storage volume to multiple Pods (containers). It is therefore a ReadWriteOnce (RWO) solution, since the volume is confined to one node.

The Network File System (NFS) protocol, on the other hand, does support exporting the same share to many Pods (containers). This is called ReadWriteMany (RWX).
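Assuming an NFS-backed provisioner has been installed in the cluster (the storage class name `nfs-rwx` is hypothetical and not provided by SKE out of the box), a shared claim would be declared like this:

```yaml
# RWX claim sketch; requires an NFS provisioner serving the hypothetical class "nfs-rwx".
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data         # placeholder name
spec:
  storageClassName: nfs-rwx # hypothetical class backed by an NFS server
  accessModes:
    - ReadWriteMany         # many Pods, across nodes, can mount the same share
  resources:
    requests:
      storage: 50Gi
```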

The setup of such an NFS server can itself leverage the benefits of the STACKIT Block Storage service.