Node pools

This tutorial guides you through modifying the worker nodes of an existing Kubernetes cluster.

Nodes form the underlying compute infrastructure for your containerized workloads and are mandatory for your Kubernetes cluster to function. SKE provides the option to individually configure so-called Node Pools for your cluster. A node pool describes a group of virtual machines (“nodes”) with the same configuration settings, for example, the same number of CPU cores and the same amount of memory.

Add a new node pool to a Kubernetes cluster

To scale a Kubernetes cluster horizontally in response to changing workload demand, worker nodes must be added to or removed from the cluster. To change the node pools of a cluster, open the Kubernetes section in the STACKIT Cloud Portal and select the cluster in the Cluster panel. From the cluster overview page, go to the Node pool section and click the Create node pool button.

First, specify a name for this set of nodes. Additionally, you can choose multiple availability zones (AZs) your nodes will run in. Currently, SKE provides the option to choose between three single AZs and one metro AZ. To find out more about the STACKIT topology, please take a look at this article.

In SKE, a cluster autoscaler runs for every cluster. It automatically adds nodes to and removes nodes from your cluster based on cluster utilization; therefore, you specify a minimum and a maximum number of nodes. You can also define a maximum surge, which takes effect during updates of your nodes: it defines how many new nodes are created concurrently during an OS or Kubernetes update.

When configuring multiple AZs, the values for minimum, maximum, maxSurge, and maxUnavailable are distributed across the zones. It is therefore highly recommended to use multiples of the number of configured AZs to ensure an even distribution.
Rollouts are carried out independently for each zone, i.e., if, for example, the operating system version is updated, the nodes of each zone are rolled out in parallel.
For example, if you create a node pool with three AZs, it is advisable to set minimum, maximum, and maxSurge to multiples of three, for example, minimum to 3, maximum to 9, and maxSurge to 3.
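The effect of choosing multiples of the zone count can be sketched as follows. This is a minimal illustration; how SKE actually distributes values that are not multiples of the zone count is an assumption here, and the function name is hypothetical:

```python
def per_zone(value, zones):
    """Split a pool-level value across availability zones.

    The remainder handling (earlier zones get one extra) is an assumption
    for illustration; the point is that multiples of the zone count
    divide evenly.
    """
    base, rem = divmod(value, zones)
    return [base + 1 if i < rem else base for i in range(zones)]

# A three-zone pool with minimum=3, maximum=9, maxSurge=3 divides evenly:
print(per_zone(3, 3), per_zone(9, 3))   # each zone gets 1 / 3 nodes
# A non-multiple such as maximum=10 would leave the zones uneven:
print(per_zone(10, 3))
```

With uneven values, one zone ends up with more capacity than the others, which is why multiples of the zone count are recommended.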

Next, define an operating system and its version. Additionally, a container runtime needs to be selected. Currently, only containerd is available for selection.

Define the compute resources of your virtual machines in the Machine Type panel. Only machine types with at least 2 CPU cores can be selected as Kubernetes worker nodes. Define the volume type and volume size that will be used for your virtual machines. Each virtual machine will have one of these volumes, storing the operating system, your container images, and ephemeral storage. You can see the performance of the volume types in the Service Details - Block Storage. We do not recommend using Performance Class 0, as downloading and running containers on such a machine is very slow.
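The settings from the steps above can be summarized in one place. The following is an illustrative sketch only; the key names are hypothetical and do not reflect the SKE API schema:

```python
# Illustrative summary of one node-pool configuration.
# Key names are hypothetical, not the SKE API schema.
node_pool = {
    "name": "pool-1",                        # chosen name for this set of nodes
    "availability_zones": ["z1", "z2", "z3"],
    "minimum": 3,                            # autoscaler lower bound
    "maximum": 9,                            # autoscaler upper bound
    "max_surge": 3,                          # new nodes created concurrently on update
    "container_runtime": "containerd",       # currently the only selectable runtime
    "machine_cpu_cores": 2,                  # worker nodes need at least 2 cores
    "volume_size_gb": 50,                    # holds OS, container images, ephemeral storage
}

# Sanity checks mirroring the constraints described in the text:
assert node_pool["machine_cpu_cores"] >= 2
assert node_pool["maximum"] % len(node_pool["availability_zones"]) == 0
assert node_pool["container_runtime"] == "containerd"
```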

Lastly, click on Order fee-based to complete the configuration process. A confirmation is shown in the top right corner.

Delete a node pool of a Kubernetes cluster

To remove a node pool from your SKE cluster, open the context menu in the node pool section and select Delete. This triggers a request to SKE; deleting the node pool takes approximately five minutes.

It is also possible to change the settings of a node pool afterwards. In this case, all nodes of the node pool undergo a rolling update: new nodes join the cluster gradually, so no downtime occurs.
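This zero-downtime behaviour can be sketched as a surge-based replacement loop. The following is a minimal illustration under the assumption that replacements join before old nodes are removed, not SKE's exact algorithm:

```python
def rolling_update(node_count, max_surge):
    """Sketch of a surge-based rolling update: replacement nodes join
    the cluster before the old ones are removed, so the pool never
    drops below its original size."""
    old, new, history = node_count, 0, []
    while old > 0:
        surge = min(max_surge, old)
        new += surge            # new nodes join the cluster first
        old -= surge            # then the replaced nodes are drained and removed
        history.append((old, new))
    return history

print(rolling_update(3, 1))  # three nodes replaced one at a time
```

At every step the total node count (old plus new) is at least the original pool size, which is what makes the update disruption-free.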

To edit a node pool, click on the desired cluster in the Kubernetes section. Next, click on Node Pools on the left side of your screen and choose the node pool you want to adjust. This opens an overview page for the node pool where you can adapt its configuration.