
Quotas and limits

The maximum number of running nodes in a cluster is limited to 1000, regardless of the configured size of your node pools.

This restriction is enforced by the cluster autoscaler, which uses the --max-nodes-total flag to set the upper limit for your cluster.

If you created your SKE Cluster in a SNA network, the maximum node count might be even lower due to limitations in the available address space of that network.

When specifying node pools for your SKE Cluster, the following restrictions apply (a validation sketch follows the list):

  • The maximum size of each individual node pool must always be 1000 or less.
  • The minimum size of each individual node pool must always be 1000 or less.
  • The total of all maximum sizes can be greater than 1000.
  • The total of all minimum sizes must always be 1000 or less.
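
The following Python sketch checks a proposed set of node pools against these restrictions. The function name, the (min, max) tuple representation, and the example pools are illustrative assumptions, not part of SKE or its API.

    def node_pool_limits_ok(pools, cluster_max=1000):
        # pools: list of (min_size, max_size) tuples, one per node pool
        # (hypothetical representation for this sketch).
        # Each pool's minimum and maximum must stay at or below the limit.
        bounds_ok = all(lo <= cluster_max and hi <= cluster_max for lo, hi in pools)
        # The sum of all minimum sizes must also stay at or below the limit.
        total_min_ok = sum(lo for lo, _ in pools) <= cluster_max
        # The sum of all maximum sizes MAY exceed the limit, so it is not
        # checked; the autoscaler simply stops adding nodes at cluster_max.
        return bounds_ok and total_min_ok

    # Three pools whose maximums sum to 1800 are still valid, because the
    # minimums sum to 300 and no individual bound exceeds 1000.
    print(node_pool_limits_ok([(100, 600), (100, 600), (100, 600)]))  # True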

Resource reservation

Until 21.07.2025, SKE reserves 1 GiB of memory and 80m of CPU per node, independent of node size.

After that date, to ensure the stability of your Kubernetes clusters, SKE reserves resources on all nodes of the cluster to provide memory and CPU for system components. The amount of reserved resources depends on the size of the respective node.

The following calculation applies.

Memory reservations

For memory resources, SKE reserves the following:

  • 25% of the first 4 GiB of memory
  • 20% of the next 4 GiB of memory
  • 10% of the next 8 GiB of memory
  • 6% of the next 112 GiB of memory
  • 2% of any memory above 128 GiB

SKE also reserves an additional 100 MiB of memory on every node to handle Pod evictions.
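
Written out, the tiered calculation looks like the following Python sketch. The helper name reserved_memory_gib is a hypothetical illustration, not part of any SKE tooling.

    def reserved_memory_gib(node_memory_gib):
        # (tier size in GiB, reservation rate), applied in order from the bottom.
        tiers = [(4, 0.25), (4, 0.20), (8, 0.10), (112, 0.06), (float("inf"), 0.02)]
        reserved = 0.0
        remaining = node_memory_gib
        for size, rate in tiers:
            portion = min(remaining, size)
            reserved += portion * rate
            remaining -= portion
            if remaining <= 0:
                break
        # Flat 100 MiB eviction reservation on every node.
        return reserved + 100 / 1024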

CPU reservations

For CPU resources, SKE reserves the following:

  • 3.5% of the first 2 cores
  • 0.5% of the next 2 cores
  • 0.25% of any cores above 4 cores
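
The CPU tiers follow the same pattern; a matching hypothetical sketch, returning millicores:

    def reserved_cpu_millicores(node_cores):
        # (tier size in cores, reservation rate), applied in order.
        tiers = [(2, 0.035), (2, 0.005), (float("inf"), 0.0025)]
        reserved = 0.0
        remaining = node_cores
        for size, rate in tiers:
            portion = min(remaining, size)
            reserved += portion * 1000 * rate  # 1 core = 1000 millicores
            remaining -= portion
            if remaining <= 0:
                break
        return reserved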

Example

For flavor g1.2 (8 GiB memory & 2 CPU):

SKE will reserve 1 GiB (25%) of the first 4 GiB and 800 MiB (20%) of the next 4 GiB. With the additional 100 MiB for the Pod eviction mechanism, SKE reserves roughly 1.9 GiB of memory in total.

Additionally, SKE will reserve 3.5% of the two CPU cores, i.e. 70m of CPU.
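
Running the sketches from the previous sections for flavor g1.2 reproduces these numbers:

    # g1.2: 8 GiB memory, 2 CPU cores (using the hypothetical helpers above).
    print(round(reserved_memory_gib(8), 2))  # 1.9  (1.0 GiB + 0.8 GiB + ~0.1 GiB)
    print(reserved_cpu_millicores(2))        # 70.0 (3.5% of 2000 millicores)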