
Migrating from Ingress to Gateway API

For years, the Kubernetes Ingress API has been the standard for managing external access to services in a cluster. However, as cloud-native networking becomes more complex, the ecosystem is shifting toward the Gateway API.

This guide explains the differences between Ingress and Gateway API, why the shift is happening, and how you can start using Gateway API on STACKIT Kubernetes Engine (SKE) today.

The Ingress resource was designed to provide a centralized way to manage external HTTP/HTTPS access. While it served the community well, it has notable limitations that the Gateway API aims to resolve.

  • Lack of standardization: Advanced features like traffic splitting, header manipulation, or authentication often rely on vendor-specific annotations (e.g., nginx.ingress.kubernetes.io/...). This makes configurations hard to port between different Ingress controllers (see the example after this list).
  • Limited protocol support: Ingress is designed primarily for HTTP/HTTPS (Layer 7). Supporting other protocols often requires non-standard workarounds.
  • Frozen status: The Ingress API is feature-frozen. While it is not being removed, new features and development are focused on the Gateway API.
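
For example, a canary-style traffic split with the NGINX Ingress Controller lives entirely in controller-specific annotations rather than in the Ingress spec itself. The sketch below assumes the ingress-nginx controller; the host and Service names are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-canary
  annotations:
    # NGINX-specific keys: other Ingress controllers ignore or reject them
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
  ingressClassName: nginx
  rules:
    - host: echo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo-canary-service
                port:
                  number: 5678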

The Gateway API is the next-generation specification for Kubernetes networking. It introduces a resource model that separates responsibilities and standardizes advanced traffic management.

  • Role-oriented: It splits definitions into GatewayClass (infrastructure), Gateway (entry point), and Routes (app routing). This allows platform engineers to manage the load balancers while developers manage their own routes.
  • Portable: Features like request mirroring, traffic weighting, and header matching are part of the core API, reducing reliance on custom annotations (see the HTTPRoute sketch after this list).
  • Multi-protocol: Native support for HTTP (HTTPRoute), gRPC (GRPCRoute), TCP (TCPRoute), and more.
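
As an example of this portability, a weighted traffic split is expressed directly in an HTTPRoute. This is a minimal sketch; the Gateway and Service names (my-gateway, echo-v1, echo-v2) are placeholders:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo-split
spec:
  parentRefs:
    - name: my-gateway
  rules:
    - backendRefs:
        # Weights are part of the core API, no vendor-specific annotations required
        - name: echo-v1
          port: 5678
          weight: 90
        - name: echo-v2
          port: 5678
          weight: 10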

STACKIT Kubernetes Engine (SKE) supports standard Kubernetes networking concepts. While SKE does not yet provide a native GatewayClass that automatically provisions STACKIT infrastructure from a Gateway resource, you can easily adopt Gateway API using the Bring Your Own Controller (BYOC) pattern.

On SKE, you can deploy a Gateway API-compliant controller (such as Envoy Gateway, Traefik, or Contour). This controller acts as the data plane. Here’s how it integrates with the SKE infrastructure:

  1. The Controller: You install the controller in your cluster.

  2. The Service: The controller creates a Kubernetes Service of type LoadBalancer.

  3. The Infrastructure: SKE provisions a STACKIT Network Load Balancer (TCP/UDP) with a public IP that forwards traffic to your controller.

  4. The Routing: The controller handles the Layer 7 routing defined in your HTTPRoute resources.

This tutorial walks you through setting up Envoy Gateway on an SKE cluster to demonstrate the Gateway API. You will need:

  • Access to an SKE cluster (created via Portal or API).
  • kubectl installed and configured with your cluster’s kubeconfig.
  • helm installed.

First, install the standard Gateway API Custom Resource Definitions (CRDs) into your cluster.

kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/latest/download/standard-install.yaml
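
You can optionally confirm that the CRDs are registered before continuing; the output should list resources such as gatewayclasses, gateways, and httproutes:

kubectl api-resources --api-group=gateway.networking.k8s.io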

Use Helm to install the Envoy Gateway controller. This controller will watch for Gateway API resources and configure the underlying Envoy proxies.

helm install eg oci://docker.io/envoyproxy/gateway-helm --namespace envoy-gateway-system --create-namespace

Wait for the Envoy Gateway controller deployment to reach a ready state:

kubectl rollout status deployment/envoy-gateway -n envoy-gateway-system --timeout=90s

You can verify that the deployment is successfully running by checking the pods in the system namespace:

kubectl get pods -n envoy-gateway-system

Define a GatewayClass to tell Kubernetes that Envoy Gateway manages these resources. Then, create a Gateway to request an entry point with an HTTP listener on port 80.

Create a file named gateway.yaml:

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
  namespace: default
spec:
  gatewayClassName: eg
  listeners:
    - name: http
      protocol: HTTP
      port: 80

Apply the configuration:

kubectl apply -f gateway.yaml

It may take a few moments for the STACKIT Network Load Balancer to provision. You can watch the Gateway status to see when the IP is assigned:

kubectl get gateway my-gateway

Once provisioned, you will see an IP address in the ADDRESS column.
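
This address belongs to the LoadBalancer Service that Envoy Gateway creates for the Gateway (steps 2 and 3 of the integration flow above). You can inspect that Service as well; its exact name is generated by the controller, so look for the entry of type LoadBalancer whose EXTERNAL-IP should match the Gateway's address:

kubectl get svc -n envoy-gateway-system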

Now, deploy a sample application and use an HTTPRoute to direct traffic to it.

Create a file named app-route.yaml:

# Sample Application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: hashicorp/http-echo
          args:
            - "-text=Hello from Gateway API on SKE!"
---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: default
spec:
  selector:
    app: echo
  ports:
    - port: 5678
---
# HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo-route
  namespace: default
spec:
  parentRefs:
    - name: my-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: echo-service
          port: 5678

Apply the file:

kubectl apply -f app-route.yaml

You can now access your application using the public IP assigned to the Gateway in the previous step: http://<YOUR_GATEWAY_IP>/
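
For example, with curl:

curl http://<YOUR_GATEWAY_IP>/

The echo application should respond with the text configured in the Deployment: Hello from Gateway API on SKE!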

SKE allows you to vertically scale the underlying STACKIT Network Load Balancer using annotations. If your Gateway handles high throughput, you can adjust the service plan of the LoadBalancer service created by your controller.

You can typically apply these annotations to the Service generated by your Gateway controller; check your controller's documentation on how to propagate Service annotations (see the Envoy Gateway sketch below).

If you require an internal Gateway (not exposed to the internet), you can use the internal load balancer annotation:

annotations:
  lb.stackit.cloud/internal-lb: "true"
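
How the annotation reaches the generated Service depends on your controller. With Envoy Gateway, for example, Service annotations can be set through an EnvoyProxy configuration referenced from the GatewayClass. The following is a minimal sketch under that assumption; the resource name internal-lb-config is arbitrary, and the same mechanism can carry the scaling annotations mentioned above (check the Envoy Gateway documentation for the exact API version and fields):

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: internal-lb-config
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        annotations:
          # Annotation taken from the example above; propagated to the
          # LoadBalancer Service that fronts the Gateway
          lb.stackit.cloud/internal-lb: "true"
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: internal-lb-config
    namespace: envoy-gateway-system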

Take a look at the SKE load balancing documentation for more details.

Adopting Gateway API on SKE today via a modern controller allows you to future-proof your network architecture while leveraging the robust infrastructure of STACKIT Network Load Balancers. Here’s a summary of the key differences between Ingress and Gateway API: