# Migrating from Ingress to Gateway API
For years, the Kubernetes Ingress API has been the standard for managing external access to services in a cluster. However, as cloud-native networking becomes more complex, the ecosystem is shifting toward the Gateway API.
This guide explains the differences between Ingress and Gateway API, why the shift is happening, and how you can start using Gateway API on STACKIT Kubernetes Engine (SKE) today.
## Why the shift away from Ingress?

The Ingress resource was designed to provide a centralized way to manage external HTTP/HTTPS access. While it served the community well, it has notable limitations that the Gateway API aims to resolve.
### Limitations of the Ingress API

- Lack of standardization: Advanced features like traffic splitting, header manipulation, or authentication often rely on vendor-specific annotations (e.g., `nginx.ingress.kubernetes.io/...`). This makes configurations hard to port between different Ingress controllers (see the example after this list).
- Limited protocol support: Ingress is designed primarily for HTTP/HTTPS (Layer 7). Supporting other protocols often requires non-standard workarounds.
- Frozen status: The Ingress API is feature-frozen. While it is not being removed, new features and development effort are focused on the Gateway API.
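To make the first point concrete, here is an illustrative (not SKE-specific) Ingress that implements a canary traffic split with ingress-nginx. The split lives entirely in controller-specific annotations rather than in the Ingress spec itself, and the host and Service names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-canary
  annotations:
    # ingress-nginx specific: other controllers use different annotations
    # or do not support canary splitting at all.
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"  # send 10% of traffic here
spec:
  ingressClassName: nginx
  rules:
    - host: echo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo-service-v2
                port:
                  number: 5678
```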
### Advantages of the Gateway API

The Gateway API is the next-generation specification for Kubernetes networking. It introduces a resource model that separates responsibilities and standardizes advanced traffic management.

- Role-oriented: It splits definitions into `GatewayClass` (infrastructure), `Gateway` (entry point), and Route resources (app routing). This allows platform engineers to manage the load balancers while developers manage their own routes.
- Portable: Features like request mirroring, traffic weighting, and header matching are part of the core API, reducing reliance on custom annotations (see the sketch after this list).
- Multi-protocol: Native support for HTTP (`HTTPRoute`), gRPC (`GRPCRoute`), TCP (`TCPRoute`), and more.
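For illustration, here is a minimal `HTTPRoute` sketch that combines header matching with weighted traffic splitting using only core Gateway API fields. The gateway and backend names, the `x-canary` header, and the 90/10 split are placeholder assumptions:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: canary-route
spec:
  parentRefs:
    - name: my-gateway        # the Gateway this route attaches to (placeholder)
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
          headers:
            - name: x-canary  # only requests carrying this header match the rule
              value: "true"
      backendRefs:
        - name: app-v1        # stable backend receives 90% of matching traffic
          port: 8080
          weight: 90
        - name: app-v2        # canary backend receives 10%
          port: 8080
          weight: 10
```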
## Gateway API on SKE

STACKIT Kubernetes Engine (SKE) supports standard Kubernetes networking concepts. While SKE does not yet provide a native GatewayClass that automatically provisions STACKIT infrastructure from a Gateway resource, you can easily adopt the Gateway API using the Bring Your Own Controller (BYOC) pattern.
### How it works

On SKE, you can deploy a Gateway API-compliant controller (such as Envoy Gateway, Traefik, or Contour). This controller acts as the data plane. Here’s how it integrates with the SKE infrastructure:

1. The Controller: You install the controller in your cluster.
2. The Service: The controller creates a Kubernetes Service of type `LoadBalancer`.
3. The Infrastructure: SKE provisions a STACKIT Network Load Balancer (TCP/UDP) with a public IP that forwards traffic to your controller (see the quick check after this list).
4. The Routing: The controller handles the Layer 7 routing defined in your `HTTPRoute` resources.
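With Envoy Gateway (used in the walkthrough below), you can observe steps 2 and 3 directly: the controller-managed `LoadBalancer` Service shows the public IP of the STACKIT Network Load Balancer once provisioning completes. The namespace below is an assumption based on the default Envoy Gateway installation:

```bash
# The EXTERNAL-IP column of the generated Service is the public IP of the
# STACKIT Network Load Balancer that SKE provisions for it.
kubectl get svc -n envoy-gateway-system
```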
## Deploying Envoy Gateway on SKE

This tutorial walks you through setting up Envoy Gateway on an SKE cluster to demonstrate the Gateway API.
### Prerequisites

- Access to an SKE cluster (created via Portal or API).
- `kubectl` installed and configured with your cluster’s kubeconfig.
- `helm` installed.
### 1. Install Gateway API CRDs

First, install the standard Gateway API Custom Resource Definitions (CRDs) into your cluster.
```bash
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/latest/download/standard-install.yaml
```
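Optionally, confirm that the core CRDs were registered before continuing; the resource names below are the ones shipped in the standard install channel:

```bash
# The standard channel installs CRDs for GatewayClass, Gateway, HTTPRoute, and ReferenceGrant.
kubectl get crd gatewayclasses.gateway.networking.k8s.io \
  gateways.gateway.networking.k8s.io \
  httproutes.gateway.networking.k8s.io
```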
### 2. Install Envoy Gateway

Use Helm to install the Envoy Gateway controller. This controller will watch for Gateway API resources and configure the underlying Envoy proxies.
```bash
helm install eg oci://docker.io/envoyproxy/gateway-helm --namespace envoy-gateway-system --create-namespace
```

Wait for the Envoy Gateway controller deployment to reach a ready state:
```bash
kubectl rollout status deployment/envoy-gateway -n envoy-gateway-system --timeout=90s
```

You can verify that the deployment is successfully running by checking the pods in the system namespace:
```bash
kubectl get pods -n envoy-gateway-system
```
### 3. Create a GatewayClass and Gateway

Define the GatewayClass to tell Kubernetes that Envoy Gateway should manage these resources. Then, create a Gateway to request a listening entry point.

Create a file named `gateway.yaml`:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
  namespace: default
spec:
  gatewayClassName: eg
  listeners:
    - name: http
      protocol: HTTP
      port: 80
```

Apply the configuration:
```bash
kubectl apply -f gateway.yaml
```
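Optionally, wait for Envoy Gateway to accept and program these resources before checking for an address. `Accepted` and `Programmed` are standard Gateway API status conditions, so this check should work with any conformant controller:

```bash
# The GatewayClass is accepted once Envoy Gateway recognizes its controllerName;
# the Gateway is programmed once the data-plane Service and Deployment exist.
kubectl wait --for=condition=Accepted gatewayclass/eg --timeout=60s
kubectl wait --for=condition=Programmed gateway/my-gateway --timeout=120s
```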
### 4. Retrieve the public IP

It may take a few moments for the STACKIT Network Load Balancer to provision. You can watch the Gateway status to see when the IP is assigned:
```bash
kubectl get gateway my-gateway
```

Once provisioned, you will see an IP address in the `ADDRESS` column.
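If you want to reuse the address in later commands, you can also read it from the Gateway status; `GATEWAY_IP` is just an example variable name used again in step 5:

```bash
# Extract the first address reported in the Gateway's status.
GATEWAY_IP=$(kubectl get gateway my-gateway -o jsonpath='{.status.addresses[0].value}')
echo "$GATEWAY_IP"
```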
### 5. Deploy an application and route

Now, deploy a sample application and use an HTTPRoute to direct traffic to it.

Create a file named `app-route.yaml`:
```yaml
# Sample Application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: hashicorp/http-echo
          args:
            - "-text=Hello from Gateway API on SKE!"
---
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  namespace: default
spec:
  selector:
    app: echo
  ports:
    - port: 5678
---
# HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo-route
  namespace: default
spec:
  parentRefs:
    - name: my-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: echo-service
          port: 5678
```

Apply the file:
```bash
kubectl apply -f app-route.yaml
```

You can now access your application using the public IP retrieved in step 4: `http://<YOUR_GATEWAY_IP>/`
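As a quick smoke test, a plain HTTP request against the gateway address (using the `GATEWAY_IP` variable from step 4, or the address pasted in manually) should return the text configured in the echo deployment:

```bash
# Expected response: Hello from Gateway API on SKE!
curl "http://${GATEWAY_IP}/"
```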
## Configuration options

### Performance tuning

SKE allows you to vertically scale the underlying STACKIT Network Load Balancer using annotations. If your Gateway handles high throughput, you can adjust the service plan of the LoadBalancer Service created by your controller.

You can typically apply these annotations to the Service generated by your Gateway controller (check your controller’s documentation on how to propagate Service annotations):
| Annotation | Description |
|---|---|
| `lb.stackit.cloud/service-plan-id` | Defines the plan (e.g., `p50`, `p250`) for higher CPU/RAM on the load balancer. |
| `lb.stackit.cloud/tcp-proxy-protocol` | Enables the TCP PROXY protocol to preserve client IP addresses. |
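With Envoy Gateway, for example, annotations on the generated Service are typically set through an `EnvoyProxy` resource referenced from the `GatewayClass`. The sketch below is based on the Envoy Gateway `v1alpha1` API, and the plan value is only an assumption taken from the table above; verify the field names and valid plan identifiers against the Envoy Gateway and SKE documentation:

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: stackit-lb-config
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        annotations:
          lb.stackit.cloud/service-plan-id: "p250"  # assumed plan value, see table above
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: stackit-lb-config
    namespace: envoy-gateway-system
```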
### Private gateways

If you require an internal Gateway (not exposed to the internet), you can use the internal load balancer annotation:

```yaml
annotations:
  lb.stackit.cloud/internal-lb: "true"
```

Take a look at the SKE load balancing documentation for more details.
## Summary

Adopting the Gateway API on SKE today via a modern controller allows you to future-proof your network architecture while leveraging the robust infrastructure of STACKIT Network Load Balancers. Here’s a summary of the key differences between Ingress and Gateway API:
| Feature | Ingress | Gateway API |
|---|---|---|
| Protocol support | HTTP/HTTPS (L7) | HTTP, gRPC, TCP, UDP (L4 & L7) |
| Extensibility | Custom annotations | Native resources (Filters, Matches) |
| Role separation | Monolithic resource | Split into Class, Gateway, and Routes |
| Status | Frozen (no new features) | Active development |