Kubernetes Gateway API in GKE, Contour, and NGINX Implementations
This article explains the Gateway API, a relatively recent yet impressively powerful technology that is positioned as the way to "evolve" Kubernetes networking. We will explain the benefits of using the Gateway API with examples from the GKE, Contour, and NGINX providers.

Nov 22 2022 | by Anton Shaleynikov

What is the Gateway API and will it replace Ingress?

Gateway API is an open-source project managed by the SIG-NETWORK community. The project provides a collection of resources that model service networking in Kubernetes.

It is important to remember that the Gateway API is not an Ingress replacement but rather its evolution (a more advanced form). It covers the same function as Ingress, delivered as a superset of improved Ingress capabilities.

What problems does Ingress currently have?

The main issue with Ingress is that it supports only one user role (either the Kubernetes administrator or an operator) managing the configuration. Ingress doesn't work well with multiple teams, where a cluster is shared between developers, system administrators, and platform operators (i.e. where there are many user roles).

Another problem is the proliferation of annotations and custom resource definitions (CRDs) in many Ingress implementations, where they unlock the capabilities of different data planes and implement features that aren’t built into the Ingress resource. Examples of these features are header‑based matching, traffic weighting, and multi‑protocol support. Gateway API, on the contrary, delivers these options as part of the core API standard.
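For instance, header-based matching, which typically requires an implementation-specific annotation under Ingress, is part of the core HTTPRoute spec. A minimal sketch (all names here are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: header-match-route     # hypothetical name
spec:
  parentRefs:
  - name: example-gateway      # hypothetical Gateway
  rules:
  - matches:
    - headers:                 # match requests carrying this header
      - name: x-canary
        value: "true"
    backendRefs:
    - name: app-canary         # hypothetical Service
      port: 8080
```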

What advantages does the Gateway API have over Ingress?

Gateway API was intended to resolve the issues that Ingress currently has. Below, we’ll look at the biggest advantages of Gateway API over Ingress.

Cross namespace routing

Cross namespace routing allows user access control to be applied differently across namespaces for Routes and Gateways, splitting access and control across different parts of the cluster's routing configuration.

How does it work in Ingress? Ingress requires that all resources live in the same namespace. That's not a big deal if your cluster is managed by one team, but the one-to-one relationship can be a problem if you are sharing a cluster between several teams.

Unlike Ingress, the Kubernetes Gateway API provides cross-namespace routing, allowing a one-to-many relationship between a Gateway and its Routes, and letting services be deployed to different namespaces.
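A minimal sketch of cross-namespace routing (namespaces and names are hypothetical): a Gateway in an infrastructure namespace admits Routes from any namespace, and an HTTPRoute in an application namespace attaches to it:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: shared-gateway         # hypothetical, owned by the platform team
  namespace: infra
spec:
  gatewayClassName: example-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All              # admit Routes from any namespace
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: app-route              # hypothetical, owned by an app team
  namespace: store
spec:
  parentRefs:
  - name: shared-gateway
    namespace: infra           # cross-namespace attachment
  rules:
  - backendRefs:
    - name: store-svc
      port: 8080
```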


Advanced traffic routing

Routing allows matching on HTTP traffic and directing it to Kubernetes backends.

One of the most important features missing in Ingress is advanced traffic routing. Up until now, this was resolved with a service mesh, which made routing complex and tightly coupled with the mesh implementation.

With Gateway API, you can route numerous protocols, with support for the TCPRoute, HTTPRoute, and GRPCRoute resources.

Note! The GRPCRoute and TCPRoute resources are included in the "Experimental" channel of Gateway API.
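As a sketch, a TCPRoute (from the experimental channel, under the v1alpha2 API version) forwarding raw TCP traffic to a hypothetical database Service might look like this:

```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: db-route               # hypothetical name
spec:
  parentRefs:
  - name: example-gateway      # assumes the Gateway has a TCP listener
    sectionName: tcp           # named "tcp" in this sketch
  rules:
  - backendRefs:
    - name: my-database        # hypothetical Service
      port: 5432
```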

HTTP path redirects and rewrites

A redirect allows giving more than one URL to a page, while a rewrite decouples the URL entirely from the underlying resource.

This is a necessary and powerful capability, which is available in Ingress only through annotations.

Filters for path redirects and rewrites became available in the v1beta1 version of the Gateway API, but they are still experimental. The good news is that Gateway API at least supports them natively.
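A sketch of an HTTP redirect expressed as an HTTPRoute filter (the path-rewriting fields are still experimental; all names are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: redirect-route         # hypothetical name
spec:
  parentRefs:
  - name: example-gateway      # hypothetical Gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /old
    filters:
    - type: RequestRedirect
      requestRedirect:
        path:
          type: ReplacePrefixMatch   # experimental field
          replacePrefixMatch: /new
        statusCode: 301
```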

Traffic splitting

Traffic splitting allows specifying weights to shift traffic between different backends, which you can combine with A/B or canary strategies to achieve complex rollouts in a simple way.

The Gateway API supports typed Route resources and typed backends. In this way, Gateway API allows you to create a flexible API that supports various protocols (HTTP, GRPC) and different backends (Kubernetes Services, storage buckets, or functions).
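Traffic splitting is expressed with weights on backendRefs. A sketch shifting traffic 90/10 between two hypothetical Service versions:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: split-route            # hypothetical name
spec:
  parentRefs:
  - name: example-gateway      # hypothetical Gateway
  rules:
  - backendRefs:
    - name: app-v1             # stable version receives 90% of traffic
      port: 8080
      weight: 90
    - name: app-v2             # canary version receives 10%
      port: 8080
      weight: 10
```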


TLS

TLS is a cryptographic protocol that powers encryption for many network applications.

The Gateway API supports TLS configuration at various points in the network path between the client and the service, for upstream and downstream independently. Depending on the listener configuration, various TLS modes and route types are possible. Support for cert-manager is also available.

You can also configure the Gateway to reference a certificate in a different namespace.
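A sketch of an HTTPS listener terminating TLS with a certificate Secret stored in another namespace; the ReferenceGrant in the certificate's namespace explicitly allows the cross-namespace reference (all names are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: secure-gateway         # hypothetical name
  namespace: default
spec:
  gatewayClassName: example-class
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: example-cert     # Secret living in the certs namespace
        namespace: certs
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-gateway-cert
  namespace: certs             # created where the Secret lives
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: Gateway
    namespace: default
  to:
  - group: ""
    kind: Secret
```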

Integration with Progressive Delivery Tools

Progressive Delivery is a modern software development lifecycle that is based on the principles of Continuous Integration and Continuous Delivery (CI/CD).

The Gateway API currently offers integration with Flagger — a progressive delivery tool for Kubernetes, which provides advanced deployment strategies such as A/B, blue-green, and canary.

Gateway API resources

  • GatewayClass
  • Gateway
  • Route Resources


GatewayClass

A GatewayClass is a cluster-scoped resource that serves as a template for creating TCP/UDP (layer 4) and HTTP(S) (layer 7) load balancers in a Kubernetes cluster.
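A minimal GatewayClass sketch (the controllerName is implementation-specific; the value below is a placeholder):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: example-class          # cluster-scoped, no namespace
spec:
  # Placeholder; each implementation documents its own controller name
  controllerName: example.com/gateway-controller
```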


Gateway

A Gateway defines where and how the load balancers listen for traffic. Cluster operators create Gateways in their clusters based on a GatewayClass.
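A minimal Gateway sketch (the class and listener names are placeholders):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway        # hypothetical name
spec:
  gatewayClassName: example-class   # must match an existing GatewayClass
  listeners:
  - name: http
    protocol: HTTP
    port: 80
```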

Route Resources

Route resources define protocol-specific rules for mapping requests from a Gateway to Kubernetes Services. They include HTTPRoute, TLSRoute, TCPRoute, UDPRoute, and GRPCRoute.

From theory to practice. Prerequisites

The Gateway API is supported by many projects. In this article, I will show how to deploy Gateway API resources using three implementations: Contour, the NGINX Kubernetes Gateway, and the recently released Google Kubernetes Engine (GKE) Gateway controller.

The following prerequisites must be completed before using the Gateway API:

  • a Kubernetes cluster
  • the kubectl command-line tool

Note! Various clusters are supported by Contour and NGINX. For the GKE Gateway, however, you must use GKE version 1.20 or later and VPC-native (Alias IP) clusters only.

Deploying the demo with the Contour Gateway API

  1. Go to the contour directory:
cd contour
  2. Create Gateway API CRDs:
kubectl apply -f 
  3. Create a GatewayClass:
kubectl apply -f gatewayClass.yaml
  4. Create a Gateway in the projectcontour namespace:
kubectl apply -f gateway.yaml
  5. Deploy Contour:
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

This command creates:

  • namespace projectcontour to run Contour
  • contour CRDs
  • contour RBAC resources
  • contour Deployment / Service
  • envoy DaemonSet / Service
  • contour ConfigMap
  6. Update the Contour config map to enable Gateway API processing by specifying a gateway controller name, and restart Contour to pick up the config change:
kubectl apply -f configMap.yaml
kubectl -n projectcontour rollout restart deployment/contour
  7. Deploy the test application:
kubectl apply -f 

This command creates the following kuard resources in the default namespace:

  • a Deployment to run kuard as the test application
  • a Service to expose the kuard application on TCP port 80
  • an HTTPRoute, attached to the contour Gateway, to route requests for local.projectcontour.io to the kuard Service
  8. Verify that the kuard resources are available:
kubectl get po,svc,httproute -l app=kuard
NAME                         READY   STATUS    RESTARTS   AGE
pod/kuard-75bff7b748-8gvdk   1/1     Running   0          24m
pod/kuard-75bff7b748-l4g6d   1/1     Running   0          24m
pod/kuard-75bff7b748-nhccq   1/1     Running   0          24m

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kuard   ClusterIP   <none>        80/TCP    24m

NAME                                        HOSTNAMES                     AGE
httproute.gateway.networking.k8s.io/kuard   ["local.projectcontour.io"]   22s
  9. Testing the Gateway API:
kubectl -n projectcontour port-forward service/envoy 8888:80

In another terminal, make a request to the application via the forwarded port (note that local.projectcontour.io is a public DNS record that resolves to the loopback address, so the request uses the forwarded port):

curl -i http://local.projectcontour.io:8888

You will receive a 200 response code along with the HTML body of the main kuard page.

You can also open http://local.projectcontour.io:8888/ in a browser.
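For reference, the configMap.yaml applied in step 6 enables Gateway API processing by naming a gateway controller. A minimal sketch, assuming Contour's default controller name (check the documentation for the Contour version you deploy):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: contour
  namespace: projectcontour
data:
  contour.yaml: |
    # Enables Gateway API processing; the controller name must match
    # the one referenced by your GatewayClass (assumed default here)
    gateway:
      controllerName: projectcontour.io/gateway-controller
```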

Deploying the demo with the GKE Gateway API

  1. Go to the gke directory:
cd gke
  2. Create Gateway API CRDs:
kubectl apply -k 

This command installs the v1beta1 CRDs:

customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io created
  3. Create the Gateway in your cluster:
kubectl apply -f gateway.yaml
  4. Validate that the Gateway was deployed correctly:
kubectl describe gateways.gateway.networking.k8s.io external-http
  5. Deploy the demo applications:
kubectl apply -f 

This command creates:

  • store-v1 Deployment / Service
  • store-v2 Deployment / Service
  • store-german Deployment / Service
  6. Verify again that the resources are available:
kubectl get svc
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
store-german   ClusterIP   <none>        8080/TCP   71m
store-v1       ClusterIP   <none>        8080/TCP   71m
store-v2       ClusterIP   <none>        8080/TCP   71m
kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
store-german-66dcb75977-5gr2n   1/1     Running   0          20m
store-v1-65b47557df-jkjbm       1/1     Running   0          20m
store-v2-6856f59f7f-sq889       1/1     Running   0          20m
  7. Deploy the HTTPRoute in your cluster:
kubectl apply -f route.yaml
  8. Validate that the store HTTPRoute was applied successfully:
kubectl describe httproute.gateway.networking.k8s.io store

The routing rules defined in route.yaml determine how the Gateway processes HTTP traffic.

  9. Testing

Retrieve the IP address from the Gateway to send traffic to your application:

kubectl get gateways.gateway.networking.k8s.io external-http -o=jsonpath="{.status.addresses[0].value}"

or get the IP address of the Gateway by looking at the output of

kubectl describe gateway external-http

Make a request

curl -H "host: store.example.com" IP

Replace IP with the IP address from the previous step.
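The routing rules applied in step 7 could be sketched roughly as follows (the hostname and the header-based canary match are assumptions based on the demo's store-v1/store-v2/store-german Services; the actual route.yaml may differ):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store
spec:
  parentRefs:
  - kind: Gateway
    name: external-http
  hostnames:
  - "store.example.com"        # assumed demo hostname
  rules:
  - backendRefs:               # default backend
    - name: store-v1
      port: 8080
  - matches:                   # hypothetical canary match
    - headers:
      - name: env
        value: canary
    backendRefs:
    - name: store-v2
      port: 8080
```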

Deploying the demo with the NGINX Kubernetes Gateway

  1. Go to the nginx directory:
cd nginx
  2. Install the Gateway CRDs:
kubectl apply -k 
  3. Create the nginx-gateway Namespace:
kubectl apply -f namespace.yaml
  4. Create the njs-modules ConfigMap:
cd modules
kubectl create configmap njs-modules --from-file=httpmatches.js -n nginx-gateway
  5. Create the GatewayClass resource:
kubectl apply -f gatewayclass.yaml
  6. Deploy the NGINX Kubernetes Gateway:
kubectl apply -f nginx-gateway.yaml
  7. Create a LoadBalancer Service:

Note! Access to the NGINX Kubernetes Gateway can be obtained by creating either a NodePort Service or a LoadBalancer Service. In this example, we will use a LoadBalancer Service, which works for GCP, Azure, and DigitalOcean clusters. For AWS, deploy the Service with the AWS load balancer type.

kubectl apply -f loadbalancer.yaml
  8. Look up the public IP of the load balancer, which is reported in the EXTERNAL-IP column in the output of the following command:
kubectl get service/nginx-gateway -n nginx-gateway

NAME            TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
nginx-gateway   LoadBalancer   80:32407/TCP,443:31840/TCP   11m
  9. Deploy the Cafe Application:
cd app
kubectl apply -f cafe.yaml
  10. Create the Gateway:
kubectl apply -f gateway.yaml
  11. Create HTTPRoute resources:
kubectl apply -f cafe-routes.yaml
  12. To access the application, we will use curl to send requests to the coffee and tea Services.

To get coffee:

curl --resolve cafe.example.com:80: http://cafe.example.com:80/coffee
Server address:

To get tea:

curl --resolve cafe.example.com:80: http://cafe.example.com:80/tea
Server address:
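For context, the cafe-routes.yaml applied in step 11 maps the /coffee and /tea paths to the corresponding Services. A sketch of what one such route could look like (names are assumed from the demo):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: coffee                 # assumed name
spec:
  parentRefs:
  - name: gateway              # assumed Gateway name
  hostnames:
  - "cafe.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /coffee
    backendRefs:
    - name: coffee             # the coffee Service from cafe.yaml
      port: 80
```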


The Gateway API is the newest and most capable way to expose services running in Kubernetes, and it is positioned as a role-oriented, portable, expressive, and extensible standard.

By using the advantages of this API, it is now possible to build flexible and portable applications not only for the end user but also for multiple teams with various user roles, such as managers, developers, and administrators.

Keep in mind, though, that despite all the advantages of the Gateway API, several important features such as request redirects and rewrites are still experimental. They sound very promising and will surely have a great impact in the future, but we recommend waiting for a more stable implementation of these features. While you wait, you can certainly rely on an Ingress controller; for example, NGINX Ingress Controller can be a good solution for redirects and rewrites.

Also, remember that there is no good or bad technology - everything depends on where and how you are going to use a particular tool. 


This article was created by using the following sources: Kubernetes Gateway API, Contour Gateway API, GKE Gateway API, NGINX Kubernetes gateway.
