AWS NLB

In this guide, you expose the kgateway proxy with an AWS Network Load Balancer (NLB). The following use cases are covered:

  • NLB HTTP: Create an HTTP listener on the NLB that exposes an HTTP endpoint on your gateway proxy. Traffic from the NLB to the proxy is not secured.
  • TLS passthrough: Expose an HTTPS endpoint of your gateway with an NLB. The NLB passes through HTTPS traffic to the gateway proxy where the traffic is terminated.
⚠️ Keep in mind the following considerations when working with an NLB:

  • An AWS NLB has an idle timeout of 350 seconds that you cannot change. Connections that are idle for longer than this timeout are reset, which can increase the number of TCP connection resets.

Before you begin

  1. Create or use an existing AWS account.
  2. Follow the Get started guide to install kgateway, set up a gateway resource, and deploy the httpbin sample app.

Step 1: Deploy the AWS Load Balancer controller

  1. Save the name and region of your AWS EKS cluster and your AWS account ID in environment variables.

    export CLUSTER_NAME="<cluster-name>"
    export REGION="<region>"
    export AWS_ACCOUNT_ID=<aws-account-ID>
    export IAM_POLICY_NAME=AWSLoadBalancerControllerIAMPolicyNew
    export IAM_SA=aws-load-balancer-controller
  2. Create an AWS IAM policy and bind it to a Kubernetes service account.

    # Set up an IAM OIDC provider for a cluster to enable IAM roles for pods
    eksctl utils associate-iam-oidc-provider \
     --region ${REGION} \
     --cluster ${CLUSTER_NAME} \
     --approve
    
    # Fetch the IAM policy that is required for the Kubernetes service account
    curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.3/docs/install/iam_policy.json
    
    # Create the IAM policy
    aws iam create-policy \
     --policy-name ${IAM_POLICY_NAME} \
     --policy-document file://iam-policy.json
    
    # Create the Kubernetes service account
    eksctl create iamserviceaccount \
     --cluster=${CLUSTER_NAME} \
     --namespace=kube-system \
     --name=${IAM_SA} \
     --attach-policy-arn=arn:aws:iam::${AWS_ACCOUNT_ID}:policy/${IAM_POLICY_NAME} \
     --override-existing-serviceaccounts \
     --approve \
     --region ${REGION}
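     Optionally, before you continue, you can confirm that the OIDC provider is associated with your cluster by comparing the cluster's issuer URL with the providers in your account. This check is a sketch that assumes the aws CLI is configured for the same account:

```shell
# Get the OIDC issuer URL for the cluster
aws eks describe-cluster \
  --name ${CLUSTER_NAME} \
  --region ${REGION} \
  --query "cluster.identity.oidc.issuer" \
  --output text

# List the OIDC providers in your account; the issuer's hostname
# from the previous command should appear in one of the provider ARNs
aws iam list-open-id-connect-providers
```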
  3. Verify that the service account is created in your cluster.

    kubectl -n kube-system get sa aws-load-balancer-controller -o yaml
  4. Deploy the AWS Load Balancer Controller.

    kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller/crds?ref=master"
    
    helm repo add eks https://aws.github.io/eks-charts
    helm repo update
    helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
      -n kube-system \
      --set clusterName=${CLUSTER_NAME} \
      --set serviceAccount.create=false \
      --set serviceAccount.name=${IAM_SA}
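     Optionally, you can wait for the controller to become ready before you continue. This sketch assumes the default deployment name and labels that the eks Helm chart applies:

```shell
# Wait for the AWS Load Balancer Controller deployment to roll out
kubectl rollout status deployment/aws-load-balancer-controller \
  -n kube-system --timeout=120s

# Check that the controller pods are running
kubectl get pods -n kube-system \
  -l app.kubernetes.io/name=aws-load-balancer-controller
```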

Step 2: Deploy your gateway proxy

  1. Create a GatewayParameters resource with custom AWS annotations. These annotations instruct the AWS load balancer controller to expose the gateway proxy with a public-facing AWS NLB.

    kubectl apply -f- <<EOF
    apiVersion: gateway.kgateway.dev/v1alpha1
    kind: GatewayParameters
    metadata:
      name: custom-gw-params
      namespace: kgateway-system
    spec:
      kube: 
        service:
          extraAnnotations:
            service.beta.kubernetes.io/aws-load-balancer-type: "external" 
            service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing 
            service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "instance"
    EOF
     The annotations have the following effects:

       • aws-load-balancer-type: "external": Instructs Kubernetes to pass the Gateway’s service configuration to the AWS Load Balancer Controller that you deployed earlier, instead of using the built-in Kubernetes capabilities. For more information, see the AWS documentation.
       • aws-load-balancer-scheme: internet-facing: Creates the NLB with a public IP address that is accessible from the internet. For more information, see the AWS documentation.
       • aws-load-balancer-nlb-target-type: "instance": Registers the cluster’s worker nodes as targets with the NLB by using their EC2 instance IDs. For more information, see the AWS documentation.
  2. Depending on the annotations that you use on your gateway proxy, you can configure the NLB in different ways. Choose one of the following options.

     NLB HTTP option: Create a simple NLB that accepts HTTP traffic on port 80 and forwards this traffic to the HTTP listener on your gateway proxy.

    This Gateway resource references the custom GatewayParameters resource that you created.

    kubectl apply -f- <<EOF
    kind: Gateway
    apiVersion: gateway.networking.k8s.io/v1
    metadata:
      name: aws-cloud
      namespace: kgateway-system
    spec:
      gatewayClassName: kgateway
      infrastructure:
        parametersRef:
          name: custom-gw-params
          group: gateway.kgateway.dev
          kind: GatewayParameters        
      listeners:
      - protocol: HTTP
        port: 80
        name: http
        allowedRoutes:
          namespaces:
            from: All
    EOF

     TLS passthrough option: Pass through HTTPS requests from the AWS NLB to your gateway proxy, and terminate the TLS traffic at the gateway proxy for added security.

    1. Create a self-signed TLS certificate to configure your gateway proxy with an HTTPS listener.

      1. Create a directory to store your TLS credentials in.

        mkdir example_certs
      2. Create a self-signed root certificate. The following command creates a root certificate that is valid for a year and can serve any hostname. You use this certificate to sign the server certificate for the gateway later. For other command options, see the OpenSSL docs.

        # root cert
        openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=any domain/CN=*' -keyout example_certs/root.key -out example_certs/root.crt
      3. Use the root certificate to sign the gateway certificate.

        openssl req -out example_certs/gateway.csr -newkey rsa:2048 -nodes -keyout example_certs/gateway.key -subj "/CN=*/O=any domain"
        openssl x509 -req -sha256 -days 365 -CA example_certs/root.crt -CAkey example_certs/root.key -set_serial 0 -in example_certs/gateway.csr -out example_certs/gateway.crt
      4. Create a Kubernetes secret to store your server TLS certificate. You create the secret in the same cluster and namespace that the gateway is deployed to.

        kubectl create secret tls -n kgateway-system https \
          --key example_certs/gateway.key \
          --cert example_certs/gateway.crt
        kubectl label secret https gateway=https --namespace kgateway-system
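         Optionally, you can inspect the certificate and confirm that it chains to your root certificate before you reference it in the Gateway. A quick check with openssl:

```shell
# Show the subject, issuer, and validity window of the gateway certificate
openssl x509 -in example_certs/gateway.crt -noout -subject -issuer -dates

# Verify that the gateway certificate is signed by the root certificate
openssl verify -CAfile example_certs/root.crt example_certs/gateway.crt
```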
    2. Create a Gateway with an HTTPS listener that terminates incoming TLS traffic. Make sure to reference the custom GatewayParameters resource and the Kubernetes secret that contains the TLS certificate information.

      kubectl apply -f- <<EOF
      apiVersion: gateway.networking.k8s.io/v1
      kind: Gateway
      metadata:
        name: aws-cloud
        namespace: kgateway-system
        labels:
          gateway: aws-cloud
      spec:
        gatewayClassName: kgateway
        infrastructure:
          parametersRef:
            name: custom-gw-params
            group: gateway.kgateway.dev
            kind: GatewayParameters
        listeners:
          - name: https
            port: 443
            protocol: HTTPS
            hostname: https.example.com
            tls:
              mode: Terminate
              certificateRefs:
                - name: https
                  kind: Secret
            allowedRoutes:
              namespaces:
                from: All
      EOF
  3. Verify that your gateway is created.

    kubectl get gateway aws-cloud -n kgateway-system
  4. Verify that the gateway service is exposed with an AWS NLB and assigned an AWS hostname.

    kubectl get services aws-cloud -n kgateway-system

    Example output:

    NAME        TYPE           CLUSTER-IP      EXTERNAL-IP                                                                     PORT(S)        AGE
    aws-cloud   LoadBalancer   172.20.39.233   k8s-kgateway-awscloud-edaaecf0c5-2d48d93624e3a4bf.elb.us-east-2.amazonaws.com   80:30565/TCP   12s
    
  5. Review the NLB in the AWS EC2 dashboard.

    1. Go to the AWS EC2 dashboard.
    2. In the left navigation, go to Load Balancing > Load Balancers.
    3. Find and open the NLB that was created for you, with a name such as k8s-kgateway-awscloud-<hash>.
    4. On the Resource map tab, verify that the load balancer points to EC2 targets in your cluster. For example, you can click on the target EC2 name to verify that the instance summary lists your cluster name.

Step 3: Test traffic to the NLB

  NLB HTTP option:

  1. Create an HTTPRoute resource and associate it with the gateway that you created.

    kubectl apply -f- <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: httpbin-elb
      namespace: httpbin
      labels:
        example: httpbin-route
    spec:
      parentRefs:
        - name: aws-cloud
          namespace: kgateway-system
      hostnames:
        - "www.nlb.com"
      rules:
        - backendRefs:
            - name: httpbin
              port: 8000
    EOF
  2. Get the AWS hostname of the NLB.

    export INGRESS_GW_ADDRESS=$(kubectl get svc -n kgateway-system aws-cloud -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    echo $INGRESS_GW_ADDRESS
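     The NLB can take a few minutes to provision, and its hostname might not resolve immediately. A small retry loop like the following (a sketch, not required) waits until the hostname resolves before you send test traffic:

```shell
# Poll DNS until the NLB hostname resolves (up to about 5 minutes)
for i in $(seq 1 30); do
  if dig +short "${INGRESS_GW_ADDRESS}" | grep -q .; then
    echo "NLB hostname resolves"
    break
  fi
  echo "Waiting for DNS... (${i}/30)"
  sleep 10
done
```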
  3. Send a request to the httpbin app.

    curl -vi http://$INGRESS_GW_ADDRESS:80/headers -H "host: www.nlb.com:80"

    Example output:

    ...
    < HTTP/1.1 200 OK
    HTTP/1.1 200 OK
    
  TLS passthrough option:

  1. Create an HTTPRoute resource and associate it with the gateway that you created.

    kubectl apply -f- <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: httpbin-elb
      namespace: httpbin
      labels:
        example: httpbin-route
    spec:
      parentRefs:
        - name: aws-cloud
          namespace: kgateway-system
      hostnames:
        - "https.example.com"
      rules:
        - backendRefs:
            - name: httpbin
              port: 8000
    EOF
  2. Get the IP address that is associated with the NLB’s AWS hostname.

    export INGRESS_GW_HOSTNAME=$(kubectl get svc -n kgateway-system aws-cloud -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    echo $INGRESS_GW_HOSTNAME
    export INGRESS_GW_ADDRESS=$(dig +short ${INGRESS_GW_HOSTNAME} | head -1)
    echo $INGRESS_GW_ADDRESS
  3. Send a request to the httpbin app. Verify that you see a successful TLS handshake and that you get back a 200 HTTP response code from the httpbin app.

    curl -vik --resolve "https.example.com:443:${INGRESS_GW_ADDRESS}" https://https.example.com:443/status/200

    Example output:

    * Hostname https.example.com was found in DNS cache
    *   Trying 3.XX.XXX.XX:443...
    * Connected to https.example.com (3.XX.XXX.XX) port 443 (#0)
    * ALPN, offering h2
    * ALPN, offering http/1.1
    * successfully set certificate verify locations:
    *  CAfile: /etc/ssl/cert.pem
    *  CApath: none
    * TLSv1.2 (OUT), TLS handshake, Client hello (1):
    * TLSv1.2 (IN), TLS handshake, Server hello (2):
    * TLSv1.2 (IN), TLS handshake, Certificate (11):
    * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
    * TLSv1.2 (IN), TLS handshake, Server finished (14):
    * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
    * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
    * TLSv1.2 (OUT), TLS handshake, Finished (20):
    * TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
    * TLSv1.2 (IN), TLS handshake, Finished (20):
    * SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
    * ALPN, server accepted to use h2
    * Server certificate:
    *  subject: CN=*; O=any domain
    *  start date: Jul 23 20:03:48 2024 GMT
    *  expire date: Jul 23 20:03:48 2025 GMT
    *  issuer: O=any domain; CN=*
    *  SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
    * Using HTTP2, server supports multi-use
    * Connection state changed (HTTP/2 confirmed)
    * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
    * Using Stream ID: 1 (easy handle 0x14200fe00)
    ...
    < HTTP/2 200 
    HTTP/2 200 
    
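Because the gateway presents a certificate that is signed by your own root CA, you can also inspect the handshake and verify the chain against the root certificate instead of skipping verification. A sketch with openssl s_client, assuming you still have example_certs/root.crt from the earlier step:

```shell
# Connect to the NLB, send the SNI for the gateway listener, and verify
# the presented certificate chain against the root certificate
openssl s_client -connect ${INGRESS_GW_ADDRESS}:443 \
  -servername https.example.com \
  -CAfile example_certs/root.crt </dev/null \
  | grep -E "subject=|issuer=|Verify return code"
```

A "Verify return code: 0 (ok)" line indicates that the certificate chains to your root CA.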

Cleanup

You can remove the resources that you created in this guide.
  1. Delete the Gateway, GatewayParameters, HTTPRoute, and secret resources.

    kubectl delete gatewayparameters custom-gw-params -n kgateway-system
    kubectl delete gateway aws-cloud -n kgateway-system
    kubectl delete httproute httpbin-elb -n httpbin
    kubectl delete secret https -n kgateway-system
  2. Delete the AWS IAM resources that you created.

    aws iam delete-policy --policy-arn=arn:aws:iam::${AWS_ACCOUNT_ID}:policy/${IAM_POLICY_NAME}
    eksctl delete iamserviceaccount --name=${IAM_SA} --namespace=kube-system --cluster=${CLUSTER_NAME}
  3. Uninstall the Helm release for the aws-load-balancer-controller.

    helm uninstall aws-load-balancer-controller -n kube-system