CEL-based RBAC

Use Common Expression Language (CEL) expressions to secure access to AI resources.

About CEL-based RBAC

Agentgateway proxies use CEL expressions to match requests or responses on specific attributes, such as a request header or source address. Requests that match a condition are allowed; requests that do not match any condition are denied.

For an overview of supported CEL expressions, see the agentgateway docs.
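
For example, the following condition, which this guide applies later, allows only requests that include the x-llm: gemini header. The expression evaluates against attributes of the incoming request; see the agentgateway docs for the full set of attributes that you can reference.

    request.headers['x-llm'] == 'gemini'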

Before you begin

  1. Install kgateway and enable the agentgateway integration.

  2. Verify that the agentgateway integration is enabled.

    helm get values kgateway -n kgateway-system -o yaml

    Example output:

    agentgateway:
      enabled: true
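
    If the output does not show the agentgateway integration as enabled, you can typically enable it by upgrading your Helm release with the agentgateway.enabled value. The following command is a sketch; replace <kgateway-chart> with the chart reference that you installed kgateway from, and include any other values that your installation uses.

    helm upgrade -i kgateway <kgateway-chart> \
      -n kgateway-system \
      --set agentgateway.enabled=true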

Set up access to Gemini

Configure access to an LLM provider such as Gemini. You can use any other LLM provider, an MCP server, or an agent to try out CEL-based RBAC.

  1. Save your Gemini API key as an environment variable. To retrieve your API key, log in to the Google AI Studio and select API Keys.

    export GOOGLE_KEY=<your-api-key>
  2. Create a secret to authenticate to Google.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: google-secret
      namespace: kgateway-system
      labels:
        app: agentgateway
    type: Opaque
    stringData:
      Authorization: $GOOGLE_KEY
    EOF
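    Optionally, verify that the secret was created.

    kubectl get secret google-secret -n kgateway-system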
  3. Create a Backend resource to define the Gemini destination.

    kubectl apply -f- <<EOF
    apiVersion: gateway.kgateway.dev/v1alpha1
    kind: Backend
    metadata:
      labels:
        app: agentgateway
      name: google
      namespace: kgateway-system
    spec:
      ai:
        llm:
          gemini:
            apiVersion: v1beta
            authToken:
              kind: SecretRef
              secretRef:
                name: google-secret
            model: gemini-1.5-flash-latest
      type: AI
    EOF
    Review the following settings to understand this configuration.

    gemini: The Gemini AI provider.
    apiVersion: The API version of Gemini that is compatible with the model that you plan to use. In this example, you must use v1beta because the gemini-1.5-flash-latest model is not compatible with the v1 API version. For more information, see the Google AI docs.
    authToken: The authentication token to use to authenticate to the LLM provider. The example refers to the secret that you created in the previous step.
    model: The model to use to generate responses. In this example, you use the gemini-1.5-flash-latest model. For more models, see the Google AI docs.
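
    Optionally, verify that the Backend resource was created.

    kubectl get backend google -n kgateway-system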
  4. Create an HTTPRoute resource to route requests to the Gemini backend. Note that kgateway automatically rewrites the path that you set up, such as /gemini, to the appropriate chat completion endpoint of the LLM provider that you configured in the Backend resource.

    kubectl apply -f- <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: google
      namespace: kgateway-system
      labels:
        app: agentgateway
    spec:
      parentRefs:
        - name: agentgateway
          namespace: kgateway-system
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: /gemini
        backendRefs:
        - name: google
          namespace: kgateway-system
          group: gateway.kgateway.dev
          kind: Backend
    EOF
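    The requests in the next step target the gateway either through its external address, which is stored in the $INGRESS_GW_ADDRESS environment variable, or through a local port-forward. The following commands are a sketch that assumes the gateway is exposed through a Service named agentgateway in the kgateway-system namespace; adjust the Service name and port for your environment.

    # External address (assumes a LoadBalancer Service named agentgateway)
    export INGRESS_GW_ADDRESS=$(kubectl get svc agentgateway -n kgateway-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

    # Alternative: port-forward the gateway Service for local testing
    kubectl port-forward svc/agentgateway -n kgateway-system 8080:8080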
  5. Send a request to the LLM provider API. Verify that the request succeeds and that you get back a response from the chat completion API.

    curl -vik "$INGRESS_GW_ADDRESS:8080/gemini" -H content-type:application/json  -d '{
      "model": "",
      "messages": [
       {"role": "user", "content": "Explain how AI works in simple terms."}
     ]
    }'
    curl -vik "localhost:8080/gemini" -H content-type:application/json  -d '{
      "model": "",
      "messages": [
       {"role": "user", "content": "Explain how AI works in simple terms."}
     ]
    }'

    Example output:

    {"id":"aGLEaMjbLp6p_uMPopeAoAc",
    "choices":
      [{"index":0,"message":{
          "content":"Imagine teaching a dog a trick.  You show it what to do, reward it when it's right, and correct it when it's wrong.  Eventually, the dog learns.\n\nAI is similar.  We \"teach\" computers by showing them lots of examples.  For example, to recognize cats in pictures, we show it thousands of pictures of cats, labeling each one \"cat.\"  The AI learns patterns in these pictures – things like pointy ears, whiskers, and furry bodies – and eventually, it can identify a cat in a new picture it's never seen before.\n\nThis learning process uses math and algorithms (like a secret code of instructions) to find patterns and make predictions.  Some AI is more like a dog learning tricks (learning from examples), and some is more like following a very detailed recipe (following pre-programmed rules).\n\nSo, in short: AI is about teaching computers to learn from data and make decisions or predictions, just like we teach dogs tricks.\n",
          "role":"assistant"
          },
       "finish_reason":"stop"
       }],
     "created":1757700714,
     "model":"gemini-1.5-flash-latest",
     "object":"chat.completion",
     "usage":{
         "prompt_tokens":8,
         "completion_tokens":205,
         "total_tokens":213
         }
    }

Set up RBAC permissions

  1. Create a TrafficPolicy with your CEL rules. The following example allows requests that include the x-llm: gemini header and denies all other requests to the google HTTPRoute.

    kubectl apply -f- <<EOF
    apiVersion: gateway.kgateway.dev/v1alpha1
    kind: TrafficPolicy
    metadata:
      name: rbac
      namespace: kgateway-system
      labels:
        app: agentgateway
    spec:
      targetRefs:
      - group: gateway.networking.k8s.io
        kind: HTTPRoute
        name: google
      rbac:
        policy:
          matchExpressions:
            - "request.headers['x-llm'] == 'gemini'"
    EOF
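    You can combine multiple conditions in a single expression by using CEL's boolean operators. The following variation is a hypothetical sketch; the x-team header is only an example and is not configured elsewhere in this guide.

    rbac:
      policy:
        matchExpressions:
          - "request.headers['x-llm'] == 'gemini' && request.headers['x-team'] == 'ml'"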
  2. Send a request to the LLM provider API without the x-llm header. Verify that the request is denied with a 403 HTTP response code.

    curl -vik "$INGRESS_GW_ADDRESS:8080/gemini" -H content-type:application/json  -d '{
      "model": "",
      "messages": [
       {"role": "user", "content": "Explain how AI works in simple terms."}
     ]
    }'
    curl -vik "localhost:8080/gemini" -H content-type:application/json  -d '{
      "model": "",
      "messages": [
       {"role": "user", "content": "Explain how AI works in simple terms."}
     ]
    }'

    Example output:

    * upload completely sent off: 109 bytes
    < HTTP/1.1 403 Forbidden
    HTTP/1.1 403 Forbidden
    < content-type: text/plain
    content-type: text/plain
    < content-length: 20
    content-length: 20
    < 
    
    * Connection #0 to host localhost left intact
    authorization failed
    
  3. Send another request to the LLM provider. This time, include the x-llm: gemini header. Verify that the request succeeds with a 200 HTTP response code.

    curl -vik "$INGRESS_GW_ADDRESS:8080/gemini" \
      -H "content-type: application/json" \
      -H "x-llm: gemini" -d '{
      "model": "",
      "messages": [
       {"role": "user", "content": "Explain how AI works in simple terms."}
     ]
    }'
    curl -vik "localhost:8080/gemini" \
      -H "content-type: application/json" \
      -H "x-llm: gemini" -d '{
      "model": "",
      "messages": [
       {"role": "user", "content": "Explain how AI works in simple terms."}
     ]
    }'

Cleanup

You can remove the resources that you created in this guide.

kubectl delete TrafficPolicy -n kgateway-system -l app=agentgateway
kubectl delete httproute google -n kgateway-system
kubectl delete backend google -n kgateway-system
kubectl delete secret google-secret -n kgateway-system