View traces
Use a custom ConfigMap to configure your agentgateway proxy for tracing.
Before you begin
-
Install kgateway and enable the agentgateway integration.
-
Verify that the agentgateway integration is enabled.
helm get values kgateway -n kgateway-system -o yaml
Example output:
agentgateway:
  enabled: true
Set up an OpenTelemetry collector
Install an OpenTelemetry (OTel) collector that the agentgateway proxy can send traces to. Depending on your environment, you can further configure the collector to export these traces to your preferred tracing platform, such as Jaeger. The following example prints traces with the debug exporter and also forwards them to a Tempo instance at tempo.telemetry.svc.cluster.local; remove or adjust the otlp/tempo exporter if you do not run Tempo.
-
Install the OTel collector.
helm upgrade --install opentelemetry-collector-traces opentelemetry-collector \
  --repo https://open-telemetry.github.io/opentelemetry-helm-charts \
  --version 0.127.2 \
  --set mode=deployment \
  --set image.repository="otel/opentelemetry-collector-contrib" \
  --set command.name="otelcol-contrib" \
  --namespace=telemetry \
  --create-namespace \
  -f - <<EOF
config:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
  exporters:
    otlp/tempo:
      endpoint: http://tempo.telemetry.svc.cluster.local:4317
      tls:
        insecure: true
    debug:
      verbosity: detailed
  service:
    pipelines:
      traces:
        receivers: [otlp]
        processors: [batch]
        exporters: [debug, otlp/tempo]
EOF
-
Verify that the collector is up and running.
kubectl get pods -n telemetry
Example output:
NAME                                             READY   STATUS    RESTARTS   AGE
opentelemetry-collector-traces-8f566f445-l82s6   1/1     Running   0          17m
Configure your proxy
-
Create a ConfigMap with your agentgateway tracing configuration. The following example collects additional information about the request to the LLM and adds this information to the trace. The trace is then sent to the collector that you set up earlier. To learn more about the fields that you can configure, see the agentgateway docs.
ℹ️ For more tracing providers, see Other tracing configurations.

kubectl apply -f- <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-gateway-config
  namespace: kgateway-system
data:
  config.yaml: |-
    config:
      tracing:
        otlpEndpoint: http://opentelemetry-collector-traces.telemetry.svc.cluster.local:4317
        otlpProtocol: grpc
        randomSampling: true
        fields:
          add:
            gen_ai.operation.name: '"chat"'
            gen_ai.system: "llm.provider"
            gen_ai.request.model: "llm.requestModel"
            gen_ai.response.model: "llm.responseModel"
            gen_ai.usage.completion_tokens: "llm.outputTokens"
            gen_ai.usage.prompt_tokens: "llm.inputTokens"
EOF
-
Create a GatewayParameters resource that references the ConfigMap that you created.
kubectl apply -f- <<EOF
apiVersion: gateway.kgateway.dev/v1alpha1
kind: GatewayParameters
metadata:
  name: tracing
  namespace: kgateway-system
spec:
  kube:
    agentgateway:
      customConfigMapName: agent-gateway-config
EOF
-
Create your agentgateway proxy. Make sure to reference the GatewayParameters resource that you created so that your proxy starts with the custom tracing configuration.
kubectl apply -f- <<EOF
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: agentgateway
  namespace: kgateway-system
  labels:
    app: agentgateway
spec:
  gatewayClassName: agentgateway
  infrastructure:
    parametersRef:
      name: tracing
      group: gateway.kgateway.dev
      kind: GatewayParameters
  listeners:
  - protocol: HTTP
    port: 8080
    name: http
    allowedRoutes:
      namespaces:
        from: All
EOF
-
Verify that your agentgateway proxy is up and running.
kubectl get pods -n kgateway-system
Example output:
NAMESPACE         NAME                           READY   STATUS    RESTARTS   AGE
kgateway-system   agentgateway-8b5dc4874-bl79q   1/1     Running   0          12s
-
Get the external address of the gateway and save it in an environment variable.
export INGRESS_GW_ADDRESS=$(kubectl get svc -n kgateway-system agentgateway -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
echo $INGRESS_GW_ADDRESS

If your environment does not provision an external load balancer address, port-forward the proxy on your local machine instead.

kubectl port-forward deployment/agentgateway -n kgateway-system 8080:8080
Set up access to Gemini
Configure access to an LLM provider such as Gemini and send a sample request. You later use this request to verify your tracing configuration.
-
Save your Gemini API key as an environment variable. To retrieve your API key, log in to the Google AI Studio and select API Keys.
export GOOGLE_KEY=<your-api-key>
-
Create a secret to authenticate to Google.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: google-secret
  namespace: kgateway-system
  labels:
    app: agentgateway
type: Opaque
stringData:
  Authorization: $GOOGLE_KEY
EOF
-
Create a Backend resource to define the Gemini destination.
kubectl apply -f- <<EOF
apiVersion: gateway.kgateway.dev/v1alpha1
kind: Backend
metadata:
  labels:
    app: agentgateway
  name: google
  namespace: kgateway-system
spec:
  ai:
    llm:
      gemini:
        apiVersion: v1beta
        authToken:
          kind: SecretRef
          secretRef:
            name: google-secret
        model: gemini-1.5-flash-latest
  type: AI
EOF
gemini: The Gemini AI provider.
apiVersion: The API version of Gemini that is compatible with the model that you plan to use. In this example, you must use v1beta, because the gemini-1.5-flash-latest model is not compatible with the v1 API version. For more information, see the Google AI docs.
authToken: The authentication token to use to authenticate to the LLM provider. The example refers to the secret that you created in the previous step.
model: The model to use to generate responses. In this example, you use the gemini-1.5-flash-latest model. For more models, see the Google AI docs.
-
Create an HTTPRoute resource to route requests to the Gemini backend. Note that kgateway automatically rewrites the endpoint that you set up (such as /gemini) to the appropriate chat completion endpoint of the LLM provider, based on the LLM provider that you set up in the Backend resource.

kubectl apply -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: google
  namespace: kgateway-system
  labels:
    app: agentgateway
spec:
  parentRefs:
  - name: agentgateway
    namespace: kgateway-system
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /gemini
    backendRefs:
    - name: google
      namespace: kgateway-system
      group: gateway.kgateway.dev
      kind: Backend
EOF
-
Send a request to the LLM provider API. Verify that the request succeeds and that you get back a response from the chat completion API.
curl -vik "$INGRESS_GW_ADDRESS:8080/gemini" -H content-type:application/json -d '{
  "model": "",
  "messages": [
    {"role": "user", "content": "Explain how AI works in simple terms."}
  ]
}'

If you port-forwarded the proxy instead, send the request to localhost:

curl -vik "localhost:8080/gemini" -H content-type:application/json -d '{
  "model": "",
  "messages": [
    {"role": "user", "content": "Explain how AI works in simple terms."}
  ]
}'
Example output:
{"id":"aGLEaMjbLp6p_uMPopeAoAc", "choices": [{"index":0,"message":{ "content":"Imagine teaching a dog a trick. You show it what to do, reward it when it's right, and correct it when it's wrong. Eventually, the dog learns.\n\nAI is similar. We \"teach\" computers by showing them lots of examples. For example, to recognize cats in pictures, we show it thousands of pictures of cats, labeling each one \"cat.\" The AI learns patterns in these pictures – things like pointy ears, whiskers, and furry bodies – and eventually, it can identify a cat in a new picture it's never seen before.\n\nThis learning process uses math and algorithms (like a secret code of instructions) to find patterns and make predictions. Some AI is more like a dog learning tricks (learning from examples), and some is more like following a very detailed recipe (following pre-programmed rules).\n\nSo, in short: AI is about teaching computers to learn from data and make decisions or predictions, just like we teach dogs tricks.\n", "role":"assistant" }, "finish_reason":"stop" }], "created":1757700714, "model":"gemini-1.5-flash-latest", "object":"chat.completion", "usage":{ "prompt_tokens":8, "completion_tokens":205, "total_tokens":213 } }
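The usage block in the response carries the token counts that later appear as trace attributes. As a minimal sketch (assuming the compact single-line JSON shape shown above; prefer jq in real workflows), you can pull out the total like this:

```shell
# Sketch: extract token usage from the chat completion response.
# The JSON here is a trimmed sample of the output above.
resp='{"usage":{"prompt_tokens":8,"completion_tokens":205,"total_tokens":213}}'
total=$(printf '%s' "$resp" | sed -n 's/.*"total_tokens":\([0-9]*\).*/\1/p')
echo "$total"
```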
Verify tracing
-
Get the logs of the agentgateway proxy. In the CLI output, find the trace.id.

kubectl logs deploy/agentgateway -n kgateway-system
Example output:
info request gateway=kgateway-system/agentgateway listener=http route=kgateway-system/google endpoint=generativelanguage.googleapis.com:443 src.addr=127.0.0.1:49576 http.method=POST http.host=localhost http.path=/gemini http.version=HTTP/1.1 http.status=200 trace.id=d65e4eeb983e2d964e71e8dc8c405f97 span.id=b836e1b1d51b3e74 llm.provider=gemini llm.request.model= llm.request.tokens=8 llm.response.model=gemini-1.5-flash-latest llm.response.tokens=313 duration=3165ms
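Rather than eyeballing the log line, you can extract the trace ID with sed; a minimal sketch over a trimmed sample of the line above:

```shell
# Sketch: pull trace.id out of an agentgateway access log line.
# The log line is a trimmed sample of the output above.
log='http.status=200 trace.id=d65e4eeb983e2d964e71e8dc8c405f97 span.id=b836e1b1d51b3e74'
trace_id=$(printf '%s' "$log" | sed -n 's/.*trace\.id=\([0-9a-f]*\).*/\1/p')
echo "$trace_id"
```

In a live cluster, you would pipe the output of kubectl logs deploy/agentgateway -n kgateway-system through the same sed expression.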
-
Get the logs of the collector and search for the trace ID. Verify that you see the additional LLM attributes that you configured earlier, such as gen_ai.operation.name and gen_ai.system.
kubectl logs deploy/opentelemetry-collector-traces -n telemetry
Example output:
Span #0
    Trace ID       : d65e4eeb983e2d964e71e8dc8c405f97
    Parent ID      :
    ID             : b836e1b1d51b3e74
    Name           : POST /gemini
    Kind           : Server
    Start time     : 2025-09-24 18:12:58.653868462 +0000 UTC
    End time       : 2025-09-24 18:13:01.821700755 +0000 UTC
    Status code    : Unset
    Status message :
Attributes:
     -> gateway: Str(kgateway-system/agentgateway)
     -> listener: Str(http)
     -> route: Str(kgateway-system/google)
     -> endpoint: Str(generativelanguage.googleapis.com:443)
     -> src.addr: Str(127.0.0.1:49576)
     -> http.method: Str(POST)
     -> http.host: Str(localhost)
     -> http.path: Str(/gemini)
     -> http.version: Str(HTTP/1.1)
     -> http.status: Int(200)
     -> trace.id: Str(d65e4eeb983e2d964e71e8dc8c405f97)
     -> span.id: Str(b836e1b1d51b3e74)
     -> llm.provider: Str(gemini)
     -> llm.request.model: Str()
     -> llm.request.tokens: Int(8)
     -> llm.response.model: Str(gemini-1.5-flash-latest)
     -> llm.response.tokens: Int(313)
     -> duration: Str(3165ms)
     -> url.scheme: Str(http)
     -> network.protocol.version: Str(1.1)
     -> gen_ai.operation.name: Str(chat)
     -> gen_ai.system: Str(gemini)
Other tracing configurations
Review configurations for common tracing providers that you can use with agentgateway.

Jaeger
apiVersion: v1
kind: ConfigMap
metadata:
name: jaeger-tracing-config
data:
config.yaml: |-
config:
tracing:
otlpEndpoint: http://jaeger-collector.jaeger.svc.cluster.local:4317
otlpProtocol: grpc
randomSampling: true
fields:
add:
gen_ai.operation.name: '"chat"'
gen_ai.system: "llm.provider"
gen_ai.request.model: "llm.request_model"
gen_ai.response.model: "llm.response_model"
gen_ai.usage.completion_tokens: "llm.output_tokens"
gen_ai.usage.prompt_tokens: "llm.input_tokens"
Langfuse

apiVersion: v1
kind: ConfigMap
metadata:
name: langfuse-tracing-config
data:
config.yaml: |-
config:
tracing:
otlpEndpoint: https://us.cloud.langfuse.com/api/public/otel
otlpProtocol: http
headers:
Authorization: "Basic <base64-encoded-credentials>"
randomSampling: true
fields:
add:
gen_ai.operation.name: '"chat"'
gen_ai.system: "llm.provider"
gen_ai.prompt: "llm.prompt"
gen_ai.completion: 'llm.completion.map(c, {"role":"assistant", "content": c})'
gen_ai.usage.completion_tokens: "llm.output_tokens"
gen_ai.usage.prompt_tokens: "llm.input_tokens"
gen_ai.request.model: "llm.request_model"
gen_ai.response.model: "llm.response_model"
gen_ai.request: "flatten(llm.params)"
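The Authorization header for Langfuse is the Base64 encoding of your Langfuse public and secret keys joined with a colon. A sketch of generating the credentials (the key values below are hypothetical placeholders; substitute your own):

```shell
# Sketch: build the Basic auth credentials for the Langfuse OTLP endpoint.
# pk-lf-example and sk-lf-example are placeholder keys, not real credentials.
LANGFUSE_PUBLIC_KEY="pk-lf-example"
LANGFUSE_SECRET_KEY="sk-lf-example"
printf '%s:%s' "$LANGFUSE_PUBLIC_KEY" "$LANGFUSE_SECRET_KEY" | base64
```

Paste the output after "Basic " in the Authorization header of the ConfigMap.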
Phoenix

apiVersion: v1
kind: ConfigMap
metadata:
name: phoenix-tracing-config
data:
config.yaml: |-
config:
tracing:
otlpEndpoint: http://localhost:4317
randomSampling: true
fields:
add:
span.name: '"openai.chat"'
openinference.span.kind: '"LLM"'
llm.system: "llm.provider"
llm.input_messages: 'flatten_recursive(llm.prompt.map(c, {"message": c}))'
llm.output_messages: 'flatten_recursive(llm.completion.map(c, {"role":"assistant", "content": c}))'
llm.token_count.completion: "llm.output_tokens"
llm.token_count.prompt: "llm.input_tokens"
llm.token_count.total: "llm.total_tokens"
OpenLLMetry

apiVersion: v1
kind: ConfigMap
metadata:
name: openllmetry-tracing-config
data:
config.yaml: |-
config:
tracing:
otlpEndpoint: http://localhost:4317
randomSampling: true
fields:
add:
span.name: '"openai.chat"'
gen_ai.operation.name: '"chat"'
gen_ai.system: "llm.provider"
gen_ai.prompt: "flatten_recursive(llm.prompt)"
gen_ai.completion: 'flatten_recursive(llm.completion.map(c, {"role":"assistant", "content": c}))'
gen_ai.usage.completion_tokens: "llm.output_tokens"
gen_ai.usage.prompt_tokens: "llm.input_tokens"
gen_ai.request.model: "llm.request_model"
gen_ai.response.model: "llm.response_model"
gen_ai.request: "flatten(llm.params)"
llm.is_streaming: "llm.streaming"
Cleanup
You can remove the resources that you created in this guide.

kubectl delete gateway agentgateway -n kgateway-system
kubectl delete GatewayParameters tracing -n kgateway-system
kubectl delete configmap agent-gateway-config -n kgateway-system
helm uninstall opentelemetry-collector-traces -n telemetry
kubectl delete httproute google -n kgateway-system
kubectl delete backend google -n kgateway-system
kubectl delete secret google-secret -n kgateway-system