Azure OpenAI
Configure Azure OpenAI as an LLM provider in agentgateway.
Before you begin
Set up an agentgateway proxy.
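Later steps in this guide send requests to the gateway by using an `$INGRESS_GW_ADDRESS` environment variable. If your proxy is exposed through a load balancer, you can look up and store its address as follows. This is a minimal sketch that assumes a Gateway named `agentgateway-proxy` in the `kgateway-system` namespace, matching the examples in this guide.

```sh
# Store the gateway's external address for later requests.
export INGRESS_GW_ADDRESS=$(kubectl get gateway agentgateway-proxy -n kgateway-system \
  -o jsonpath='{.status.addresses[0].value}')
echo $INGRESS_GW_ADDRESS
```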
Set up access to Azure OpenAI
- Deploy a Microsoft Foundry Model in the Foundry portal.
- Go to the Foundry portal to access your model deployment. From the Details tab, retrieve the endpoint and key for your model deployment. Later, you use this endpoint information to configure your Azure OpenAI backend, including the base URL, the model deployment name, and the API version.

  For example, the URL `https://my-endpoint.cognitiveservices.azure.com/openai/deployments/gpt-4.1-mini/chat/completions?api-version=2025-01-01-preview` is composed of the following details:

  - `my-endpoint.cognitiveservices.azure.com` as the base URL
  - `gpt-4.1-mini` as the name of your model deployment
  - `2025-01-01-preview` as the API version
- Store the key to access your model deployment in an environment variable.

  ```sh
  export AZURE_OPENAI_KEY=<insert your model deployment key>
  ```
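  Optionally, you can verify the key against the Azure OpenAI API before you wire it into the gateway. This sketch assumes the example base URL, deployment name, and API version from the earlier step; substitute your own values. Azure OpenAI expects the key in the `api-key` header.

  ```sh
  # Call the Azure OpenAI chat completions endpoint directly to test the key.
  curl "https://my-endpoint.cognitiveservices.azure.com/openai/deployments/gpt-4.1-mini/chat/completions?api-version=2025-01-01-preview" \
    -H "content-type: application/json" \
    -H "api-key: $AZURE_OPENAI_KEY" \
    -d '{"messages": [{"role": "user", "content": "Say hello."}]}'
  ```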
- Create a Kubernetes secret to store your model deployment key.

  ```yaml
  kubectl apply -f- <<EOF
  apiVersion: v1
  kind: Secret
  metadata:
    name: azure-openai-secret
    namespace: kgateway-system
  type: Opaque
  stringData:
    Authorization: $AZURE_OPENAI_KEY
  EOF
  ```
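  To confirm that the key was stored correctly, you can decode the secret. Note that this prints the key in plain text to your terminal.

  ```sh
  # Decode the stored key and compare it against your environment variable.
  kubectl get secret azure-openai-secret -n kgateway-system \
    -o jsonpath='{.data.Authorization}' | base64 -d
  ```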
- Create a Backend resource to configure an LLM provider that references the Azure OpenAI key secret.

  ```yaml
  kubectl apply -f- <<EOF
  apiVersion: gateway.kgateway.dev/v1alpha1
  kind: Backend
  metadata:
    name: azure-openai
    namespace: kgateway-system
  spec:
    type: AI
    ai:
      llm:
        azureopenai:
          endpoint: my-endpoint.cognitiveservices.azure.com
          deploymentName: gpt-4.1-mini
          apiVersion: 2025-01-01-preview
          authToken:
            kind: SecretRef
            secretRef:
              name: azure-openai-secret
  EOF
  ```

  Review the following table to understand this configuration. For more information, see the API reference.
  | Setting | Description |
  |---|---|
  | `type` | Set to `AI` to configure this Backend for an AI provider. |
  | `ai` | Define the AI backend configuration. The example uses Azure OpenAI (`spec.ai.llm.azureopenai`). |
  | `endpoint` | The endpoint of the Azure OpenAI deployment that you created, such as `my-endpoint.cognitiveservices.azure.com`. |
  | `deploymentName` | The name of the Azure OpenAI model deployment to use. For more information, see the Azure OpenAI model docs. |
  | `apiVersion` | The version of the Azure OpenAI API to use. For more information, see the Azure OpenAI API version reference. |
  | `authToken` | Configure the authentication token for the Azure OpenAI API. The example refers to the secret that you previously created. The token is automatically sent in the `api-key` header. |
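  After you apply the Backend, you can confirm that the resource exists and review its configuration. The fully qualified resource name below avoids clashes with other `Backend` kinds in your cluster.

  ```sh
  # Confirm that the Backend resource was created with the expected spec.
  kubectl get backends.gateway.kgateway.dev azure-openai -n kgateway-system -o yaml
  ```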
- Create an HTTPRoute resource that routes incoming traffic to the Backend. The following example sets up a route on the `/azure-openai` path to the Backend that you previously created. Note that kgateway automatically rewrites the endpoint to the appropriate chat completion endpoint of the LLM provider for you, based on the LLM provider that you set up in the Backend resource.

  ```yaml
  kubectl apply -f- <<EOF
  apiVersion: gateway.networking.k8s.io/v1
  kind: HTTPRoute
  metadata:
    name: azure-openai
    namespace: kgateway-system
  spec:
    parentRefs:
    - name: agentgateway-proxy
      namespace: kgateway-system
    rules:
    - matches:
      - path:
          type: PathPrefix
          value: /azure-openai
      backendRefs:
      - name: azure-openai
        namespace: kgateway-system
        group: gateway.kgateway.dev
        kind: Backend
  EOF
  ```
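  You can check that the gateway accepted the route by inspecting its status. This is a standard Gateway API check; look for an `Accepted` condition under `status.parents`.

  ```sh
  # Inspect the route status to verify that the gateway accepted it.
  kubectl get httproute azure-openai -n kgateway-system -o yaml
  ```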
- Send a request to the LLM provider API. Verify that the request succeeds and that you get back a response from the chat completion API.

  If your gateway is exposed through a load balancer:

  ```sh
  curl "$INGRESS_GW_ADDRESS/azure-openai" -H content-type:application/json -d '{
    "model": "",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Write a short haiku about cloud computing."
      }
    ]
  }' | jq
  ```

  If you port-forward the gateway for local testing:

  ```sh
  curl "localhost:8080/azure-openai" -H content-type:application/json -d '{
    "model": "",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Write a short haiku about cloud computing."
      }
    ]
  }' | jq
  ```

  Example output:
{ "id": "chatcmpl-9A8B7C6D5E4F3G2H1", "object": "chat.completion", "created": 1727967462, "model": "gpt-4o-mini", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Floating servers bright,\nData streams through endless sky,\nClouds hold all we need." }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 28, "completion_tokens": 19, "total_tokens": 47 } }
Next steps
- Explore other guides for LLM consumption, such as function calling, model failover, and prompt guards.