Cloud LLM providers
Set up cloud LLM providers with AI Gateway.
Before you begin
- Set up AI Gateway.
- Choose a supported LLM provider.
Supported LLM providers
The examples throughout the AI Gateway docs use OpenAI as the LLM provider, but you can use other providers that are supported by AI Gateway.
Cloud providers
Kgateway supports the following AI cloud providers:
- Anthropic
- Azure OpenAI
- Gemini
- OpenAI. You can also use the `openai` provider support for LLM providers that use the OpenAI API, such as DeepSeek and Mistral.
- Vertex AI
Local providers
You can use kgateway with a local LLM provider, such as the following common options:
- Ollama for local LLM development, as shown in the example after this list.
- Gateway API Inference Extension project to route requests to local LLM workloads that run in your cluster.
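For example, a minimal local Ollama setup might look like the following. This is a sketch, not part of kgateway itself: the model name is illustrative, and it assumes Ollama is installed and serving its default API on localhost:11434.

```sh
# Download a local model (model name is illustrative).
ollama pull llama3

# Ollama exposes an HTTP API on localhost:11434 by default.
curl localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Explain how AI works in a few words"
}'
```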
OpenAI
OpenAI is the most common LLM provider, and the examples throughout the AI Gateway docs use OpenAI. You can adapt these examples to your own provider, especially ones that use the OpenAI API, such as DeepSeek and Mistral.
To set up OpenAI, continue with the Authenticate to the LLM guide.
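For reference, an OpenAI Backend follows the same pattern as the Gemini walkthrough below. The following is a minimal sketch, assuming a secret named `openai-secret` in the same namespace that holds your OpenAI API key; the model name is illustrative.

```yaml
kubectl apply -f - <<EOF
apiVersion: gateway.kgateway.dev/v1alpha1
kind: Backend
metadata:
  labels:
    app: ai-kgateway
  name: openai
  namespace: kgateway-system
spec:
  ai:
    llm:
      provider:
        openai:
          authToken:
            kind: SecretRef
            secretRef:
              name: openai-secret # assumed secret that holds your OpenAI API key
          model: gpt-4o # illustrative model name
  type: AI
EOF
```

Because DeepSeek and Mistral expose OpenAI-compatible APIs, the same `openai` provider block is a starting point for those providers as well.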
Gemini
1. Save your Gemini API key as an environment variable. To retrieve your API key, log in to Google AI Studio and select API Keys.

   ```sh
   export GOOGLE_KEY=<your-api-key>
   ```
2. Create a secret to authenticate to Google. For other ways to authenticate, see the Auth guide.

   ```yaml
   kubectl apply -f - <<EOF
   apiVersion: v1
   kind: Secret
   metadata:
     name: google-secret
     namespace: kgateway-system
     labels:
       app: ai-kgateway
   type: Opaque
   stringData:
     Authorization: $GOOGLE_KEY
   EOF
   ```
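   As an optional check (not part of the setup itself), you can confirm that the secret exists before you reference it:

   ```sh
   kubectl get secret google-secret -n kgateway-system
   ```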
3. Create a Backend resource to define the Gemini destination.

   ```yaml
   kubectl apply -f - <<EOF
   apiVersion: gateway.kgateway.dev/v1alpha1
   kind: Backend
   metadata:
     labels:
       app: ai-kgateway
     name: google
     namespace: kgateway-system
   spec:
     ai:
       llm:
         provider:
           gemini:
             apiVersion: v1beta
             authToken:
               kind: SecretRef
               secretRef:
                 name: google-secret
             model: gemini-1.5-flash-latest
     type: AI
   EOF
   ```
   | Setting | Description |
   |---------|-------------|
   | `gemini` | The Gemini AI provider. |
   | `apiVersion` | The API version of Gemini that is compatible with the model that you plan to use. In this example, you must use `v1beta` because the `gemini-1.5-flash-latest` model is not compatible with the `v1` API version. For more information, see the Google AI docs. |
   | `authToken` | The authentication token to use to authenticate to the LLM provider. The example refers to the secret that you created in the previous step. |
   | `model` | The model to use to generate responses. In this example, you use the `gemini-1.5-flash-latest` model. For more models, see the Google AI docs. |
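   Optionally, you can confirm that the Backend resource was created. This check is a sketch on our part, and assumes the Backend CRD is served by your cluster:

   ```sh
   kubectl get backend google -n kgateway-system
   ```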
4. Create an HTTPRoute resource to route requests to the Gemini backend. Note that kgateway automatically rewrites the endpoint that you set up (such as `/gemini`) to the appropriate chat completion endpoint of the LLM provider, based on the provider that you configured in the Backend resource.

   ```yaml
   kubectl apply -f - <<EOF
   apiVersion: gateway.networking.k8s.io/v1
   kind: HTTPRoute
   metadata:
     name: google
     namespace: kgateway-system
     labels:
       app: ai-kgateway
   spec:
     parentRefs:
       - name: ai-gateway
         namespace: kgateway-system
     rules:
       - matches:
           - path:
               type: PathPrefix
               value: /gemini
         backendRefs:
           - name: google
             namespace: kgateway-system
             group: gateway.kgateway.dev
             kind: Backend
   EOF
   ```
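   To verify that the route was attached to the Gateway, you can inspect the HTTPRoute status for an Accepted condition (an optional check):

   ```sh
   kubectl get httproute google -n kgateway-system -o yaml
   ```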
5. Send a request to the LLM provider API. Verify that the request succeeds and that you get back a response from the chat completion API.
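   If you did not yet save the Gateway address, the following sketch shows a typical lookup. It assumes a Gateway named `ai-gateway` in the `kgateway-system` namespace, matching the parentRefs in the HTTPRoute, and that the address is published in the Gateway status:

   ```sh
   # Hypothetical lookup; adjust the Gateway name and namespace to your setup.
   export INGRESS_GW_ADDRESS=$(kubectl get gateway ai-gateway -n kgateway-system -o jsonpath='{.status.addresses[0].value}')
   echo $INGRESS_GW_ADDRESS
   ```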
curl "$INGRESS_GW_ADDRESS:8080/gemini" -H content-type:application/json -d '{ "contents": [ { "parts": [ { "text": "Explain how AI works in a few words" } ] } ] }' | jq
curl "localhost:8080/gemini" -H content-type:application/json -d '{ "contents": [ { "parts": [ { "text": "Explain how AI works in a few words" } ] } ] }' | jq
   Example output:

   ```json
   {
     "candidates": [
       {
         "content": {
           "parts": [
             {
               "text": "Learning patterns from data to make predictions.\n"
             }
           ],
           "role": "model"
         },
         "finishReason": "STOP",
         "avgLogprobs": -0.017732446392377216
       }
     ],
     "usageMetadata": {
       "promptTokenCount": 8,
       "candidatesTokenCount": 9,
       "totalTokenCount": 17,
       "promptTokensDetails": [
         {
           "modality": "TEXT",
           "tokenCount": 8
         }
       ],
       "candidatesTokensDetails": [
         {
           "modality": "TEXT",
           "tokenCount": 9
         }
       ]
     },
     "modelVersion": "gemini-1.5-flash-latest",
     "responseId": "UxQ6aM_sKbjFnvgPocrJaA"
   }
   ```
Next
Now that you can send requests to an LLM provider, explore the other AI Gateway features.