# Calling the API with the Google Gemini Native Format

This interface is intended for clients that speak the Gemini native protocol (such as the google-generativeai SDK) or for scenarios that require direct use of Gemini data structures.
If you are using an OpenAI-compatible client (such as the OpenAI SDK), please use the /v1/chat/completions endpoint instead.
## Key Differences at a Glance

| Feature | Gemini Native | OpenAI Compatible |
|---|---|---|
| Message Structure | contents[].parts[] | messages[].content |
| Role Names | user / model | user / assistant |
| Streaming Parameter | URL param ?alt=sse | Body param stream: true |
| System Prompt | systemInstruction | messages[0].role: "system" |
| Multimodal | Mixed parts[] array | Mixed content[] array |
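The differences above are easiest to see side by side. The sketch below expresses the same short conversation in both formats; the model name and message text are illustrative placeholders:

```python
import json

# Gemini native format: contents[].parts[], model replies use role "model",
# and the system prompt lives in a separate systemInstruction field.
gemini_request = {
    "contents": [
        {"role": "user", "parts": [{"text": "Hello"}]},
        {"role": "model", "parts": [{"text": "Hi! How can I help?"}]},
        {"role": "user", "parts": [{"text": "Summarize SSE in one line."}]},
    ],
    "systemInstruction": {"parts": [{"text": "Be concise."}]},
}

# The equivalent OpenAI-compatible payload: messages[].content, role
# "assistant", system prompt as the first message, streaming in the body.
openai_request = {
    "model": "gemini-2.5-pro",  # placeholder model name
    "messages": [
        {"role": "system", "content": "Be concise."},
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi! How can I help?"},
        {"role": "user", "content": "Summarize SSE in one line."},
    ],
    "stream": True,  # streaming is a body flag here, not a URL parameter
}

print(json.dumps(gemini_request, indent=2))
```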
## Endpoints

| Function | Method | Path |
|---|---|---|
| Text Generation (Non-streaming) | POST | /v1beta/models/{model}:generateContent |
| Text Generation (Streaming) | POST | /v1beta/models/{model}:streamGenerateContent?alt=sse |
| Single Embedding | POST | /v1beta/models/{model}:embedContent |
| Batch Embedding | POST | /v1beta/models/{model}:batchEmbedContents |
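The paths above are relative to the Gemini-native base URL. A minimal sketch of assembling the full request URLs (the model name is a placeholder):

```python
BASE_URL = "https://console.mixroute.io/v1beta"
model = "gemini-2.5-pro"  # placeholder model name

generate_url = f"{BASE_URL}/models/{model}:generateContent"
stream_url = f"{BASE_URL}/models/{model}:streamGenerateContent?alt=sse"
embed_url = f"{BASE_URL}/models/{model}:embedContent"
batch_embed_url = f"{BASE_URL}/models/{model}:batchEmbedContents"
```

Note the `:method` suffix after the model name and the `?alt=sse` query parameter that selects server-sent events for streaming.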
## Authentication

| Method | Header | Example |
|---|---|---|
| Bearer Token (Recommended) | Authorization | Bearer sk-xxxxxxxxxx |
| Google Style | x-goog-api-key | sk-xxxxxxxxxx |
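Putting the endpoint and authentication together, here is a sketch of a non-streaming generateContent request using only the standard library. The request is built but not sent; uncomment the last lines to perform the call:

```python
import json
import urllib.request

API_KEY = "sk-xxxxxxxxxx"  # your key; either header style below works
url = "https://console.mixroute.io/v1beta/models/gemini-2.5-pro:generateContent"

body = json.dumps({
    "contents": [{"role": "user", "parts": [{"text": "Hello"}]}]
}).encode("utf-8")

req = urllib.request.Request(
    url,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",  # recommended style
        # "x-goog-api-key": API_KEY,           # Google-style alternative
    },
    method="POST",
)

# To actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```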
## Request Body Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| contents | array | Yes | Conversation content array |
| generationConfig | object | No | Generation configuration |
| safetySettings | array | No | Safety filter settings |
| systemInstruction | object | No | System instruction |
| tools | array | No | Tool definitions (function calling, search, etc.) |
| cachedContent | string | No | Cached content name |
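A request body combining the required field with a few of the optional ones. All values are illustrative; only `contents` is mandatory:

```python
import json

request_body = {
    "contents": [
        {"role": "user", "parts": [{"text": "Write a haiku about autumn."}]}
    ],
    "systemInstruction": {"parts": [{"text": "You are a poet."}]},
    "generationConfig": {
        "temperature": 0.7,
        "maxOutputTokens": 256,
    },
    # Safety category/threshold values shown here are illustrative examples.
    "safetySettings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"}
    ],
}
payload = json.dumps(request_body)
```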
## generationConfig Fields

| Parameter | Type | Description |
|---|---|---|
| temperature | number | Randomness (0-2) |
| topP | number | Nucleus sampling (0-1) |
| topK | integer | Top-K sampling |
| maxOutputTokens | integer | Maximum output tokens |
| stopSequences | array | Stop sequences |
| candidateCount | integer | Number of candidate responses |
| thinkingConfig | object | Thinking mode configuration |
thinkingConfig takes one of two fields, depending on the model:

| Parameter | Applicable Model | Options |
|---|---|---|
| thinkingBudget | Gemini 2.5 Pro | 1-24576 (token count) |
| thinkingLevel | Gemini 3 Pro | LOW / MEDIUM / HIGH |
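A sketch of the two thinkingConfig variants as generationConfig fragments, assuming thinkingConfig nests inside generationConfig as the table above indicates; the budget value is illustrative:

```python
# Gemini 2.5 Pro: reserve a token budget for internal reasoning.
config_25_pro = {
    "generationConfig": {
        "thinkingConfig": {"thinkingBudget": 8192}  # 1-24576 tokens
    }
}

# Gemini 3 Pro: pick a discrete thinking level instead of a budget.
config_3_pro = {
    "generationConfig": {
        "thinkingConfig": {"thinkingLevel": "HIGH"}  # LOW / MEDIUM / HIGH
    }
}
```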
## Error Codes

| HTTP Status | Error Type | Description |
|---|---|---|
| 400 | INVALID_ARGUMENT | Invalid request parameter |
| 401 | UNAUTHENTICATED | Invalid or missing API key |
| 403 | PERMISSION_DENIED | No access to this model |
| 404 | NOT_FOUND | Model not found |
| 429 | RESOURCE_EXHAUSTED | Rate limit exceeded |
| 500 | INTERNAL | Internal server error |
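Of the statuses above, only 429 (rate limit) and 500 (server error) are sensibly retried; the 4xx errors indicate a problem with the request or key. A common retry-policy sketch (the backoff schedule is one reasonable choice, not a requirement of the API):

```python
# Transient statuses from the error table that are worth retrying.
RETRYABLE = {429, 500}

def should_retry(status: int, attempt: int, max_attempts: int = 3) -> bool:
    """Retry only transient errors, up to max_attempts tries."""
    return status in RETRYABLE and attempt < max_attempts

def backoff_seconds(attempt: int) -> float:
    """Exponential backoff: 1s, 2s, 4s, ... before attempt N's retry."""
    return 2.0 ** attempt
```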
## Full Comparison with the OpenAI-Compatible Format

| Feature | Gemini Native | OpenAI Compatible |
|---|---|---|
| Base URL | https://console.mixroute.io/v1beta | https://console.mixroute.io/v1 |
| Message Structure | contents[].parts[] | messages[].content |
| Role Names | user / model | user / assistant |
| System Prompt | systemInstruction | messages[0].role: "system" |
| Streaming Request | URL param ?alt=sse | Body param stream: true |
| Temperature Range | 0-2 | 0-2 |
| Function Calling | tools[].functionDeclarations | tools[].function |
| Search Grounding | tools[].googleSearch | Not supported |
| Thinking Mode | thinkingConfig | Not supported |
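The mapping in this table is mechanical enough to automate for plain text conversations. A minimal sketch of converting OpenAI-style messages into a Gemini native body (tool calls and multimodal content are deliberately not handled here):

```python
def openai_to_gemini(messages: list[dict]) -> dict:
    """Convert OpenAI-style text messages to a Gemini native request body.

    Per the table above: the system message becomes systemInstruction,
    role "assistant" becomes "model", and each content string becomes
    a parts[] entry.
    """
    body: dict = {"contents": []}
    for msg in messages:
        if msg["role"] == "system":
            body["systemInstruction"] = {"parts": [{"text": msg["content"]}]}
            continue
        role = "model" if msg["role"] == "assistant" else "user"
        body["contents"].append({"role": role, "parts": [{"text": msg["content"]}]})
    return body

example = openai_to_gemini([
    {"role": "system", "content": "Be brief."},
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
])
```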