POST https://console.mixroute.io/v1beta/models/{model}:generateContent
Request Example:

curl --request POST \
  --url 'https://console.mixroute.io/v1beta/models/gemini-2.5-pro:generateContent' \
  --header 'Authorization: Bearer sk-xxxxxxxxxx' \
  --header 'Content-Type: application/json' \
  --data '{
    "contents": [
      {"role": "user", "parts": [{"text": "Explain artificial intelligence in one sentence"}]}
    ],
    "generationConfig": {
      "temperature": 0.7,
      "maxOutputTokens": 1024
    }
  }'
Response Example:

{
  "candidates": [
    {
      "content": {
        "parts": [{"text": "Artificial intelligence is a discipline that studies how to make computers simulate and implement human intelligence."}],
        "role": "model"
      },
      "finishReason": "STOP",
      "index": 0,
      "safetyRatings": []
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 10,
    "candidatesTokenCount": 20,
    "totalTokenCount": 30
  },
  "modelVersion": "gemini-2.5-pro"
}

Introduction

The Gemini Native API uses Google Gemini’s request and response format and is suited to Google’s official clients (such as the google-generativeai SDK) or to scenarios that require Gemini data structures directly. If you are using an OpenAI-compatible client (such as the OpenAI SDK), use the /v1/chat/completions endpoint instead.

Differences from OpenAI Format

| Feature | Gemini Native | OpenAI Compatible |
| --- | --- | --- |
| Message Structure | contents[].parts[] | messages[].content |
| Role Names | user / model | user / assistant |
| Streaming Parameter | URL param ?alt=sse | Body param stream: true |
| System Prompt | systemInstruction | messages[0].role: "system" |
| Multimodal | Mixed parts[] array | Mixed content[] array |
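The message-structure and role mappings in the table can be sketched as a small conversion helper. A minimal Python sketch; the function name to_gemini_contents is illustrative, and the mapping rules are the ones from the table above:

```python
# Map OpenAI-style chat messages to Gemini-native "contents".
# Rules (from the table above): role "assistant" becomes "model",
# and each string content becomes a parts[] array of text parts.
def to_gemini_contents(messages):
    role_map = {"assistant": "model", "user": "user"}
    contents = []
    for msg in messages:
        contents.append({
            "role": role_map[msg["role"]],
            "parts": [{"text": msg["content"]}],
        })
    return contents

openai_messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi! How can I help?"},
]
print(to_gemini_contents(openai_messages))
```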

API Endpoints

| Function | Method | Path |
| --- | --- | --- |
| Text Generation (Non-streaming) | POST | /v1beta/models/{model}:generateContent |
| Text Generation (Streaming) | POST | /v1beta/models/{model}:streamGenerateContent?alt=sse |
| Single Embedding | POST | /v1beta/models/{model}:embedContent |
| Batch Embedding | POST | /v1beta/models/{model}:batchEmbedContents |
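All endpoint URLs follow the same model:action template, so they can be assembled programmatically. A minimal Python sketch; the endpoint helper is an illustrative name:

```python
# Build a full endpoint URL from the path templates in the table above.
BASE_URL = "https://console.mixroute.io/v1beta"

def endpoint(model, action, stream=False):
    url = f"{BASE_URL}/models/{model}:{action}"
    if stream:
        url += "?alt=sse"  # streaming is selected via a URL parameter
    return url

print(endpoint("gemini-2.5-pro", "generateContent"))
print(endpoint("gemini-2.5-pro", "streamGenerateContent", stream=True))
```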

Authentication

Two authentication methods are supported:
| Method | Header | Example |
| --- | --- | --- |
| Bearer Token (Recommended) | Authorization | Bearer sk-xxxxxxxxxx |
| Google Style | x-goog-api-key | sk-xxxxxxxxxx |
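Either header can be produced by the same small helper. A minimal Python sketch; auth_headers is an illustrative name:

```python
def auth_headers(api_key, google_style=False):
    # Bearer token (recommended) or Google-style x-goog-api-key header,
    # as listed in the table above.
    if google_style:
        return {"x-goog-api-key": api_key}
    return {"Authorization": f"Bearer {api_key}"}

print(auth_headers("sk-xxxxxxxxxx"))
print(auth_headers("sk-xxxxxxxxxx", google_style=True))
```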

Request Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| contents | array | Yes | Conversation content array |
| generationConfig | object | No | Generation configuration |
| safetySettings | array | No | Safety filter settings |
| systemInstruction | object | No | System instruction |
| tools | array | No | Tool definitions (function calling, search, etc.) |
| cachedContent | string | No | Cached content name |

generationConfig Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| temperature | number | Randomness (0-2) |
| topP | number | Nucleus sampling (0-1) |
| topK | integer | Top-K sampling |
| maxOutputTokens | integer | Maximum output tokens |
| stopSequences | array | Stop sequences |
| candidateCount | integer | Number of candidate responses |
| thinkingConfig | object | Thinking mode configuration |
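A small helper can assemble a generationConfig object while enforcing the documented ranges. An illustrative Python sketch; the validation mirrors the table above:

```python
def generation_config(temperature=None, top_p=None,
                      max_output_tokens=None, stop_sequences=None):
    # Assemble a generationConfig dict, validating documented ranges.
    config = {}
    if temperature is not None:
        if not 0 <= temperature <= 2:
            raise ValueError("temperature must be in [0, 2]")
        config["temperature"] = temperature
    if top_p is not None:
        if not 0 <= top_p <= 1:
            raise ValueError("topP must be in [0, 1]")
        config["topP"] = top_p
    if max_output_tokens is not None:
        config["maxOutputTokens"] = max_output_tokens
    if stop_sequences:
        config["stopSequences"] = stop_sequences
    return config

print(generation_config(temperature=0.7, max_output_tokens=1024))
```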

Basic Examples

curl -X POST "https://console.mixroute.io/v1beta/models/gemini-2.5-pro:generateContent" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxxxxxxxx" \
  -d '{
    "contents": [
      {"role": "user", "parts": [{"text": "Explain artificial intelligence in one sentence"}]}
    ],
    "generationConfig": {
      "temperature": 0.7,
      "maxOutputTokens": 1024
    }
  }'
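The same request can be expressed with the Python standard library alone (no SDK). The placeholder key must be replaced before sending; the final urlopen line is left commented out so the snippet only builds the request:

```python
import json
import urllib.request

# The curl request above, built with urllib from the standard library.
body = {
    "contents": [
        {"role": "user",
         "parts": [{"text": "Explain artificial intelligence in one sentence"}]}
    ],
    "generationConfig": {"temperature": 0.7, "maxOutputTokens": 1024},
}
req = urllib.request.Request(
    "https://console.mixroute.io/v1beta/models/gemini-2.5-pro:generateContent",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-xxxxxxxxxx",  # replace with a real key
    },
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment to actually send
```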

Advanced Features

Thinking Mode

Gemini 2.5 Pro and Gemini 3 Pro support thinking mode, which lets the model perform deep reasoning before answering.

Gemini 2.5 Pro - Using thinkingBudget:
{
  "contents": [{"role": "user", "parts": [{"text": "Solve this geometry problem step by step"}]}],
  "generationConfig": {
    "maxOutputTokens": 16384,
    "thinkingConfig": {
      "includeThoughts": true,
      "thinkingBudget": 8192
    }
  }
}
Gemini 3 Pro - Using thinkingLevel:
{
  "contents": [{"role": "user", "parts": [{"text": "Explain the principles of quantum entanglement"}]}],
  "generationConfig": {
    "maxOutputTokens": 16384,
    "thinkingConfig": {
      "includeThoughts": true,
      "thinkingLevel": "MEDIUM"
    }
  }
}
| Parameter | Applicable Model | Options |
| --- | --- | --- |
| thinkingBudget | Gemini 2.5 Pro | 1-24576 (token count) |
| thinkingLevel | Gemini 3 Pro | LOW / MEDIUM / HIGH |
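Because the two thinking parameters apply to different models, a helper can build the appropriate thinkingConfig. An illustrative Python sketch; the ranges and levels are the ones from the table above:

```python
def thinking_config(include_thoughts=True, budget=None, level=None):
    # Gemini 2.5 Pro takes thinkingBudget (a token count, 1-24576);
    # Gemini 3 Pro takes thinkingLevel (LOW / MEDIUM / HIGH).
    config = {"includeThoughts": include_thoughts}
    if budget is not None:
        if not 1 <= budget <= 24576:
            raise ValueError("thinkingBudget must be in [1, 24576]")
        config["thinkingBudget"] = budget
    if level is not None:
        if level not in ("LOW", "MEDIUM", "HIGH"):
            raise ValueError("thinkingLevel must be LOW, MEDIUM, or HIGH")
        config["thinkingLevel"] = level
    return config

print(thinking_config(budget=8192))    # Gemini 2.5 Pro style
print(thinking_config(level="MEDIUM")) # Gemini 3 Pro style
```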

Embedding API

Single Embedding

curl -X POST "https://console.mixroute.io/v1beta/models/text-embedding-004:embedContent" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxxxxxxxx" \
  -d '{
    "content": {
      "parts": [{"text": "This is text to be vectorized"}]
    }
  }'
Response Example:
{
  "embedding": {
    "values": [0.0123, -0.0456, 0.0789, ...]
  }
}

Batch Embedding

curl -X POST "https://console.mixroute.io/v1beta/models/text-embedding-004:batchEmbedContents" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxxxxxxxx" \
  -d '{
    "requests": [
      {
        "model": "models/text-embedding-004",
        "content": {"parts": [{"text": "First text"}]}
      },
      {
        "model": "models/text-embedding-004",
        "content": {"parts": [{"text": "Second text"}]}
      }
    ]
  }'
Response Example:
{
  "embeddings": [
    {"values": [0.0123, -0.0456, ...]},
    {"values": [0.0234, -0.0567, ...]}
  ]
}
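A common next step with batch embeddings is comparing the returned vectors, for example with cosine similarity. A self-contained Python sketch using shortened dummy vectors in the response shape shown above:

```python
import math

# A batchEmbedContents-shaped response (values shortened for illustration).
response = {
    "embeddings": [
        {"values": [0.0123, -0.0456, 0.0789]},
        {"values": [0.0234, -0.0567, 0.0891]},
    ]
}

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

vecs = [e["values"] for e in response["embeddings"]]
print(cosine(vecs[0], vecs[1]))
```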

Response Format

{
  "candidates": [
    {
      "content": {
        "parts": [{"text": "Response text"}],
        "role": "model"
      },
      "finishReason": "STOP",
      "safetyRatings": [
        {
          "category": "HARM_CATEGORY_HARASSMENT",
          "probability": "NEGLIGIBLE"
        }
      ]
    }
  ],
  "usageMetadata": {
    "promptTokenCount": 10,
    "candidatesTokenCount": 20,
    "totalTokenCount": 30
  }
}
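Extracting the reply text from this structure takes one small accessor. A Python sketch over the response shape above; extract_text is an illustrative name:

```python
def extract_text(response):
    # Concatenate the text parts of the first candidate.
    parts = response["candidates"][0]["content"]["parts"]
    return "".join(p.get("text", "") for p in parts)

response = {
    "candidates": [
        {"content": {"parts": [{"text": "Response text"}], "role": "model"},
         "finishReason": "STOP"}
    ],
    "usageMetadata": {"promptTokenCount": 10, "candidatesTokenCount": 20,
                      "totalTokenCount": 30},
}
print(extract_text(response))  # Response text
```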

Error Handling

| HTTP Status | Error Type | Description |
| --- | --- | --- |
| 400 | INVALID_ARGUMENT | Invalid request parameter |
| 401 | UNAUTHENTICATED | Invalid or missing API key |
| 403 | PERMISSION_DENIED | No access to this model |
| 404 | NOT_FOUND | Model not found |
| 429 | RESOURCE_EXHAUSTED | Rate limit exceeded |
| 500 | INTERNAL | Internal server error |
Error Response Example:
{
  "error": {
    "code": 400,
    "message": "Invalid value at 'contents[0].parts[0]'",
    "status": "INVALID_ARGUMENT"
  }
}
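Clients typically retry only the transient statuses (429 and 5xx) and surface the rest to the caller. An illustrative Python sketch based on the table and error shape above:

```python
def should_retry(status_code):
    # Retry only transient failures: rate limiting and server errors.
    return status_code == 429 or status_code >= 500

def parse_error(payload):
    # Pull the fields out of a Gemini-style error body.
    err = payload.get("error", {})
    return err.get("code"), err.get("status"), err.get("message")

payload = {
    "error": {
        "code": 400,
        "message": "Invalid value at 'contents[0].parts[0]'",
        "status": "INVALID_ARGUMENT",
    }
}
print(parse_error(payload))
```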

Comparison with OpenAI Format

| Feature | Gemini Native | OpenAI Compatible |
| --- | --- | --- |
| Base URL | https://console.mixroute.io/v1beta | https://console.mixroute.io/v1 |
| Message Structure | contents[].parts[] | messages[].content |
| Role Names | user / model | user / assistant |
| System Prompt | systemInstruction | messages[0].role: "system" |
| Streaming Request | URL param ?alt=sse | Body param stream: true |
| Temperature Range | 0-2 | 0-2 |
| Function Calling | tools[].functionDeclarations | tools[].function |
| Search Grounding | tools[].googleSearch | Not supported |
| Thinking Mode | thinkingConfig | Not supported |
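The differences in the table can be combined into one request converter. An illustrative Python sketch covering text-only requests; field names follow the table above:

```python
def to_gemini_request(openai_request):
    # Convert an OpenAI-style chat request body into a Gemini-native body:
    # system messages become systemInstruction, "assistant" becomes "model",
    # and sampling parameters move under generationConfig.
    body = {"contents": []}
    for msg in openai_request["messages"]:
        if msg["role"] == "system":
            body["systemInstruction"] = {"parts": [{"text": msg["content"]}]}
            continue
        role = "model" if msg["role"] == "assistant" else "user"
        body["contents"].append({"role": role,
                                 "parts": [{"text": msg["content"]}]})
    config = {}
    if "temperature" in openai_request:
        config["temperature"] = openai_request["temperature"]
    if "max_tokens" in openai_request:
        config["maxOutputTokens"] = openai_request["max_tokens"]
    if config:
        body["generationConfig"] = config
    return body

print(to_gemini_request({
    "messages": [
        {"role": "system", "content": "Be concise."},
        {"role": "user", "content": "Hello"},
    ],
    "temperature": 0.5,
}))
```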