Chat Completions
POST /api/v1/chat/completions — the OpenAI-compatible chat completions endpoint. Send messages, get a response. Supports every model in the AnyRouter catalog.
Send a sequence of messages to a model and receive a completion. This is the primary AnyRouter endpoint and is a drop-in replacement for OpenAI's /v1/chat/completions.
```text
POST https://anyrouter.dev/api/v1/chat/completions
```

Request body
| Field | Type | Required | Description |
|---|---|---|---|
| model | string | yes | Provider/model id (e.g. openai/gpt-4-turbo). See List models. |
| messages | array | yes | Ordered conversation. Each message has role (system / user / assistant / tool) and content. |
| temperature | number | no | 0–2. Lower is more deterministic. Default: 1. |
| top_p | number | no | 0–1. Nucleus sampling. Default: 1. |
| max_tokens | integer | no | Upper bound on output tokens. |
| stream | boolean | no | If true, returns a streaming SSE response. |
| stop | string \| string[] | no | Stop sequences. |
| tools | array | no | Function-calling / tool-use spec. |
| tool_choice | string \| object | no | Force a specific tool, or auto. |
| response_format | object | no | { "type": "json_object" } for guaranteed JSON. |
| seed | integer | no | Deterministic sampling seed (provider-dependent). |
| user | string | no | Stable end-user identifier for abuse monitoring. |
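The request body is plain JSON, so it can be typed and assembled directly. The sketch below mirrors the field names and types from the table above; the model id, messages, and the ChatMessage/ChatCompletionRequest type names are illustrative, not part of any official SDK:

```typescript
// Minimal typed request body for POST /api/v1/chat/completions.
// Field names follow the table above; values are illustrative.
type ChatMessage = {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
};

interface ChatCompletionRequest {
  model: string;
  messages: ChatMessage[];
  temperature?: number; // 0–2, default 1
  top_p?: number; // 0–1, default 1
  max_tokens?: number;
  stream?: boolean;
  stop?: string | string[];
  response_format?: { type: "json_object" };
  seed?: number;
  user?: string;
}

const body: ChatCompletionRequest = {
  model: "anthropic/claude-sonnet-4.6",
  messages: [
    { role: "system", content: "You are a concise assistant." },
    { role: "user", content: "Explain NAT in one sentence." },
  ],
  max_tokens: 200,
};
```

The object serializes with JSON.stringify and can be sent as the body of a fetch() POST with the Authorization and Content-Type headers shown in the example below.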
Example
```bash
curl https://anyrouter.dev/api/v1/chat/completions \
  -H "Authorization: Bearer ar-your-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-sonnet-4.6",
    "messages": [
      {"role": "system", "content": "You are a concise assistant."},
      {"role": "user", "content": "Explain NAT in one sentence."}
    ],
    "max_tokens": 200
  }'
```

Response
```json
{
  "id": "chatcmpl-8yWq4JqLfEjJ9L",
  "object": "chat.completion",
  "created": 1760000000,
  "model": "anthropic/claude-sonnet-4.6",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "NAT maps many private IP addresses to one public IP so devices behind a router can share a single internet connection."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 28,
    "completion_tokens": 30,
    "total_tokens": 58
  }
}
```

Finish reasons
| Value | Meaning |
|---|---|
| stop | Model completed naturally or hit a stop sequence. |
| length | Hit the max_tokens limit. |
| tool_calls | Model invoked one or more tools; the caller must run them and resume. |
| content_filter | Upstream safety filter triggered. |
| error | Upstream provider error; see the error field on the response. |
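A non-streaming caller typically reads choices[0].message.content and branches on finish_reason. A minimal sketch, assuming the response shape from the example above (the handleCompletion function and its narrow Choice type are illustrative):

```typescript
// Handle a parsed chat.completion response. The types cover only the
// fields used here; the full response also carries id, created, usage, etc.
interface Choice {
  index: number;
  message: { role: string; content: string | null };
  finish_reason: "stop" | "length" | "tool_calls" | "content_filter" | "error";
}

function handleCompletion(response: { choices: Choice[] }): string {
  const choice = response.choices[0];
  switch (choice.finish_reason) {
    case "stop":
      return choice.message.content ?? "";
    case "length":
      // Output was cut off at max_tokens; consider raising the limit.
      return (choice.message.content ?? "") + " [truncated]";
    case "tool_calls":
      throw new Error("Model requested tool calls; run them and resume.");
    case "content_filter":
      throw new Error("Upstream safety filter triggered.");
    case "error":
      throw new Error("Upstream provider error; see the error field.");
    default:
      throw new Error("Unknown finish_reason");
  }
}
```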
Tool use
Pass a tools array with function schemas, then handle any returned tool_calls:
```typescript
import OpenAI from "openai";

// AnyRouter is OpenAI-compatible, so the official SDK works
// when pointed at the AnyRouter base URL.
const client = new OpenAI({
  apiKey: "ar-your-key",
  baseURL: "https://anyrouter.dev/api/v1",
});

const response = await client.chat.completions.create({
  model: "openai/gpt-4-turbo",
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get the current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
});
```

See Tool use for a deeper walkthrough.
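When the model responds with finish_reason "tool_calls", each entry in message.tool_calls carries the function name and a JSON-encoded arguments string. The sketch below shows one way to execute those calls and build the role:"tool" messages to send back; getWeather is a hypothetical local implementation, and the ToolCall type mirrors the OpenAI-compatible shape:

```typescript
// Execute returned tool calls and build the follow-up tool messages.
// getWeather is a hypothetical stand-in for a real weather lookup.
type ToolCall = {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
};

function getWeather(city: string): string {
  return `Sunny in ${city}`; // placeholder; call a real weather API here
}

function runToolCalls(toolCalls: ToolCall[]) {
  return toolCalls.map((call) => {
    // Arguments arrive as a JSON string and must be parsed.
    const args = JSON.parse(call.function.arguments);
    const result =
      call.function.name === "get_weather"
        ? getWeather(args.city)
        : `Unknown tool: ${call.function.name}`;
    // Each result goes back as a role:"tool" message keyed by tool_call_id.
    return { role: "tool" as const, tool_call_id: call.id, content: result };
  });
}
```

The returned messages are appended to the original conversation (after the assistant message that contained the tool calls) and the completion request is repeated so the model can produce its final answer.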