Tool Use

Define tools, let the model call them, and feed results back. Build agents, function-calling workflows, and structured outputs across every model.

Tool use (also known as function calling) lets a model invoke named functions you define. The model decides which tool to call and what arguments to pass; your code runs the tool and sends the result back.

AnyRouter normalizes tool use to the OpenAI schema. Every model in the catalog that supports tools is callable through the same request shape — swap the model id and keep your code unchanged.

Defining tools

```typescript
const tools = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Return the current weather for a city.",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string", description: "City name" },
          unit: { type: "string", enum: ["celsius", "fahrenheit"] },
        },
        required: ["city"],
      },
    },
  },
]
```

Request

```typescript
const response = await client.chat.completions.create({
  model: "openai/gpt-4-turbo",
  messages: [{ role: "user", content: "What's the weather in Tokyo?" }],
  tools,
  tool_choice: "auto",
})
```

Handling tool calls

When the model decides to call a tool, the response contains tool_calls instead of plain content:

```json
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_01ABC",
            "type": "function",
            "function": {
              "name": "get_weather",
              "arguments": "{\"city\":\"Tokyo\",\"unit\":\"celsius\"}"
            }
          }
        ]
      },
      "finish_reason": "tool_calls"
    }
  ]
}
```

Run the tool, then send the result back as a tool role message:

```typescript
const toolCall = response.choices[0].message.tool_calls![0]
const args = JSON.parse(toolCall.function.arguments)
const weather = await getWeather(args.city, args.unit)

const followup = await client.chat.completions.create({
  model: "openai/gpt-4-turbo",
  messages: [
    { role: "user", content: "What's the weather in Tokyo?" },
    response.choices[0].message,
    {
      role: "tool",
      tool_call_id: toolCall.id,
      content: JSON.stringify(weather),
    },
  ],
  tools,
})
```

The model sees the tool result and composes a natural-language answer.
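In practice, the request → tool call → tool result → answer cycle runs in a loop until the model stops requesting tools. A minimal sketch, written against a pluggable `complete` function so it can run without a live client — the `Message`/`Turn` types and the `runTool` parameter are illustrative, not part of the AnyRouter API:

```typescript
type ToolCall = { id: string; function: { name: string; arguments: string } }
type Message = {
  role: "user" | "assistant" | "tool"
  content: string | null
  tool_calls?: ToolCall[]
  tool_call_id?: string
}
type Turn = { message: Message; finish_reason: string }

// Keep calling the model until it stops asking for tools, feeding every
// tool result back as a role: "tool" message keyed by tool_call_id.
async function runToolLoop(
  messages: Message[],
  complete: (messages: Message[]) => Promise<Turn>,
  runTool: (call: ToolCall) => Promise<unknown>,
): Promise<string> {
  for (;;) {
    const turn = await complete(messages)
    messages.push(turn.message)
    if (turn.finish_reason !== "tool_calls") return turn.message.content ?? ""
    for (const call of turn.message.tool_calls ?? []) {
      messages.push({
        role: "tool",
        tool_call_id: call.id,
        content: JSON.stringify(await runTool(call)),
      })
    }
  }
}
```

With a real client, `complete` would be a thin wrapper that calls `client.chat.completions.create` and returns `response.choices[0]`.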

Forcing a tool

Set tool_choice to force a specific call:

```typescript
tool_choice: { type: "function", function: { name: "get_weather" } }
```

Or use "none" to disable tool use entirely and force plain-text output.

Parallel tool calls

Many models can emit multiple tool calls in a single turn. Iterate over the full tool_calls array, run them in parallel, and include one tool message per call in your follow-up request.

```typescript
const calls = response.choices[0].message.tool_calls ?? []
const results = await Promise.all(
  calls.map(async (call) => ({
    role: "tool" as const,
    tool_call_id: call.id,
    content: JSON.stringify(await runTool(call)),
  })),
)
```
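Those results then slot into the follow-up request directly after the assistant message that requested them. A sketch of the message assembly, using illustrative stand-ins (`history`, `assistantMsg`, and the contents are not real conversation state):

```typescript
type ChatMsg = { role: string; content: string | null; tool_call_id?: string }

// Stand-ins for the real conversation state.
const history: ChatMsg[] = [{ role: "user", content: "Weather in Tokyo and Osaka?" }]
const assistantMsg: ChatMsg = { role: "assistant", content: null }
const results: ChatMsg[] = [
  { role: "tool", tool_call_id: "call_1", content: '{"temp_c":18}' },
  { role: "tool", tool_call_id: "call_2", content: '{"temp_c":21}' },
]

// Order matters: the assistant turn that requested the tools comes first,
// then exactly one tool message per tool_call_id.
const followupMessages: ChatMsg[] = [...history, assistantMsg, ...results]
```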

Structured outputs

When you want the model to return JSON directly, with no function for your code to execute, use response_format instead of tools:

```typescript
response_format: { type: "json_object" }
```
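A sketch of a full request shape in this mode. Note that many OpenAI-compatible endpoints reject json_object requests unless the prompt itself mentions JSON, so it is safest to say so explicitly (the prompt wording here is illustrative):

```typescript
const request = {
  model: "openai/gpt-4-turbo",
  messages: [
    {
      role: "user" as const,
      content: "Summarize the weather in Tokyo as a JSON object with keys city and summary.",
    },
  ],
  // json_object guarantees syntactically valid JSON, but not any particular shape.
  response_format: { type: "json_object" as const },
}
```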

Or provide a JSON schema to constrain the output shape; with strict enabled, supporting models enforce conformance:

```typescript
response_format: {
  type: "json_schema",
  json_schema: {
    name: "weather_report",
    schema: { /* JSON Schema */ },
    strict: true,
  },
}
```
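Filled in, the schema placeholder might look like the following; the weather_report fields here are illustrative:

```typescript
const responseFormat = {
  type: "json_schema" as const,
  json_schema: {
    name: "weather_report",
    schema: {
      type: "object",
      properties: {
        city: { type: "string" },
        temperature: { type: "number" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] },
      },
      // Strict mode on OpenAI-style endpoints requires additionalProperties: false
      // and every property listed in required; support varies by provider.
      required: ["city", "temperature", "unit"],
      additionalProperties: false,
    },
    strict: true,
  },
}
```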