TechLead
Lesson 5 of 24
7 min read
AI Agents & RAG

Tool Use and Function Calling

Master OpenAI and Anthropic function calling APIs, tool schemas, and building robust tool-using agents

What is Function Calling?

Function calling (also known as tool use) is the ability of LLMs to generate structured outputs that invoke external functions. Instead of just generating text, the model outputs a function name and its arguments in a structured format (JSON), which your application can then execute. This bridges the gap between language understanding and real-world actions.

Function calling is the backbone of AI agents. Without it, an LLM is limited to generating text. With it, an LLM can query databases, call APIs, execute code, send emails, manage files, and interact with any system that has a programmatic interface.
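Conceptually, the model emits a structured call such as {"name": "...", "arguments": {...}} and your application dispatches it to real code. A minimal sketch in Python (the get_time tool and the TOOLS registry are illustrative, not part of any provider's API):

```python
import json

# Hypothetical tool: a real one would look up the actual time
def get_time(timezone: str) -> str:
    return f"12:00 in {timezone}"

# Registry mapping tool names to plain Python functions
TOOLS = {"get_time": get_time}

def dispatch(tool_call_json: str) -> str:
    """Parse the model's structured output and run the named tool."""
    call = json.loads(tool_call_json)
    fn = TOOLS.get(call["name"])
    if fn is None:
        return json.dumps({"error": f"Unknown tool: {call['name']}"})
    return fn(**call["arguments"])

# The model might emit:
model_output = '{"name": "get_time", "arguments": {"timezone": "UTC"}}'
print(dispatch(model_output))  # 12:00 in UTC
```

The providers below differ in how this structured output is wrapped, but the dispatch step on your side is always some variant of this lookup-and-execute pattern.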

Function Calling Providers

  • Anthropic Claude: Tool use via the messages API with input_schema definitions
  • OpenAI GPT-4: Function calling with JSON Schema tool definitions
  • Google Gemini: Function declarations with FunctionDeclaration schemas
  • Vercel AI SDK: Unified tool() API that works across all providers

Anthropic Claude Tool Use

Claude's tool use is defined through the tools parameter in the messages API. Each tool has a name, description, and an input_schema that follows JSON Schema format. The model returns tool_use content blocks when it wants to call a tool.

// Anthropic Claude tool use - complete example
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

// Define tools with JSON Schema
const tools: Anthropic.Tool[] = [
  {
    name: "get_stock_price",
    description: "Get the current stock price for a given ticker symbol. Returns the price in USD.",
    input_schema: {
      type: "object" as const,
      properties: {
        ticker: {
          type: "string",
          description: "The stock ticker symbol, e.g., AAPL, GOOGL, MSFT",
        },
      },
      required: ["ticker"],
    },
  },
  {
    name: "calculate_portfolio_value",
    description: "Calculate the total value of a stock portfolio given holdings.",
    input_schema: {
      type: "object" as const,
      properties: {
        holdings: {
          type: "array",
          items: {
            type: "object",
            properties: {
              ticker: { type: "string" },
              shares: { type: "number" },
              purchase_price: { type: "number" },
            },
            required: ["ticker", "shares"],
          },
          description: "Array of stock holdings",
        },
      },
      required: ["holdings"],
    },
  },
  {
    name: "get_company_news",
    description: "Get recent news articles for a company.",
    input_schema: {
      type: "object" as const,
      properties: {
        company: {
          type: "string",
          description: "Company name or ticker",
        },
        limit: {
          type: "number",
          description: "Maximum number of articles to return (default: 5)",
        },
      },
      required: ["company"],
    },
  },
];

// Tool implementation functions
async function executeToolCall(
  name: string,
  input: Record<string, unknown>
): Promise<string> {
  switch (name) {
    case "get_stock_price": {
      // In production, call a real API like Alpha Vantage or Yahoo Finance
      const prices: Record<string, number> = {
        AAPL: 245.32, GOOGL: 178.90, MSFT: 445.67,
        NVDA: 892.45, TSLA: 267.80, AMZN: 198.54,
      };
      const ticker = input.ticker as string;
      const price = prices[ticker.toUpperCase()];
      return price
        ? JSON.stringify({ ticker, price, currency: "USD" })
        : JSON.stringify({ error: `Unknown ticker: ${ticker}` });
    }
    case "calculate_portfolio_value": {
      // Values holdings at purchase price (cost basis); a real implementation
      // would fetch current prices before multiplying.
      const holdings = input.holdings as Array<{ shares: number; purchase_price?: number }>;
      const totalValue = holdings.reduce(
        (sum, h) => sum + h.shares * (h.purchase_price ?? 0),
        0
      );
      return JSON.stringify({ total_value: totalValue, holdings_count: holdings.length });
    }
    case "get_company_news": {
      return JSON.stringify({
        articles: [
          { title: `${input.company} reports strong Q1 earnings`, date: "2026-04-01" },
          { title: `${input.company} announces new AI product line`, date: "2026-03-28" },
        ],
      });
    }
    default:
      return JSON.stringify({ error: `Unknown tool: ${name}` });
  }
}

// The agentic loop with tool use
async function agentWithTools(userMessage: string): Promise<string> {
  const messages: Anthropic.MessageParam[] = [
    { role: "user", content: userMessage },
  ];

  while (true) {
    const response = await client.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 4096,
      tools,
      messages,
    });

    // Check if the model wants to use tools
    if (response.stop_reason === "tool_use") {
      // Process all tool calls in the response
      const toolResults: Anthropic.ToolResultBlockParam[] = [];

      for (const block of response.content) {
        if (block.type === "tool_use") {
          const result = await executeToolCall(
            block.name,
            block.input as Record<string, unknown>
          );
          toolResults.push({
            type: "tool_result",
            tool_use_id: block.id,
            content: result,
          });
        }
      }

      // Add assistant response and tool results to messages
      messages.push({ role: "assistant", content: response.content });
      messages.push({ role: "user", content: toolResults });
    } else {
      // Model is done — extract the final text response.
      // The type predicate narrows the content-block union for TypeScript.
      const textBlock = response.content.find(
        (b): b is Anthropic.TextBlock => b.type === "text"
      );
      return textBlock ? textBlock.text : "";
    }
  }
}

// Usage
const answer = await agentWithTools(
  "What's the stock price of AAPL and NVDA? Which one is higher?"
);
console.log(answer);

OpenAI Function Calling

OpenAI follows the same pattern with a slightly different shape: each tool definition is wrapped in a function object, the model returns tool_calls on the assistant message, and your results go back as role: "tool" messages keyed by tool_call_id.

# OpenAI function calling in Python
from openai import OpenAI
import json

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "City name, e.g., 'San Francisco'",
                    },
                    "units": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature units",
                    },
                },
                "required": ["city"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "search_restaurants",
            "description": "Search for restaurants in a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "cuisine": {"type": "string", "description": "Type of cuisine"},
                    "price_range": {
                        "type": "string",
                        "enum": ["$", "$$", "$$$", "$$$$"],
                    },
                },
                "required": ["city"],
            },
        },
    },
]

def execute_tool(name: str, arguments: dict) -> str:
    if name == "get_weather":
        return json.dumps({"temp": 72, "condition": "sunny", "city": arguments["city"]})
    elif name == "search_restaurants":
        return json.dumps({"restaurants": [
            {"name": "Chez Claude", "rating": 4.8, "cuisine": arguments.get("cuisine", "French")},
        ]})
    return json.dumps({"error": f"Unknown tool: {name}"})

def agent_loop(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]

    while True:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=tools,
        )

        message = response.choices[0].message

        if message.tool_calls:
            messages.append(message)  # Add assistant's tool call message

            for tool_call in message.tool_calls:
                result = execute_tool(
                    tool_call.function.name,
                    json.loads(tool_call.function.arguments),
                )
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": result,
                })
        else:
            return message.content

# Usage
print(agent_loop("What's the weather in SF and suggest a good restaurant?"))
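One practical wrinkle in the loop above: tool_call.function.arguments arrives as a JSON string, and models occasionally emit malformed JSON. A small defensive parser you could place in front of execute_tool (a sketch; the _parse_error key is just a convention chosen here):

```python
import json

def parse_arguments(raw: str) -> dict:
    """Parse a tool-call arguments string, returning a structured error
    marker instead of raising when the model emits malformed JSON."""
    try:
        parsed = json.loads(raw)
        if not isinstance(parsed, dict):
            return {"_parse_error": "arguments must be a JSON object"}
        return parsed
    except json.JSONDecodeError as e:
        return {"_parse_error": f"invalid JSON: {e}"}

print(parse_arguments('{"city": "SF"}'))  # {'city': 'SF'}
print(parse_arguments('{"city": "SF"'))   # error marker instead of a crash
```

Feeding the error marker back to the model as the tool result (rather than crashing the loop) gives it a chance to retry with valid arguments.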

Vercel AI SDK Unified Tool API

The Vercel AI SDK provides a unified tool API that works consistently across all LLM providers. This is the recommended approach for TypeScript applications, as it handles the provider differences automatically.

// Vercel AI SDK - unified tool API
import { generateText, tool } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

const result = await generateText({
  model: anthropic("claude-sonnet-4-20250514"),
  maxSteps: 5, // Enable multi-step tool use (agent loop)
  tools: {
    getWeather: tool({
      description: "Get the current weather for a location",
      parameters: z.object({
        city: z.string().describe("The city name"),
        units: z.enum(["celsius", "fahrenheit"]).optional().default("fahrenheit"),
      }),
      execute: async ({ city, units }) => {
        // Call weather API
        return { city, temperature: 72, units, condition: "sunny" };
      },
    }),
    searchDatabase: tool({
      description: "Search the product database with a query",
      parameters: z.object({
        query: z.string().describe("Search query"),
        category: z.string().optional().describe("Product category filter"),
        maxResults: z.number().optional().default(10),
      }),
      execute: async ({ query, category, maxResults }) => {
        // Query database
        return {
          results: [
            { id: 1, name: "Widget Pro", price: 29.99, category: "Tools" },
            { id: 2, name: "Gadget X", price: 49.99, category: "Electronics" },
          ],
          total: 2,
        };
      },
    }),
  },
  prompt: "What's the weather in NYC and find me products under $50?",
});

// Access all steps (tool calls and results)
for (const step of result.steps) {
  console.log("Step:", step.stepType);
  for (const toolCall of step.toolCalls) {
    console.log(`  Tool: ${toolCall.toolName}`, toolCall.args);
  }
  for (const toolResult of step.toolResults) {
    console.log(`  Result:`, toolResult.result);
  }
}

console.log("Final:", result.text);

Tool Schema Best Practices

Schema Design Guide

Practice | Good | Bad
Tool names | get_customer_orders | tool1, doStuff
Descriptions | Detailed with examples | Vague or missing
Parameters | Strong types + descriptions | Generic string for everything
Enums | Use for known values | Free-text for fixed options
Error handling | Return structured errors | Throw exceptions to model

Common Pitfalls

  • Too many tools: Models perform best with 5-20 well-defined tools. Beyond that, accuracy drops.
  • Ambiguous descriptions: If two tools could serve the same purpose, the model will pick inconsistently.
  • Missing validation: Always validate tool inputs before execution — the model may produce invalid arguments.
  • No error messages: Return clear error messages so the model can self-correct and try again.
  • Overly complex schemas: Keep schemas as flat as possible. Deeply nested objects confuse models.
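The validation and error-handling pitfalls above can be sketched together. This hand-rolls a check for a tiny subset of JSON Schema (required keys plus primitive property types), returning structured errors the model can read; in production you would likely reach for a full validator such as the jsonschema library instead:

```python
# Maps JSON Schema primitive type names to Python types
TYPE_MAP = {"string": str, "number": (int, float), "boolean": bool}

def validate_input(schema: dict, args: dict) -> list:
    """Return a list of error messages (empty if args satisfy the
    schema's required keys and primitive property types)."""
    errors = []
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"missing required parameter: {key}")
    for key, value in args.items():
        prop = schema.get("properties", {}).get(key)
        if prop is None:
            errors.append(f"unexpected parameter: {key}")
            continue
        expected = TYPE_MAP.get(prop.get("type"))
        if expected and not isinstance(value, expected):
            errors.append(f"{key} should be {prop['type']}")
    # Return errors as data, not exceptions: fed back as a tool result,
    # they let the model self-correct and retry.
    return errors

schema = {
    "type": "object",
    "properties": {"ticker": {"type": "string"}},
    "required": ["ticker"],
}
print(validate_input(schema, {"ticker": "AAPL"}))  # []
print(validate_input(schema, {"ticker": 42}))      # ['ticker should be string']
```

Run this check between parsing the model's arguments and executing the tool; any non-empty error list goes back to the model as the tool result.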

Summary

Function calling is the mechanism that transforms LLMs from text generators into capable agents. Whether you use Anthropic's tool use, OpenAI's function calling, or the Vercel AI SDK's unified approach, the pattern is the same: define tools with clear schemas, let the model decide when and how to call them, execute the calls, and feed results back. Master this pattern and you can build agents that interact with any system.
