Overview
Function calling allows models to intelligently call functions you define, enabling:
- API integrations
- Database queries
- External tool use
- Structured data extraction
Define Functions
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name, e.g., San Francisco"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"]
                }
            },
            "required": ["location"]
        }
    }
}]
Complete Example
from openai import OpenAI
import json

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.redpill.ai/v1"
)

# Define available functions
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location"]
        }
    }
}]
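
# Hypothetical local implementation of get_weather used below; replace this
# stub with a call to your real weather API or data source.
def get_weather(location, unit="celsius"):
    return {"location": location, "temperature": 18, "unit": unit}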
# Make request
response = client.chat.completions.create(
    model="openai/gpt-5",
    messages=[{"role": "user", "content": "What's the weather in SF?"}],
    tools=tools
)

# Check whether the model wants to call a function
tool_calls = response.choices[0].message.tool_calls
if tool_calls and tool_calls[0].function.name == "get_weather":
    tool_call = tool_calls[0]
    args = json.loads(tool_call.function.arguments)

    # Call your actual function
    weather = get_weather(args["location"], args.get("unit", "celsius"))

    # Send result back to model
    response = client.chat.completions.create(
        model="openai/gpt-5",
        messages=[
            {"role": "user", "content": "What's the weather in SF?"},
            response.choices[0].message,
            {
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(weather)
            }
        ],
        tools=tools
    )

print(response.choices[0].message.content)
Supported Models
Function calling works with:
- ✅ All OpenAI GPT models
- ✅ Anthropic Claude 3+
- ✅ Google Gemini
- ✅ Meta Llama 3.2+
- ✅ Phala confidential models
Structured Outputs
Structured outputs guarantee that model responses match your exact JSON schema - perfect for data extraction, form filling, and API responses.
Using Pydantic Models
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.redpill.ai/v1"
)

# Define your structure with Pydantic
class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

# Request structured output
response = client.beta.chat.completions.parse(
    model="openai/gpt-5",
    messages=[
        {"role": "system", "content": "Extract calendar events from text"},
        {"role": "user", "content": "Team meeting tomorrow at 2pm with Alice and Bob"}
    ],
    response_format=CalendarEvent
)

# Guaranteed to match schema
event = response.choices[0].message.parsed
print(event.name)          # "Team meeting"
print(event.participants)  # ["Alice", "Bob"]
JSON Schema Approach
import json

# Define schema directly
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "email": {"type": "string", "format": "email"},
        "age": {"type": "integer", "minimum": 0},
        "interests": {
            "type": "array",
            "items": {"type": "string"}
        }
    },
    # strict mode expects every property to be listed in "required"
    "required": ["name", "email", "age", "interests"],
    "additionalProperties": False
}

response = client.chat.completions.create(
    model="openai/gpt-5",
    messages=[
        {"role": "user", "content": "Extract user info: John Doe, john@example.com, 30 years old, likes hiking and coding"}
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "user_profile",
            "strict": True,
            "schema": schema
        }
    }
)

user = json.loads(response.choices[0].message.content)
# Guaranteed to match schema exactly
Complex Nested Structures
from pydantic import BaseModel, Field
from typing import List, Optional

class Address(BaseModel):
    street: str
    city: str
    country: str
    zipcode: str

class Company(BaseModel):
    name: str
    industry: str
    employees: int = Field(ge=1)

class Person(BaseModel):
    full_name: str = Field(description="Full name of the person")
    email: str = Field(pattern=r"^[\w\.-]+@[\w\.-]+\.\w+$")
    age: Optional[int] = Field(None, ge=0, le=150)
    address: Address
    employer: Optional[Company] = None
    skills: List[str] = Field(default_factory=list)

# Extract complex structured data
response = client.beta.chat.completions.parse(
    model="openai/gpt-5",
    messages=[{
        "role": "user",
        "content": """
        Extract: Alice Johnson, alice@example.com, 28 years old.
        Lives at 123 Main St, San Francisco, CA 94105.
        Works at TechCorp (software industry, 500 employees).
        Skills: Python, Machine Learning, Data Science.
        """
    }],
    response_format=Person
)

person = response.choices[0].message.parsed
print(f"{person.full_name} works at {person.employer.name}")
print(f"Skills: {', '.join(person.skills)}")
Use Cases
Data Extraction from Text
Generate database-ready records:

class Product(BaseModel):
    sku: str = Field(pattern=r"^[A-Z]{3}-\d{4}$")
    name: str
    category: str
    price: float = Field(gt=0)
    in_stock: bool
    tags: List[str]

response = client.beta.chat.completions.parse(
    model="openai/gpt-5",
    messages=[{
        "role": "user",
        "content": "Create product: Wireless Mouse, Electronics, $29.99, in stock, tags: mouse, wireless, computer"
    }],
    response_format=Product
)

# Insert directly into the database (Pydantic v2: model_dump())
db.products.insert(response.choices[0].message.parsed.model_dump())
Structured Outputs + TEE Privacy
Privacy benefits with RedPill:
# Medical record extraction - TEE protected
class MedicalRecord(BaseModel):
    patient_id: str
    diagnosis: str
    medications: List[str]
    visit_date: str

response = client.beta.chat.completions.parse(
    model="phala/qwen-2.5-7b-instruct",  # Confidential model
    messages=[{
        "role": "user",
        "content": "Extract from: Patient #12345 diagnosed with hypertension, prescribed lisinopril..."
    }],
    response_format=MedicalRecord
)

# ✅ Extraction happens in TEE
# ✅ PHI never leaves hardware encryption
# ✅ Structured output guaranteed
# ✅ HIPAA-compliant processing
Validation & Error Handling
from pydantic import ValidationError

try:
    response = client.beta.chat.completions.parse(
        model="openai/gpt-5",
        messages=[{"role": "user", "content": "..."}],
        response_format=MySchema
    )
    # parse() validates the response against your schema before returning it
    data = response.choices[0].message.parsed
    print("Valid structured output:", data)
except ValidationError as e:
    # Pydantic validation failed
    print(f"Schema validation error: {e}")
except Exception as e:
    # Other errors
    print(f"Error: {e}")
Supported Models for Structured Outputs
| Model | Structured Outputs | Notes |
|---|---|---|
| openai/gpt-5 | ✅ Full support | Best performance |
| openai/gpt-5-mini | ✅ Full support | Fast & cheap |
| anthropic/claude-sonnet-4.5 | ✅ Via function calling | Use tools approach |
| phala/qwen-2.5-7b-instruct | ✅ Full support | TEE-protected |
| phala/deepseek-chat-v3-0324 | ✅ Full support | TEE-protected |
For models without native structured output support, use function calling with a single function whose parameters match your desired schema.
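For example, the user_profile schema defined earlier can be requested from a tools-only model by wrapping it in a single function and forcing the model to call it. The snippet below is a minimal sketch of that pattern, assuming the OpenAI-compatible tool_choice parameter is honored for the routed model; the extract_user_profile function name is illustrative.

# Emulate structured output with a single forced function call (sketch)
extraction_tool = {
    "type": "function",
    "function": {
        "name": "extract_user_profile",   # illustrative name
        "description": "Record the extracted user profile",
        "parameters": schema              # same JSON Schema as above
    }
}

response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4.5",
    messages=[{"role": "user", "content": "Extract user info: John Doe, john@example.com, 30 years old"}],
    tools=[extraction_tool],
    tool_choice={"type": "function", "function": {"name": "extract_user_profile"}}
)

user = json.loads(response.choices[0].message.tool_calls[0].function.arguments)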
- Use strict: true for guaranteed schema compliance
- Simpler schemas = faster responses
- Provide examples in the system message for complex structures (see the sketch after the schema example below)
- Cache Pydantic models for repeated use
from typing import Union

# Good: Simple, clear schema
class User(BaseModel):
    name: str
    age: int

# Avoid: Overly complex nested schemas
class OverlyComplex(BaseModel):
    nested: dict[str, list[dict[str, Union[int, str, list[dict]]]]]
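
One way to apply the tip about examples in the system message is to show the model a single worked example next to the extraction instructions. A minimal sketch, reusing the CalendarEvent model from above; the example text is invented for illustration.

# Include a worked example in the system message for complex structures (sketch)
system_prompt = """Extract calendar events from text.
Example input: "Standup Friday 9am with Dana"
Example output: {"name": "Standup", "date": "Friday 9am", "participants": ["Dana"]}"""

response = client.beta.chat.completions.parse(
    model="openai/gpt-5",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Team meeting tomorrow at 2pm with Alice and Bob"}
    ],
    response_format=CalendarEvent
)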
Best Practices
- Clear descriptions: Help the model understand when to call functions
- Type safety: Use strict JSON Schema types or Pydantic models
- Error handling: Validate function arguments with try/except
- Security: Never execute untrusted code from function calls
- Use structured outputs: For guaranteed JSON schema compliance
- TEE for sensitive data: Use Phala confidential models for PII extraction
Always validate and sanitize function arguments before execution. Even with structured outputs, implement defensive programming.
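As a concrete illustration of the error-handling and security points above, the handler below re-validates model-supplied arguments before executing anything. It is a minimal sketch that reuses the get_weather function from the earlier example; the allow-list and the WeatherArgs model are illustrative, not part of the API.

from typing import Literal
from pydantic import BaseModel, ValidationError
import json

class WeatherArgs(BaseModel):
    location: str
    unit: Literal["celsius", "fahrenheit"] = "celsius"

ALLOWED_FUNCTIONS = {"get_weather"}  # allow-list of functions the model may trigger

def handle_tool_call(tool_call):
    # Reject any function the model asks for that was not explicitly exposed
    if tool_call.function.name not in ALLOWED_FUNCTIONS:
        raise ValueError(f"Refusing unknown function: {tool_call.function.name}")
    # Re-validate model-supplied arguments before executing anything
    try:
        args = WeatherArgs(**json.loads(tool_call.function.arguments))
    except (json.JSONDecodeError, ValidationError) as exc:
        raise ValueError(f"Invalid tool arguments: {exc}") from exc
    return get_weather(args.location, args.unit)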