Overview
LangChain is a popular framework for developing applications powered by language models. RedPill’s OpenAI-compatible API works seamlessly with LangChain.
All requests through LangChain automatically flow through RedPill’s TEE-protected gateway.
Installation

Python:

pip install langchain-openai

JavaScript/TypeScript:

npm install @langchain/openai @langchain/core
Python Integration
Basic Setup
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage

# Initialize ChatOpenAI with RedPill
chat = ChatOpenAI(
    model="openai/gpt-5",
    api_key="YOUR_REDPILL_API_KEY",
    base_url="https://api.redpill.ai/v1",
    temperature=0.7
)

# Send messages
messages = [
    SystemMessage(content="You are a helpful AI assistant."),
    HumanMessage(content="Explain quantum computing in simple terms")
]

response = chat.invoke(messages)
print(response.content)
Streaming Responses
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

chat = ChatOpenAI(
    model="openai/gpt-5",
    api_key="YOUR_REDPILL_API_KEY",
    base_url="https://api.redpill.ai/v1",
    streaming=True
)

# Stream the response
for chunk in chat.stream([HumanMessage(content="Write a story about AI")]):
    print(chunk.content, end="", flush=True)
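Streaming also works asynchronously: astream is the async counterpart of stream on every LangChain chat model. A minimal sketch, reusing the chat client configured above:

import asyncio
from langchain_core.messages import HumanMessage

async def main():
    # astream yields message chunks as they arrive, like stream but awaitable
    async for chunk in chat.astream([HumanMessage(content="Write a story about AI")]):
        print(chunk.content, end="", flush=True)

asyncio.run(main())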
Using Multiple Models
RedPill supports 50+ models. Switch between them easily:
from langchain_openai import ChatOpenAI

# Use Claude for reasoning tasks
claude = ChatOpenAI(
    model="anthropic/claude-sonnet-4.5",
    api_key="YOUR_REDPILL_API_KEY",
    base_url="https://api.redpill.ai/v1"
)

# Use DeepSeek for coding tasks
deepseek = ChatOpenAI(
    model="deepseek/deepseek-chat",
    api_key="YOUR_REDPILL_API_KEY",
    base_url="https://api.redpill.ai/v1"
)

# Use Phala confidential model for sensitive data
phala = ChatOpenAI(
    model="phala/qwen-2.5-7b-instruct",
    api_key="YOUR_REDPILL_API_KEY",
    base_url="https://api.redpill.ai/v1"
)
Conversation Chains
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

chat = ChatOpenAI(
    model="openai/gpt-5",
    api_key="YOUR_REDPILL_API_KEY",
    base_url="https://api.redpill.ai/v1"
)

conversation = ConversationChain(
    llm=chat,
    memory=ConversationBufferMemory()
)

# Multi-turn conversation
response1 = conversation.predict(input="Hi, I'm learning about AI")
print(response1)

response2 = conversation.predict(input="What should I learn first?")
print(response2)
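Note that ConversationChain and ConversationBufferMemory are deprecated in recent LangChain releases. If they are unavailable in your version, the same multi-turn behavior can be reproduced by accumulating the message history yourself; a minimal sketch, reusing the chat client above:

from langchain_core.messages import HumanMessage

history = []

def ask(text):
    # Append the user turn, call the model with the full history,
    # then append the reply so the next turn has context
    history.append(HumanMessage(content=text))
    reply = chat.invoke(history)
    history.append(reply)
    return reply.content

print(ask("Hi, I'm learning about AI"))
print(ask("What should I learn first?"))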
TypeScript/JavaScript Integration
Basic Setup
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const chat = new ChatOpenAI({
  model: "openai/gpt-5",
  apiKey: process.env.REDPILL_API_KEY,
  configuration: {
    baseURL: "https://api.redpill.ai/v1"
  },
  temperature: 0.7
});

const messages = [
  new SystemMessage("You are a helpful AI assistant."),
  new HumanMessage("Explain quantum computing in simple terms")
];

const response = await chat.invoke(messages);
console.log(response.content);
Streaming in TypeScript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const chat = new ChatOpenAI({
  model: "openai/gpt-5",
  apiKey: process.env.REDPILL_API_KEY,
  configuration: {
    baseURL: "https://api.redpill.ai/v1"
  },
  streaming: true
});

const stream = await chat.stream([
  new HumanMessage("Write a story about AI")
]);

for await (const chunk of stream) {
  process.stdout.write(chunk.content);
}
RAG (Retrieval Augmented Generation)
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain.chains import RetrievalQA

# Initialize embeddings with RedPill
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-small",
    api_key="YOUR_REDPILL_API_KEY",
    base_url="https://api.redpill.ai/v1"
)

# Create vector store
documents = [
    Document(page_content="RedPill is a TEE-protected AI gateway"),
    Document(page_content="RedPill supports 50+ AI models"),
    Document(page_content="All requests are hardware-protected")
]
vectorstore = Chroma.from_documents(documents, embeddings)

# Initialize chat model
chat = ChatOpenAI(
    model="openai/gpt-5",
    api_key="YOUR_REDPILL_API_KEY",
    base_url="https://api.redpill.ai/v1"
)

# Create RAG chain
qa_chain = RetrievalQA.from_chain_type(
    llm=chat,
    retriever=vectorstore.as_retriever()
)

# Query
result = qa_chain.invoke("How many models does RedPill support?")
print(result["result"])
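To see which documents the chain actually feeds into the model, you can query the retriever directly; retrievers are Runnables, so invoke takes the query string and returns the matched documents:

retriever = vectorstore.as_retriever()
docs = retriever.invoke("How many models does RedPill support?")
for doc in docs:
    print(doc.page_content)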
Function Calling with LangChain
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

# Define tools
@tool
def get_weather(location: str, unit: str = "celsius") -> str:
    """Get the current weather in a location"""
    return f"The weather in {location} is 22°{unit[0].upper()}"

@tool
def calculate(expression: str) -> float:
    """Calculate a mathematical expression"""
    # Note: eval is for demonstration only; never eval untrusted input in production
    return eval(expression)

# Initialize chat model
chat = ChatOpenAI(
    model="openai/gpt-5",
    api_key="YOUR_REDPILL_API_KEY",
    base_url="https://api.redpill.ai/v1"
)

# Create agent
tools = [get_weather, calculate]
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])
agent = create_tool_calling_agent(chat, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

# Execute
result = agent_executor.invoke({
    "input": "What's the weather in Paris and what's 25 + 17?"
})
print(result["output"])
Environment Variables
Store your API key securely in a .env file or your environment:

REDPILL_API_KEY=sk-your-api-key-here

Then read it from the environment instead of hardcoding it:

import os
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(
    model="openai/gpt-5",
    api_key=os.environ["REDPILL_API_KEY"],
    base_url="https://api.redpill.ai/v1"
)
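If you keep the key in a .env file, a loader such as python-dotenv (pip install python-dotenv) can populate os.environ before the client is created; a minimal sketch:

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()  # reads .env from the current directory into os.environ

chat = ChatOpenAI(
    model="openai/gpt-5",
    api_key=os.environ["REDPILL_API_KEY"],
    base_url="https://api.redpill.ai/v1"
)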
Supported Models
All 50+ RedPill models work with LangChain:
Provider    | Example Models
OpenAI      | openai/gpt-5, openai/gpt-5-mini, openai/o4-mini
Anthropic   | anthropic/claude-sonnet-4.5, anthropic/claude-opus-4.1
Google      | google/gemini-2.5-flash
DeepSeek    | deepseek/deepseek-chat
Phala (TEE) | phala/qwen-2.5-7b-instruct, phala/deepseek-chat-v3-0324
View All Models: browse the complete model list →
Best Practices
Use Environment Variables
Never hardcode API keys. Use environment variables or secret managers.
Choose the Right Model
GPT-5: best for general tasks
Claude Sonnet 4.5: best for reasoning and analysis
DeepSeek: best for coding tasks
Phala models: best for sensitive data (full TEE protection)
Use Streaming
Streaming gives a better user experience in interactive applications.
Handle Errors
Wrap API calls in try/except (Python) or try/catch (TypeScript) to handle rate limits and transient failures gracefully, as sketched below.
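A minimal error-handling sketch, assuming the underlying openai SDK's exception types (openai.RateLimitError, openai.APIError) surface through LangChain, since ChatOpenAI calls that SDK directly:

import openai
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

chat = ChatOpenAI(
    model="openai/gpt-5",
    api_key="YOUR_REDPILL_API_KEY",
    base_url="https://api.redpill.ai/v1"
)

try:
    response = chat.invoke([HumanMessage(content="Hello")])
    print(response.content)
except openai.RateLimitError:
    # Back off and retry, or fall back to another model
    print("Rate limited; retry later")
except openai.APIError as e:
    print(f"API error: {e}")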
Example Projects
Chatbot: Build a multi-turn conversational AI with memory
RAG System: Create a document Q&A system with embeddings
AI Agent: Build an autonomous agent with function calling
Content Generator: Generate articles, summaries, and creative content
Need Help?