---
title: "OpenAI SDK"
sidebarTitle: "OpenAI SDK"
description: "Memory tools for OpenAI function calling with Supermemory integration"
icon: "/images/openai.svg"
---
Add memory capabilities to the official OpenAI SDKs using Supermemory. Two approaches are available:
1. **`withSupermemory` wrapper** - Automatic memory injection into system prompts (zero-config)
2. **Function calling tools** - Explicit tool calls for search/add memory operations
**New to Supermemory?** Start with `withSupermemory` for the simplest integration. It automatically injects relevant memories into your prompts.
See the `@supermemory/tools` package on NPM and the `supermemory-openai-sdk` package on PyPI for full package details.
---
## withSupermemory Wrapper
The simplest way to add memory to your OpenAI client. Wraps your client to automatically inject relevant memories into system prompts.
### Installation
```bash
npm install @supermemory/tools openai
```
### Quick Start
```typescript
import OpenAI from "openai"
import { withSupermemory } from "@supermemory/tools/openai"

const openai = new OpenAI()

// Wrap client with memory - memories auto-injected into system prompts
const client = withSupermemory(openai, "user-123", {
  mode: "full", // "profile" | "query" | "full"
  addMemory: "always", // "always" | "never"
})

// Use normally - memories are automatically included
const response = await client.chat.completions.create({
  model: "gpt-5",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What's my favorite programming language?" },
  ],
})
```
### Configuration Options
```typescript
const client = withSupermemory(openai, "user-123", {
  // Memory search mode
  mode: "full", // "profile" (user profile only), "query" (search only), "full" (both)

  // Auto-save conversations as memories
  addMemory: "always", // "always" | "never"

  // Group messages into conversations
  conversationId: "conv-456",

  // Enable debug logging
  verbose: true,

  // Custom API endpoint
  baseUrl: "https://custom.api.com",
})
```
### Modes Explained
| Mode | Description | Use Case |
|------|-------------|----------|
| `profile` | Injects user profile (static + dynamic facts) | General personalization |
| `query` | Searches memories based on user message | Question answering |
| `full` | Both profile and query-based search | Best for chatbots |
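To make the table concrete, here is a small illustrative sketch (not part of the SDK, the mapping is hypothetical) of which memory sources each mode draws on:

```typescript
// Illustrative only - this mapping is not exported by @supermemory/tools.
// It mirrors the table above: "profile" injects the user profile,
// "query" searches memories based on the user message, "full" does both.
type MemoryMode = "profile" | "query" | "full"

function memorySources(mode: MemoryMode): string[] {
  switch (mode) {
    case "profile":
      return ["user profile"]
    case "query":
      return ["memory search"]
    case "full":
      return ["user profile", "memory search"]
  }
}

console.log(memorySources("full")) // [ 'user profile', 'memory search' ]
```

In practice this means `full` costs an extra search per request but gives the model both stable facts about the user and context relevant to the current message.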
### Works with Responses API Too
```typescript
const client = withSupermemory(openai, "user-123", { mode: "full" })

// Memories injected into instructions
const response = await client.responses.create({
  model: "gpt-5",
  instructions: "You are a helpful assistant.",
  input: "What do you know about me?",
})
```
### Environment Variables
```bash
SUPERMEMORY_API_KEY=your_supermemory_key
OPENAI_API_KEY=your_openai_key
```
---
## Function Calling Tools
For explicit control over memory operations, use function calling tools. The model decides when to search or add memories.
## Installation
```bash Python
# Using uv (recommended)
uv add supermemory-openai-sdk
# Or with pip
pip install supermemory-openai-sdk
```
```bash JavaScript/TypeScript
npm install @supermemory/tools
```
## Quick Start
```python Python SDK
import asyncio

import openai
from supermemory_openai import SupermemoryTools, execute_memory_tool_calls


async def main():
    # Initialize OpenAI client
    client = openai.AsyncOpenAI(api_key="your-openai-api-key")

    # Initialize Supermemory tools
    tools = SupermemoryTools(
        api_key="your-supermemory-api-key",
        config={"project_id": "my-project"}
    )

    # Chat with memory tools
    response = await client.chat.completions.create(
        model="gpt-5",
        messages=[
            {
                "role": "system",
                "content": "You are a helpful assistant with access to user memories."
            },
            {
                "role": "user",
                "content": "Remember that I prefer tea over coffee"
            }
        ],
        tools=tools.get_tool_definitions()
    )

    # Handle tool calls if present
    if response.choices[0].message.tool_calls:
        tool_results = await execute_memory_tool_calls(
            api_key="your-supermemory-api-key",
            tool_calls=response.choices[0].message.tool_calls,
            config={"project_id": "my-project"}
        )
        print("Tool results:", tool_results)

    print(response.choices[0].message.content)


asyncio.run(main())
```
```typescript JavaScript/TypeScript SDK
import { getToolDefinitions, createToolCallExecutor } from "@supermemory/tools/openai"
import OpenAI from "openai"

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
})

// Get tool definitions for OpenAI
const toolDefinitions = getToolDefinitions()

// Create tool executor
const executeToolCall = createToolCallExecutor(process.env.SUPERMEMORY_API_KEY!, {
  projectId: "your-project-id",
})

// Use with OpenAI Chat Completions
const completion = await client.chat.completions.create({
  model: "gpt-5",
  messages: [
    {
      role: "user",
      content: "What do you remember about my preferences?",
    },
  ],
  tools: toolDefinitions,
})

// Execute tool calls if any
if (completion.choices[0]?.message.tool_calls) {
  for (const toolCall of completion.choices[0].message.tool_calls) {
    const result = await executeToolCall(toolCall)
    console.log(result)
  }
}
```
## Configuration
### Memory Tools Configuration
```python Python Configuration
from supermemory_openai import SupermemoryTools

tools = SupermemoryTools(
    api_key="your-supermemory-api-key",
    config={
        "project_id": "my-project",  # or use container_tags
        "base_url": "https://custom-endpoint.com",  # optional
    }
)
```
```typescript JavaScript Configuration
import { supermemoryTools } from "@supermemory/tools/openai"

const tools = supermemoryTools(process.env.SUPERMEMORY_API_KEY!, {
  containerTags: ["your-user-id"],
  baseUrl: "https://custom-endpoint.com", // optional
})
```
## Available Tools
### Search Memories
Search through user memories using semantic search:
```python Python
# Search memories
result = await tools.search_memories(
    information_to_get="user preferences",
    limit=10,
    include_full_docs=True
)
print(f"Found {len(result.memories)} memories")
```
```typescript JavaScript
// Search memories
const searchResult = await tools.searchMemories({
  informationToGet: "user preferences",
  limit: 10,
})
console.log(`Found ${searchResult.memories.length} memories`)
```
### Add Memory
Store new information in memory:
```python Python
# Add memory
result = await tools.add_memory(
    memory="User prefers tea over coffee"
)
print(f"Added memory with ID: {result.memory.id}")
```
```typescript JavaScript
// Add memory
const addResult = await tools.addMemory({
  memory: "User prefers dark roast coffee",
})
console.log(`Added memory with ID: ${addResult.memory.id}`)
```
## Individual Tools
Use tools separately for more granular control:
```python Python Individual Tools
from supermemory_openai import (
    create_search_memories_tool,
    create_add_memory_tool,
)

search_tool = create_search_memories_tool("your-api-key")
add_tool = create_add_memory_tool("your-api-key")

# Use individual tools in OpenAI function calling
tools_list = [search_tool, add_tool]
```
```typescript JavaScript Individual Tools
import {
  createSearchMemoriesTool,
  createAddMemoryTool,
} from "@supermemory/tools/openai"

const searchTool = createSearchMemoriesTool(process.env.SUPERMEMORY_API_KEY!)
const addTool = createAddMemoryTool(process.env.SUPERMEMORY_API_KEY!)

// Use individual tools
const toolDefinitions = [searchTool.definition, addTool.definition]
```
## Complete Chat Example
Here's a complete example showing a multi-turn conversation with memory:
```python Complete Python Example
import asyncio

import openai
from supermemory_openai import SupermemoryTools, execute_memory_tool_calls


async def chat_with_memory():
    client = openai.AsyncOpenAI()
    tools = SupermemoryTools(
        api_key="your-supermemory-api-key",
        config={"project_id": "chat-example"}
    )

    messages = [
        {
            "role": "system",
            "content": """You are a helpful assistant with memory capabilities.
            When users share personal information, remember it using addMemory.
            When they ask questions, search your memories to provide personalized responses."""
        }
    ]

    while True:
        user_input = input("You: ")
        if user_input.lower() == "quit":
            break

        messages.append({"role": "user", "content": user_input})

        # Get AI response with tools
        response = await client.chat.completions.create(
            model="gpt-5",
            messages=messages,
            tools=tools.get_tool_definitions()
        )

        # Handle tool calls
        if response.choices[0].message.tool_calls:
            messages.append(response.choices[0].message)

            tool_results = await execute_memory_tool_calls(
                api_key="your-supermemory-api-key",
                tool_calls=response.choices[0].message.tool_calls,
                config={"project_id": "chat-example"}
            )
            messages.extend(tool_results)

            # Get final response after tool execution
            final_response = await client.chat.completions.create(
                model="gpt-5",
                messages=messages
            )
            assistant_message = final_response.choices[0].message.content
        else:
            assistant_message = response.choices[0].message.content

        messages.append({"role": "assistant", "content": assistant_message})
        print(f"Assistant: {assistant_message}")


# Run the chat
asyncio.run(chat_with_memory())
```
```typescript Complete JavaScript Example
import OpenAI from "openai"
import { getToolDefinitions, createToolCallExecutor } from "@supermemory/tools/openai"
import readline from "readline"

const client = new OpenAI()

const executeToolCall = createToolCallExecutor(process.env.SUPERMEMORY_API_KEY!, {
  projectId: "chat-example",
})

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
})

async function chatWithMemory() {
  const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
    {
      role: "system",
      content: `You are a helpful assistant with memory capabilities.
      When users share personal information, remember it using addMemory.
      When they ask questions, search your memories to provide personalized responses.`,
    },
  ]

  const askQuestion = () => {
    rl.question("You: ", async (userInput) => {
      if (userInput.toLowerCase() === "quit") {
        rl.close()
        return
      }

      messages.push({ role: "user", content: userInput })

      // Get AI response with tools
      const response = await client.chat.completions.create({
        model: "gpt-5",
        messages,
        tools: getToolDefinitions(),
      })

      const choice = response.choices[0]

      if (choice?.message.tool_calls) {
        messages.push(choice.message)

        // Execute tool calls
        for (const toolCall of choice.message.tool_calls) {
          const result = await executeToolCall(toolCall)
          messages.push({
            role: "tool",
            tool_call_id: toolCall.id,
            content: JSON.stringify(result),
          })
        }

        // Get final response after tool execution
        const finalResponse = await client.chat.completions.create({
          model: "gpt-5",
          messages,
        })
        const assistantMessage = finalResponse.choices[0]?.message.content || "No response"
        console.log(`Assistant: ${assistantMessage}`)
        messages.push({ role: "assistant", content: assistantMessage })
      } else {
        const assistantMessage = choice?.message.content || "No response"
        console.log(`Assistant: ${assistantMessage}`)
        messages.push({ role: "assistant", content: assistantMessage })
      }

      askQuestion()
    })
  }

  console.log("Chat with memory started. Type 'quit' to exit.")
  askQuestion()
}

chatWithMemory()
```
## Error Handling
Handle errors gracefully in your applications:
```python Python Error Handling
import openai
from supermemory_openai import SupermemoryTools


async def safe_chat():
    try:
        client = openai.AsyncOpenAI()
        tools = SupermemoryTools(api_key="your-api-key")

        response = await client.chat.completions.create(
            model="gpt-5",
            messages=[{"role": "user", "content": "Hello"}],
            tools=tools.get_tool_definitions()
        )
    except openai.APIError as e:
        print(f"OpenAI API error: {e}")
    except Exception as e:
        print(f"Unexpected error: {e}")
```
```typescript JavaScript Error Handling
import OpenAI from "openai"
import { getToolDefinitions } from "@supermemory/tools/openai"

async function safeChat() {
  try {
    const client = new OpenAI()

    const response = await client.chat.completions.create({
      model: "gpt-5",
      messages: [{ role: "user", content: "Hello" }],
      tools: getToolDefinitions(),
    })
  } catch (error) {
    if (error instanceof OpenAI.APIError) {
      console.error("OpenAI API error:", error.message)
    } else {
      console.error("Unexpected error:", error)
    }
  }
}
```
## API Reference
### Python SDK
#### `SupermemoryTools`
**Constructor**
```python
SupermemoryTools(
    api_key: str,
    config: Optional[SupermemoryToolsConfig] = None
)
```
**Methods**
- `get_tool_definitions()` - Get OpenAI function definitions
- `search_memories(information_to_get, limit, include_full_docs)` - Search user memories
- `add_memory(memory)` - Add new memory
- `execute_tool_call(tool_call)` - Execute individual tool call
#### `execute_memory_tool_calls`
```python
execute_memory_tool_calls(
    api_key: str,
    tool_calls: List[ToolCall],
    config: Optional[SupermemoryToolsConfig] = None
) -> List[dict]
```
### JavaScript SDK
#### `supermemoryTools`
```typescript
supermemoryTools(
  apiKey: string,
  config?: { projectId?: string; baseUrl?: string }
)
```
#### `createToolCallExecutor`
```typescript
createToolCallExecutor(
  apiKey: string,
  config?: { projectId?: string; baseUrl?: string }
): (toolCall: OpenAI.Chat.ChatCompletionMessageToolCall) => Promise<unknown>
```
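The executor returns a plain result; to feed it back to the model you wrap it in a `tool` role message, as the complete chat example above does. A minimal helper for that shape might look like this (illustrative only, not exported by the SDK):

```typescript
// Illustrative helper - not part of @supermemory/tools.
// Wraps an executor result in the `tool` role message shape that
// chat.completions.create expects on the follow-up call.
function toToolMessage(toolCallId: string, result: unknown) {
  return {
    role: "tool" as const,
    tool_call_id: toolCallId,
    content: JSON.stringify(result),
  }
}

const msg = toToolMessage("call_123", { memories: [] })
console.log(msg.content) // {"memories":[]}
```

Appending one such message per tool call, followed by a second `chat.completions.create` call, completes the function-calling round trip.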
## Environment Variables
Set these environment variables:
```bash
SUPERMEMORY_API_KEY=your_supermemory_key
OPENAI_API_KEY=your_openai_key
SUPERMEMORY_BASE_URL=https://custom-endpoint.com # optional
```
## Development
### Python Setup
```bash
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# Setup project
git clone
cd packages/openai-sdk-python
uv sync --dev
# Run tests
uv run pytest
# Type checking
uv run mypy src/supermemory_openai
# Formatting
uv run black src/ tests/
uv run isort src/ tests/
```
### JavaScript Setup
```bash
# Install dependencies
npm install
# Run tests
npm test
# Type checking
npm run type-check
# Linting
npm run lint
```
## Next Steps
- Use with the Vercel AI SDK for streamlined development
- Use direct API access for advanced memory management