---
title: "User Profiles with AI SDK"
description: "Automatically inject user profiles into LLM calls for instant personalization"
sidebarTitle: "User Profiles"
---
## Overview
The `withSupermemory` middleware automatically injects user profiles into your LLM calls, providing instant personalization without manual prompt engineering or API calls.
**New to User Profiles?** Read the [conceptual overview](/user-profiles) to understand what profiles are and why they're powerful for LLM personalization.
## Quick Start
```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

// Wrap any model with Supermemory middleware
const modelWithMemory = withSupermemory(
  openai("gpt-4"), // Your base model
  "user-123" // Container tag (user ID)
)

// Use normally - profiles are automatically injected!
const result = await generateText({
  model: modelWithMemory,
  messages: [{ role: "user", content: "Help me with my current project" }]
})
// The model knows about the user's background, skills, and current work!
```
## How It Works
The `withSupermemory` middleware:
1. **Intercepts** your LLM calls before they reach the model
2. **Fetches** the user's profile based on the container tag
3. **Injects** profile data into the system prompt automatically
4. **Forwards** the enhanced prompt to your LLM
All of this happens transparently - you write code as if using a normal model, but get personalized responses.
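The four steps above can be sketched like this (a simplified conceptual illustration, not the actual middleware internals; `fetchProfile` is a hypothetical stand-in for the real profile lookup):

```typescript
// Simplified sketch of the middleware flow described above.
// `fetchProfile` is a hypothetical stand-in for the real profile lookup.
type Message = { role: "system" | "user" | "assistant"; content: string }

async function withProfileInjection(
  messages: Message[],                              // 1. intercepted messages
  containerTag: string,
  fetchProfile: (tag: string) => Promise<string>
): Promise<Message[]> {
  const profile = await fetchProfile(containerTag)  // 2. fetch the profile
  const system: Message = {
    role: "system",
    content: `User profile:\n${profile}`            // 3. inject into system prompt
  }
  return [system, ...messages]                      // 4. forward enhanced prompt
}
```

In the real middleware all of this is wired into the model wrapper, so your calling code never changes.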
**Memory saving is disabled by default.** The middleware only retrieves existing memories. To automatically save new memories from conversations, set `addMemory: "always"`:
```typescript
const model = withSupermemory(openai("gpt-5"), "user-123", {
  addMemory: "always"
})
```
## Memory Search Modes
Configure how the middleware retrieves and uses memory:
### Profile Mode (Default)
Retrieves the user's complete profile without query-specific search. Best for general personalization.
```typescript
// Default behavior - profile mode
const model = withSupermemory(openai("gpt-4"), "user-123")

// Or specify the mode explicitly
const explicitModel = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "profile"
})

const result = await generateText({
  model,
  messages: [{ role: "user", content: "What do you know about me?" }]
})
// Response uses full user profile for context
```
### Query Mode
Searches memories based on the user's specific message. Best for finding relevant information.
```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "query"
})

const result = await generateText({
  model,
  messages: [{
    role: "user",
    content: "What was that Python script I wrote last week?"
  }]
})
// Searches for memories about Python scripts from last week
```
### Full Mode
Combines profile AND query-based search for comprehensive context. Best for complex interactions.
```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "full"
})

const result = await generateText({
  model,
  messages: [{
    role: "user",
    content: "Help me debug this similar to what we did before"
  }]
})
// Uses both profile (user's expertise) AND search (previous debugging sessions)
```
## Custom Prompt Templates
Customize how memories are formatted and injected into the system prompt using the `promptTemplate` option. This is useful for:
- Using XML-based prompting (e.g., for Claude models)
- Custom branding (removing "supermemories" references)
- Controlling how your agent describes where information comes from
```typescript
import { generateText } from "ai"
import { withSupermemory, type MemoryPromptData } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const customPrompt = (data: MemoryPromptData) => `
Here is some information about your past conversations with the user:
${data.userMemories}
${data.generalSearchMemories}
`.trim()

const model = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "full",
  promptTemplate: customPrompt
})

const result = await generateText({
  model,
  messages: [{ role: "user", content: "What do you know about me?" }]
})
```
### MemoryPromptData Interface
The `MemoryPromptData` object passed to your template function provides:
- `userMemories`: Pre-formatted markdown combining static profile facts (name, preferences, goals) and dynamic context (current projects, recent interests)
- `generalSearchMemories`: Pre-formatted search results based on semantic similarity to the current query (empty string if mode is "profile")
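As a rough sketch, the shape your template receives looks like this (field names per the list above; the exported type may carry additional fields):

```typescript
// Sketch of the MemoryPromptData shape described above;
// the actual exported type may differ slightly.
interface MemoryPromptData {
  userMemories: string          // pre-formatted profile markdown
  generalSearchMemories: string // pre-formatted search results ("" in profile mode)
}

// A template simply maps this object to the injected system-prompt text,
// skipping the search section when it is empty:
const toPrompt = (data: MemoryPromptData): string =>
  [data.userMemories, data.generalSearchMemories].filter(Boolean).join("\n")
```

Because both fields are plain pre-formatted strings, your template never needs to parse or restructure individual memories.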
### XML-Based Prompting for Claude
Claude models perform better with XML-structured prompts:
```typescript
import { anthropic } from "@ai-sdk/anthropic"

// Illustrative XML structure - the tag names are up to you
const claudePrompt = (data: MemoryPromptData) => `
<user_memories>
${data.userMemories}
</user_memories>
<search_results>
${data.generalSearchMemories}
</search_results>
Use the above context to provide personalized responses.
`.trim()

const model = withSupermemory(anthropic("claude-3-sonnet"), "user-123", {
  mode: "full",
  promptTemplate: claudePrompt
})
```
### Custom Branding
Remove "supermemories" references and use your own branding:
```typescript
const brandedPrompt = (data: MemoryPromptData) => `
You are an AI assistant with access to the user's personal knowledge base.

User Profile:
${data.userMemories}

Relevant Context:
${data.generalSearchMemories}

Use this information to provide personalized and contextually relevant responses.
`.trim()

const model = withSupermemory(openai("gpt-4"), "user-123", {
  promptTemplate: brandedPrompt
})
```
### Default Template
If no `promptTemplate` is provided, the default format is used:
```typescript
const defaultPrompt = (data: MemoryPromptData) =>
  `User Supermemories: \n${data.userMemories}\n${data.generalSearchMemories}`.trim()
```
## Verbose Logging
Enable detailed logging to see exactly what's happening:
```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", {
  verbose: true // Enable detailed logging
})

const result = await generateText({
  model,
  messages: [{ role: "user", content: "Where do I live?" }]
})

// Console output:
// [supermemory] Searching memories for container: user-123
// [supermemory] User message: Where do I live?
// [supermemory] System prompt exists: false
// [supermemory] Found 3 memories
// [supermemory] Memory content: You live in San Francisco, California...
// [supermemory] Creating new system prompt with memories
```
## Comparison with Direct API
The AI SDK middleware abstracts away the complexity of manual profile management.

**With the middleware:**

```typescript
// One-line setup
const model = withSupermemory(openai("gpt-4"), "user-123")

// Use normally
const result = await generateText({
  model,
  messages: [{ role: "user", content: "Help me" }]
})
```
**Without the middleware (direct API):**

```typescript
// Manual profile fetching
const profileRes = await fetch("https://api.supermemory.ai/v4/profile", {
  method: "POST",
  headers: { /* ... */ },
  body: JSON.stringify({ containerTag: "user-123" })
})
const profile = await profileRes.json()

// Manual prompt construction
const systemPrompt = `User Profile:\n${profile.profile.static?.join("\n")}`

// Manual LLM call with profile
const result = await generateText({
  model: openai("gpt-4"),
  messages: [
    { role: "system", content: systemPrompt },
    { role: "user", content: "Help me" }
  ]
})
```
## Limitations
- **Beta Feature**: The `withSupermemory` middleware is currently in beta
- **Container Tag Required**: You must provide a valid container tag
- **API Key Required**: Ensure `SUPERMEMORY_API_KEY` is set in your environment
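For the API key requirement, a minimal fail-fast guard can help surface configuration problems early (a sketch, assuming a Node-style environment; call it as `requireSupermemoryKey(process.env)` during startup):

```typescript
// Fail fast if the key is missing. Takes the env map as a parameter
// so it can be tested without touching real environment variables.
function requireSupermemoryKey(
  env: Record<string, string | undefined>
): string {
  const key = env.SUPERMEMORY_API_KEY
  if (!key) {
    throw new Error("SUPERMEMORY_API_KEY is not set")
  }
  return key
}
```

Checking once at startup produces a clear error instead of a failed request deep inside an LLM call.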
## Next Steps
- [Understand how profiles work conceptually](/user-profiles)
- Add explicit memory operations to your agents
- Explore the underlying profile API
- View the package on NPM
**Pro Tip**: Start with profile mode for general personalization, then experiment with query and full modes as you understand your use case better.