---
title: "User Profiles with AI SDK"
description: "Automatically inject user profiles into LLM calls for instant personalization"
sidebarTitle: "User Profiles"
---
## Overview
The `withSupermemory` middleware automatically injects user profiles into your LLM calls, providing instant personalization without manual prompt engineering or API calls.
<Note>
**New to User Profiles?** Read the [conceptual overview](/user-profiles) to understand what profiles are and why they're powerful for LLM personalization.
</Note>
## Quick Start
```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"
// Wrap any model with Supermemory middleware
const modelWithMemory = withSupermemory(
openai("gpt-4"), // Your base model
"user-123" // Container tag (user ID)
)
// Use normally - profiles are automatically injected!
const result = await generateText({
model: modelWithMemory,
messages: [{ role: "user", content: "Help me with my current project" }]
})
// The model knows about the user's background, skills, and current work!
```
## How It Works
The `withSupermemory` middleware:
1. **Intercepts** your LLM calls before they reach the model
2. **Fetches** the user's profile based on the container tag
3. **Injects** profile data into the system prompt automatically
4. **Forwards** the enhanced prompt to your LLM
All of this happens transparently: you write code as if you were using a normal model, but get personalized responses.
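Conceptually, the injection step can be sketched as a pure function that merges the fetched profile text into the system prompt. This is an illustrative simplification, not the actual implementation (the real middleware hooks into the AI SDK's middleware pipeline and fetches the profile from the Supermemory API):

```typescript
// Illustrative sketch of step 3 (injection). Function name and exact
// formatting are assumptions for demonstration purposes.
function injectProfile(
  systemPrompt: string | undefined,
  profileText: string
): string {
  const memoryBlock = `User Supermemories:\n${profileText}`
  // If a system prompt already exists, append the profile to it;
  // otherwise a new system prompt is created from the profile alone.
  return systemPrompt ? `${systemPrompt}\n\n${memoryBlock}` : memoryBlock
}

console.log(injectProfile(undefined, "Lives in San Francisco"))
console.log(injectProfile("You are a helpful assistant.", "Prefers TypeScript"))
```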
<Note>
**Memory saving is disabled by default.** The middleware only retrieves existing memories. To automatically save new memories from conversations, set `addMemory: "always"`:
```typescript
const model = withSupermemory(openai("gpt-5"), "user-123", {
addMemory: "always"
})
```
</Note>
## Memory Search Modes
Configure how the middleware retrieves and uses memory:
### Profile Mode (Default)
Retrieves the user's complete profile without query-specific search. Best for general personalization.
```typescript
// Default behavior - profile mode
const model = withSupermemory(openai("gpt-4"), "user-123")
// Or explicitly specify
const model = withSupermemory(openai("gpt-4"), "user-123", {
mode: "profile"
})
const result = await generateText({
model,
messages: [{ role: "user", content: "What do you know about me?" }]
})
// Response uses full user profile for context
```
### Query Mode
Searches memories based on the user's specific message. Best for finding relevant information.
```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", {
mode: "query"
})
const result = await generateText({
model,
messages: [{
role: "user",
content: "What was that Python script I wrote last week?"
}]
})
// Searches for memories about Python scripts from last week
```
### Full Mode
Combines profile AND query-based search for comprehensive context. Best for complex interactions.
```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", {
mode: "full"
})
const result = await generateText({
model,
messages: [{
role: "user",
content: "Help me debug this similar to what we did before"
}]
})
// Uses both profile (user's expertise) AND search (previous debugging sessions)
```
## Custom Prompt Templates
Customize how memories are formatted and injected into the system prompt using the `promptTemplate` option. This is useful for:
- Using XML-based prompting (e.g., for Claude models)
- Custom branding (removing "supermemories" references)
- Controlling how your agent describes where information comes from
```typescript
import { generateText } from "ai"
import { withSupermemory, type MemoryPromptData } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"
const customPrompt = (data: MemoryPromptData) => `
<user_memories>
Here is some information about your past conversations with the user:
${data.userMemories}
${data.generalSearchMemories}
</user_memories>
`.trim()
const model = withSupermemory(openai("gpt-4"), "user-123", {
mode: "full",
promptTemplate: customPrompt
})
const result = await generateText({
model,
messages: [{ role: "user", content: "What do you know about me?" }]
})
```
### MemoryPromptData Interface
The `MemoryPromptData` object passed to your template function provides:
- `userMemories`: Pre-formatted markdown combining static profile facts (name, preferences, goals) and dynamic context (current projects, recent interests)
- `generalSearchMemories`: Pre-formatted search results based on semantic similarity to the current query (empty string if mode is "profile")
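Based on the fields above, the interface can be sketched roughly as follows (an approximation for illustration; check the types exported by `@supermemory/tools/ai-sdk` for the authoritative definition):

```typescript
// Approximate shape of MemoryPromptData, inferred from the field
// descriptions above.
interface MemoryPromptData {
  userMemories: string          // pre-formatted profile markdown
  generalSearchMemories: string // pre-formatted search results ("" in profile mode)
}

// A prompt template is just a function from this data to a system-prompt string:
const template = (data: MemoryPromptData): string =>
  `Profile:\n${data.userMemories}\n${data.generalSearchMemories}`.trim()

console.log(template({ userMemories: "Name: Ada", generalSearchMemories: "" }))
```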
### XML-Based Prompting for Claude
Claude models often perform better with XML-structured prompts:
```typescript
import { anthropic } from "@ai-sdk/anthropic"
const claudePrompt = (data: MemoryPromptData) => `
<context>
<user_profile>
${data.userMemories}
</user_profile>
<relevant_memories>
${data.generalSearchMemories}
</relevant_memories>
</context>
Use the above context to provide personalized responses.
`.trim()
const model = withSupermemory(anthropic("claude-3-sonnet"), "user-123", {
mode: "full",
promptTemplate: claudePrompt
})
```
### Custom Branding
Remove "supermemories" references and use your own branding:
```typescript
const brandedPrompt = (data: MemoryPromptData) => `
You are an AI assistant with access to the user's personal knowledge base.
User Profile:
${data.userMemories}
Relevant Context:
${data.generalSearchMemories}
Use this information to provide personalized and contextually relevant responses.
`.trim()
const model = withSupermemory(openai("gpt-4"), "user-123", {
promptTemplate: brandedPrompt
})
```
### Default Template
If no `promptTemplate` is provided, the default format is used:
```typescript
const defaultPrompt = (data: MemoryPromptData) =>
`User Supermemories: \n${data.userMemories}\n${data.generalSearchMemories}`.trim()
```
## Verbose Logging
Enable detailed logging to see exactly what's happening:
```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", {
verbose: true // Enable detailed logging
})
const result = await generateText({
model,
messages: [{ role: "user", content: "Where do I live?" }]
})
// Console output:
// [supermemory] Searching memories for container: user-123
// [supermemory] User message: Where do I live?
// [supermemory] System prompt exists: false
// [supermemory] Found 3 memories
// [supermemory] Memory content: You live in San Francisco, California...
// [supermemory] Creating new system prompt with memories
```
## Comparison with Direct API
The AI SDK middleware abstracts away the complexity of manual profile management:
<Tabs>
<Tab title="With AI SDK (Simple)">
```typescript
// One line setup
const model = withSupermemory(openai("gpt-4"), "user-123")
// Use normally
const result = await generateText({
model,
messages: [{ role: "user", content: "Help me" }]
})
```
</Tab>
<Tab title="Without AI SDK (Complex)">
```typescript
// Manual profile fetching
const profileRes = await fetch('https://api.supermemory.ai/v4/profile', {
method: 'POST',
headers: { /* ... */ },
body: JSON.stringify({ containerTag: "user-123" })
})
const profile = await profileRes.json()
// Manual prompt construction
const systemPrompt = `User Profile:\n${profile.profile.static?.join('\n')}`
// Manual LLM call with profile
const result = await generateText({
model: openai("gpt-4"),
messages: [
{ role: "system", content: systemPrompt },
{ role: "user", content: "Help me" }
]
})
```
</Tab>
</Tabs>
## Limitations
- **Beta Feature**: The `withSupermemory` middleware is currently in beta
- **Container Tag Required**: You must provide a valid container tag
- **API Key Required**: Ensure `SUPERMEMORY_API_KEY` is set in your environment
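For example, in a shell environment (the key value below is a placeholder):

```shell
export SUPERMEMORY_API_KEY="your-api-key"
```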
## Next Steps
<CardGroup cols={2}>
<Card title="User Profiles Concepts" icon="brain" href="/user-profiles">
Understand how profiles work conceptually
</Card>
<Card title="Memory Tools" icon="wrench" href="/integrations/ai-sdk">
Add explicit memory operations to your agents
</Card>
<Card title="API Reference" icon="code" href="https://api.supermemory.ai/v3/reference#tag/profile">
Explore the underlying profile API
</Card>
<Card title="NPM Package" icon="npm" href="https://www.npmjs.com/package/@supermemory/tools">
View the package on NPM
</Card>
</CardGroup>
<Info>
**Pro Tip**: Start with profile mode for general personalization, then experiment with query and full modes as you understand your use case better.
</Info>