---
title: "User Profiles with AI SDK"
description: "Automatically inject user profiles into LLM calls for instant personalization"
sidebarTitle: "User Profiles"
---
## Overview
The `withSupermemory` middleware automatically injects user profiles into your LLM calls, providing instant personalization without manual prompt engineering or API calls.
<Note>
**New to User Profiles?** Read the [conceptual overview](/user-profiles) to understand what profiles are and why they're powerful for LLM personalization.
</Note>
## Quick Start
```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

// Wrap any model with Supermemory middleware
const modelWithMemory = withSupermemory(
  openai("gpt-4"), // Your base model
  "user-123" // Container tag (user ID)
)

// Use normally - profiles are automatically injected!
const result = await generateText({
  model: modelWithMemory,
  messages: [{ role: "user", content: "Help me with my current project" }]
})
// The model knows about the user's background, skills, and current work!
```
## How It Works
The `withSupermemory` middleware:
1. **Intercepts** your LLM calls before they reach the model
2. **Fetches** the user's profile based on the container tag
3. **Injects** profile data into the system prompt automatically
4. **Forwards** the enhanced prompt to your LLM
All of this happens transparently: you write code as if you were using a normal model, but get personalized responses.
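The injection step can be sketched as a pure function. This is an illustrative sketch only, not the middleware's actual internals; the `Message` type and `injectProfile` helper are hypothetical and not exported by `@supermemory/tools`:

```typescript
// Illustrative only: a simplified version of the "Injects" step.
// Message and injectProfile are hypothetical, not part of @supermemory/tools.
type Message = { role: "system" | "user" | "assistant"; content: string }

function injectProfile(messages: Message[], profileLines: string[]): Message[] {
  const profileBlock = `User Profile:\n${profileLines.join("\n")}`
  const hasSystem = messages.some((m) => m.role === "system")
  if (hasSystem) {
    // Append the profile to the existing system prompt
    return messages.map((m) =>
      m.role === "system" ? { ...m, content: `${m.content}\n\n${profileBlock}` } : m
    )
  }
  // Otherwise create a new system prompt carrying the profile
  return [{ role: "system", content: profileBlock }, ...messages]
}
```

The real middleware performs the equivalent merge after fetching the profile, which is why the verbose logs below distinguish between "System prompt exists" and "Creating new system prompt".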
## Memory Search Modes
Configure how the middleware retrieves and uses memory:
### Profile Mode (Default)
Retrieves the user's complete profile without query-specific search. Best for general personalization.
```typescript
// Default behavior - profile mode
const model = withSupermemory(openai("gpt-4"), "user-123")

// Equivalent, with the mode set explicitly:
// const model = withSupermemory(openai("gpt-4"), "user-123", {
//   mode: "profile"
// })

const result = await generateText({
  model,
  messages: [{ role: "user", content: "What do you know about me?" }]
})
// Response uses the full user profile for context
```
### Query Mode
Searches memories based on the user's specific message. Best for finding relevant information.
```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "query"
})

const result = await generateText({
  model,
  messages: [{
    role: "user",
    content: "What was that Python script I wrote last week?"
  }]
})
// Searches for memories about Python scripts from last week
```
### Full Mode
Combines profile AND query-based search for comprehensive context. Best for complex interactions.
```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "full"
})

const result = await generateText({
  model,
  messages: [{
    role: "user",
    content: "Help me debug this similar to what we did before"
  }]
})
// Uses both profile (user's expertise) AND search (previous debugging sessions)
```
## Verbose Logging
Enable detailed logging to see exactly what's happening:
```typescript
const model = withSupermemory(openai("gpt-4"), "user-123", {
  verbose: true // Enable detailed logging
})

const result = await generateText({
  model,
  messages: [{ role: "user", content: "Where do I live?" }]
})

// Console output:
// [supermemory] Searching memories for container: user-123
// [supermemory] User message: Where do I live?
// [supermemory] System prompt exists: false
// [supermemory] Found 3 memories
// [supermemory] Memory content: You live in San Francisco, California...
// [supermemory] Creating new system prompt with memories
```
## Comparison with Direct API
The AI SDK middleware abstracts away the complexity of manual profile management:
<Tabs>
<Tab title="With AI SDK (Simple)">
```typescript
// One-line setup
const model = withSupermemory(openai("gpt-4"), "user-123")

// Use normally
const result = await generateText({
  model,
  messages: [{ role: "user", content: "Help me" }]
})
```
</Tab>
<Tab title="Without AI SDK (Complex)">
```typescript
// Manual profile fetching
const profileRes = await fetch('https://api.supermemory.ai/v4/profile', {
  method: 'POST',
  headers: { /* ... */ },
  body: JSON.stringify({ containerTag: "user-123" })
})
const profile = await profileRes.json()

// Manual prompt construction
const systemPrompt = `User Profile:\n${profile.profile.static?.join('\n')}`

// Manual LLM call with profile
const result = await generateText({
  model: openai("gpt-4"),
  messages: [
    { role: "system", content: systemPrompt },
    { role: "user", content: "Help me" }
  ]
})
```
</Tab>
</Tabs>
## Limitations
- **Beta Feature**: The `withSupermemory` middleware is currently in beta
- **Container Tag Required**: You must provide a valid container tag
- **API Key Required**: Ensure `SUPERMEMORY_API_KEY` is set in your environment
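For local development, the API key is typically supplied through the environment, for example via your shell or a `.env` file (the value shown is a placeholder, not a real key format):

```shell
# Make the Supermemory API key available to your Node.js process
export SUPERMEMORY_API_KEY="your-api-key"
```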
## Next Steps
<CardGroup cols={2}>
<Card title="User Profiles Concepts" icon="brain" href="/user-profiles">
Understand how profiles work conceptually
</Card>
<Card title="Memory Tools" icon="wrench" href="/ai-sdk/memory-tools">
Add explicit memory operations to your agents
</Card>
<Card title="API Reference" icon="code" href="https://api.supermemory.ai/v3/reference#tag/profile">
Explore the underlying profile API
</Card>
<Card title="NPM Package" icon="npm" href="https://www.npmjs.com/package/@supermemory/tools">
View the package on NPM
</Card>
</CardGroup>
<Info>
**Pro Tip**: Start with profile mode for general personalization, then experiment with query and full modes as you understand your use case better.
</Info>