---
title: "User Profiles - Persistent Context for LLMs"
description: "Automatically maintained user profiles that provide instant, comprehensive context to your LLMs"
sidebarTitle: "User Profiles"
icon: "user"
---

## What are User Profiles?

User profiles are **automatically maintained collections of facts about your users** that Supermemory builds from all their interactions and content. Think of it as a persistent "about me" document that's always up-to-date and instantly accessible.

Instead of searching through memories every time you need context about a user, profiles give you:
- **Instant access** to comprehensive user information
- **Automatic updates** as users interact with your system
- **Two-tier structure** separating permanent facts from temporary context

<Note>
  Profile data can be appended to the system prompt so that it's always sent to your LLM and you don't need to run multiple queries.
</Note>

## Static vs Dynamic Profiles

![](/images/static-dynamic-profile.png)

Profiles are intelligently divided into two categories:

### Static Profile
**Long-term, stable facts that define who the user is**

These are facts that rarely change - the foundational information about a user that remains consistent over time.

Examples:
- "Sarah Chen is a senior software engineer at TechCorp"
- "Sarah specializes in distributed systems and Kubernetes"
- "Sarah has a PhD in Computer Science from MIT"
- "Sarah prefers technical documentation over video tutorials"

### Dynamic Profile
**Recent context and temporary information**

These are current activities, recent interests, and temporary states that provide immediate context.

Examples:
- "Sarah is currently migrating the payment service to microservices"
- "Sarah recently started learning Rust for a side project"
- "Sarah is preparing for a conference talk next month"
- "Sarah is debugging a memory leak in the authentication service"

<Accordion title="How are profiles different from search?" defaultOpen>
  **Traditional Search**: You query "What does Sarah know about Kubernetes?" and get specific memory chunks about Kubernetes.
  
  **User Profiles**: You get Sarah's complete professional context instantly - her role, expertise, preferences, and current projects - without needing to craft specific queries.
  
  The profile is **always there**, providing consistent personalization across every interaction.
</Accordion>

## Why We Built Profiles

### The Problem with Search-Only Approaches

Traditional memory systems rely entirely on search, which has fundamental limitations:

1. **Search is too narrow**: When you search for "project updates", you miss that the user prefers bullet points, works in PST timezone, and uses specific technical terminology.

2. **Search is repetitive**: Every chat message triggers multiple searches for basic context that rarely changes.

3. **Search misses relationships**: Individual memory chunks don't capture the full picture of who someone is and how different facts relate.


Profiles solve these problems by maintaining a **persistent, holistic view** of each user.

## How Profiles Work with Search

Profiles don't replace search - they complement it perfectly:

<Steps>
  <Step title="Profile provides foundation">
    The user's profile gives your LLM comprehensive background context about who they are, what they know, and what they're working on.
  </Step>
  
  <Step title="Search adds specificity">
    When you need specific information (like "error in deployment yesterday"), search finds those exact memories.
  </Step>
  
  <Step title="Combined for perfect context">
    Your LLM gets both the broad understanding from profiles AND the specific details from search.
  </Step>
</Steps>
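The three steps above can be sketched as a small helper that merges profile facts with query-specific search hits into one context string. The helper and its field names are illustrative (the shapes follow the `/v4/profile` response documented in this page), not part of any SDK:

```typescript
// Illustrative helper: merge profile facts with search hits into one
// prompt context. Not part of the Supermemory SDK.
interface ProfileData {
  static?: string[];
  dynamic?: string[];
}

interface SearchHit {
  content: string;
}

function buildContext(profile: ProfileData, hits: SearchHit[]): string {
  const sections = [
    "ABOUT THE USER:\n" + (profile.static?.join("\n") ?? "No profile yet."),
    "CURRENT CONTEXT:\n" + (profile.dynamic?.join("\n") ?? "No recent activity."),
  ];
  if (hits.length > 0) {
    // Search results add query-specific detail on top of the broad profile.
    sections.push("RELEVANT MEMORIES:\n" + hits.map((h) => h.content).join("\n"));
  }
  return sections.join("\n\n");
}

const context = buildContext(
  { static: ["Senior engineer"], dynamic: ["Migrating payment service"] },
  [{ content: "Deployment failed yesterday with OOM error" }]
);
console.log(context);
```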

### Real-World Example

Imagine a user asks: **"Can you help me debug this?"**

**Without profiles**: The LLM has no context about the user's expertise level, current projects, or debugging preferences.

**With profiles**: The LLM knows:
- The user is a senior engineer (adjust technical level)
- They're working on a payment service migration (likely context)
- They prefer command-line tools over GUIs (tool suggestions)
- They recently had issues with memory leaks (possible connection)

## Technical Implementation

### Endpoint Details

Based on the [API reference](https://api.supermemory.ai/v3/reference#tag/profile), the profile endpoint provides a simple interface:

**Endpoint**: `POST /v4/profile`

### Request Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `containerTag` | string | **Yes** | The container tag (usually user ID) to get profiles for |
| `q` | string | No | Optional search query to include search results with the profile |

### Response Structure

The response includes both profile data and optional search results:

```json
{
  "profile": {
    "static": [
      "User is a software engineer",
      "User specializes in Python and React"
    ],
    "dynamic": [
      "User is working on Project Alpha",
      "User recently started learning Rust"
    ]
  },
  "searchResults": {
    "results": [...],  // Only if 'q' parameter was provided
    "total": 15,
    "timing": 45.2
  }
}
```
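If you call the endpoint directly, the response shape above can be captured in TypeScript types. This is a sketch inferred from the JSON example; the individual search-result item shape is left as `unknown` because it matches the search endpoint's own schema:

```typescript
// Types mirroring the example /v4/profile response shown above (a sketch,
// not an official SDK type).
interface ProfileResponse {
  profile: {
    static: string[];
    dynamic: string[];
  };
  searchResults?: {
    results: unknown[]; // items follow the search endpoint's result schema
    total: number;
    timing: number;
  };
}

// Narrow an untyped response body; returns null if the shape is unexpected.
function parseProfileResponse(body: unknown): ProfileResponse | null {
  const b = body as ProfileResponse;
  if (!b || !b.profile || !Array.isArray(b.profile.static) || !Array.isArray(b.profile.dynamic)) {
    return null;
  }
  return b;
}
```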

## Code Examples

### Basic Profile Retrieval

<CodeGroup>

```typescript TypeScript
// Direct API call using fetch
const response = await fetch('https://api.supermemory.ai/v4/profile', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    containerTag: 'user_123'
  })
});

const data = await response.json();

console.log("Static facts:", data.profile.static);
console.log("Dynamic context:", data.profile.dynamic);

// Use in your LLM prompt
const systemPrompt = `
User Context:
${data.profile.static?.join('\n') || ''}

Current Activity:
${data.profile.dynamic?.join('\n') || ''}

Please provide personalized assistance based on this context.
`;
```

```python Python
import requests
import os

# Direct API call
response = requests.post(
    'https://api.supermemory.ai/v4/profile',
    headers={
        'Authorization': f'Bearer {os.getenv("SUPERMEMORY_API_KEY")}',
        'Content-Type': 'application/json'
    },
    json={
        'containerTag': 'user_123'
    }
)

data = response.json()

print("Static facts:", data['profile']['static'])
print("Dynamic context:", data['profile']['dynamic'])

# Use in your LLM prompt
static_context = '\n'.join(data['profile'].get('static', []))
dynamic_context = '\n'.join(data['profile'].get('dynamic', []))

system_prompt = f"""
User Context:
{static_context}

Current Activity:
{dynamic_context}

Please provide personalized assistance based on this context.
"""
```

```bash cURL
curl -X POST https://api.supermemory.ai/v4/profile \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "containerTag": "user_123"
  }'
```

</CodeGroup>

### Profile with Search

Sometimes you want both the user's profile AND specific search results:

<CodeGroup>

```typescript TypeScript
// Get profile with search results
const response = await fetch('https://api.supermemory.ai/v4/profile', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    containerTag: 'user_123',
    q: 'deployment errors yesterday'  // Optional search query
  })
});

const data = await response.json();

// Now you have both profile and specific search results
const profile = data.profile;
const searchResults = data.searchResults?.results || [];

// Combine for comprehensive context
const context = {
  userBackground: profile.static,
  currentContext: profile.dynamic,
  specificInfo: searchResults.map(r => r.content)
};
```

```python Python
import requests
import os

# Get profile with search results
response = requests.post(
    'https://api.supermemory.ai/v4/profile',
    headers={
        'Authorization': f'Bearer {os.getenv("SUPERMEMORY_API_KEY")}',
        'Content-Type': 'application/json'
    },
    json={
        'containerTag': 'user_123',
        'q': 'deployment errors yesterday'  # Optional search query
    }
)

data = response.json()

# Access both profile and search results
profile = data['profile']
search_results = data.get('searchResults', {}).get('results', [])

# Combine for comprehensive context
context = {
    'user_background': profile.get('static', []),
    'current_context': profile.get('dynamic', []),
    'specific_info': [r['content'] for r in search_results]
}
```

</CodeGroup>

### Integration with Chat Applications

Here's how to use profiles in a real chat application:

<CodeGroup>

```typescript TypeScript
async function handleChatMessage(userId: string, message: string) {
  // Get user profile for personalization
  const profileResponse = await fetch('https://api.supermemory.ai/v4/profile', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      containerTag: userId
    })
  });
  
  const profileData = await profileResponse.json();

  // Build personalized system prompt
  const systemPrompt = buildPersonalizedPrompt(profileData.profile);

  // Send to your LLM with context
  const response = await llm.chat({
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: message }
    ]
  });

  return response;
}

function buildPersonalizedPrompt(profile: any) {
  return `You are assisting a user with the following context:

ABOUT THE USER:
${profile.static?.join('\n') || 'No profile information yet.'}

CURRENT CONTEXT:
${profile.dynamic?.join('\n') || 'No recent activity.'}

Provide responses that are personalized to their expertise level, 
preferences, and current work context.`;
}
```

```python Python
import requests
import os

async def handle_chat_message(user_id: str, message: str):
    # Get user profile for personalization
    response = requests.post(
        'https://api.supermemory.ai/v4/profile',
        headers={
            'Authorization': f'Bearer {os.getenv("SUPERMEMORY_API_KEY")}',
            'Content-Type': 'application/json'
        },
        json={'containerTag': user_id}
    )
    
    profile_data = response.json()
    
    # Build personalized system prompt
    system_prompt = build_personalized_prompt(profile_data['profile'])
    
    # Send to your LLM with context
    llm_response = await llm.chat(
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": message}
        ]
    )
    
    return llm_response

def build_personalized_prompt(profile):
    # `or` handles both a missing key and an empty list, matching the
    # TypeScript version's fallback behavior
    static_facts = '\n'.join(profile.get('static') or ['No profile information yet.'])
    dynamic_context = '\n'.join(profile.get('dynamic') or ['No recent activity.'])
    
    return f"""You are assisting a user with the following context:

ABOUT THE USER:
{static_facts}

CURRENT CONTEXT:
{dynamic_context}

Provide responses that are personalized to their expertise level, 
preferences, and current work context."""
```

</CodeGroup>

## AI SDK Integration

<Note>
  The Supermemory AI SDK provides a more elegant way to use profiles through the `withSupermemory` middleware, which automatically handles profile retrieval and injection into your LLM prompts.
</Note>

### Automatic Profile Integration

The AI SDK's `withSupermemory` middleware abstracts away all the profile endpoint complexity:

```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

// Automatically injects user profile into every LLM call
const modelWithMemory = withSupermemory(openai("gpt-4"), "user_123")

const result = await generateText({
  model: modelWithMemory,
  messages: [{ role: "user", content: "What do you know about me?" }],
})

// The model automatically has access to the user's profile!
```

### Memory Search Modes

The AI SDK supports three modes for memory retrieval:

#### Profile Mode (Default)
Retrieves user profile memories without query filtering:

```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

// Uses profile mode by default - gets all user profile memories
const modelWithMemory = withSupermemory(openai("gpt-4"), "user-123")

// Explicitly specify profile mode
const modelWithProfile = withSupermemory(openai("gpt-4"), "user-123", { 
  mode: "profile" 
})

const result = await generateText({
  model: modelWithMemory,
  messages: [{ role: "user", content: "What do you know about me?" }],
})
```

#### Query Mode
Searches memories based on the user's message:

```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const modelWithQuery = withSupermemory(openai("gpt-4"), "user-123", { 
  mode: "query" 
})

const result = await generateText({
  model: modelWithQuery,
  messages: [{ role: "user", content: "What's my favorite programming language?" }],
})
```

#### Full Mode
Combines both profile and query results:

```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const modelWithFull = withSupermemory(openai("gpt-4"), "user-123", { 
  mode: "full" 
})

const result = await generateText({
  model: modelWithFull,
  messages: [{ role: "user", content: "Tell me about my preferences" }],
})
```

<Card title="Learn More About AI SDK" icon="triangle" href="/ai-sdk/overview">
  Explore the full capabilities of the Supermemory AI SDK, including tools for adding memories, searching, and automatic profile injection.
</Card>

### Understanding the Modes (Without AI SDK)

When using the API directly without the AI SDK:

- **Profile Only**: Call `/v4/profile` and add the profile data to your system prompt. This gives persistent user context without query-specific search.

- **Query Only**: Use the `/v4/search` endpoint with the user's specific question to find relevant memories based on their current query. Read [the search docs](/search/overview).

- **Full Mode**: Combine both approaches - add profile data to the system prompt AND use the search endpoint for conversational context based on the user's specific query. This provides the most comprehensive context.

```typescript
// Full mode example without AI SDK
async function getFullContext(userId: string, userQuery: string) {
  // 1. Get user profile for system prompt
  const profileResponse = await fetch('https://api.supermemory.ai/v4/profile', {
    method: 'POST',
    headers: { /* ... */ },
    body: JSON.stringify({ containerTag: userId })
  });
  const profileData = await profileResponse.json();
  
  // 2. Search for query-specific memories
  const searchResponse = await fetch('https://api.supermemory.ai/v3/search', {
    method: 'POST',
    headers: { /* ... */ },
    body: JSON.stringify({ 
      q: userQuery,
      containerTag: userId 
    })
  });
  const searchData = await searchResponse.json();
  
  // 3. Combine both in your prompt
  return {
    systemPrompt: `User Profile:\n${profileData.profile.static?.join('\n') ?? ''}`,
    queryContext: searchData.results
  };
}
```
Alternatively, you can pass the `q` parameter to the `/v4/profile` endpoint and get the same search results in a single call. The snippet above simply demonstrates how profile and search can be used separately.

## How Profiles are Built

Profiles are **automatically constructed and maintained** through Supermemory's ingestion pipeline:

<Steps>
  <Step title="Content Ingestion">
    When users add documents, chat, or any content to Supermemory, it goes through the standard ingestion workflow.
  </Step>
  
  <Step title="Intelligence Extraction">
    AI analyzes the content to extract not just memories, but also facts about the user themselves.
  </Step>
  
  <Step title="Profile Operations">
    The system generates profile operations (add, update, or remove facts) based on the new information.
  </Step>
  
  <Step title="Automatic Updates">
    Profiles are updated in real-time, ensuring they always reflect the latest information about the user.
  </Step>
</Steps>
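The "profile operations" in step 3 can be pictured as a small reducer that adds, updates, or removes facts in either tier. This is a toy model for intuition only; the real pipeline is internal to Supermemory:

```typescript
// Toy model of profile maintenance. Each operation adds, updates, or
// removes a fact in the static or dynamic tier. Purely illustrative;
// Supermemory applies these operations internally during ingestion.
type ProfileOp =
  | { op: "add"; tier: "static" | "dynamic"; fact: string }
  | { op: "remove"; tier: "static" | "dynamic"; fact: string }
  | { op: "update"; tier: "static" | "dynamic"; old: string; fact: string };

interface Profile {
  static: string[];
  dynamic: string[];
}

function applyOps(profile: Profile, ops: ProfileOp[]): Profile {
  const next = { static: [...profile.static], dynamic: [...profile.dynamic] };
  for (const o of ops) {
    const tier = next[o.tier];
    if (o.op === "add") {
      tier.push(o.fact);
    } else if (o.op === "remove") {
      const i = tier.indexOf(o.fact);
      if (i !== -1) tier.splice(i, 1);
    } else {
      // update: replace the old fact in place if it exists
      const i = tier.indexOf(o.old);
      if (i !== -1) tier[i] = o.fact;
    }
  }
  return next;
}

const updated = applyOps(
  { static: ["Works at TechCorp"], dynamic: [] },
  [{ op: "add", tier: "dynamic", fact: "Preparing a conference talk" }]
);
```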

<Note>
  You don't need to manually manage profiles - they're automatically maintained as users interact with your system. Just ingest content normally, and profiles build themselves.
</Note>


## Common Use Cases

### Personalized AI Assistants
Profiles ensure your AI assistant remembers user preferences, expertise, and context across conversations.

### Customer Support Systems
Support agents (or AI) instantly see customer history, preferences, and current issues without manual searches.

### Educational Platforms
Adapt content difficulty and teaching style based on the learner's profile and progress.

### Development Tools
IDE assistants that understand your coding style, current projects, and technical preferences.

## Performance Benefits

Profiles provide significant performance improvements:

| Metric | Without Profiles | With Profiles |
|--------|-----------------|---------------|
| Context Retrieval | 3-5 search queries | 1 profile call |
| Response Time | 200-500ms | 50-100ms |
| Token Usage | High (multiple searches) | Low (single response) |
| Consistency | Varies by search quality | Always comprehensive |