---
title: Quickstart
description: Make your first API call to Supermemory - add and retrieve memories.
---
<Tip>
**Using Vercel AI SDK?** Check out the [AI SDK integration](/ai-sdk/overview) for the cleanest implementation with `@supermemory/tools/ai-sdk`.
</Tip>
## Memory API
**Step 1.** Sign up for [Supermemory's Developer Platform](http://console.supermemory.ai) to get an API key: click **API Keys -> Create API Key** to generate one.

**Step 2.** Install the SDK and set your API key:
<Tabs>
<Tab title="Python">
```bash
pip install supermemory
export SUPERMEMORY_API_KEY="YOUR_API_KEY"
```
</Tab>
<Tab title="TypeScript">
```bash
npm install supermemory
export SUPERMEMORY_API_KEY="YOUR_API_KEY"
```
</Tab>
</Tabs>
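Both SDKs pick up `SUPERMEMORY_API_KEY` from the environment when the client is constructed with no arguments, as in Step 3 below. A tiny guard, sketched here in plain TypeScript with no SDK calls, surfaces a missing key early with a clear message:

```typescript
// Return the API key from an environment object, or fail loudly.
// Pass process.env in real code.
function getApiKey(env: Record<string, string | undefined>): string {
  const key = env.SUPERMEMORY_API_KEY;
  if (!key) {
    throw new Error("SUPERMEMORY_API_KEY is not set - export it as shown above.");
  }
  return key;
}

// Example with an illustrative value:
console.log(getApiKey({ SUPERMEMORY_API_KEY: "sk-demo" }));
```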
**Step 3.** Here's everything you need to add memory to your LLM:
<Tabs>
<Tab title="Python">
```python
from supermemory import Supermemory

client = Supermemory()

USER_ID = "dhravya"

conversation = [
    {"role": "assistant", "content": "Hello, how are you doing?"},
    {"role": "user", "content": "Hello! I am Dhravya. I am 20 years old. I love to code!"},
    {"role": "user", "content": "Can I go to the club?"},
]

# Get user profile + relevant memories for context
profile = client.profile(container_tag=USER_ID, q=conversation[-1]["content"])

# Join outside the f-string: backslashes inside f-string expressions
# require Python 3.12+
static = "\n".join(profile.profile.static)
dynamic = "\n".join(profile.profile.dynamic)
memories = "\n".join(r.content for r in profile.search_results.results)

context = f"""Static profile:
{static}
Dynamic profile:
{dynamic}
Relevant memories:
{memories}"""

# Build messages with memory-enriched context
messages = [{"role": "system", "content": f"User context:\n{context}"}, *conversation]
# response = llm.chat(messages=messages)

# Store conversation for future context
client.add(
    content="\n".join(f"{m['role']}: {m['content']}" for m in conversation),
    container_tag=USER_ID,
)
```
</Tab>
<Tab title="TypeScript">
```typescript
import Supermemory from "supermemory";

const client = new Supermemory();

const USER_ID = "dhravya";

const conversation = [
  { role: "assistant", content: "Hello, how are you doing?" },
  { role: "user", content: "Hello! I am Dhravya. I am 20 years old. I love to code!" },
  { role: "user", content: "Can I go to the club?" },
];

// Get user profile + relevant memories for context
const profile = await client.profile({
  containerTag: USER_ID,
  q: conversation.at(-1)!.content,
});

const context = `Static profile:
${profile.profile.static.join("\n")}
Dynamic profile:
${profile.profile.dynamic.join("\n")}
Relevant memories:
${profile.searchResults.results.map((r) => r.content).join("\n")}`;

// Build messages with memory-enriched context
const messages = [{ role: "system", content: `User context:\n${context}` }, ...conversation];
// const response = await llm.chat({ messages });

// Store conversation for future context
await client.memories.add({
  content: conversation.map((m) => `${m.role}: ${m.content}`).join("\n"),
  containerTag: USER_ID,
});
```
</Tab>
</Tabs>
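The context-assembly step in the examples above is plain string formatting. Isolating it with a mocked profile response (field shapes copied from the TypeScript example; the values are illustrative) shows exactly what the system prompt will contain:

```typescript
// Mocked response shaped like the profile call above - illustrative values only.
const profile = {
  profile: {
    static: ["Name: Dhravya", "Age: 20"],
    dynamic: ["Loves to code"],
  },
  searchResults: {
    results: [{ content: "user: I love to code!" }],
  },
};

// Same template as the quickstart example.
const context = `Static profile:
${profile.profile.static.join("\n")}
Dynamic profile:
${profile.profile.dynamic.join("\n")}
Relevant memories:
${profile.searchResults.results.map((r) => r.content).join("\n")}`;

console.log(context);
```

In real code, `profile` comes from the `client.profile(...)` call shown above; the template stays the same.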
That's it! Supermemory automatically:
- Extracts memories from conversations
- Builds and maintains user profiles (static facts + dynamic context)
- Returns relevant context for personalized LLM responses
Learn more about [User Profiles](/user-profiles) and [Search](/search/overview).