---
title: "Overview — What is Supermemory?"
sidebarTitle: "Overview"
description: "Add long-term memory to your LLMs with three integration paths: AI SDK, Memory API, or Memory Router."
---
Supermemory gives your LLMs long-term memory. Instead of stateless text generation, they recall the right facts from your files, chats, and tools, so responses stay consistent, contextual, and personal.
## How does it work? (at a glance)

- You send Supermemory text, files, and chats.
- Supermemory [intelligently indexes them](/how-it-works) and builds a semantic understanding graph on top of an entity (e.g., a user, a document, a project, an organization).
- At query time, we fetch only the most relevant context and pass it to your models.

We offer three ways to add memory to your LLMs:
### Memory API — full control
- Ingest text, files, and chats (supports multi-modal); search & filter; re-rank results.
- Modelled on how human memory works: smart forgetting, decay, recency bias, context rewriting, and more.
- API + SDKs for Node & Python; designed to scale in production.
<Info>
You can reference the full API documentation for the Memory API [here](/api-reference/manage-memories/add-memory).
</Info>
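As an illustrative sketch of the ingest flow, adding a memory is a single POST. The field names and endpoint path below are assumptions for illustration; check the API reference linked above for the confirmed shapes:

```typescript
// Hypothetical request payload for adding a memory (field names are assumptions).
const addMemoryRequest = {
  method: "POST",
  headers: {
    Authorization: "Bearer YOUR_KEY",
    "Content-Type": "application/json",
  },
  // Scope the memory to an entity (e.g., a user) so later searches
  // retrieve only that entity's context.
  body: JSON.stringify({
    content: "Alice prefers dark mode and works in the Pacific timezone",
    containerTags: ["user_alice"],
  }),
}

// In a real app you would send it, e.g.:
// await fetch("https://api.supermemory.ai/v3/memories", addMemoryRequest)
```

At query time you search with the same entity scope, and Supermemory returns the most relevant memories rather than everything it has stored.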
### AI SDK
- Native Vercel AI SDK integration with `@supermemory/tools/ai-sdk`
- Memory tools for agents or infinite chat for automatic context
- Works with streamText, generateText, and all AI SDK features
```typescript
import { streamText } from "ai"
import { anthropic } from "@ai-sdk/anthropic"
import { supermemoryTools } from "@supermemory/tools/ai-sdk"

const result = await streamText({
  model: anthropic("claude-3-5-sonnet-latest"),
  prompt: "What do you remember about my project preferences?",
  tools: supermemoryTools("YOUR_KEY"),
})
```
<Info>
The AI SDK is recommended for new projects using Vercel AI SDK. The Router works best for existing **chat applications**, whereas the Memory API works as a **complete memory database** with granular control.
</Info>
### Memory Router — drop-in proxy with minimal code
- Keep your existing LLM client; just prepend `https://api.supermemory.ai/v3/` to your base URL.
- Automatic chunking and token management that fits your context window.
- Adds minimal latency on top of existing LLM requests.
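The proxy pattern above can be sketched as a one-line base-URL change. The helper name is hypothetical and the exact URL scheme is an assumption based on the prepend pattern described in the bullets; verify it against the Router documentation:

```typescript
// Hypothetical helper: the router URL is prefixed in front of your
// provider's existing base URL, so requests flow through Supermemory
// before reaching the provider.
function withSupermemoryRouter(providerBaseUrl: string): string {
  return `https://api.supermemory.ai/v3/${providerBaseUrl}`
}

const baseURL = withSupermemoryRouter("https://api.openai.com/v1")
// Pass `baseURL` wherever your existing client accepts a base URL;
// no other code changes are needed.
```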
<Note>
All three approaches share the **same memory pool** when using the same user ID. You can mix and match based on your needs.
</Note>
## Next steps
Head to the [**Router vs API**](/routervsapi) guide to understand the technical differences between the two and pick what’s best for you with a simple 4-question flow.