path: root/apps/docs/intro.mdx
author: Dhravya Shah <[email protected]> 2025-11-27 09:53:11 -0700
committer: Dhravya Shah <[email protected]> 2025-11-27 09:53:11 -0700
commit: 2f8bafac4ecdbf5eccf49219b898fd6586f338a3 (patch)
tree: 0b97ae1eaab5257a5658da38bcff0e4acd36c602 /apps/docs/intro.mdx
parent: runtime styles injection + let user proxy requests for data in graph package ... (diff)
update quickstart
Diffstat (limited to 'apps/docs/intro.mdx')
-rw-r--r--  apps/docs/intro.mdx  66
1 file changed, 37 insertions, 29 deletions
diff --git a/apps/docs/intro.mdx b/apps/docs/intro.mdx
index 3c0f76b7..efdcee38 100644
--- a/apps/docs/intro.mdx
+++ b/apps/docs/intro.mdx
@@ -15,49 +15,57 @@ Supermemory gives your LLMs long-term memory. Instead of stateless text generati
- Supermemory [intelligently indexes them](/how-it-works) and builds a semantic understanding graph on top of an entity (e.g., a user, a document, a project, an organization).
- At query time, we fetch only the most relevant context and pass it to your models.
-We offer three ways to add memory to your LLMs:
+## Supermemory is context engineering
-### Memory API — full control
+#### Ingestion and Extraction
-- Ingest text, files, and chats (supports multi-modal); search & filter; re-rank results.
-- Modelled after the actual human brain's working with smart forgetting, decay, recency bias, context rewriting, etc.
-- API + SDKs for Node & Python; designed to scale in production.
+Supermemory handles all the extraction, for any data type you have:
+- Text
+- Conversations
+- Files (PDF, Images, Docs)
+- Even videos!
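
The ingestion surface described above can be sketched as a single request shape that covers every data type. This is a hedged illustration: the field names below (`content`, `containerTag`, `metadata`) are assumptions for the sketch, not the documented add-document schema.

```typescript
// Hypothetical sketch of an ingestion request to the Memory API.
// Field names here are illustrative assumptions, not the documented schema.
type IngestRequest = {
  content: string                   // raw text, a conversation, or a file/video URL
  containerTag: string              // scopes the memory to a user, project, or org
  metadata?: Record<string, string> // optional tags for later filtering
}

function buildIngestRequest(
  content: string,
  containerTag: string,
  metadata?: Record<string, string>,
): IngestRequest {
  // Only attach metadata when the caller provides it.
  return { content, containerTag, ...(metadata ? { metadata } : {}) }
}

// The same request shape covers text, conversations, and file URLs.
const req = buildIngestRequest("User prefers dark mode.", "user_123", { source: "settings" })
```

Whatever the real field names turn out to be, the point is one endpoint and one request shape for every data type.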
-<Info>
- You can reference the full API documentation for the Memory API [here](/api-reference/manage-memories/add-memory).
-</Info>
+... and then,
-### AI SDK
+We offer three ways to add context to your LLMs:
-- Native Vercel AI SDK integration with `@supermemory/tools/ai-sdk`
-- Memory tools for agents or infinite chat for automatic context
-- Works with streamText, generateText, and all AI SDK features
+#### Memory API — Learned user context
-```typescript
-import { streamText } from "ai"
-import { supermemoryTools } from "@supermemory/tools/ai-sdk"
+![memory graph](/images/memory-graph.png)
-const result = await streamText({
- model: anthropic("claude-3"),
- tools: supermemoryTools("YOUR_KEY")
-})
-```
+Supermemory learns and builds memory for each user. These are extracted facts about the user that:
+- Evolve on top of existing context about the user, **in real time**
+- Handle **knowledge updates, temporal changes, and forgetfulness**
+- Create a **user profile** that serves as the default context provider for the LLM
-<Info>
-The AI SDK is recommended for new projects using Vercel AI SDK. The Router works best for existing **chat applications**, whereas the Memory API works as a **complete memory database** with granular control.
-</Info>
+_This context can then be passed to the LLM to produce more contextual, personalized responses._
+
+#### User profiles
+
+Having the latest, evolving context about the user also lets us create a **User Profile**: a combination of static and dynamic facts that the agent should **always know**.
+Developers can configure what counts as static and dynamic content in supermemory, depending on their use case:
+- Static: information the agent should **always** know.
+- Dynamic: **episodic** information, such as the last few conversations.
-### Memory Router — drop-in proxy with minimal code
+This leads to much better retrieval and highly personalized responses.
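
The static/dynamic split can be sketched as a small helper that folds a user profile into a system prompt before each LLM call. The type and function names here are hypothetical illustrations, not part of the Supermemory SDK.

```typescript
// Illustrative sketch (not the SDK API): assembling a system prompt from a
// user profile made of static facts plus dynamic, episodic facts.
type UserProfile = {
  staticFacts: string[]  // facts the agent should always know
  dynamicFacts: string[] // episodic facts from recent conversations
}

function profileToSystemPrompt(profile: UserProfile): string {
  return [
    "Known facts about the user:",
    ...profile.staticFacts.map((f) => `- ${f}`),
    "Recent context:",
    ...profile.dynamicFacts.map((f) => `- ${f}`),
  ].join("\n")
}
```

The resulting string would be prepended as the system message, so the agent always sees the static facts and the freshest episodic ones.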
+
+#### RAG — Advanced semantic search
+
+Along with the learned user context, developers can also choose to search the raw ingested content. We provide full RAG-as-a-service, with:
+- Advanced metadata filtering
+- Contextual chunking
+- Tight integration with the memory engine
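
A search call with metadata filtering might look like the sketch below. All parameter names (`q`, `containerTag`, `filters`, `limit`) are assumptions for illustration; the real schema lives in the Memory API reference.

```typescript
// Hypothetical sketch of a semantic-search request with metadata filtering.
// Parameter names are illustrative assumptions, not the documented schema.
type SearchRequest = {
  q: string                         // natural-language query
  containerTag: string              // search within one user's context pool
  filters?: Record<string, string>  // metadata filters applied before ranking
  limit?: number                    // maximum results to return
}

function buildSearchRequest(
  q: string,
  containerTag: string,
  filters?: Record<string, string>,
  limit = 10,
): SearchRequest {
  return { q, containerTag, limit, ...(filters ? { filters } : {}) }
}
```

Note that the same `containerTag` scoping applies here as in ingestion, which is what keeps raw search and learned memory operating over the same pool.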
+
+<Info>
You can find the full API reference for the Memory API [here](/api-reference/manage-documents/add-document).
+</Info>
-- Keep your existing LLM client; just append `api.supermemory.ai/v3/` to your base URL.
-- Automatic chunking and token management that fits your context window.
-- Adds minimal latency on top of existing LLM requests.
<Note>
-All three approaches share the **same memory pool** when using the same user ID. You can mix and match based on your needs.
+All three approaches share the **same context pool** when using the same user ID (`containerTag`). You can mix and match based on your needs.
</Note>
## Next steps
-Head to the [**Router vs API**](/routervsapi) guide to understand the technical differences between the two and pick what’s best for you with a simple 4-question flow.
+Head to the [**How it works**](/how-it-works) guide to understand how supermemory represents and learns from your data.