author     Naman Bansal <[email protected]>  2025-10-10 19:47:57 +0800
committer  Naman Bansal <[email protected]>  2025-10-10 19:47:57 +0800
commit     01a09fac56eb8cd4ecf5fb73619e753d1d106ce0 (patch)
tree       36af7f6c2e54d42a3f8a1d51ed4735d338544474 /apps/docs/ai-sdk
parent     feat: ai sdk language model withSupermemory (#446) (diff)
feat: profile page updates
Diffstat (limited to 'apps/docs/ai-sdk')
-rw-r--r--  apps/docs/ai-sdk/overview.mdx        28
-rw-r--r--  apps/docs/ai-sdk/user-profiles.mdx  204
2 files changed, 230 insertions, 2 deletions
diff --git a/apps/docs/ai-sdk/overview.mdx b/apps/docs/ai-sdk/overview.mdx
index 3192fea1..f084aba9 100644
--- a/apps/docs/ai-sdk/overview.mdx
+++ b/apps/docs/ai-sdk/overview.mdx
@@ -4,7 +4,7 @@ description: "Use Supermemory with Vercel AI SDK for seamless memory management"
sidebarTitle: "Overview"
---
-The Supermemory AI SDK provides native integration with Vercel's AI SDK through two approaches: **Memory Tools** for agent-based interactions and **Infinite Chat** for automatic context management.
+The Supermemory AI SDK provides native integration with Vercel's AI SDK through three approaches: **User Profiles** for automatic personalization, **Memory Tools** for agent-based interactions, and **Infinite Chat** for automatic context management.
<Card title="Supermemory tools on npm" icon="npm" href="https://www.npmjs.com/package/@supermemory/tools">
Check out the NPM page for more details
@@ -16,6 +16,25 @@ The Supermemory AI SDK provides native integration with Vercel's AI SDK through
npm install @supermemory/tools
```
+## User Profiles with Middleware
+
+Automatically inject user profiles into every LLM call for instant personalization.
+
+```typescript
+import { generateText } from "ai"
+import { withSupermemory } from "@supermemory/tools/ai-sdk"
+import { openai } from "@ai-sdk/openai"
+
+// Wrap your model with Supermemory - profiles are automatically injected
+const modelWithMemory = withSupermemory(openai("gpt-4"), "user-123")
+
+const result = await generateText({
+ model: modelWithMemory,
+ messages: [{ role: "user", content: "What do you know about me?" }]
+})
+// The model automatically has the user's profile context!
+```
+
## Memory Tools
Add memory capabilities to AI agents with search, add, and fetch operations.
@@ -64,12 +83,17 @@ const result = await streamText({
| Approach | Use Case |
|----------|----------|
+| User Profiles | Personalized LLM responses with automatic user context |
| Memory Tools | AI agents that need explicit memory control |
| Infinite Chat | Chat applications with automatic context |
## Next Steps
-<CardGroup cols={2}>
+<CardGroup cols={3}>
+ <Card title="User Profiles" icon="user" href="/ai-sdk/user-profiles">
+ Automatic personalization with profiles
+ </Card>
+
<Card title="Memory Tools" icon="wrench" href="/ai-sdk/memory-tools">
Agent-based memory management
</Card>
diff --git a/apps/docs/ai-sdk/user-profiles.mdx b/apps/docs/ai-sdk/user-profiles.mdx
new file mode 100644
index 00000000..ce8f5398
--- /dev/null
+++ b/apps/docs/ai-sdk/user-profiles.mdx
@@ -0,0 +1,204 @@
+---
+title: "User Profiles with AI SDK"
+description: "Automatically inject user profiles into LLM calls for instant personalization"
+sidebarTitle: "User Profiles"
+---
+
+## Overview
+
+The `withSupermemory` middleware automatically injects user profiles into your LLM calls, providing instant personalization without manual prompt engineering or API calls.
+
+<Note>
+ **New to User Profiles?** Read the [conceptual overview](/user-profiles) to understand what profiles are and why they're powerful for LLM personalization.
+</Note>
+
+## Quick Start
+
+```typescript
+import { generateText } from "ai"
+import { withSupermemory } from "@supermemory/tools/ai-sdk"
+import { openai } from "@ai-sdk/openai"
+
+// Wrap any model with Supermemory middleware
+const modelWithMemory = withSupermemory(
+ openai("gpt-4"), // Your base model
+ "user-123" // Container tag (user ID)
+)
+
+// Use normally - profiles are automatically injected!
+const result = await generateText({
+ model: modelWithMemory,
+ messages: [{ role: "user", content: "Help me with my current project" }]
+})
+
+// The model knows about the user's background, skills, and current work!
+```
+
+## How It Works
+
+The `withSupermemory` middleware:
+
+1. **Intercepts** your LLM calls before they reach the model
+2. **Fetches** the user's profile based on the container tag
+3. **Injects** profile data into the system prompt automatically
+4. **Forwards** the enhanced prompt to your LLM
+
+All of this happens transparently: you write code as if you were using a normal model, but get personalized responses.
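+The four steps above can be sketched as a minimal wrapper. This is an illustrative sketch only, not the actual `withSupermemory` implementation: the `Message`/`Model` types are simplified stand-ins for the AI SDK's interfaces, and `fetchProfile` is a hypothetical placeholder for the real Supermemory profile lookup.
+
+```typescript
+// Simplified message and model shapes (stand-ins for the AI SDK's types)
+type Message = { role: "system" | "user" | "assistant"; content: string }
+type Model = { generate(messages: Message[]): Promise<string> }
+
+// Hypothetical profile lookup; the real middleware calls the
+// Supermemory profile API here.
+async function fetchProfile(containerTag: string): Promise<string> {
+  return `User profile for ${containerTag}: lives in San Francisco`
+}
+
+function withProfileMiddleware(model: Model, containerTag: string): Model {
+  return {
+    // Step 1: intercept the call before it reaches the base model
+    async generate(messages) {
+      // Step 2: fetch the user's profile for this container tag
+      const profile = await fetchProfile(containerTag)
+      // Step 3: inject the profile into the existing system prompt,
+      // or create a new system message if none exists
+      const hasSystem = messages.some((m) => m.role === "system")
+      const enhanced: Message[] = hasSystem
+        ? messages.map((m) =>
+            m.role === "system"
+              ? { ...m, content: `${m.content}\n\n${profile}` }
+              : m,
+          )
+        : [{ role: "system", content: profile }, ...messages]
+      // Step 4: forward the enhanced prompt to the underlying model
+      return model.generate(enhanced)
+    },
+  }
+}
+```
+
+The real middleware hooks into the AI SDK's model-wrapping machinery rather than a hand-rolled interface, but the flow is the same: the caller never constructs the system prompt by hand.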
+
+## Memory Search Modes
+
+Configure how the middleware retrieves and uses memory:
+
+### Profile Mode (Default)
+
+Retrieves the user's complete profile without query-specific search. Best for general personalization.
+
+```typescript
+// Default behavior - profile mode
+const model = withSupermemory(openai("gpt-4"), "user-123")
+
+// Or explicitly specify
+const model = withSupermemory(openai("gpt-4"), "user-123", {
+ mode: "profile"
+})
+
+const result = await generateText({
+ model,
+ messages: [{ role: "user", content: "What do you know about me?" }]
+})
+// Response uses full user profile for context
+```
+
+### Query Mode
+
+Searches memories based on the user's specific message. Best for finding relevant information.
+
+```typescript
+const model = withSupermemory(openai("gpt-4"), "user-123", {
+ mode: "query"
+})
+
+const result = await generateText({
+ model,
+ messages: [{
+ role: "user",
+ content: "What was that Python script I wrote last week?"
+ }]
+})
+// Searches for memories about Python scripts from last week
+```
+
+### Full Mode
+
+Combines profile AND query-based search for comprehensive context. Best for complex interactions.
+
+```typescript
+const model = withSupermemory(openai("gpt-4"), "user-123", {
+ mode: "full"
+})
+
+const result = await generateText({
+ model,
+ messages: [{
+ role: "user",
+ content: "Help me debug this similar to what we did before"
+ }]
+})
+// Uses both profile (user's expertise) AND search (previous debugging sessions)
+```
+
+## Verbose Logging
+
+Enable detailed logging to see exactly what's happening:
+
+```typescript
+const model = withSupermemory(openai("gpt-4"), "user-123", {
+ verbose: true // Enable detailed logging
+})
+
+const result = await generateText({
+ model,
+ messages: [{ role: "user", content: "Where do I live?" }]
+})
+
+// Console output:
+// [supermemory] Searching memories for container: user-123
+// [supermemory] User message: Where do I live?
+// [supermemory] System prompt exists: false
+// [supermemory] Found 3 memories
+// [supermemory] Memory content: You live in San Francisco, California...
+// [supermemory] Creating new system prompt with memories
+```
+
+## Comparison with Direct API
+
+The AI SDK middleware abstracts away the complexity of manual profile management:
+
+<Tabs>
+ <Tab title="With AI SDK (Simple)">
+ ```typescript
+ // One line setup
+ const model = withSupermemory(openai("gpt-4"), "user-123")
+
+ // Use normally
+ const result = await generateText({
+ model,
+ messages: [{ role: "user", content: "Help me" }]
+ })
+ ```
+ </Tab>
+
+ <Tab title="Without AI SDK (Complex)">
+ ```typescript
+ // Manual profile fetching
+ const profileRes = await fetch('https://api.supermemory.ai/v4/profile', {
+ method: 'POST',
+ headers: { /* ... */ },
+ body: JSON.stringify({ containerTag: "user-123" })
+ })
+ const profile = await profileRes.json()
+
+ // Manual prompt construction
+ const systemPrompt = `User Profile:\n${profile.profile.static?.join('\n')}`
+
+ // Manual LLM call with profile
+ const result = await generateText({
+ model: openai("gpt-4"),
+ messages: [
+ { role: "system", content: systemPrompt },
+ { role: "user", content: "Help me" }
+ ]
+ })
+ ```
+ </Tab>
+</Tabs>
+
+## Limitations
+
+- **Beta Feature**: The `withSupermemory` middleware is currently in beta
+- **Container Tag Required**: You must provide a valid container tag
+- **API Key Required**: Ensure `SUPERMEMORY_API_KEY` is set in your environment
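+In practice, the API key requirement above means exporting the key in the environment your app runs in before wrapping a model (the key value shown is a hypothetical placeholder):
+
+```shell
+# Make the key available to the process that calls withSupermemory
+export SUPERMEMORY_API_KEY="your-api-key-here"
+```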
+
+## Next Steps
+
+<CardGroup cols={2}>
+ <Card title="User Profiles Concepts" icon="brain" href="/user-profiles">
+ Understand how profiles work conceptually
+ </Card>
+
+ <Card title="Memory Tools" icon="wrench" href="/ai-sdk/memory-tools">
+ Add explicit memory operations to your agents
+ </Card>
+
+ <Card title="API Reference" icon="code" href="https://api.supermemory.ai/v3/reference#tag/profile">
+ Explore the underlying profile API
+ </Card>
+
+ <Card title="NPM Package" icon="npm" href="https://www.npmjs.com/package/@supermemory/tools">
+ View the package on NPM
+ </Card>
+</CardGroup>
+
+<Info>
+ **Pro Tip**: Start with profile mode for general personalization, then experiment with query and full modes as you understand your use case better.
+</Info>