author    Naman Bansal <[email protected]>    2025-10-10 19:47:57 +0800
committer Naman Bansal <[email protected]>    2025-10-10 19:47:57 +0800
commit    01a09fac56eb8cd4ecf5fb73619e753d1d106ce0 (patch)
tree      36af7f6c2e54d42a3f8a1d51ed4735d338544474
parent    feat: ai sdk language model withSupermemory (#446) (diff)
download  supermemory-01a09fac56eb8cd4ecf5fb73619e753d1d106ce0.tar.xz
          supermemory-01a09fac56eb8cd4ecf5fb73619e753d1d106ce0.zip
feat: profile page updates
-rw-r--r--  apps/docs/ai-sdk/overview.mdx                 28
-rw-r--r--  apps/docs/ai-sdk/user-profiles.mdx           204
-rw-r--r--  apps/docs/docs.json                            2
-rw-r--r--  apps/docs/images/static-dynamic-profile.png  bin 0 -> 157923 bytes
-rw-r--r--  apps/docs/user-profiles.mdx                  570
5 files changed, 802 insertions, 2 deletions
diff --git a/apps/docs/ai-sdk/overview.mdx b/apps/docs/ai-sdk/overview.mdx
index 3192fea1..f084aba9 100644
--- a/apps/docs/ai-sdk/overview.mdx
+++ b/apps/docs/ai-sdk/overview.mdx
@@ -4,7 +4,7 @@ description: "Use Supermemory with Vercel AI SDK for seamless memory management"
sidebarTitle: "Overview"
---
-The Supermemory AI SDK provides native integration with Vercel's AI SDK through two approaches: **Memory Tools** for agent-based interactions and **Infinite Chat** for automatic context management.
+The Supermemory AI SDK provides native integration with Vercel's AI SDK through three approaches: **User Profiles** for automatic personalization, **Memory Tools** for agent-based interactions, and **Infinite Chat** for automatic context management.
<Card title="Supermemory tools on npm" icon="npm" href="https://www.npmjs.com/package/@supermemory/tools">
Check out the NPM page for more details
@@ -16,6 +16,25 @@ The Supermemory AI SDK provides native integration with Vercel's AI SDK through
npm install @supermemory/tools
```
+## User Profiles with Middleware
+
+Automatically inject user profiles into every LLM call for instant personalization.
+
+```typescript
+import { generateText } from "ai"
+import { withSupermemory } from "@supermemory/tools/ai-sdk"
+import { openai } from "@ai-sdk/openai"
+
+// Wrap your model with Supermemory - profiles are automatically injected
+const modelWithMemory = withSupermemory(openai("gpt-4"), "user-123")
+
+const result = await generateText({
+ model: modelWithMemory,
+ messages: [{ role: "user", content: "What do you know about me?" }]
+})
+// The model automatically has the user's profile context!
+```
+
## Memory Tools
Add memory capabilities to AI agents with search, add, and fetch operations.
@@ -64,12 +83,17 @@ const result = await streamText({
| Approach | Use Case |
|----------|----------|
+| User Profiles | Personalized LLM responses with automatic user context |
| Memory Tools | AI agents that need explicit memory control |
| Infinite Chat | Chat applications with automatic context |
## Next Steps
-<CardGroup cols={2}>
+<CardGroup cols={3}>
+ <Card title="User Profiles" icon="user" href="/ai-sdk/user-profiles">
+ Automatic personalization with profiles
+ </Card>
+
<Card title="Memory Tools" icon="wrench" href="/ai-sdk/memory-tools">
Agent-based memory management
</Card>
diff --git a/apps/docs/ai-sdk/user-profiles.mdx b/apps/docs/ai-sdk/user-profiles.mdx
new file mode 100644
index 00000000..ce8f5398
--- /dev/null
+++ b/apps/docs/ai-sdk/user-profiles.mdx
@@ -0,0 +1,204 @@
+---
+title: "User Profiles with AI SDK"
+description: "Automatically inject user profiles into LLM calls for instant personalization"
+sidebarTitle: "User Profiles"
+---
+
+## Overview
+
+The `withSupermemory` middleware automatically injects user profiles into your LLM calls, providing instant personalization without manual prompt engineering or API calls.
+
+<Note>
+ **New to User Profiles?** Read the [conceptual overview](/user-profiles) to understand what profiles are and why they're powerful for LLM personalization.
+</Note>
+
+## Quick Start
+
+```typescript
+import { generateText } from "ai"
+import { withSupermemory } from "@supermemory/tools/ai-sdk"
+import { openai } from "@ai-sdk/openai"
+
+// Wrap any model with Supermemory middleware
+const modelWithMemory = withSupermemory(
+ openai("gpt-4"), // Your base model
+ "user-123" // Container tag (user ID)
+)
+
+// Use normally - profiles are automatically injected!
+const result = await generateText({
+ model: modelWithMemory,
+ messages: [{ role: "user", content: "Help me with my current project" }]
+})
+
+// The model knows about the user's background, skills, and current work!
+```
+
+## How It Works
+
+The `withSupermemory` middleware:
+
+1. **Intercepts** your LLM calls before they reach the model
+2. **Fetches** the user's profile based on the container tag
+3. **Injects** profile data into the system prompt automatically
+4. **Forwards** the enhanced prompt to your LLM
+
+All of this happens transparently - you write code as if using a normal model, but get personalized responses.
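The injection step can be pictured as a small pure function over the message list. The sketch below is illustrative only; `injectProfile` and its types are assumptions for this example, not the middleware's real internals:

```typescript
// Illustrative sketch of the injection step (not the actual middleware source).
type Message = { role: "system" | "user" | "assistant"; content: string }
type Profile = { static?: string[]; dynamic?: string[] }

function injectProfile(messages: Message[], profile: Profile): Message[] {
  const facts = [...(profile.static ?? []), ...(profile.dynamic ?? [])].join("\n")
  if (facts.length === 0) return messages // nothing to inject

  const hasSystem = messages.some((m) => m.role === "system")
  if (hasSystem) {
    // Append the profile to the existing system prompt
    return messages.map((m) =>
      m.role === "system"
        ? { ...m, content: `${m.content}\n\nUser profile:\n${facts}` }
        : m
    )
  }
  // Otherwise create a new system prompt carrying the profile
  return [{ role: "system", content: `User profile:\n${facts}` }, ...messages]
}
```

This mirrors the verbose-log behavior shown later ("System prompt exists: false" followed by "Creating new system prompt with memories").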
+
+## Memory Search Modes
+
+Configure how the middleware retrieves and uses memory:
+
+### Profile Mode (Default)
+
+Retrieves the user's complete profile without query-specific search. Best for general personalization.
+
+```typescript
+// Default behavior - profile mode
+const model = withSupermemory(openai("gpt-4"), "user-123")
+
+// Or explicitly specify
+const model = withSupermemory(openai("gpt-4"), "user-123", {
+ mode: "profile"
+})
+
+const result = await generateText({
+ model,
+ messages: [{ role: "user", content: "What do you know about me?" }]
+})
+// Response uses full user profile for context
+```
+
+### Query Mode
+
+Searches memories based on the user's specific message. Best for finding relevant information.
+
+```typescript
+const model = withSupermemory(openai("gpt-4"), "user-123", {
+ mode: "query"
+})
+
+const result = await generateText({
+ model,
+ messages: [{
+ role: "user",
+ content: "What was that Python script I wrote last week?"
+ }]
+})
+// Searches for memories about Python scripts from last week
+```
+
+### Full Mode
+
+Combines profile AND query-based search for comprehensive context. Best for complex interactions.
+
+```typescript
+const model = withSupermemory(openai("gpt-4"), "user-123", {
+ mode: "full"
+})
+
+const result = await generateText({
+ model,
+ messages: [{
+ role: "user",
+ content: "Help me debug this similar to what we did before"
+ }]
+})
+// Uses both profile (user's expertise) AND search (previous debugging sessions)
+```
+
+## Verbose Logging
+
+Enable detailed logging to see exactly what's happening:
+
+```typescript
+const model = withSupermemory(openai("gpt-4"), "user-123", {
+ verbose: true // Enable detailed logging
+})
+
+const result = await generateText({
+ model,
+ messages: [{ role: "user", content: "Where do I live?" }]
+})
+
+// Console output:
+// [supermemory] Searching memories for container: user-123
+// [supermemory] User message: Where do I live?
+// [supermemory] System prompt exists: false
+// [supermemory] Found 3 memories
+// [supermemory] Memory content: You live in San Francisco, California...
+// [supermemory] Creating new system prompt with memories
+```
+
+## Comparison with Direct API
+
+The AI SDK middleware abstracts away the complexity of manual profile management:
+
+<Tabs>
+ <Tab title="With AI SDK (Simple)">
+ ```typescript
+ // One line setup
+ const model = withSupermemory(openai("gpt-4"), "user-123")
+
+ // Use normally
+ const result = await generateText({
+ model,
+ messages: [{ role: "user", content: "Help me" }]
+ })
+ ```
+ </Tab>
+
+ <Tab title="Without AI SDK (Complex)">
+ ```typescript
+ // Manual profile fetching
+ const profileRes = await fetch('https://api.supermemory.ai/v4/profile', {
+ method: 'POST',
+ headers: { /* ... */ },
+ body: JSON.stringify({ containerTag: "user-123" })
+ })
+ const profileData = await profileRes.json()
+
+ // Manual prompt construction
+ const systemPrompt = `User Profile:\n${profileData.profile.static?.join('\n')}`
+
+ // Manual LLM call with profile
+ const result = await generateText({
+ model: openai("gpt-4"),
+ messages: [
+ { role: "system", content: systemPrompt },
+ { role: "user", content: "Help me" }
+ ]
+ })
+ ```
+ </Tab>
+</Tabs>
+
+## Limitations
+
+- **Beta Feature**: The `withSupermemory` middleware is currently in beta
+- **Container Tag Required**: You must provide a valid container tag
+- **API Key Required**: Ensure `SUPERMEMORY_API_KEY` is set in your environment
+
+## Next Steps
+
+<CardGroup cols={2}>
+ <Card title="User Profiles Concepts" icon="brain" href="/user-profiles">
+ Understand how profiles work conceptually
+ </Card>
+
+ <Card title="Memory Tools" icon="wrench" href="/ai-sdk/memory-tools">
+ Add explicit memory operations to your agents
+ </Card>
+
+ <Card title="API Reference" icon="code" href="https://api.supermemory.ai/v3/reference#tag/profile">
+ Explore the underlying profile API
+ </Card>
+
+ <Card title="NPM Package" icon="npm" href="https://www.npmjs.com/package/@supermemory/tools">
+ View the package on NPM
+ </Card>
+</CardGroup>
+
+<Info>
+ **Pro Tip**: Start with profile mode for general personalization, then experiment with query and full modes as you understand your use case better.
+</Info>
diff --git a/apps/docs/docs.json b/apps/docs/docs.json
index b18f89d9..219be1ca 100644
--- a/apps/docs/docs.json
+++ b/apps/docs/docs.json
@@ -112,6 +112,7 @@
]
},
"search/filtering",
+ "user-profiles",
"memory-api/track-progress",
{
"group": "List Memories",
@@ -193,6 +194,7 @@
"icon": "triangle",
"pages": [
"ai-sdk/overview",
+ "ai-sdk/user-profiles",
"ai-sdk/memory-tools",
"ai-sdk/infinite-chat",
"ai-sdk/npm"
diff --git a/apps/docs/images/static-dynamic-profile.png b/apps/docs/images/static-dynamic-profile.png
new file mode 100644
index 00000000..d12a8611
--- /dev/null
+++ b/apps/docs/images/static-dynamic-profile.png
Binary files differ
diff --git a/apps/docs/user-profiles.mdx b/apps/docs/user-profiles.mdx
new file mode 100644
index 00000000..c6a097ec
--- /dev/null
+++ b/apps/docs/user-profiles.mdx
@@ -0,0 +1,570 @@
+---
+title: "User Profiles - Persistent Context for LLMs"
+description: "Automatically maintained user profiles that provide instant, comprehensive context to your LLMs"
+sidebarTitle: "User Profiles"
+icon: "user"
+---
+
+## What are User Profiles?
+
+User profiles are **automatically maintained collections of facts about your users** that Supermemory builds from all their interactions and content. Think of it as a persistent "about me" document that's always up-to-date and instantly accessible.
+
+Instead of searching through memories every time you need context about a user, profiles give you:
+- **Instant access** to comprehensive user information
+- **Automatic updates** as users interact with your system
+- **Two-tier structure** separating permanent facts from temporary context
+
+<Note>
+ Profile data can be appended to the system prompt so that it's always sent to your LLM and you don't need to run multiple queries.
+</Note>
+
+## Static vs Dynamic Profiles
+
+![](/images/static-dynamic-profile.png)
+
+Profiles are intelligently divided into two categories:
+
+### Static Profile
+**Long-term, stable facts that define who the user is**
+
+These are facts that rarely change - the foundational information about a user that remains consistent over time.
+
+Examples:
+- "Sarah Chen is a senior software engineer at TechCorp"
+- "Sarah specializes in distributed systems and Kubernetes"
+- "Sarah has a PhD in Computer Science from MIT"
+- "Sarah prefers technical documentation over video tutorials"
+
+### Dynamic Profile
+**Recent context and temporary information**
+
+These are current activities, recent interests, and temporary states that provide immediate context.
+
+Examples:
+- "Sarah is currently migrating the payment service to microservices"
+- "Sarah recently started learning Rust for a side project"
+- "Sarah is preparing for a conference talk next month"
+- "Sarah is debugging a memory leak in the authentication service"
+
+<Accordion title="How are profiles different from search?" defaultOpen>
+ **Traditional Search**: You query "What does Sarah know about Kubernetes?" and get specific memory chunks about Kubernetes.
+
+ **User Profiles**: You get Sarah's complete professional context instantly - her role, expertise, preferences, and current projects - without needing to craft specific queries.
+
+ The profile is **always there**, providing consistent personalization across every interaction.
+</Accordion>
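In code, the two tiers map directly onto the shape returned by the profile endpoint. A minimal sketch (the `describeProfile` helper is illustrative, not part of any SDK):

```typescript
// Two-tier profile shape, mirroring the /v4/profile response documented below.
interface UserProfile {
  static: string[]  // long-term, stable facts
  dynamic: string[] // recent, temporary context
}

// Illustrative helper: render a profile as prompt-ready text.
function describeProfile(profile: UserProfile): string {
  return [
    "ABOUT THE USER:",
    ...profile.static,
    "",
    "CURRENT CONTEXT:",
    ...profile.dynamic,
  ].join("\n")
}
```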
+
+## Why We Built Profiles
+
+### The Problem with Search-Only Approaches
+
+Traditional memory systems rely entirely on search, which has fundamental limitations:
+
+1. **Search is too narrow**: When you search for "project updates", you miss that the user prefers bullet points, works in PST timezone, and uses specific technical terminology.
+
+2. **Search is repetitive**: Every chat message triggers multiple searches for basic context that rarely changes.
+
+3. **Search misses relationships**: Individual memory chunks don't capture the full picture of who someone is and how different facts relate.
+
+
+Profiles solve these problems by maintaining a **persistent, holistic view** of each user.
+
+## How Profiles Work with Search
+
+Profiles don't replace search - they complement it perfectly:
+
+<Steps>
+ <Step title="Profile provides foundation">
+ The user's profile gives your LLM comprehensive background context about who they are, what they know, and what they're working on.
+ </Step>
+
+ <Step title="Search adds specificity">
+ When you need specific information (like "error in deployment yesterday"), search finds those exact memories.
+ </Step>
+
+ <Step title="Combined for perfect context">
+ Your LLM gets both the broad understanding from profiles AND the specific details from search.
+ </Step>
+</Steps>
+
+### Real-World Example
+
+Imagine a user asks: **"Can you help me debug this?"**
+
+**Without profiles**: The LLM has no context about the user's expertise level, current projects, or debugging preferences.
+
+**With profiles**: The LLM knows:
+- The user is a senior engineer (adjust technical level)
+- They're working on a payment service migration (likely context)
+- They prefer command-line tools over GUIs (tool suggestions)
+- They recently had issues with memory leaks (possible connection)
+
+## Technical Implementation
+
+### Endpoint Details
+
+Based on the [API reference](https://api.supermemory.ai/v3/reference#tag/profile), the profile endpoint provides a simple interface:
+
+**Endpoint**: `POST /v4/profile`
+
+### Request Parameters
+
+| Parameter | Type | Required | Description |
+|-----------|------|----------|-------------|
+| `containerTag` | string | **Yes** | The container tag (usually user ID) to get profiles for |
+| `q` | string | No | Optional search query to include search results with the profile |
+
+### Response Structure
+
+The response includes both profile data and optional search results:
+
+```json
+{
+ "profile": {
+ "static": [
+ "User is a software engineer",
+ "User specializes in Python and React"
+ ],
+ "dynamic": [
+ "User is working on Project Alpha",
+ "User recently started learning Rust"
+ ]
+ },
+ "searchResults": {
+ "results": [...], // Only if 'q' parameter was provided
+ "total": 15,
+ "timing": 45.2
+ }
+}
+```
+
+## Code Examples
+
+### Basic Profile Retrieval
+
+<CodeGroup>
+
+```typescript TypeScript
+// Direct API call using fetch
+const response = await fetch('https://api.supermemory.ai/v4/profile', {
+ method: 'POST',
+ headers: {
+ 'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({
+ containerTag: 'user_123'
+ })
+});
+
+const data = await response.json();
+
+console.log("Static facts:", data.profile.static);
+console.log("Dynamic context:", data.profile.dynamic);
+
+// Use in your LLM prompt
+const systemPrompt = `
+User Context:
+${data.profile.static?.join('\n') || ''}
+
+Current Activity:
+${data.profile.dynamic?.join('\n') || ''}
+
+Please provide personalized assistance based on this context.
+`;
+```
+
+```python Python
+import requests
+import os
+
+# Direct API call
+response = requests.post(
+ 'https://api.supermemory.ai/v4/profile',
+ headers={
+ 'Authorization': f'Bearer {os.getenv("SUPERMEMORY_API_KEY")}',
+ 'Content-Type': 'application/json'
+ },
+ json={
+ 'containerTag': 'user_123'
+ }
+)
+
+data = response.json()
+
+print("Static facts:", data['profile']['static'])
+print("Dynamic context:", data['profile']['dynamic'])
+
+# Use in your LLM prompt
+static_context = '\n'.join(data['profile'].get('static', []))
+dynamic_context = '\n'.join(data['profile'].get('dynamic', []))
+
+system_prompt = f"""
+User Context:
+{static_context}
+
+Current Activity:
+{dynamic_context}
+
+Please provide personalized assistance based on this context.
+"""
+```
+
+```bash cURL
+curl -X POST https://api.supermemory.ai/v4/profile \
+ -H "Authorization: Bearer YOUR_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "containerTag": "user_123"
+ }'
+```
+
+</CodeGroup>
+
+### Profile with Search
+
+Sometimes you want both the user's profile AND specific search results:
+
+<CodeGroup>
+
+```typescript TypeScript
+// Get profile with search results
+const response = await fetch('https://api.supermemory.ai/v4/profile', {
+ method: 'POST',
+ headers: {
+ 'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({
+ containerTag: 'user_123',
+ q: 'deployment errors yesterday' // Optional search query
+ })
+});
+
+const data = await response.json();
+
+// Now you have both profile and specific search results
+const profile = data.profile;
+const searchResults = data.searchResults?.results || [];
+
+// Combine for comprehensive context
+const context = {
+ userBackground: profile.static,
+ currentContext: profile.dynamic,
+ specificInfo: searchResults.map(r => r.content)
+};
+```
+
+```python Python
+import requests
+
+# Get profile with search results
+response = requests.post(
+ 'https://api.supermemory.ai/v4/profile',
+ headers={
+ 'Authorization': f'Bearer {os.getenv("SUPERMEMORY_API_KEY")}',
+ 'Content-Type': 'application/json'
+ },
+ json={
+ 'containerTag': 'user_123',
+ 'q': 'deployment errors yesterday' # Optional search query
+ }
+)
+
+data = response.json()
+
+# Access both profile and search results
+profile = data['profile']
+search_results = data.get('searchResults', {}).get('results', [])
+
+# Combine for comprehensive context
+context = {
+ 'user_background': profile.get('static', []),
+ 'current_context': profile.get('dynamic', []),
+ 'specific_info': [r['content'] for r in search_results]
+}
+```
+
+</CodeGroup>
+
+### Integration with Chat Applications
+
+Here's how to use profiles in a real chat application:
+
+<CodeGroup>
+
+```typescript TypeScript
+async function handleChatMessage(userId: string, message: string) {
+ // Get user profile for personalization
+ const profileResponse = await fetch('https://api.supermemory.ai/v4/profile', {
+ method: 'POST',
+ headers: {
+ 'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
+ 'Content-Type': 'application/json'
+ },
+ body: JSON.stringify({
+ containerTag: userId
+ })
+ });
+
+ const profileData = await profileResponse.json();
+
+ // Build personalized system prompt
+ const systemPrompt = buildPersonalizedPrompt(profileData.profile);
+
+  // Send to your LLM with context (llm is your LLM client of choice)
+  const response = await llm.chat({
+ messages: [
+ { role: "system", content: systemPrompt },
+ { role: "user", content: message }
+ ]
+ });
+
+ return response;
+}
+
+function buildPersonalizedPrompt(profile: any) {
+ return `You are assisting a user with the following context:
+
+ABOUT THE USER:
+${profile.static?.join('\n') || 'No profile information yet.'}
+
+CURRENT CONTEXT:
+${profile.dynamic?.join('\n') || 'No recent activity.'}
+
+Provide responses that are personalized to their expertise level,
+preferences, and current work context.`;
+}
+```
+
+```python Python
+import requests
+import os
+
+async def handle_chat_message(user_id: str, message: str):
+ # Get user profile for personalization
+ response = requests.post(
+ 'https://api.supermemory.ai/v4/profile',
+ headers={
+ 'Authorization': f'Bearer {os.getenv("SUPERMEMORY_API_KEY")}',
+ 'Content-Type': 'application/json'
+ },
+ json={'containerTag': user_id}
+ )
+
+ profile_data = response.json()
+
+ # Build personalized system prompt
+ system_prompt = build_personalized_prompt(profile_data['profile'])
+
+    # Send to your LLM with context (llm is your LLM client of choice)
+    llm_response = await llm.chat(
+ messages=[
+ {"role": "system", "content": system_prompt},
+ {"role": "user", "content": message}
+ ]
+ )
+
+ return llm_response
+
+def build_personalized_prompt(profile):
+ static_facts = '\n'.join(profile.get('static', ['No profile information yet.']))
+ dynamic_context = '\n'.join(profile.get('dynamic', ['No recent activity.']))
+
+ return f"""You are assisting a user with the following context:
+
+ABOUT THE USER:
+{static_facts}
+
+CURRENT CONTEXT:
+{dynamic_context}
+
+Provide responses that are personalized to their expertise level,
+preferences, and current work context."""
+```
+
+</CodeGroup>
+
+## AI SDK Integration
+
+<Note>
+ The Supermemory AI SDK provides a more elegant way to use profiles through the `withSupermemory` middleware, which automatically handles profile retrieval and injection into your LLM prompts.
+</Note>
+
+### Automatic Profile Integration
+
+The AI SDK's `withSupermemory` middleware abstracts away all the profile endpoint complexity:
+
+```typescript
+import { generateText } from "ai"
+import { withSupermemory } from "@supermemory/tools/ai-sdk"
+import { openai } from "@ai-sdk/openai"
+
+// Automatically injects user profile into every LLM call
+const modelWithMemory = withSupermemory(openai("gpt-4"), "user_123")
+
+const result = await generateText({
+ model: modelWithMemory,
+ messages: [{ role: "user", content: "What do you know about me?" }],
+})
+
+// The model automatically has access to the user's profile!
+```
+
+### Memory Search Modes
+
+The AI SDK supports three modes for memory retrieval:
+
+#### Profile Mode (Default)
+Retrieves user profile memories without query filtering:
+
+```typescript
+import { generateText } from "ai"
+import { withSupermemory } from "@supermemory/tools/ai-sdk"
+import { openai } from "@ai-sdk/openai"
+
+// Uses profile mode by default - gets all user profile memories
+const modelWithMemory = withSupermemory(openai("gpt-4"), "user-123")
+
+// Explicitly specify profile mode
+const modelWithProfile = withSupermemory(openai("gpt-4"), "user-123", {
+ mode: "profile"
+})
+
+const result = await generateText({
+ model: modelWithMemory,
+ messages: [{ role: "user", content: "What do you know about me?" }],
+})
+```
+
+#### Query Mode
+Searches memories based on the user's message:
+
+```typescript
+import { generateText } from "ai"
+import { withSupermemory } from "@supermemory/tools/ai-sdk"
+import { openai } from "@ai-sdk/openai"
+
+const modelWithQuery = withSupermemory(openai("gpt-4"), "user-123", {
+ mode: "query"
+})
+
+const result = await generateText({
+ model: modelWithQuery,
+ messages: [{ role: "user", content: "What's my favorite programming language?" }],
+})
+```
+
+#### Full Mode
+Combines both profile and query results:
+
+```typescript
+import { generateText } from "ai"
+import { withSupermemory } from "@supermemory/tools/ai-sdk"
+import { openai } from "@ai-sdk/openai"
+
+const modelWithFull = withSupermemory(openai("gpt-4"), "user-123", {
+ mode: "full"
+})
+
+const result = await generateText({
+ model: modelWithFull,
+ messages: [{ role: "user", content: "Tell me about my preferences" }],
+})
+```
+
+<Card title="Learn More About AI SDK" icon="triangle" href="/ai-sdk/overview">
+ Explore the full capabilities of the Supermemory AI SDK, including tools for adding memories, searching, and automatic profile injection.
+</Card>
+
+### Understanding the Modes (Without AI SDK)
+
+When using the API directly without the AI SDK:
+
+- **Profile Only**: Call `/v4/profile` and add the profile data to your system prompt. This gives persistent user context without query-specific search.
+
+- **Query Only**: Use the `/v3/search` endpoint with the user's specific question to find memories relevant to their current query. Read [the search docs](/search/overview).
+
+- **Full Mode**: Combine both approaches - add profile data to the system prompt AND use the search endpoint for conversational context based on the user's specific query. This provides the most comprehensive context.
+
+```typescript
+// Full mode example without AI SDK
+async function getFullContext(userId: string, userQuery: string) {
+ // 1. Get user profile for system prompt
+ const profileResponse = await fetch('https://api.supermemory.ai/v4/profile', {
+ method: 'POST',
+ headers: { /* ... */ },
+ body: JSON.stringify({ containerTag: userId })
+ });
+ const profileData = await profileResponse.json();
+
+ // 2. Search for query-specific memories
+ const searchResponse = await fetch('https://api.supermemory.ai/v3/search', {
+ method: 'POST',
+ headers: { /* ... */ },
+ body: JSON.stringify({
+ q: userQuery,
+ containerTag: userId
+ })
+ });
+ const searchData = await searchResponse.json();
+
+ // 3. Combine both in your prompt
+ return {
+ systemPrompt: `User Profile:\n${profileData.profile.static?.join('\n')}`,
+ queryContext: searchData.results
+ };
+}
+```
+Alternatively, pass the `q` parameter to the `/v4/profile` endpoint to retrieve the same search results in a single call; the snippet above calls the profile and search endpoints separately only to show how the two can be used independently.
+
+## How Profiles are Built
+
+Profiles are **automatically constructed and maintained** through Supermemory's ingestion pipeline:
+
+<Steps>
+ <Step title="Content Ingestion">
+ When users add documents, chat, or any content to Supermemory, it goes through the standard ingestion workflow.
+ </Step>
+
+ <Step title="Intelligence Extraction">
+ AI analyzes the content to extract not just memories, but also facts about the user themselves.
+ </Step>
+
+ <Step title="Profile Operations">
+ The system generates profile operations (add, update, or remove facts) based on the new information.
+ </Step>
+
+ <Step title="Automatic Updates">
+ Profiles are updated in real-time, ensuring they always reflect the latest information about the user.
+ </Step>
+</Steps>
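Conceptually, each ingestion pass emits a list of profile operations that are folded into the current profile. The sketch below shows that fold under an assumed operation format (`add`/`remove` with a `tier` field); the real pipeline's operation format is internal to Supermemory:

```typescript
// Hypothetical profile-operation format - illustrative only.
type ProfileOp =
  | { op: "add"; tier: "static" | "dynamic"; fact: string }
  | { op: "remove"; tier: "static" | "dynamic"; fact: string }

type Profile = { static: string[]; dynamic: string[] }

function applyOps(profile: Profile, ops: ProfileOp[]): Profile {
  // Work on a copy so the previous profile state is untouched
  const next: Profile = { static: [...profile.static], dynamic: [...profile.dynamic] }
  for (const o of ops) {
    if (o.op === "add") next[o.tier].push(o.fact)
    else next[o.tier] = next[o.tier].filter((f) => f !== o.fact)
  }
  return next
}
```

For example, finishing a migration might remove a dynamic fact while a newly learned role lands in the static tier.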
+
+<Note>
+ You don't need to manually manage profiles - they're automatically maintained as users interact with your system. Just ingest content normally, and profiles build themselves.
+</Note>
+
+
+## Common Use Cases
+
+### Personalized AI Assistants
+Profiles ensure your AI assistant remembers user preferences, expertise, and context across conversations.
+
+### Customer Support Systems
+Support agents (or AI) instantly see customer history, preferences, and current issues without manual searches.
+
+### Educational Platforms
+Adapt content difficulty and teaching style based on the learner's profile and progress.
+
+### Development Tools
+IDE assistants that understand your coding style, current projects, and technical preferences.
+
+## Performance Benefits
+
+Profiles provide significant performance improvements:
+
+| Metric | Without Profiles | With Profiles |
+|--------|-----------------|---------------|
+| Context Retrieval | 3-5 search queries | 1 profile call |
+| Response Time | 200-500ms | 50-100ms |
+| Token Usage | High (multiple searches) | Low (single response) |
+| Consistency | Varies by search quality | Always comprehensive | \ No newline at end of file