| author | Dhravya Shah <[email protected]> | 2025-11-27 09:53:11 -0700 |
|---|---|---|
| committer | Dhravya Shah <[email protected]> | 2025-11-27 09:53:11 -0700 |
| commit | 2f8bafac4ecdbf5eccf49219b898fd6586f338a3 (patch) | |
| tree | 0b97ae1eaab5257a5658da38bcff0e4acd36c602 /apps/docs/user-profiles.mdx | |
| parent | runtime styles injection + let user proxy requests for data in graph package ... (diff) | |
update quickstart
Diffstat (limited to 'apps/docs/user-profiles.mdx')
| -rw-r--r-- | apps/docs/user-profiles.mdx | 570 |
1 files changed, 0 insertions, 570 deletions
diff --git a/apps/docs/user-profiles.mdx b/apps/docs/user-profiles.mdx
deleted file mode 100644
index c6a097ec..00000000
--- a/apps/docs/user-profiles.mdx
+++ /dev/null
@@ -1,570 +0,0 @@

---
title: "User Profiles - Persistent Context for LLMs"
description: "Automatically maintained user profiles that provide instant, comprehensive context to your LLMs"
sidebarTitle: "User Profiles"
icon: "user"
---

## What are User Profiles?

User profiles are **automatically maintained collections of facts about your users** that Supermemory builds from all their interactions and content. Think of them as a persistent "about me" document that's always up to date and instantly accessible.

Instead of searching through memories every time you need context about a user, profiles give you:

- **Instant access** to comprehensive user information
- **Automatic updates** as users interact with your system
- **Two-tier structure** separating permanent facts from temporary context

<Note>
  Profile data can be appended to the system prompt so that it's always sent to your LLM and you don't need to run multiple queries.
</Note>

## Static vs Dynamic Profiles

Profiles are intelligently divided into two categories:

### Static Profile

**Long-term, stable facts that define who the user is**

These are facts that rarely change - the foundational information about a user that remains consistent over time.

Examples:

- "Sarah Chen is a senior software engineer at TechCorp"
- "Sarah specializes in distributed systems and Kubernetes"
- "Sarah has a PhD in Computer Science from MIT"
- "Sarah prefers technical documentation over video tutorials"

### Dynamic Profile

**Recent context and temporary information**

These are current activities, recent interests, and temporary states that provide immediate context.

Examples:

- "Sarah is currently migrating the payment service to microservices"
- "Sarah recently started learning Rust for a side project"
- "Sarah is preparing for a conference talk next month"
- "Sarah is debugging a memory leak in the authentication service"

<Accordion title="How are profiles different from search?" defaultOpen>
  **Traditional Search**: You query "What does Sarah know about Kubernetes?" and get specific memory chunks about Kubernetes.

  **User Profiles**: You get Sarah's complete professional context instantly - her role, expertise, preferences, and current projects - without needing to craft specific queries.

  The profile is **always there**, providing consistent personalization across every interaction.
</Accordion>
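To make the two tiers concrete, here is a hypothetical sketch of what Sarah's profile could look like, arranged in the same static/dynamic shape the profile endpoint returns (see Response Structure under Technical Implementation below):

```typescript
// Hypothetical illustration only - the facts are the examples above,
// arranged in the static/dynamic shape returned by the profile endpoint.
const sarahProfile = {
  static: [
    "Sarah Chen is a senior software engineer at TechCorp",
    "Sarah specializes in distributed systems and Kubernetes",
    "Sarah prefers technical documentation over video tutorials",
  ],
  dynamic: [
    "Sarah is currently migrating the payment service to microservices",
    "Sarah is preparing for a conference talk next month",
  ],
}
```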
## Why We Built Profiles

### The Problem with Search-Only Approaches

Traditional memory systems rely entirely on search, which has fundamental limitations:

1. **Search is too narrow**: When you search for "project updates", you miss that the user prefers bullet points, works in the PST timezone, and uses specific technical terminology.

2. **Search is repetitive**: Every chat message triggers multiple searches for basic context that rarely changes.

3. **Search misses relationships**: Individual memory chunks don't capture the full picture of who someone is and how different facts relate.

Profiles solve these problems by maintaining a **persistent, holistic view** of each user.

## How Profiles Work with Search

Profiles don't replace search - they complement it:

<Steps>
  <Step title="Profile provides foundation">
    The user's profile gives your LLM comprehensive background context about who they are, what they know, and what they're working on.
  </Step>

  <Step title="Search adds specificity">
    When you need specific information (like "error in deployment yesterday"), search finds those exact memories.
  </Step>

  <Step title="Combined for perfect context">
    Your LLM gets both the broad understanding from profiles AND the specific details from search.
  </Step>
</Steps>

### Real-World Example

Imagine a user asks: **"Can you help me debug this?"**

**Without profiles**: The LLM has no context about the user's expertise level, current projects, or debugging preferences.

**With profiles**: The LLM knows:

- The user is a senior engineer (adjust technical level)
- They're working on a payment service migration (likely context)
- They prefer command-line tools over GUIs (tool suggestions)
- They recently had issues with memory leaks (possible connection)

## Technical Implementation

### Endpoint Details

Based on the [API reference](https://api.supermemory.ai/v3/reference#tag/profile), the profile endpoint provides a simple interface:

**Endpoint**: `POST /v4/profile`

### Request Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `containerTag` | string | **Yes** | The container tag (usually a user ID) to get the profile for |
| `q` | string | No | Optional search query to include search results with the profile |

### Response Structure

The response includes both profile data and optional search results:

```json
{
  "profile": {
    "static": [
      "User is a software engineer",
      "User specializes in Python and React"
    ],
    "dynamic": [
      "User is working on Project Alpha",
      "User recently started learning Rust"
    ]
  },
  "searchResults": {
    "results": [...], // Only if 'q' parameter was provided
    "total": 15,
    "timing": 45.2
  }
}
```
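If you want typed access to this response in TypeScript, a minimal interface derived from the example above could look like the following (these are not official SDK types - the field names are simply taken from the sample response):

```typescript
// Minimal shape derived from the sample response above (not an official type).
interface ProfileResponse {
  profile: {
    static: string[]
    dynamic: string[]
  }
  // Present only when the `q` parameter was provided
  searchResults?: {
    results: Array<{ content: string }>
    total: number
    timing: number
  }
}
```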
## Code Examples

### Basic Profile Retrieval

<CodeGroup>

```typescript TypeScript
// Direct API call using fetch
const response = await fetch('https://api.supermemory.ai/v4/profile', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    containerTag: 'user_123'
  })
});

const data = await response.json();

console.log("Static facts:", data.profile.static);
console.log("Dynamic context:", data.profile.dynamic);

// Use in your LLM prompt
const systemPrompt = `
User Context:
${data.profile.static?.join('\n') || ''}

Current Activity:
${data.profile.dynamic?.join('\n') || ''}

Please provide personalized assistance based on this context.
`;
```

```python Python
import requests
import os

# Direct API call
response = requests.post(
    'https://api.supermemory.ai/v4/profile',
    headers={
        'Authorization': f'Bearer {os.getenv("SUPERMEMORY_API_KEY")}',
        'Content-Type': 'application/json'
    },
    json={
        'containerTag': 'user_123'
    }
)

data = response.json()

print("Static facts:", data['profile']['static'])
print("Dynamic context:", data['profile']['dynamic'])

# Use in your LLM prompt
static_context = '\n'.join(data['profile'].get('static', []))
dynamic_context = '\n'.join(data['profile'].get('dynamic', []))

system_prompt = f"""
User Context:
{static_context}

Current Activity:
{dynamic_context}

Please provide personalized assistance based on this context.
"""
```

```bash cURL
curl -X POST https://api.supermemory.ai/v4/profile \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "containerTag": "user_123"
  }'
```

</CodeGroup>
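For brand-new users the profile may still be empty, so it's worth guarding against missing or empty arrays before building the prompt. A small sketch, reusing the `data` object from the TypeScript example above:

```typescript
// Sketch: fall back to neutral text when the profile has no facts yet.
// `data` is the parsed /v4/profile response from the example above.
function formatProfileSection(facts: string[] | undefined, fallback: string): string {
  return facts && facts.length > 0 ? facts.join("\n") : fallback
}

const staticSection = formatProfileSection(data.profile.static, "No profile information yet.")
const dynamicSection = formatProfileSection(data.profile.dynamic, "No recent activity.")
```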
### Profile with Search

Sometimes you want both the user's profile AND specific search results:

<CodeGroup>

```typescript TypeScript
// Get profile with search results
const response = await fetch('https://api.supermemory.ai/v4/profile', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    containerTag: 'user_123',
    q: 'deployment errors yesterday' // Optional search query
  })
});

const data = await response.json();

// Now you have both the profile and specific search results
const profile = data.profile;
const searchResults = data.searchResults?.results || [];

// Combine for comprehensive context
const context = {
  userBackground: profile.static,
  currentContext: profile.dynamic,
  specificInfo: searchResults.map(r => r.content)
};
```

```python Python
import requests
import os

# Get profile with search results
response = requests.post(
    'https://api.supermemory.ai/v4/profile',
    headers={
        'Authorization': f'Bearer {os.getenv("SUPERMEMORY_API_KEY")}',
        'Content-Type': 'application/json'
    },
    json={
        'containerTag': 'user_123',
        'q': 'deployment errors yesterday'  # Optional search query
    }
)

data = response.json()

# Access both profile and search results
profile = data['profile']
search_results = data.get('searchResults', {}).get('results', [])

# Combine for comprehensive context
context = {
    'user_background': profile.get('static', []),
    'current_context': profile.get('dynamic', []),
    'specific_info': [r['content'] for r in search_results]
}
```

</CodeGroup>

### Integration with Chat Applications

Here's how to use profiles in a real chat application:

<CodeGroup>

```typescript TypeScript
async function handleChatMessage(userId: string, message: string) {
  // Get user profile for personalization
  const profileResponse = await fetch('https://api.supermemory.ai/v4/profile', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      containerTag: userId
    })
  });

  const profileData = await profileResponse.json();

  // Build personalized system prompt
  const systemPrompt = buildPersonalizedPrompt(profileData.profile);

  // Send to your LLM with context (`llm` is whichever LLM client you use)
  const response = await llm.chat({
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: message }
    ]
  });

  return response;
}

function buildPersonalizedPrompt(profile: any) {
  return `You are assisting a user with the following context:

ABOUT THE USER:
${profile.static?.join('\n') || 'No profile information yet.'}

CURRENT CONTEXT:
${profile.dynamic?.join('\n') || 'No recent activity.'}

Provide responses that are personalized to their expertise level,
preferences, and current work context.`;
}
```

```python Python
import requests
import os

async def handle_chat_message(user_id: str, message: str):
    # Get user profile for personalization
    response = requests.post(
        'https://api.supermemory.ai/v4/profile',
        headers={
            'Authorization': f'Bearer {os.getenv("SUPERMEMORY_API_KEY")}',
            'Content-Type': 'application/json'
        },
        json={'containerTag': user_id}
    )

    profile_data = response.json()

    # Build personalized system prompt
    system_prompt = build_personalized_prompt(profile_data['profile'])

    # Send to your LLM with context (`llm` is whichever LLM client you use)
    llm_response = await llm.chat(
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": message}
        ]
    )

    return llm_response

def build_personalized_prompt(profile):
    static_facts = '\n'.join(profile.get('static', ['No profile information yet.']))
    dynamic_context = '\n'.join(profile.get('dynamic', ['No recent activity.']))

    return f"""You are assisting a user with the following context:

ABOUT THE USER:
{static_facts}

CURRENT CONTEXT:
{dynamic_context}

Provide responses that are personalized to their expertise level,
preferences, and current work context."""
```

</CodeGroup>
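Calling the handler then looks like this, for instance with the debugging question from the Real-World Example above:

```typescript
// Usage example for the TypeScript handler defined above.
const reply = await handleChatMessage("user_123", "Can you help me debug this?")
console.log(reply)
```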
## AI SDK Integration

<Note>
  The Supermemory AI SDK provides a more elegant way to use profiles through the `withSupermemory` middleware, which automatically handles profile retrieval and injection into your LLM prompts.
</Note>

### Automatic Profile Integration

The AI SDK's `withSupermemory` middleware abstracts away all the profile endpoint complexity:

```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

// Automatically injects the user profile into every LLM call
const modelWithMemory = withSupermemory(openai("gpt-4"), "user_123")

const result = await generateText({
  model: modelWithMemory,
  messages: [{ role: "user", content: "What do you know about me?" }],
})

// The model automatically has access to the user's profile!
```

### Memory Search Modes

The AI SDK supports three modes for memory retrieval:

#### Profile Mode (Default)

Retrieves user profile memories without query filtering:

```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

// Uses profile mode by default - gets all user profile memories
const modelWithMemory = withSupermemory(openai("gpt-4"), "user-123")

// Explicitly specify profile mode
const modelWithProfile = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "profile"
})

const result = await generateText({
  model: modelWithMemory,
  messages: [{ role: "user", content: "What do you know about me?" }],
})
```

#### Query Mode

Searches memories based on the user's message:

```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const modelWithQuery = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "query"
})

const result = await generateText({
  model: modelWithQuery,
  messages: [{ role: "user", content: "What's my favorite programming language?" }],
})
```

#### Full Mode

Combines both profile and query results:

```typescript
import { generateText } from "ai"
import { withSupermemory } from "@supermemory/tools/ai-sdk"
import { openai } from "@ai-sdk/openai"

const modelWithFull = withSupermemory(openai("gpt-4"), "user-123", {
  mode: "full"
})

const result = await generateText({
  model: modelWithFull,
  messages: [{ role: "user", content: "Tell me about my preferences" }],
})
```

<Card title="Learn More About AI SDK" icon="triangle" href="/ai-sdk/overview">
  Explore the full capabilities of the Supermemory AI SDK, including tools for adding memories, searching, and automatic profile injection.
</Card>

### Understanding the Modes (Without AI SDK)

When using the API directly without the AI SDK:

- **Profile Only**: Call `/v4/profile` and add the profile data to your system prompt. This gives persistent user context without query-specific search.

- **Query Only**: Use the `/v4/search` endpoint with the user's specific question to find relevant memories based on their current query. Read [the search docs](/search/overview).

- **Full Mode**: Combine both approaches - add profile data to the system prompt AND use the search endpoint for conversational context based on the user's specific query. This provides the most comprehensive context.

```typescript
// Full mode example without the AI SDK
async function getFullContext(userId: string, userQuery: string) {
  // 1. Get user profile for the system prompt
  const profileResponse = await fetch('https://api.supermemory.ai/v4/profile', {
    method: 'POST',
    headers: { /* ... */ },
    body: JSON.stringify({ containerTag: userId })
  });
  const profileData = await profileResponse.json();

  // 2. Search for query-specific memories
  const searchResponse = await fetch('https://api.supermemory.ai/v3/search', {
    method: 'POST',
    headers: { /* ... */ },
    body: JSON.stringify({
      q: userQuery,
      containerTag: userId
    })
  });
  const searchData = await searchResponse.json();

  // 3. Combine both in your prompt
  return {
    systemPrompt: `User Profile:\n${profileData.profile.static?.join('\n')}`,
    queryContext: searchData.results
  };
}
```

Alternatively, you can simply pass the `q` parameter to the `/v4/profile` endpoint and get the same search results in a single call; the snippet above just demonstrates how to use search and profiles separately.

## How Profiles are Built

Profiles are **automatically constructed and maintained** through Supermemory's ingestion pipeline:

<Steps>
  <Step title="Content Ingestion">
    When users add documents, chat messages, or any other content to Supermemory, it goes through the standard ingestion workflow.
  </Step>

  <Step title="Intelligence Extraction">
    AI analyzes the content to extract not just memories, but also facts about the user themselves.
  </Step>

  <Step title="Profile Operations">
    The system generates profile operations (add, update, or remove facts) based on the new information.
  </Step>

  <Step title="Automatic Updates">
    Profiles are updated in real time, ensuring they always reflect the latest information about the user.
  </Step>
</Steps>

<Note>
  You don't need to manually manage profiles - they're automatically maintained as users interact with your system. Just ingest content normally, and profiles build themselves.
</Note>
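For completeness, here is a rough sketch of what "ingesting content normally" can look like. The endpoint path (`POST /v3/documents`) and the body fields are assumptions for illustration - check the API reference for the exact ingestion request shape:

```typescript
// Sketch only: the endpoint path and body fields are assumptions - verify
// them against the Supermemory API reference before relying on this.
const ingestResponse = await fetch("https://api.supermemory.ai/v3/documents", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.SUPERMEMORY_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    content: "Sarah is preparing for a conference talk next month",
    containerTag: "user_123", // same tag you later pass to /v4/profile
  }),
})

console.log(await ingestResponse.json())
```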
## Common Use Cases

### Personalized AI Assistants
Profiles ensure your AI assistant remembers user preferences, expertise, and context across conversations.

### Customer Support Systems
Support agents (or AI) instantly see customer history, preferences, and current issues without manual searches.

### Educational Platforms
Adapt content difficulty and teaching style based on the learner's profile and progress.

### Development Tools
IDE assistants that understand your coding style, current projects, and technical preferences.

## Performance Benefits

Profiles provide significant performance improvements:

| Metric | Without Profiles | With Profiles |
|--------|-----------------|---------------|
| Context Retrieval | 3-5 search queries | 1 profile call |
| Response Time | 200-500ms | 50-100ms |
| Token Usage | High (multiple searches) | Low (single response) |
| Consistency | Varies by search quality | Always comprehensive |