| author | Dhravya Shah <[email protected]> | 2025-09-13 22:09:40 -0700 |
|---|---|---|
| committer | Dhravya Shah <[email protected]> | 2025-09-13 22:09:40 -0700 |
| commit | 90fd19f2156e28845d9288ea8ffc2d7d9573b77a (patch) | |
| tree | e630e3943d70b688c42a762c11c745159e1d6771 /apps/docs/memory-api | |
| parent | Merge branch 'main' of https://github.com/supermemoryai/supermemory (diff) | |
| download | supermemory-90fd19f2156e28845d9288ea8ffc2d7d9573b77a.tar.xz supermemory-90fd19f2156e28845d9288ea8ffc2d7d9573b77a.zip | |
update: Readme
Diffstat (limited to 'apps/docs/memory-api')
| -rw-r--r-- | apps/docs/memory-api/connectors/advanced/bring-your-own-key.mdx | 138 | ||||
| -rw-r--r-- | apps/docs/memory-api/connectors/creating-connection.mdx | 85 | ||||
| -rw-r--r-- | apps/docs/memory-api/connectors/google-drive.mdx | 27 | ||||
| -rw-r--r-- | apps/docs/memory-api/connectors/overview.mdx | 26 | ||||
| -rw-r--r-- | apps/docs/memory-api/creation/adding-memories.mdx | 389 | ||||
| -rw-r--r-- | apps/docs/memory-api/creation/status.mdx | 14 | ||||
| -rw-r--r-- | apps/docs/memory-api/features/auto-multi-modal.mdx | 181 | ||||
| -rw-r--r-- | apps/docs/memory-api/features/content-cleaner.mdx | 86 | ||||
| -rw-r--r-- | apps/docs/memory-api/features/filtering.mdx | 266 | ||||
| -rw-r--r-- | apps/docs/memory-api/features/query-rewriting.mdx | 50 | ||||
| -rw-r--r-- | apps/docs/memory-api/features/reranking.mdx | 44 | ||||
| -rw-r--r-- | apps/docs/memory-api/introduction.mdx | 43 | ||||
| -rw-r--r-- | apps/docs/memory-api/overview.mdx | 161 | ||||
| -rw-r--r-- | apps/docs/memory-api/sdks/python.mdx | 349 | ||||
| -rw-r--r-- | apps/docs/memory-api/sdks/typescript.mdx | 391 | ||||
| -rw-r--r-- | apps/docs/memory-api/searching/searching-memories.mdx | 138 |
16 files changed, 0 insertions, 2388 deletions
diff --git a/apps/docs/memory-api/connectors/advanced/bring-your-own-key.mdx b/apps/docs/memory-api/connectors/advanced/bring-your-own-key.mdx deleted file mode 100644 index 3d63cb46..00000000 --- a/apps/docs/memory-api/connectors/advanced/bring-your-own-key.mdx +++ /dev/null @@ -1,138 +0,0 @@ ---- -title: 'Bring Your Own Key (BYOK)' -description: 'Configure your own OAuth application credentials for enhanced security and control' ---- - -By default, supermemory uses its own OAuth applications to connect to third-party providers. However, you can configure your own OAuth application credentials for enhanced security and control. This is particularly useful for enterprise customers who want to maintain control over their data access. - -<Danger> - Some providers, like Google Drive, require extensive verification and approval before you can use custom keys. -</Danger> - -### Setting up Custom Provider Keys - -To configure custom OAuth credentials for your organization, use the `PATCH /v3/settings` endpoint: - -1. Set up your OAuth application on the provider's developer console. - -Google: https://console.developers.google.com/apis/credentials/oauthclient \ -Notion: https://www.notion.so/my-integrations \ -OneDrive: https://portal.azure.com/#view/Microsoft_AAD_RegisteredApps/ApplicationsMenu - -2. If using Google Drive: - -- Select the application type as `Web application` - **Enable the Google Drive API in "APIs and Services" in the Cloud Console** - -3. Configure the redirect URL; set it to: - -``` -https://api.supermemory.ai/v3/connections/auth/callback/{provider} -``` - -For example, if you are using Google Drive, the redirect URL would be: - -``` -https://api.supermemory.ai/v3/connections/auth/callback/google-drive -``` - -4. Configure the client ID and client secret via the `PATCH /v3/settings` endpoint.
- -<CodeGroup> -```typescript Typescript -import Supermemory from 'supermemory'; - -const client = new Supermemory({ - apiKey: process.env['SUPERMEMORY_API_KEY'], -}); - -// Example: Configure Google Drive custom OAuth credentials -const settings = await client.settings.update({ - googleCustomKeyEnabled: true, - googleDriveClientId: "your-google-client-id", - googleDriveClientSecret: "your-google-client-secret" -}); - -// Example: Configure Notion custom OAuth credentials -await client.settings.update({ - notionCustomKeyEnabled: true, - notionClientId: "your-notion-client-id", - notionClientSecret: "your-notion-client-secret" -}); - -// Example: Configure OneDrive custom OAuth credentials -await client.settings.update({ - onedriveCustomKeyEnabled: true, - onedriveClientId: "your-onedrive-client-id", - onedriveClientSecret: "your-onedrive-client-secret" -}); -``` - -```python Python -import os - -from supermemory import Supermemory - -client = Supermemory( - api_key=os.environ.get("SUPERMEMORY_API_KEY"), # This is the default and can be omitted -) - -# Example: Configure Google Drive custom OAuth credentials -settings = client.settings.update( - google_custom_key_enabled=True, - google_client_id="your-google-client-id", - google_client_secret="your-google-client-secret" -) - -# Example: Configure Notion custom OAuth credentials -settings = client.settings.update( - notion_custom_key_enabled=True, - notion_client_id="your-notion-client-id", - notion_client_secret="your-notion-client-secret" -) - -# Example: Configure OneDrive custom OAuth credentials -settings = client.settings.update( - onedrive_custom_key_enabled=True, - onedrive_client_id="your-onedrive-client-id", - onedrive_client_secret="your-onedrive-client-secret" -) -``` - -```bash cURL -# Example: Configure Google Drive custom OAuth credentials -curl --request PATCH \ - --url https://api.supermemory.ai/v3/settings \ - --header 'Authorization: Bearer <token>' \ - --header 'Content-Type:
application/json' \ - --data '{ - "googleDriveCustomKeyEnabled": true, - "googleDriveClientId": "your-google-client-id", - "googleDriveClientSecret": "your-google-client-secret" -}' - -# Example: Configure Notion custom OAuth credentials -curl --request PATCH \ - --url https://api.supermemory.ai/v3/settings \ - --header 'Authorization: Bearer <token>' \ - --header 'Content-Type: application/json' \ - --data '{ - "notionCustomKeyEnabled": true, - "notionClientId": "your-notion-client-id", - "notionClientSecret": "your-notion-client-secret" -}' - -# Example: Configure OneDrive custom OAuth credentials -curl --request PATCH \ - --url https://api.supermemory.ai/v3/settings \ - --header 'Authorization: Bearer <token>' \ - --header 'Content-Type: application/json' \ - --data '{ - "onedriveCustomKeyEnabled": true, - "onedriveClientId": "your-onedrive-client-id", - "onedriveClientSecret": "your-onedrive-client-secret" -}' -``` -</CodeGroup> - -<Warning> - Once you enable custom keys for a provider, all new connections for that provider will use your custom OAuth application. Existing connections WILL need to be re-authorized. -</Warning>
\ No newline at end of file diff --git a/apps/docs/memory-api/connectors/creating-connection.mdx b/apps/docs/memory-api/connectors/creating-connection.mdx deleted file mode 100644 index 39abc47a..00000000 --- a/apps/docs/memory-api/connectors/creating-connection.mdx +++ /dev/null @@ -1,85 +0,0 @@ ---- -title: 'Creating connections' -description: 'Create a connection to sync your content with supermemory' ---- - -To create a connection, just make a `POST` request to `/v3/connections/{provider}` - -<CodeGroup> -```typescript Typescript -import Supermemory from 'supermemory'; - -const client = new Supermemory({ - apiKey: process.env['SUPERMEMORY_API_KEY'], // This is the default and can be omitted -}); - -const connection = await client.connections.create('notion'); - -console.debug(connection.authLink); -``` - -```python Python -import requests - -url = "https://api.supermemory.ai/v3/connections/{provider}" - -payload = { - "redirectUrl": "<string>", - "containerTags": ["<string>"], - "metadata": {}, - "documentLimit": 5000 -} -headers = { - "Authorization": "Bearer <token>", - "Content-Type": "application/json" -} - -response = requests.request("POST", url, json=payload, headers=headers) - -print(response.text) -``` - -```bash cURL -curl --request POST \ - --url https://api.supermemory.ai/v3/connections/{provider} \ - --header 'Authorization: Bearer <token>' \ - --header 'Content-Type: application/json' \ - --data '{ - "redirectUrl": "<string>", - "containerTags": [ - "<string>" - ], - "metadata": {}, - "documentLimit": 5000 -}' -``` -</CodeGroup> - -### Parameters - -- `provider`: The provider to connect to. Currently supported providers are `notion`, `google-drive`, `one-drive` -- `redirectUrl`: The URL to redirect to after the connection is created (your app URL) -- `containerTags`: Optional. For partitioning users, organizations, etc. in your app. - - Example: `["user_123", "project_alpha"]` -- `metadata`: Optional. 
Any metadata you want to associate with the connection. - - This metadata is added to every document synced from this connection. -- `documentLimit`: Optional. The maximum number of documents to sync from this connection. - - Default: 10,000 - - This can be used to limit costs and sync a set number of documents for a specific user. - - -## Response - -supermemory sends a response with the following schema: -```json -{ - "id": "<string>", - "authLink": "<string>", - "expiresIn": "<string>", - "redirectsTo": "<string>" -} -``` - -You can use the `authLink` to redirect the user to the provider's login page. - -Next up, managing connections. diff --git a/apps/docs/memory-api/connectors/google-drive.mdx b/apps/docs/memory-api/connectors/google-drive.mdx deleted file mode 100644 index 8413fdd2..00000000 --- a/apps/docs/memory-api/connectors/google-drive.mdx +++ /dev/null @@ -1,27 +0,0 @@ ---- -title: 'Google Drive' -description: 'Sync your Google Drive content with supermemory' ---- - -supermemory syncs Google Drive documents automatically and instantaneously. - -## Supported file types - -- Google Docs -- Google Slides -- Google Sheets - -## Conversions - -To import items, supermemory converts documents into markdown, and then ingests them into supermemory. -This conversion is lossy, and some formatting may be lost. 
- -## Sync frequency - -supermemory syncs documents: -- **A document is modified or created (webhook received)** - - Note that not all providers support webhook-based instant sync right now - - `Google-Drive` and `Notion` documents are synced instantaneously -- Every **four hours** -- On **Manual Sync** (API call) - - You can call `/v3/connections/{provider}/sync` to sync documents manually diff --git a/apps/docs/memory-api/connectors/overview.mdx b/apps/docs/memory-api/connectors/overview.mdx deleted file mode 100644 index 8727b68c..00000000 --- a/apps/docs/memory-api/connectors/overview.mdx +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: 'Connectors Overview' -sidebarTitle: 'Overview' -description: 'Sync external connections like Google Drive, Notion, OneDrive with supermemory' ---- - -supermemory can sync external connections like Google Drive, Notion, and OneDrive, with more coming soon. - -### The Flow - -1. Make a `POST` request to `/v3/connections/{provider}` -2. supermemory will return an `authLink` which you can redirect the user to -3. The user will be redirected to the provider's login page -4. The user is redirected back to your app's `redirectUrl` - - - -## Sync frequency - -supermemory syncs documents: -- **A document is modified or created (webhook received)** - - Note that not all providers support webhook-based instant sync right now - - `Google-Drive` and `Notion` documents are synced instantaneously -- Every **four hours** -- On **Manual Sync** (API call) - - You can call `/v3/connections/{provider}/sync` to sync documents manually diff --git a/apps/docs/memory-api/creation/adding-memories.mdx b/apps/docs/memory-api/creation/adding-memories.mdx deleted file mode 100644 index 45b03fc1..00000000 --- a/apps/docs/memory-api/creation/adding-memories.mdx +++ /dev/null @@ -1,389 +0,0 @@ ---- -title: "Adding Memories" -description: "Learn how to add content to supermemory" -icon: "plus" ---- - -<Accordion title="Best Practices" icon="sparkles"> -1.
**Content Organization** - - **Use `containerTags` for grouping/partitioning** - - Optional tags (array of strings) to group memories. - - Can be a user ID, project ID, or any other identifier. - - Allows filtering for memories that share specific tags. - - Example: `["user_123", "project_alpha"]` - - Read more about [filtering](/memory-api/features/filtering) - -2. **Performance Tips** - - **Batch Operations** - - You can add multiple items in parallel - - Use different `containerTags` for different spaces - - Don't wait for processing to complete unless needed - - - **Search Optimization** - ```json - { - "q": "error logs", - "documentThreshold": 0.7, // Higher = more precise - "limit": 5, // Keep it small - "onlyMatchingChunks": true // Skip extra context if not needed - } - ``` - -3. **URL Content** - - Send clean URLs without tracking parameters - - Use article URLs, not homepage URLs - - Check URL accessibility before sending - -</Accordion> - -## Basic Usage - -To add a memory, send a POST request to `/v3/memories` with your content: - -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/memories \ - --request POST \ - --header 'Content-Type: application/json' \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ - --data '{ - "customId": "xyz-my-db-id", - "content": "This is the content of my memory", - "metadata": { - "category": "technology", - "tag_1": "ai", - "tag_2": "machine-learning" - }, - "containerTags": ["user_123", "project_xyz"] -}' -``` - -```typescript Typescript -await client.memory.create({ - customId: "xyz-my-db-id", - content: "This is the content of my memory", - metadata: { - category: "technology", - tag_1: "ai", - tag_2: "machine-learning", - }, - containerTags: ["user_123", "project_xyz"] -}) -``` - -```python Python -client.memory.create( - customId="xyz-my-db-id", - content="This is the content of my memory", - metadata={ - "category": "technology", - "tag_1": "ai", - "tag_2": "machine-learning", - },
containerTags=["user_123", "project_xyz"] -) -``` - -</CodeGroup> - -The API will return a response with an ID and initial status: - -```json -{ - "id": "mem_abc123", - "status": "queued" -} -``` - -You can also pass a URL as the content, and supermemory will fetch and process it: - -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/memories \ - --request POST \ - --header 'Content-Type: application/json' \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ - -d '{ - "content": "https://example.com/article", - "metadata": { - "source": "web", - "category": "technology" - }, - "containerTags": ["user_456", "research_papers"] - }' -``` - -```typescript Typescript -await client.memory.create({ - content: "https://example.com/article", - userId: "user_456", - metadata: { - source: "web", // Just example metadata - category: "technology", // NOT required - }, - containerTags: ["user_456", "research_papers"], -}); -``` - -```python Python -client.memory.create( - content="https://example.com/article", - userId="user_456", - metadata={ - "source": "web", - "category": "technology" - }, - containerTags=["user_456", "research_papers"] -) -``` - -</CodeGroup> - -{/* <Note> -TODO: Supported content types - -</Note> */} - -## Metadata and Organization - -You can add rich metadata to organize your content: - -```json -{ - "metadata": { - "source": "string", // String - "priority": 1234, // Custom numeric field - "custom_field": "any" // Any custom field - } -} -``` - -{/* <Note> -TODO: Filtering by metadata - -</Note> */} - -## Partitioning by user - -You can attribute and partition your data by providing a `userId`: - -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/memories \ - --request POST \ - --header 'Content-Type: application/json' \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ - -d '{ - "content": "This is space-specific content", - "userId": "space_123", - "metadata": { - "category": "space-content" - } - }' -``` - -```typescript Typescript -await
client.memory.create({ - content: "This is space-specific content", - userId: "space_123", - metadata: { - category: "space-content", - }, -}); -``` - -```python Python -client.memory.create( - content="This is space-specific content", - userId="space_123", - metadata={ - "category": "space-content" - } -) -``` - -</CodeGroup> - -<Note> - When searching, if you provide a `userId`, only memories from that space will - be returned. -</Note> - -## Grouping - -You can group memories by providing an array of `containerTags`: - -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/memories \ - --request POST \ - --header 'Content-Type: application/json' \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ - -d '{ - "content": "This is space-specific content", - "containerTags": ["user_123", "project_xyz"] - }' -``` - -```typescript Typescript -await client.memory.create({ - content: "This is space-specific content", - containerTags: ["user_123", "project_xyz"], -}); -``` - -```python Python -client.memory.create( - content="This is space-specific content", - containerTags=["user_123", "project_xyz"] -) -``` - -</CodeGroup> - -{/\* <Note> -TODO: Processing Statuses - -</Note> */} - -## Checking Status - -Check status using the memory ID: - -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/memories/mem_abc123 \ - --request GET \ - --header 'Content-Type: application/json' \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' -``` - -```typescript Typescript -await client.memory.get("mem_abc123"); -``` - -```python Python -client.memory.get("mem_abc123") -``` - -</CodeGroup> - -<Warning> - -Memories are deleted after 2 minutes if an irrecoverable error occurs. - -</Warning> - -## File Uploads - -For file uploads, use the dedicated file upload endpoint. 
You can include `containerTags` directly in the form data: - -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/memories/file \ - --request POST \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ - --form 'file=@/path/to/your/file.pdf' \ - --form 'containerTags=["user_123", "project_xyz"]' -``` - -```typescript Typescript -const formData = new FormData(); -formData.append("file", fileBlob); -formData.append("containerTags", JSON.stringify(["user_123", "project_xyz"])); - -const response = await fetch("https://api.supermemory.ai/v3/memories/file", { - method: "POST", - headers: { - Authorization: "Bearer SUPERMEMORY_API_KEY", - }, - body: formData, -}); -``` - -```python Python -import requests -import json - -with open('/path/to/your/file.pdf', 'rb') as f: - files = {'file': f} - data = {'containerTags': json.dumps(["user_123", "project_xyz"])} - response = requests.post( - 'https://api.supermemory.ai/v3/memories/file', - headers={'Authorization': 'Bearer SUPERMEMORY_API_KEY'}, - files=files, - data=data - ) -``` - -</CodeGroup> - -### Adding Additional Metadata to Files - -If you need to add additional metadata (like title or description) after upload, you can use the PATCH endpoint: - -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/memories/MEMORY_ID \ - --request PATCH \ - --header 'Content-Type: application/json' \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ - --data '{ - "metadata": { - "title": "My Document", - "description": "Important project document" - } - }' -``` - -```typescript Typescript -await fetch(`https://api.supermemory.ai/v3/memories/${memoryId}`, { - method: "PATCH", - headers: { - "Content-Type": "application/json", - Authorization: "Bearer SUPERMEMORY_API_KEY", - }, - body: JSON.stringify({ - metadata: { - title: "My Document", - description: "Important project document", - }, - }), -}); -``` - -```python Python -import requests - -requests.patch( - 
f'https://api.supermemory.ai/v3/memories/{memory_id}', - headers={ - 'Content-Type': 'application/json', - 'Authorization': 'Bearer SUPERMEMORY_API_KEY' - }, - json={ - 'metadata': { - 'title': 'My Document', - 'description': 'Important project document' - } - } -) -``` - -</CodeGroup> - -<Note> - The file upload endpoint returns immediately with a memory ID and processing - status. The file will be processed asynchronously, and you can check its - status using the GET endpoint. -</Note> - -## Next Steps - -Explore more advanced features in our [API Reference](/api-reference/manage-memories/add-memory) diff --git a/apps/docs/memory-api/creation/status.mdx b/apps/docs/memory-api/creation/status.mdx deleted file mode 100644 index 44a53656..00000000 --- a/apps/docs/memory-api/creation/status.mdx +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: "Processing Status" -description: "Learn about the stages of content processing" ---- - -After adding content, you can check its processing status: - -1. `queued`: Content is queued for processing -2. `extracting`: Extracting content from source -3. `chunking`: Splitting content into semantic chunks -4. `embedding`: Generating vector embeddings -5. `indexing`: Adding to search index -6. `done`: Processing complete -7. `failed`: Processing failed
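The status lifecycle above can be polled with a small helper. This is a minimal sketch, not part of the official SDK: it assumes the memory returned by `GET /v3/memories/{id}` carries a `status` field with one of the states listed above, and it takes the status check as a plain callable (`get_status`, a hypothetical name) so the loop itself stays API-agnostic:

```python
import time

# Terminal states from the pipeline above; anything else means "still processing".
TERMINAL_STATES = {"done", "failed"}

def wait_until_done(get_status, interval=1.0, timeout=60.0, sleep=time.sleep):
    """Poll get_status() until it reports a terminal state or the timeout passes.

    get_status: zero-argument callable returning one of the documented
    status strings, e.g. a wrapper around GET /v3/memories/{id}.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"memory still {status!r} after {timeout}s")
        sleep(interval)
```

With the real API, `get_status` might wrap the status check shown in Adding Memories, e.g. `lambda: client.memory.get("mem_abc123").status` (assuming the returned object exposes `status`).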
\ No newline at end of file diff --git a/apps/docs/memory-api/features/auto-multi-modal.mdx b/apps/docs/memory-api/features/auto-multi-modal.mdx deleted file mode 100644 index 18a91135..00000000 --- a/apps/docs/memory-api/features/auto-multi-modal.mdx +++ /dev/null @@ -1,181 +0,0 @@ ---- -title: "Auto Multi Modal" -description: "supermemory automatically detects the content type of the document you are adding." -icon: "sparkles" ---- - -supermemory is natively multi-modal, and can automatically detect the content type of the document you are adding. - -We use the best of breed tools to extract content from URLs, and process it for optimal memory storage. - -## Automatic Content Type Detection - -supermemory automatically detects the content type of the document you're adding. Simply pass your content to the API, and supermemory will handle the rest. - -<Tabs> - <Tab title="How It Works"> - The content detection system analyzes: - - URL patterns and domains - - File extensions and MIME types - - Content structure and metadata - - Headers and response types - </Tab> - <Tab title="Best Practices"> - <Accordion title="Content Type Best Practices" defaultOpen icon="sparkles"> - 1. **Type Selection** - - Use `note` for simple text - - Use `webpage` for online content - - Use native types when possible - - 2. 
**URL Content** - - Send clean URLs without tracking parameters - - Use article URLs, not homepage URLs - - Check URL accessibility before sending - </Accordion> - - </Tab> -</Tabs> - -### Quick Implementation - -All you need to do is pass the content to the `/memories` endpoint: - -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/memories \ - --request POST \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ - -d '{"content": "https://example.com/article"}' -``` - -```typescript -await client.add.create({ - content: "https://example.com/article", -}); -``` - -```python -client.add.create( - content="https://example.com/article" -) -``` - -</CodeGroup> - -<Note> - supermemory uses [Markdowner](https://md.dhr.wtf) to extract content from - URLs. -</Note> - -## Supported Content Types - -supermemory supports a wide range of content formats to ensure versatility in memory creation: - -<Grid cols={2}> - <Card title="Text Content" icon="document-text"> - - `note`: Plain text notes and documents - - Directly processes raw text content - - Automatically chunks content for optimal retrieval - - Preserves formatting and structure - </Card> - - <Card title="Web Content" icon="globe"> - - `webpage`: Web pages (just provide the URL) - - Intelligently extracts main content - - Preserves important metadata (title, description, images) - - Extracts OpenGraph metadata when available - - - `tweet`: Twitter content - - Captures tweet text, media, and metadata - - Preserves thread structure if applicable - - </Card> - - <Card title="Document Types" icon="document"> - - `pdf`: PDF files - - Extracts text content while maintaining structure - - Handles both searchable PDFs and scanned documents with OCR - - Preserves page breaks and formatting - - - `google_doc`: Google Documents - - Seamlessly integrates with Google Docs API - - Maintains document formatting and structure - - Auto-updates when source document changes - - - `notion_doc`: Notion pages - - Extracts 
content while preserving Notion's block structure - - Handles rich text formatting and embedded content - - </Card> - - <Card title="Media Types" icon="photo"> - - `image`: Images with text content - - Advanced OCR for text extraction - - Visual content analysis and description - - - `video`: Video content - - Transcription and content extraction - - Key frame analysis - - </Card> -</Grid> - -## Processing Pipeline - -<Steps> - <Step title="Content Detection"> - supermemory automatically identifies the content type based on the input provided. - </Step> - -<Step title="Content Extraction"> - Type-specific extractors process the content with: - Specialized parsing for - each format - Error handling with retries - Rate limit management -</Step> - - <Step title="AI Enhancement"> - ```typescript - interface ProcessedContent { - content: string; // Extracted text - summary?: string; // AI-generated summary - tags?: string[]; // Extracted tags - categories?: string[]; // Content categories - } - ``` - </Step> - - <Step title="Chunking & Indexing"> - - Sentence-level splitting - - 2-sentence overlap - - Context preservation - - Semantic coherence - </Step> -</Steps> - -## Technical Specifications - -### Size Limits - -| Content Type | Max Size | -| ------------ | -------- | -| Text/Note | 1MB | -| PDF | 10MB | -| Image | 5MB | -| Video | 100MB | -| Web Page | N/A | -| Google Doc | N/A | -| Notion Page | N/A | -| Tweet | N/A | - -### Processing Time - -| Content Type | Processing Time | -| ------------ | --------------- | -| Text/Note | Almost instant | -| PDF | 1-5 seconds | -| Image | 2-10 seconds | -| Video | 10+ seconds | -| Web Page | 1-3 seconds | -| Google Doc | N/A | -| Notion Page | N/A | -| Tweet | N/A | diff --git a/apps/docs/memory-api/features/content-cleaner.mdx b/apps/docs/memory-api/features/content-cleaner.mdx deleted file mode 100644 index e586c3dc..00000000 --- a/apps/docs/memory-api/features/content-cleaner.mdx +++ /dev/null @@ -1,86 +0,0 @@ ---- 
-title: "Cleaning and Categorizing" -description: "Document Cleaning Summaries in supermemory" -icon: "washing-machine" ---- - -supermemory provides advanced configuration options to customize your content processing pipeline. At its core is an AI-powered system that can automatically analyze, categorize, and filter your content based on your specific needs. - -## Configuration Schema - -```json -{ - "shouldLLMFilter": true, - "categories": ["feature-request", "bug-report", "positive", "negative"], - "filterPrompt": "Analyze feedback sentiment and identify feature requests", - "includeItems": ["critical", "high-priority"], - "excludeItems": ["spam", "irrelevant"] -} -``` - -## Core Settings - -### shouldLLMFilter -- **Type**: `boolean` -- **Required**: No (defaults to `false`) -- **Description**: Master switch for AI-powered content analysis. Must be enabled to use any of the advanced filtering features. - -### categories -- **Type**: `string[]` -- **Limits**: Each category must be 1-50 characters -- **Required**: No -- **Description**: Define custom categories for content classification. When specified, the AI will only use these categories. If not specified, it will generate 3-5 relevant categories automatically. - -### filterPrompt -- **Type**: `string` -- **Limits**: 1-750 characters -- **Required**: No -- **Description**: Custom instructions for the AI on how to analyze and categorize content. Use this to guide the categorization process based on your specific needs. - -### includeItems & excludeItems -- **Type**: `string[]` -- **Limits**: Each item must be 1-20 characters -- **Required**: No -- **Description**: Fine-tune content filtering by specifying items to explicitly include or exclude during processing. - -## Content Processing Pipeline - -When content is ingested with LLM filtering enabled: - -1. **Initial Processing** - - Content is extracted and normalized - - Basic metadata (title, description) is captured - -2. 
**AI Analysis** - - Content is analyzed based on your `filterPrompt` - - Categories are assigned (either from your predefined list or auto-generated) - - Tags are evaluated and scored - -3. **Chunking & Indexing** - - Content is split into semantic chunks - - Each chunk is embedded for efficient search - - Metadata and classifications are stored - -## Example Use Cases - -### 1. Customer Feedback System -```json -{ - "shouldLLMFilter": true, - "categories": ["positive", "negative", "neutral"], - "filterPrompt": "Analyze customer sentiment and identify key themes" -} -``` - -### 2. Content Moderation -```json -{ - "shouldLLMFilter": true, - "categories": ["safe", "needs-review", "flagged"], - "filterPrompt": "Identify potentially inappropriate or sensitive content", - "excludeItems": ["spam", "offensive"], - "includeItems": ["user-generated"] -} -``` - -> **Important**: All filtering features (`categories`, `filterPrompt`, `includeItems`, `excludeItems`) require `shouldLLMFilter` to be enabled. Attempting to use these features without enabling `shouldLLMFilter` will result in a 400 error. diff --git a/apps/docs/memory-api/features/filtering.mdx b/apps/docs/memory-api/features/filtering.mdx deleted file mode 100644 index cde6ee4a..00000000 --- a/apps/docs/memory-api/features/filtering.mdx +++ /dev/null @@ -1,266 +0,0 @@ ---- -title: "Filtering" -description: "Learn how to filter content when searching supermemory" -icon: "list-filter-plus" ---- - -## Container Tag - -A container tag is an identifier for your end users, used to group memories together. - -This can be: -- A user using your product -- An organization using a SaaS -- A project ID, or even a dynamic one like `user_project_etc` - -We recommend using a single `containerTag` in all API requests. - -The graph is built on top of the container tags. For example, each user/tag in your supermemory account will have a single graph built for them.
- -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/search \ - --request POST \ - --header 'Content-Type: application/json' \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ - --data '{ - "q": "machine learning", - "containerTags": ["user_123"] - }' -``` - -```typescript Typescript -await client.search.execute({ - q: "machine learning", - containerTags: ["user_123"], -}); -``` - -```python Python -client.search.execute( - q="machine learning", - containerTags=["user_123"] -) -``` - -</CodeGroup> - -## Metadata - -Sometimes, you might want to add metadata and do advanced filtering based on it. - -Using metadata filtering, you can search based on: - -- AND and OR conditions -- String matching -- Numeric matching -- Date matching -- Time range queries - -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/search \ - --request POST \ - --header 'Content-Type: application/json' \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ - --data '{ - "q": "machine learning", - "filters": { - "AND": [ - { - "key": "category", - "value": "technology", - "negate": false - }, - { - "filterType": "numeric", - "key": "readingTime", - "value": "5", - "negate": false, - "numericOperator": "<=" - } - ] - } -}' -``` - -```typescript Typescript -await client.search.execute({ - q: "machine learning", - filters: { - AND: [ - { - key: "category", - value: "technology", - negate: false, - }, - { - filterType: "numeric", - key: "readingTime", - value: "5", - negate: false, - numericOperator: "<=", - }, - ], - }, -}); -``` - -```python Python -client.search.execute( - q="machine learning", - filters={ - "AND": [ - { - "key": "category", - "value": "technology", - "negate": False - }, - { - "filterType": "numeric", - "key": "readingTime", - "value": "5", - "negate": False, - "numericOperator": "<=" - } - ] - } -) -``` - -</CodeGroup> - -## Array Contains Filtering - -You can filter memories by array values using the `array_contains` filter type.
This is particularly useful for filtering by participants or other array-based metadata. - -First, create a memory with participants in the metadata: - -<CodeGroup> - -```bash cURL -curl --location 'https://api.supermemory.ai/v3/memories' \ ---header 'Content-Type: application/json' \ ---header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ ---data '{ - "content": "quarterly planning meeting discussion", - "metadata": { - "participants": ["john.doe", "sarah.smith", "mike.wilson"] - } - }' -``` - -```typescript Typescript -await client.memories.create({ - content: "quarterly planning meeting discussion", - metadata: { - participants: ["john.doe", "sarah.smith", "mike.wilson"] - } -}); -``` - -```python Python -client.memories.create( - content="quarterly planning meeting discussion", - metadata={ - "participants": ["john.doe", "sarah.smith", "mike.wilson"] - } -) -``` - -</CodeGroup> - -Then search using the `array_contains` filter: - -<CodeGroup> - -```bash cURL -curl --location 'https://api.supermemory.ai/v3/search' \ ---header 'Content-Type: application/json' \ ---header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ ---data '{ - "q": "meeting", - "filters": { - "AND": [ - { - "key": "participants", - "value": "john.doe", - "filterType": "array_contains" - } - ] - }, - "limit": 5 - }' -``` - -```typescript Typescript -await client.search.execute({ - q: "meeting", - filters: { - AND: [ - { - key: "participants", - value: "john.doe", - filterType: "array_contains" - } - ] - }, - limit: 5 -}); -``` - -```python Python -client.search.execute( - q="meeting", - filters={ - "AND": [ - { - "key": "participants", - "value": "john.doe", - "filterType": "array_contains" - } - ] - }, - limit=5 -) -``` - -</CodeGroup> - -## Document - -You can also find chunks within a specific, large document. - -This can be particularly useful for extremely large documents like Books, Podcasts, etc. 
- -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/search \ - --request POST \ - --header 'Content-Type: application/json' \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ - --data '{ - "q": "machine learning", - "docId": "doc_123" - }' -``` - -```typescript Typescript -await client.search.execute({ - q: "machine learning", - docId: "doc_123", -}); -``` - -```python Python -client.search.execute( - q="machine learning", - docId="doc_123" -) -``` - -</CodeGroup> diff --git a/apps/docs/memory-api/features/query-rewriting.mdx b/apps/docs/memory-api/features/query-rewriting.mdx deleted file mode 100644 index 9508297a..00000000 --- a/apps/docs/memory-api/features/query-rewriting.mdx +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: "Query Rewriting" -description: "Query Rewriting in supermemory" -icon: "blend" ---- - -Query rewriting reformulates your search query into multiple variants before searching, making retrieval more accurate for ambiguous or conversational queries. - - - -### Usage - -In supermemory, you can enable query rewriting by setting the `rewriteQuery` parameter to `true` in the search API. - -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/search \ - --request POST \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ - --header 'Content-Type: application/json' \ - -d '{ - "q": "What is the capital of France?", - "rewriteQuery": true - }' -``` - -```typescript -await client.search.execute({ - q: "What is the capital of France?", - rewriteQuery: true, -}); -``` - -```python -client.search.execute( - q="What is the capital of France?", - rewriteQuery=True -) -``` - -</CodeGroup> - -### Notes and limitations - -- supermemory generates multiple rewrites, and runs the search through all of them. -- The results are then merged and returned to you. -- There are no additional costs associated with query rewriting. -- While query rewriting substantially improves result quality, it also **incurs additional latency**. -- All other features like filtering, hybrid search, recency bias, etc.
work with rewritten results as well. diff --git a/apps/docs/memory-api/features/reranking.mdx b/apps/docs/memory-api/features/reranking.mdx deleted file mode 100644 index 1df8a9c5..00000000 --- a/apps/docs/memory-api/features/reranking.mdx +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: "Reranking" -description: "Reranked search results in supermemory" -icon: "chart-bar-increasing" ---- - -Reranking re-scores search results against the query with a dedicated reranker model, improving the relevance ordering of the results you get back. - - - -### Usage - -In supermemory, you can enable reranking by setting the `rerank` parameter to `true` in the search API. - -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/search \ - --request POST \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ - --header 'Content-Type: application/json' \ - -d '{ - "q": "What is the capital of France?", - "rerank": true - }' -``` - -```typescript -await client.search.execute({ - q: "What is the capital of France?", - rerank: true, -}); -``` - -```python -client.search.execute( - q="What is the capital of France?", - rerank=True -) -``` - -</CodeGroup> - -### Notes and limitations - -- We currently use the `bge-reranker-base` model for reranking. -- There are no additional costs associated with reranking. -- While reranking substantially improves result quality, it also **incurs additional latency**. -- All other features like filtering, hybrid search, recency bias, etc. work with reranked results as well. diff --git a/apps/docs/memory-api/introduction.mdx b/apps/docs/memory-api/introduction.mdx deleted file mode 100644 index 8ff4547e..00000000 --- a/apps/docs/memory-api/introduction.mdx +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: "Introduction - Memory endpoints" -sidebarTitle: "Introduction" -description: "Ingest content at scale, in any format." ---- - -**supermemory** automatically **ingests and processes your data**, and makes it searchable.
- -<Info> - The Memory engine scales linearly, which means we're **incredibly fast and scalable** while remaining one of the most affordable options. -</Info> - - - -It also gives you features like: - -- [Connectors and Syncing](/memory-api/connectors/) -- [Multimodality](/memory-api/features/auto-multi-modal) -- [Advanced Filtering](/memory-api/features/filtering) -- [Reranking](/memory-api/features/reranking) -- [Extracting details from text](/memory-api/features/content-cleaner) -- [Query Rewriting](/memory-api/features/query-rewriting) - -... and lots more\! - - -Check out the following resources to get started: - - -<CardGroup cols={2}> - <Card title="Quickstart" icon="zap" href="/memory-api/overview"> - Get started in 5 minutes - </Card> - <Card title="API Reference" icon="unplug" href="/api-reference"> - Learn more about the API - </Card> - <Card title="Use Cases" icon="brain" href="/overview/use-cases"> - See what supermemory can do for you - </Card> - <Card title="SDKs" icon="code" href="/memory-api/sdks/"> - Learn more about the SDKs - </Card> -</CardGroup>
\ No newline at end of file diff --git a/apps/docs/memory-api/overview.mdx b/apps/docs/memory-api/overview.mdx deleted file mode 100644 index fc9ce28a..00000000 --- a/apps/docs/memory-api/overview.mdx +++ /dev/null @@ -1,161 +0,0 @@ ---- -title: "Quickstart - 5 mins" -description: "Learn how to integrate supermemory into your application" ---- - -## Authentication - -Head to [supermemory's Developer Platform](https://console.supermemory.ai), built to help you monitor and manage every aspect of the API. - -All API requests require authentication using an API key. Include your API key as follows: - -<CodeGroup> - -```bash cURL -Authorization: Bearer YOUR_API_KEY -``` - -```typescript Typescript -// npm install supermemory - -const client = new supermemory({ - apiKey: "YOUR_API_KEY", -}); -``` - -```python Python -# pip install supermemory - -client = supermemory( - api_key="YOUR_API_KEY", -) -``` - -</CodeGroup> - -## Installing the clients - -You can use supermemory through the API directly, or through our SDKs: - -<CodeGroup> - -```bash cURL -https://api.supermemory.ai/v3 -``` - -```bash Typescript -npm i supermemory -``` - -```bash Python -pip install supermemory -``` - -</CodeGroup> - -## Add your first memory - -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/memories \ - --request POST \ - --header 'Content-Type: application/json' \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ - -d '{"content": "This is the content of my first memory."}' -``` - -```typescript Typescript -await client.memories.add({ - content: "This is the content of my first memory.", -}); -``` - -```python Python -client.memories.add( - content="This is the content of my first memory.", -) -``` - -</CodeGroup> - -This will add a new memory to your supermemory account. - -Try it out in the [API Playground](/api-reference/manage-memories/add-memory).
- -## Content Processing - -<Accordion title="Processing steps" icon="sparkles"> - When you add content to supermemory, it goes through several processing steps: - -1. **Queued**: Initial state when content is submitted -2. **Extracting**: Content is being extracted from the source -3. **Chunking**: Content is being split into semantic chunks -4. **Embedding**: Generating vector embeddings for search -5. **Indexing**: Adding content to the search index -6. **Done**: Processing complete - </Accordion> - -<Accordion title="Advanced Chunking" icon="sparkles"> - The system uses advanced NLP techniques for optimal chunking: - -- Sentence-level splitting for natural boundaries -- Context preservation with overlapping chunks -- Smart handling of long content -- Semantic coherence optimization - </Accordion> - -## Search your memories - -<CodeGroup> - -```bash cURL -curl https://api.supermemory.ai/v3/search \ - --request POST \ - --header 'Content-Type: application/json' \ - --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \ - -d '{"q": "This is the content of my first memory."}' -``` - -```typescript Typescript -await client.search.execute({ - q: "This is the content of my first memory.", -}); -``` - -```python Python -client.search.execute( - q="This is the content of my first memory.", -) -``` - -</CodeGroup> - -Try it out in the [API Playground](/api-reference/search-memories/search-memories). - -You can do a lot more with supermemory, and we will walk through everything you need to know.
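The overlapping-chunk idea from the accordion above can be illustrated with a toy sentence splitter. supermemory's production chunker is far more sophisticated; this sketch (all names illustrative) only shows sentence-level splitting with shared context between chunks:

```python
import re

def chunk_sentences(text, max_sentences=3, overlap=1):
    """Toy chunker: split on sentence boundaries and keep `overlap`
    sentences of shared context between consecutive chunks.
    Assumes max_sentences > overlap."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    step = max_sentences - overlap
    chunks = []
    for start in range(0, len(sentences), step):
        chunks.append(" ".join(sentences[start:start + max_sentences]))
        if start + max_sentences >= len(sentences):
            break
    return chunks

text = "First point. Second point. Third point. Fourth point. Fifth point."
print(chunk_sentences(text))
```

Each chunk repeats the last sentence of its predecessor, so an embedding of any single chunk still carries a little of the surrounding context.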
- -Next, explore the features available in supermemory - -<CardGroup cols={2}> - <Card title="Adding memories" icon="plus" href="/memory-api/creation"> - Adding memories - </Card> - <Card - title="Searching and filtering" - icon="search" - href="/memory-api/searching" - > - Searching for items - </Card> - <Card - title="Connectors and Syncing" - icon="plug" - href="/memory-api/connectors" - > - Connecting external sources - </Card> - <Card title="Features" icon="sparkles" href="/memory-api/features"> - Explore Features - </Card> -</CardGroup> diff --git a/apps/docs/memory-api/sdks/python.mdx b/apps/docs/memory-api/sdks/python.mdx deleted file mode 100644 index 2b1f56fc..00000000 --- a/apps/docs/memory-api/sdks/python.mdx +++ /dev/null @@ -1,349 +0,0 @@ ---- -title: 'Python SDK' -sidebarTitle: "Python" -description: 'Learn how to use supermemory with Python' ---- - -## Installation - -```sh -# install from PyPI -pip install --pre supermemory -``` - -## Usage - - -```python -import os -from supermemory import Supermemory - -client = Supermemory( - api_key=os.environ.get("SUPERMEMORY_API_KEY"), # This is the default and can be omitted -) - -response = client.search.execute( - q="documents related to python", -) -print(response.results) -``` - -While you can provide an `api_key` keyword argument, -we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/) -to add `SUPERMEMORY_API_KEY="My API Key"` to your `.env` file -so that your API Key is not stored in source control.
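A small stdlib-only helper along these lines (hypothetical, not part of the SDK) fails fast when the key is missing instead of surfacing an opaque authentication error later; with python-dotenv you would call `load_dotenv()` first so a local `.env` file populates `os.environ`:

```python
import os

def require_api_key() -> str:
    """Read the API key from the environment instead of source code.

    Illustrative helper only: raises immediately if the key is absent.
    """
    key = os.environ.get("SUPERMEMORY_API_KEY")
    if not key:
        raise RuntimeError(
            "SUPERMEMORY_API_KEY is not set; add it to your environment or .env file"
        )
    return key
```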
- -## Async usage - -Simply import `AsyncSupermemory` instead of `Supermemory` and use `await` with each API call: - -```python -import os -import asyncio -from supermemory import AsyncSupermemory - -client = AsyncSupermemory( - api_key=os.environ.get("SUPERMEMORY_API_KEY"), # This is the default and can be omitted -) - - -async def main() -> None: - response = await client.search.execute( - q="documents related to python", - ) - print(response.results) - - -asyncio.run(main()) -``` - -Functionality between the synchronous and asynchronous clients is otherwise identical. - -## Using types - -Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like: - -- Serializing back into JSON, `model.to_json()` -- Converting to a dictionary, `model.to_dict()` - -Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`. - -## File uploads - -Request parameters that correspond to file uploads can be passed as `bytes`, a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, or a tuple of `(filename, contents, media type)`. - -```python -from pathlib import Path -from supermemory import Supermemory - -client = Supermemory() - -client.memories.upload_file( - file=Path("/path/to/file"), -) -``` - -The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically. - -## Handling errors - -When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `supermemory.APIConnectionError` is raised.
- -When the API returns a non-success status code (that is, a 4xx or 5xx -response), a subclass of `supermemory.APIStatusError` is raised, containing `status_code` and `response` properties. - -All errors inherit from `supermemory.APIError`. - -```python -import supermemory -from supermemory import Supermemory - -client = Supermemory() - -try: - client.memories.add( - content="This is a detailed article about machine learning concepts...", - ) -except supermemory.APIConnectionError as e: - print("The server could not be reached") - print(e.__cause__) # an underlying Exception, likely raised within httpx. -except supermemory.RateLimitError as e: - print("A 429 status code was received; we should back off a bit.") -except supermemory.APIStatusError as e: - print("Another non-200-range status code was received") - print(e.status_code) - print(e.response) -``` - -Error codes are as follows: - -| Status Code | Error Type | -| ----------- | -------------------------- | -| 400 | `BadRequestError` | -| 401 | `AuthenticationError` | -| 403 | `PermissionDeniedError` | -| 404 | `NotFoundError` | -| 422 | `UnprocessableEntityError` | -| 429 | `RateLimitError` | -| >=500 | `InternalServerError` | -| N/A | `APIConnectionError` | - -### Retries - -Certain errors are automatically retried 2 times by default, with a short exponential backoff. -Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, -429 Rate Limit, and >=500 Internal errors are all retried by default. - -You can use the `max_retries` option to configure or disable retry settings: - -```python -from supermemory import Supermemory - -# Configure the default for all requests: -client = Supermemory( - # default is 2 - max_retries=0, -) - -# Or, configure per-request: -client.with_options(max_retries=5).memories.add( - content="This is a detailed article about machine learning concepts...", -) -``` - -### Timeouts - -By default requests time out after 1 minute.
You can configure this with a `timeout` option, -which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/#fine-tuning-the-configuration) object: - -```python -import httpx -from supermemory import Supermemory - -# Configure the default for all requests: -client = Supermemory( - # 20 seconds (default is 1 minute) - timeout=20.0, -) - -# More granular control: -client = Supermemory( - timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0), -) - -# Override per-request: -client.with_options(timeout=5.0).memories.add( - content="This is a detailed article about machine learning concepts...", -) -``` - -On timeout, an `APITimeoutError` is thrown. - -Note that requests that time out are [retried twice by default](#retries). - -## Advanced - -### Logging - -We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module. - -You can enable logging by setting the environment variable `SUPERMEMORY_LOG` to `info`. - -```shell -$ export SUPERMEMORY_LOG=info -``` - -Or to `debug` for more verbose logging. - -### How to tell whether `None` means `null` or missing - -In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`: - -```py -if response.my_field is None: - if 'my_field' not in response.model_fields_set: - print('Got json like {}, without a "my_field" key present at all.') - else: - print('Got json like {"my_field": null}.') -``` - -### Accessing raw response data (e.g.
headers) - -The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g., - -```py -from supermemory import Supermemory - -client = Supermemory() -response = client.memories.with_raw_response.add( - content="This is a detailed article about machine learning concepts...", -) -print(response.headers.get('X-My-Header')) - -memory = response.parse()  # get the object that `memories.add()` would have returned -print(memory.id) -``` - -These methods return an [`APIResponse`](https://github.com/supermemoryai/python-sdk/tree/main/src/supermemory/_response.py) object. - -The async client returns an [`AsyncAPIResponse`](https://github.com/supermemoryai/python-sdk/tree/main/src/supermemory/_response.py) with the same structure, the only difference being `await`able methods for reading the response content. - -#### `.with_streaming_response` - -The above interface eagerly reads the full response body when you make the request, which may not always be what you want. - -To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods. - -```python -with client.memories.with_streaming_response.add( - content="This is a detailed article about machine learning concepts...", -) as response: - print(response.headers.get("X-My-Header")) - - for line in response.iter_lines(): - print(line) -``` - -The context manager is required so that the response will reliably be closed. - -### Making custom/undocumented requests - -This library is typed for convenient access to the documented API. - -If you need to access undocumented endpoints, params, or response properties, the library can still be used.
- -#### Undocumented endpoints - -To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other -HTTP verbs. Options on the client will be respected (such as retries) when making these requests. - -```py -import httpx - -response = client.post( - "/foo", - cast_to=httpx.Response, - body={"my_param": True}, -) - -print(response.headers.get("x-foo")) -``` - -#### Undocumented request params - -If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request -options. - -#### Undocumented response properties - -To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You -can also get all the extra fields on the Pydantic model as a dict with -[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra). - -### Configuring the HTTP client - -You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including: - -- Support for [proxies](https://www.python-httpx.org/advanced/proxies/) -- Custom [transports](https://www.python-httpx.org/advanced/transports/) -- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality - -```python -import httpx -from supermemory import Supermemory, DefaultHttpxClient - -client = Supermemory( - # Or use the `SUPERMEMORY_BASE_URL` env var - base_url="http://my.test.server.example.com:8083", - http_client=DefaultHttpxClient( - proxy="http://my.test.proxy.example.com", - transport=httpx.HTTPTransport(local_address="0.0.0.0"), - ), -) -``` - -You can also customize the client on a per-request basis by using `with_options()`: - -```python -client.with_options(http_client=DefaultHttpxClient(...)) -``` - -### Managing HTTP resources - -By default the library closes underlying HTTP connections whenever the client is [garbage
collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting. - -```py -from supermemory import Supermemory - -with Supermemory() as client: - # make requests here - ... - -# HTTP client is now closed -``` - -## Versioning - -This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions: - -1. Changes that only affect static types, without breaking runtime behavior. -2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_ -3. Changes that we do not expect to impact the vast majority of users in practice. - -We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience. - -We are keen for your feedback; please open an [issue](https://www.github.com/supermemoryai/python-sdk/issues) with questions, bugs, or suggestions. - -### Determining the installed version - -If you've upgraded to the latest version but aren't seeing any new features you were expecting, then your Python environment is likely still using an older version. - -You can determine the version that is being used at runtime with: - -```py -import supermemory -print(supermemory.__version__) -``` - -## Requirements - -Python 3.8 or higher.
\ No newline at end of file diff --git a/apps/docs/memory-api/sdks/typescript.mdx b/apps/docs/memory-api/sdks/typescript.mdx deleted file mode 100644 index 54cc7137..00000000 --- a/apps/docs/memory-api/sdks/typescript.mdx +++ /dev/null @@ -1,391 +0,0 @@ ---- -title: 'Typescript SDK' -sidebarTitle: "Typescript" -description: 'Learn how to use supermemory with Typescript' ---- - -## Installation - -```sh -npm install supermemory -``` - -## Usage - -```js -import supermemory from 'supermemory'; - -const client = new supermemory({ - apiKey: process.env['SUPERMEMORY_API_KEY'], // This is the default and can be omitted -}); - -async function main() { - const response = await client.search.execute({ q: 'documents related to python' }); - - console.debug(response.results); -} - -main(); -``` - -### Request & Response types - -This library includes TypeScript definitions for all request params and response fields. You may import and use them like so: - - -```ts -import supermemory from 'supermemory'; - -const client = new supermemory({ - apiKey: process.env['SUPERMEMORY_API_KEY'], // This is the default and can be omitted -}); - -async function main() { - const params: supermemory.MemoryAddParams = { - content: 'This is a detailed article about machine learning concepts...', - }; - const response: supermemory.MemoryAddResponse = await client.memories.add(params); -} - -main(); -``` - -Documentation for each method, request param, and response field are available in docstrings and will appear on hover in most modern editors. 
- -## File uploads - -Request parameters that correspond to file uploads can be passed in many different forms: - -- `File` (or an object with the same structure) -- a `fetch` `Response` (or an object with the same structure) -- an `fs.ReadStream` -- the return value of our `toFile` helper - -```ts -import fs from 'fs'; -import supermemory, { toFile } from 'supermemory'; - -const client = new supermemory(); - -// If you have access to Node `fs` we recommend using `fs.createReadStream()`: -await client.memories.uploadFile({ file: fs.createReadStream('/path/to/file') }); - -// Or if you have the web `File` API you can pass a `File` instance: -await client.memories.uploadFile({ file: new File(['my bytes'], 'file') }); - -// You can also pass a `fetch` `Response`: -await client.memories.uploadFile({ file: await fetch('https://somesite/file') }); - -// Finally, if none of the above are convenient, you can use our `toFile` helper: -await client.memories.uploadFile({ file: await toFile(Buffer.from('my bytes'), 'file') }); -await client.memories.uploadFile({ file: await toFile(new Uint8Array([0, 1, 2]), 'file') }); -``` - -## Handling errors - -When the library is unable to connect to the API, -or if the API returns a non-success status code (i.e., 4xx or 5xx response), -a subclass of `APIError` will be thrown: - - -```ts -async function main() { - const response = await client.memories - .add({ content: 'This is a detailed article about machine learning concepts...' 
}) - .catch(async (err) => { - if (err instanceof supermemory.APIError) { - console.debug(err.status); // 400 - console.debug(err.name); // BadRequestError - console.debug(err.headers); // {server: 'nginx', ...} - } else { - throw err; - } - }); -} - -main(); -``` - -Error codes are as follows: - -| Status Code | Error Type | -| ----------- | -------------------------- | -| 400 | `BadRequestError` | -| 401 | `AuthenticationError` | -| 403 | `PermissionDeniedError` | -| 404 | `NotFoundError` | -| 422 | `UnprocessableEntityError` | -| 429 | `RateLimitError` | -| >=500 | `InternalServerError` | -| N/A | `APIConnectionError` | - -### Retries - -Certain errors will be automatically retried 2 times by default, with a short exponential backoff. -Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict, -429 Rate Limit, and >=500 Internal errors will all be retried by default. - -You can use the `maxRetries` option to configure or disable this: - - -```js -// Configure the default for all requests: -const client = new supermemory({ - maxRetries: 0, // default is 2 -}); - -// Or, configure per-request: -await client.memories.add({ content: 'This is a detailed article about machine learning concepts...' }, { - maxRetries: 5, -}); -``` - -### Timeouts - -Requests time out after 1 minute by default. You can configure this with a `timeout` option: - - -```ts -// Configure the default for all requests: -const client = new supermemory({ - timeout: 20 * 1000, // 20 seconds (default is 1 minute) -}); - -// Override per-request: -await client.memories.add({ content: 'This is a detailed article about machine learning concepts...' }, { - timeout: 5 * 1000, -}); -``` - -On timeout, an `APIConnectionTimeoutError` is thrown. - -Note that requests which time out will be [retried twice by default](#retries). 
- -## Advanced Usage - -### Accessing raw Response data (e.g., headers) - -The "raw" `Response` returned by `fetch()` can be accessed through the `.asResponse()` method on the `APIPromise` type that all methods return. -This method returns as soon as the headers for a successful response are received and does not consume the response body, so you are free to write custom parsing or streaming logic. - -You can also use the `.withResponse()` method to get the raw `Response` along with the parsed data. -Unlike `.asResponse()` this method consumes the body, returning once it is parsed. - - -```ts -const client = new supermemory(); - -const response = await client.memories - .add({ content: 'This is a detailed article about machine learning concepts...' }) - .asResponse(); -console.debug(response.headers.get('X-My-Header')); -console.debug(response.statusText); // access the underlying Response object - -const { data: memory, response: raw } = await client.memories - .add({ content: 'This is a detailed article about machine learning concepts...' }) - .withResponse(); -console.debug(raw.headers.get('X-My-Header')); -console.debug(memory.id); -``` - -### Logging - -<Warning> -All log messages are intended for debugging only. The format and content of log messages may change between releases. -</Warning> - -#### Log levels - -The log level can be configured in two ways: - -1. Via the `SUPERMEMORY_LOG` environment variable -2.
Using the `logLevel` client option (overrides the environment variable if set) - -```ts -import supermemory from 'supermemory'; - -const client = new supermemory({ - logLevel: 'debug', // Show all log messages -}); -``` - -Available log levels, from most to least verbose: - -- `'debug'` - Show debug messages, info, warnings, and errors -- `'info'` - Show info messages, warnings, and errors -- `'warn'` - Show warnings and errors (default) -- `'error'` - Show only errors -- `'off'` - Disable all logging - -At the `'debug'` level, all HTTP requests and responses are logged, including headers and bodies. -Some authentication-related headers are redacted, but sensitive data in request and response bodies -may still be visible. - -#### Custom logger - -By default, this library logs to `globalThis.console`. You can also provide a custom logger. -Most logging libraries are supported, including [pino](https://www.npmjs.com/package/pino), [winston](https://www.npmjs.com/package/winston), [bunyan](https://www.npmjs.com/package/bunyan), [consola](https://www.npmjs.com/package/consola), [signale](https://www.npmjs.com/package/signale), and [@std/log](https://jsr.io/@std/log). If your logger doesn't work, please open an issue. - -When providing a custom logger, the `logLevel` option still controls which messages are emitted; messages -below the configured level will not be sent to your logger. - -```ts -import supermemory from 'supermemory'; -import pino from 'pino'; - -const logger = pino(); - -const client = new supermemory({ - logger: logger.child({ name: 'supermemory' }), - logLevel: 'debug', // Send all messages to pino, allowing it to filter -}); -``` - -### Making custom/undocumented requests - -This library is typed for convenient access to the documented API. If you need to access undocumented -endpoints, params, or response properties, the library can still be used.
- -#### Undocumented endpoints - -To make requests to undocumented endpoints, you can use `client.get`, `client.post`, and other HTTP verbs. -Options on the client, such as retries, will be respected when making these requests. - -```ts -await client.post('/some/path', { - body: { some_prop: 'foo' }, - query: { some_query_arg: 'bar' }, -}); -``` - -#### Undocumented request params - -To make requests using undocumented parameters, you may use `// @ts-expect-error` on the undocumented -parameter. This library doesn't validate at runtime that the request matches the type, so any extra values you -send will be sent as-is. - -```ts -client.foo.create({ - foo: 'my_param', - bar: 12, - // @ts-expect-error baz is not yet public - baz: 'undocumented option', -}); -``` - -For requests with the `GET` verb, any extra params will be in the query; all other requests will send the -extra param in the body. - -If you want to explicitly send an extra argument, you can do so with the `query`, `body`, and `headers` request -options. - -#### Undocumented response properties - -To access undocumented response properties, you may use `// @ts-expect-error` on -the response object, or cast the response object to the requisite type. Like the request params, we do not -validate or strip extra properties from the response from the API. - -### Customizing the fetch client - -By default, this library expects a global `fetch` function to be defined. - -If you want to use a different `fetch` function, you can either polyfill the global: - -```ts -import fetch from 'my-fetch'; - -globalThis.fetch = fetch; -``` - -Or pass it to the client: - -```ts -import supermemory from 'supermemory'; -import fetch from 'my-fetch'; - -const client = new supermemory({ fetch }); -``` - -### Fetch options - -If you want to set custom `fetch` options without overriding the `fetch` function, you can provide a `fetchOptions` object when instantiating the client or making a request.
(Request-specific options override client options.)
-
-```ts
-import supermemory from 'supermemory';
-
-const client = new supermemory({
-  fetchOptions: {
-    // `RequestInit` options
-  },
-});
-```
-
-#### Configuring proxies
-
-To modify proxy behavior, you can provide custom `fetchOptions` that add runtime-specific proxy options to requests:
-
-```ts
-// Node.js (undici `dispatcher` option)
-import supermemory from 'supermemory';
-import * as undici from 'undici';
-
-const proxyAgent = new undici.ProxyAgent('http://localhost:8888');
-const client = new supermemory({
-  fetchOptions: {
-    dispatcher: proxyAgent,
-  },
-});
-```
-
-```ts
-// Bun (`proxy` fetch option)
-import supermemory from 'supermemory';
-
-const client = new supermemory({
-  fetchOptions: {
-    proxy: 'http://localhost:8888',
-  },
-});
-```
-
-```ts
-// Deno (`client` fetch option)
-import supermemory from 'npm:supermemory';
-
-const httpClient = Deno.createHttpClient({ proxy: { url: 'http://localhost:8888' } });
-const client = new supermemory({
-  fetchOptions: {
-    client: httpClient,
-  },
-});
-```
-
-## Semantic versioning
-
-This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
-
-1. Changes that only affect static types, without breaking runtime behavior.
-2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
-3. Changes that we do not expect to impact the vast majority of users in practice.
-
-We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
-
-We are keen for your feedback; please open an [issue](https://www.github.com/supermemoryai/sdk-ts/issues) with questions, bugs, or suggestions.
-
-## Requirements
-
-TypeScript >= 4.9 is supported.
-
-The following runtimes are supported:
-
-- Web browsers (up-to-date Chrome, Firefox, Safari, Edge, and more)
-- Node.js 20 LTS or later ([non-EOL](https://endoflife.date/nodejs) versions)
-- Deno v1.28.0 or higher
-- Bun 1.0 or later
-- Cloudflare Workers
-- Vercel Edge Runtime
-- Jest 28 or greater with the `"node"` environment (`"jsdom"` is not supported at this time)
-- Nitro v2.6 or greater
-
-Note that React Native is not supported at this time.
-
-If you are interested in other runtime environments, please open or upvote an issue on GitHub.
\ No newline at end of file
diff --git a/apps/docs/memory-api/searching/searching-memories.mdx b/apps/docs/memory-api/searching/searching-memories.mdx
deleted file mode 100644
index a94c0c71..00000000
--- a/apps/docs/memory-api/searching/searching-memories.mdx
+++ /dev/null
@@ -1,138 +0,0 @@
----
-title: "Searching Memories"
-description: "Learn how to search for and retrieve content from supermemory"
----
-
-<Accordion title="Best Practices" defaultOpen icon="sparkles">
-1. **Query Formulation**:
-   - Use natural language queries
-   - Include relevant keywords
-   - Be specific but not too verbose
-
-2. **Filtering**:
-   - Use metadata filters for precision
-   - Combine multiple filters when needed
-   - Use appropriate thresholds
-
-3. **Performance**:
-   - Set appropriate result limits
-   - Use specific document/chunk filters
-   - Consider response timing
-</Accordion>
-
-## Basic Search

-To search through your memories, send a POST request to `/v3/search`:
-
-<CodeGroup>
-
-```bash cURL
-curl https://api.supermemory.ai/v3/search \
-  --request POST \
-  --header 'Authorization: Bearer SUPERMEMORY_API_KEY' \
-  --header 'Content-Type: application/json' \
-  --data '{"q": "machine learning concepts", "limit": 10}'
-```
-
-```typescript TypeScript
-await client.search.execute({
-  q: "machine learning concepts",
-  limit: 10,
-});
-```
-
-```python Python
-client.search.execute(
-    q="machine learning concepts",
-    limit=10
-)
-```
-
-</CodeGroup>
-
-The API will return relevant matches with their similarity scores:
-
-```json
-{
-  "results": [
-    {
-      "documentId": "doc_xyz789",
-      "chunks": [
-        {
-          "content": "Machine learning is a subset of artificial intelligence...",
-          "isRelevant": true,
-          "score": 0.85
-        }
-      ],
-      "score": 0.95,
-      "metadata": {
-        "source": "web",
-        "category": "technology"
-      },
-      "title": "Introduction to Machine Learning"
-    }
-  ],
-  "total": 1,
-  "timing": 123.45
-}
-```
-
-## Search Parameters
-
-```json
-{
-  "q": "search query", // Required: Search query string
-  "limit": 10, // Optional: Max results (default: 10)
-  "documentThreshold": 0.5, // Optional: Min document score (0-1)
-  "chunkThreshold": 0.5, // Optional: Min chunk score (0-1)
-  "onlyMatchingChunks": false, // Optional: Skip context chunks
-  "docId": "doc_id", // Optional: Search in specific doc
-  "userId": "user_123", // Optional: Search in user's space
-  "includeSummary": false, // Optional: Include doc summaries
-  "filters": {
-    // Optional: Metadata filters
-    "AND": [
-      {
-        "key": "category",
-        "value": "technology"
-      }
-    ]
-  },
-  "categoriesFilter": [
-    // Optional: Category filters
-    "technology",
-    "science"
-  ]
-}
-```
-
-## Search Response
-
-The search response includes:
-
-```json
-{
-  "results": [
-    {
-      "documentId": "string", // Document ID
-      "chunks": [
-        {
-          // Matching chunks
-          "content": "string", // Chunk content
-          "isRelevant": true, // Is directly relevant
-          "score": 0.95 // Similarity score
-        }
-      ],
-      "score": 0.95, // Document score
-      "metadata": {}, // Document metadata
-      "title": "string", // Document title
-      "createdAt": "string", // Creation date
-      "updatedAt": "string" // Last update date
-    }
-  ],
-  "total": 1, // Total results
-  "timing": 123.45 // Search time (ms)
-}
-```
-
-## Next Steps
-
-Explore more advanced features in our [API Reference](/api-reference/search-memories/search-memories).
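
The documented response shape lends itself to simple client-side post-processing. As an illustrative sketch (the sample data and `relevantChunks` helper below are hypothetical and not part of the supermemory SDK; the types only mirror the response JSON shown in this page), you can narrow each result to its directly relevant chunks:

```typescript
type Chunk = { content: string; isRelevant: boolean; score: number };
type SearchResult = { documentId: string; title: string; score: number; chunks: Chunk[] };

// Sample data mirroring the documented response shape (illustrative only).
const results: SearchResult[] = [
  {
    documentId: "doc_xyz789",
    title: "Introduction to Machine Learning",
    score: 0.95,
    chunks: [
      { content: "Machine learning is a subset of artificial intelligence...", isRelevant: true, score: 0.85 },
      { content: "Surrounding context paragraph.", isRelevant: false, score: 0.4 },
    ],
  },
];

// Keep only chunks flagged as directly relevant and above a score threshold,
// similar in spirit to the `onlyMatchingChunks` and `chunkThreshold` parameters.
function relevantChunks(result: SearchResult, threshold: number): Chunk[] {
  return result.chunks.filter((chunk) => chunk.isRelevant && chunk.score >= threshold);
}

for (const result of results) {
  console.log(result.title, relevantChunks(result, 0.5).length); // → Introduction to Machine Learning 1
}
```

When you only ever need the matching chunks, prefer setting `onlyMatchingChunks` server-side to shrink the payload; a helper like this is mainly useful when you want to keep the full context around while still applying a quick relevance cut in the client.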