authorDhravya Shah <[email protected]>2025-09-28 16:42:06 -0700
committerDhravya Shah <[email protected]>2025-09-28 16:42:06 -0700
commit2093b316d9ecb9cfa9c550f436caee08e12f5d11 (patch)
tree07b87fbd48b0b38ef26b9d5f839ad8cd61d82331 /apps/docs/memory-api/sdks
parentMerge branch 'main' of https://github.com/supermemoryai/supermemory (diff)
downloadsupermemory-2093b316d9ecb9cfa9c550f436caee08e12f5d11.tar.xz
supermemory-2093b316d9ecb9cfa9c550f436caee08e12f5d11.zip
migrate docs to public
Diffstat (limited to 'apps/docs/memory-api/sdks')
-rw-r--r--apps/docs/memory-api/sdks/native.mdx68
-rw-r--r--apps/docs/memory-api/sdks/openai-plugins.mdx584
-rw-r--r--apps/docs/memory-api/sdks/overview.mdx24
-rw-r--r--apps/docs/memory-api/sdks/python.mdx349
-rw-r--r--apps/docs/memory-api/sdks/supermemory-npm.mdx5
-rw-r--r--apps/docs/memory-api/sdks/supermemory-pypi.mdx5
-rw-r--r--apps/docs/memory-api/sdks/typescript.mdx391
7 files changed, 1426 insertions, 0 deletions
diff --git a/apps/docs/memory-api/sdks/native.mdx b/apps/docs/memory-api/sdks/native.mdx
new file mode 100644
index 00000000..67f79d18
--- /dev/null
+++ b/apps/docs/memory-api/sdks/native.mdx
@@ -0,0 +1,68 @@
+---
+title: 'Supermemory SDKs'
+sidebarTitle: "Python and JavaScript SDKs"
+description: 'Learn how to use supermemory with Python and JavaScript'
+---
+
+For the full, up-to-date API references, see:
+
+<Columns cols={2}>
+ <Card title="Python SDK" icon="python" href="https://pypi.org/project/supermemory/">
+ </Card>
+
+  <Card title="JavaScript SDK" icon="js" href="https://www.npmjs.com/package/supermemory">
+ </Card>
+</Columns>
+
+
+## Python SDK
+
+### Installation
+
+```sh
+# install from PyPI
+pip install --pre supermemory
+```
+
+### Usage
+
+
+```python
+import os
+from supermemory import Supermemory
+
+client = Supermemory(
+ api_key=os.environ.get("SUPERMEMORY_API_KEY"), # This is the default and can be omitted
+)
+
+response = client.search.documents(
+ q="documents related to python",
+)
+print(response.results)
+```
+
+## JavaScript SDK
+
+### Installation
+
+```sh
+npm install supermemory
+```
+
+### Usage
+
+```js
+import Supermemory from 'supermemory';
+
+const client = new Supermemory({
+ apiKey: process.env['SUPERMEMORY_API_KEY'], // This is the default and can be omitted
+});
+
+async function main() {
+ const response = await client.search.documents({ q: 'documents related to python' });
+
+ console.debug(response.results);
+}
+
+main();
+```
diff --git a/apps/docs/memory-api/sdks/openai-plugins.mdx b/apps/docs/memory-api/sdks/openai-plugins.mdx
new file mode 100644
index 00000000..2fb0a789
--- /dev/null
+++ b/apps/docs/memory-api/sdks/openai-plugins.mdx
@@ -0,0 +1,584 @@
+---
+title: "OpenAI SDK Plugins"
+description: "Memory tools for OpenAI function calling with Supermemory integration"
+---
+
+Add memory capabilities to the official OpenAI SDKs using Supermemory's function calling tools. These plugins provide seamless integration with OpenAI's chat completions and function calling features.
+
+<CardGroup>
+<Card title="Supermemory tools on npm" icon="npm" href="https://www.npmjs.com/package/@supermemory/tools">
+ Check out the NPM page for more details
+</Card>
+<Card title="Supermemory OpenAI SDK on PyPI" icon="python" href="https://pypi.org/project/supermemory-openai-sdk/">
+ Check out the PyPI page for more details
+</Card>
+</CardGroup>
+
+## Installation
+
+<CodeGroup>
+
+```bash Python
+# Using uv (recommended)
+uv add supermemory-openai-sdk
+
+# Or with pip
+pip install supermemory-openai-sdk
+```
+
+```bash JavaScript/TypeScript
+npm install @supermemory/tools
+```
+
+</CodeGroup>
+
+## Quick Start
+
+<CodeGroup>
+
+```python Python SDK
+import asyncio
+import openai
+from supermemory_openai import SupermemoryTools, execute_memory_tool_calls
+
+async def main():
+ # Initialize OpenAI client
+ client = openai.AsyncOpenAI(api_key="your-openai-api-key")
+
+ # Initialize Supermemory tools
+ tools = SupermemoryTools(
+ api_key="your-supermemory-api-key",
+ config={"project_id": "my-project"}
+ )
+
+ # Chat with memory tools
+ response = await client.chat.completions.create(
+ model="gpt-4o",
+ messages=[
+ {
+ "role": "system",
+ "content": "You are a helpful assistant with access to user memories."
+ },
+ {
+ "role": "user",
+ "content": "Remember that I prefer tea over coffee"
+ }
+ ],
+ tools=tools.get_tool_definitions()
+ )
+
+ # Handle tool calls if present
+ if response.choices[0].message.tool_calls:
+ tool_results = await execute_memory_tool_calls(
+ api_key="your-supermemory-api-key",
+ tool_calls=response.choices[0].message.tool_calls,
+ config={"project_id": "my-project"}
+ )
+ print("Tool results:", tool_results)
+
+ print(response.choices[0].message.content)
+
+asyncio.run(main())
+```
+
+```typescript JavaScript/TypeScript SDK
+import { supermemoryTools, getToolDefinitions, createToolCallExecutor } from "@supermemory/tools/openai"
+import OpenAI from "openai"
+
+const client = new OpenAI({
+ apiKey: process.env.OPENAI_API_KEY!,
+})
+
+// Get tool definitions for OpenAI
+const toolDefinitions = getToolDefinitions()
+
+// Create tool executor
+const executeToolCall = createToolCallExecutor(process.env.SUPERMEMORY_API_KEY!, {
+ projectId: "your-project-id",
+})
+
+// Use with OpenAI Chat Completions
+const completion = await client.chat.completions.create({
+ model: "gpt-4",
+ messages: [
+ {
+ role: "user",
+ content: "What do you remember about my preferences?",
+ },
+ ],
+ tools: toolDefinitions,
+})
+
+// Execute tool calls if any
+if (completion.choices[0]?.message.tool_calls) {
+ for (const toolCall of completion.choices[0].message.tool_calls) {
+ const result = await executeToolCall(toolCall)
+ console.log(result)
+ }
+}
+```
+
+</CodeGroup>
+
+## Configuration
+
+### Memory Tools Configuration
+
+<CodeGroup>
+
+```python Python Configuration
+from supermemory_openai import SupermemoryTools
+
+tools = SupermemoryTools(
+ api_key="your-supermemory-api-key",
+ config={
+ "project_id": "my-project", # or use container_tags
+ "base_url": "https://custom-endpoint.com", # optional
+ }
+)
+```
+
+```typescript JavaScript Configuration
+import { supermemoryTools } from "@supermemory/tools/openai"
+
+const tools = supermemoryTools(process.env.SUPERMEMORY_API_KEY!, {
+ projectId: "your-project-id",
+ baseUrl: "https://custom-endpoint.com", // optional
+})
+```
+
+</CodeGroup>
+
+## Available Tools
+
+### Search Memories
+
+Search through user memories using semantic search:
+
+<CodeGroup>
+
+```python Python
+# Search memories
+result = await tools.search_memories(
+ information_to_get="user preferences",
+ limit=10,
+ include_full_docs=True
+)
+print(f"Found {len(result.memories)} memories")
+```
+
+```typescript JavaScript
+// Search memories
+const searchResult = await tools.searchMemories({
+ informationToGet: "user preferences",
+ limit: 10,
+})
+console.log(`Found ${searchResult.memories.length} memories`)
+```
+
+</CodeGroup>
+
+### Add Memory
+
+Store new information in memory:
+
+<CodeGroup>
+
+```python Python
+# Add memory
+result = await tools.add_memory(
+ memory="User prefers tea over coffee"
+)
+print(f"Added memory with ID: {result.memory.id}")
+```
+
+```typescript JavaScript
+// Add memory
+const addResult = await tools.addMemory({
+ memory: "User prefers dark roast coffee",
+})
+console.log(`Added memory with ID: ${addResult.memory.id}`)
+```
+
+</CodeGroup>
+
+### Fetch Memory
+
+Retrieve specific memory by ID:
+
+<CodeGroup>
+
+```python Python
+# Fetch specific memory
+result = await tools.fetch_memory(
+ memory_id="memory-id-here"
+)
+print(f"Memory content: {result.memory.content}")
+```
+
+```typescript JavaScript
+// Fetch specific memory
+const fetchResult = await tools.fetchMemory({
+ memoryId: "memory-id-here"
+})
+console.log(`Memory content: ${fetchResult.memory.content}`)
+```
+
+</CodeGroup>
+
+## Individual Tools
+
+Use tools separately for more granular control:
+
+<CodeGroup>
+
+```python Python Individual Tools
+from supermemory_openai import (
+ create_search_memories_tool,
+ create_add_memory_tool,
+ create_fetch_memory_tool
+)
+
+search_tool = create_search_memories_tool("your-api-key")
+add_tool = create_add_memory_tool("your-api-key")
+fetch_tool = create_fetch_memory_tool("your-api-key")
+
+# Use individual tools in OpenAI function calling
+tools_list = [search_tool, add_tool, fetch_tool]
+```
+
+```typescript JavaScript Individual Tools
+import {
+ createSearchMemoriesTool,
+ createAddMemoryTool,
+ createFetchMemoryTool
+} from "@supermemory/tools/openai"
+
+const searchTool = createSearchMemoriesTool(process.env.SUPERMEMORY_API_KEY!)
+const addTool = createAddMemoryTool(process.env.SUPERMEMORY_API_KEY!)
+const fetchTool = createFetchMemoryTool(process.env.SUPERMEMORY_API_KEY!)
+
+// Use individual tools
+const toolDefinitions = [searchTool, addTool, fetchTool]
+```
+
+</CodeGroup>
+
+## Complete Chat Example
+
+Here's a complete example showing a multi-turn conversation with memory:
+
+<CodeGroup>
+
+```python Complete Python Example
+import asyncio
+import openai
+from supermemory_openai import SupermemoryTools, execute_memory_tool_calls
+
+async def chat_with_memory():
+ client = openai.AsyncOpenAI()
+ tools = SupermemoryTools(
+ api_key="your-supermemory-api-key",
+ config={"project_id": "chat-example"}
+ )
+
+ messages = [
+ {
+ "role": "system",
+ "content": """You are a helpful assistant with memory capabilities.
+ When users share personal information, remember it using addMemory.
+ When they ask questions, search your memories to provide personalized responses."""
+ }
+ ]
+
+ while True:
+ user_input = input("You: ")
+ if user_input.lower() == 'quit':
+ break
+
+ messages.append({"role": "user", "content": user_input})
+
+ # Get AI response with tools
+ response = await client.chat.completions.create(
+ model="gpt-4o",
+ messages=messages,
+ tools=tools.get_tool_definitions()
+ )
+
+ # Handle tool calls
+ if response.choices[0].message.tool_calls:
+ messages.append(response.choices[0].message)
+
+ tool_results = await execute_memory_tool_calls(
+ api_key="your-supermemory-api-key",
+ tool_calls=response.choices[0].message.tool_calls,
+ config={"project_id": "chat-example"}
+ )
+
+ messages.extend(tool_results)
+
+ # Get final response after tool execution
+ final_response = await client.chat.completions.create(
+ model="gpt-4o",
+ messages=messages
+ )
+
+ assistant_message = final_response.choices[0].message.content
+ else:
+ assistant_message = response.choices[0].message.content
+ messages.append({"role": "assistant", "content": assistant_message})
+
+ print(f"Assistant: {assistant_message}")
+
+# Run the chat
+asyncio.run(chat_with_memory())
+```
+
+```typescript Complete JavaScript Example
+import OpenAI from "openai"
+import { getToolDefinitions, createToolCallExecutor } from "@supermemory/tools/openai"
+import readline from 'readline'
+
+const client = new OpenAI()
+const executeToolCall = createToolCallExecutor(process.env.SUPERMEMORY_API_KEY!, {
+ projectId: "chat-example",
+})
+
+const rl = readline.createInterface({
+ input: process.stdin,
+ output: process.stdout,
+})
+
+async function chatWithMemory() {
+ const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
+ {
+ role: "system",
+ content: `You are a helpful assistant with memory capabilities.
+ When users share personal information, remember it using addMemory.
+ When they ask questions, search your memories to provide personalized responses.`
+ }
+ ]
+
+ const askQuestion = () => {
+ rl.question("You: ", async (userInput) => {
+ if (userInput.toLowerCase() === 'quit') {
+ rl.close()
+ return
+ }
+
+ messages.push({ role: "user", content: userInput })
+
+ // Get AI response with tools
+ const response = await client.chat.completions.create({
+ model: "gpt-4",
+ messages,
+ tools: getToolDefinitions(),
+ })
+
+ const choice = response.choices[0]
+ if (choice?.message.tool_calls) {
+ messages.push(choice.message)
+
+ // Execute tool calls
+ for (const toolCall of choice.message.tool_calls) {
+ const result = await executeToolCall(toolCall)
+ messages.push({
+ role: "tool",
+ tool_call_id: toolCall.id,
+ content: JSON.stringify(result),
+ })
+ }
+
+ // Get final response after tool execution
+ const finalResponse = await client.chat.completions.create({
+ model: "gpt-4",
+ messages,
+ })
+
+ const assistantMessage = finalResponse.choices[0]?.message.content || "No response"
+ console.log(`Assistant: ${assistantMessage}`)
+ messages.push({ role: "assistant", content: assistantMessage })
+ } else {
+ const assistantMessage = choice?.message.content || "No response"
+ console.log(`Assistant: ${assistantMessage}`)
+ messages.push({ role: "assistant", content: assistantMessage })
+ }
+
+ askQuestion()
+ })
+ }
+
+ console.log("Chat with memory started. Type 'quit' to exit.")
+ askQuestion()
+}
+
+chatWithMemory()
+```
+
+</CodeGroup>
+
+## Error Handling
+
+Handle errors gracefully in your applications:
+
+<CodeGroup>
+
+```python Python Error Handling
+from supermemory_openai import SupermemoryTools
+import openai
+
+async def safe_chat():
+ try:
+ client = openai.AsyncOpenAI()
+ tools = SupermemoryTools(api_key="your-api-key")
+
+ response = await client.chat.completions.create(
+ model="gpt-4o",
+ messages=[{"role": "user", "content": "Hello"}],
+ tools=tools.get_tool_definitions()
+ )
+
+ except openai.APIError as e:
+ print(f"OpenAI API error: {e}")
+ except Exception as e:
+ print(f"Unexpected error: {e}")
+```
+
+```typescript JavaScript Error Handling
+import OpenAI from "openai"
+import { getToolDefinitions } from "@supermemory/tools/openai"
+
+async function safeChat() {
+ try {
+ const client = new OpenAI()
+
+ const response = await client.chat.completions.create({
+ model: "gpt-4",
+ messages: [{ role: "user", content: "Hello" }],
+ tools: getToolDefinitions(),
+ })
+
+ } catch (error) {
+ if (error instanceof OpenAI.APIError) {
+ console.error("OpenAI API error:", error.message)
+ } else {
+ console.error("Unexpected error:", error)
+ }
+ }
+}
+```
+
+</CodeGroup>
+
+## API Reference
+
+### Python SDK
+
+#### `SupermemoryTools`
+
+**Constructor**
+```python
+SupermemoryTools(
+ api_key: str,
+ config: Optional[SupermemoryToolsConfig] = None
+)
+```
+
+**Methods**
+- `get_tool_definitions()` - Get OpenAI function definitions
+- `search_memories(information_to_get, limit, include_full_docs)` - Search user memories
+- `add_memory(memory)` - Add new memory
+- `fetch_memory(memory_id)` - Fetch specific memory by ID
+- `execute_tool_call(tool_call)` - Execute individual tool call
+
+#### `execute_memory_tool_calls`
+
+```python
+execute_memory_tool_calls(
+ api_key: str,
+ tool_calls: List[ToolCall],
+ config: Optional[SupermemoryToolsConfig] = None
+) -> List[dict]
+```
+
+### JavaScript SDK
+
+#### `supermemoryTools`
+
+```typescript
+supermemoryTools(
+ apiKey: string,
+ config?: { projectId?: string; baseUrl?: string }
+)
+```
+
+#### `createToolCallExecutor`
+
+```typescript
+createToolCallExecutor(
+ apiKey: string,
+ config?: { projectId?: string; baseUrl?: string }
+): (toolCall: OpenAI.Chat.ChatCompletionMessageToolCall) => Promise<any>
+```
+
+## Environment Variables
+
+Set these environment variables:
+
+```bash
+SUPERMEMORY_API_KEY=your_supermemory_key
+OPENAI_API_KEY=your_openai_key
+SUPERMEMORY_BASE_URL=https://custom-endpoint.com # optional
+```
+
+## Development
+
+### Python Setup
+
+```bash
+# Install uv
+curl -LsSf https://astral.sh/uv/install.sh | sh
+
+# Setup project
+git clone <repository-url>
+cd packages/openai-sdk-python
+uv sync --dev
+
+# Run tests
+uv run pytest
+
+# Type checking
+uv run mypy src/supermemory_openai
+
+# Formatting
+uv run black src/ tests/
+uv run isort src/ tests/
+```
+
+### JavaScript Setup
+
+```bash
+# Install dependencies
+npm install
+
+# Run tests
+npm test
+
+# Type checking
+npm run type-check
+
+# Linting
+npm run lint
+```
+
+## Next Steps
+
+<CardGroup cols={2}>
+ <Card title="AI SDK Integration" icon="triangle" href="/ai-sdk/overview">
+ Use with Vercel AI SDK for streamlined development
+ </Card>
+
+ <Card title="Memory API" icon="database" href="/memory-api/overview">
+ Direct API access for advanced memory management
+ </Card>
+</CardGroup>
diff --git a/apps/docs/memory-api/sdks/overview.mdx b/apps/docs/memory-api/sdks/overview.mdx
new file mode 100644
index 00000000..c0f536b5
--- /dev/null
+++ b/apps/docs/memory-api/sdks/overview.mdx
@@ -0,0 +1,24 @@
+---
+title: "Overview"
+---
+
+<Columns cols={2}>
+ <Card title="Native Python and Typescript/JS SDKs" icon="code" href="/memory-api/sdks/native">
+ <br/>
+ ```pip install supermemory```
+
+ ```npm install supermemory```
+ </Card>
+
+ <Card title="AI SDK plugin" icon="triangle" href="/ai-sdk/overview">
+ Easy to use with Vercel AI SDK
+ </Card>
+
+ <Card title="OpenAI SDK plugins" icon="sparkles" href="/memory-api/sdks/openai-plugins">
+  Use supermemory with the Python and JavaScript OpenAI SDKs
+ </Card>
+
+ <Card title="Request more plugins" icon="life-buoy" href="mailto:[email protected]">
+  We'll add support for your favorite SDKs as soon as we can.
+ </Card>
+</Columns>
diff --git a/apps/docs/memory-api/sdks/python.mdx b/apps/docs/memory-api/sdks/python.mdx
new file mode 100644
index 00000000..2b1f56fc
--- /dev/null
+++ b/apps/docs/memory-api/sdks/python.mdx
@@ -0,0 +1,349 @@
+---
+title: 'Python SDK'
+sidebarTitle: "Python"
+description: 'Learn how to use supermemory with Python'
+---
+
+## Installation
+
+```sh
+# install from PyPI
+pip install --pre supermemory
+```
+
+## Usage
+
+
+```python
+import os
+from supermemory import Supermemory
+
+client = Supermemory(
+ api_key=os.environ.get("SUPERMEMORY_API_KEY"), # This is the default and can be omitted
+)
+
+response = client.search.execute(
+ q="documents related to python",
+)
+print(response.results)
+```
+
+While you can provide an `api_key` keyword argument,
+we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
+to add `SUPERMEMORY_API_KEY="My API Key"` to your `.env` file
+so that your API Key is not stored in source control.
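
As an illustration of that pattern, here is a stdlib-only sketch of the basic `KEY=VALUE` loading that python-dotenv performs (the real library also handles quoting, interpolation, and more — this is not a replacement for it):

```python
import os
from pathlib import Path

def load_dotenv_minimal(path: str = ".env") -> None:
    """Stdlib-only sketch of python-dotenv's basic behavior: read simple
    KEY=VALUE lines and place them into os.environ without overwriting
    variables that are already set."""
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text().splitlines():
        line = line.strip()
        # Skip blanks, comments, and lines without an assignment
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip().strip('"'))

# Safe to call even when no .env file exists
load_dotenv_minimal()
```

With a `.env` file containing `SUPERMEMORY_API_KEY="My API Key"`, the client then picks the key up from the environment automatically.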
+
+## Async usage
+
+Simply import `AsyncSupermemory` instead of `Supermemory` and use `await` with each API call:
+
+```python
+import os
+import asyncio
+from supermemory import AsyncSupermemory
+
+client = AsyncSupermemory(
+ api_key=os.environ.get("SUPERMEMORY_API_KEY"), # This is the default and can be omitted
+)
+
+
+async def main() -> None:
+ response = await client.search.execute(
+ q="documents related to python",
+ )
+ print(response.results)
+
+
+asyncio.run(main())
+```
+
+Functionality between the synchronous and asynchronous clients is otherwise identical.
+
+## Using types
+
+Nested request parameters are [TypedDicts](https://docs.python.org/3/library/typing.html#typing.TypedDict). Responses are [Pydantic models](https://docs.pydantic.dev) which also provide helper methods for things like:
+
+- Serializing back into JSON, `model.to_json()`
+- Converting to a dictionary, `model.to_dict()`
+
+Typed requests and responses provide autocomplete and documentation within your editor. If you would like to see type errors in VS Code to help catch bugs earlier, set `python.analysis.typeCheckingMode` to `basic`.
+
+## File uploads
+
+Request parameters that correspond to file uploads can be passed as `bytes`, a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, or a tuple of `(filename, contents, media type)`.
+
+```python
+from pathlib import Path
+from supermemory import Supermemory
+
+client = Supermemory()
+
+client.memories.upload_file(
+ file=Path("/path/to/file"),
+)
+```
+
+The async client uses the exact same interface. If you pass a [`PathLike`](https://docs.python.org/3/library/os.html#os.PathLike) instance, the file contents will be read asynchronously automatically.
+
+## Handling errors
+
+When the library is unable to connect to the API (for example, due to network connection problems or a timeout), a subclass of `supermemory.APIConnectionError` is raised.
+
+When the API returns a non-success status code (that is, 4xx or 5xx
+response), a subclass of `supermemory.APIStatusError` is raised, containing `status_code` and `response` properties.
+
+All errors inherit from `supermemory.APIError`.
+
+```python
+import supermemory
+from supermemory import Supermemory
+
+client = Supermemory()
+
+try:
+ client.memories.add(
+ content="This is a detailed article about machine learning concepts...",
+ )
+except supermemory.APIConnectionError as e:
+ print("The server could not be reached")
+ print(e.__cause__) # an underlying Exception, likely raised within httpx.
+except supermemory.RateLimitError as e:
+ print("A 429 status code was received; we should back off a bit.")
+except supermemory.APIStatusError as e:
+ print("Another non-200-range status code was received")
+ print(e.status_code)
+ print(e.response)
+```
+
+Error codes are as follows:
+
+| Status Code | Error Type |
+| ----------- | -------------------------- |
+| 400 | `BadRequestError` |
+| 401 | `AuthenticationError` |
+| 403 | `PermissionDeniedError` |
+| 404 | `NotFoundError` |
+| 422 | `UnprocessableEntityError` |
+| 429 | `RateLimitError` |
+| >=500 | `InternalServerError` |
+| N/A | `APIConnectionError` |
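
If it helps to see the table as code, this illustrative lookup mirrors it (the SDK raises these classes for you; you never construct such a mapping yourself):

```python
# Illustrative only: mirrors the status-code table above.
STATUS_TO_ERROR = {
    400: "BadRequestError",
    401: "AuthenticationError",
    403: "PermissionDeniedError",
    404: "NotFoundError",
    422: "UnprocessableEntityError",
    429: "RateLimitError",
}

def error_name(status_code: int) -> str:
    """Name of the supermemory error class raised for a given status code."""
    if status_code >= 500:
        return "InternalServerError"
    # Other unlisted status codes fall back to the generic APIStatusError
    return STATUS_TO_ERROR.get(status_code, "APIStatusError")

print(error_name(429))  # RateLimitError
```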
+
+### Retries
+
+Certain errors are automatically retried 2 times by default, with a short exponential backoff.
+Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
+429 Rate Limit, and >=500 Internal errors are all retried by default.
+
+You can use the `max_retries` option to configure or disable retry settings:
+
+```python
+from supermemory import Supermemory
+
+# Configure the default for all requests:
+client = Supermemory(
+ # default is 2
+ max_retries=0,
+)
+
+# Or, configure per-request:
+client.with_options(max_retries=5).memories.add(
+ content="This is a detailed article about machine learning concepts...",
+)
+```
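
The exact backoff schedule is an internal detail, but retry-with-exponential-backoff generally looks like the following sketch (the base delay and cap here are assumptions for illustration, not the SDK's actual values):

```python
def backoff_delays(max_retries: int = 2, base: float = 0.5, cap: float = 8.0) -> list[float]:
    """Illustrative delays (in seconds) a client might sleep between retries:
    the wait doubles on each attempt and is capped. Real clients typically
    also add random jitter, omitted here for clarity."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]

print(backoff_delays())  # [0.5, 1.0]
```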
+
+### Timeouts
+
+By default requests time out after 1 minute. You can configure this with a `timeout` option,
+which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/#fine-tuning-the-configuration) object:
+
+```python
+import httpx
+from supermemory import Supermemory
+
+# Configure the default for all requests:
+client = Supermemory(
+ # 20 seconds (default is 1 minute)
+ timeout=20.0,
+)
+
+# More granular control:
+client = Supermemory(
+ timeout=httpx.Timeout(60.0, read=5.0, write=10.0, connect=2.0),
+)
+
+# Override per-request:
+client.with_options(timeout=5.0).memories.add(
+ content="This is a detailed article about machine learning concepts...",
+)
+```
+
+On timeout, an `APITimeoutError` is thrown.
+
+Note that requests that time out are [retried twice by default](#retries).
+
+## Advanced
+
+### Logging
+
+We use the standard library [`logging`](https://docs.python.org/3/library/logging.html) module.
+
+You can enable logging by setting the environment variable `SUPERMEMORY_LOG` to `info`.
+
+```shell
+$ export SUPERMEMORY_LOG=info
+```
+
+Or to `debug` for more verbose logging.
+
+### How to tell whether `None` means `null` or missing
+
+In an API response, a field may be explicitly `null`, or missing entirely; in either case, its value is `None` in this library. You can differentiate the two cases with `.model_fields_set`:
+
+```py
+if response.my_field is None:
+ if 'my_field' not in response.model_fields_set:
+ print('Got json like {}, without a "my_field" key present at all.')
+ else:
+ print('Got json like {"my_field": null}.')
+```
+
+### Accessing raw response data (e.g. headers)
+
+The "raw" Response object can be accessed by prefixing `.with_raw_response.` to any HTTP method call, e.g.,
+
+```py
+from supermemory import Supermemory
+
+client = Supermemory()
+response = client.memories.with_raw_response.add(
+ content="This is a detailed article about machine learning concepts...",
+)
+print(response.headers.get('X-My-Header'))
+
+memory = response.parse() # get the object that `memories.add()` would have returned
+print(memory.id)
+```
+
+These methods return an [`APIResponse`](https://github.com/supermemoryai/python-sdk/tree/main/src/supermemory/_response.py) object.
+
+The async client returns an [`AsyncAPIResponse`](https://github.com/supermemoryai/python-sdk/tree/main/src/supermemory/_response.py) with the same structure, the only difference being `await`able methods for reading the response content.
+
+#### `.with_streaming_response`
+
+The above interface eagerly reads the full response body when you make the request, which may not always be what you want.
+
+To stream the response body, use `.with_streaming_response` instead, which requires a context manager and only reads the response body once you call `.read()`, `.text()`, `.json()`, `.iter_bytes()`, `.iter_text()`, `.iter_lines()` or `.parse()`. In the async client, these are async methods.
+
+```python
+with client.memories.with_streaming_response.add(
+ content="This is a detailed article about machine learning concepts...",
+) as response:
+ print(response.headers.get("X-My-Header"))
+
+ for line in response.iter_lines():
+ print(line)
+```
+
+The context manager is required so that the response will reliably be closed.
+
+### Making custom/undocumented requests
+
+This library is typed for convenient access to the documented API.
+
+If you need to access undocumented endpoints, params, or response properties, the library can still be used.
+
+#### Undocumented endpoints
+
+To make requests to undocumented endpoints, you can make requests using `client.get`, `client.post`, and other
+http verbs. Options on the client will be respected (such as retries) when making this request.
+
+```py
+import httpx
+
+response = client.post(
+ "/foo",
+ cast_to=httpx.Response,
+ body={"my_param": True},
+)
+
+print(response.headers.get("x-foo"))
+```
+
+#### Undocumented request params
+
+If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
+options.
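
Conceptually, `extra_query` entries are merged into the request's query string. This stdlib sketch shows that merge (illustrative only, not the SDK's internal code):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def merge_extra_query(url: str, extra_query: dict) -> str:
    """Append or override query params the way an `extra_query` option would."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query.update(extra_query)  # extra params win on key collisions
    return urlunsplit(parts._replace(query=urlencode(query)))

print(merge_extra_query("https://api.example.com/v3/search?q=python", {"my_param": "true"}))
```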
+
+#### Undocumented response properties
+
+To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
+can also get all the extra fields on the Pydantic model as a dict with
+[`response.model_extra`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel.model_extra).
+
+### Configuring the HTTP client
+
+You can directly override the [httpx client](https://www.python-httpx.org/api/#client) to customize it for your use case, including:
+
+- Support for [proxies](https://www.python-httpx.org/advanced/proxies/)
+- Custom [transports](https://www.python-httpx.org/advanced/transports/)
+- Additional [advanced](https://www.python-httpx.org/advanced/clients/) functionality
+
+```python
+import httpx
+from supermemory import Supermemory, DefaultHttpxClient
+
+client = Supermemory(
+ # Or use the `SUPERMEMORY_BASE_URL` env var
+ base_url="http://my.test.server.example.com:8083",
+ http_client=DefaultHttpxClient(
+ proxy="http://my.test.proxy.example.com",
+ transport=httpx.HTTPTransport(local_address="0.0.0.0"),
+ ),
+)
+```
+
+You can also customize the client on a per-request basis by using `with_options()`:
+
+```python
+client.with_options(http_client=DefaultHttpxClient(...))
+```
+
+### Managing HTTP resources
+
+By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
+
+```py
+from supermemory import Supermemory
+
+with Supermemory() as client:
+ # make requests here
+ ...
+
+# HTTP client is now closed
+```
+
+## Versioning
+
+This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
+
+1. Changes that only affect static types, without breaking runtime behavior.
+2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
+3. Changes that we do not expect to impact the vast majority of users in practice.
+
+We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
+
+We are keen for your feedback; please open an [issue](https://www.github.com/supermemoryai/python-sdk/issues) with questions, bugs, or suggestions.
+
+### Determining the installed version
+
+If you've upgraded to the latest version but aren't seeing any new features you were expecting then your python environment is likely still using an older version.
+
+You can determine the version that is being used at runtime with:
+
+```py
+import supermemory
+print(supermemory.__version__)
+```
+
+## Requirements
+
+Python 3.8 or higher.
\ No newline at end of file
diff --git a/apps/docs/memory-api/sdks/supermemory-npm.mdx b/apps/docs/memory-api/sdks/supermemory-npm.mdx
new file mode 100644
index 00000000..c872458a
--- /dev/null
+++ b/apps/docs/memory-api/sdks/supermemory-npm.mdx
@@ -0,0 +1,5 @@
+---
+title: "`supermemory` on npm"
+url: "https://www.npmjs.com/package/supermemory"
+icon: npm
+---
diff --git a/apps/docs/memory-api/sdks/supermemory-pypi.mdx b/apps/docs/memory-api/sdks/supermemory-pypi.mdx
new file mode 100644
index 00000000..1b831245
--- /dev/null
+++ b/apps/docs/memory-api/sdks/supermemory-pypi.mdx
@@ -0,0 +1,5 @@
+---
+title: "`supermemory` on PyPI"
+url: "https://pypi.org/project/supermemory/"
+icon: python
+---
diff --git a/apps/docs/memory-api/sdks/typescript.mdx b/apps/docs/memory-api/sdks/typescript.mdx
new file mode 100644
index 00000000..54cc7137
--- /dev/null
+++ b/apps/docs/memory-api/sdks/typescript.mdx
@@ -0,0 +1,391 @@
+---
+title: 'TypeScript SDK'
+sidebarTitle: "TypeScript"
+description: 'Learn how to use supermemory with TypeScript'
+---
+
+## Installation
+
+```sh
+npm install supermemory
+```
+
+## Usage
+
+```js
+import Supermemory from 'supermemory';
+
+const client = new Supermemory({
+ apiKey: process.env['SUPERMEMORY_API_KEY'], // This is the default and can be omitted
+});
+
+async function main() {
+ const response = await client.search.execute({ q: 'documents related to python' });
+
+ console.debug(response.results);
+}
+
+main();
+```
+
+### Request & Response types
+
+This library includes TypeScript definitions for all request params and response fields. You may import and use them like so:
+
+
+```ts
+import Supermemory from 'supermemory';
+
+const client = new Supermemory({
+ apiKey: process.env['SUPERMEMORY_API_KEY'], // This is the default and can be omitted
+});
+
+async function main() {
+  const params: Supermemory.MemoryAddParams = {
+ content: 'This is a detailed article about machine learning concepts...',
+ };
+  const response: Supermemory.MemoryAddResponse = await client.memories.add(params);
+}
+
+main();
+```
+
+Documentation for each method, request param, and response field is available in docstrings and will appear on hover in most modern editors.
+
+## File uploads
+
+Request parameters that correspond to file uploads can be passed in many different forms:
+
+- `File` (or an object with the same structure)
+- a `fetch` `Response` (or an object with the same structure)
+- an `fs.ReadStream`
+- the return value of our `toFile` helper
+
+```ts
+import fs from 'fs';
+import Supermemory, { toFile } from 'supermemory';
+
+const client = new Supermemory();
+
+// If you have access to Node `fs` we recommend using `fs.createReadStream()`:
+await client.memories.uploadFile({ file: fs.createReadStream('/path/to/file') });
+
+// Or if you have the web `File` API you can pass a `File` instance:
+await client.memories.uploadFile({ file: new File(['my bytes'], 'file') });
+
+// You can also pass a `fetch` `Response`:
+await client.memories.uploadFile({ file: await fetch('https://somesite/file') });
+
+// Finally, if none of the above are convenient, you can use our `toFile` helper:
+await client.memories.uploadFile({ file: await toFile(Buffer.from('my bytes'), 'file') });
+await client.memories.uploadFile({ file: await toFile(new Uint8Array([0, 1, 2]), 'file') });
+```
+
+## Handling errors
+
+When the library is unable to connect to the API,
+or if the API returns a non-success status code (i.e., 4xx or 5xx response),
+a subclass of `APIError` will be thrown:
+
+
+```ts
+async function main() {
+  const response = await client.memories
+    .add({ content: 'This is a detailed article about machine learning concepts...' })
+    .catch(async (err) => {
+      if (err instanceof Supermemory.APIError) {
+        console.debug(err.status); // 400
+        console.debug(err.name); // BadRequestError
+        console.debug(err.headers); // {server: 'nginx', ...}
+      } else {
+        throw err;
+      }
+    });
+}
+
+main();
+```
+
+Error codes are as follows:
+
+| Status Code | Error Type |
+| ----------- | -------------------------- |
+| 400 | `BadRequestError` |
+| 401 | `AuthenticationError` |
+| 403 | `PermissionDeniedError` |
+| 404 | `NotFoundError` |
+| 422 | `UnprocessableEntityError` |
+| 429 | `RateLimitError` |
+| >=500 | `InternalServerError` |
+| N/A | `APIConnectionError` |
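+
+As a rough sketch, the table above maps from status codes to error types like so (illustrative only; `errorNameFor` is not an SDK export):
+
+```ts
+// Hypothetical lookup mirroring the table above.
+function errorNameFor(status?: number): string {
+  if (status === undefined) return 'APIConnectionError'; // no response at all
+  if (status >= 500) return 'InternalServerError';
+  const names: Record<number, string> = {
+    400: 'BadRequestError',
+    401: 'AuthenticationError',
+    403: 'PermissionDeniedError',
+    404: 'NotFoundError',
+    422: 'UnprocessableEntityError',
+    429: 'RateLimitError',
+  };
+  return names[status] ?? 'APIError';
+}
+```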
+
+### Retries
+
+Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
+Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
+429 Rate Limit, and >=500 Internal errors will all be retried by default.
+
+You can use the `maxRetries` option to configure or disable this:
+
+
+```ts
+// Configure the default for all requests:
+const client = new Supermemory({
+  maxRetries: 0, // default is 2
+});
+
+// Or, configure per-request:
+await client.memories.add({ content: 'This is a detailed article about machine learning concepts...' }, {
+  maxRetries: 5,
+});
+```
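+
+Conceptually, the default policy retries only transient failures, with a wait that roughly doubles between attempts. A simplified sketch of that idea (assumed behavior, not the SDK's actual internals; the base and cap values here are placeholders):
+
+```ts
+// Which outcomes are retried by default, per the list above.
+function isRetryable(status?: number): boolean {
+  if (status === undefined) return true; // connection errors have no status
+  return status === 408 || status === 409 || status === 429 || status >= 500;
+}
+
+// Short exponential backoff: the wait doubles each attempt, up to a cap.
+function backoffMs(attempt: number, baseMs = 500, capMs = 8000): number {
+  return Math.min(baseMs * 2 ** attempt, capMs);
+}
+```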
+
+### Timeouts
+
+Requests time out after 1 minute by default. You can configure this with a `timeout` option:
+
+
+```ts
+// Configure the default for all requests:
+const client = new Supermemory({
+  timeout: 20 * 1000, // 20 seconds (default is 1 minute)
+});
+
+// Override per-request:
+await client.memories.add({ content: 'This is a detailed article about machine learning concepts...' }, {
+  timeout: 5 * 1000,
+});
+```
+
+On timeout, an `APIConnectionTimeoutError` is thrown.
+
+Note that requests which time out will be [retried twice by default](#retries).
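+
+The timeout behaves like a race between the request and a timer; when the timer wins, the request fails with a dedicated error type. A self-contained sketch of the idea (illustrative only; `TimeoutError` here stands in for `APIConnectionTimeoutError` and is not the SDK's implementation):
+
+```ts
+// Stand-in error type, for illustration only.
+class TimeoutError extends Error {}
+
+// Race the work against a timer; always clear the timer afterwards.
+async function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
+  let timer: ReturnType<typeof setTimeout> | undefined;
+  const timeout = new Promise<never>((_, reject) => {
+    timer = setTimeout(() => reject(new TimeoutError(`timed out after ${ms}ms`)), ms);
+  });
+  try {
+    return await Promise.race([work, timeout]);
+  } finally {
+    clearTimeout(timer);
+  }
+}
+```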
+
+## Advanced Usage
+
+### Accessing raw Response data (e.g., headers)
+
+The "raw" `Response` returned by `fetch()` can be accessed through the `.asResponse()` method on the `APIPromise` type that all methods return.
+This method returns as soon as the headers for a successful response are received and does not consume the response body, so you are free to write custom parsing or streaming logic.
+
+You can also use the `.withResponse()` method to get the raw `Response` along with the parsed data.
+Unlike `.asResponse()`, this method consumes the body, returning once it is parsed.
+
+
+```ts
+const client = new Supermemory();
+
+const response = await client.memories
+  .add({ content: 'This is a detailed article about machine learning concepts...' })
+  .asResponse();
+console.debug(response.headers.get('X-My-Header'));
+console.debug(response.statusText); // access the underlying Response object
+
+const { data: memory, response: raw } = await client.memories
+  .add({ content: 'This is a detailed article about machine learning concepts...' })
+  .withResponse();
+console.debug(raw.headers.get('X-My-Header'));
+console.debug(memory.id);
+```
+
+### Logging
+
+<Warning>
+All log messages are intended for debugging only. The format and content of log messages may change between releases.
+</Warning>
+
+#### Log levels
+
+The log level can be configured in two ways:
+
+1. Via the `SUPERMEMORY_LOG` environment variable
+2. Using the `logLevel` client option (overrides the environment variable if set)
+
+```ts
+import Supermemory from 'supermemory';
+
+const client = new Supermemory({
+  logLevel: 'debug', // Show all log messages
+});
+```
+
+Available log levels, from most to least verbose:
+
+- `'debug'` - Show debug messages, info, warnings, and errors
+- `'info'` - Show info messages, warnings, and errors
+- `'warn'` - Show warnings and errors (default)
+- `'error'` - Show only errors
+- `'off'` - Disable all logging
+
+At the `'debug'` level, all HTTP requests and responses are logged, including headers and bodies.
+Some authentication-related headers are redacted, but sensitive data in request and response bodies
+may still be visible.
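+
+The ordering above behaves like a simple threshold check, sketched here (illustrative; not the SDK's internals):
+
+```ts
+const levels = ['debug', 'info', 'warn', 'error'] as const;
+type LogLevel = (typeof levels)[number] | 'off';
+
+// A message is emitted only when it is at or above the configured level.
+function shouldLog(configured: LogLevel, message: (typeof levels)[number]): boolean {
+  if (configured === 'off') return false;
+  return levels.indexOf(message) >= levels.indexOf(configured);
+}
+```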
+
+#### Custom logger
+
+By default, this library logs to `globalThis.console`. You can also provide a custom logger.
+Most logging libraries are supported, including [pino](https://www.npmjs.com/package/pino), [winston](https://www.npmjs.com/package/winston), [bunyan](https://www.npmjs.com/package/bunyan), [consola](https://www.npmjs.com/package/consola), [signale](https://www.npmjs.com/package/signale), and [@std/log](https://jsr.io/@std/log). If your logger doesn't work, please open an issue.
+
+When providing a custom logger, the `logLevel` option still controls which messages are emitted; messages
+below the configured level will not be sent to your logger.
+
+```ts
+import Supermemory from 'supermemory';
+import pino from 'pino';
+
+const logger = pino();
+
+const client = new Supermemory({
+  logger: logger.child({ name: 'supermemory' }),
+  logLevel: 'debug', // Send all messages to pino, allowing it to filter
+});
+```
+
+### Making custom/undocumented requests
+
+This library is typed for convenient access to the documented API. If you need to access undocumented
+endpoints, params, or response properties, the library can still be used.
+
+#### Undocumented endpoints
+
+To make requests to undocumented endpoints, you can use `client.get`, `client.post`, and other HTTP verbs.
+Options on the client, such as retries, will be respected when making these requests.
+
+```ts
+await client.post('/some/path', {
+ body: { some_prop: 'foo' },
+ query: { some_query_arg: 'bar' },
+});
+```
+
+#### Undocumented request params
+
+To make requests using undocumented parameters, you may use `// @ts-expect-error` on the undocumented
+parameter. This library doesn't validate at runtime that the request matches the type, so any extra values you
+send will be sent as-is.
+
+```ts
+client.foo.create({
+ foo: 'my_param',
+ bar: 12,
+ // @ts-expect-error baz is not yet public
+ baz: 'undocumented option',
+});
+```
+
+For requests with the `GET` verb, any extra params will be sent in the query string; all other requests will send the
+extra params in the body.
+
+If you want to explicitly send an extra argument, you can do so with the `query`, `body`, and `headers` request
+options.
+
+#### Undocumented response properties
+
+To access undocumented response properties, you may use `// @ts-expect-error` on the property access, or cast the
+response object to the requisite type. Like the request params, we do not validate or strip extra properties from
+the response from the API.
+
+### Customizing the fetch client
+
+By default, this library expects that a global `fetch` function is defined.
+
+If you want to use a different `fetch` function, you can either polyfill the global:
+
+```ts
+import fetch from 'my-fetch';
+
+globalThis.fetch = fetch;
+```
+
+Or pass it to the client:
+
+```ts
+import Supermemory from 'supermemory';
+import fetch from 'my-fetch';
+
+const client = new Supermemory({ fetch });
+```
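+
+A custom `fetch` also makes it easy to add cross-cutting behavior before passing it to the client. For example, a minimal logging wrapper (illustrative; `withLogging` is not part of the SDK, and any function with fetch's call shape works):
+
+```ts
+type FetchLike = (input: string | URL | Request, init?: RequestInit) => Promise<Response>;
+
+// Wrap an existing fetch-compatible function so every call is logged.
+function withLogging(baseFetch: FetchLike, log: (line: string) => void): FetchLike {
+  return async (input, init) => {
+    const started = Date.now();
+    const res = await baseFetch(input, init);
+    log(`${init?.method ?? 'GET'} ${input} -> ${res.status} (${Date.now() - started}ms)`);
+    return res;
+  };
+}
+```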
+
+### Fetch options
+
+If you want to set custom `fetch` options without overriding the `fetch` function, you can provide a `fetchOptions` object when instantiating the client or making a request. (Request-specific options override client options.)
+
+```ts
+import Supermemory from 'supermemory';
+
+const client = new Supermemory({
+  fetchOptions: {
+    // `RequestInit` options
+  },
+});
+```
+
+#### Configuring proxies
+
+To modify proxy behavior, you can provide custom `fetchOptions` that add runtime-specific proxy options to requests:
+
+In Node.js, route requests through a proxy with undici's `ProxyAgent`:
+
+```ts
+import Supermemory from 'supermemory';
+import * as undici from 'undici';
+
+const proxyAgent = new undici.ProxyAgent('http://localhost:8888');
+const client = new Supermemory({
+  fetchOptions: {
+    dispatcher: proxyAgent,
+  },
+});
+```
+
+In Bun, `fetch` accepts a `proxy` option directly:
+
+```ts
+import Supermemory from 'supermemory';
+
+const client = new Supermemory({
+  fetchOptions: {
+    proxy: 'http://localhost:8888',
+  },
+});
+```
+
+In Deno, pass a custom `HttpClient`:
+
+```ts
+import Supermemory from 'npm:supermemory';
+
+const httpClient = Deno.createHttpClient({ proxy: { url: 'http://localhost:8888' } });
+const client = new Supermemory({
+  fetchOptions: {
+    client: httpClient,
+  },
+});
+```
+
+## Semantic versioning
+
+This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
+
+1. Changes that only affect static types, without breaking runtime behavior.
+2. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
+3. Changes that we do not expect to impact the vast majority of users in practice.
+
+We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
+
+We are keen for your feedback; please open an [issue](https://www.github.com/supermemoryai/sdk-ts/issues) with questions, bugs, or suggestions.
+
+## Requirements
+
+TypeScript >= 4.9 is supported.
+
+The following runtimes are supported:
+
+- Web browsers (Up-to-date Chrome, Firefox, Safari, Edge, and more)
+- Node.js 20 LTS or later ([non-EOL](https://endoflife.date/nodejs)) versions.
+- Deno v1.28.0 or higher.
+- Bun 1.0 or later.
+- Cloudflare Workers.
+- Vercel Edge Runtime.
+- Jest 28 or greater with the `"node"` environment (`"jsdom"` is not supported at this time).
+- Nitro v2.6 or greater.
+
+Note that React Native is not supported at this time.
+
+If you are interested in other runtime environments, please open or upvote an issue on GitHub.