path: root/packages
Commit message (branch) — Author, Date, Files, Lines (-/+)
* py package update (pipecat-sdk) — Prasanna721, 2026-01-10, 2 files, -6/+4
* update readme and agents.md — Prasanna721, 2026-01-10, 2 files, -226/+37
* added delta message tracking — Prasanna721, 2026-01-10, 2 files, -151/+70
* fixed req call — Prasanna721, 2026-01-09, 1 file, -1/+1
* removed await and testing code — Prasanna721, 2026-01-09, 3 files, -101/+63
* pipecat-sdk — Prasanna721, 2026-01-09, 8 files, -0/+1166
* feat: allow prompt template for @supermemory/tools package (#655) (01-06-feat_allow_prompt_template_for_supermemory_tools_package) — MaheshtheDev, 2026-01-07, 6 files, -41/+206
  ## Add customizable prompt templates for memory injection

  **Changes:**
  - Add `promptTemplate` option to `withSupermemory()` for full control over injected memory format (XML, custom branding, etc.)
  - New `MemoryPromptData` interface with `userMemories` and `generalSearchMemories` fields
  - Exclude `system` messages from persistence to avoid storing injected prompts
  - Add JSDoc comments to all public interfaces for better DevEx

  **Usage:**

  ```typescript
  const customPrompt = (data: MemoryPromptData) => `
  <user_memories>
  ${data.userMemories}
  ${data.generalSearchMemories}
  </user_memories>
  `.trim()

  const model = withSupermemory(openai("gpt-4"), "user-123", {
    promptTemplate: customPrompt,
  })
  ```
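The `promptTemplate` mechanism described in #655 can be sketched as a plain function over `MemoryPromptData`. This is an illustrative reconstruction under assumptions, not the package's internals; `renderMemoryPrompt` and `defaultTemplate` are hypothetical names, and only the `MemoryPromptData` fields come from the commit message.

```typescript
// Hypothetical sketch of how a promptTemplate option might be applied.
interface MemoryPromptData {
  userMemories: string
  generalSearchMemories: string
}

type PromptTemplate = (data: MemoryPromptData) => string

// Illustrative fallback used when no template is supplied (not the package's actual default).
const defaultTemplate: PromptTemplate = (data) =>
  `User Supermemories:\n${data.userMemories}\n${data.generalSearchMemories}`.trim()

// Render the injected memory prompt, preferring a caller-supplied template.
function renderMemoryPrompt(data: MemoryPromptData, template?: PromptTemplate): string {
  return (template ?? defaultTemplate)(data)
}
```

The point of the design: callers own the final string, so any format (XML tags, custom branding) works without the middleware knowing about it.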
* chore: update the package version of tools (#637) (12-30-chore_update_the_package_version_of_tools) — MaheshtheDev, 2025-12-30, 1 file, -1/+1
* fix(tools): pass apiKey to profile search instead of using process.env (#634) — Arnab Mondal, 2025-12-30, 2 files, -25/+29
* fix deduplication in python sdk (#626) (12-23-fix_deduplication_in_python_sdk) — nexxeln, 2025-12-29, 3 files, -25/+99
  Done in a similar way to the AI SDK integration.
* chore: bump package version — Dhravya Shah, 2025-12-28, 1 file, -1/+1
* MemoryGraph - revamped (#627) — Vidya Rupak, 2025-12-29, 13 files, -372/+2102
* conditional — Dhravya Shah, 2025-12-23, 3 files, -10/+16
* feat(@supermemory/tools): vercel ai sdk compatible with v5 and v6 (#628) (12-23-feat_supermemory_tools_vercel_ai_sdk_compatbile_with_v5_and_v6) — MaheshtheDev, 2025-12-24, 5 files, -165/+299
* bump package — Dhravya Shah, 2025-12-23, 2 files, -3/+3
* fix: deduplicate memories after they are returned, to save tokens — Dhravya Shah, 2025-12-22, 5 files, -38/+201
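The deduplication work above (#626 and the token-saving fix) can be sketched as a pass over retrieved memories before prompt injection. This is a hedged sketch, not the SDK's code; `Memory` and `dedupeMemories` are hypothetical names for illustration.

```typescript
// Illustrative: drop repeated memories so the same text is injected (and billed) once.
interface Memory {
  id: string
  content: string
}

// Deduplicate first by id, then by normalized content, preserving first-seen order.
function dedupeMemories(memories: Memory[]): Memory[] {
  const seenIds = new Set<string>()
  const seenContent = new Set<string>()
  const result: Memory[] = []
  for (const m of memories) {
    const key = m.content.trim().toLowerCase()
    if (seenIds.has(m.id) || seenContent.has(key)) continue
    seenIds.add(m.id)
    seenContent.add(key)
    result.push(m)
  }
  return result
}
```

Normalizing on content as well as id matters when the same memory comes back from two different searches (e.g. user-scoped and general search) under different ids.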
* chore: fix tsdown defaults in withsupermemory package (#623) (12-21-chore_fix_tsdown_defaults_in_withsupermemory_package) — MaheshtheDev, 2025-12-21, 3 files, -2/+3
* Support for conversations in SDKs (#618) (12-15-support_for_conversations) — Dhravya, 2025-12-20, 12 files, -43/+432
* fix: change support email to the one on slack — Dhravya Shah, 2025-12-18, 1 file, -1/+1
* fix: memory graph not visible with just docs — Dhravya Shah, 2025-12-17, 3 files, -15/+12
* Merge branch 'main' of https://github.com/supermemoryai/supermemory — Dhravya Shah, 2025-12-06, 14 files, -1/+3951
    * fix ui issues and package issue (#610) — Mahesh Sanikommu, 2025-12-06, 14 files, -1/+3951 (merged branch)
* chore: bump package versions — Dhravya Shah, 2025-12-06, 1 file, -1/+1
* feat(tools): allow passing apiKey via options for browser support (#599) — Arnab Mondal, 2025-12-05, 3 files, -7/+12
  Co-authored-by: Mahesh Sanikommmu <[email protected]>
* add docs for graph package (#603) (graph-docs) — nexxeln, 2025-12-04, 1 file, -328/+44
* use latest graph and remove old graph (#604) (use-memory-graph-package) — nexxeln, 2025-12-04, 14 files, -3951/+1
* chore(@supermemory/tools): fix the documentation of withSupermemory (#601) (12-03-chore_supermemory_tools_fix_the_documentation_of_withsupermemory) — MaheshtheDev, 2025-12-03, 2 files, -3/+3
  - small docs mismatch on the addMemory default option
* add spaces selector with search (#600) (update-memory-graph) — nexxeln, 2025-12-02, 41 files, -1672/+2088
  Relevant files to review:
  - memory-graph.tsx
  - spaces-dropdown.tsx
  - spaces-dropdown.css.ts
* update quickstart — Dhravya Shah, 2025-11-27, 1 file, -1/+1
* runtime styles injection + let user proxy requests for data in graph package + new playground (#588) (proxy-graph-requests) — nexxeln, 2025-11-22, 25 files, -1140/+272
* package the graph (#563) (shoubhit/eng-358-packaging-graph-component) — nexxeln, 2025-11-19, 65 files, -24/+7178
  Includes:
  - a package that contains a MemoryGraph component which handles fetching data and rendering the graph
  - a playground to test the package

  Problems:
  - the bundle size is huge
  - the styles are kinda broken? we are using [vite-plugin-lib-inject-css](https://www.npmjs.com/package/vite-plugin-lib-inject-css) to inject the styles

  ![image.png](https://app.graphite.com/user-attachments/assets/cb1822c5-850a-45a2-9bfa-72b73436659f.png)
* fix: org switch issue on consumer when dev org exists (#577) (11-11-fix_org_switch_issue_on_consumer_when_dev_org_exists) — MaheshtheDev, 2025-11-11, 1 file, -9/+10
* add openai middleware functionality for python sdk (#546) (openai-middleware-python) — nexxeln, 2025-11-11, 10 files, -22/+3705
  - add openai middleware functionality
  - fix critical type errors and linting issues
  - update readme with middleware documentation
* fix: past due pending users improvements (#572) (11-10-fix_past_due_pending_users_improvements) — MaheshtheDev, 2025-11-10, 1 file, -7/+21
* fix(web): sentry issues across the web app (#570) (11-08-fix_web_sentry_issues_across_the_web_app) — MaheshtheDev, 2025-11-09, 4 files, -29/+87
  Fixes the following Sentry issues:
  - CONSUMER-APP-FF
  - CONSUMER-APP-1T
  - CONSUMER-APP-86
  - CONSUMER-APP-7H
  - CONSUMER-APP-4F
  - CONSUMER-APP-7X
* add support for responses api in openai typescript sdk (#549) — Shoubhit Dash, 2025-11-07, 3 files, -68/+200
* feat(@supermemory/tools): capture assistant responses with filtered memory (#539) — MaheshtheDev, 2025-10-28, 8 files, -160/+404
  ### Added streaming support to the Supermemory middleware and improved memory handling in the AI SDK integration.

  ### What changed?
  - Refactored the middleware architecture to support both streaming and non-streaming responses
  - Extracted memory prompt functionality into a separate module (`memory-prompt.ts`)
  - Added memory saving capability for streaming responses
  - Improved the formatting of memory content with a "User Supermemories:" prefix
  - Added utility function to filter out supermemories from content
  - Created a new streaming example in the test app with a dedicated route and page
  - Updated version from 1.3.0 to 1.3.1 in package.json
  - Simplified installation instructions in README.md
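The "filter out supermemories from content" utility mentioned above can be sketched like this: before persisting a message as a new memory, strip any injected "User Supermemories:" block so the injection is not re-saved. The marker constant and function name are assumptions for illustration, not the package's API.

```typescript
// Hypothetical sketch: remove an injected memory block before persisting content.
const MEMORY_PREFIX = "User Supermemories:"

function stripSupermemories(content: string): string {
  const idx = content.indexOf(MEMORY_PREFIX)
  if (idx === -1) return content // nothing injected, keep as-is
  // Assume the injected block runs to the end of the text; keep only what precedes it.
  return content.slice(0, idx).trimEnd()
}
```

Without a filter like this, every turn would re-ingest the previously injected memories, compounding duplicates on each request.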
* feat: improved add memory UI bits (#502) — Hardik Vora, 2025-10-27, 1 file, -1/+1
* feat: optional posthog initialization (#525) — Saksham Kushwaha, 2025-10-27, 2 files, -4/+12
* fix: openai sdk packaging issue (#532) — MaheshtheDev, 2025-10-27, 2 files, -2/+2
* chore: skip the conditional org switch for better auth state share (#533) — MaheshtheDev, 2025-10-27, 1 file, -8/+9
* feat: withSupermemory for openai sdk (#531) — MaheshtheDev, 2025-10-27, 11 files, -8/+744
  ### TL;DR
  Added OpenAI SDK middleware support for SuperMemory integration, allowing direct memory injection without AI SDK dependency.

  ### What changed?
  - Added `withSupermemory` middleware for OpenAI SDK that automatically injects relevant memories into chat completions
  - Implemented memory search and injection functionality for OpenAI clients
  - Restructured the OpenAI module to separate tools and middleware functionality
  - Updated README with comprehensive documentation and examples for the new OpenAI middleware
  - Added test implementation with a Next.js API route example
  - Reorganized package exports to support the new structure
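The "inject relevant memories into chat completions" step above can be sketched with OpenAI-style message objects. This is a minimal sketch under assumptions: the real `withSupermemory` wraps the OpenAI client itself, and `injectMemories` is a hypothetical helper name.

```typescript
// Illustrative: prepend retrieved memories as a system message so the model
// sees them before the conversation. The caller's array is left untouched.
interface ChatMessage {
  role: "system" | "user" | "assistant"
  content: string
}

function injectMemories(messages: ChatMessage[], memories: string[]): ChatMessage[] {
  if (memories.length === 0) return messages
  const memoryMessage: ChatMessage = {
    role: "system",
    content: `User Supermemories:\n${memories.map((m) => `- ${m}`).join("\n")}`,
  }
  return [memoryMessage, ...messages]
}
```

Returning a new array instead of mutating the input mirrors the prompt-mutation fix noted later in this log: middlewares that mutate the caller's messages cause subtle bugs when the same array is reused across requests.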
* fix: auto switch to expected org (#522) — MaheshtheDev, 2025-10-25, 1 file, -17/+23
* chat app withSupermemory test — Mahesh Sanikommmu, 2025-10-22, 18 files, -1/+432
* add props interface export — Mahesh Sanikommmu, 2025-10-22, 3 files, -9/+10
* add comment — Shoubhit Dash, 2025-10-22, 1 file, -1/+2
* add test — Shoubhit Dash, 2025-10-22, 1 file, -0/+24
* fix prompt mutation in vercel middleware — Shoubhit Dash, 2025-10-22, 1 file, -3/+9
* fix(tools): update the docs for conversational — Mahesh Sanikommmu, 2025-10-19, 2 files, -4/+24
* add conversationId functionality to map to customId in ingestion (#499) — sohamd22, 2025-10-19, 2 files, -5/+37
  ### TL;DR
  Added support for conversation grouping in Supermemory middleware through a new `conversationId` parameter.

  ### What changed?
  - Added a new `conversationId` option to the `withSupermemory` function to group messages into a single document for contextual memory generation
  - Updated the middleware to use this conversation ID when adding memories, using a `customId` format of `conversation:{conversationId}`
  - Created a new `getConversationContent` function that extracts the full conversation content from the prompt parameters
  - Enhanced memory storage to save entire conversations rather than just the last user message
  - Updated documentation and examples to demonstrate the new parameter usage

  ### How to test?
  1. Import the `withSupermemory` function from the package
  2. Create a model with memory using the new `conversationId` parameter:

     ```typescript
     const modelWithMemory = withSupermemory(openai("gpt-4"), "user-123", {
       conversationId: "conversation-456",
       mode: "full",
       addMemory: "always"
     })
     ```

  3. Use the model in a conversation and verify that messages are grouped by the conversation ID
  4. Check that memories are being stored with the custom ID format `conversation:{conversationId}`

  ### Why make this change?
  This enhancement improves the contextual understanding of the AI by allowing related messages to be grouped together as a single conversation document. By using a conversation ID, the system can maintain coherent memory across multiple interactions within the same conversation thread, providing better context retrieval and more relevant responses.
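The `getConversationContent` and `customId` behavior described above can be sketched as follows. The message shape and function bodies are assumptions reconstructed from the commit description, not the SDK's actual implementation; only the `conversation:{conversationId}` format and the exclusion of `system` messages from persistence come from this log.

```typescript
// Hypothetical sketch of flattening a prompt into one conversation document.
interface PromptMessage {
  role: string
  content: string
}

// Exclude injected system prompts (per the exclusion noted in #655) and join
// the remaining turns so the whole conversation is stored as one document.
function getConversationContent(messages: PromptMessage[]): string {
  return messages
    .filter((m) => m.role !== "system")
    .map((m) => `${m.role}: ${m.content}`)
    .join("\n")
}

// customId format described in this commit: "conversation:{conversationId}".
function conversationCustomId(conversationId: string): string {
  return `conversation:${conversationId}`
}
```

Reusing the same `customId` across turns means each ingestion updates one document rather than creating a new memory per message, which is what makes the conversation-level grouping work.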