path: root/packages/tools
Commit message (author, date, files changed, lines -/+)

* add support for responses api in openai typescript sdk (#549) (Shoubhit Dash, 2025-11-07, 3 files, -68/+200)
* feat(@supermemory/tools): capture assistant responses with filtered memory (#539) (MaheshtheDev, 2025-10-28, 8 files, -160/+404)

  Added streaming support to the Supermemory middleware and improved memory handling in the AI SDK integration.

  ### What changed?
  - Refactored the middleware architecture to support both streaming and non-streaming responses
  - Extracted memory prompt functionality into a separate module (`memory-prompt.ts`)
  - Added memory saving capability for streaming responses
  - Improved the formatting of memory content with a "User Supermemories:" prefix
  - Added a utility function to filter out supermemories from content
  - Created a new streaming example in the test app with a dedicated route and page
  - Updated version from 1.3.0 to 1.3.1 in package.json
  - Simplified installation instructions in README.md
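The "User Supermemories:" prefix and the filtering utility this commit describes might look roughly like the following minimal sketch. The helper names `formatMemories` and `stripSupermemories` are hypothetical, not the package's actual exports, and the double-newline block delimiter is an assumption:

```typescript
// Hypothetical sketch; the real helpers in @supermemory/tools may differ.
const MEMORY_PREFIX = "User Supermemories:";

// Format retrieved memories under the prefix before injecting them.
function formatMemories(memories: string[]): string {
  return [MEMORY_PREFIX, ...memories.map((m) => `- ${m}`)].join("\n");
}

// Filter an injected memory block (assumed to end at a blank line) back out
// of message content, so saved assistant responses exclude the memories.
function stripSupermemories(content: string): string {
  const start = content.indexOf(MEMORY_PREFIX);
  if (start === -1) return content;
  const end = content.indexOf("\n\n", start);
  return end === -1
    ? content.slice(0, start).trimEnd()
    : (content.slice(0, start) + content.slice(end + 2)).trim();
}
```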
* fix: openai sdk packaging issue (#532) (MaheshtheDev, 2025-10-27, 2 files, -2/+2)
* feat: withSupermemory for openai sdk (#531) (MaheshtheDev, 2025-10-27, 11 files, -8/+744)

  ### TL;DR
  Added OpenAI SDK middleware support for SuperMemory integration, allowing direct memory injection without AI SDK dependency.

  ### What changed?
  - Added `withSupermemory` middleware for OpenAI SDK that automatically injects relevant memories into chat completions
  - Implemented memory search and injection functionality for OpenAI clients
  - Restructured the OpenAI module to separate tools and middleware functionality
  - Updated README with comprehensive documentation and examples for the new OpenAI middleware
  - Added test implementation with a Next.js API route example
  - Reorganized package exports to support the new structure
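Conceptually, the memory injection this commit describes amounts to placing retrieved memories ahead of the conversation before it reaches the model. A minimal sketch, assuming injection via a prepended system message (`injectMemories` is a hypothetical name; the real `withSupermemory` wraps the OpenAI client itself):

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical sketch of memory injection into a chat completion request:
// memories found for the user are prepended as a system message; an empty
// search result leaves the messages untouched.
function injectMemories(messages: ChatMessage[], memories: string[]): ChatMessage[] {
  if (memories.length === 0) return messages;
  const block = "User Supermemories:\n" + memories.map((m) => `- ${m}`).join("\n");
  return [{ role: "system", content: block }, ...messages];
}
```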
* chat app withSupermemory test (Mahesh Sanikommmu, 2025-10-22, 18 files, -1/+432)
* add props interface export (Mahesh Sanikommmu, 2025-10-22, 3 files, -9/+10)
* add comment (Shoubhit Dash, 2025-10-22, 1 file, -1/+2)
* add test (Shoubhit Dash, 2025-10-22, 1 file, -0/+24)
* fix prompt mutation in vercel middleware (Shoubhit Dash, 2025-10-22, 1 file, -3/+9)
* fix(tools): update the docs for conversational (Mahesh Sanikommmu, 2025-10-19, 2 files, -4/+24)
* add conversationId functionality to map to customId in ingestion (#499) (sohamd22, 2025-10-19, 2 files, -5/+37)

  ### TL;DR
  Added support for conversation grouping in Supermemory middleware through a new `conversationId` parameter.

  ### What changed?
  - Added a new `conversationId` option to the `withSupermemory` function to group messages into a single document for contextual memory generation
  - Updated the middleware to use this conversation ID when adding memories, using a `customId` format of `conversation:{conversationId}`
  - Created a new `getConversationContent` function that extracts the full conversation content from the prompt parameters
  - Enhanced memory storage to save entire conversations rather than just the last user message
  - Updated documentation and examples to demonstrate the new parameter usage

  ### How to test?
  1. Import the `withSupermemory` function from the package
  2. Create a model with memory using the new `conversationId` parameter:
     ```typescript
     const modelWithMemory = withSupermemory(openai("gpt-4"), "user-123", {
       conversationId: "conversation-456",
       mode: "full",
       addMemory: "always"
     })
     ```
  3. Use the model in a conversation and verify that messages are grouped by the conversation ID
  4. Check that memories are being stored with the custom ID format `conversation:{conversationId}`

  ### Why make this change?
  This enhancement improves the contextual understanding of the AI by allowing related messages to be grouped together as a single conversation document. By using a conversation ID, the system can maintain coherent memory across multiple interactions within the same conversation thread, providing better context retrieval and more relevant responses.
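The `customId` format named in this commit, `conversation:{conversationId}`, can be sketched as a one-line helper. `conversationCustomId` is a hypothetical name for illustration, not a package export:

```typescript
// Sketch of the customId format the commit describes: memories added for a
// grouped conversation are stored under "conversation:{conversationId}".
function conversationCustomId(conversationId: string): string {
  return `conversation:${conversationId}`;
}
```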
* fix: side effect removal (Mahesh Sanikommmu, 2025-10-10, 1 file, -1/+1)
* fix: add memory code params and documentation in readme (Mahesh Sanikommmu, 2025-10-10, 4 files, -22/+53)
* create memory adding option in vercel sdk (#484) (sohamd22, 2025-10-11, 3 files, -5/+53)

  ### TL;DR
  Added support for automatically saving user messages to Supermemory.

  ### What changed?
  - Added a new `addMemory` option to `wrapVercelLanguageModel` that accepts either "always" or "never" (defaults to "never")
  - Implemented the `addMemoryTool` function to save user messages to Supermemory
  - Modified the middleware to check the `addMemory` setting and save the last user message when appropriate
  - Initialized the Supermemory client in the middleware to enable memory storage

  ### How to test?
  1. Set the `SUPERMEMORY_API_KEY` environment variable
  2. Use the `wrapVercelLanguageModel` function with the new `addMemory: "always"` option
  3. Send a user message through the model
  4. Verify that the message is saved to Supermemory with the specified container tag

  ### Why make this change?
  This change enables automatic memory creation from user messages, which improves the system's ability to build a knowledge base without requiring explicit memory creation calls. This is particularly useful for applications that want to automatically capture and store user interactions for future reference.
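The selection logic this commit describes, where the middleware saves the last user message only when `addMemory` is "always", can be sketched as a pure function. `lastUserMessageToSave` and the `Msg` type are hypothetical names for illustration:

```typescript
type Msg = { role: "user" | "assistant"; content: string };

// Sketch of the middleware's decision per the commit description: with
// addMemory "always" the last user message is saved to Supermemory;
// "never" (the default) skips saving entirely.
function lastUserMessageToSave(
  messages: Msg[],
  addMemory: "always" | "never" = "never"
): string | null {
  if (addMemory !== "always") return null;
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].role === "user") return messages[i].content;
  }
  return null;
}
```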
* feat: ai sdk language model withSupermemory (#446) (MaheshtheDev, 2025-10-10, 9 files, -1/+972)
* Revert "test(ai-sdk): streamText and generateText for ai sdk" (#466) (Dhravya Shah, 2025-10-08, 4 files, -159/+2)
* test(ai-sdk): streamText and generateText for ai sdk (#451) (Mahesh Sanikommu, 2025-10-08, 4 files, -2/+159)
* fix: model names (Dhravya Shah, 2025-10-03, 2 files, -3/+3)
* fix: docs (Dhravya Shah, 2025-10-03, 1 file, -2/+2)
* fix: tools files (Dhravya Shah, 2025-10-02, 2 files, -2/+8)
* feat: Add memory vs rag and migration section to docs (Dhravya Shah, 2025-10-01, 1 file, -43/+100)
* feat: Claude memory integration (Dhravya Shah, 2025-09-29, 10 files, -3/+2261)
* bump version (Dhravya Shah, 2025-09-24, 1 file, -1/+1)
* update: Readme (Dhravya Shah, 2025-09-13, 1 file, -5/+5)
* feat: openai python sdk (#409) (CodeWithShreyans, 2025-09-03, 1 file, -0/+9)
* feat: new tools package (#407) (CodeWithShreyans, 2025-09-02, 11 files, -0/+969)