path: root/packages/tools/README.md

Commit log (message · author · date · diffstat):
* feat: tools package strict mode for openai models (#699) · MaheshtheDev · 2026-01-23 · 1 file changed, +33/-0
* fix(tools): multi step agent prompt caching (#685) · MaheshtheDev · 2026-01-20 · 1 file changed, +1/-0
* feat: allow prompt template for @supermemory/tools package (#655) · MaheshtheDev · 2026-01-07 · 1 file changed, +35/-0

  ## Add customizable prompt templates for memory injection

  **Changes:**
  - Add `promptTemplate` option to `withSupermemory()` for full control over injected memory format (XML, custom branding, etc.)
  - New `MemoryPromptData` interface with `userMemories` and `generalSearchMemories` fields
  - Exclude `system` messages from persistence to avoid storing injected prompts
  - Add JSDoc comments to all public interfaces for better DevEx

  **Usage:**

  ```typescript
  const customPrompt = (data: MemoryPromptData) => `
  <user_memories>
  ${data.userMemories}
  ${data.generalSearchMemories}
  </user_memories>
  `.trim()

  const model = withSupermemory(openai("gpt-4"), "user-123", {
    promptTemplate: customPrompt,
  })
  ```
* chore: fix tsdown defaults in withsupermemory package (#623) · MaheshtheDev · 2025-12-21 · 1 file changed, +1/-1
* feat(tools): allow passing apiKey via options for browser support (#599) · Arnab Mondal · 2025-12-05 · 1 file changed, +3/-1

  Co-authored-by: Mahesh Sanikommmu <[email protected]>
* feat(@supermemory/tools): capture assistant responses with filtered memory (#539) · MaheshtheDev · 2025-10-28 · 1 file changed, +0/-6

  ### Added streaming support to the Supermemory middleware and improved memory handling in the AI SDK integration.

  ### What changed?
  - Refactored the middleware architecture to support both streaming and non-streaming responses
  - Extracted memory prompt functionality into a separate module (`memory-prompt.ts`)
  - Added memory saving capability for streaming responses
  - Improved the formatting of memory content with a "User Supermemories:" prefix
  - Added a utility function to filter out supermemories from content
  - Created a new streaming example in the test app with a dedicated route and page
  - Updated version from 1.3.0 to 1.3.1 in package.json
  - Simplified installation instructions in README.md
* feat: withSupermemory for openai sdk (#531) · MaheshtheDev · 2025-10-27 · 1 file changed, +101/-2

  ### TL;DR
  Added OpenAI SDK middleware support for Supermemory integration, allowing direct memory injection without an AI SDK dependency.

  ### What changed?
  - Added `withSupermemory` middleware for the OpenAI SDK that automatically injects relevant memories into chat completions
  - Implemented memory search and injection functionality for OpenAI clients
  - Restructured the OpenAI module to separate tools and middleware functionality
  - Updated README with comprehensive documentation and examples for the new OpenAI middleware
  - Added a test implementation with a Next.js API route example
  - Reorganized package exports to support the new structure
* fix(tools): update the docs for conversational · Mahesh Sanikommmu · 2025-10-19 · 1 file changed, +23/-3
* fix: add memory code params and documentation in readme · Mahesh Sanikommmu · 2025-10-10 · 1 file changed, +44/-1
* feat: ai sdk language model withSupermemory (#446) · MaheshtheDev · 2025-10-10 · 1 file changed, +118/-0
* fix: model names · Dhravya Shah · 2025-10-03 · 1 file changed, +2/-2
* fix: docs · Dhravya Shah · 2025-10-03 · 1 file changed, +2/-2
* feat: Claude memory integration · Dhravya Shah · 2025-09-29 · 1 file changed, +69/-0
* update: Readme · Dhravya Shah · 2025-09-13 · 1 file changed, +5/-5
* feat: new tools package (#407) · CodeWithShreyans · 2025-09-02 · 1 file changed, +155/-0