Co-authored-by: antonvishal <[email protected]>
### Relocated logo file and removed unnecessary configuration files.
### What changed?
- Moved `logo.svg` to `apps/web/public/logo-fullmark.svg`
- Updated the logo path in `README.md` to reflect the new location
- Removed empty `.npmrc` file
- Removed `apps/web/public/_headers` file that contained Next.js static caching configuration
### Improved mobile responsiveness across chat interface and memory list with better loading states.
### What changed?
- Added responsive padding in chat page for mobile devices
- Enhanced header layout for chat titles with proper truncation and responsive text sizes
- Replaced the simple loading spinner in memory list with skeleton loading cards
- Improved message container width constraints on mobile devices
### Added streaming support to the Supermemory middleware and improved memory handling in the AI SDK integration.
### What changed?
- Refactored the middleware architecture to support both streaming and non-streaming responses
- Extracted memory prompt functionality into a separate module (`memory-prompt.ts`)
- Added memory saving capability for streaming responses
- Improved the formatting of memory content with a "User Supermemories:" prefix
- Added utility function to filter out supermemories from content
- Created a new streaming example in the test app with a dedicated route and page
- Updated version from 1.3.0 to 1.3.1 in package.json
- Simplified installation instructions in `README.md`
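As a rough illustration of the "User Supermemories:" prefixing and the filtering utility mentioned above, here is a self-contained sketch. The helper names (`formatMemories`, `filterSupermemories`) and the exact block format are assumptions for illustration, not the package's actual API.

```typescript
// Hypothetical stand-ins for the prefixing/filtering behavior described in
// this changelog entry; the real module's names and signatures may differ.
const MEMORY_PREFIX = "User Supermemories:";

// Format retrieved memories into a single block that can be prepended
// to the system prompt.
function formatMemories(memories: string[]): string {
  return `${MEMORY_PREFIX}\n${memories.map((m) => `- ${m}`).join("\n")}`;
}

// Strip a previously injected memory block from content, e.g. before
// saving the conversation back as a new memory.
function filterSupermemories(content: string): string {
  const start = content.indexOf(MEMORY_PREFIX);
  if (start === -1) return content;
  // Drop everything from the prefix to the next blank line (or the end).
  const end = content.indexOf("\n\n", start);
  return (
    content.slice(0, start) + (end === -1 ? "" : content.slice(end + 2))
  ).trim();
}
```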
Co-authored-by: Mahesh Sanikommu <[email protected]>
### TL;DR
Added OpenAI SDK middleware support for SuperMemory integration, allowing direct memory injection without AI SDK dependency.
### What changed?
- Added `withSupermemory` middleware for OpenAI SDK that automatically injects relevant memories into chat completions
- Implemented memory search and injection functionality for OpenAI clients
- Restructured the OpenAI module to separate tools and middleware functionality
- Updated README with comprehensive documentation and examples for the new OpenAI middleware
- Added test implementation with a Next.js API route example
- Reorganized package exports to support the new structure
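Conceptually, the middleware searches memories for the user and injects them into the chat messages before the completion call. The following self-contained sketch shows that flow; the function names (`searchMemories`, `injectMemories`) and the placeholder search result are illustrative assumptions, not the package's actual API.

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Stand-in for a memory search; the real middleware queries the
// Supermemory API instead of returning a canned result.
async function searchMemories(userId: string, query: string): Promise<string[]> {
  void userId;
  void query;
  return ["Prefers concise answers"]; // placeholder result
}

// Prepend relevant memories as a system message so the model sees them
// alongside the original conversation.
async function injectMemories(
  userId: string,
  messages: ChatMessage[],
): Promise<ChatMessage[]> {
  const lastUser = [...messages].reverse().find((m) => m.role === "user");
  const memories = await searchMemories(userId, lastUser?.content ?? "");
  if (memories.length === 0) return messages;
  const memoryMessage: ChatMessage = {
    role: "system",
    content: `User Supermemories:\n${memories.map((m) => `- ${m}`).join("\n")}`,
  };
  return [memoryMessage, ...messages];
}
```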
When the connector is syncing.

After connecting, the metadata is shown.
feat: update app component to have a better loading screen
- Changed the logo image source in App.tsx to use a local SVG file instead of a remote URL.
- Enhanced the loading indicator with an animated SVG.
- Adjusted styles for better alignment and spacing in the loading section.
- Added the dark-transparent.svg file to the public directory.
fix: prompt mutation in withSupermemory and types for props
docs: fixed naming convention of SDK usage examples
updated the new supermemory support email to `[email protected]`
feat(chat): increase maxSteps to allow multiple tool-calling rounds
feat(browser-extension): setting to enable/disable auto prompt captures
fix(tools): update the docs for conversational
### TL;DR
Added support for conversation grouping in Supermemory middleware through a new `conversationId` parameter.
### What changed?
- Added a new `conversationId` option to the `withSupermemory` function to group messages into a single document for contextual memory generation
- Updated the middleware to use this conversation ID when adding memories, using a `customId` format of `conversation:{conversationId}`
- Created a new `getConversationContent` function that extracts the full conversation content from the prompt parameters
- Enhanced memory storage to save entire conversations rather than just the last user message
- Updated documentation and examples to demonstrate the new parameter usage
### How to test?
1. Import the `withSupermemory` function from the package
2. Create a model with memory using the new `conversationId` parameter:
```typescript
const modelWithMemory = withSupermemory(openai("gpt-4"), "user-123", {
  conversationId: "conversation-456",
  mode: "full",
  addMemory: "always"
})
```
3. Use the model in a conversation and verify that messages are grouped by the conversation ID
4. Check that memories are being stored with the custom ID format `conversation:{conversationId}`
### Why make this change?
This enhancement improves the contextual understanding of the AI by allowing related messages to be grouped together as a single conversation document. By using a conversation ID, the system can maintain coherent memory across multiple interactions within the same conversation thread, providing better context retrieval and more relevant responses.
Feature: Import folder-level x bookmarks selection (#495)
[Screen Recording 2025-10-17 at 1.37.52 PM.mov](https://app.graphite.dev/user-attachments/video/15cd60ff-856e-4f29-8897-74ae3c869c87.mov)
Add markdown rendering support to memory content display
- Add markdown rendering support to memory content display
- Auto-detect and format JSON responses in code blocks
- Convert terminal commands to bash code blocks
- Improve code block styling with monospace font and compact spacing
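The JSON auto-detection above can be sketched as follows: if a memory's content parses as JSON, re-emit it pretty-printed inside a fenced code block so the markdown renderer displays it monospaced. The function name `formatMemoryContent` is a hypothetical stand-in, not the app's actual implementation.

```typescript
// Illustrative sketch of auto-detecting JSON and wrapping it in a
// fenced code block; the real component may differ.
function formatMemoryContent(content: string): string {
  const fence = "`".repeat(3); // markdown code fence delimiter
  const trimmed = content.trim();
  if (trimmed.startsWith("{") || trimmed.startsWith("[")) {
    try {
      const parsed = JSON.parse(trimmed);
      // Pretty-print valid JSON inside a ```json block.
      return `${fence}json\n${JSON.stringify(parsed, null, 2)}\n${fence}`;
    } catch {
      // Not valid JSON; fall through and render as plain markdown.
    }
  }
  return content;
}
```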
fix: mount graph dialog globally to fix chat page issue
The issue: whenever a user logged in with an email and a one-time code, the Chrome extension could not authenticate. The fix adds a callback URL with the query parameter `extension-auth-success=true`, which lets the Chrome extension detect and verify that authentication succeeded when a user logs in through the extension.
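A minimal sketch of this mechanism: the web app redirects to a callback URL carrying `extension-auth-success=true`, and the extension checks for that flag. The helper names and the example base URL are illustrative assumptions, not the actual implementation.

```typescript
// Build the post-login callback URL with the success flag the extension
// looks for. (Hypothetical helper; the real code may differ.)
function buildCallbackUrl(base: string): string {
  const url = new URL(base);
  url.searchParams.set("extension-auth-success", "true");
  return url.toString();
}

// The extension verifies auth completed by inspecting the query parameter.
function isExtensionAuthSuccess(href: string): boolean {
  return new URL(href).searchParams.get("extension-auth-success") === "true";
}
```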
feat: n8n + zapier integration page