# Supermemory Pipecat SDK

Build memory-enhanced conversational AI pipelines with [Supermemory](https://supermemory.ai) and [Pipecat](https://github.com/pipecat-ai/pipecat).

## Installation

```bash
pip install supermemory-pipecat
```

## Quick Start

```python
import os
from pipecat.pipeline.pipeline import Pipeline
from supermemory_pipecat import SupermemoryPipecatService

# Create the memory service
memory = SupermemoryPipecatService(
    api_key=os.getenv("SUPERMEMORY_API_KEY"),
    user_id="user-123",  # Required: used as container_tag
    session_id="conversation-456",  # Optional: groups memories by session
)

# Create the pipeline with memory. The transport, stt, user_context,
# and llm services are assumed to be configured elsewhere (see the
# full example below).
pipeline = Pipeline([
    transport.input(),
    stt,
    user_context,
    memory,  # Automatically retrieves and injects relevant memories
    llm,
    transport.output(),
])
```

## Configuration

### Parameters

| Parameter    | Type        | Required | Description                                                |
| ------------ | ----------- | -------- | ---------------------------------------------------------- |
| `user_id`    | str         | **Yes**  | User identifier - used as container_tag for memory scoping |
| `session_id` | str         | No       | Session/conversation ID for grouping memories              |
| `api_key`    | str         | No       | Supermemory API key (or set `SUPERMEMORY_API_KEY` env var) |
| `params`     | InputParams | No       | Advanced configuration                                     |
| `base_url`   | str         | No       | Custom API endpoint                                        |
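
As the table notes, `api_key` falls back to the `SUPERMEMORY_API_KEY` environment variable when it is not passed explicitly. A minimal sketch of that resolution logic (`resolve_api_key` is a hypothetical helper for illustration, not part of the SDK):

```python
import os

def resolve_api_key(explicit=None, env=os.environ):
    """Return the explicit key if given, else fall back to the env var."""
    key = explicit or env.get("SUPERMEMORY_API_KEY")
    if not key:
        raise ValueError(
            "Pass api_key or set the SUPERMEMORY_API_KEY environment variable"
        )
    return key
```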

### Advanced Configuration

```python
from supermemory_pipecat import SupermemoryPipecatService

memory = SupermemoryPipecatService(
    user_id="user-123",
    session_id="conv-456",
    params=SupermemoryPipecatService.InputParams(
        search_limit=10,           # Max memories to retrieve
        search_threshold=0.1,      # Similarity threshold
        mode="full",               # "profile", "query", or "full"
        system_prompt="Based on previous conversations, I recall:\n\n",
    ),
)
```

### Memory Modes

| Mode        | Static Profile | Dynamic Profile | Search Results |
| ----------- | -------------- | --------------- | -------------- |
| `"profile"` | Yes            | Yes             | No             |
| `"query"`   | No             | No              | Yes            |
| `"full"`    | Yes            | Yes             | Yes            |

## How It Works

1. **Intercepts context frames** - Listens for `LLMContextFrame` in the pipeline
2. **Tracks conversation** - Maintains clean conversation history (no injected memories)
3. **Retrieves memories** - Queries `/v4/profile` API with user's message
4. **Injects memories** - Formats and adds to LLM context as system message
5. **Stores messages** - Sends last user message to Supermemory (background, non-blocking)
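
Steps 3 and 4 amount to prepending a formatted system message to the LLM context while leaving the tracked history untouched. A simplified, synchronous sketch (hypothetical helper; the real service operates on `LLMContextFrame` objects asynchronously):

```python
def inject_memories(messages, memories,
                    system_prompt="Based on previous conversations, I recall:\n\n"):
    """Prepend retrieved memories to the context as a system message.

    Returns a new list; the original history is never mutated, so the
    clean conversation history (step 2) does not accumulate injections.
    """
    if not memories:
        return list(messages)
    memory_text = system_prompt + "\n".join(f"- {m}" for m in memories)
    return [{"role": "system", "content": memory_text}, *messages]
```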

### What Gets Stored

Only the last user message is sent to Supermemory:

```
User: What's the weather like today?
```

Stored as:

```json
{
  "content": "User: What's the weather like today?",
  "container_tags": ["user-123"],
  "custom_id": "conversation-456",
  "metadata": { "platform": "pipecat" }
}
```
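
The payload above can be sketched as a small builder; field names follow the example, but `build_storage_payload` itself is a hypothetical helper, not an SDK function:

```python
def build_storage_payload(user_message, user_id, session_id=None):
    """Assemble the document sent to Supermemory for the last user message."""
    payload = {
        "content": f"User: {user_message}",
        "container_tags": [user_id],
        "metadata": {"platform": "pipecat"},
    }
    if session_id is not None:
        payload["custom_id"] = session_id
    return payload
```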

## Full Example

```python
import os
from fastapi import FastAPI, WebSocket
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.task import PipelineTask
from pipecat.pipeline.runner import PipelineRunner
from pipecat.services.openai import (
    OpenAILLMService,
    OpenAIUserContextAggregator,
)
from pipecat.transports.network.fastapi_websocket import (
    FastAPIWebsocketTransport,
    FastAPIWebsocketParams,
)
from supermemory_pipecat import SupermemoryPipecatService

app = FastAPI()

@app.websocket("/chat")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()

    transport = FastAPIWebsocketTransport(
        websocket=websocket,
        params=FastAPIWebsocketParams(audio_out_enabled=True),
    )

    user_context = OpenAIUserContextAggregator()

    # Supermemory memory service
    memory = SupermemoryPipecatService(
        user_id="alice",
        session_id="session-123",
    )

    llm = OpenAILLMService(
        api_key=os.getenv("OPENAI_API_KEY"),
        model="gpt-4",
    )

    pipeline = Pipeline([
        transport.input(),
        user_context,
        memory,
        llm,
        transport.output(),
    ])

    runner = PipelineRunner()
    task = PipelineTask(pipeline)
    await runner.run(task)
```

## License

MIT