Compare commits: `d4740f0d1f` … `d1cd3f4cc7` (3 commits)

| Author | SHA1 | Date |
|---|---|---|
| | `d1cd3f4cc7` | |
| | `5815be6b69` | |
| | `3b5f37c7ca` | |

Changed file: `README.md` (119)
@@ -79,6 +79,11 @@ curl http://localhost:8000/health

```
{"status":"ok","buckets":0,"decay_engine":"stopped"}
```
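That JSON can also be checked programmatically before wiring anything else up. A minimal sketch, assuming only the `/health` payload shape shown above (the helper name is ours, not part of Ombre Brain):

```python
import json

def is_healthy(payload: str) -> bool:
    """Return True if a /health JSON payload reports status ok."""
    try:
        data = json.loads(payload)
    except json.JSONDecodeError:
        return False
    return data.get("status") == "ok"

# Payload shape as shown above:
print(is_healthy('{"status":"ok","buckets":0,"decay_engine":"stopped"}'))  # True
```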
Open the frontend Dashboard in your browser: **http://localhost:8000/dashboard**

> If you're using the default port from `docker-compose.user.yml`, the address is `http://localhost:8000/dashboard`.

> If you changed the port mapping (e.g. `18001:8000`), it's `http://localhost:18001/dashboard`.

> **Seeing an error?** Check that Docker Desktop is running (there's an icon in the status bar).
### Step 6: Connect Claude
@@ -193,6 +198,8 @@ docker logs ombre-brain

If you see `Uvicorn running on http://0.0.0.0:8000`, the server started successfully.

Open the frontend Dashboard in your browser: **http://localhost:18001/dashboard** (`docker-compose.yml` maps port `18001:8000` by default)

---

**Connect Claude.ai (remote access)**
@@ -241,8 +248,8 @@ Ombre Brain gives it persistent memory — not cold key-value storage, but a sys

- **Obsidian-native**: Each memory bucket is a Markdown file with YAML frontmatter. Browse, edit, and search directly in Obsidian. `[[Wikilinks]]` are auto-injected.

- **API dehydration + cache**: Dehydration and auto-tagging are done via an LLM API (DeepSeek / Gemini etc.), with results cached locally in SQLite (`dehydration_cache.db`) so identical content never triggers a repeat API call. Embedding search degrades to fuzzy matching when unavailable.

- **Conversation history import**: Batch-import past conversations (Claude / ChatGPT / DeepSeek etc.) as memory buckets. Supports Claude JSON export, ChatGPT export, Markdown, and plain text. Chunked processing with resume support, via the Dashboard "Import" tab.
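The cache-first flow behind dehydration can be sketched as follows. This is an illustrative outline, not Ombre Brain's actual implementation: the table schema and the `call_llm` stub are assumptions; only the idea (hash the content, consult SQLite before hitting the API) comes from the feature description above.

```python
import hashlib
import sqlite3

def call_llm(text: str) -> str:
    """Stub standing in for the real DeepSeek/Gemini request."""
    return text[:64]

def dehydrate(text: str, db_path: str = "dehydration_cache.db") -> str:
    """Compress `text` via an LLM call, caching results in SQLite by content hash."""
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, result TEXT)")
    row = con.execute("SELECT result FROM cache WHERE key = ?", (key,)).fetchone()
    if row:  # identical content: serve from cache, no repeat API call
        con.close()
        return row[0]
    result = call_llm(text)
    con.execute("INSERT INTO cache (key, result) VALUES (?, ?)", (key, result))
    con.commit()
    con.close()
    return result
```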
@@ -553,6 +560,7 @@ docker compose -f docker-compose.user.yml up -d

Verify: `curl http://localhost:8000/health`

Dashboard: open `http://localhost:8000/dashboard` in your browser
### Render
@@ -565,13 +573,15 @@ docker compose -f docker-compose.user.yml up -d

`render.yaml` is included. After clicking the button:

1. (Optional) `OMBRE_API_KEY`: any OpenAI-compatible key; omit to fall back to local keyword extraction
2. (Optional) `OMBRE_BASE_URL`: any OpenAI-compatible endpoint, e.g. `https://api.deepseek.com/v1`, `http://123.1.1.1:7689/v1`, `http://your-ollama:11434/v1`
3. Persistent disk auto-mounts at `/opt/render/project/src/buckets`
4. Dashboard: `https://<your-service>.onrender.com/dashboard`
5. MCP URL after deploy: `https://<your-service>.onrender.com/mcp`
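For orientation, a Render blueprint covering the items above might look roughly like this. This is an illustrative sketch, not the repository's actual `render.yaml`: the service name and disk size are made up, and only the env var names and mount path come from the list above.

```yaml
services:
  - type: web
    name: ombre-brain           # hypothetical service name
    runtime: docker
    envVars:
      - key: OMBRE_API_KEY      # optional: any OpenAI-compatible key
        sync: false
      - key: OMBRE_BASE_URL     # optional: e.g. https://api.deepseek.com/v1
        sync: false
    disk:
      name: buckets
      mountPath: /opt/render/project/src/buckets
      sizeGB: 1                 # assumed size
```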
### Zeabur
@@ -611,6 +621,7 @@ docker compose -f docker-compose.user.yml up -d

5. **Verify**
   - Visit `https://<your-domain>.zeabur.app/health` — should return JSON
   - Dashboard: `https://<your-domain>.zeabur.app/dashboard`
   - MCP URL: `https://<your-domain>.zeabur.app/mcp`

**Troubleshooting:**
@@ -672,6 +683,106 @@ When connecting via tunnel, ensure:

If using Claude Code, `.claude/settings.json` configures a `SessionStart` hook that auto-calls `breath` on each new or resumed session, surfacing your highest-weight unresolved memories as context. Only active in remote HTTP mode. Set `OMBRE_HOOK_SKIP=1` to disable temporarily.
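As a rough illustration of what a `SessionStart` hook entry looks like in `.claude/settings.json` (the `command` shown is a made-up placeholder; the repo's actual file defines the real invocation):

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "your-breath-command-here"
          }
        ]
      }
    ]
  }
}
```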
## How to Update

Different update procedures apply depending on your deployment method.
### Docker Hub Pre-built Image

```bash
# Pull the latest image
docker pull p0luz/ombre-brain:latest

# Restart the container (memory data lives in the volume and won't be lost)
docker compose -f docker-compose.user.yml down
docker compose -f docker-compose.user.yml up -d
```

> Your memory data is mounted at `./buckets:/data` — pull + restart won't affect existing data.
### Source Code Deploy (Docker)

```bash
cd Ombre-Brain

# Pull the latest code
git pull origin main

# Rebuild and restart
docker compose down
docker compose build
docker compose up -d
```

> `docker compose build` rebuilds the image. Volume-mounted memory data is unaffected.
### Local Python (no Docker)

```bash
cd Ombre-Brain

# Pull the latest code
git pull origin main

# Update dependencies (in case new ones were added)
pip install -r requirements.txt

# Restart the service:
# stop the old process with Ctrl+C, then:
python server.py
```
### Render

Render is connected to your GitHub repository and **deploys automatically**:

1. If you forked the repo → sync upstream changes on GitHub (Sync fork) and Render redeploys automatically
2. Or manually: Render Dashboard → your service → **Manual Deploy** → **Deploy latest commit**

> Persistent disk data at `/opt/render/project/src/buckets` is preserved across deploys.
### Zeabur

Zeabur is also connected to your GitHub repository:

1. Sync your fork's latest code on GitHub → Zeabur automatically rebuilds and redeploys
2. Or manually: Zeabur Dashboard → your service → **Redeploy**

> Volume mounted at `/app/buckets` — data persists across redeploys.
### Self-hosted VPS

```bash
cd Ombre-Brain

# Pull the latest code
git pull origin main

# Option A: Docker deployment
docker compose down
docker compose build
docker compose up -d

# Option B: run Python directly
pip install -r requirements.txt
# then restart your process manager (systemd / supervisord / pm2 etc.)
sudo systemctl restart ombre-brain  # example
```
> **General notes:**
> - Updates never affect your memory data (stored in the volume or the buckets directory)
> - If `requirements.txt` changed, a Docker rebuild handles it automatically; non-Docker users need to run `pip install -r requirements.txt`
> - After updating, visit `/health` to verify the service is running
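One way for non-Docker users to tell whether `requirements.txt` changed across a `git pull` is to compare a content hash before and after the pull (a generic sketch, not something the project ships):

```python
import hashlib
from pathlib import Path

def digest(path: str) -> str:
    """SHA-256 hex digest of a file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Usage sketch:
#   before = digest("requirements.txt")
#   ... run: git pull origin main ...
#   if digest("requirements.txt") != before:
#       subprocess.run(["pip", "install", "-r", "requirements.txt"], check=True)
```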
## License

MIT