Compare commits

...

4 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| P0luz | e9d61b5d9d | fix: remove outdated description of local-fallback dehydration (README + dehydrator comments) *(CI checks cancelled)* | 2026-04-19 18:19:04 +08:00 |
| P0luz | d1cd3f4cc7 | docs: add update guides for each deployment method (Docker Hub / source / Render / Zeabur / VPS) | 2026-04-19 18:03:04 +08:00 |
| P0luz | 5815be6b69 | docs: add Dashboard URLs for cloud deployments; fix dehydration API description (local fallback removed) | 2026-04-19 18:00:31 +08:00 |
| P0luz | 3b5f37c7ca | docs: add frontend Dashboard URL and port notes to README | 2026-04-19 17:51:59 +08:00 |
2 changed files with 118 additions and 7 deletions

README.md (121 lines changed)

@@ -79,6 +79,11 @@ curl http://localhost:8000/health
{"status":"ok","buckets":0,"decay_engine":"stopped"}
```
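Beyond eyeballing the JSON, the health response can be checked programmatically; a minimal sketch using the sample payload above (fields other than `status` may vary between versions):

```python
import json

# Sample /health payload from above; a live check would fetch it over HTTP.
sample = '{"status":"ok","buckets":0,"decay_engine":"stopped"}'

def is_healthy(body: str) -> bool:
    # Treat the service as healthy only when status is exactly "ok".
    return json.loads(body).get("status") == "ok"
```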
Open the frontend Dashboard in your browser: **http://localhost:8000/dashboard**
> If you use the default port from `docker-compose.user.yml`, the address is `http://localhost:8000/dashboard`.
> If you changed the port mapping (e.g. `18001:8000`), it is `http://localhost:18001/dashboard`.
> **Seeing an error?** Check that Docker Desktop is running (look for its icon in the status bar).
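The mapping in the note above lives in the compose file's `ports` section; an illustrative fragment (assumed shape, check your actual `docker-compose.user.yml`):

```yaml
services:
  ombre-brain:
    ports:
      - "18001:8000"  # host port 18001 -> container port 8000
```

With this mapping, the Dashboard is at `http://localhost:18001/dashboard`.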
### Step 6: Connect Claude
@@ -154,7 +159,7 @@ OMBRE_API_KEY=你的API密钥
> 3. Set `dehydration.base_url` to `https://generativelanguage.googleapis.com/v1beta/openai` in `config.yaml`
> Also supports DeepSeek, Ollama, LM Studio, vLLM, or any OpenAI-compatible API.
It still works without an API key; dehydration falls back to a local mode with somewhat worse results. In that case write:
Without an API key, dehydration and auto-tagging are unavailable (they will raise errors), but memory read/write and retrieval still work normally. If you don't need dehydration yet, you can leave it empty:
```
OMBRE_API_KEY=
@@ -193,6 +198,8 @@ docker logs ombre-brain
If you see `Uvicorn running on http://0.0.0.0:8000`, the service started successfully.
Open the frontend Dashboard in your browser: **http://localhost:18001/dashboard** (`docker-compose.yml` maps `18001:8000` by default)
---
**Connect Claude.ai (remote access)**
@@ -241,8 +248,8 @@ Ombre Brain gives it persistent memory — not cold key-value storage, but a sys
- **Obsidian-native**: Each memory bucket is a Markdown file with YAML frontmatter. Browse, edit, and search directly in Obsidian. `[[Wikilinks]]` are auto-injected.
- **API degradation**: Dehydration and auto-tagging prefer a cheap LLM API (DeepSeek, Gemini, etc.). When the API is unavailable, they degrade to local keyword analysis, so they stay functional. Embedding search degrades to fuzzy matching when unavailable.
- **API dehydration + cache**: Dehydration and auto-tagging are done via an LLM API (DeepSeek, Gemini, etc.), with results cached locally in SQLite (`dehydration_cache.db`) to avoid redundant API calls for identical content. Embedding search degrades to fuzzy matching when unavailable.
- **Conversation history import**: Batch-import past conversations (with Claude, ChatGPT, DeepSeek, etc.) as memory buckets. Supports Claude JSON export, ChatGPT export, Markdown, and plain text. Chunked processing with resume support, via the Dashboard "Import" tab.
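The cache described above can be pictured as a table keyed by a content hash; a hypothetical sketch (the real `dehydration_cache.db` schema is not shown in this diff, so the table and column names here are assumptions):

```python
import hashlib
import sqlite3

class DehydrationCache:
    """Content-hash keyed cache: identical input never triggers a second API call."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, summary TEXT)"
        )

    @staticmethod
    def _key(content: str) -> str:
        return hashlib.sha256(content.encode("utf-8")).hexdigest()

    def get(self, content: str):
        # Returns the cached summary, or None on a cache miss.
        row = self.db.execute(
            "SELECT summary FROM cache WHERE key = ?", (self._key(content),)
        ).fetchone()
        return row[0] if row else None

    def put(self, content: str, summary: str) -> None:
        self.db.execute(
            "INSERT OR REPLACE INTO cache VALUES (?, ?)", (self._key(content), summary)
        )
        self.db.commit()
```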
@@ -553,6 +560,7 @@ docker compose -f docker-compose.user.yml up -d
```
Verify: `curl http://localhost:8000/health`
Dashboard: open `http://localhost:8000/dashboard` in your browser
### Render
@@ -565,13 +573,15 @@ docker compose -f docker-compose.user.yml up -d
`render.yaml` is included. After clicking the button:
1. (Optional) `OMBRE_API_KEY`: any OpenAI-compatible API key; omit to fall back to local keyword extraction
2. (Optional) `OMBRE_BASE_URL`: any OpenAI-compatible endpoint, e.g. `https://api.deepseek.com/v1`, `http://123.1.1.1:7689/v1`, `http://your-ollama:11434/v1`
3. Render auto-mounts a persistent disk at `/opt/render/project/src/buckets`
4. Dashboard: `https://<your-service>.onrender.com/dashboard`
5. MCP URL after deploy: `https://<your-service>.onrender.com/mcp`
### Zeabur
@@ -611,6 +621,7 @@ docker compose -f docker-compose.user.yml up -d
5. **Verify**
- Visit `https://<your-domain>.zeabur.app/health`; it should return JSON
- Dashboard: `https://<your-domain>.zeabur.app/dashboard`
- Final MCP URL: `https://<your-domain>.zeabur.app/mcp`
**Troubleshooting**
@@ -672,6 +683,106 @@ When connecting via tunnel, ensure:
If using Claude Code, `.claude/settings.json` configures a `SessionStart` hook that auto-calls `breath` on each new or resumed session, surfacing your highest-weight unresolved memories as context. Only active in remote HTTP mode. Set `OMBRE_HOOK_SKIP=1` to disable temporarily.
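A Claude Code `SessionStart` hook of the kind described is configured roughly as below; the `command` value here is a placeholder, not the repo's actual hook script, and the real `.claude/settings.json` may differ:

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          { "type": "command", "command": "<breath-hook-command>" }
        ]
      }
    ]
  }
}
```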
## How to Update
Update procedures differ by deployment method.
### Docker Hub Pre-built Image
```bash
# Pull the latest image
docker pull p0luz/ombre-brain:latest
# Restart the container (memory data lives in the volume and is not lost)
docker compose -f docker-compose.user.yml down
docker compose -f docker-compose.user.yml up -d
```
> Your memory data is mounted at `./buckets:/data`; pull + restart won't affect existing data.
### Source Code Deploy (Docker)
```bash
cd Ombre-Brain
# Pull the latest code
git pull origin main
# Rebuild and restart
docker compose down
docker compose build
docker compose up -d
```
> `docker compose build` rebuilds the image. Volume-mounted memory data is unaffected.
### Local Python (no Docker)
```bash
cd Ombre-Brain
# Pull the latest code
git pull origin main
# Update dependencies (in case new ones were added)
pip install -r requirements.txt
# Restart the service:
# Ctrl+C to stop the old process, then:
python server.py
```
### Render
Render is connected to your GitHub repository and **deploys automatically**:
1. If you forked the repo → sync upstream updates on GitHub (Sync fork); Render redeploys automatically
2. Or manually: Render Dashboard → your service → **Manual Deploy** → **Deploy latest commit**
> Persistent disk data at `/opt/render/project/src/buckets` is preserved across deploys.
### Zeabur
Zeabur is also connected to your GitHub repository:
1. Sync your fork's latest code on GitHub → Zeabur automatically triggers a rebuild and redeploy
2. Or manually: Zeabur Dashboard → your service → **Redeploy**
> The volume is mounted at `/app/buckets`; data persists across redeploys.
### Self-hosted VPS
```bash
cd Ombre-Brain
# Pull the latest code
git pull origin main
# Option A: Docker deployment
docker compose down
docker compose build
docker compose up -d
# Option B: run Python directly
pip install -r requirements.txt
# Restart via your process manager (systemd / supervisord / pm2, etc.)
sudo systemctl restart ombre-brain  # example
```
> **General notes:**
> - Updates never affect your memory data (stored in volumes or the buckets directory)
> - If `requirements.txt` changed, a Docker rebuild handles it automatically; non-Docker users must run `pip install -r requirements.txt` manually
> - After updating, visit `/health` to verify the service is running
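The post-update `/health` check can be automated; a hypothetical smoke-test sketch (the base URL is your deployment's, and the `opener` parameter exists only to make the logic testable):

```python
import json
import time
import urllib.request

def wait_healthy(base="http://localhost:8000", attempts=10, delay=2.0, opener=None):
    """Poll /health until status is "ok" or attempts run out."""
    fetch = opener or (lambda url: urllib.request.urlopen(url, timeout=5).read())
    for _ in range(attempts):
        try:
            body = json.loads(fetch(base + "/health"))
            if body.get("status") == "ok":
                return True
        except Exception:
            pass  # service not up yet; retry after a short pause
        time.sleep(delay)
    return False
```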
## License
MIT


@@ -235,8 +235,8 @@ class Dehydrator:
# ---------------------------------------------------------
# Dehydrate: compress raw content into concise summary
# Try API first, fall back to local
# API only (no local fallback)
# ---------------------------------------------------------
async def dehydrate(self, content: str, metadata: dict = None) -> str:
"""