refactor: doc/code consistency, OMBRE_PORT, webhook push, host-vault dashboard
Doc-code consistency (per BEHAVIOR_SPEC.md ground truth):
- INTERNALS.md, dehydrator.py, README.md, config.example.yaml: drop the
outdated "API 不可用自动降级到本地关键词提取" ("auto-degrades to local
keyword extraction when the API is unavailable") claim; align with the
"RuntimeError on API outage, no silent fallback" design decision
- INTERNALS.md & BEHAVIOR_SPEC.md narrative: initial activation_count corrected from 1 to 0 (B-04)
- server.py header: 5 MCP tools → 6 (add dream)
OMBRE_PORT (T5/T6):
- Replace hardcoded 8000 in FastMCP / uvicorn / keepalive URL
with int(os.environ.get("OMBRE_PORT", "8000"))
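The port resolution described above can be sketched as follows. This is a minimal illustration, not the actual server.py code; the helper name `resolve_port` is hypothetical, and the real change may inline the expression at each call site.

```python
import os


def resolve_port(env=None):
    """Resolve the listen port from OMBRE_PORT, defaulting to the old
    hardcoded 8000 when the variable is unset (hypothetical helper)."""
    env = os.environ if env is None else env
    return int(env.get("OMBRE_PORT", "8000"))


# The same value would then be shared by FastMCP, uvicorn, and the
# keepalive URL, e.g.:
#   uvicorn.run(app, host="0.0.0.0", port=resolve_port())
```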
OMBRE_HOOK_URL / OMBRE_HOOK_SKIP webhook (T7):
- Implement _fire_webhook() helper: fire-and-forget POST with 5s timeout,
failures logged at WARNING but never propagated
- Wired into breath / dream MCP tools and /breath-hook + /dream-hook routes
- Push payload: {event, timestamp, payload:{...}}; documented in ENV_VARS.md
Dashboard host-vault input (T12, per user request):
- New /api/host-vault GET/POST endpoints persist OMBRE_HOST_VAULT_DIR
to project-root .env (idempotent upsert, preserves other entries,
rejects quotes/newlines)
- Settings tab gains a "宿主机记忆桶目录 (Docker)" (host-machine memory vault
directory) panel with load/save buttons and a clear "需要 docker compose
down/up 生效" (takes effect after docker compose down/up) notice
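The .env upsert behind POST /api/host-vault might look like the following sketch. The function name `upsert_env_var` is illustrative; only the behavior (idempotent replace-or-append, other entries preserved, quotes/newlines rejected) comes from the commit.

```python
from pathlib import Path


def upsert_env_var(env_path: Path, key: str, value: str) -> None:
    """Idempotently set key=value in a .env file.

    Preserves unrelated entries and rejects values containing quotes or
    newlines, mirroring the /api/host-vault validation (hypothetical name).
    """
    if any(c in value for c in ('"', "'", "\n", "\r")):
        raise ValueError("value must not contain quotes or newlines")
    lines = env_path.read_text().splitlines() if env_path.exists() else []
    entry = f"{key}={value}"
    for i, line in enumerate(lines):
        if line.startswith(f"{key}="):
            lines[i] = entry  # replace the existing entry in place
            break
    else:
        lines.append(entry)  # append when the key is absent
    env_path.write_text("\n".join(lines) + "\n")
```

Calling it twice with the same key leaves exactly one `OMBRE_HOST_VAULT_DIR=` line, which is what makes the endpoint safe to hit repeatedly from the dashboard.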
@@ -28,9 +28,11 @@ log_level: "INFO"
 merge_threshold: 75
 
 # --- Dehydration API / 脱水压缩 API 配置 ---
-# Uses a cheap LLM for intelligent compression; auto-degrades to local
-# keyword extraction if API is unavailable
-# 用廉价 LLM 做智能压缩,API 不可用时自动降级到本地关键词提取
+# Uses a cheap LLM for intelligent compression. API is required; if the
+# configured key/endpoint is unavailable, hold/grow will raise an explicit
+# error instead of silently degrading (see BEHAVIOR_SPEC.md 三、降级行为表).
+# 用廉价 LLM 做智能压缩。API 为必需;如 key/endpoint 不可用,
+# hold/grow 会直接报错而非静默降级(详见 BEHAVIOR_SPEC.md 三、降级行为表)。
 dehydration:
 # Supports any OpenAI-compatible API: DeepSeek / Ollama / LM Studio / vLLM / Gemini etc.
 # 支持所有 OpenAI 兼容 API:DeepSeek / Ollama / LM Studio / vLLM / Gemini 等