Persistent memory layer for AI coding agents.
One MCP server. Nine agents. Zero context loss.
Chinese Docs · Quick Start · Features · How It Works · Full Setup Guide
AI coding agents forget everything between sessions. Switch IDEs and context is gone. Memorix gives every agent a shared, persistent memory — decisions, gotchas, and architecture survive across sessions and tools.
Session 1 (Cursor): "Use JWT with refresh tokens, 15-min expiry" → stored as 🟤 decision
Session 2 (Claude Code): "Add login endpoint" → finds the decision → implements correctly
No re-explaining. No copy-pasting. No vendor lock-in.
```bash
npm install -g memorix
```

Add to your agent's MCP config:
Cursor · `.cursor/mcp.json`

```json
{ "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
```

Claude Code

```bash
claude mcp add memorix -- memorix serve
```

Windsurf · `~/.codeium/windsurf/mcp_config.json`

```json
{ "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
```

VS Code Copilot · `.vscode/mcp.json`

```json
{ "servers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
```

Codex · `~/.codex/config.toml`

```toml
[mcp_servers.memorix]
command = "memorix"
args = ["serve"]
```

Kiro · `.kiro/settings/mcp.json`

```json
{ "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
```

Antigravity · `~/.gemini/antigravity/mcp_config.json`

```json
{ "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"], "env": { "MEMORIX_PROJECT_ROOT": "/your/project/path" } } } }
```

OpenCode · `~/.config/opencode/config.json`

```json
{ "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
```

Gemini CLI · `.gemini/settings.json`

```json
{ "mcpServers": { "memorix": { "command": "memorix", "args": ["serve"] } } }
```

Restart your agent. Done. No API keys, no cloud, no dependencies.
Auto-update: Memorix silently checks for updates on startup (once per 24h) and self-updates in the background. No manual `npm update` needed.

Note: Do NOT use `npx` — it re-downloads the package on every launch and causes MCP timeouts. Use the global install.
| Category | Tools |
| --- | --- |
| Memory | memorix_store · memorix_search · memorix_detail · memorix_timeline · memorix_resolve · memorix_deduplicate · memorix_suggest_topic_key — 3-layer progressive disclosure with ~10x token savings |
| Sessions | memorix_session_start · memorix_session_end · memorix_session_context — auto-inject previous context on new sessions |
| Knowledge Graph | create_entities · create_relations · add_observations · delete_entities · delete_observations · delete_relations · search_nodes · open_nodes · read_graph — MCP Official Memory Server compatible |
| Workspace Sync | memorix_workspace_sync · memorix_rules_sync · memorix_skills — migrate MCP configs, rules, and skills across 9 agents |
| Maintenance | memorix_retention · memorix_consolidate · memorix_export · memorix_import — decay scoring, dedup, backup |
| Dashboard | memorix_dashboard — web UI with D3.js knowledge graph, observation browser, retention panel |
🎯 session-request · 🔴 gotcha · 🟡 problem-solution · 🔵 how-it-works · 🟢 what-changed · 🟣 discovery · 🟠 why-it-exists · 🟤 decision · ⚖️ trade-off
```bash
memorix hooks install
```

Captures decisions, errors, and gotchas automatically. Pattern detection in English and Chinese. Smart filtering (30s cooldown, skips trivial commands). Injects high-value memories at session start.
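The cooldown-plus-triviality filter described above can be sketched roughly as follows. This is an illustrative sketch only: the function name `shouldCapture`, the trivial-command list, and the exact gating order are assumptions, not Memorix's actual implementation; only the 30-second window comes from the description.

```typescript
// Illustrative sketch of a capture filter with a cooldown window.
// Names and the trivial-command set are assumptions for illustration.
const COOLDOWN_MS = 30_000; // 30s between captures, per the description above
const TRIVIAL = new Set(["ls", "cd", "pwd", "clear"]); // example trivial commands

let lastCaptureAt = -Infinity;

function shouldCapture(command: string, now: number): boolean {
  const head = command.trim().split(/\s+/)[0];
  if (TRIVIAL.has(head)) return false;                  // skip trivial commands
  if (now - lastCaptureAt < COOLDOWN_MS) return false;  // still in cooldown
  lastCaptureAt = now;
  return true;
}
```

The point of the cooldown is to coalesce bursts of related shell activity into at most one candidate memory per window, rather than storing every command.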
BM25 full-text search works out of the box (~50MB RAM). Semantic search is opt-in, with 3 providers:

```bash
# Set in your MCP config env:
MEMORIX_EMBEDDING=api            # ⭐ Recommended — zero local RAM, best quality
MEMORIX_EMBEDDING=fastembed      # Local ONNX (~300MB RAM)
MEMORIX_EMBEDDING=transformers   # Local JS/WASM (~500MB RAM)
MEMORIX_EMBEDDING=off            # Default — BM25 only, minimal resources
```

Works with any OpenAI-compatible API — OpenAI, Qwen, OpenRouter, Ollama, or any API proxy:

```bash
MEMORIX_EMBEDDING=api
MEMORIX_EMBEDDING_API_KEY=sk-xxx                       # or reuse OPENAI_API_KEY
MEMORIX_EMBEDDING_MODEL=text-embedding-3-small         # default
MEMORIX_EMBEDDING_BASE_URL=https://api.openai.com/v1   # optional
MEMORIX_EMBEDDING_DIMENSIONS=512                       # optional dimension shortening
```

Performance advantages over competitors:
- 10K LRU cache + disk persistence — repeat queries cost $0 and take 0ms
- Batch API calls — up to 2048 texts per request (competitors: 1-by-1)
- 4x concurrent processing — parallel batch chunks
- Text normalization — better cache hit rates via whitespace dedup
- Debounced disk writes — 5s coalesce window, not per-call I/O
- Zero external dependencies — no Chroma, no SQLite, just native `fetch`
- Smart key fallback — auto-reuses LLM API key if same provider
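As a rough illustration of the batching idea, texts can be chunked so that each chunk becomes one embeddings API call. The 2048 figure comes from the list above; the helper name and shape are assumptions, not Memorix's real code.

```typescript
// Split a list of texts into batches of at most 2048 items — the
// per-request limit mentioned above. Each batch would map to one
// embeddings API call. Illustrative sketch only.
function toBatches<T>(items: T[], batchSize = 2048): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}
```

With batching, embedding 5,000 texts takes 3 HTTP round trips instead of 5,000, which is where most of the latency win over one-by-one calls comes from.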
```bash
npm install -g fastembed                   # for MEMORIX_EMBEDDING=fastembed
npm install -g @huggingface/transformers   # for MEMORIX_EMBEDDING=transformers
```

Both run 100% locally. Zero API calls.
Enable intelligent memory deduplication and fact extraction with your own API key:
```bash
# Set in your MCP config env, or export before starting:
MEMORIX_LLM_API_KEY=sk-xxx         # OpenAI-compatible API key
MEMORIX_LLM_PROVIDER=openai        # openai | anthropic | openrouter
MEMORIX_LLM_MODEL=gpt-4o-mini      # model name
MEMORIX_LLM_BASE_URL=https://...   # custom endpoint (optional)
```

Or use existing env vars — Memorix auto-detects:

- `OPENAI_API_KEY` → OpenAI
- `ANTHROPIC_API_KEY` → Anthropic
- `OPENROUTER_API_KEY` → OpenRouter
Without LLM: Free heuristic deduplication (similarity-based)
With LLM: Smart merge, fact extraction, contradiction detection
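The free heuristic path might look something like this token-overlap (Jaccard) check. This is a hedged sketch: the source only says "similarity-based", so the measure, the function names, and the 0.8 threshold are all assumptions for illustration.

```typescript
// Jaccard similarity over lowercase word tokens; two memories above a
// threshold are flagged as near-duplicates. Measure and threshold are
// assumptions — the source does not specify Memorix's actual heuristic.
function jaccard(a: string, b: string): number {
  const ta = new Set(a.toLowerCase().split(/\s+/));
  const tb = new Set(b.toLowerCase().split(/\s+/));
  const inter = [...ta].filter((t) => tb.has(t)).length;
  const union = new Set([...ta, ...tb]).size;
  return union === 0 ? 0 : inter / union;
}

function isDuplicate(a: string, b: string, threshold = 0.8): boolean {
  return jaccard(a, b) >= threshold;
}
```

A heuristic like this catches restatements of the same fact cheaply; the LLM path goes further by merging partial overlaps and spotting contradictions that pure token overlap cannot see.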
```bash
memorix                 # Interactive menu (no args)
memorix configure       # LLM + Embedding provider setup (TUI)
memorix status          # Project info + stats
memorix dashboard       # Web UI at localhost:3210
memorix hooks install   # Auto-capture for IDEs
```

```
┌─────────┐ ┌───────────┐ ┌────────────┐ ┌───────┐ ┌──────────┐
│ Cursor  │ │  Claude   │ │  Windsurf  │ │ Codex │ │ +4 more  │
│         │ │   Code    │ │            │ │       │ │          │
└────┬────┘ └─────┬─────┘ └─────┬──────┘ └───┬───┘ └────┬─────┘
     │            │             │            │          │
     └────────────┴──────┬──────┴────────────┴──────────┘
                         │ MCP (stdio)
                  ┌──────┴──────┐
                  │   Memorix   │
                  │ MCP Server  │
                  └──────┬──────┘
                         │
         ┌───────────────┼───────────────┐
         │               │               │
  ┌──────┴──────┐ ┌──────┴──────┐ ┌──────┴──────┐
  │    Orama    │ │  Knowledge  │ │   Rules &   │
  │   Search    │ │    Graph    │ │  Workspace  │
  │ (BM25+Vec)  │ │ (Entities)  │ │    Sync     │
  └─────────────┘ └─────────────┘ └─────────────┘
                         │
                  ~/.memorix/data/
         (100% local, per-project isolation)
```
- Project isolation — auto-detected from `git remote`, scoped search by default
- Shared storage — all agents read/write the same `~/.memorix/data/`, cross-IDE by design
- Token efficient — 3-layer progressive disclosure: search → timeline → detail
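One plausible way to derive a stable per-project key from the git remote is to normalize the URL, so that SSH and HTTPS remotes of the same repository map to the same key. This is purely an assumption for illustration — the source does not document Memorix's actual derivation scheme.

```typescript
// Normalize a git remote URL into a stable project key so that
// git@host:org/repo and https://host/org/repo agree.
// Hypothetical sketch — not Memorix's documented behavior.
function projectKey(remoteUrl: string): string {
  return remoteUrl
    .replace(/^git@([^:]+):/, "$1/")  // git@host:org/repo -> host/org/repo
    .replace(/^https?:\/\//, "")      // strip http(s) scheme
    .replace(/\.git$/, "")            // strip trailing .git
    .toLowerCase();
}
```

Keying on the normalized remote rather than the local path means clones in different directories (or different IDEs) still share one memory scope.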
```bash
git clone https://github.com/AVIDS2/memorix.git
cd memorix && npm install
npm run dev     # watch mode
npm test        # 606 tests
npm run build   # production build
```

📚 Architecture · API Reference · Modules · Design Decisions
For AI systems: `llms.txt` · `llms-full.txt`
Built on ideas from mcp-memory-service, MemCP, claude-mem, and Mem0.
Built by AVIDS2 · Star ⭐ if it helps your workflow
