Cero suscripciones. Cero API keys. Cero costos. Un agente de IA real en Telegram, con memoria persistente, voz y compatibilidad nativa con el ecosistema de skills de OpenClaw — todo corriendo en tu propia máquina con un ./setup.sh de 10 minutos.
Zero subscriptions. Zero API keys. Zero cost. A real AI agent on Telegram, with persistent memory, voice, and native compatibility with the OpenClaw skills ecosystem — all running on your own box from a 10-minute ./setup.sh.
ES: Las herramientas de IA te obligan a elegir entre tres caminos: pagar suscripción (Claude Desktop, ChatGPT Plus), poner tu propia API key (CrewAI, AutoGen), o construir desde cero (LangGraph, OpenFang). Faltaba la cuarta opción: cero costo, cero API keys, compatibilidad nativa con skills de OpenClaw, en tu celular. Eso hace Opencode-Assistant.
EN: AI tools today force a choice between three paths: pay subscriptions (Claude Desktop, ChatGPT Plus), bring your own API key (CrewAI, AutoGen), or build from scratch (LangGraph, OpenFang). What was missing: a fourth option — zero cost, zero API keys, native OpenClaw skill compatibility, on your phone. That's what Opencode-Assistant does.
Comparativa honesta. Las casillas en negrita son donde realmente ganamos; el resto es contexto. Los demás frameworks son excelentes, simplemente apuntan a otra cosa.
Honest comparison. Bold cells are where we actually win; the rest is context. The other frameworks are excellent — they just aim at different problems.
| Feature | Opencode-Assistant | OpenClaw runtime | OpenFang | CrewAI | AutoGen | LangGraph | Claude Desktop |
|---|---|---|---|---|---|---|---|
| 01 Cost out of the box | 💰 $0 — big-pickle included | API key needed | BYO model | BYO model | BYO model | BYO model | Subscription |
| 02 OpenClaw SKILL.md | ✅ Drop-in + auto-update from URL + sha256 verify | ✅ Native | ❌ | ❌ | ❌ | ❌ | ✅ Native |
| 03 Persistent memory | SQLite + MCP + optional vectors | File-based | SQLite + FTS5 | 4-layer | External | Checkpoints | Native long-term |
| 04 Vector / semantic search | ✅ Ollama or OpenAI | ❌ | ✅ Built-in | ❌ | ❌ | ❌ | Some clients |
| 05 Voice in / out | ✅ Whisper + Speechify | ❌ | ❌ | ❌ | ❌ | ❌ | Some clients |
| 06 Built-in cron / scheduling | ✅ 3 types (task / reminder / backup) | ❌ | Scheduled tasks | ❌ | ❌ | ❌ | ❌ |
| 07 Localization (UI) | 🌍 6 languages (en/es/de/fr/ru/zh) | 🇬🇧 English | 🇬🇧 English | 🇬🇧 English | 🇬🇧 English | 🇬🇧 English | Multi |
| 08 Setup time | ⚡ ./setup.sh (≈10 min) | App install | Build from source | pip install | pip install | pip install | App install |
| 09 Self-hosted | ✅ Docker, one command | ✅ | ✅ Docker | ✅ | ✅ | ✅ | ❌ Cloud |
| 10 Channel adapters | 🟡 Telegram only (mobile-focused) | ~13 (Slack, Discord, etc.) | 40 (multi-channel framework) | Plugin-based | None native | None native | Native client |
| 11 Production hardening | Single-user whitelist | Basic | 16 security layers + WASM sandbox | Docker | AES enc. | Checkpoints | Cloud-managed |
| 12 Language | TypeScript | TypeScript | Rust | Python | Python | Python | — (closed) |
| 13 License | MIT | MIT | MIT | MIT | Apache 2.0 | MIT | Closed |
ES: Otros frameworks nos ganan en multi-canal (Slack/Discord/WhatsApp/web) y en hardening empresarial (las 16 capas de seguridad y el sandbox WASM de OpenFang son impresionantes). Este proyecto cambia esa amplitud por costo cero, sin APIs, skills de OpenClaw nativas y setup en 10 minutos para un solo usuario en Telegram. Si necesitas agentes multi-canal a escala, mira a OpenFang o construye con CrewAI/LangGraph. Si quieres un asistente personal en tu bolsillo, esto es para ti.
EN: Other frameworks beat us on multi-channel (Slack/Discord/WhatsApp/web) and on enterprise-grade hardening (OpenFang's 16 security layers and WASM sandbox are genuinely impressive). This project trades that breadth for zero cost, no APIs, native OpenClaw skills, and a 10-minute setup for a single user on Telegram. If you need multi-channel agents at scale, look at OpenFang or build with CrewAI/LangGraph. If you want a personal assistant in your pocket, this is for you.
┌──────────────┐ ┌────────────────────────────┐
│ Telegram │ Bot API │ bot container │
│ (mobile) │ ◀────────────▶ │ grammY · MCP HTTP :4097 │
└──────────────┘ │ SQLite memory (data.db) │
└─────────────┬──────────────┘
│ HTTP /mcp (memory tools)
▼
┌────────────────────────────┐
│ opencode container │
│ opencode serve :4096 │
│ big-pickle (Claude Sonnet)│
└─────────────┬──────────────┘
│ optional /v1/embeddings
▼
┌──────────────────────────────────┐
│ Ollama (host) or OpenAI (cloud) │
│ embeddings for vector memory │
└──────────────────────────────────┘
El bot expone tu memoria SQLite a OpenCode vía un servidor MCP local — el asistente puede leer y escribir memoria en cualquier momento de la sesión, no solo recibir un snapshot al inicio. Con un proveedor de embeddings opcional, fact_search rankea por similitud semántica en vez de buscar por substring.
The bot exposes your SQLite memory to OpenCode via a local MCP server — the assistant can read and write memory at any point during a session, not just receive a snapshot at start. With an optional embedding provider, fact_search ranks by semantic similarity instead of substring matching.
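The two search modes can be sketched in TypeScript. This is an illustrative sketch only; the function and field names below are hypothetical, not the bot's actual code:

```typescript
// Illustrative sketch of fact_search's two modes: cosine-similarity ranking
// when a query embedding is available, and a plain case-insensitive substring
// match (the SQL LIKE equivalent) when no embedding provider is configured.
interface Fact {
  id: number;
  text: string;
  embedding?: number[]; // filled in by /memory_reembed when vectors are enabled
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

function factSearch(facts: Fact[], query: string, queryEmbedding?: number[]): Fact[] {
  if (queryEmbedding) {
    // Semantic path: rank every embedded fact by similarity to the query.
    return facts
      .filter(f => f.embedding)
      .map(f => ({ f, score: cosine(f.embedding!, queryEmbedding) }))
      .sort((x, y) => y.score - x.score)
      .map(x => x.f);
  }
  // Fallback path: substring match, which is why "pet" won't find "owns a cat".
  const q = query.toLowerCase();
  return facts.filter(f => f.text.toLowerCase().includes(q));
}
```

The semantic path is what lets a query phrased one way (or in another language) still surface a fact stored with different wording.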
git clone https://github.com/JohanYP/Opencode-Assistant.git
cd Opencode-Assistant
./setup.sh

10 pasos guiados → bot funcional. Detalles abajo.
Un bot de Telegram que convierte a OpenCode en un asistente personal de IA mobile-first. Te da:
- 🧠 Memoria persistente — datos, preferencias, contexto de proyectos, resúmenes de sesión, todo en SQLite y consultable en vivo vía herramientas MCP
- 🔍 Búsqueda semántica opcional — embebe tus datos con Ollama (local, gratis) u OpenAI/Groq/Together (cloud) y la búsqueda entiende paráfrasis e idiomas
- 🪄 Compatibilidad con SKILL.md de OpenClaw — pega cualquier `SKILL.md` del ecosistema openclaw-skills y funciona
- ⏰ Tareas programadas — tres tipos (correr una sesión OpenCode, mandar un recordatorio, hacer backup de memoria)
- 🎙️ Voz — STT con Whisper + TTS con Speechify/OpenAI/Edge, enviado como nota de voz de Telegram
- 🔒 Single-user, auto-alojado — whitelist estricta; tus datos, tu VPS
- 🌍 6 idiomas — en / es / de / fr / ru / zh
- No quieres pagar nada. El modelo `big-pickle` (Claude Sonnet) viene incluido sin API key. Edge TTS (voces neurales de Microsoft) funciona sin key ni cuota. Speechify TTS y Groq Whisper STT tienen tier gratis suficiente. Ollama para vectores corre en tu máquina, también gratis.
- Ya tienes skills de OpenClaw. Pegas el `SKILL.md` en `memory/skills/` o lo instalas con `/skill_install <url>` desde GitHub — funciona igual que en Claude Desktop o el runtime de OpenClaw.
- Quieres tu memoria, no la de OpenAI. SQLite local, MCP standard, exportable a markdown cuando quieras irte. Sin lock-in.
- Lo quieres en el celular. Otros frameworks son CLI-only o desktop apps; este es Telegram desde el primer commit.
- 1 GB RAM, 5 GB de disco
- Docker 20.10+ con Compose v2
- Cuenta de Telegram + token de @BotFather
- Tu user ID de @userinfobot
- 2 GB RAM, 7 GB de disco
- Mismos requisitos de Docker
- Ollama instalado en el host
- ~270 MB extra para el modelo `nomic-embed-text`
- Sin GPU — embeddings rápidos en CPU
# 1. Clonar
git clone https://github.com/JohanYP/Opencode-Assistant.git
cd Opencode-Assistant
# 2. Instalar Docker si no lo tienes
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER && newgrp docker
# 3. Wizard interactivo (10 pasos)
./setup.sh

El wizard genera .env, todos los archivos de memoria, y arranca Docker automáticamente. Abres Telegram, hablas con tu bot. Listo.
git clone https://github.com/JohanYP/Opencode-Assistant.git
cd Opencode-Assistant
cp .env.example .env
# Edita .env: TELEGRAM_BOT_TOKEN, TELEGRAM_ALLOWED_USER_ID
docker compose up -d
docker compose logs -f bot

Todos los comandos van en Git Bash (no CMD ni PowerShell). Si no lo tienes: https://git-scm.com/download/win.
# 1. Instalar Docker Desktop para Windows con backend WSL 2
# https://docs.docker.com/desktop/install/windows-install/
# 2. En Git Bash:
git clone https://github.com/JohanYP/Opencode-Assistant.git
cd Opencode-Assistant
./setup.sh
# 3. Si setup.sh falla en Windows, usa el flujo manual:
cp .env.example .env
# Edita .env en cualquier editor (Notepad, VS Code, ...)
docker compose up -d
docker compose logs -f bot

Nota Windows: `host.docker.internal` ya funciona nativamente con Docker Desktop, así que el setup opcional de Ollama de abajo funciona sin configuración extra.
La memoria vectorial hace que `fact_search` use similitud semántica en vez de búsqueda por substring. Actívala con los pasos de abajo.
# 1. Instalar Ollama en el host (no dentro del contenedor)
curl -fsSL https://ollama.com/install.sh | sh
# 2. Hacer que escuche en todas las interfaces (Docker no alcanza 127.0.0.1)
sudo mkdir -p /etc/systemd/system/ollama.service.d
echo '[Service]' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
echo 'Environment="OLLAMA_HOST=0.0.0.0:11434"' | sudo tee -a /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl daemon-reload && sudo systemctl restart ollama
# 3. Bloquear el puerto desde internet (seguridad)
sudo ufw deny 11434/tcp 2>/dev/null || true
# 4. Bajar el modelo de embeddings (~270 MB)
ollama pull nomic-embed-text
# 5. Configurar el bot
cat >> .env <<'EOF'
EMBEDDING_BASE_URL=http://host.docker.internal:11434/v1
EMBEDDING_MODEL=nomic-embed-text
EMBEDDING_API_KEY=
EOF
# 6. Reiniciar y backfill
docker compose restart bot
# En Telegram: /memory_reembed

# 1. Instalar Ollama Desktop
# https://ollama.com/download/windows
# 2. PowerShell admin (una vez, para exponerlo a Docker):
# setx OLLAMA_HOST "0.0.0.0:11434"
# Reinicia Ollama desde la bandeja del sistema.
# 3. Git Bash:
ollama pull nomic-embed-text
# 4. Editar .env:
echo "EMBEDDING_BASE_URL=http://host.docker.internal:11434/v1" >> .env
echo "EMBEDDING_MODEL=nomic-embed-text" >> .env
echo "EMBEDDING_API_KEY=" >> .env
# 5. Reiniciar y backfill
docker compose restart bot
# En Telegram: /memory_reembed

EMBEDDING_BASE_URL=https://api.openai.com/v1
EMBEDDING_MODEL=text-embedding-3-small
EMBEDDING_API_KEY=sk-...

Guía completa y troubleshooting: docs/VECTOR_MEMORY.md.
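Para intuir qué viaja por el cable: un boceto mínimo (nombres hipotéticos, no es el código real del bot) de cómo se arma una petición al endpoint `/v1/embeddings` compatible con OpenAI que exponen tanto Ollama como la API de OpenAI, usando los valores del `.env` de arriba:

```typescript
// Boceto ilustrativo: construir la petición de embeddings a partir de
// EMBEDDING_BASE_URL y EMBEDDING_MODEL. El mismo código sirve para Ollama
// local o para OpenAI en la nube; solo cambian la URL base y la API key.
function buildEmbeddingRequest(baseUrl: string, model: string, texts: string[]) {
  return {
    url: `${baseUrl.replace(/\/+$/, "")}/embeddings`,
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, input: texts }),
  };
}

const req = buildEmbeddingRequest(
  "http://host.docker.internal:11434/v1",
  "nomic-embed-text",
  ["el usuario prefiere respuestas cortas"],
);
// req.url === "http://host.docker.internal:11434/v1/embeddings"
```

La respuesta trae un vector por cada texto de `input`; esos vectores son los que `/memory_reembed` guarda junto a cada dato.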
opencode-assistant --update

Smart update: hace backup automático de tu memoria y configs, detecta qué containers necesitan rebuild, y solo reconstruye lo que cambió. Los demás servicios siguen corriendo durante todo el proceso. Más detalles: docs/CLI_USAGE.md.
Si no tienes el CLI instalado todavía (porque hiciste git clone antes de que existiera): sudo ln -sf $(pwd)/bin/opencode-assistant /usr/local/bin/opencode-assistant. O fallback manual: git pull && docker compose up -d --build.
La memoria persiste en ./memory/ (volumen montado), las actualizaciones nunca pierden estado.
| Comando | Descripción |
|---|---|
| `/status` | Estado del servidor, proyecto, sesión, modelo |
| `/new` · `/abort` · `/sessions` | Gestión de sesiones |
| `/projects` · `/worktree` · `/open` | Cambio de proyecto |
| `/tts` · `/rename` · `/help` | Misceláneos |
| `/task` · `/tasklist` | Tareas programadas |
| `/commands` · `/mcps` | Comandos OpenCode y servidores MCP |
| Comando | Descripción |
|---|---|
| `/memory <texto>` | Guardar un dato |
| `/memory_search <consulta>` | Buscar (vectores si están activos, LIKE si no) |
| `/memory_remove <id>` | Borrar un dato por id |
| `/memory_export` | Volcar todo a archivos markdown |
| `/memory_reembed` | Recalcular embeddings |
| `/inline_facts <on\|off\|N>` | Cuántos datos inyectar al iniciar sesión |
| `/personality [texto]` | Reglas de comportamiento ("dime siempre señor", etc.) |
| `/show_tools <on\|off>` | Mostrar/ocultar mensajes de herramientas |
| `/listskill` · `/skill <nombre>` | Explorar skills |
| `/skill_install <url>` · `/skill_update` · `/skill_remove` · `/skill_verify` | Gestión de skills |
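Como ilustración del comando `/inline_facts` de arriba, un parser mínimo del argumento `on|off|N`. Tanto el nombre de la función como el valor por defecto de 10 son supuestos, no el esquema real del bot:

```typescript
// Boceto hipotético: interpretar el argumento de /inline_facts <on|off|N>.
// "off" desactiva la inyección de datos al iniciar sesión, "on" usa un
// valor por defecto, y un número fija cuántos datos se inyectan.
function parseInlineFacts(arg: string, defaultN = 10): number {
  const a = arg.trim().toLowerCase();
  if (a === "off") return 0;
  if (a === "on") return defaultN;
  const n = Number.parseInt(a, 10);
  if (Number.isNaN(n) || n < 0) throw new Error(`argumento inválido: ${arg}`);
  return n;
}
```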
- docs/QUICK_DEMO.md — primeros 5 minutos
- docs/CLI_USAGE.md — manual del comando `opencode-assistant`
- docs/TTS_PROVIDERS.md — Edge / Speechify / OpenAI / Google y `/tts`
- docs/MCP_INTEGRATION.md — cómo MCP se conecta a OpenCode
- docs/VECTOR_MEMORY.md — guía completa de memoria vectorial
- docs/TROUBLESHOOTING.md — síntoma → solución
- PRODUCT.md y CONCEPT.md — visión y límites
MIT — tus datos, tu setup, tus reglas.
A Telegram bot that turns OpenCode into a mobile-first personal AI assistant. You get:
- 🧠 Persistent memory — facts, preferences, project context, session summaries, all in SQLite and live-queryable via MCP tools
- 🔍 Optional semantic search — embed your facts with Ollama (local, free) or OpenAI/Groq/Together (cloud) and search understands paraphrasing and languages
- 🪄 OpenClaw `SKILL.md` compatibility — drop any `SKILL.md` from the openclaw-skills ecosystem and it works
- ⏰ Scheduled tasks — three types (run an OpenCode session, send a reminder, back up memory)
- 🎙️ Voice — Whisper STT + Speechify/OpenAI/Edge TTS, sent as Telegram voice notes
- 🔒 Single-user, self-hosted — strict whitelist; your data, your VPS
- 🌍 6 languages — en / es / de / fr / ru / zh
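The three scheduled-task types listed above can be pictured with a tiny dispatch sketch. The type and field names are hypothetical, not the bot's actual schema:

```typescript
// Hypothetical model of the three built-in task types: run an OpenCode
// session, send a reminder, or back up memory. Each carries a cron
// expression and a payload whose meaning depends on the kind.
type TaskKind = "session" | "reminder" | "backup";

interface ScheduledTask {
  kind: TaskKind;
  cron: string;    // e.g. "0 8 * * *" for every day at 08:00
  payload: string; // prompt text, reminder text, or backup label
}

function describeTask(t: ScheduledTask): string {
  switch (t.kind) {
    case "session":  return `run OpenCode session: "${t.payload}" (${t.cron})`;
    case "reminder": return `send reminder: "${t.payload}" (${t.cron})`;
    case "backup":   return `back up memory: ${t.payload} (${t.cron})`;
  }
}
```

`/task` creates one of these and `/tasklist` shows them; see the command tables below.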
- You don't want to pay anything. The `big-pickle` model (Claude Sonnet) is included with no API key. Edge TTS (Microsoft neural voices) works with no key and no quota. Speechify TTS and Groq Whisper STT have generous free tiers. Ollama for vectors runs locally, also free.
- You already have OpenClaw skills. Drop a `SKILL.md` into `memory/skills/` or install one with `/skill_install <url>` from GitHub — it works the same as in Claude Desktop or the OpenClaw runtime.
- You want your memory, not OpenAI's. Local SQLite, MCP standard, exportable to markdown whenever you want to leave. No lock-in.
- You want it on your phone. Other frameworks are CLI-only or desktop apps; this is Telegram from commit one.
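The no-lock-in point is concrete: facts live in plain SQLite rows, so an export is just a markdown dump. A rough sketch of the idea (the output format here is hypothetical, not necessarily what `/memory_export` actually produces):

```typescript
// Hypothetical sketch, in the spirit of /memory_export: turn stored facts
// into a portable markdown file you can take to any other tool.
interface ExportedFact {
  id: number;
  text: string;
  createdAt: string;
}

function factsToMarkdown(facts: ExportedFact[]): string {
  const lines = ["# Memory export", ""];
  for (const f of facts) {
    lines.push(`- [${f.id}] ${f.text} _(saved ${f.createdAt})_`);
  }
  return lines.join("\n");
}
```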
- 1 GB RAM, 5 GB disk
- Docker 20.10+ with Compose v2
- Telegram account + bot token from @BotFather
- Your user ID from @userinfobot
- 2 GB RAM, 7 GB disk
- Same Docker requirements
- Ollama installed on the host
- ~270 MB extra for the `nomic-embed-text` embedding model
- No GPU — embeddings run fast on CPU
# 1. Clone
git clone https://github.com/JohanYP/Opencode-Assistant.git
cd Opencode-Assistant
# 2. Install Docker if you don't have it
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER && newgrp docker
# 3. Run the guided setup wizard (10 steps)
./setup.sh

The wizard generates .env, all memory files, and launches Docker automatically. Open Telegram, talk to your bot. Done.
git clone https://github.com/JohanYP/Opencode-Assistant.git
cd Opencode-Assistant
cp .env.example .env
# Edit .env: TELEGRAM_BOT_TOKEN, TELEGRAM_ALLOWED_USER_ID
docker compose up -d
docker compose logs -f bot

All commands use Git Bash (not CMD/PowerShell). Install Git for Windows first: https://git-scm.com/download/win.
# 1. Install Docker Desktop for Windows with WSL 2 backend
# https://docs.docker.com/desktop/install/windows-install/
# 2. In Git Bash:
git clone https://github.com/JohanYP/Opencode-Assistant.git
cd Opencode-Assistant
./setup.sh
# 3. If setup.sh fails on Windows, use the manual flow:
cp .env.example .env
# Edit .env in any editor (Notepad, VS Code, ...)
docker compose up -d
docker compose logs -f bot

Windows note: `host.docker.internal` already works natively on Docker Desktop, so the optional Ollama setup below works without extra config.
Vector memory makes `fact_search` rank by semantic similarity instead of substring matching. Enable it with the steps below.
# 1. Install Ollama on the host (not inside the container)
curl -fsSL https://ollama.com/install.sh | sh
# 2. Make Ollama listen on all interfaces (Docker can't reach 127.0.0.1)
sudo mkdir -p /etc/systemd/system/ollama.service.d
echo '[Service]' | sudo tee /etc/systemd/system/ollama.service.d/override.conf
echo 'Environment="OLLAMA_HOST=0.0.0.0:11434"' | sudo tee -a /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl daemon-reload && sudo systemctl restart ollama
# 3. Block the port from public internet (security)
sudo ufw deny 11434/tcp 2>/dev/null || true
# 4. Pull the embedding model (~270 MB)
ollama pull nomic-embed-text
# 5. Wire it into the bot
cat >> .env <<'EOF'
EMBEDDING_BASE_URL=http://host.docker.internal:11434/v1
EMBEDDING_MODEL=nomic-embed-text
EMBEDDING_API_KEY=
EOF
# 6. Restart and backfill
docker compose restart bot
# In Telegram: /memory_reembed

# 1. Install Ollama Desktop
# https://ollama.com/download/windows
# 2. PowerShell admin (once, to expose to Docker):
# setx OLLAMA_HOST "0.0.0.0:11434"
# Restart Ollama from the system tray.
# 3. Git Bash:
ollama pull nomic-embed-text
# 4. Edit .env:
echo "EMBEDDING_BASE_URL=http://host.docker.internal:11434/v1" >> .env
echo "EMBEDDING_MODEL=nomic-embed-text" >> .env
echo "EMBEDDING_API_KEY=" >> .env
# 5. Restart and backfill
docker compose restart bot
# In Telegram: /memory_reembed

EMBEDDING_BASE_URL=https://api.openai.com/v1
EMBEDDING_MODEL=text-embedding-3-small
EMBEDDING_API_KEY=sk-...

Full guide and troubleshooting: docs/VECTOR_MEMORY.md.
opencode-assistant --update

Smart update: snapshots your memory + configs first, fetches origin, detects which containers actually need a rebuild, and rebuilds only those. The other services keep running through the whole flow. Full reference: docs/CLI_USAGE.md.
If you cloned before the CLI existed and the symlink isn't installed yet: sudo ln -sf $(pwd)/bin/opencode-assistant /usr/local/bin/opencode-assistant. Or manual fallback: git pull && docker compose up -d --build.
Memory persists in ./memory/ (mounted volume), so updates never lose state.
| Command | Description |
|---|---|
| `/status` | Server, project, session, model info |
| `/new` · `/abort` · `/sessions` | Session management |
| `/projects` · `/worktree` · `/open` | Project switching |
| `/tts` · `/rename` · `/help` | Misc |
| `/task` · `/tasklist` | Scheduled tasks |
| `/commands` · `/mcps` | OpenCode commands and MCP servers |
| Command | Description |
|---|---|
| `/memory <text>` | Save a fact |
| `/memory_search <query>` | Search (vector if enabled, LIKE otherwise) |
| `/memory_remove <id>` | Delete a fact by id |
| `/memory_export` | Dump everything to markdown files |
| `/memory_reembed` | Recompute embeddings |
| `/inline_facts <on\|off\|N>` | Tune how many facts get inlined at session start |
| `/personality [text]` | User-defined behaviour rules ("always address me as 'sir'", etc.) |
| `/show_tools <on\|off>` | Toggle tool-call messages in chat |
| `/listskill` · `/skill <name>` | Browse skills |
| `/skill_install <url>` · `/skill_update` · `/skill_remove` · `/skill_verify` | Skill lifecycle |
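Since skills can auto-update from a URL, pinning them to a hash matters. A minimal sketch of what a sha256 check like `/skill_verify` could boil down to (the helper name is hypothetical, not the bot's actual code):

```typescript
import { createHash } from "node:crypto";

// Hypothetical helper: compare a downloaded SKILL.md's sha256 digest
// against the pinned one, the idea behind /skill_verify. Any change to
// the file content produces a completely different digest.
function verifySkill(content: string, expectedSha256: string): boolean {
  const actual = createHash("sha256").update(content, "utf8").digest("hex");
  return actual === expectedSha256.toLowerCase();
}
```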
- docs/QUICK_DEMO.md — first 5 minutes after install
- docs/CLI_USAGE.md — `opencode-assistant` command reference
- docs/TTS_PROVIDERS.md — Edge / Speechify / OpenAI / Google + `/tts`
- docs/MCP_INTEGRATION.md — how memory tools wire into OpenCode
- docs/VECTOR_MEMORY.md — full vector memory guide
- docs/TROUBLESHOOTING.md — symptom → fix
- PRODUCT.md and CONCEPT.md — vision and boundaries
MIT — your data, your setup, your rules.
ES: Si el proyecto te ahorró tiempo o dinero, una ⭐ en GitHub ayuda muchísimo a que más gente lo encuentre. También puedes:
- Compartir tu setup con #OpencodeAssistant en redes
- Abrir un issue con tu caso de uso
- Mandar PRs con skills nuevas, traducciones, fixes
EN: If this saved you time or money, a ⭐ on GitHub helps it reach more people. You can also:
- Share your setup with #OpencodeAssistant on socials
- Open an issue with your use case
- Send PRs with new skills, translations, or fixes
Construido sobre / Built on top of:
- OpenCode by SST — the AI coding agent under the hood
- OpenClaw skills ecosystem — the `SKILL.md` standard
- Ollama — local embedding inference
- Speechify — free TTS API
- Groq — free Whisper STT API
- grammY — Telegram bot framework
- better-sqlite3 — synchronous SQLite for Node
Keywords: OpenCode · OpenClaw · OpenFang · Claude Sonnet · MCP · Model Context Protocol · Ollama · vector memory · semantic search · embeddings · Telegram bot · AI assistant · self-hosted · personal AI · claude-skills · SKILL.md · big-pickle · cron jobs · Whisper STT · Speechify TTS · SQLite · openclaw-skills · sst/opencode · CrewAI · AutoGen · LangGraph · ZeroClaw · agent framework