English | 中文文档
Put the real Codex and Claude Code CLI on Telegram.
Not an API wrapper — the actual CLI, with native sessions, local files, and real tool use.
Resume desktop sessions from Telegram, or run isolated multi-bot teams through Agent Bus.
Runs the native CLI harness directly — Codex or Claude per instance, hot-reloaded instructions, voice/file input, local session resume, Telegram groups/topics, Telegram-delivered scheduled tasks, multi-bot Agent Bus, structured timeline/audit logs, service doctor, and dashboard included.
No reimplemented API wrappers, no fake chat layer.
Dual Engine | Multi-Bot | Groups | Agent Bus | Crew | Files | Cron | Voice | Resume | Budget | Quick Start | Ops
RULE 1: Let your Claude Code or Codex CLI set this up for you. Clone the repo, open it in your terminal, and tell your AI agent: "read the README and configure a Telegram bot for me". It will handle the rest.
Recommended runtime: enable YOLO mode for hands-free Telegram instances you control: `telegram yolo on --instance <name>`. With YOLO off, the bridge can ask for approval in Telegram instead: Claude approvals are per tool request; Codex approvals are per turn because `codex exec` does not support mid-turn approval callbacks. Use `unsafe` only on a trusted machine and workspace.
- v4.6.2 — adds Telegram Board + Mini Bus coordination: `/board` stores durable Kanban tasks with richer cards, dependencies, WIP limits, review gates, run history, and one-task execution via Mini Bus topics or Agent Bus instances; `/mini` lets forum topics in the same group act as planner/writer/reviewer-style peers for fan-out, chain, verify, and crew workflows.
- v4.5.10 — adds Codex Fast Mode control with `/fast on|off|status`, forwarding `fast_mode` and `service_tier="fast"` to both Codex process and app-server runtimes while keeping Claude instances rejected cleanly. Fast Mode is experimental in unattended bridge use: if Codex starts returning engine-runtime failures, turn it off with `/fast off`; if the instance is already unhealthy, restart the instance once after the current turn is idle.
- v4.5.9 — hardens schema-backed tool delivery receipts: malformed `[tool:{...}]` JSON or rejected send tools no longer preserve misleading "already sent" model text; batch/long delivery now prefers fenced `tool-call` blocks, and generated `agent.md` upgrades clean up duplicate scheduler residue.
- v4.5.8 — documents `[tool:{...}]` as the only generated delivery tag format, keeps legacy `[send-file:]`/`[send-image:]` tags as compatibility-only, and clarifies the file-delivery trust boundary.
- v4.5.7 — unifies file delivery and Telegram scheduled tasks around the registered `[tool:{...}]` layer, adds safer `tool-call` fenced blocks, hardens stream/post-turn dedupe, and improves cron reliability with timezones, stale-run handling, file locks, job caps, and failure receipts.
- v4.5.3 — recovers a stale Telegram update watermark from audit history on service startup, preventing old completed tasks from replaying after restart.
- v4.5.2 — fixes Telegram update watermark ordering, so rapid follow-up messages cannot be skipped while an earlier turn is still finishing.
- v4.5.1+ — moves Telegram transport rules into each instance's `agent.md`, leaving only a short static Telegram reminder in the per-turn prompt. File delivery now uses the registered `[tool:...]` layer, with `cctb send` kept for CLI workflows.
- v4.5.0 — simplifies file delivery around explicit send receipts and removes the old manifest/contract/count-repair/wakeup delivery state.
- Earlier 4.x releases added the dual Codex/Claude process runtimes, Agent Bus, crew workflows, timeline/audit logs, service doctor, dashboard, and Delivery Protocol v2.
Upgrading existing generated instance instructions: refresh generated `agent.md` blocks after updating so old bots get the short Telegram Transport and Scheduled Tasks sections:
telegram instructions upgrade --all --dry-run
telegram instructions upgrade --all
telegram service restart --all

Use `--force` only for instances with a custom transport block you intentionally want to replace. Forced replacements create an `agent.md.bak.<timestamp>` backup next to the original file.
- Native CLI first. The bridge runs the real Codex and Claude Code CLIs, so local auth, project files, sessions, approvals, and engine-specific behavior remain the same as on your desktop.
- Resume desktop work from anywhere. Pick up an existing local Codex or Claude Code session from Telegram, send files or instructions while away, then continue the same project back on the desktop.
- Group topics become clean side conversations. A single bot can serve private chat plus allowed Telegram groups; forum topics get separate sessions and cron scopes, so throwaway tasks and scheduled work do not pollute the main conversation. Topic peers can also be composed into a Mini Bus for quick same-group fan-out or chain workflows, while `/board` keeps durable Kanban task state outside model memory.
- Multi-engine without separate playbooks. Each bot can choose Codex or Claude, process or stream runtime, while file delivery and scheduled tasks still go through the same schema-backed `[tool:{...}]` bridge protocol.
- Telegram features live in the bridge, not in model memory. File sending, cron persistence, receipts, access checks, and retries are handled by bridge code, so tasks keep working across model changes, restarts, and resumed sessions.
- Short prompts, stable instructions. Transport rules live in instance-level `agent.md`; per-turn prompts stay small and do not need request ids, temp directories, or side-channel secrets.
- Receipts over claims. File delivery and scheduled-task creation produce structured accepted/rejected receipts, so "done" only counts when the bridge actually delivered or scheduled something.
- Operable by default. Timeline logs, audit logs, doctor, dashboard, usage tracking, cron state, and generated-instruction upgrades make failures visible and recovery repeatable.
Each bot instance can run either OpenAI Codex or Claude Code as its backend. Switch engines per-instance with one command:
# Set an instance to use Claude Code
npm run dev -- telegram engine claude --instance review-bot
# Set another to use Codex
npm run dev -- telegram engine codex --instance helper-bot
# Check current engine
npm run dev -- telegram engine --instance review-bot

| Feature | Codex Engine | Claude Engine |
|---|---|---|
| CLI command | `codex exec --json` | `claude -p --output-format json` |
| Session resume | `codex exec resume --json <id>` | `claude -p -r <session-id>` |
| Project instructions | `agent.md` (prepended to prompt) | `agent.md` (via `--system-prompt`) + `CLAUDE.md` (auto-loaded from workspace) |
| Telegram approval when YOLO is off | Pre-approve the turn, then run that turn with `--full-auto` | Inline approval buttons for Claude permission prompts |
| YOLO mode | `--full-auto` / `--dangerously-bypass-approvals-and-sandbox` | `--permission-mode bypassPermissions` / `--dangerously-skip-permissions` |
| `/compact` | Not needed (each exec is stateless) | Compresses session context to reduce token usage |
| Working directory | `workspace/` under instance dir | `workspace/` under instance dir (with `CLAUDE.md`) |
When using the Claude engine, each instance gets a `workspace/` directory. Drop a `CLAUDE.md` in there for project-level instructions that Claude Code reads natively:
~/.cctb/review-bot/
├── agent.md ← "You are a strict code reviewer"
├── workspace/
│ └── CLAUDE.md ← "TypeScript project. Use ESLint. Never modify tests."
├── config.json ← { "engine": "claude", "approvalMode": "full-auto" }
└── .env
Two layers of instructions, no conflict:
- `agent.md` → Your bot personality (injected via `--system-prompt`)
- `CLAUDE.md` → Project rules (Claude auto-discovers from working directory)
Run as many bots as you need. Each instance is fully isolated — its own engine, token, personality, threads, access rules, inbox, and audit trail. By default, each instance is meant for one Telegram chat; multi-chat access is opt-in.
┌─────────────────────────────────────────────┐
│ cc-telegram-bridge │
└────────────┬──────────────┬─────────────────┘
│ │
┌──────────────┼──────────────┼──────────────┐
▼ ▼ ▼ ▼
┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐
│ "default" │ │ "work" │ │ "reviewer" │ │ "research" │
│ engine: │ │ engine: │ │ engine: │ │ engine: │
│ codex │ │ codex │ │ claude │ │ claude │
│ │ │ │ │ │ │ │
│ agent.md: │ │ agent.md: │ │ agent.md: │ │ agent.md: │
│ "General │ │ "Reply in │ │ "Strict │ │ "Deep │
│ helper" │ │ Chinese" │ │ reviewer" │ │ research" │
└────────────┘ └────────────┘ └────────────┘ └────────────┘
PID 4821 PID 5102 PID 5340 PID 5520
# Configure each instance
npm run dev -- telegram configure <token-A>
npm run dev -- telegram configure --instance work <token-B>
npm run dev -- telegram configure --instance reviewer <token-C>
# Set engines
npm run dev -- telegram engine claude --instance reviewer
# Set personalities
npm run dev -- telegram instructions set --instance reviewer ./reviewer-instructions.md
# Recommended: enable YOLO for Telegram/mobile use
npm run dev -- telegram yolo on --instance work
# Start them all
npm run dev -- telegram service start
npm run dev -- telegram service start --instance work
npm run dev -- telegram service start --instance reviewer

Each bot has its own `agent.md`. Hot-reloaded on every message — edit anytime, no restart needed.
npm run dev -- telegram instructions show --instance work
npm run dev -- telegram instructions set --instance work ./my-instructions.md
npm run dev -- telegram instructions path --instance work

Or edit directly:
# Windows
notepad %USERPROFILE%\.cctb\work\agent.md
# macOS
open -e ~/.cctb/work/agent.md

During each active Telegram turn, the bridge can deliver generated files through the registered Telegram tool layer. The canonical agent-facing form is an inline tool tag:
[tool:{"name":"send.file","payload":{"path":"/absolute/path/to/report.pdf"}}]
[tool:{"name":"send.image","payload":{"path":"/absolute/path/to/image.png"}}]
[tool:{"name":"send.batch","payload":{"message":"Done","images":["/absolute/path/to/image.png"],"files":["/absolute/path/to/report.pdf"]}}]
For larger or quote-heavy payloads, the same tool envelope can be emitted as a fenced block:
```tool-call
{"name":"send.file","payload":{"path":"/absolute/path/to/report.pdf"}}
```
For CLI workflows, the bridge also injects a stable cctb command into turn-scoped engine processes:
cctb send --image /absolute/path/to/image.png
cctb send --file /absolute/path/to/report.pdf
cctb send --message "Done" --file /absolute/path/to/report.pdf

Inside an active Telegram turn, `cctb send` uses the turn-scoped side-channel and preserves the current chat/session context. The same delivery path is also available through the repository CLI outside an active turn, where it falls back to the configured instance and active Telegram session:
telegram send --image /absolute/path/to/image.png
telegram send --file /absolute/path/to/report.pdf
telegram send --chat 123456789 --file /absolute/path/to/report.pdf
telegram send --instance bot2 --chat 123456789 --image /absolute/path/to/image.png

Current delivery rules:
- Agents should use `[tool:...]` delivery tags for existing files, images, PDFs, decks, and other binary outputs. This is the only delivery tag format generated instance instructions teach.
- `[tool:...]` examples are generated from the registered tool schema/examples; explicit fenced `tool-call` blocks execute through the same parser.
- `cctb send` remains available for turn-scoped CLI workflows and is internally routed through the same send tool layer.
- Use `telegram send` when you need the same explicit delivery command outside an active turn, or when the turn-scoped `cctb` helper is unavailable.
- Explicit send commands accept any readable absolute file path.
- Legacy `[send-file:/absolute/path]` / `[send-image:/absolute/path]` tags are accepted only for older sessions and copied historical output. Do not use them in new agent instructions, system prompts, or examples.
- Small text/code files can still use the `file:name.ext` fenced-block form.
- The helper is scoped to one Telegram turn. It will not work after the turn finishes.
- Legacy fallback tags still validate that files live under the instance workspace or the active `/resume` project before sending.
- Accepted and rejected file deliveries are recorded as turn-level receipts, so the bridge can decide completion from structured delivery evidence instead of text claims.
- If a file was already sent by stream delivery or the side-channel helper, the final `.telegram-out` sweep skips that same real path to avoid duplicate Telegram attachments.
- Request-scoped `.telegram-out/<requestId>/` directories are runtime buffers and are pruned after 24 hours.
- The bridge no longer keeps manifest, pending-contract, or count-based state to infer future delivery intent across ordinary chat turns.
- Text-only tasks such as image analysis, image descriptions, or inline reports are not treated as file-delivery failures.
This works for Codex, Claude, process, and stream runtimes because the canonical path only requires the agent to emit text. File delivery is explicit: generate the file, emit the tool tag or call the send command, and rely on the resulting receipt.
When upgrading from v4.5.0 or earlier, refresh generated instance instructions with:
telegram instructions upgrade --all --dry-run
telegram instructions upgrade --all

This safely replaces old generated Telegram Transport blocks and appends the block when missing. Custom transport sections are left untouched unless you rerun with `--force`. Forced replacements create an `agent.md.bak.<timestamp>` backup next to the original file.
Agents can schedule Telegram-delivered reminders and recurring tasks through the same tool layer used for file delivery:
[tool:{"name":"cron.add","payload":{"in":"10m","prompt":"check email"}}]
[tool:{"name":"cron.add","payload":{"at":"2026-05-01T09:00:00Z","prompt":"Monday standup"}}]
[tool:{"name":"cron.add","payload":{"cron":"0 9 * * 1","prompt":"weekly summary"}}]
Users can also manage tasks directly in Telegram:
/cron list
/cron add 0 9 * * 1 weekly summary
/cron rm <job-id>
/cron toggle <job-id>
/cron run <job-id>
Cron behavior is designed for Telegram delivery, not session-local reminders:
- Jobs are persisted in the instance state and survive bot restarts.
- `chatId`, `userId`, and `chatType` are injected by the bridge, not trusted from the agent payload.
- Relative reminders (`in`), absolute reminders (`at`), and recurring 5-field cron expressions (`cron`) are supported.
- Each job stores timezone information; by default it follows the server/instance environment where the bot runs.
- Missed one-shot reminders older than the grace window are marked missed instead of firing as a burst after long downtime.
- Recurring jobs track failures, keep capped run history, and can be disabled after repeated failures.
- Per-chat job caps prevent accidental recursive job creation from growing without bound.
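The missed-reminder rule above can be sketched as a single pure decision. This is an illustrative sketch only: the `Job` shape and the 5-minute `GRACE_MS` value are assumptions, not the bridge's actual schema or grace window.

```typescript
// Hypothetical sketch: decide what to do with a one-shot reminder when the
// scheduler wakes up. Not the bridge's real types.
type Job = { id: string; at: number; fired: boolean };

const GRACE_MS = 5 * 60_000; // assumed 5-minute grace window

function classifyOneShot(job: Job, now: number): "fire" | "missed" | "wait" {
  if (job.fired) return "wait";      // already handled, nothing to do
  if (job.at > now) return "wait";   // not due yet
  // Due in the past: fire only while still inside the grace window,
  // otherwise mark it missed instead of bursting after long downtime.
  return now - job.at <= GRACE_MS ? "fire" : "missed";
}
```

The point of the explicit `"missed"` state is that a bot restarted after hours of downtime reports stale reminders instead of replaying them all at once.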
For human operators, the CLI remains available for inspection and debugging, but generated `agent.md` instructions tell agents to use the `[tool:{...}]` layer so Claude/Codex process and stream runtimes behave consistently.
For hands-free Telegram use, `telegram yolo on` is recommended. It keeps Codex/Claude moving without asking on each turn. If you keep YOLO off, the bridge will use Telegram approval buttons where the engine supports a headless path: Claude can approve individual permission prompts; Codex app-server mode maps YOLO settings to the app-server sandbox mode. Keep `unsafe` for fully trusted local environments only.
Claude approval buttons use a short-lived localhost MCP bridge with a random URL token. This protects against blind local port scans, but the token is still visible to same-user local processes that can inspect process command lines. Treat YOLO-off approval as a single-user workstation convenience, not a multi-user isolation boundary.
npm run dev -- telegram yolo on --instance work # Safe auto-approve
npm run dev -- telegram yolo unsafe --instance work # Skip ALL checks
npm run dev -- telegram yolo off --instance work # Normal flow
npm run dev -- telegram yolo --instance work        # Check status

| Mode | Codex | Claude | Use case |
|---|---|---|---|
| `off` | Telegram pre-turn approval | Telegram tool approval | Default, safest |
| `on` | `--full-auto` | `--permission-mode bypassPermissions` | Mobile use |
| `unsafe` | `--dangerously-bypass-*` | `--dangerously-skip-permissions` | Trusted env only |
Track token consumption and cost per instance:
npm run dev -- telegram usage # Default instance
npm run dev -- telegram usage --instance work    # Named instance

Output:
Instance: work
Requests: 42
Input tokens: 185,230
Output tokens: 12,450
Cached tokens: 96,000
Estimated cost: $0.3521
Last updated: 2026-04-09T10:00:00Z
Claude reports exact USD cost. Codex reports tokens only (cost shows as "unknown").
While a turn runs, the bridge sends Telegram typing actions and records structured events in `timeline.log.jsonl` / `audit.log.jsonl`. Long tool calls are not live-edited into the chat; inspect them with:
npm run dev -- telegram timeline --instance work
npm run dev -- telegram dashboard --instance work
npm run dev -- telegram service status --instance work

`telegram verbosity` is kept as a compatibility config knob, but the current Codex/Claude process runtimes use typing actions plus timeline/audit events rather than live-editing partial model output into Telegram.
Set a per-instance spending cap. When total cost reaches the limit, new requests are blocked until the budget is raised or cleared.
npm run dev -- telegram budget show --instance work # Current spend vs limit
npm run dev -- telegram budget set 10 --instance work # Cap at $10
npm run dev -- telegram budget clear --instance work   # Remove cap

Budget is enforced in real time — the bot replies with a bilingual message when the limit is hit.
Send voice messages in Telegram — the bridge transcribes them locally before forwarding the text to the AI engine. No cloud ASR service required.
How it works:
- User sends a voice message in Telegram
- The bridge downloads the `.ogg` file
- Transcribes it via a local ASR service (HTTP first, CLI fallback)
- The transcript replaces the voice attachment as the user's text message
- The AI engine processes it as a normal text request
Setup with Qwen3-ASR (example):
# Clone and install the ASR model
git clone https://github.com/nicoboss/qwen3-asr-python
cd qwen3-asr-python
python -m venv venv
source venv/bin/activate
pip install -e .
# Download a model (0.6B is fast enough for voice messages)
huggingface-cli download Qwen/Qwen3-ASR-0.6B --local-dir models/Qwen3-ASR-0.6B

The bridge looks for the ASR service at two locations (in order):
| Method | Endpoint / Path | Latency | Notes |
|---|---|---|---|
| HTTP server | `POST http://127.0.0.1:8412/transcribe` | ~2-3s | Model stays in memory. Recommended. |
| CLI fallback | `~/projects/qwen3-asr/transcribe.py <file>` | ~30s | Loads model each time. No server needed. |
Start the HTTP server (recommended):
python ~/projects/qwen3-asr/server.py
# Qwen3-ASR server listening on http://127.0.0.1:8412

Optional ASR watchdog:
By default the bridge does not start arbitrary ASR processes. If you want it to repair a local ASR server after repeated HTTP failures, add an explicit command to the instance `.env`:
ASR_SERVICE_COMMAND='curl -fsS --max-time 2 -X POST http://127.0.0.1:8412/shutdown >/dev/null 2>&1 || true; sleep 2; cd "$HOME/projects/qwen3-asr" && exec "$HOME/projects/qwen3-asr/venv/bin/python3" "$HOME/projects/qwen3-asr/server.py" >> "$HOME/.cctb/asr-server.log" 2>&1'
ASR_RESTART_AFTER_FAILURES=2
ASR_RESTART_COOLDOWN_MS=60000

The watchdog only covers the warm HTTP ASR path. CLI fallback still exists for transcription, but it is not daemon-managed.
Custom ASR integration:
To use a different ASR engine, modify the `createDefaultTranscribeVoice()` function in `src/telegram/message-input.ts`. The function receives the local path to an `.ogg` audio file and should return the transcribed text as a string.
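A custom transcriber with the HTTP-first/CLI-fallback shape described above might look like the sketch below. Everything here is an assumption for illustration: the endpoint, the `{ path }` request body, the `{ text }` response field, and the `/path/to/transcribe.py` script are not the bridge's or Qwen3-ASR's documented contract.

```typescript
// Hypothetical custom transcriber: try the warm local HTTP ASR server first,
// fall back to a one-shot CLI script. Shapes and paths are assumptions.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);
const HTTP_URL = "http://127.0.0.1:8412/transcribe"; // assumed endpoint
const CLI_SCRIPT = "/path/to/transcribe.py";         // assumed fallback script

export function cliFallbackArgs(oggPath: string): string[] {
  // CLI fallback: a python script that prints the transcript to stdout (assumed).
  return [CLI_SCRIPT, oggPath];
}

export async function transcribeVoice(oggPath: string): Promise<string> {
  try {
    // Warm path: POST the file path to the local ASR server (assumed API).
    const res = await fetch(HTTP_URL, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ path: oggPath }),
    });
    if (res.ok) {
      const body = (await res.json()) as { text?: string };
      if (typeof body.text === "string") return body.text;
    }
  } catch {
    // Server down or unreachable: fall through to the slow CLI path.
  }
  const { stdout } = await run("python3", cliFallbackArgs(oggPath));
  return stdout.trim();
}
```

The design mirrors the table above: keep the model resident behind HTTP for ~2-3s turnaround, and only pay the cold-start cost of the CLI path when the server is down.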
Started a task locally with Claude Code? Continue it on Telegram — no copy-paste, no re-explaining context. Using Codex instead? Attach an existing thread by ID and keep going from Telegram.
/resume ← Bot scans your local sessions from the past hour
The bot lists recent sessions with project names and timestamps:
Recent local sessions:
1. [cc-telegram-bridge] 64c2081c… (5m ago)
2. [my-app] a3f8b21e… (32m ago)
Reply /resume <number> to continue that session.
Pick one:
/resume 1 ← Bot symlinks the session, switches workspace, binds session ID
Now every message you send goes through the original session — same context, same project directory, same conversation history. When you're done:
/detach ← Unbinds session, restores the pre-/resume conversation when one exists
How it works under the hood:
- Scans `CLAUDE_CONFIG_DIR/projects/` when set, otherwise `~/.claude/projects/`, for `.jsonl` files modified in the last hour
- Binds the session ID and overrides the workspace to point at your real project path
- Claude CLI resumes with `-r <sessionId>` in the original directory
- `/detach` returns to the pre-/resume conversation when one exists; otherwise it falls back to the default workspace without touching the original local session file
No pollution: bridge and instance instructions are passed per invocation and are not written back into local session files.
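The scan step can be sketched roughly as below. This is a simplification, not the bridge's implementation: it scans a single flat directory, while the real `projects/` layout nests per-project folders; only the directory resolution and the one-hour window come from the description above.

```typescript
// Illustrative sketch of the /resume session scan (flat-directory simplification).
import * as fs from "node:fs";
import * as path from "node:path";
import * as os from "node:os";

const HOUR_MS = 60 * 60 * 1000;

// Resolve the Claude projects directory: CLAUDE_CONFIG_DIR wins, else ~/.claude.
export function projectsDir(): string {
  return process.env.CLAUDE_CONFIG_DIR
    ? path.join(process.env.CLAUDE_CONFIG_DIR, "projects")
    : path.join(os.homedir(), ".claude", "projects");
}

// List .jsonl session files under dir modified within the last hour, newest first.
export function recentSessions(dir: string, now = Date.now()): string[] {
  if (!fs.existsSync(dir)) return [];
  return fs
    .readdirSync(dir)
    .filter((f) => f.endsWith(".jsonl"))
    .map((f) => path.join(dir, f))
    .filter((p) => now - fs.statSync(p).mtimeMs <= HOUR_MS)
    .sort((a, b) => fs.statSync(b).mtimeMs - fs.statSync(a).mtimeMs);
}
```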
Codex does not expose the same local session scan flow as Claude. If you already know the thread ID, attach it explicitly:
/resume thread thread_abc123
That binds the current Telegram chat to the existing Codex thread. From then on:
- new Telegram messages continue that thread
- `/status` shows the current thread ID
- `/detach` unbinds the thread and restores the pre-attach conversation when one exists
This is an attach flow, not a local session import: the thread stays server-side and the bridge only binds the known thread ID to the current chat.
Note: the default Codex app-server runtime validates `/resume thread <thread-id>` through the local Codex runtime. Thread IDs unknown to the local machine still fail closed instead of being guessed.
List, rename, or delete instances from the CLI. The service must be stopped before renaming or deleting.
npm run dev -- telegram instance list # Show all instances
npm run dev -- telegram instance rename old-name new-name # Rename
npm run dev -- telegram instance delete staging --yes   # Delete (requires --yes)

Back up an instance's entire state directory to a single `.cctb.gz` archive. Restore atomically with rollback on failure.
npm run dev -- telegram backup --instance work # Creates timestamped .cctb.gz
npm run dev -- telegram backup --instance work --out ./bak.cctb.gz
npm run dev -- telegram restore ./bak.cctb.gz --instance work # Restore (instance must not exist)
npm run dev -- telegram restore ./bak.cctb.gz --instance work --force   # Overwrite existing

The archive format is a pure-Node gzipped binary — no tar dependency, works on Windows/macOS/Linux identically.
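As a minimal sketch of why no tar dependency is needed: Node's built-in `zlib` can gzip any serialized state in a platform-independent way. The JSON map of relative path to contents below is an assumed stand-in, not the actual on-disk layout of a `.cctb.gz` archive.

```typescript
// Sketch of a tar-free backup format using only Node built-ins.
// Assumption: state is representable as { relativePath: contents }.
import { gzipSync, gunzipSync } from "node:zlib";

export function packState(files: Record<string, string>): Buffer {
  // Serialize the whole state directory as JSON, then gzip it: pure Node,
  // identical bytes and behavior on Windows/macOS/Linux.
  return gzipSync(Buffer.from(JSON.stringify(files), "utf8"));
}

export function unpackState(archive: Buffer): Record<string, string> {
  return JSON.parse(gunzipSync(archive).toString("utf8"));
}
```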
Enable bot-to-bot communication via local HTTP IPC. The bus now supports point delegation, fan-out, sequential chains, auto-review, and coordinator-led crew workflows. It handles routing, peer validation, loop prevention, and local auth.
Protocol v1 — every request and response is stamped with `protocolVersion`, declared capabilities, structured `errorCode`, and a `retryable` flag, so callers can tell transient failures (timeouts, unreachable peers) from terminal ones (disabled bus, peer not allowed). Legacy unversioned payloads are still accepted for rolling upgrades. Peer liveness is verified by probing `GET /api/health` and matching a cc-telegram-bridge fingerprint, so a reused local port cannot fake a live peer. Full spec: `docs/bus-protocol.md`.
Add bus to each instance's config.json:
{ "engine": "codex", "bus": { "peers": "*" } }

| Field | Description |
|---|---|
| `peers` | `"*"` = talk to all bus-enabled bots. `["a", "b"]` = specific bots only. Omit or `false` = isolated. |
| `maxDepth` | Max delegation hops (default 3). Prevents A→B→C→A loops. |
| `port` | Local HTTP port. `0` = auto-assign (default). |
| `secret` | Shared secret for Bearer token authentication (optional). |
| `parallel` | List of instances for `/fan` parallel queries (e.g. `["sec-bot", "perf-bot"]`). |
| `chain` | Ordered list of instances for `/chain` sequential handoff (e.g. `["reviewer", "writer"]`). |
| `verifier` | Instance name for `/verify` auto-verification (e.g. `"reviewer"`). |
| `crew` | Fixed coordinator workflow config for hub-and-spoke specialist orchestration. |
Both sides must allow each other — unilateral bus config is rejected.
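As a rough sketch, the mutual-allow rule amounts to checking the `peers` field in both directions; the types below are assumptions derived from the config table, not the bridge's real implementation.

```typescript
// Illustrative check for the mutual-allow rule: a call from A to B succeeds
// only if A lists B AND B lists A. Shapes are assumptions.
type Peers = "*" | string[] | false | undefined;

function allows(peers: Peers, other: string): boolean {
  if (peers === "*") return true; // open to all bus-enabled bots
  return Array.isArray(peers) && peers.includes(other);
}

export function canTalk(a: string, aPeers: Peers, b: string, bPeers: Peers): boolean {
  // Unilateral config is rejected: both directions must allow each other.
  return allows(aPeers, b) && allows(bPeers, a);
}
```

This is why a hub with `peers: "*"` still cannot reach a worker whose own `peers` list omits the hub.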
In any bot's Telegram chat:
/ask reviewer Please review this function for security issues
/fan Analyze this code for bugs, security issues, and performance
/chain Improve this answer step by step
/verify Write a function to sort an array
- `/ask <instance> <prompt>` — delegate to a specific bot, result inline
- `/fan <prompt>` — query current bot + all `parallel` bots simultaneously, combined results
- `/chain <prompt>` — run a configured sequential pipeline, each stage receiving the previous stage output explicitly
- `/verify <prompt>` — execute on current bot, then auto-send to `verifier` for review
`/chain` is the lightweight pipeline. `crew` is the heavier hub-and-spoke mode.
`/board` adds a small Hermes-inspired Kanban layer on top of Telegram. It is intentionally state-first: tasks, dependencies, assignees, blocked reasons, and completion summaries are stored in `board.json`, not only in the model conversation. This makes it useful for coordinating Mini Bus or Agent Bus work without relying on "remember what we were doing".
/board add Draft launch plan
/board desc B1 Write launch messaging and rollout tasks
/board accept B1 README updated
/board priority B1 high
/board labels B1 docs launch
/board check B1 add Update README
/board list
/board show B1
/board assign B1 writer
/board dep B2 B1
/board limits global 3
/board review B1 on reviewer
/board ready B2
/board run B2
/board start B2
/board fail B2 tests failed
/board runs B2
/board block B2 waiting on API docs
/board unblock B2
/board approve B1
/board reject B1 needs more tests
/board done B1 design accepted
- `/board add <task>` — create a durable task with a stable id like `B1`
- `/board desc <id> <description>` — set task card description
- `/board accept <id> <criterion>` — append an acceptance criterion
- `/board priority <id> <low|normal|high|urgent>` — set priority
- `/board labels <id> <labels...>` — replace task labels
- `/board check <id> add <item>` / `/board check <id> done <C1>` — manage checklist items
- `/board list [todo|ready|running|blocked|done]` — list board tasks
- `/board show <id>` — show one task with source chat/topic metadata
- `/board assign <id> <assignee>` — label the task with a Mini Bus peer, bot instance, or free-form owner
- `/board dep <id> <depends-on-id>` — declare that one task waits for another
- `/board limits [global|assignee|conversation] <n>` — set WIP limits; defaults are `global=3`, `assignee=1`, `conversation=1`
- `/board review <id> <on|off> [reviewer]` — require review before `done`
- `/board approve <id>` / `/board reject <id> <reason>` — resolve tasks waiting in review
- `/board ready <id>` — move a task to ready if dependencies are complete
- `/board run <id>` — execute a ready task through its assignee; Mini Bus peers in the current group are preferred, otherwise the assignee is treated as an Agent Bus instance
- `/board start <id>` — mark a task running and create a lightweight run record
- `/board fail <id> <reason>` — close the active run as failed and block the task with the reason
- `/board runs <id>` — show run attempt history for one task
- `/board block <id> <reason>` / `/board unblock <id>` — manage blocked work
- `/board done <id> [summary]` — complete a task; dependents whose dependencies are all done are promoted to `ready`
This is not an autonomous dispatcher yet. It gives the bridge durable planning state first: richer task cards, WIP limits, run history, dependency promotion, review gates, and explicit one-task execution with `/board run <id>`. Automatic dispatch should build on this primitive rather than bypassing the task model.
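The dependency-promotion rule behind `/board done` can be sketched in a few lines; the `Task` shape below is a hypothetical simplification, not the real `board.json` schema.

```typescript
// Sketch of dependency promotion: marking a task done promotes any "todo"
// task whose dependencies are now all done to "ready". Types are illustrative.
type Task = {
  id: string;
  status: "todo" | "ready" | "running" | "done";
  deps: string[];
};

export function completeTask(tasks: Task[], id: string): Task[] {
  const byId = new Map(tasks.map((t) => [t.id, t]));
  const finished = byId.get(id);
  if (finished) finished.status = "done";
  for (const t of tasks) {
    // Promote only tasks still waiting whose every dependency is done.
    const unblocked = t.deps.every((d) => byId.get(d)?.status === "done");
    if (t.status === "todo" && unblocked) t.status = "ready";
  }
  return tasks;
}
```

Note the transitive case: a dependent of a freshly promoted task stays in `todo`, because its dependency is only `ready`, not `done`.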
Inside an allowed Telegram group or forum, `/mini` lets one bot treat different topics as lightweight peers. Each peer keeps its own topic session, uses the same instance config and `agent.md`, and can be asked directly, queried in parallel, or chained sequentially. This is useful for temporary planning/review threads without creating new bot instances.
Use Mini Bus when you want separate working memory without separate bots:
- keep an `intake` topic for the coordinator and register `planner`, `writer`, `reviewer`, or `research` topics as peers
- run quick comparisons with `/mini fan`, where each peer answers the same prompt in parallel
- run staged work with `/mini chain`, where each topic receives the previous topic's output
- run a lightweight review loop with `/mini verify`
- run a fixed specialist workflow with `/mini crew research-report`
Prerequisites:
- the bot must be in an allowed Telegram group or forum topic
- if the group uses BotFather privacy mode, make the bot an admin so it can see ordinary group messages; otherwise mention/reply-to the bot or use commands
- register each topic from inside that topic with `/mini here <name>`
Typical setup:
/mini here planner
/mini here writer
/mini status
/mini ask planner Break this task into steps
/mini fan Compare these options
/mini chain Turn this rough idea into a final answer
/mini verifier reviewer
/mini verify Write the final answer
/mini role researcher research
/mini role analyst analyst
/mini role writer writer
/mini role reviewer reviewer
/mini crew research-report Analyze this market
After setup, use the coordinator topic to call the peers:
/mini ask planner Break this into tickets
/mini fan Find risks in this plan
/mini chain Turn this plan into final copy
/mini verify reviewer Is this ready to ship?
- `/mini here <name>` — register the current topic as a named peer for the current group
- `/mini order <names...>` — set the default `/mini chain` order
- `/mini parallel <names...>` — set the default `/mini fan` target list
- `/mini verifier <name|off>` — set the verifier used by `/mini verify`
- `/mini role <researcher|analyst|writer|reviewer> <name>` — bind a crew role to a named topic peer
- `/mini crew research-report <prompt>` — run the full coordinator-led `research-report` workflow using topic peers as specialists
- `/mini ask <name> <prompt>` — send one prompt to a named topic peer
- `/mini fan <prompt>` — run all registered peer topics except the current topic in parallel
- `/mini chain <prompt>` — run registered peer topics in registration order, passing each output to the next stage
- `/mini verify [name] <prompt>` — execute in the current topic, then ask the configured or named verifier topic to review it
- `/mini rm <name>` — remove a topic peer
The practical benefit is isolation with low overhead: every topic has its own session and cron scope, but all topics share the same bot token, workspace, engine settings, budget tracking, approvals, timeline, and audit logs. That makes Mini Bus good for short-lived multi-agent work such as planning, drafting, review, research, or temporary cron/job conversations.
Mini Bus is intentionally scoped to the current Telegram group. It does not open another bot token or another workspace; if multiple topics edit the same files concurrently, the same workspace-conflict rules apply as any concurrent local agents.
Mini crew is the topic-scoped version of Agent Bus crew: the coordinator runs in the current topic context, decomposes the task, sends research sub-questions to the researcher topic in parallel, then routes analysis, writing, review, and any revision loop through the configured role topics. It uses the same crew-runs/*.json, timeline, audit, budget, approval, and topic-session boundaries as the instance-level workflow.
Hub & Spoke — one commander, multiple workers:
┌──────────┐
│ main │
│ peers: * │
└──┬────┬──┘
│ │
┌───────┘ └───────┐
▼ ▼
┌──────────┐ ┌──────────┐
│ reviewer │ │ researcher│
│peers: │ │peers: │
│ ["main"] │ │ ["main"] │
└──────────┘ └──────────┘
Workers only talk to the hub. The hub dispatches and aggregates.
Pipeline — sequential handoff:
┌───────────┐     ┌───────────┐     ┌───────────┐
│  intake   │────▶│   coder   │────▶│  review   │
│ peers:    │     │ peers:    │     │ peers:    │
│ ["coder"] │     │ ["intake",│     │ ["coder"] │
└───────────┘     │  "review"]│     └───────────┘
                  └───────────┘
Each bot only knows its neighbors. Tasks flow left to right.
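The same diagram expressed as per-instance `bus.peers` config (a sketch; instance names mirror the diagram):

```
// intake
{ "bus": { "peers": ["coder"] } }

// coder
{ "bus": { "peers": ["intake", "review"] } }

// review
{ "bus": { "peers": ["coder"] } }
```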
Parallel — fan-out to multiple specialists:
       /fan "analyze this code"
                   │
     ┌─────────────┼─────────────┐
     ▼             ▼             ▼
┌──────────┐  ┌──────────┐  ┌──────────┐
│ sec-bot  │  │ perf-bot │  │ style-bot│
└──────────┘  └──────────┘  └──────────┘
     │             │             │
     └─────────────┼─────────────┘
                   ▼
           Combined result
{ "bus": { "peers": "*", "parallel": ["sec-bot", "perf-bot", "style-bot"] } }

Verification — execute then auto-review:
/verify "write a sort function"
        │
        ▼
  ┌──────────┐    result     ┌──────────┐
  │  coder   │ ────────────▶ │ reviewer │
  └──────────┘               └──────────┘
                                  │
                             verification
                                  │
                                  ▼
                        Both shown to user
{ "bus": { "peers": "*", "verifier": "reviewer" } }

For heavier multi-agent work, one instance can act as a dedicated coordinator while fixed specialist instances do focused work. This follows the article-style hub-and-spoke pattern:
- the user talks directly to the coordinator bot
- specialists never talk to each other directly
- all context is passed explicitly by the coordinator
- the coordinator keeps the run state, stage progress, and final assembly
The current built-in workflow is `research-report`:
coordinator -> researcher -> analyst -> writer -> reviewer
If the reviewer asks for changes, the coordinator can send the draft back to the writer for one or more revision rounds.
Example config on the coordinator instance:
{
"bus": {
"peers": ["researcher", "analyst", "writer", "reviewer"],
"crew": {
"enabled": true,
"workflow": "research-report",
"coordinator": "coordinator",
"roles": {
"researcher": "researcher",
"analyst": "analyst",
"writer": "writer",
"reviewer": "reviewer"
},
"maxResearchQuestions": 4,
"maxRevisionRounds": 2
}
}
}

Behavior notes:
- only the coordinator instance should have this `crew` block
- the five roles must all be distinct
- ordinary text messages sent to the coordinator bot will run the crew workflow automatically
- crew runs are persisted under `crew-runs/*.json`
- stage progress is also written to `timeline.log.jsonl`
Mesh — full interconnect:
// Every instance
{ "bus": { "peers": "*" } }

All bots can talk to all bots. Simplest config, best for small teams (3-5 bots).
TL;DR — You only need to do two things on your phone: get a bot token from BotFather and send the pairing code. Everything else happens on your computer via Claude Code or Codex CLI.
- Node.js >= 20
- OpenAI Codex CLI and/or Claude Code CLI installed and authenticated
- A Telegram account (phone)
- Open Telegram and search for @BotFather
- Send `/newbot`
- Follow the prompts — give your bot a name and username
- BotFather will reply with a bot token like `123456789:ABCdefGHIjklMNOpqrsTUVwxyz0123456789`
- Copy this token — you'll paste it in your terminal
Open your terminal with Claude Code or Codex, and tell it:
"Clone https://github.com/cloveric/cc-telegram-bridge and set up a Telegram bot with this token:
<paste your token>"
Or do it manually:
git clone https://github.com/cloveric/cc-telegram-bridge.git
cd cc-telegram-bridge
npm install
npm run build
# Configure with your bot token
npm run dev -- telegram configure <your-bot-token>
# Optional: switch to Claude engine (default is Codex)
npm run dev -- telegram engine claude
# Recommended: enable YOLO mode for hands-free Telegram operation
npm run dev -- telegram yolo on
# Start the service
npm run dev -- telegram service start

- Open Telegram and find your new bot (search its username)
- Send any message — the bot will reply with a 6-character pairing code like `38J63T`
- Go back to your terminal and run:

npm run dev -- telegram access pair 38J63T

Done! You can now chat with Codex or Claude from Telegram. Send text, voice messages, or files — the bot handles everything.
# Create a second bot with BotFather, then:
npm run dev -- telegram configure --instance work <second-token>
npm run dev -- telegram engine claude --instance work
npm run dev -- telegram yolo on --instance work
npm run dev -- telegram service start --instance work
# Pair the same way: send a message, get the code, run `telegram access pair <code> --instance work`

┌─────────────────────────────────────────────────────────────────────┐
│                         cc-telegram-bridge                          │
├─────────────┬──────────────┬──────────────────┬─────────────────────┤
│  Telegram   │   Runtime    │    AI Engine     │       State         │
│   Layer     │    Layer     │     Layer        │       Layer         │
├─────────────┼──────────────┼──────────────────┼─────────────────────┤
│ api.ts      │ bridge.ts    │ adapter.ts       │ access-store.ts     │
│ delivery.ts │ chat-queue.ts│ process-adapter  │ session-store.ts    │
│ update-     │ session-     │   .ts (Codex)    │ runtime-state.ts    │
│ normalizer  │ manager.ts   │ claude-adapter   │ instance-lock.ts    │
│   .ts       │              │   .ts (Claude)   │ json-store.ts       │
│ message-    │              │                  │ audit-log.ts        │
│ renderer.ts │              │ agent.md + config│ timeline-log.ts     │
│             │              │                  │ usage-store.ts      │
│             │              │                  │ crew-run-store.ts   │
└─────────────┴──────────────┴──────────────────┴─────────────────────┘
┌─────────────────────────────────────────────────────────────────────┐
│            Bus Layer (local HTTP, loopback, protocol v1)            │
├─────────────────────────────────────────────────────────────────────┤
│           bus-server.ts · bus-client.ts · bus-handler.ts            │
│      bus-protocol.ts (envelope, errors, zod) · bus-registry.ts      │
│      bus-config.ts · delegation-commands.ts · crew-workflow.ts      │
└─────────────────────────────────────────────────────────────────────┘
Data flow:
Telegram Update → Normalize → Access Check → Chat Queue (serialized)
→ Load config.json (engine) → Load agent.md → Session Lookup
→ Codex Exec or Claude -p (new or resume)
→ Typing action + timeline events → Final Render → Deliver → Audit
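The "Chat Queue (serialized)" step above is the classic per-key promise chain: turns for the same chat run one at a time, while different chats proceed in parallel. A minimal TypeScript sketch of the idea — illustrative only, not the bridge's actual `chat-queue.ts`:

```typescript
// Per-chat serializer sketch. Each chat id keeps a "tail" promise; a new
// turn is chained onto the tail, so same-chat turns never overlap.
class ChatQueue {
  private tails = new Map<string, Promise<void>>();

  run<T>(chatId: string, task: () => Promise<T>): Promise<T> {
    const tail = this.tails.get(chatId) ?? Promise.resolve();
    // Run the task after the previous turn settles, whether it succeeded or failed.
    const result = tail.then(task, task);
    // Store only a settled marker so one failed turn doesn't poison the chain.
    this.tails.set(chatId, result.then(() => undefined, () => undefined));
    return result;
  }
}
```

Usage would look like `queue.run(String(update.chatId), () => handleTurn(update))`; the queue guarantees ordering per chat without blocking unrelated chats.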
- Switch between Codex and Claude Code per instance. Mix and match — one bot on Codex, another on Claude, managed from one CLI.
- Each instance loads its own `agent.md` fresh on every message, so instruction changes hot-reload without a restart.
- Run multiple Telegram bots from one repo. Each instance has its own token, engine, workspace, access rules, session binding, audit trail, and service lifecycle.
- Local bot-to-bot calls enable delegation, fan-out, chains, verification, and coordinator-led crew workflows without mixing each bot's Telegram chat context.
- One command to auto-approve everything — works with both engines. Per-instance, hot-reloadable.
- Every instance has its own personality, workspace, sessions, access rules, inbox, audit trail, and workspace-keyed auto-memory. The engine config dir (`CLAUDE_CONFIG_DIR` / `CODEX_HOME`) is only forwarded when you explicitly export it.
- Telegram shows typing while a turn runs, and structured timeline/audit events record sessions, tool calls, file receipts, retries, and completion status for debugging.
- Long polling (~0ms latency), exponential backoff, 429 auto-retry, 409 conflict auto-shutdown, graceful SIGTERM/SIGINT, fault-tolerant batch processing.
- Per-instance token counts (input/output/cached) and USD cost.
- Set a per-instance cost cap. Requests are blocked when the limit is hit — with bilingual messages.
- Generated images, PDFs, decks, and reports are delivered through registered tool delivery receipts.
- One command to archive or restore an instance. Zero-dependency binary format, cross-platform, with atomic rollback.
- List, rename, and delete instances from the CLI. Running-instance guards prevent data corruption.
- Send voice messages — transcribed locally via pluggable ASR (e.g. Qwen3-ASR). HTTP server for fast inference, CLI fallback when offline.
- Every action recorded per-instance in append-only JSONL — filterable by type, chat, and outcome. Auto-rotated at 10MB.
- Multi-stage Dockerfile included. Build once, deploy anywhere.
- Local bot-to-bot calls speak a versioned protocol (v1), with zod-validated envelopes and errors.
| Command | Description |
|---|---|
| `telegram service start` | Acquire lock, load state, begin long-polling |
| `telegram service stop` | Graceful shutdown (SIGTERM/SIGINT) |
| `telegram service status` | Running state, PID, engine, bot identity, timeline summary, latest crew run |
| `telegram service restart` | Stop + start with clean consumer reset |
| `telegram service logs` | Tail stdout/stderr logs |
| `telegram service doctor` | Health check across all subsystems, including timeline, crew state, shared engine env, and stale launchd leftovers |
| `telegram engine [codex\|claude]` | Switch AI engine per instance |
| `telegram yolo [on\|off\|unsafe]` | Toggle auto-approval mode |
| `telegram usage` | Show token usage and estimated cost |
| `telegram verbosity [0\|1\|2]` | Store the legacy verbosity setting; current process runtimes use typing actions plus timeline/audit events |
| `telegram budget [show\|set\|clear]` | Per-instance cost cap (blocks requests when exceeded) |
| `telegram timeline` | Inspect structured lifecycle events with filters |
| `telegram instance [list\|rename\|delete]` | Manage instances from the CLI |
| `telegram backup [--instance <name>]` | Archive instance state to `.cctb.gz` |
| `telegram restore <archive>` | Restore instance from backup (with `--force` to overwrite) |
| `telegram logs rotate` | Manually trigger log rotation |
| `telegram dashboard` | Generate and open an HTML status dashboard with timeline and latest crew snapshot |
| `telegram help` | Show all available commands |
All commands accept --instance <name> to target a specific bot.
- `telegram service doctor --instance <name>`
- `telegram session list --instance <name>`
- `telegram session inspect --instance <name> <chat-id>`
- `telegram session reset --instance <name> <chat-id>`
- `telegram task list --instance <name>`
- `telegram task inspect --instance <name> <upload-id>`
- `telegram task clear --instance <name> <upload-id>`
Telegram users can also use:
- `/status`
- `/engine [claude|codex]` — switch engine for the current instance (the bridge resets stale bindings automatically)
- `/effort [low|medium|high|xhigh|max|off]` — set reasoning effort level (`max` is Claude-only; Codex uses `xhigh` instead)
- `/model [name|off]` — switch model
- `/fast [on|off|status]` — toggle Codex Fast Mode. Treat it as experimental in bridge instances; if Codex runtime failures appear, use `/fast off`, avoid repeated retries, then restart the instance once if the next simple turn still fails.
- `/btw <question>` — ask a side question without affecting the current session
- `/ask <instance> <prompt>` — delegate to a specific peer bot
- `/fan <prompt>` — query the current bot plus configured parallel bots
- `/chain <prompt>` — run the configured sequential bot chain
- `/verify <prompt>` — execute locally, then auto-review with the verifier bot
- `/resume` — Claude: scan local sessions; Codex: use `/resume thread <thread-id>` to attach an existing thread
- `/detach` — detach from a resumed Claude session or the current Codex thread; restores the pre-resume conversation when one exists
- `/stop` — immediately stop the current running task
- `/continue` — resume the latest waiting archive summary
- `/compact` (Claude only) — compresses context; Codex falls back to reset
- `/context` (Claude only) — show the current context fill level; use it to decide when to `/compact`
- `/ultrareview` (Claude Opus 4.7+ only) — dedicated code-review pass, typically paired with `/resume` into a local project
- `/reset`
- `/help`
For archive summaries, the intended continuation path is to reply to that summary or press its Continue Analysis button; a bare `/continue` only resumes the latest waiting archive.
Recovery behavior on unreadable state:
- `telegram service status` and `telegram service doctor` degrade to `unknown (...)` warnings instead of crashing when `session.json`, `file-workflow.json`, `timeline.log.jsonl`, or `crew-runs/` state is unreadable.
- `telegram session inspect` and `telegram task inspect` report unreadable state and stop instead of pretending the record is missing.
- `telegram session reset`, `telegram task clear`, and Telegram `/reset` only self-heal corrupted or schema-invalid state. Before writing a default empty file, the unreadable original is quarantined as a backup beside the state file.
- Telegram `/status` shows `unknown (...)` for session/task state when the backing JSON is unreadable.
Windows (PowerShell):
.\scripts\start-instance.ps1 [-Instance work]
.\scripts\status-instance.ps1 [-Instance work]
.\scripts\stop-instance.ps1 [-Instance work]

macOS / Linux (bash):
./scripts/start-instance.sh [work]
./scripts/status-instance.sh [work]
./scripts/stop-instance.sh [work]

Legacy cleanup after older autostart builds:
bash scripts/cleanup-legacy-launchd.sh --all

Claude auth smoke test:
npm run smoke:claude-auth

Shared engine env rule:
- `CLAUDE_CONFIG_DIR` and `CODEX_HOME` are only forwarded when you explicitly export them.
- If you change either one, restart the affected instance from that same shell.
- `telegram service doctor` now flags both shared-env mismatches and stale launchd plists.
Per-instance, two layers: pairing + allowlist.
Default behavior is intentionally conservative:
- One instance is locked to one Telegram chat by default
- A second chat will not be paired or allowlisted unless you explicitly enable multi-chat
- This keeps `/resume`, workspace overrides, local files, and session state from bleeding across chats by accident
npm run dev -- telegram access pair <code>
npm run dev -- telegram access policy allowlist
npm run dev -- telegram access allow <chat-id>
npm run dev -- telegram access revoke <chat-id>
npm run dev -- telegram access multi on
npm run dev -- telegram access multi off
npm run dev -- telegram status [--instance work]

Use `telegram access multi on --instance <name>` only when you really want one bot instance to serve multiple chats. New and legacy instances both default to off unless you explicitly change it.
Group usage has a second allow layer: the Telegram user must already be authorized, and the group chat must be explicitly allowed from inside that group:
/group status
/group allow
/group deny
/group on
/group off
/group all
/group at
By default, ordinary group messages are ignored unless they mention the bot username or reply to one of the bot's messages. Slash commands still work. Use /group all inside a group if you want that allowed group to behave like an always-listening shared chat; use /group at in the same group to return to the safer default. For /group all to hear ordinary messages, promote the bot to admin in that group so Telegram actually delivers ordinary group messages to it. BotFather privacy mode can also affect delivery, but group admin is the practical setup path. Unauthorized group messages are silent and only audited, so strangers cannot make the bot spam a group.
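A typical enable flow inside a group, using the commands above (run by an already-authorized user; the comments are annotations, not part of the commands):

```
/group status   # check whether this group is allowed
/group allow    # allow the bot to work in this group
/group all      # optional: always-listening mode (promote the bot to admin first)
/group at       # return to the safer mention/reply-only mode
```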
Forum topics are isolated conversations: each topic gets its own engine session and cron scope. Within the same topic, authorized users share that topic's session context; use a separate topic when you want a separate temporary conversation.
Per-instance append-only JSONL log with filterable queries:
npm run dev -- telegram audit [--instance work]
npm run dev -- telegram audit 50 # Last 50 entries
npm run dev -- telegram audit --type update.handle --outcome error # Filter by type/outcome
npm run dev -- telegram audit --chat 688567588       # Filter by chat

`audit.log.jsonl` records what the bridge did — `update.handle`, `bus.reply`, `budget.blocked` — one line per external action, rotated at 10MB.
Parallel to audit, the bridge emits a lifecycle stream (timeline.log.jsonl) describing the shape of each turn — turn.started, turn.completed, budget.threshold_reached, crew.stage.*, bus delegations, etc. Same JSONL shape, different axis:
npm run dev -- telegram timeline [--instance work]
npm run dev -- telegram timeline --type turn.completed --outcome error
npm run dev -- telegram timeline --chat 688567588 --limit 100

Think of it this way: audit answers "what action did we take", timeline answers "how did this turn go". `telegram service status` and `telegram dashboard` pull summaries from timeline.
# Windows: %USERPROFILE%\.cctb\<instance>\
# macOS/Linux: ~/.cctb/<instance>/
<instance>/
├── agent.md # Bot personality & instructions
├── config.json # Engine, YOLO mode, verbosity, bus
├── usage.json # Token usage and cost tracking
├── workspace/ # Per-bot working directory
│ └── CLAUDE.md # Claude Code project instructions (Claude only)
├── .env # Bot token
├── access.json # Pairing + allowlist data
├── session.json # Chat-to-thread bindings
├── file-workflow.json # Pending file-upload follow-ups
├── runtime-state.json # Watermarks, offsets
├── instance.lock.json # Process lock
├── audit.log.jsonl # Structured audit stream (rotates to .1, .2, ...)
├── timeline.log.jsonl # Lifecycle events (turn.started, budget.*, crew.stage.*)
├── crew-runs/ # Coordinator-led crew run state (coordinator only)
│ └── <run-id>.json
├── service.stdout.log # Service stdout
├── service.stderr.log # Service stderr
└── inbox/ # Downloaded attachments
npm run dev -- <command> # Development mode
npm test # Run tests
npm run test:watch # Watch mode
npm run build # Build for production
npm start              # Start production build

# Build
docker build -t cc-telegram-bridge .
# Run (configure first, then start)
docker run -v ~/.cctb:/root/.cctb cc-telegram-bridge telegram configure <token>
docker run -v ~/.cctb:/root/.cctb cc-telegram-bridge telegram service start

Mount `~/.cctb` to persist state across container restarts.
Bot does not reply
- Run `telegram service doctor --instance <name>` to diagnose
- Check `telegram service logs` for errors
- Verify the engine is installed: `codex --version` or `claude --version`
- If the instance uses Claude, run `npm run smoke:claude-auth`
- If `service doctor` reports `legacy-launchd`, clean it with `bash scripts/cleanup-legacy-launchd.sh --all`
Codex Fast Mode causes engine-runtime failures
Fast Mode is a Codex CLI feature, but in unattended bridge instances it can surface upstream Codex diagnostics such as plugin warm-cache or Cloudflare challenge failures. The bridge preserves a completed assistant response when Codex only reports non-blocking plugin diagnostics, but real Codex errors still fail the turn.
- Send `/fast off` in the affected bot.
- Try one simple message such as `hi`.
- If it still fails, restart that bot instance once after the current turn is idle.
- Avoid force-restarting the same bot while it is generating a reply; that can kill the active Codex child process and appear as `codex exited with code null`.
Claude works in Terminal but not in the bot
- Check shell auth first: `claude auth status`
- Run `npm run smoke:claude-auth`
- Run `telegram service doctor --instance <name>`
- If you recently changed `CLAUDE_CONFIG_DIR`, restart the instance from that same shell
- If `doctor` reports `legacy-launchd`, run `bash scripts/cleanup-legacy-launchd.sh --all`
More detail: docs/runtime-env-troubleshooting.md
Switching to Claude engine
- `telegram engine claude --instance <name>`
- Restart the service: `telegram service restart --instance <name>`
- Optionally add a `CLAUDE.md` in the workspace directory
Bot sends duplicate replies
A 409 Conflict means two processes are polling the same bot token. The service auto-detects this and shuts down. Run `telegram service status` to check, then `telegram service stop` and `telegram service start` for a clean restart.
agent.md changes not taking effect
No restart needed — `agent.md` is loaded fresh on every message. Verify the path with `telegram instructions path --instance <name>`.
This project is already usable, but it is still evolving quickly. If you run several instances on one machine, a local supervisor agent can be a practical extra safety layer. This is optional, not required.
Use it for:
- checking instance health
- reading `service status` / `service doctor` / `timeline` before you touch anything
- reporting what happened instead of silently changing config
Do not use it as a second product agent. Its job should be operations only: monitor, diagnose, restart, and report.
You can give a local supervisor agent a brief like this:
You are the local operations supervisor for cc-telegram-bridge on this machine.
Your job is to keep bot instances healthy and easy to diagnose.
Primary responsibilities:
1. Check instance health
2. Diagnose failures before taking action
3. Restart only the affected instance when needed
4. Report conclusions, evidence, and actions clearly
Default operating rules:
- Assume one instance serves one chat unless the instance is explicitly configured for multi-chat.
- Do not change engine, model, yolo/approval mode, pairing, access, or multi-chat unless the user explicitly asks.
- Do not clear tasks unless the user explicitly asks, or the task is confirmed stale and the user already approved cleanup.
- Do not edit project code or README unless the user explicitly asks.
- Prefer the smallest recovery action. Do not restart all instances unless necessary.
Default diagnostic order:
1. Check service status
2. Check service doctor
3. Check recent timeline/audit evidence
4. Check stdout/stderr logs only if needed
5. Decide whether the issue is:
- process not running
- engine/runtime failure
- Telegram delivery failure
- stale task/workflow residue
- auth/config problem
6. Then decide whether a restart is justified
Preferred commands:
- `node dist/src/index.js telegram service status --instance <name>`
- `node dist/src/index.js telegram service doctor --instance <name>`
- `node dist/src/index.js telegram timeline --instance <name>`
- `bash scripts/start-instance.sh <name>`
- `bash scripts/stop-instance.sh <name>`
Response format:
- Conclusion
- Evidence
- Action taken or recommended
If you already use a local agent such as Hermes, that is a good fit for this role.
Your agents. Your engines. Your rules.
