Open-source, multi-backend Telegram chatbot.
AI-On-TelegramBot is a versatile open-source Telegram chatbot that supports various AI backends. This project is designed to provide an easy-to-use, customizable, and robust chatbot solution that can be seamlessly integrated into the Telegram platform. With support for multiple AI models, including Groq, Gemini, Kimi K2, OpenAI, and Meta AI, this chatbot offers flexibility and a wide range of natural language processing capabilities.
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details. This means you are free to use, modify, and distribute the software, provided you adhere to the terms of the GPLv3.
- Multi-Backend Support: Integrates with AI providers such as Groq, Gemini, and OpenAI.
- Customizable: Open-source code allows for easy modification and customization.
- GPLv3 License: Ensures the freedom to use, modify, and distribute the software.
- Easy Installation Scripts: Simplified scripts to set up dependencies and run the bot.
- User & Group Tracking: Automatically tracks prompt counts and premium status for users, as well as joined group IDs.
- Cross-Platform Compatibility: Designed to run on Linux, Windows Subsystem for Linux, Termux-Proot, and Termux-Chroot.
- Privacy-Focused: Options for local LLMs to ensure data privacy.
- Performance Optimized: Capable of handling high request rates with efficient backend models.
- Community-Driven: Encourages contributions and improvements from the open-source community.
- Comprehensive Documentation: Provides clear instructions for installation, configuration, and usage.
| Path | Command | Notes |
|---|---|---|
| API Cloud | `./install-dependencies-api.sh` | Ensure you have an API key from console.groq.com |
| Local (Offline) | `./install-local-llm.sh` | Requires approximately 8 GB RAM and a GPU (minimum 2 GB VRAM) or a 4-core CPU |
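Before running either install script, it can help to verify the Python toolchain is in place. The helper below is a sketch and is not part of the repository; the function name and messages are illustrative assumptions.

```python
import shutil
import sys

def check_prereqs(min_python=(3, 11)):
    """Return a list of human-readable problems; an empty list means ready to install."""
    problems = []
    if sys.version_info < min_python:
        problems.append(
            f"Python >= {min_python[0]}.{min_python[1]} required, "
            f"found {sys.version_info.major}.{sys.version_info.minor}"
        )
    # pip may be exposed as either `pip` or `pip3` depending on the distro
    if shutil.which("pip") is None and shutil.which("pip3") is None:
        problems.append("pip is not on PATH")
    return problems

problems = check_prereqs()
for problem in problems:
    print("!", problem)
```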
| File | Backend | Description |
|---|---|---|
| `bot-groq.py` | Groq / OpenAI | 185 t/s, 128k context, model selection (LLaMA, Kimi K2, GPT-4.1, Meta AI). |
| `bot-gemini.py` | Gemini-1.5-Flash or Gemini-1.5-Pro | 60 req/min free tier. |
| `bot-ollama.py` | Ollama local (Llama 3.2) | Model selection (LLaMA 3.2, LLaMA 3.1, TinyLlama, etc.); no internet required, 100% privacy. |
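Groq and Ollama (and, via a beta endpoint, Gemini) all expose OpenAI-compatible APIs, so backend selection can be reduced to picking a base URL and an API-key source. The sketch below is an assumption about how such a dispatch might look, not the repository's actual code; the env-var names are illustrative.

```python
# Map each backend to an OpenAI-compatible endpoint and the env var holding
# its API key. The Groq and Ollama URLs are vendor-documented; Gemini's
# OpenAI-compatible endpoint is a Google beta offering.
BACKENDS = {
    "groq":   {"base_url": "https://api.groq.com/openai/v1", "key_env": "GROQ_API_KEY"},
    "gemini": {"base_url": "https://generativelanguage.googleapis.com/v1beta/openai/",
               "key_env": "GEMINI_API_KEY"},
    "ollama": {"base_url": "http://localhost:11434/v1", "key_env": None},  # local, no key
}

def backend_config(name):
    """Return (base_url, key_env) for a known backend, raising on typos."""
    try:
        cfg = BACKENDS[name.lower()]
    except KeyError:
        raise ValueError(f"unknown backend {name!r}; choose from {sorted(BACKENDS)}")
    return cfg["base_url"], cfg["key_env"]
```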
```sh
./run-bot      # Run bot-openai.py
./run-mt       # Maintenance broadcast
./run-done-mt  # Broadcast "maintenance complete"
```

| File | Function |
|---|---|
| `users.json` | Prompt count and premium status auto-tracking. |
| `groups.json` | Automatically joined group IDs. |
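The auto-tracking described above amounts to a small JSON read-modify-write cycle. The sketch below shows one way it could work; the function name and field names (`prompts`, `premium`) are illustrative assumptions, not the repository's exact schema.

```python
import json
from pathlib import Path

def record_prompt(user_id, path="users.json"):
    """Increment a user's prompt count, creating the file on first run."""
    p = Path(path)
    users = json.loads(p.read_text()) if p.exists() else {}
    # New users start with zero prompts and no premium status
    entry = users.setdefault(str(user_id), {"prompts": 0, "premium": False})
    entry["prompts"] += 1
    p.write_text(json.dumps(users, indent=2))
    return entry["prompts"]
```

Calling `record_prompt(42)` twice returns 1 and then 2, with `users.json` created automatically on the first call.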
| OS | Status |
|---|---|
| Linux / Windows Subsystem Linux / Termux-Proot / Termux-Chroot | 🚀 Supported |
| Windows (native) | 🚫 Not supported (bash & ollama issues). |
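Since native Windows is unsupported, a startup guard can fail fast rather than break later on bash/ollama issues. This is a sketch of one possible check, not how the repository actually handles it; note that WSL, Termux-Proot, and Termux-Chroot all report `linux` to Python, so they pass.

```python
import sys

def platform_supported(platform=None):
    """True on Linux-like platforms (incl. WSL and Termux), False on native Windows."""
    platform = platform if platform is not None else sys.platform
    # Native Windows reports "win32"; WSL and Termux environments report "linux"
    return not platform.startswith("win")
```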
Tip

⚠️ Before Execution: Read the entire code of each file to avoid token/API errors.

- Ensure Python ≥ 3.11 and pip are installed.
- For API-based bots, an internet connection is required.
- For local LLM bots, ensure your system meets the specified hardware requirements.
- The `users.json` and `groups.json` files are created automatically upon the first run if they don't exist.
- It is recommended to back up your `users.json` and `groups.json` files regularly.
- For troubleshooting, check the bot's console output for error messages.
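A common source of the token/API errors mentioned above is a missing or malformed bot token. A hedged pre-flight check might look like the following; the env-var name `TELEGRAM_BOT_TOKEN` is an assumption, and the regex only loosely matches the usual `<digits>:<35 url-safe chars>` token shape rather than validating the token against Telegram.

```python
import os
import re

def validate_bot_token(token=None):
    """Loosely check that a token looks like '<digits>:<35 url-safe chars>'."""
    if token is None:
        token = os.environ.get("TELEGRAM_BOT_TOKEN", "")
    return bool(re.fullmatch(r"\d+:[A-Za-z0-9_-]{35}", token))
```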
Built by @Gustyx-Power
