This document helps you quickly set up the complete Argo development environment locally (including both backend and frontend).
Please ensure the following are installed:
| Tool | Recommended Version |
|---|---|
| Python | ≥ 3.11 |
| Poetry | ≥ 2.0.1 |
| Node.js | ≥ 18.x LTS |
| Yarn / NPM | ≥ Yarn 1.22.x or NPM 9.x |
```bash
git clone https://github.com/xark-argo/argo.git
cd argo
```

The Argo backend relies on environment variables to run. Please create a `.env` file in the `backend/` directory:

```bash
cp backend/.env.example backend/.env
```

`.env.example` provides commonly used configuration options. Key variables are explained below:
| Variable Name | Description |
|---|---|
| `ENABLE_MULTI_USER` | Whether to enable multi-user mode (each user has isolated sessions and bot configuration) |
| `OLLAMA_BASE_URL` | Local Ollama model service address (default port 11434) |
| `USE_ARGO_OLLAMA` | Whether to enable local Ollama (if disabled, requests remote models) |
| `USE_REMOTE_MODELS` | Whether to load models from a remote model service |
| `USE_ARGO_TRACKING` | Enable anonymous usage tracking and error reporting (enabled by default; no private data) |
| `TOKENIZERS_PARALLELISM` | Controls whether the tokenizer runs in parallel, to avoid model errors |
| `NO_PROXY` | Prevents proxy settings from affecting local Ollama requests (set to `localhost,127.0.0.1`) |
| `ARGO_STORAGE_PATH` | Argo local data storage path (optional; default is `~/.argo`) |
✅ Example configuration (from `.env.example`):

```bash
ENABLE_MULTI_USER=true
OLLAMA_BASE_URL=http://127.0.0.1:11434
USE_ARGO_OLLAMA=true
USE_REMOTE_MODELS=false
USE_ARGO_TRACKING=true
TOKENIZERS_PARALLELISM=false
NO_PROXY=http://127.0.0.1,localhost
ARGO_STORAGE_PATH=
```

💡 You can add more custom variables as needed (e.g., a private model address, remote services, etc.).
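Every value in a `.env` file arrives as a string, so boolean flags such as `ENABLE_MULTI_USER=true` must be coerced explicitly. A minimal sketch of how such a file could be parsed (an illustration only, not Argo's actual loader):

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")  # split on the first '=' only
        env[key.strip()] = value.strip()
    return env

def env_flag(env: dict[str, str], key: str, default: bool = False) -> bool:
    """Interpret common truthy spellings of a string flag."""
    return env.get(key, str(default)).lower() in ("1", "true", "yes", "on")

example = """\
ENABLE_MULTI_USER=true
OLLAMA_BASE_URL=http://127.0.0.1:11434
USE_REMOTE_MODELS=false
"""
env = parse_env(example)
print(env_flag(env, "ENABLE_MULTI_USER"))  # → True
print(env_flag(env, "USE_REMOTE_MODELS"))  # → False
print(env["OLLAMA_BASE_URL"])
```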
```bash
make install
```

Or equivalently:

```bash
cd backend
poetry install
```

If you need the Web UI, first initialize the submodules (only needed on the first run):
```bash
git submodule update --init --recursive
```

Then build the frontend:
```bash
make build-web
```

This will automatically copy the built frontend `dist/` files to the backend directory.
```bash
make run [host=0.0.0.0] [port=11636]
```

You can customize the startup address with the optional `host` and `port` parameters. If not set, the default is:

http://localhost:11636
You can access:

- http://localhost:11636/api/swagger/doc to view the API docs
- http://localhost:11636/ to open the chat UI (if the frontend is built)
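Once the service is started, you can poll it from a script before opening the UI. A minimal sketch (the URL and the `wait_for_server` helper are illustrative assumptions — adjust the address to match your `make run` host/port settings):

```python
import time
import urllib.error
import urllib.request

def wait_for_server(url: str, timeout: float = 30.0, interval: float = 1.0) -> bool:
    """Poll `url` until it answers any HTTP response, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval):
                return True
        except urllib.error.HTTPError:
            return True  # the server responded, even if with an error status
        except (urllib.error.URLError, OSError):
            time.sleep(interval)  # not up yet; retry
    return False

if __name__ == "__main__":
    # Default address from `make run`; change if you passed host=/port=.
    print(wait_for_server("http://localhost:11636/", timeout=10.0))
```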
| Command | Description |
|---|---|
| `make run` | Start the backend service |
| `make install` | Install Python dependencies (via Poetry) |
| `make build-web` | Build the frontend and copy it to the backend |
| `make test` | Run tests (pytest + coverage) |
| `make lint` | Full format and type check |
| `make migration` | Generate database migration files |
| Issue | Solution |
|---|---|
| Frontend 404 | Did you run `make build-web`? Was the build successful? |
| `.env` not working | Ensure the `.env` file is saved in the `backend/` directory |
| M-series Mac error: `(mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))` | `CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DCMAKE_APPLE_SILICON_PROCESSOR=arm64 -DGGML_METAL=on" .venv/bin/python -m pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python` |
- 💼 Deployment & packaging: `deploy/pyinstaller/README.md`
- 🧑‍💻 Contribution guide: `CONTRIBUTING.md`
For further assistance, feel free to reach out via GitHub Issues!