Dify is an open-source platform for building AI applications with visual workflows, RAG pipelines, and agent orchestration. It provides a web UI for creating and managing AI apps without writing code.
- URL: `https://dify.<domain>` (via Traefik)
- Port (IP-only mode): 80 (via dify-nginx container)
- Components: API server, background worker, web frontend, PostgreSQL, Redis
- Default admin: created on first visit (setup wizard)
Architecture:

```
Traefik (443) → dify-nginx (80)
├── /console/api, /api, /v1, /files → dify-api (5001)
└── / → dify-web (3000)

dify-api ←→ dify-db (PostgreSQL 16)
dify-api ←→ dify-redis (Redis 7)
dify-worker ←→ dify-db, dify-redis (async tasks)
```
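Once the stack is up, the routing above can be smoke-tested by probing each path through dify-nginx. A minimal sketch, assuming IP-only mode on port 80 (substitute `https://dify.<domain>` when going through Traefik); `/console/api/setup` is an unauthenticated console endpoint that reports setup status:

```bash
# Web frontend via dify-nginx
curl -sI http://<server-ip>/ | head -n 1

# API via dify-nginx (returns a small JSON setup-status payload)
curl -s http://<server-ip>/console/api/setup
```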
In `.env`:

```
DIFY_VERSION=latest
DIFY_SECRET_KEY=CHANGE-ME-dify-secret-key
DIFY_DB_PASSWORD=CHANGE-ME-dify-db-password
DIFY_REDIS_PASSWORD=CHANGE-ME-dify-redis-password
```

Generate secure values:

```bash
# Generate DIFY_SECRET_KEY (32+ chars)
openssl rand -base64 32
# Generate passwords
openssl rand -base64 16
```
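Before starting the stack, it's easy to catch leftover placeholders:

```bash
# Any output here means a placeholder still needs replacing
grep 'CHANGE-ME' .env
```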
Start the stack:

```bash
cd ~/ai-lab-server-setup
docker compose up -d

# Check all Dify containers are running
docker compose ps | grep dify
```

Open https://dify.<domain> (or http://<server-ip>:80 without Traefik).
The setup wizard creates the admin account on first visit.
Dify can use Ollama as a model provider for local LLM inference.
- Go to Settings → Model Providers → Ollama
- Add a new model:
  - Model Name: `llama3.2`
  - Base URL: `http://ollama-compose:11434` (Docker Compose) or `http://host.docker.internal:11434` (native Ollama on host)
- Click Save
If Ollama is installed natively (via `setup.sh`), use `host.docker.internal`, since Dify runs in Docker but Ollama runs on the host.
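A quick way to confirm the dify-api container can actually reach Ollama before configuring the provider. This sketch uses Python from the API image (so it works even if `curl` isn't installed there) and assumes the compose file maps `host.docker.internal` (on Linux this requires an `extra_hosts: host-gateway` entry); for Compose-managed Ollama, swap in `http://ollama-compose:11434`:

```bash
# /api/tags lists installed models; any JSON back means connectivity is fine
docker compose exec dify-api python -c \
  "import urllib.request; print(urllib.request.urlopen('http://host.docker.internal:11434/api/tags', timeout=5).read().decode()[:200])"
```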
For native Ollama, ensure it listens on all interfaces:

```bash
sudo systemctl edit ollama
# In the override file, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama
```
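A quick way to verify the change took effect (substitute your server's LAN address):

```bash
# Ollama should now bind 0.0.0.0:11434 instead of 127.0.0.1:11434
ss -tlnp | grep 11434

# And answer on the LAN address (returns installed models as JSON)
curl -s http://<server-ip>:11434/api/tags
```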
Dify supports Qdrant as a vector database for RAG (Knowledge Base).

- Go to Settings → Model Providers and add an Embedding Model (e.g., `nomic-embed-text` via Ollama)
- Create a Knowledge Base → choose Qdrant as vector store (a connectivity check follows below):
  - URL: `http://qdrant-compose:6333` (Docker Compose) or `http://host.docker.internal:6333` (native Qdrant on host)
  - API Key: value from `QDRANT_API_KEY` in `.env`
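Before wiring Qdrant into Dify, it can help to confirm it is reachable and the key is accepted. A minimal check, assuming port 6333 is published to the host (for the Compose-internal hostname, run the same command from a container on that network):

```bash
# GET /collections returns the collection list when the api-key header is valid
curl -s -H "api-key: $QDRANT_API_KEY" http://localhost:6333/collections
```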
To build a RAG chatbot:

- Create Knowledge Base:
  - Go to Knowledge → Create Knowledge
  - Upload documents (PDF, TXT, Markdown)
  - Dify chunks, embeds, and stores in Qdrant automatically
- Create App:
  - Go to Studio → Create App → Chat App
  - In Context, link your Knowledge Base
  - Set the Model to `llama3.2` (Ollama)
  - Configure system prompt and retrieval settings
- Publish:
  - Click Publish to get a shareable URL or API endpoint
Dify provides a REST API for each published app:

```bash
# Get API key from app settings → API Access
curl -X POST https://dify.<domain>/v1/chat-messages \
-H "Authorization: Bearer app-YOUR-API-KEY" \
-H "Content-Type: application/json" \
-d '{
"inputs": {},
"query": "What is RAG?",
"user": "test-user",
"response_mode": "blocking"
}'
```

Install the Python client:

```bash
pip install dify-client
```

```python
from dify_client import ChatClient

client = ChatClient(api_key="app-YOUR-API-KEY")
client.base_url = "https://dify.<domain>/v1"
response = client.create_chat_message(
inputs={},
query="Explain vector databases",
user="test-user",
response_mode="blocking",
)
print(response.json()["answer"])
```
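Both examples use `response_mode: "blocking"`, which returns the complete answer in one response. Dify also accepts `"streaming"`, which emits the answer incrementally as server-sent events; a sketch of the same call:

```bash
curl -X POST https://dify.<domain>/v1/chat-messages \
  -H "Authorization: Bearer app-YOUR-API-KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": {},
    "query": "What is RAG?",
    "user": "test-user",
    "response_mode": "streaming"
  }'
# The response is a text/event-stream of lines like:
# data: {"event": "message", "answer": "...", ...}
```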
Dify's visual workflow builder supports:

- LLM nodes — call any configured model
- Knowledge Retrieval — query RAG knowledge bases
- Code nodes — run Python/JavaScript inline
- HTTP Request — call external APIs
- Conditional logic — branching and loops
- Variable aggregation — collect and merge outputs
Example workflow:

```
Start → Knowledge Retrieval → LLM (with context) → Answer
```

To build it:
- Create a Workflow App
- Add Knowledge Retrieval node → select your Knowledge Base
- Add LLM node → connect retrieval output as context
- Set system prompt: "Answer based on the provided context"
Expose the dify-nginx port directly:

```yaml
# In docker-compose.yml, add to dify-nginx:
ports:
  - "8080:80"
```

Access via http://<server-ip>:8080.
Back up the database and uploaded files:

```bash
# Stop services
docker compose stop dify-api dify-worker dify-web dify-nginx
# Backup database
docker exec dify-db pg_dump -U dify dify > dify-backup-$(date +%Y%m%d).sql
# Backup storage (uploaded files)
docker cp dify-api:/app/api/storage ./dify-storage-backup/
docker compose start dify-api dify-worker dify-web dify-nginx
```

To restore:

```bash
docker compose stop dify-api dify-worker dify-web dify-nginx
cat dify-backup.sql | docker exec -i dify-db psql -U dify dify
docker cp ./dify-storage-backup/. dify-api:/app/api/storage/
docker compose start dify-api dify-worker dify-web dify-nginx
```
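These backup steps lend themselves to a small script run from cron. A sketch, assuming the repo path used earlier (`~/ai-lab-server-setup`) and an assumed `~/dify-backups` directory:

```bash
#!/usr/bin/env bash
# dify-backup.sh: hypothetical wrapper around the backup steps above
set -euo pipefail
cd "$HOME/ai-lab-server-setup"
mkdir -p "$HOME/dify-backups"
STAMP=$(date +%Y%m%d)

docker compose stop dify-api dify-worker dify-web dify-nginx
# dify-db stays up, so pg_dump still works; docker cp works on stopped containers
docker exec dify-db pg_dump -U dify dify > "$HOME/dify-backups/dify-backup-$STAMP.sql"
docker cp dify-api:/app/api/storage "$HOME/dify-backups/storage-$STAMP"
docker compose start dify-api dify-worker dify-web dify-nginx
```

Schedule it nightly with `crontab -e`, e.g. `0 3 * * * /home/<user>/dify-backup.sh`.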
| Issue | Solution |
|---|---|
| Setup wizard not loading | Check all 5 containers: `docker compose ps \| grep dify` |
| "Connection refused" to Ollama | Use `host.docker.internal:11434` for native Ollama |
| Slow document processing | Check dify-worker logs: `docker compose logs dify-worker` |
| 502 error via Traefik | Verify dify-nginx is on the traefik-public network |
| Database connection error | Check dify-db health: `docker compose ps dify-db` |
| Redis connection error | Verify `DIFY_REDIS_PASSWORD` matches in `.env` |
| File upload fails | Check storage volume permissions and `client_max_body_size` |
```bash
# API server
docker compose logs -f dify-api
# Background worker (document processing)
docker compose logs -f dify-worker
# Nginx routing
docker compose logs -f dify-nginx
# All Dify services
docker compose logs -f dify-api dify-worker dify-web dify-nginx dify-db dify-redis
```
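To scan recent output for problems across services, a one-liner (the time window is arbitrary):

```bash
# Grep the last hour of API and worker logs for errors
docker compose logs --since 1h dify-api dify-worker | grep -iE "error|exception"
```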