This guide covers setting up your local environment for developing and testing the worker-comfyui.
Both of the testing approaches below use the data from `test_input.json`, so make your changes there to test different workflow inputs.
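The exact contents depend on the workflow you want to test, but the file follows the standard RunPod payload shape with the workflow under `input.workflow`. A minimal Python sketch of that shape (the node ID and its fields are placeholders for illustration, not values from the repo):

```python
import json

# Illustrative shape of test_input.json — the "workflow" value is a ComfyUI
# workflow exported in API format; node "3" and its inputs are placeholders.
test_input = {
    "input": {
        "workflow": {
            "3": {"class_type": "KSampler", "inputs": {"seed": 42, "steps": 20}},
        }
    }
}

# Writing the file out is just a JSON dump:
print(json.dumps(test_input, indent=2))
```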
- Python >= 3.10
- `pip` (Python package installer)
- Virtual environment tool (like `venv`)
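To confirm the Python requirement before creating the environment, a one-off check like this works (the helper name is ours, not part of the repo):

```python
import sys

MINIMUM = (3, 10)  # the guide requires Python >= 3.10

def meets_minimum(version_info=sys.version_info, minimum=MINIMUM):
    """Return True when the interpreter satisfies the version requirement."""
    return tuple(version_info[:2]) >= minimum

print(f"Python {sys.version.split()[0]} OK: {meets_minimum()}")
```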
- Clone the repository (if you haven't already):

  ```bash
  git clone https://github.com/runpod-workers/worker-comfyui.git
  cd worker-comfyui
  ```

- Create a virtual environment:

  ```bash
  python -m venv .venv
  ```
- Activate the virtual environment:
  - Windows (Command Prompt/PowerShell):

    ```powershell
    .\.venv\Scripts\activate
    ```

  - macOS / Linux (Bash/Zsh):

    ```bash
    source ./.venv/bin/activate
    ```
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
Running Docker with GPU acceleration on Windows typically requires WSL2 (Windows Subsystem for Linux).
- Install WSL2 and a Linux distribution (like Ubuntu) following Microsoft's official guide. You generally don't need the GUI support for this.
- Open your Linux distribution's terminal (e.g., open Ubuntu from the Start menu or type `wsl` in Command Prompt/PowerShell).
- Update packages inside WSL:

  ```bash
  sudo apt update && sudo apt upgrade -y
  ```

- Install Docker Engine in WSL:
  - Follow the official Docker installation guide for your chosen Linux distribution (e.g., Ubuntu).
  - Important: Add your user to the `docker` group to avoid using `sudo` for every Docker command: `sudo usermod -aG docker $USER`. You might need to close and reopen the terminal for this to take effect.
- Install Docker Compose (if not included with Docker Engine):

  ```bash
  sudo apt-get update
  sudo apt-get install docker-compose-plugin
  # Or use the standalone binary method if preferred
  ```

- Install NVIDIA Container Toolkit in WSL:
  - Follow the NVIDIA Container Toolkit installation guide, ensuring you select the correct steps for your Linux distribution running inside WSL.
  - Configure Docker to use the NVIDIA runtime as default if desired, or specify it when running containers.
- Enable GPU Acceleration in WSL:
  - Ensure you have the latest NVIDIA drivers installed on your Windows host machine.
  - Follow the NVIDIA guide for CUDA on WSL.
After completing these steps, you should be able to run Docker commands, including docker-compose, from within your WSL terminal with GPU access.
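One way to verify the whole chain is to run `nvidia-smi` inside a container from the WSL terminal. A hedged Python sketch of that check (the CUDA image tag is only an example — substitute any image that contains `nvidia-smi`):

```python
import shutil
import subprocess

# Example command following NVIDIA's Container Toolkit docs; the image tag
# is illustrative, not a project requirement.
GPU_SMOKE_TEST = [
    "docker", "run", "--rm", "--gpus", "all",
    "nvidia/cuda:12.3.2-base-ubuntu22.04", "nvidia-smi",
]

def run_gpu_smoke_test():
    """Return the command's exit code (0 means the GPU is visible inside
    the container), or None when docker is not on PATH (e.g. outside WSL)."""
    if shutil.which("docker") is None:
        return None
    return subprocess.run(GPU_SMOKE_TEST).returncode
```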
Note
- It is generally recommended to run the Docker commands (`docker build`, `docker-compose up`) from within the WSL environment terminal for consistency with the Linux-based container environment.
- Accessing `localhost` URLs (like the local API or ComfyUI) from your Windows browser while the service runs inside WSL usually works, but network configurations can sometimes cause issues.
Unit tests are provided to verify the core logic of `handler.py`.
- Run all tests:

  ```bash
  python -m unittest discover tests/
  ```

- Run a specific test file:

  ```bash
  python -m unittest tests.test_handler
  ```

- Run a specific test case or method:

  ```bash
  # Example: Run all tests in the TestRunpodWorkerComfy class
  python -m unittest tests.test_handler.TestRunpodWorkerComfy

  # Example: Run a single test method
  python -m unittest tests.test_handler.TestRunpodWorkerComfy.test_s3_upload
  ```
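If you add your own tests under `tests/`, `unittest` discovery will pick up any `test_*` method on a `TestCase` subclass. A minimal, self-contained illustration (this class is hypothetical, not part of the repo's suite):

```python
import unittest

class TestWorkflowInput(unittest.TestCase):
    """Hypothetical example; the real suite lives in tests/test_handler.py."""

    def test_payload_contains_workflow(self):
        # The worker expects the workflow nested under input.workflow
        payload = {"input": {"workflow": {}}}
        self.assertIn("workflow", payload["input"])

# Run with: python -m unittest discover tests/
```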
For enhanced local development and end-to-end testing, you can start a local environment using Docker Compose that includes the worker and a ComfyUI instance.
Important
- This currently requires an NVIDIA GPU and correctly configured drivers + NVIDIA Container Toolkit (see Windows setup above if applicable).
- Ensure Docker is running.
Steps:
- Set Environment Variable (Optional but Recommended):
  - While the `docker-compose.yml` sets `SERVE_API_LOCALLY=true` by default, you might manage environment variables externally (e.g., via a `.env` file).
  - Ensure the `SERVE_API_LOCALLY` environment variable is set to `true` for the `worker` service if you modify the compose file or use a `.env` file.
- Start the services:

  ```bash
  # From the project root directory
  docker-compose up --build
  ```

  - The `--build` flag ensures the image is built locally using the current state of the code and `Dockerfile`.
  - This will start two containers: `comfyui` and `worker`.
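The first `--build` can take a while, so it helps to poll the local API until it answers before sending jobs. A small Python sketch (the `/docs` path, the timeout, and the `probe` hook are our assumptions for illustration and testability, not part of the project):

```python
import time
import urllib.error
import urllib.request

def wait_for_api(url="http://localhost:8000/docs", timeout=180, interval=2,
                 probe=None):
    """Poll `url` until it returns HTTP 200 or `timeout` seconds elapse.
    `probe` lets tests inject a fake check instead of a real HTTP GET."""
    if probe is None:
        def probe(u):
            try:
                with urllib.request.urlopen(u, timeout=5) as resp:
                    return resp.status == 200
            except (urllib.error.URLError, OSError):
                return False
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe(url):
            return True
        time.sleep(interval)
    return False
```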
- With the Docker Compose stack running, the worker's simulated RunPod API is accessible at: http://localhost:8000
  - You can send POST requests to `http://localhost:8000/run` or `http://localhost:8000/runsync` with the same JSON payload structure expected by the RunPod endpoint.
  - Opening http://localhost:8000/docs in your browser will show the FastAPI auto-generated documentation (Swagger UI), allowing you to interact with the API directly.
- The underlying ComfyUI instance running in the `comfyui` container is accessible directly at: http://localhost:8188
  - This is useful for debugging workflows or observing the ComfyUI state while testing the worker.
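For scripted testing against the local endpoint, the standard library is enough. A sketch that builds the `/runsync` request (the payload shape mirrors `test_input.json`; the helper name is ours, not part of the project):

```python
import json
import urllib.request

def build_runsync_request(workflow, base_url="http://localhost:8000"):
    """Build (but do not send) a POST request for the local /runsync endpoint."""
    body = json.dumps({"input": {"workflow": workflow}}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/runsync",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the stack running, send it like this:
#   with urllib.request.urlopen(build_runsync_request(my_workflow)) as resp:
#       print(json.loads(resp.read()))
```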
- Press `Ctrl+C` in the terminal where `docker-compose up` is running.
- To ensure containers are removed, you can run:

  ```bash
  docker-compose down
  ```