# Custom Endpoints
Omega supports connecting to any OpenAI-compatible API endpoint, in addition to the built-in providers (OpenAI, Anthropic, Google Gemini).
If you have a GitHub account, you can use models from the GitHub Models marketplace for free (with rate limits). Available models include GPT-4o, Llama, Phi, Mistral, and more.
- Create a personal access token on GitHub
- Set the environment variable:

  ```bash
  export GITHUB_TOKEN="ghp_your_token_here"
  ```

- Start Omega -- GitHub Models are auto-detected
No configuration file changes are needed. Omega checks for GITHUB_TOKEN on startup and registers all available models automatically.
GitHub Models has per-model rate limits for free users (typically ~10 requests/minute, 8k input / 4k output tokens). This is suitable for experimentation and light usage. For heavier workloads, consider a paid provider.
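The auto-detection described above amounts to a simple environment check at startup. The sketch below illustrates the idea; `detect_github_token` is a hypothetical name, not Omega's actual internal API:

```python
import os

def detect_github_token(env=os.environ):
    """Return the GitHub token if set and non-blank, else None (sketch of
    the startup check that decides whether to register GitHub Models)."""
    token = env.get("GITHUB_TOKEN", "").strip()
    return token or None

# With the variable exported, detection succeeds:
assert detect_github_token({"GITHUB_TOKEN": "ghp_example"}) == "ghp_example"
# Without it (or with a blank value), GitHub Models are simply skipped:
assert detect_github_token({}) is None
```

Because the check is purely environmental, no config file edit can enable or disable it -- only the presence of `GITHUB_TOKEN` matters.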
You can connect Omega to Azure OpenAI, local LLM servers (Ollama, vLLM, llama.cpp), or any third-party provider that exposes an OpenAI-compatible API.
Add entries to `~/.omega/config.yaml`:

```yaml
custom_endpoints:
  - name: "Azure GPT-4"
    base_url: "https://my-resource.openai.azure.com/openai/deployments/gpt-4/v1"
    api_key_env: "AZURE_OPENAI_API_KEY"
  - name: "Local Ollama"
    base_url: "http://localhost:11434/v1"
    api_key_env: "OLLAMA_API_KEY"
```

Each entry requires:
| Field | Description |
|---|---|
| `name` | Display name for logging |
| `base_url` | The OpenAI-compatible API base URL |
| `api_key_env` | Name of the environment variable holding the API key |
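To make the three required fields concrete, here is a minimal, hypothetical validator for one `custom_endpoints` entry (Omega's real config loader may behave differently):

```python
REQUIRED_FIELDS = ("name", "base_url", "api_key_env")

def validate_endpoint(entry: dict) -> list:
    """Return a list of problems with a custom_endpoints entry (empty if valid)."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if not entry.get(f)]
    url = entry.get("base_url", "")
    if url and not url.startswith(("http://", "https://")):
        problems.append("base_url must be an http(s) URL")
    return problems

entry = {
    "name": "Local Ollama",
    "base_url": "http://localhost:11434/v1",
    "api_key_env": "OLLAMA_API_KEY",
}
assert validate_endpoint(entry) == []
```

An entry missing `base_url` or `api_key_env`, or using a non-HTTP URL, would be reported rather than silently registered.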
Then set the environment variables:

```bash
export AZURE_OPENAI_API_KEY="your_azure_key"
export OLLAMA_API_KEY="ollama"  # Ollama doesn't need a real key, but the field must be non-empty
```

On startup, Omega:
- Reads `custom_endpoints` from `~/.omega/config.yaml`
- For each endpoint, resolves the API key from the named environment variable
- Queries the endpoint's `/v1/models` API to discover available models
- Registers discovered models in the model dropdown
Custom endpoints never overwrite models already registered by built-in providers (OpenAI, Anthropic, Gemini).
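The startup sequence above can be sketched as a short loop. None of these names are Omega's real internals; `fetch_models` stands in for the `GET {base_url}/models` call and is injected so the logic runs without a live server:

```python
import os

def register_custom_endpoints(endpoints, registry, fetch_models, env=os.environ):
    """Sketch of the startup discovery loop.

    `registry` maps model name -> provider/endpoint name. Entries already
    present (the built-in providers) are never overwritten.
    """
    for ep in endpoints:
        api_key = env.get(ep["api_key_env"])
        if not api_key:
            print(f"skipped {ep['name']}: env var not set")
            continue
        models = fetch_models(ep["base_url"], api_key)  # e.g. GET {base_url}/models
        if not models:
            print(f"skipped {ep['name']}: no models available")
            continue
        for model in models:
            registry.setdefault(model, ep["name"])  # built-in registrations win
    return registry
```

A pre-seeded `registry` entry such as `{"gpt-4o": "OpenAI"}` survives even if a custom endpoint also advertises `gpt-4o`, which mirrors the no-overwrite rule above.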
| Provider | `base_url` | Notes |
|---|---|---|
| Azure OpenAI | `https://{resource}.openai.azure.com/openai/deployments/{model}/v1` | Use Azure API key |
| Ollama | `http://localhost:11434/v1` | Set any non-empty API key |
| vLLM | `http://localhost:8000/v1` | Set any non-empty API key |
| Together AI | `https://api.together.xyz/v1` | Use Together API key |
| Groq | `https://api.groq.com/openai/v1` | Use Groq API key |
| LM Studio | `http://localhost:1234/v1` | Set any non-empty API key |
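Putting the pieces together, an end-to-end local Ollama setup might look like the sketch below. It appends to your real `~/.omega/config.yaml`, so review it before running (the port is Ollama's default from the table above):

```shell
# Create the config directory if this is a fresh install
mkdir -p ~/.omega

# Append a custom endpoint entry for the local Ollama server
cat >> ~/.omega/config.yaml <<'EOF'
custom_endpoints:
  - name: "Local Ollama"
    base_url: "http://localhost:11434/v1"
    api_key_env: "OLLAMA_API_KEY"
EOF

# Ollama ignores the key, but Omega requires the variable to be non-empty
export OLLAMA_API_KEY="ollama"
```

After this, starting Omega should log the endpoint's registered models (or a skip reason, as described under troubleshooting below).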
- **Endpoint skipped with "no models available"**: The `/v1/models` API returned an empty list. Check that the server is running and the URL is correct.
- **Endpoint skipped with "env var not set"**: The environment variable specified in `api_key_env` is not set. Export it before starting napari.
- **Models not appearing in dropdown**: Check the terminal output for registration messages. Omega logs success/failure for each endpoint.
- API Keys -- Setting up built-in providers
- Configure API Keys -- Using the API Key Vault