
royer edited this page Feb 9, 2026 · 1 revision

Custom Endpoints & GitHub Models

Omega supports connecting to any OpenAI-compatible API endpoint, in addition to the built-in providers (OpenAI, Anthropic, Google Gemini).


GitHub Models (free)

If you have a GitHub account, you can use models from the GitHub Models marketplace for free (with rate limits). Available models include GPT-4o, Llama, Phi, Mistral, and more.

Setup

  1. Create a personal access token on GitHub
  2. Set the environment variable:
    export GITHUB_TOKEN="ghp_your_token_here"
  3. Start Omega -- GitHub Models are auto-detected

No configuration file changes are needed. Omega checks for GITHUB_TOKEN on startup and registers all available models automatically.
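The auto-detection step amounts to a simple environment check. A minimal sketch (illustrative Python, not Omega's actual code; the base URL shown is the commonly documented GitHub Models inference endpoint and should be treated as an assumption):

```python
import os

# Assumed OpenAI-compatible inference endpoint for GitHub Models.
GITHUB_MODELS_BASE_URL = "https://models.inference.ai.azure.com"

def github_models_available(env=os.environ) -> bool:
    """Return True when a non-empty GITHUB_TOKEN is set -- the condition
    checked on startup before GitHub Models are registered."""
    return bool(env.get("GITHUB_TOKEN"))
```

The `env` parameter defaults to the real environment but can be swapped for a dict when testing.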

Rate limits

GitHub Models has per-model rate limits for free users (typically ~10 requests/minute, 8k input / 4k output tokens). This is suitable for experimentation and light usage. For heavier workloads, consider a paid provider.
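If you script against GitHub Models directly, a small client-side throttle helps you stay under a ~10 requests/minute cap. A minimal sketch, not part of Omega; the class name and parameters are illustrative:

```python
import time

class MinIntervalLimiter:
    """Space successive calls at least `interval` seconds apart,
    e.g. interval=6.0 for roughly 10 requests per minute."""

    def __init__(self, interval: float):
        self.interval = interval
        self._last = float("-inf")  # no delay before the first call

    def wait(self, now=None, sleep=time.sleep):
        """Call before each request; blocks (via `sleep`) until the
        configured interval has elapsed since the previous call."""
        if now is None:
            now = time.monotonic()
        delay = self.interval - (now - self._last)
        if delay > 0:
            sleep(delay)
            now += delay
        self._last = now
        return now
```

The `now` and `sleep` parameters exist only so the sketch can be exercised without real waiting.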


Custom OpenAI-compatible endpoints

You can connect Omega to Azure OpenAI, local LLM servers (Ollama, vLLM, llama.cpp), or any third-party provider that exposes an OpenAI-compatible API.

Configuration

Add entries to ~/.omega/config.yaml:

custom_endpoints:
  - name: "Azure GPT-4"
    base_url: "https://my-resource.openai.azure.com/openai/deployments/gpt-4/v1"
    api_key_env: "AZURE_OPENAI_API_KEY"

  - name: "Local Ollama"
    base_url: "http://localhost:11434/v1"
    api_key_env: "OLLAMA_API_KEY"

Each entry requires:

| Field | Description |
| --- | --- |
| name | Display name for logging |
| base_url | The OpenAI-compatible API base URL |
| api_key_env | Name of the environment variable holding the API key |

Then set the environment variables:

export AZURE_OPENAI_API_KEY="your_azure_key"
export OLLAMA_API_KEY="ollama"  # Ollama ignores the key, but the variable must be non-empty
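The key-resolution step these variables feed can be sketched as follows (hypothetical helper, not Omega's actual code):

```python
import os

def resolve_endpoint(entry: dict, env=os.environ):
    """Look up the API key named by `api_key_env` for one entry of
    `custom_endpoints`. Returns a ready-to-use dict, or None when the
    variable is unset or empty (such endpoints are skipped)."""
    key = env.get(entry["api_key_env"], "")
    if not key:
        return None
    return {"name": entry["name"],
            "base_url": entry["base_url"],
            "api_key": key}
```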

How it works

On startup, Omega:

  1. Reads custom_endpoints from ~/.omega/config.yaml
  2. For each endpoint, resolves the API key from the named environment variable
  3. Queries the endpoint's /v1/models API to discover available models
  4. Registers discovered models in the model dropdown

Custom endpoints never overwrite models already registered by built-in providers (OpenAI, Anthropic, Gemini).
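That non-overwriting policy amounts to a merge in which built-in registrations win on name collisions. A sketch of the policy (the model and provider names below are made up):

```python
def register_models(builtin: dict, discovered: dict) -> dict:
    """Merge models discovered on custom endpoints into the registry
    without overwriting built-in providers' entries. Keys are model
    names; values name the providing endpoint. Illustrative only."""
    merged = dict(builtin)
    for model, endpoint in discovered.items():
        merged.setdefault(model, endpoint)  # existing (built-in) entry wins
    return merged
```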

Common setups

| Provider | base_url | Notes |
| --- | --- | --- |
| Azure OpenAI | https://{resource}.openai.azure.com/openai/deployments/{model}/v1 | Use Azure API key |
| Ollama | http://localhost:11434/v1 | Set any non-empty API key |
| vLLM | http://localhost:8000/v1 | Set any non-empty API key |
| Together AI | https://api.together.xyz/v1 | Use Together API key |
| Groq | https://api.groq.com/openai/v1 | Use Groq API key |
| LM Studio | http://localhost:1234/v1 | Set any non-empty API key |

Troubleshooting

  • Endpoint skipped with "no models available": The /v1/models API returned an empty list. Check that the server is running and the URL is correct.
  • Endpoint skipped with "env var not set": The environment variable specified in api_key_env is not set. Export it before starting napari.
  • Models not appearing in dropdown: Check the terminal output for registration messages. Omega logs success/failure for each endpoint.
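The first two cases above boil down to a two-step check, sketched here with the skip reasons quoted from this page (illustrative, not Omega's code):

```python
def diagnose_endpoint(api_key, models):
    """Return the skip reason for an endpoint, or "registered" if none applies."""
    if not api_key:
        return "env var not set"      # the api_key_env variable is missing or empty
    if not models:
        return "no models available"  # /v1/models returned an empty list
    return "registered"
```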
