Add support for custom LLM integration #723

Open
the-cybersapien wants to merge 1 commit into safishamsi:v7 from the-cybersapien:aditya/aditya/decouple-cheap-model-v7
Conversation

@the-cybersapien

This pull request adds support for using any custom OpenAI-compatible LLM as the backend for semantic extraction, making it easier and potentially cheaper to run Graphify with alternative LLM providers.
The existing Kimi 2.6 integration is extended to support any custom backend, such as OpenRouter, Qwen, etc. As long as the endpoint is OpenAI-compatible, Graphify will support it.
Environment variables are now namespaced to Graphify so they do not clash with any other running service.

The documentation is updated with clear setup instructions, and the codebase is extended to detect and configure a "custom" LLM backend using environment variables.
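For illustration, configuring Graphify against an OpenRouter endpoint might look like the sketch below. Only `GRAPHIFY_LLM_BASE_URL` is named in this PR; `GRAPHIFY_LLM_API_KEY`, `GRAPHIFY_LLM_MODEL`, and `GRAPHIFY_LLM_TEMPERATURE` are hypothetical names for the key, model, and optional temperature settings described above, and the model slug is just an example.

```python
import os

# Hypothetical configuration sketch; only GRAPHIFY_LLM_BASE_URL is
# confirmed by this PR, the other variable names are assumptions.
os.environ["GRAPHIFY_LLM_BASE_URL"] = "https://openrouter.ai/api/v1"
os.environ["GRAPHIFY_LLM_API_KEY"] = "sk-or-..."                 # your provider key
os.environ["GRAPHIFY_LLM_MODEL"] = "qwen/qwen-2.5-72b-instruct"  # example model slug
os.environ["GRAPHIFY_LLM_TEMPERATURE"] = "0.2"                   # optional
```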

LLM backend support and configuration:

  • Added documentation in README.md explaining how to use Kimi 2.6 via Moonshot or any custom OpenAI-compatible LLM, including the required environment variables and an example configuration for providers like OpenRouter.
  • Introduced a "custom" backend in graphify/llm.py that lets users specify their own OpenAI-compatible endpoint, model, and optional temperature via environment variables.
  • Updated the backend detection logic in graphify/llm.py to check for a custom backend (via GRAPHIFY_LLM_BASE_URL) after Kimi and before Claude, enabling seamless selection based on available API keys (see the sketch after this list).
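A minimal sketch of that selection order, assuming MOONSHOT_API_KEY and ANTHROPIC_API_KEY are the keys that gate the Kimi and Claude backends (those names and the function below are illustrative, not the PR's actual code; only GRAPHIFY_LLM_BASE_URL is taken from this PR):

```python
import os

def detect_backend() -> str:
    """Pick a backend in the order described above: Kimi, custom, Claude."""
    if os.getenv("MOONSHOT_API_KEY"):        # Kimi via Moonshot (assumed key name)
        return "kimi"
    if os.getenv("GRAPHIFY_LLM_BASE_URL"):   # custom OpenAI-compatible endpoint (this PR)
        return "custom"
    if os.getenv("ANTHROPIC_API_KEY"):       # Claude (assumed key name)
        return "claude"
    raise RuntimeError("No LLM backend configured")
```

Checking GRAPHIFY_LLM_BASE_URL after Kimi and before Claude means a user who sets both a Moonshot key and a custom base URL still gets Kimi, matching the precedence described above.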

