Add support for custom LLM integration#723
Open
the-cybersapien wants to merge 1 commit into safishamsi:v7 from
This pull request adds support for using any custom OpenAI-compatible LLM as a backend for semantic extraction, making it easier and potentially cheaper to run Graphify with alternative LLM providers.
The existing kimi-2.6 implementation is extended to support any custom backend, such as OpenRouter, Qwen, etc. As long as the endpoint is OpenAI-compatible, Graphify will support it.
Environment variables are updated to be specific to Graphify, so they do not clash with any other running service.
The documentation is updated with clear setup instructions, and the codebase is extended to detect and configure a "custom" LLM backend using environment variables.
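A minimal configuration might look like the following. Only `GRAPHIFY_LLM_BASE_URL` is named in this PR; the other variable names and values are illustrative assumptions, not taken from the actual code:

```shell
# Point Graphify at any OpenAI-compatible endpoint (e.g. OpenRouter).
export GRAPHIFY_LLM_BASE_URL="https://openrouter.ai/api/v1"   # named in the PR
export GRAPHIFY_LLM_API_KEY="sk-..."                          # assumed variable name
export GRAPHIFY_LLM_MODEL="qwen/qwen-2.5-72b-instruct"        # assumed variable name
export GRAPHIFY_LLM_TEMPERATURE="0.2"                         # optional; assumed name
```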
LLM backend support and configuration:
- `README.md`: explaining how to use Kimi 2.6 via Moonshot or any custom OpenAI-compatible LLM, including required environment variables and example configuration for providers like OpenRouter.
- `graphify/llm.py`: allowing users to specify their own OpenAI-compatible endpoint, model, and optional temperature via environment variables.
- `graphify/llm.py`: checking for a custom backend (via `GRAPHIFY_LLM_BASE_URL`) after Kimi and before Claude, enabling seamless selection based on available API keys.
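The selection order described above (Kimi first, then a custom endpoint, then Claude) could be sketched as follows. This is a hypothetical illustration of the detection logic, not the actual `graphify/llm.py` code; apart from `GRAPHIFY_LLM_BASE_URL`, the environment variable names are assumptions:

```python
import os


def select_backend() -> str:
    """Pick an LLM backend based on which environment variables are set.

    Order follows the PR description: Kimi, then a custom
    OpenAI-compatible endpoint, then Claude.
    """
    if os.environ.get("MOONSHOT_API_KEY"):  # assumed name for the Kimi/Moonshot key
        return "kimi"
    if os.environ.get("GRAPHIFY_LLM_BASE_URL"):  # the variable named in this PR
        return "custom"
    if os.environ.get("ANTHROPIC_API_KEY"):  # standard Anthropic key variable
        return "claude"
    raise RuntimeError("No LLM backend configured")
```

Checking the custom backend before Claude means a user who sets only `GRAPHIFY_LLM_BASE_URL` gets their own endpoint without having to unset any other provider keys.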