
Add llm_kwargs support for local LLM backends (e.g. Ollama) #206

Open

omarsherif0 wants to merge 1 commit into VectifyAI:main from omarsherif0:add-ollama-llm-kwargs

Conversation

@omarsherif0

Adds llm_kwargs support to enable local LLM backends like Ollama without needing to set environment variables.

Changes

- Add llm_kwargs to PageIndexClient
- Pass llm_kwargs through the PDF and Markdown indexing paths
- Inject llm_kwargs into litellm.completion() and litellm.acompletion()
- Add llm_kwargs: {} to the config defaults
- Introduce an llm_kwargs_scope context manager in utils.py to propagate custom LiteLLM kwargs through the full indexing pipeline (a sketch follows below)
- Expose an llm_kwargs param in page_index() and PageIndexClient
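
A minimal sketch of how the propagation could work, assuming a contextvars-based implementation; the actual utils.py code in this PR may differ. A ContextVar (rather than a module-level global) keeps the active kwargs correct when the pipeline runs concurrent litellm.acompletion() calls, since each async task sees its own context.

```python
# Hypothetical sketch of llm_kwargs_scope; the name mirrors the PR
# description, but the real implementation may differ.
import contextvars
from contextlib import contextmanager

_llm_kwargs_var = contextvars.ContextVar("llm_kwargs", default={})

@contextmanager
def llm_kwargs_scope(llm_kwargs):
    """Make custom LiteLLM kwargs visible to every completion call
    issued while this scope is active."""
    token = _llm_kwargs_var.set(llm_kwargs or {})
    try:
        yield
    finally:
        # Restore the previous value so nested scopes unwind cleanly.
        _llm_kwargs_var.reset(token)

def current_llm_kwargs():
    """Read back the kwargs to merge into litellm.completion()
    and litellm.acompletion() call sites."""
    return _llm_kwargs_var.get()
```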
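With that in place, pointing the indexer at a local Ollama server would look something like the snippet below. This is a usage sketch, not the PR's documented API: the import path is assumed, and the model/api_base values follow LiteLLM's Ollama conventions (the ollama/ model prefix and Ollama's default local endpoint).

```python
# Usage sketch, assuming llm_kwargs is forwarded verbatim to LiteLLM.
from pageindex import page_index  # import path assumed; adjust to the package layout

tree = page_index(
    "paper.pdf",
    llm_kwargs={
        "model": "ollama/llama3",              # LiteLLM's Ollama provider prefix
        "api_base": "http://localhost:11434",  # default Ollama endpoint
    },
)
```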

@claude (bot) left a comment


Claude Code Review

This pull request is from a fork — automated review is disabled. A repository maintainer can comment @claude review to run a one-time review.

