feat: add Ollama provider and integration test #81
Aneesh-0108 wants to merge 1 commit into AcademySoftwareFoundation:main
Conversation
Signed-off-by: Aneesh-0108 <cpaneesh2006@gmail.com>
Force-pushed from 9560149 to cbbf74a
Please review the OpenAI provider. You did not implement the methods that are needed to sub the prompt. This should be tested with the full stack. PRs that are untested with the full stack are not allowed. For this PR to be accepted you must generate a note from the UI. This is to prevent drive-by forks.
Hi, quick update on the Ollama provider work: while testing the full frontend flow, I noticed that when running the backend standalone, the /projects endpoint returns a 404 (GET /projects?email=...). This causes the ProjectSelector UI to fail, since the frontend expects this endpoint. From what I can see, this endpoint appears to be available only when running the full docker-compose stack, not when launching main.py directly. Because of this, I wasn't able to fully validate the project-selection flow in the standalone setup, but the Ollama integration itself is functioning as expected. I'm going to step back from this issue for now due to the environment complexity. Thanks, I learned a lot working through this!
Fixes #77
This PR implements the OllamaProvider, enabling the dna project to perform local Large Language Model (LLM) inference. By integrating with Ollama, we can now use models like Llama 3 directly on a local machine.
Type of Change
Description and Changes Made
Implemented OllamaProvider: Created a new provider class in src/dna/llm_providers/ollama_provider.py that utilizes httpx.AsyncClient for high-performance, asynchronous communication with the Ollama API.
Enhanced Connection Logic: Configured the provider to handle local endpoints (defaulting to http://localhost:11434) and implemented a 90-second timeout to accommodate model loading ("cold starts"); a minimal sketch of the provider appears after this list.
Added Integration Test: Created tests/test_ollama_provider.py to verify the connection to Llama 3 and validate that the inference pipeline returns valid string responses.
Dependency Management: Updated requirements.txt to include httpx, ensuring the environment is reproducible for other developers.
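For reviewers, here is a minimal sketch of what such a provider can look like. It follows the PR description (httpx.AsyncClient, default http://localhost:11434 endpoint, 90-second timeout), but the class interface, the generate/aclose method names, and the non-streaming /api/generate call are assumptions for illustration, not the PR's actual code.

```python
# Illustrative sketch only: interface details are assumptions, not the merged code.
import httpx


class OllamaProvider:
    """Talks to a locally running Ollama server over its HTTP API."""

    def __init__(self, base_url: str = "http://localhost:11434", model: str = "llama3"):
        self.model = model
        # 90 s timeout so a cold model load does not abort the first request.
        self._client = httpx.AsyncClient(base_url=base_url, timeout=90.0)

    async def generate(self, prompt: str) -> str:
        # Non-streaming call to Ollama's /api/generate endpoint.
        response = await self._client.post(
            "/api/generate",
            json={"model": self.model, "prompt": prompt, "stream": False},
        )
        response.raise_for_status()
        return response.json()["response"]

    async def aclose(self) -> None:
        await self._client.aclose()
```

Using an AsyncClient created once per provider (rather than per request) keeps connections pooled and makes the long cold-start timeout apply uniformly to every call.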
Tests performed
Tested on Arch Linux with Python 3.x and Ollama (Llama 3). Verified that the generate method successfully returns text from the local model and that the integration test passes with the correct PYTHONPATH configuration.
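The test described above might look roughly like the sketch below. It assumes pytest with the pytest-asyncio plugin, a local Ollama server with the llama3 model pulled, and PYTHONPATH pointing at src/; the test name and assertions are hypothetical, mirroring tests/test_ollama_provider.py only in intent.

```python
# Hypothetical integration test sketch; requires a running Ollama server.
import pytest

from dna.llm_providers.ollama_provider import OllamaProvider


@pytest.mark.asyncio
async def test_generate_returns_text():
    provider = OllamaProvider()
    try:
        result = await provider.generate("Reply with a short greeting.")
    finally:
        await provider.aclose()
    # The pipeline should return a non-empty string from the local model.
    assert isinstance(result, str)
    assert result.strip()
```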
Checklist
My code follows the style guidelines of this project
I have performed a self-review of my own code
I have commented my code where necessary
I have added tests that prove my fix is effective or that my feature works
I have added documentation wherever appropriate