
feat: add Ollama provider and integration test #81

Open

Aneesh-0108 wants to merge 1 commit into AcademySoftwareFoundation:main from Aneesh-0108:feature/add-ollama-provider

Conversation

@Aneesh-0108

Fixes #77
This PR implements the OllamaProvider, enabling the dna project to perform local Large Language Model (LLM) inference. By integrating with Ollama, we can now use models like Llama 3 directly on a local machine.

Type of Change

  • New feature (non-breaking change which adds functionality)

Description and Changes Made

  • Implemented OllamaProvider: Created a new provider class in src/dna/llm_providers/ollama_provider.py that utilizes httpx.AsyncClient for high-performance, asynchronous communication with the Ollama API.

  • Enhanced Connection Logic: Configured the provider to handle local endpoints (defaulting to http://localhost:11434) and implemented a 90-second timeout to accommodate model loading ("cold starts"); see the sketch after this list.

  • Added Integration Test: Created tests/test_ollama_provider.py to verify the connection to Llama 3 and validate that the inference pipeline returns valid string responses.

  • Dependency Management: Updated requirements.txt to include httpx, ensuring the environment is reproducible for other developers.
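For illustration, here is a minimal sketch of what such a provider can look like. The class and method names, the `llama3` default, and the `generate` signature are assumptions for this example and may not match the actual `ollama_provider.py`; the only external detail relied on is Ollama's standard `/api/generate` endpoint with `stream: false`.

```python
import httpx


class OllamaProvider:
    """Minimal async client for a local Ollama server (illustrative sketch)."""

    def __init__(self, base_url: str = "http://localhost:11434", model: str = "llama3"):
        self.model = model
        # Generous timeout so the first request survives a model "cold start".
        self._client = httpx.AsyncClient(base_url=base_url, timeout=90.0)

    async def generate(self, prompt: str) -> str:
        """POST the prompt to /api/generate and return the generated text."""
        response = await self._client.post(
            "/api/generate",
            json={"model": self.model, "prompt": prompt, "stream": False},
        )
        response.raise_for_status()
        # With stream=False, Ollama returns a single JSON object whose
        # "response" field holds the full completion.
        return response.json()["response"]

    async def aclose(self) -> None:
        await self._client.aclose()
```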

Tests performed

Tested on Arch Linux with Python 3.x and Ollama (Llama 3). Verified that the generate method successfully returns text from the local model and that the integration test passes with the correct PYTHONPATH configuration.
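Roughly, the integration test has this shape (a sketch only: it assumes a running Ollama server with Llama 3 pulled, the provider interface sketched above, and PYTHONPATH pointing at `src/` so the `dna` package is importable; `asyncio.run` is used so no pytest-asyncio plugin is required):

```python
import asyncio

from dna.llm_providers.ollama_provider import OllamaProvider


def test_generate_returns_nonempty_string():
    async def run() -> str:
        provider = OllamaProvider()
        try:
            return await provider.generate("Reply with one word: hello")
        finally:
            await provider.aclose()

    result = asyncio.run(run())
    assert isinstance(result, str)
    assert result.strip()
```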

Checklist

  • My code follows the style guidelines of this project

  • I have performed a self-review of my own code

  • I have commented my code where necessary

  • I have added tests that prove my fix is effective or that my feature works

  • I have added documentation wherever appropriate

@linux-foundation-easycla

linux-foundation-easycla bot commented Feb 15, 2026

CLA Signed

The committers listed above are authorized under a signed CLA.

  • ✅ login: Aneesh-0108 / name: Aneesh Hebbar (cbbf74a)

Signed-off-by: Aneesh-0108 <cpaneesh2006@gmail.com>
@Aneesh-0108 force-pushed the feature/add-ollama-provider branch from 9560149 to cbbf74a on February 15, 2026 at 14:35
@jspada200
Collaborator

Please review the OpenAI provider. You did not implement the methods that are needed to substitute the prompt. This should be tested with the full stack; PRs that are untested against the full stack are not allowed. For this PR to be accepted you must generate a note from the UI. This is to prevent drive-by forks.

@Aneesh-0108
Author

Hi, quick update on the Ollama provider work:

While testing the full frontend flow, I noticed that when running the backend standalone, the /projects endpoint returns a 404 (GET /projects?email=...). This causes the ProjectSelector UI to fail since the frontend expects this endpoint. From what I can see, this endpoint appears to be available only when running the full docker-compose stack, not when launching main.py directly.

Because of this, I wasn’t able to fully validate the project-selection flow in the standalone setup, but the Ollama integration itself is functioning as expected.

I’m going to step back from this issue for now due to the environment complexity.

Thanks — I learned a lot working through this!

