geonaeem/local-llm-lab
Local LLM Lab

Experiments with running local large language models on my own hardware.

This repo is my sandbox for:

  • Chatting with local models (Ollama / LM Studio / others)
  • Testing prompts and system instructions
  • Trying small RAG (retrieval-augmented generation) ideas
  • Exploring what’s possible without sending data to the cloud
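For the RAG experiments, the core idea is retrieving the documents most relevant to a query and pasting them into the prompt. A toy sketch of the retrieval step, using bag-of-words counts and cosine similarity instead of a real embedding model (all names here are illustrative, not from the repo):

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': lowercase word counts (a real setup would use a model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Ollama serves local models over HTTP",
    "The cat sat on the mat",
    "Retrieval augmented generation adds context to prompts",
]
print(retrieve("local models over HTTP", docs, k=1))
```

The retrieved documents would then be prepended to the prompt before it is sent to the local model.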

Architecture (first version)

Backend: a local LLM server (starting with Ollama, but it can be swapped for LM Studio or another server).

Client: small Python scripts and notebooks that:

  • Send a prompt to the local model
  • Receive a response as JSON
  • Log prompts + outputs for later analysis
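The send/receive/log loop above might look roughly like this, using only the standard library. This is a minimal sketch assuming Ollama's default non-streaming `/api/generate` endpoint; the model name and log file name are placeholders, not from the repo:

```python
import json
import time
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
LOG_PATH = "prompt_log.jsonl"  # placeholder log file name

def build_payload(model, prompt):
    """Request body for /api/generate; stream=False returns a single JSON object."""
    return {"model": model, "prompt": prompt, "stream": False}

def log_interaction(prompt, response, path=LOG_PATH):
    """Append one prompt/response pair as a JSON line for later analysis."""
    record = {"ts": time.time(), "prompt": prompt, "response": response}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def ask(model, prompt):
    """Send a prompt to the local server and log the reply (needs Ollama running)."""
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())["response"]
    log_interaction(prompt, reply)
    return reply
```

Logging to JSON Lines keeps each interaction on its own line, so the log can be loaded later in a notebook one record at a time.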

Getting Started

1. Install requirements

Create a virtual environment (optional but recommended):

python3 -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -r requirements.txt
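With the Python dependencies installed, a model also needs to be available locally. Assuming Ollama as the backend, a quick terminal check might look like this (the model name is just an example):

```shell
# Download a model and sanity-check it from the terminal
# (assumes Ollama is installed and its server is running)
ollama pull llama3
ollama run llama3 "Say hello in one sentence."
```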
