A private web interface for local LLMs. Works with Ollama, LM Studio, and any OpenAI-compatible API.
- 🔒 Complete privacy: everything runs locally
- 🎨 Dark/light theme (follows the system setting)
- 📝 Markdown and code highlighting support
- 🕒 Chat history (in `localStorage`; see the sketch after this list)
- 🧠 Select a model from the list (Ollama)
- 📱 Responsive design
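
Chat history never leaves the browser; below is a minimal sketch of how messages could be persisted in `localStorage`. The helper names, message shape, and storage key are illustrative assumptions, not the app's actual code:

```ts
// Illustrative sketch only: persisting chat history in the browser's localStorage.
// The "localchat-history" key and ChatMessage shape are assumptions, not the app's real code.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

const HISTORY_KEY = "localchat-history";

function saveHistory(messages: ChatMessage[]): void {
  localStorage.setItem(HISTORY_KEY, JSON.stringify(messages));
}

function loadHistory(): ChatMessage[] {
  const raw = localStorage.getItem(HISTORY_KEY);
  return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
}
```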
- Node.js 18+
- Ollama (or another OpenAI-compatible server)
```bash
git clone https://github.com/mscbuild/localchat.git
cd localchat
npm install
npm run dev
```

```bash
docker build -t localchat .
docker run -p 3000:3000 --network host localchat
```
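
Once the container is running, the interface is served on port 3000 (the port the Docker command above maps). The `npm run dev` port may differ depending on the dev server, so check the terminal output.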
By default, it connects to:

- API: `http://localhost:11434` (Ollama)
- Model: `llama3`
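
For reference, here is a rough sketch of what a chat request against those defaults could look like, assuming Ollama's OpenAI-compatible `/v1/chat/completions` route (the app's actual request code is not shown in this README):

```ts
// Illustrative sketch only: a chat request against the default endpoint and model.
const baseUrl = process.env.OLLAMA_BASE_URL ?? "http://localhost:11434";

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content as string;
}
```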
You can change these in the interface (the model drop-down list) or through environment variables:
```bash
OLLAMA_BASE_URL=http://192.168.1.100:11434 npm run dev
```
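
The model drop-down mentioned above could be populated from Ollama's model list; here is a sketch assuming the native `GET /api/tags` endpoint (not necessarily how the app does it):

```ts
// Illustrative sketch only: fetching installed model names from Ollama's /api/tags endpoint.
async function listModels(baseUrl = "http://localhost:11434"): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  const data = await res.json();
  // The response contains a "models" array; each entry has a "name" field.
  return (data.models as { name: string }[]).map((m) => m.name);
}
```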
To push the project to GitHub, run in a terminal:

```bash
git init
git add .
git commit -m "feat: initial commit"
git branch -M main
git remote add origin https://github.com/mscbuild/localchat.git
git push -u origin main
```