AI-powered daily worklog manager. Type what you worked on in natural language, and AI extracts tasks, decisions, meetings, and notes into structured, searchable markdown files.
Disclaimer: This project is intended for personal use only and should be hosted locally. It is not production-grade software. The entire codebase was vibe coded with Claude by Anthropic.
- Natural language input — describe your day and AI structures it automatically
- Screenshot processing — upload or paste (Ctrl+V) screenshots for AI text extraction
- Daily markdown files — human-readable, one file per day, works outside the app
- Semantic search — find decisions, tasks, and notes by meaning, not just keywords
- Dark / light theme — toggle between themes, respects system preference
- Calendar navigation — browse any day's worklog or log entries to a future date
- Inline editing — click the pencil icon on any entry to edit it in place
- Meeting details — attendees and notes shown as sub-items under each meeting
- Task time tracking — tasks can carry a time (e.g. 15:00) shown in the UI
- Starred highlights — mark entries as important with the star toggle, shown in sidebar
- Work summary report — select a date range to view aggregated stats, highlights, decisions, meetings, and pending tasks with optional AI narrative summary and HTML download
- AI usage metrics — sidebar tracks daily AI request counts and estimated tokens (persisted across restarts)
- Processing indicators — animated progress bar and success summary on submission
- Dashboard — today's tasks, meetings, pending items, overdue items at a glance
- Multiple AI providers — Ollama (local, default), OpenAI, or Claude
- User authentication — JWT-based login and registration with bcrypt password hashing
- Fully containerized — one `docker compose up` to start everything
- Docker and Docker Compose
```bash
git clone <repo-url> worklog-ai
cd worklog-ai
docker compose up --build
```

Open http://localhost:3000 in your browser.
On first launch you'll see a registration page — create a username and password to get started. Subsequent visits show a login screen. Credentials are stored locally in the data/ directory (bcrypt-hashed passwords, auto-generated JWT secret). Tokens expire after 7 days.
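Conceptually, registration hashes the password with a per-user salt, and login re-derives the hash and compares it in constant time. A self-contained sketch of that pattern using Node's built-in scrypt (the project itself uses bcrypt; the function names here are illustrative, not the app's actual code):

```typescript
import { scryptSync, randomBytes, timingSafeEqual } from "node:crypto";

// Hash a password with a fresh random salt; store "salt:hash".
function hashPassword(password: string): string {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 64).toString("hex");
  return `${salt}:${hash}`;
}

// Re-derive the hash from the candidate password and compare in constant time.
function verifyPassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(":");
  const candidate = scryptSync(password, salt, 64);
  return timingSafeEqual(candidate, Buffer.from(hash, "hex"));
}
```

On successful verification the backend would then sign a JWT with its auto-generated secret and a 7-day expiry.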
By default the backend connects to Ollama on the host machine at http://host.docker.internal:11434. Install Ollama natively and pull the required models:
```bash
ollama pull llama3.2         # text processing
ollama pull llava            # screenshot/image OCR
ollama pull nomic-embed-text # embeddings for semantic search
```

To run Ollama inside Docker instead, start with the `with-ollama` profile:

```bash
docker compose --profile with-ollama up --build
```

A setup banner on the dashboard shows model download progress.
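Under the hood, the backend asks the model for structured JSON. A minimal sketch of building such a request body for Ollama's `/api/generate` endpoint (the prompt wording and field choices are illustrative, not the project's actual prompt):

```typescript
// Build a request body for Ollama's /api/generate endpoint.
// The prompt wording is illustrative, not the project's actual prompt.
function buildExtractionRequest(userText: string) {
  return {
    model: "llama3.2",
    stream: false,
    format: "json", // ask Ollama to emit valid JSON only
    prompt:
      "Extract tasks, decisions, meetings and notes as JSON from the " +
      `following worklog entry:\n${userText}`,
  };
}
```

The resulting object would be POSTed to `<ollamaUrl>/api/generate`, and the JSON in the response parsed into structured entries.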
Backend:

```bash
cd backend
npm install
npm run dev   # starts on port 4000
npm test      # run tests
```

Frontend:

```bash
cd frontend
npm install
npm run dev   # starts on port 5173, proxies /api to :4000
npm test      # run tests
```

```
┌──────────┐      ┌──────────┐      ┌──────────┐
│ Frontend │─────▶│ Backend  │─────▶│  Ollama  │
│  React   │      │ Express  │      │   LLM    │
│  :3000   │      │  :4000   │      │  :11434  │
└──────────┘      └────┬─────┘      └──────────┘
                       │
                       ├──▶ Markdown files (./data/)
                       │
                  ┌────▼─────┐
                  │ ChromaDB │
                  │ Vectors  │
                  │  :8000   │
                  └──────────┘
```
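The topology above corresponds roughly to a compose file like the following sketch (service names, images, and options are assumptions for illustration, not the project's actual `docker-compose.yml`):

```yaml
services:
  frontend:
    image: shri32msi/worklog-ai-frontend:latest
    ports: ["3000:80"]          # Nginx serving the React build
  backend:
    image: shri32msi/worklog-ai-backend:latest
    ports: ["4000:4000"]
    environment:
      OLLAMA_URL: http://host.docker.internal:11434
      CHROMA_URL: http://chromadb:8000
      DATA_PATH: /app/data
    volumes:
      - ./data:/app/data        # markdown files persist on the host
  chromadb:
    image: chromadb/chroma
    ports: ["8000:8000"]
```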
Data flow:
- User types free text or uploads screenshots
- Backend sends to AI for structured extraction
- Structured data is saved to daily markdown file and indexed in ChromaDB
- Dashboard shows today's entries; search queries ChromaDB for semantic matches
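Step 3 of the flow above, rendering structured data into the daily markdown file, can be sketched as a pure function (the `Extraction` shape is hypothetical, and only a few sections are shown):

```typescript
// Hypothetical shape of the AI's structured extraction result.
interface Extraction {
  tasks: { text: string; done: boolean; time?: string }[];
  decisions: string[];
  notes: string[];
}

// Render an extraction into the daily markdown format shown later in this README.
function renderWorklog(date: string, e: Extraction): string {
  const lines: string[] = [`# Worklog - ${date}`, "", "## Tasks"];
  for (const t of e.tasks) {
    const box = t.done ? "[x]" : "[ ]";
    const time = t.time ? `${t.time} ` : "";
    lines.push(`- ${box} ${time}${t.text}`);
  }
  lines.push("", "## Decisions");
  for (const d of e.decisions) lines.push(`- ${d}`);
  lines.push("", "## Notes");
  for (const n of e.notes) lines.push(`- ${n}`);
  return lines.join("\n");
}
```

The same structured entries are also embedded and indexed into ChromaDB so semantic search and the rendered file stay in sync.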
Use the Settings page in the UI, or create a `config.json` in the project root:
| Setting | Default | Description |
|---|---|---|
| `dataPath` | `./data` | Where markdown files are stored |
| `aiProvider` | `ollama` | AI provider: `ollama`, `openai`, or `claude` |
| `ollamaUrl` | `http://host.docker.internal:11434` | Ollama server URL |
| `openaiApiKey` | (empty) | OpenAI API key (required if using OpenAI) |
| `claudeApiKey` | (empty) | Claude API key (required if using Claude) |
| `chromaUrl` | `http://chromadb:8000` | ChromaDB URL |
Environment variables OLLAMA_URL, CHROMA_URL, and DATA_PATH override the corresponding config values (set automatically in Docker).
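For example, a `config.json` that switches the AI provider to OpenAI while keeping the default data path might look like this (the values are illustrative, and the key is an obvious placeholder):

```json
{
  "dataPath": "./data",
  "aiProvider": "openai",
  "openaiApiKey": "sk-your-key-here",
  "chromaUrl": "http://chromadb:8000"
}
```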
Each day produces a markdown file (`YYYY-MM-DD.md`):

```markdown
# Worklog - 2026-04-18

## Tasks
- [x] 15:00 Reviewed PR #142.
- [ ] Update API documentation. (due: 2026-04-21)

## Decisions
- Migration timeline pushed to Q3

## Meetings
- 10:00 Daily standup.
  - Attendees: Alice, Bob, Charlie
  - Discussed sprint blockers on payment API.
  - Agreed to prioritise hotfix for checkout latency.

## Notes
- Check Grafana dashboard for latency spikes

## Highlights
- Migration timeline pushed to Q3

## Tags
#sprint-12 #migration #api
```

Pre-built images are available on Docker Hub:
| Image | Description |
|---|---|
| `shri32msi/worklog-ai-backend` | Node.js/Express API server |
| `shri32msi/worklog-ai-frontend` | React frontend served via Nginx |
Pull and run directly:
```bash
docker pull shri32msi/worklog-ai-backend:latest
docker pull shri32msi/worklog-ai-frontend:latest
```

Or use `docker compose up --build` to build from source.
- Frontend: React 19, Vite 8, Tailwind CSS 4, TypeScript
- Backend: Node.js 22, Express, TypeScript
- AI: Ollama (llama3.2, llava, nomic-embed-text), OpenAI (gpt-4o), Claude (sonnet)
- Search: ChromaDB with vector embeddings
- Infrastructure: Docker Compose
This project is licensed under the MIT License — see the LICENSE file for details.
Made with Claude by Anthropic.

