# AI-Powered HR Assistant

An intelligent conversational HR assistant built with LLMs, LangChain, and Gradio. This project demonstrates production-ready AI system architecture with tool-augmented LLM reasoning, intent classification, and state management.
## Table of Contents

- Overview
- Screenshots
- Features
- Architecture
- Project Structure
- Installation
- Usage
- System Design
- Technical Highlights
- Example Interactions
- Customization Guide
- Testing
- Future Enhancements
- Contributing
- License
## Overview

The AI-Powered HR Assistant is a conversational AI system designed to handle a range of HR-related queries through natural language interaction. It intelligently routes requests between direct LLM responses and structured tool executions, demonstrating modern AI agent architecture patterns.
- Employee Information Retrieval - Get detailed employee information by name
- Leave Balance Checking - Query remaining leave days for employees
- Interview Question Generation - Generate role-specific interview questions
- General HR Knowledge - Answer policy questions and HR-related queries
## Screenshots

- Clean, professional interface with example queries and a full-screen chat experience
- Multi-employee disambiguation and detailed employee information retrieval
- Instant leave balance lookup and role-specific interview question generation
- Comprehensive HR knowledge base for policy and definition queries
## Features

### Core Capabilities

- ✅ Intent Classification - Automatically detects user intent from natural language
- ✅ Tool-Augmented LLM - Seamlessly integrates LLM reasoning with structured data retrieval
- ✅ Multi-Turn Conversations - Maintains context across conversation turns
- ✅ Ambiguity Resolution - Handles duplicate employee names with clarification flow
- ✅ State Management - Tracks conversation state for follow-up queries
- ✅ Observability - Integrated LangSmith tracing for debugging and monitoring
### User Interface

- 🎨 Modern Gradio Interface - Clean, professional chat interface
- 🚀 Quick Actions - Pre-built example queries for common use cases
- 📱 Responsive Design - Full-screen chat experience
- 🎯 Real-time Responses - Instant feedback on user queries
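The ambiguity-resolution feature listed above can be sketched in a few lines. This is an illustrative stand-in, not the project's actual code: the `EMPLOYEE_NAME_TO_IDS` mapping mirrors the name in `hr_logic.py`, but the records and the `resolve_employee` helper here are hypothetical.

```python
# Sketch of ambiguity resolution: when a name maps to more than one
# employee ID, ask a clarifying question instead of guessing.
# Names and records below are illustrative, not the project's real data.

EMPLOYEE_NAME_TO_IDS = {
    "john smith": ["101", "102"],   # duplicate name -> needs clarification
    "nagham habli": ["123"],
}

def resolve_employee(name):
    ids = EMPLOYEE_NAME_TO_IDS.get(name.lower(), [])
    if len(ids) == 1:
        return {"status": "resolved", "id": ids[0]}
    if len(ids) > 1:
        return {"status": "ambiguous",
                "question": f"I found {len(ids)} employees named {name}. "
                            "Which department are they in?"}
    return {"status": "not_found"}

print(resolve_employee("Nagham Habli"))   # resolved, single match
print(resolve_employee("John Smith"))     # ambiguous, ask a follow-up
```

The "ambiguous" result is what triggers the clarification turn in the chat flow; the follow-up answer is then matched against the candidate IDs.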
## Architecture

### Request Flow

```
┌─────────────┐
│ User Input  │
└──────┬──────┘
       │
       v
┌─────────────────────┐
│ Intent              │
│ Classification      │
│ (LLM Parser)        │
└──────┬──────────────┘
       │
       v
┌─────────────────────┐
│ Tool Required?      │
└──────┬──────────────┘
       │
   ┌───┴───┐
   │       │
   v       v
┌──────┐ ┌──────────────┐
│Direct│ │Tool Execution│
│Reply │ │& Argument    │
│      │ │Extraction    │
└──┬───┘ └──────┬───────┘
   │            │
   └──────┬─────┘
          v
  ┌──────────────┐
  │Final Response│
  └──────────────┘
```
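The flow above can be condensed into a miniature end-to-end sketch. The function names (`parse_user_query`, `handle_intent`, `chat`) mirror this project's modules, but the bodies are illustrative stand-ins with hard-coded logic, not the real implementation:

```python
# Minimal sketch of the request flow: classify intent, then either
# dispatch to a tool handler or fall back to a direct LLM reply.

def parse_user_query(query):
    # Stand-in for the LLM-based classifier in llm_parser.py.
    if "leave" in query.lower():
        return {"intent": "leave_query", "args": {"name": "Nagham Habli"}}
    return {"intent": "general_hr", "args": {}}

def handle_intent(parsed):
    tools = {"leave_query": lambda args: f"{args['name']} has 12 leave days left."}
    handler = tools.get(parsed["intent"])
    if handler:                      # tool required -> execute with extracted args
        return handler(parsed["args"])
    return "Direct LLM reply"        # no tool -> answer from the model alone

def chat(query):
    return handle_intent(parse_user_query(query))

print(chat("How many leave days does Nagham have left?"))
```

In the real system the classifier is an LLM call and the direct-reply branch generates free-form text, but the branching structure is the same.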
### Component Architecture

```
┌───────────────────────────────────────────────────┐
│                chat_interface.py                  │
│                (Gradio UI Layer)                  │
└─────────────────────┬─────────────────────────────┘
                      │
                      v
┌───────────────────────────────────────────────────┐
│                    main.py                        │
│        (Orchestration & State Management)         │
└─────────────────────┬─────────────────────────────┘
                      │
         ┌────────────┼────────────┐
         v            v            v
  ┌─────────────┐ ┌──────────┐ ┌──────────┐
  │llm_parser.py│ │hr_logic.py│ │hr_tools.py│
  │(Intent      │ │(Business │ │(Data      │
  │Detection)   │ │Logic)    │ │Access)    │
  └─────────────┘ └──────────┘ └──────────┘
```
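The "State Management" role of `main.py` can be illustrated with a small sketch of multi-turn handling: if the previous turn ended with a clarification question, the next message is treated as the answer to it. The `pending` field and the replies here are assumptions for illustration, not the real code:

```python
# Sketch of per-conversation state: a pending-disambiguation flag carried
# between turns. Field names and replies are illustrative assumptions.

def chat(message, state):
    if state.get("pending") == "disambiguation":
        # Previous turn asked "which department?" -- consume the answer.
        reply = f"Showing the John Smith in {message}."
        return reply, {}                      # clear the pending state
    if "john smith" in message.lower():
        return ("Two employees are named John Smith. Which department?",
                {"pending": "disambiguation"})
    return "General HR answer.", {}

state = {}
reply, state = chat("Show me John Smith's file", state)
print(reply)                                  # clarification question
reply, state = chat("Marketing", state)
print(reply)                                  # resolved follow-up
```

Keeping the state in a plain dict passed in and out of `chat()` keeps the orchestration layer stateless between requests, which fits Gradio's callback model.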
## Project Structure

```
hr-assistant/
├── chat_interface.py   # Gradio web interface
├── main.py             # Main orchestration logic
├── llm_parser.py       # LLM-based intent classification
├── hr_logic.py         # Business logic handlers
├── hr_tools.py         # Data access layer (tool functions)
├── test_chat.py        # Terminal-based testing interface
├── logic_map.txt       # Detailed system logic documentation
├── .env                # Environment variables
└── README.md           # This file
```
| File | Purpose | Key Functions |
|---|---|---|
| `chat_interface.py` | Gradio UI setup and event handlers | `respond()` |
| `main.py` | Request routing and state management | `chat()` |
| `llm_parser.py` | Intent extraction using LLM | `parse_user_query()` |
| `hr_logic.py` | Intent-specific business logic | `handle_intent()`, `handle_employee_details()`, `handle_leave_query()`, `handle_interview_questions()` |
| `hr_tools.py` | Database/API simulation layer | `get_employee_details()`, `check_leave_balance()`, `generate_interview_questions()` |
| `test_chat.py` | CLI testing interface | `start_terminal_chat()` |
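Since `llm_parser.py` asks the LLM for a structured intent, the orchestration layer has to parse the model's reply defensively. A hedged sketch, assuming the model is prompted to answer with JSON of the form `{"intent": ..., "args": {...}}` (the project's actual output format may differ):

```python
import json

# Defensive parsing of an LLM intent reply. Assumes a JSON contract like
# {"intent": "...", "args": {...}}; falls back to a safe default otherwise.

def parse_llm_reply(raw):
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return {"intent": "general_hr", "args": {}}   # unparseable -> safe fallback
    if "intent" not in parsed:
        return {"intent": "general_hr", "args": {}}   # missing field -> fallback
    parsed.setdefault("args", {})                     # tolerate absent args
    return parsed

print(parse_llm_reply('{"intent": "leave_query", "args": {"name": "Ali"}}'))
print(parse_llm_reply("not json at all"))
```

Falling back to a general-HR intent instead of raising keeps a malformed model reply from crashing the chat loop.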
## Installation

### Prerequisites

- Python 3.8 or higher
- Ollama installed with the LLaMA3 model
- `pip` package manager
### 1. Clone the Repository

```bash
git clone https://github.com/yourusername/hr-assistant.git
cd hr-assistant
```

### 2. Install Dependencies

```bash
pip install -r requirements.txt
```

Required packages:

```
gradio==4.16.0              # Web interface framework
langchain-community==0.0.20 # LangChain ChatOllama integration
langsmith==0.1.0            # Tracing and observability
python-dotenv==1.0.0        # Environment variable management (.env file support)
requests==2.31.0            # HTTP library (imported but currently unused)
```
What each package does:

- `gradio`: Creates the web-based chat interface with minimal code
- `langchain-community`: Provides `ChatOllama` for LLM integration via Ollama
- `langsmith`: Enables tracing and debugging of LLM calls (optional but recommended)
- `python-dotenv`: Loads environment variables from the `.env` file (used in `main.py` to load the LangSmith API key)
- `requests`: HTTP library (imported in `llm_parser.py`; included for potential future API integrations)
### 3. Set Up Ollama

```bash
# Install Ollama (if not already installed)
# Visit: https://ollama.ai/download

# Pull the LLaMA3 model
ollama pull llama3
```

### 4. Configure Environment Variables

Create a `.env` file in the project root:
```
# LangSmith (optional - for observability)
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your_langsmith_api_key
LANGCHAIN_PROJECT=hr-assistant
```

## Usage

Web Interface:

```bash
python chat_interface.py
```

Terminal Interface (for testing):

```bash
python test_chat.py
```

## Customization Guide

### Adding New Employees

Edit `hr_tools.py`:
```python
employees = {
    "123": {"name": "Nagham Habli", "department": "AI development", "role": "Junior AI developer"},
    "999": {"name": "New Employee", "department": "Marketing", "role": "Marketing Manager"}
}
```

Update `hr_logic.py`:
```python
EMPLOYEE_NAME_TO_IDS = {
    "nagham habli": ["123"],
    "new employee": ["999"]
}
```

### Adding New Interview Question Sets

Edit `hr_tools.py`:
```python
questions = {
    "marketing manager": [
        "What is your experience with digital marketing campaigns?",
        "How do you measure marketing ROI?",
        "Describe a successful product launch you've managed."
    ]
}
```

### Adding a New Intent

1. Update the `SYSTEM_PROMPT` in `llm_parser.py`
2. Add a handler function in `hr_logic.py`
3. Create a corresponding tool in `hr_tools.py` (if needed)
4. Update the `handle_intent()` function
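Step 4 is simplest when `handle_intent()` is a dispatch table, so a new intent needs only one new entry plus its handler. A sketch under that assumption (the intent names echo this project's features, but the `onboarding` intent and all handler bodies are hypothetical):

```python
# Sketch of handle_intent() as a dispatch table. Adding an intent means
# writing a handler and registering it -- no branching logic to edit.
# All handlers and the "onboarding" intent are illustrative.

def handle_onboarding(args):
    return f"Onboarding checklist sent to {args.get('name', 'the new hire')}."

INTENT_HANDLERS = {
    "employee_details": lambda args: "employee details...",
    "leave_query": lambda args: "leave balance...",
    "interview_questions": lambda args: "interview questions...",
    "onboarding": handle_onboarding,          # the newly added intent
}

def handle_intent(intent, args):
    handler = INTENT_HANDLERS.get(intent)
    if handler is None:
        return "Sorry, I can't help with that yet."
    return handler(args)

print(handle_intent("onboarding", {"name": "Dana"}))
```

The unknown-intent fallback also gives the classifier a safe target when the LLM invents an intent name outside the prompt's list.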
## Testing

Use the terminal interface for quick testing:

```bash
python test_chat.py
```
Test various scenarios:
- Employee queries with unique names
- Employee queries with duplicate names
- Leave balance checks
- Interview question generation
- General HR knowledge queries
- Edge cases and error handling
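The scenarios above can also be captured as a small assertion script. The `chat()` stub below is a stand-in for `main.chat()` (the real function calls the LLM, so its replies would be checked with looser substring assertions like these):

```python
# Scenario-style checks, sketched against a deterministic chat() stub so
# they run offline. The stub's replies are illustrative, not real output.

def chat(query):
    q = query.lower()
    if "leave" in q:
        return "Nagham Habli has 12 leave days remaining."
    if "john smith" in q:
        return "Multiple employees named John Smith found. Which department?"
    return "General HR answer."

# Duplicate-name query should trigger a clarification, not a guess.
assert "Which department" in chat("Show me John Smith's details")
# Leave queries should return a concrete number.
assert "12 leave days" in chat("How many leave days does Nagham have?")
print("All scenario checks passed.")
```

Substring assertions are a pragmatic fit for LLM-backed systems, where exact output matching is too brittle.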
Last Updated: February 2026