A full-stack AI chatbot system that supports both:
- Normal Chatbot (LLM-based)
- RAG Chatbot (Website / PDF / Text-based Q&A)
Built using FastAPI + Next.js + FAISS + Sentence Transformers + Groq LLM, and deployed on Hugging Face Spaces (Backend) and Vercel (Frontend).
Frontend (Vercel):
https://chatbot-frontend-bice-nu.vercel.app/rag-chatbot
Backend API (Hugging Face):
https://kirankumar29-chatbot.hf.space
Features:
- Conversational AI (Groq LLM)
- Session-based chat
- ChatGPT-like UI
- Website-based Q&A
- PDF-based Q&A
- Text-based Q&A
- Context-aware answers (grounded in retrieved content to limit hallucination)
- FAISS vector search
- Modern ChatGPT-style dark theme
- Sidebar navigation
- Separate Normal & RAG chatbot pages
- Upload + Chat workflow
- Real-time responses
RAG Workflow:
- User uploads data (Website / PDF / Text)
- Content is split into chunks
- Embeddings are generated
- Stored in FAISS vector database
- User asks a question
- Relevant chunks are retrieved
- Groq LLM generates answer using context
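The retrieval steps above can be sketched in miniature. This toy version uses a fixed-size character splitter, a bag-of-words embedding, and brute-force cosine search in place of Sentence Transformers and FAISS; all function names here are illustrative, not the repo's actual API.

```python
import math

def chunk_text(text: str, size: int = 80) -> list[str]:
    """Split uploaded content into fixed-size character chunks (toy splitter)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str, vocab: list[str]) -> list[float]:
    """Toy bag-of-words embedding; the real backend uses Sentence Transformers."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return [float(words.count(w)) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

# 1) Upload + chunk
docs = chunk_text(
    "FAISS is a vector search library. Groq serves fast LLM inference.", size=45
)
# 2) Embed each chunk and "index" it (stand-in for the FAISS store)
vocab = sorted({w.strip(".,!?").lower() for d in docs for w in d.split()})
index = [(embed(d, vocab), d) for d in docs]
# 3) Embed the question and retrieve the most similar chunk
question = "what is FAISS?"
q_vec = embed(question, vocab)
best = max(index, key=lambda pair: cosine(pair[0], q_vec))[1]
print(best)  # the chunk mentioning FAISS
```

In the real pipeline the retrieved chunks are then passed to the Groq LLM as context for answer generation.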
Tech Stack (Backend):
- FastAPI
- LangChain
- FAISS
- Sentence Transformers
- Groq API
- Qdrant
Tech Stack (Frontend):
- Next.js (React)
- Tailwind CSS
- shadcn/ui
- Axios
Deployment:
- Hugging Face Spaces (Backend)
- Vercel (Frontend)
.
├── backend/
│   ├── app.py
│   ├── main.py
│   ├── requirements.txt
│   ├── uploads/
│   ├── embeddings/
│   └── src/
│       ├── components/
│       │   ├── chatbot.py
│       │   └── ragchatbot.py
│       ├── datatransformer/
│       └── utils/
├── Dockerfile
├── .env
└── README.md

🔹 Clone Repository
git clone https://github.com/kumar-kiran-24/chatbot
cd chatbot/backend
pip install -r requirements.txt
uvicorn app:app --reload
Create a .env file:
QDRANT_URL="your qdrant url"
QDRANT_API_KEY="your qdrant api key"
HF_TOKEN="your hugging face token"
GROQ_API_KEY="your groq api key"
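The backend can read these keys at startup with `os.getenv`; the sketch below is generic (the repo may use python-dotenv or another loader, and `load_settings` is a hypothetical name):

```python
import os

# The keys the backend expects; values come from .env or deployment secrets.
REQUIRED = ["QDRANT_URL", "QDRANT_API_KEY", "HF_TOKEN", "GROQ_API_KEY"]

def load_settings() -> dict:
    """Collect required settings and fail fast if any are missing."""
    settings = {name: os.getenv(name) for name in REQUIRED}
    missing = [name for name, value in settings.items() if not value]
    if missing:
        raise RuntimeError(f"Missing environment variables: {missing}")
    return settings
```

On Hugging Face Spaces these values are typically set as repository secrets rather than a committed .env file.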
| Endpoint | Description |
|---|---|
| /chatbot | Normal chatbot |
| /ragchatbot_url | RAG chatbot |
| /upload-pdf | Upload PDF |
| /web | Load website |
| /text | Add raw text |
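A call to the normal chatbot endpoint might look like the sketch below. The JSON field names (`session_id`, `message`) are hypothetical, since the actual request schema is defined in the backend code; the sketch only builds the request without sending it.

```python
import json
from urllib import request

BASE_URL = "https://kirankumar29-chatbot.hf.space"

def build_chat_request(session_id: str, message: str) -> request.Request:
    """Build (but do not send) a POST to the /chatbot endpoint."""
    # Hypothetical payload shape; check the FastAPI route for the real schema.
    payload = json.dumps({"session_id": session_id, "message": message}).encode()
    return request.Request(
        f"{BASE_URL}/chatbot",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("abc123", "Hello!")
print(req.full_url)  # https://kirankumar29-chatbot.hf.space/chatbot
# To actually send it: urllib.request.urlopen(req) — requires network access.
```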