This project is a Python-based implementation of a Language Model chatbot using OpenAI's GPT-3.5, designed to be used by a Telegram bot. The project consists of four main Python files and an environment variable configuration file (`.env`).
The project was developed for HealthServe as part of the AI in Humanity module by Group 6, to assist in the onboarding process and enhance employees' quality of service when serving customers.
Use Cases:
- ☑️ Chat with your documents using OpenAI's LLM Chatbot
- ☑️ Continuous deployment (CD) to AWS Elastic Beanstalk
- ☑️ Telegram Bot integration
- ☑️ WhatsApp integration using Twilio
- ☐ (Future implementation) Talk via voice notes to your documents using OpenAI's Voice to Text AI
- ☐ (Future implementation) Carbon management of cloud usage
- `.env.template`
  - Create a file named `.env` by copying this template and filling in the necessary values for the environment variables.
  - `OPENAI_API_KEY`: Your OpenAI API key for accessing the language model.
  - `TELEGRAM_BOT_TOKEN`: Your Telegram bot's API token.
  - `LANGCHAIN_API_KEY`: Your LangChain API key, if applicable.
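As an illustration, these variables can be loaded and validated once at startup so a missing key fails fast instead of surfacing mid-conversation. This is a minimal, standard-library-only sketch; the variable names match the template, but the helper `load_config` is hypothetical and not taken from the repository:

```python
import os

# Environment variables expected by the bots (names from .env.template).
REQUIRED_VARS = ("OPENAI_API_KEY", "TELEGRAM_BOT_TOKEN")
OPTIONAL_VARS = ("LANGCHAIN_API_KEY",)

def load_config():
    """Read the expected variables from os.environ, failing fast when a
    required key is missing. (Hypothetical helper, not from the repo.)"""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(
            "Missing required environment variables: " + ", ".join(missing)
        )
    config = {name: os.environ[name] for name in REQUIRED_VARS}
    for name in OPTIONAL_VARS:
        config[name] = os.environ.get(name)  # None when unset
    return config
```

Validating up front keeps the error close to the misconfigured `.env` file rather than deep inside an API call.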
- `model.py`
  - Main file defining the Language Model using OpenAI.
  - Used to interact with the trained model.
  - To use this file, make sure to configure your environment variables as specified in the `.env` file.
- `train.py`
  - Training script that can be executed using the command `python train.py`.
  - Used for training your specific Language Model with OpenAI's GPT-3.5.
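The contents of `train.py` are not reproduced in this README; as a hedged illustration, a training script for a document chatbot typically begins by splitting the files in `./docs` into overlapping chunks small enough to embed. A standard-library-only sketch of that chunking step (the function name `chunk_text` and the sizes are illustrative, not taken from the repository):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so each piece fits within the
    embedding/context limits (sizes here are illustrative defaults)."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Step forward by less than a full chunk so neighbours overlap,
        # preserving context that would otherwise be cut at a boundary.
        start += chunk_size - overlap
    return chunks
```

In practice, frameworks such as LangChain provide equivalent text splitters; the sketch only shows the underlying idea.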
- `telegram_bot.py` (Telegram Bot)
  - Main file to activate the Telegram bot connected to the trained Language Model.
  - Run the Telegram bot using the command `python telegram_bot.py`.
- `wa_bot.py` (WhatsApp Bot)
  - Main file to activate the WhatsApp bot connected to the trained Language Model.
  - Run the WhatsApp bot using the command `python wa_bot.py`.
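For background on the WhatsApp side: Twilio's WhatsApp integration works by POSTing each incoming message to a webhook, which replies with a TwiML XML document describing the outbound message. The official `twilio` package's `MessagingResponse` builds this XML; the standard-library-only helper below is a hypothetical sketch of the same reply format, not code from `wa_bot.py`:

```python
from xml.sax.saxutils import escape

def twiml_reply(text: str) -> str:
    """Build a minimal TwiML response carrying one outbound message.
    (Hypothetical helper; the twilio package's MessagingResponse
    produces equivalent XML.)"""
    # escape() protects against '&', '<', and '>' breaking the XML.
    return f"<Response><Message>{escape(text)}</Message></Response>"
```

The webhook handler would return this string with a `text/xml` content type.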
- Clone this repository to your local machine:
  `git clone https://github.com/absolutelynoot/ready-serve-chatbot.git`
- Create a `.env` file based on the provided `.env.template` and populate it with the necessary API keys and tokens.
- Install the required Python dependencies:
  `pip install -r requirements.txt`
- Train the Language Model using `train.py`.
- Run the Telegram bot using `telegram_bot.py` to interact with the trained Language Model via Telegram.
Note: The system will run on your local machine (localhost).
Run the following command:
`docker compose -p ready-serve-chatbot up`
This project has an automated CD pipeline that deploys your LLM chatbot to AWS Elastic Beanstalk using GitHub Actions and GitHub Secrets.
To set up CD to AWS Elastic Beanstalk, please follow the steps below:
- Create AWS Infrastructure
  - Register for an AWS account and add the GitHub secrets `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
  - Create an AWS Elastic Beanstalk application and environment, then create the GitHub secrets `AWS_EBS_APPLICATION_NAME` and `AWS_EBS_ENVIRONMENT_NAME` with the newly created application and environment names.
- Register for a Docker Hub account and enter your `DOCKERHUB_USERNAME` and `DOCKERHUB_TOKEN` into GitHub secrets.
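The repository's actual workflow file is not reproduced in this README. As a hypothetical outline only, a pipeline wired to the secrets above could look roughly like the following, assuming the commonly used `docker/login-action` and `einaregilsson/beanstalk-deploy` actions (the action versions, region, and step details are assumptions, not taken from the repository):

```yaml
# Hypothetical sketch of .github/workflows/deploy.yml -- not the repo's actual file.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # ... build and push the Docker image, package the deployment bundle ...
      - uses: einaregilsson/beanstalk-deploy@v21
        with:
          aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          application_name: ${{ secrets.AWS_EBS_APPLICATION_NAME }}
          environment_name: ${{ secrets.AWS_EBS_ENVIRONMENT_NAME }}
          region: us-east-1   # assumed region, adjust to your environment
          version_label: ${{ github.sha }}
```

The secret names match those listed in the steps above; everything else in the sketch should be checked against the workflow actually committed to the repository.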
- Ensure that the environment variables are correctly configured in the `.env` file.
- Train the custom Language Model using `train.py` if needed, after storing your documents in the `./docs` folder.
- Run the Telegram bot using `telegram_bot.py` to start interacting with the Language Model through Telegram.
Developed by Faisal Ichsan Samudra as part of the SMU AI in Humanity module taught by Prof Andrew Koh.