MindEase is a mental health assistant designed to provide emotional support and advice to users. It leverages an ESP32-based hardware setup for audio input/output and a fine-tuned AI model for natural language understanding and response generation. The project integrates hardware, AI, and backend services to create a seamless user experience.
MindEase can be used to:
- Provide mental health support and advice.
- Act as a conversational assistant for emotional well-being.
- Demonstrate the integration of IoT devices with AI models.
To build the hardware setup, you will need:
- ESP32 Development Board (e.g., ESP32-DevKitC).
- INMP441 Microphone for audio input.
- MAX98357A Amplifier for audio output.
- LEDs for status indication:
- WiFi connection status.
- Audio recording status.
- Push Button for triggering the assistant.
- Power Supply (e.g., USB or battery).
Wire the components to the ESP32 as follows:
- Microphone (INMP441):
  - LRC → GPIO 5
  - DOUT → GPIO 19
  - BCLK → GPIO 18
- Amplifier (MAX98357A):
  - DIN → GPIO 22
  - BCLK → GPIO 15
  - LRC → GPIO 21
- LEDs:
  - WiFi Status LED → GPIO 25
  - Audio Recording LED → GPIO 32
  - Built-in LED → GPIO 2 (optional)
- Push Button:
  - Connect to GPIO 4 with a pull-up resistor.
- Power Supply:
  - Connect the ESP32 to a 5V power source or use a power bank.
- Install Node.js and Python.
- Clone this repository: `git clone https://github.com/your-repo/MindEase.git`, then `cd MindEase`.
- Navigate to the `Backend` folder: `cd Backend`.
- Install dependencies: `npm install`.
- Configure environment variables:
  - Create a `.env` file in the `Backend` folder.
  - Add the following variables:
    - `PORT=3000`
    - `GOOGLE_APPLICATION_CREDENTIALS=path/to/google_creds.json`
    - `GROQ_API_KEY=your_groq_api_key`
- Start the backend server: `npm start`.
- Install Python dependencies: `pip install fastapi uvicorn pyngrok transformers torch accelerate`.
- Start the FastAPI backend: `python app.py` (a minimal sketch of `app.py` follows this list).
- Use ngrok to expose the local server: `ngrok http 8000`.
- Copy the public URL provided by ngrok and use it to access the backend API.
- The repository is configured with my own backend address; replace it with your own.
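For reference, here is a minimal sketch of what `app.py` could look like. It assumes the fine-tuned model in `fine_tuned_model/` is a causal language model served with FastAPI and optionally tunneled with pyngrok; the `/chat` endpoint name, request shape, and generation settings are illustrative, not the project's exact implementation.

```python
# app.py -- minimal sketch of the model-serving backend (illustrative, not the exact project code).
from fastapi import FastAPI
from pydantic import BaseModel
from pyngrok import ngrok
from transformers import AutoModelForCausalLM, AutoTokenizer
import uvicorn

MODEL_DIR = "fine_tuned_model"  # adjust if your model lives elsewhere

# Load the fine-tuned model once at startup.
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, device_map="auto")

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")  # endpoint name is an assumption
def chat(req: ChatRequest):
    inputs = tokenizer(req.message, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
    # Decode only the newly generated tokens, not the prompt.
    reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    return {"reply": reply}

if __name__ == "__main__":
    # Optionally open an ngrok tunnel here instead of running `ngrok http 8000` separately.
    tunnel = ngrok.connect(8000)
    print("Public URL:", tunnel.public_url)
    uvicorn.run(app, host="0.0.0.0", port=8000)
```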
To use your own fine-tuned model:
- Fine-tune the model using the provided notebooks (`Model/Model_train_colab.ipynb` or `Model/Model_train_local.ipynb`).
- Save the fine-tuned model in the `fine_tuned_model` folder, or download the model from my Drive link: "Drive Link" (a quick loading check is sketched below).
- Update the backend and ESP32 code to point to your model.
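Before wiring the model into the backend, you can sanity-check that it loads and generates. This sketch assumes the model was saved with `save_pretrained()` as a causal language model; the prompt is just an example.

```python
# Quick check that the fine-tuned model loads and generates a reply (illustrative prompt).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("fine_tuned_model")
model = AutoModelForCausalLM.from_pretrained("fine_tuned_model")

prompt = "I have been feeling anxious lately."
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```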
You can use your own API to serve the model locally, or use the provided `MindEase_AI_backend.ipynb` notebook to expose the model as an API. This API can then be accessed from platforms like Google Colab or other applications, for example as in the snippet below.
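As an example of calling the exposed API from another application, the snippet below posts a message to the ngrok URL. The `/chat` path and JSON shape follow the `app.py` sketch above and are assumptions, so adapt them to your actual endpoint.

```python
# Call the exposed backend from any client (URL, path, and payload are placeholders).
import requests

NGROK_URL = "https://your-ngrok-subdomain.ngrok-free.app"  # replace with the URL ngrok prints

response = requests.post(f"{NGROK_URL}/chat", json={"message": "I had a stressful day."}, timeout=60)
response.raise_for_status()
print(response.json()["reply"])
```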
This project is licensed under the MIT License. You are free to use, modify, and distribute this project, provided proper attribution is given.
- Hugging Face for providing pre-trained models.
- Google Cloud for Speech-to-Text and Text-to-Speech APIs.
- ngrok for exposing local servers to the internet.
- PlatformIO for ESP32 development.
Feel free to contribute to this project by submitting issues or pull requests!