Driver Safety AI

An advanced AI-powered driver monitoring system leveraging facial recognition and pose estimation to detect driver fatigue, distraction, and anomalies in real-time. Designed to enhance road safety through proactive alerts.

🌐 Live Demo: drivesafe.manikantadarapureddy.in

🚀 Features

  • Eye Tracking & Blink Analysis: Monitors Eye Aspect Ratio (EAR) and blink patterns to detect prolonged eye closure, effectively identifying drowsiness.
  • Yawn Detection: Analyzes Mouth Aspect Ratio (MAR) and yawn duration to recognize fatigue-related yawning.
  • Head Pose Estimation: Tracks head movements and orientation (yaw, pitch, roll) to detect loss of focus and physical distraction.
  • Object Detection: Identifies the presence of mobile phones or other objects that may cause driver distraction.
  • Dynamic Risk Scoring: Provides a real-time risk assessment categorized by safety levels (Safe, Warning, Critical).
  • Multi-Modal Alert System: Triggers visual, audio, and voice notifications upon detecting critical behavioral patterns.
  • Real-Time Edge Processing: Runs client-side machine learning at up to 60 FPS, ensuring completely private, secure, and fast inference without server data transmission.
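The eye-tracking feature above is built on the Eye Aspect Ratio. As a minimal sketch (the landmark ordering and the `eyeAspectRatio` helper here are illustrative assumptions, not this project's actual code), EAR is the ratio of the eye's vertical openings to its horizontal width, so it drops toward zero as the eye closes:

```typescript
// Illustrative sketch of an Eye Aspect Ratio (EAR) computation.
// Landmark order is an assumption: [outer corner, top-left, top-right,
// inner corner, bottom-right, bottom-left].
interface Point { x: number; y: number }

const dist = (a: Point, b: Point): number =>
  Math.hypot(a.x - b.x, a.y - b.y);

function eyeAspectRatio([p1, p2, p3, p4, p5, p6]: Point[]): number {
  // Two vertical distances averaged over the horizontal distance;
  // a low EAR indicates a closed or closing eye.
  return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4));
}

// An open eye yields a noticeably higher EAR than a nearly closed one:
const openEye: Point[] = [
  { x: 0, y: 0 }, { x: 1, y: 1 }, { x: 2, y: 1 },
  { x: 3, y: 0 }, { x: 2, y: -1 }, { x: 1, y: -1 },
];
const closedEye: Point[] = [
  { x: 0, y: 0 }, { x: 1, y: 0.1 }, { x: 2, y: 0.1 },
  { x: 3, y: 0 }, { x: 2, y: -0.1 }, { x: 1, y: -0.1 },
];
```

MAR-based yawn detection follows the same pattern with mouth landmarks instead of eye landmarks.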

🛠️ Tech Stack

  • Next.js with React and TypeScript
  • MediaPipe for client-side facial landmark and pose models
  • Radix-based UI primitives

⚙️ Prerequisites

To run this project locally, ensure you have the following installed:

  • Node.js (v18 or newer)
  • npm, yarn, or pnpm
  • A modern web browser with webcam access (Google Chrome or Microsoft Edge recommended for optimal MediaPipe performance)

💻 Getting Started

1. Clone the repository

git clone https://github.com/chinni-d/driver_safety_ai.git
cd driver_safety_ai

2. Install dependencies

npm install

3. Start the development server

npm run dev

Navigate to http://localhost:3000 in your browser to view the application.

4. Build for production

npm run build
npm run start

📁 Project Structure

driver_safety_ai/
├── app/                  # Application routes and layouts
├── components/           # Reusable React components
│   ├── ui/               # Radix-based UI primitives
│   ├── detection-dashboard.tsx # Core real-time detection UI
│   └── analytics-dashboard.tsx # Analytics visualization UI
├── hooks/                # Custom React hooks for state and ML logic
├── lib/                  # Utility functions and constants
└── public/               # Static assets

🚦 Usage Guide

  1. Navigate to the Detect page (/detect).
  2. Grant camera permissions when prompted by the browser.
  3. The system will start analyzing your facial landmarks to track:
    • Eye closure: EAR < 0.25 sustained for over 2 seconds raises a drowsiness alert.
    • Yawning: MAR > 0.6 triggers fatigue counters.
    • Distraction: Head deviation angle > 25° triggers distraction warnings.
  4. Observe the dynamic risk level categorizing your focus state as Safe (0–39%), Warning (40–69%), or Critical (70–100%).
  5. Switch to the Analytics page (/analytics) to review session history and safety trends.
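The Safe/Warning/Critical bands from step 4 can be sketched as a simple threshold function (the `riskLevel` name and signature are assumptions for illustration; the project's actual scoring logic lives in its hooks and lib code):

```typescript
// Illustrative sketch of the risk banding described above:
// Safe (0–39%), Warning (40–69%), Critical (70–100%).
type RiskLevel = "Safe" | "Warning" | "Critical";

function riskLevel(score: number): RiskLevel {
  if (score >= 70) return "Critical";
  if (score >= 40) return "Warning";
  return "Safe";
}
```

In the real system the score itself would be driven by the EAR, MAR, and head-pose signals described in the usage guide.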

📄 License

This project is licensed under the MIT License.
