The Driver Monitoring System (DMS) is a cutting-edge safety feature that can help reduce accidents caused by distracted or fatigued drivers. This project showcases a DMS built with a combination of machine learning (ML) and computer vision (CV) techniques: video processing is handled with OpenCV, facial landmark detection uses a pre-trained deep learning model, and PyTorch is available for further analysis.
Please ensure you have the following libraries installed:
torch
opencv-python
dlib
numpy
streamlit
You can install these dependencies using a requirements.txt file:
torch==1.10.0
opencv-python==4.5.3.56
dlib==19.22.1
numpy==1.21.2
streamlit==0.88.0
Install the dependencies with the following command:
pip install -r requirements.txt
Here’s the directory structure for the project:
DriverMonitoringSystem/
│
├── dms.py
├── streamlit_app.py
├── requirements.txt
├── shape_predictor_68_face_landmarks.dat
└── examples/
├── driver1.mp4
└── driver2.mp4
We’ll use the pre-trained shape_predictor_68_face_landmarks.dat model from dlib for facial landmark detection. You can download it from dlib’s model zoo.
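If you prefer the command line, the compressed model can be fetched directly from dlib.net and decompressed in place (the URL below reflects dlib’s usual file hosting and may change over time):

```shell
# Download the compressed 68-point landmark model (~95 MB once decompressed)
curl -L -O http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
# bunzip2 removes the .bz2 archive and leaves the .dat file behind
bunzip2 shape_predictor_68_face_landmarks.dat.bz2
```

Place the resulting shape_predictor_68_face_landmarks.dat file in the project root, as shown in the directory structure above.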
Create a script dms.py to handle the core logic for detecting and analysing driver behaviour:
import cv2
import dlib
import numpy as np

# Load the pre-trained dlib model for facial landmark detection
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

def detect_facial_landmarks(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    for face in faces:
        landmarks = predictor(gray, face)
        landmarks_points = []
        for n in range(0, 68):
            x = landmarks.part(n).x
            y = landmarks.part(n).y
            landmarks_points.append((x, y))
        # Return the landmarks of the first detected face
        return landmarks_points
    return None

def draw_landmarks(image, landmarks):
    for point in landmarks:
        cv2.circle(image, point, 2, (0, 255, 0), -1)

def process_video(video_path):
    cap = cv2.VideoCapture(video_path)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        landmarks = detect_facial_landmarks(frame)
        if landmarks:
            draw_landmarks(frame, landmarks)
        cv2.imshow('Driver Monitoring', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    import sys
    if len(sys.argv) < 2:
        print("Usage: python dms.py <video_path>")
    else:
        video_path = sys.argv[1]
        process_video(video_path)
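The landmark points returned by detect_facial_landmarks can already support a simple drowsiness check. The sketch below computes the eye aspect ratio (EAR) over the six landmarks of each eye; the 0.25 threshold and the helper names are illustrative assumptions, not part of the project code above:

```python
import numpy as np

# Indices of the left and right eye landmarks in dlib's 68-point model
LEFT_EYE = list(range(36, 42))
RIGHT_EYE = list(range(42, 48))

def eye_aspect_ratio(eye_points):
    """Compute the eye aspect ratio for six (x, y) eye landmarks.

    EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|)
    """
    p = np.asarray(eye_points, dtype=float)
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / (2.0 * horizontal)

def is_drowsy(landmarks_points, threshold=0.25):
    """Flag possible drowsiness when the mean EAR falls below a threshold."""
    left = [landmarks_points[i] for i in LEFT_EYE]
    right = [landmarks_points[i] for i in RIGHT_EYE]
    ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
    return ear < threshold
```

The EAR drops sharply when the eyelids close, so in practice you would flag drowsiness only after the ratio stays below the threshold for several consecutive frames, which avoids false alarms from normal blinking.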
Create a new script streamlit_app.py for the Streamlit app:
import streamlit as st
import cv2
import numpy as np
import tempfile
from dms import detect_facial_landmarks, draw_landmarks

st.title("Driver Monitoring System")

uploaded_file = st.file_uploader("Choose a video file...", type=["mp4", "avi", "mov"])

if uploaded_file is not None:
    # Write the upload to a temporary file so OpenCV can open it by path
    tfile = tempfile.NamedTemporaryFile(delete=False)
    tfile.write(uploaded_file.read())
    video_path = tfile.name

    cap = cv2.VideoCapture(video_path)
    stframe = st.empty()

    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        landmarks = detect_facial_landmarks(frame)
        if landmarks:
            draw_landmarks(frame, landmarks)
        stframe.image(frame, channels="BGR")
    cap.release()
To run the Streamlit app, use the following command:
streamlit run streamlit_app.py
Upload a video file to start the analysis. The app will display the video and highlight the detected facial landmarks in real time.
In this project, we built a basic Driver Monitoring System using OpenCV, dlib, and Streamlit. The system can identify facial features and overlay them on video clips. To make it even more useful, more sophisticated machine learning models and techniques could be applied to determine whether a driver is distracted or fatigued. The integration of computer vision and machine learning has opened many new avenues for improving driving safety and preventing accidents.