Summary: | This work explores the capability of the Vision Transformer (ViT) architecture in addressing the prevalent issue of road accidents attributed to drowsy driving. The ViT model is built on a pre-trained ViT_B_16 model initialized with IMAGENET1K_V1 weights and fine-tuned on our own driver behavior dataset. The dataset undergoes a thorough preprocessing pipeline, including face extraction, normalization, and data augmentation, resulting in 33,034 training images. Focusing on detecting normal, yawning, and nodding behaviors, the system achieves remarkable accuracy: 98.07% in training and 93% in testing. The ViT's implementation is demonstrated through webcam-based inference, with the model deployed on a Raspberry Pi 4 and evaluated by measuring the FPS of video inference on real-time input, where it achieves an unfavorable 0.59 fps; on a higher-performance system, the model reaches up to 21 fps. Overall, the project contributes to advancing driver monitoring systems, investigates the ViT model's potential for real-time applications, and highlights the challenges of deploying ViT in real-world applications given its computational demands on low-resource embedded systems. © 2024 IEEE.
|