1. Introduction
The rapid development and widespread application of automobiles have greatly facilitated people’s transportation and travel. According to data from the National Bureau of Statistics of China, by the end of 2022, the number of vehicles in China had reached 319 million [1]. However, with the rapid increase in the number of automobiles, traffic accidents have become increasingly prominent, with nearly 1.35 million people globally dying or becoming disabled due to traffic accidents each year [2,3,4]. It is estimated that, by 2030, road traffic injuries will become the seventh leading cause of death worldwide [5]. Therefore, the safety of automobile driving should be given full attention.
To reduce the probability of automobile accidents, improving vehicle driving safety has become a common goal in both the automotive industry and academia. Vehicle driving safety is easily influenced by multiple factors, among which, the driver’s condition is an important factor [6,7,8]. The driver’s condition is affected by various aspects, including the driver’s physiology, psychology, and emotions [9,10]. Fatigue and distraction are the two most significant adverse manifestations of the driver’s condition [11]. Studies have shown that the presence of fatigue and distraction in drivers is one of the main causes of traffic fatalities, accounting for about 36% [12].
With the rapid development of technologies such as machine vision [13], deep learning [14], and the analysis and detection of human physiological electrical signals [15], using various intelligent sensors to detect drivers’ fatigue and distraction states has become a current research hotspot [16]. Machine vision [13] emulates the human visual system to recognize, track, and classify objects, serving as a vital perceptual tool in driving the development of automotive intelligence. Deep learning [14], a subset of machine learning composed of multiple neural network layers, utilizes large datasets to autonomously learn features and patterns, providing a robust framework for understanding and interpreting drivers’ behaviors and states. Physiological electrical signals, important manifestations of the body’s internal electrical activity, primarily include brain waves, heart signals, muscle activity, and eye movements, which can indirectly reflect a person’s fatigue or distraction state [15]. By integrating intelligent sensors (such as visual and physiological signal sensors) with deep learning and related technologies, intelligent detection methods have taken shape.
Over the past five years, the confluence of cognitive science, vehicular technology, and artificial intelligence has catalyzed unprecedented advancements in the comprehension and mitigation of driver fatigue and distraction. Despite these remarkable technological strides, traffic incidents stemming from impaired driving states continue to pose a significant public safety challenge, highlighting the critical need for more sophisticated detection methods. This comprehensive review undertakes a meticulous exploration of the cutting-edge intelligent detection techniques developed in the past five years, categorizing and conducting a comparative analysis based on signal modalities and types of feature extraction. This endeavor aims to bridge the prevailing gaps in both academic discourse and practical implementations within this domain.
2. The Impact of Fatigue and Distraction on Driving Behavior
The phenomenon of driver fatigue and distraction has become one of the most concerning issues in the field of traffic safety [17,18]. Fatigue driving usually stems from long hours of driving or lack of sleep, leading to physical and mental exhaustion and problems such as inattention, increased reaction time, judgment errors, and confusion [19,20]. On the other hand, distracted driving occurs during the driving process when the driver’s attention is diverted from the task of driving due to external distractions or internal thoughts. Activities such as using a mobile phone, conversing with passengers, or adjusting the stereo can diminish a driver’s alertness to the traffic environment and reduce their ability to respond to sudden situations, thereby increasing the risk of traffic accidents [21,22]. Figure 1 illustrates the main causes and behavior patterns leading to driver fatigue and distraction.
Regarding the effects of fatigued driving and distracted driving on driver behavior, scholars have mainly investigated these through two approaches: accident database analysis and experimental research. In terms of accident database analysis, Bioulac et al. [23] analyzed driving data from 70,000 drivers, which included a significant amount of accident data. They discovered that the collision risk under fatigued driving is twice that under normal driving conditions. However, the limitation of this study is its failure to explore in depth the specific impact of different levels of fatigue on collision risk. Furthermore, Sheila et al. [24] revealed through statistical analysis of 685 traffic accidents that the probability of collisions significantly increases for novice drivers when they are distracted, particularly while using mobile phones. This study highlighted the differential impact of distraction types on accident risk, but required broader data to validate the generality of its conclusions.
In experimental research, the convenience and safety of simulators have led to a broad focus on studying the impacts of fatigue and distraction on driving. Öztürk et al. [25] used the n-back test task on a simulator platform to examine the effects of varying cognitive loads on driving behavior and detection response tasks. Bassani et al. [26] conducted experiments based on a simulator to compare the differences in speed control and lateral vehicle control capabilities between distracted and non-distracted drivers. However, it has been noted that the actual impact of driver fatigue on reaction capabilities in real driving situations is more significant than what is represented in simulator data [27]. Additionally, the reactions of drivers to visual distractions and interface interactions in real road environments show significant differences from those in simulated environments [28]. Thus, impact assessment, quantitative research, and the design of safety warning and response plans should not rely solely on simulator data; real-vehicle experiments should also be conducted. Hu et al. [29] assessed the fatigue level of drivers using the Karolinska Sleepiness Scale (KSS) before real vehicle testing. Their analysis found that reaction time is positively correlated with fatigue level, highlighting the actual impact of fatigued driving on safety. However, this study relied on subjective assessments, which may have introduced bias. Ma et al. [30] analyzed real vehicle driving test data to study the impact of distracted driving on driving capabilities, finding that driver distraction reduces the driver’s vehicle control capabilities and that this impact varies with the type of distraction, yet they did not fully explore the quantitative assessment of distraction level.
In summary, fatigue and distraction phenomena severely affect driving behavior, increasing the risk of traffic accidents. Current research mainly explores these impacts through accident database analysis and experimental studies. However, existing research falls short of deeply investigating the specific effects of different fatigue levels and distraction types on collision risk and driving capabilities. Particularly, the actual impacts of fatigue and distraction on driving behavior in real driving situations have not been fully assessed, and methods based on subjective assessment might introduce bias. Therefore, researching real-time, high-accuracy, and robust online detection methods for driver fatigue and distraction states is of significant importance. Such research could provide objective and accurate assessments of driver states and support the design of effective traffic safety warning and response schemes.
3. Intelligent Detection Methods for Driver Fatigue and Distraction
With the rapid development of deep learning technologies and the enhancement of computing hardware power, breakthroughs have been achieved in intelligent detection techniques for driver fatigue and distraction. Based on the types of features extracted, these technologies can be divided into intelligent detection methods based on facial features, head posture, behavioral actions, physiological signals, and vehicle data, as shown in Figure 2. Among these, the first three methods require the use of cameras, infrared cameras, and other visual sensors to collect drivers’ image information for further recognition of their states [31], which can be collectively referred to as image-information-based intelligent detection methods. Furthermore, some researchers have conducted studies on multimodal fusion algorithms that integrate image information, physiological signals, and vehicle data, thereby improving the detection’s accuracy and robustness.
3.1. Intelligent Detection Methods Based on Driver’s Facial Features
Intelligent detection methods based on drivers’ facial features primarily collect facial information through cameras installed inside vehicles. After image processing, machine learning or deep learning methods are used to identify the facial region and extract feature points, followed by model training. This process enables the detection of driver fatigue or distraction states. These methods are widely recognized and applied due to their real-time performance, low cost, and non-invasive nature.
The basic workflow of intelligent detection methods based on facial features is as follows, as illustrated in Figure 3. First, the Eye Aspect Ratio (EAR) and the Mouth Aspect Ratio (MAR) are calculated. Then, based on the EAR value, eye fatigue assessment indicators are computed, such as the percentage of eyelid closure over the pupil over time (PERCLOS) [32], blink frequency, and the duration of continuous eye closure. Based on the MAR value, yawning can be determined by calculating the yawn frequency and duration. Finally, by integrating these assessment indicators into a model, a comprehensive judgment is made on whether the driver is in a state of fatigue or distraction.
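The indicator computations in this workflow can be sketched in a few lines. The six-point contour convention, the 0.2 closure threshold, and the toy landmark coordinates below are illustrative assumptions, not values from the cited studies:

```python
import math

def aspect_ratio(pts):
    """Aspect ratio for a 6-point eye or mouth contour:
    sum of the two vertical distances over twice the horizontal distance.
    For the eye this is the EAR; the same formula serves as a simple MAR."""
    p1, p2, p3, p4, p5, p6 = pts
    vert = math.dist(p2, p6) + math.dist(p3, p5)
    horiz = 2.0 * math.dist(p1, p4)
    return vert / horiz

def perclos(ear_series, closed_thresh=0.2):
    """PERCLOS: fraction of frames in which the eye is judged closed
    (threshold value is an illustrative assumption)."""
    closed = sum(1 for ear in ear_series if ear < closed_thresh)
    return closed / len(ear_series)

# Toy landmarks of an open eye, ordered from the outer corner.
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
ear_open = aspect_ratio(open_eye)      # high EAR -> eye clearly open
ears = [ear_open] * 70 + [0.05] * 30   # 30% of frames "closed"
print(round(perclos(ears), 2))         # 0.3
```

Blink frequency and yawn duration follow the same pattern: threshold the EAR or MAR series and count threshold crossings or run lengths.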
Currently, Driver State Monitoring (DSM) systems widely applied in actual vehicles often use traditional machine learning libraries such as dlib to extract features of the eyes and mouth, calculating fatigue and distraction state assessment indicators using fixed thresholds. This method is widely adopted due to its simplicity and low sensor requirements. However, its recognition accuracy and detection effectiveness need improvement, and it overlooks individual differences among drivers.
To address the challenge of low accuracy in face recognition and feature point extraction, the academic community has proposed various improvement schemes. Zhu Feng et al. [33] enhanced face detection accuracy significantly by integrating an improved YOLO v3 method with the Kalman filter algorithm for face detection and using the boosting tree algorithm to extract facial feature points, effectively solving the problem of missed detections caused by glasses and hats. Although this method has made progress in accuracy, its complexity and computational cost are higher. Wang et al. [34] pointed out the correlation between drivers’ eye position and ethnicity, greatly improving positioning accuracy through a skin color recognition model and a bidirectional integral projection method, but their accuracy may be limited to specific groups. Yang et al. [35] combined 3D Convolutional Neural Networks (3D-CNN) and Bidirectional Long Short-Term Memory networks (Bi-LSTM) to propose the 3D-LTS network, which effectively recognizes subtle facial movements, improving the accuracy of mouth feature point extraction and yawn detection, as shown in Figure 4. Although this method performs excellently in dynamic feature recognition, it still faces challenges in real-time performance and resource consumption.
To address the shortcomings of traditional methods that do not consider individual differences among drivers, some studies have optimized detection approaches. Li et al. [36] improved detection effectiveness by constructing a driver identity information database and extracting facial features as a reference for assessing the driver’s state. This personalized method has obvious advantages, but its application range may be limited by the size of the offline library. You et al. [37] used a Deep Cascaded Convolutional Neural Network (DCCNN) to detect facial features and established an SVM model based on facial features, replacing traditional uniform threshold methods and adding personalized elements to fatigue and distraction detection. Although this increases the personalization of detection, the model’s general applicability and accuracy still need further verification, as shown in Figure 5. Han et al. [38] used the ShuffleNet V2K16 neural network for driver facial recognition, reducing the impact of the detection environment, and by comparing the current frame’s EAR and MAR values with the maximum EAR value and minimum MAR value of the previous 100 frames, they minimized the impact of individual differences, greatly improving experimental accuracy.
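The rolling-baseline idea behind Han et al.’s 100-frame comparison can be sketched as follows. The specific fraction thresholds below are illustrative assumptions, not parameters from [38]:

```python
from collections import deque

class AdaptiveBaseline:
    """Track per-driver EAR/MAR baselines over a sliding window of frames,
    in the spirit of comparing the current frame against the maximum EAR
    and minimum MAR of the previous 100 frames."""

    def __init__(self, window=100, closed_frac=0.3, yawn_frac=2.5):
        self.ears = deque(maxlen=window)
        self.mars = deque(maxlen=window)
        self.closed_frac = closed_frac  # "closed" below this fraction of the max EAR
        self.yawn_frac = yawn_frac      # "yawning" above this multiple of the min MAR

    def update(self, ear, mar):
        """Classify the current frame against the driver's own recent history,
        then fold the frame into the window."""
        eye_closed = bool(self.ears) and ear < self.closed_frac * max(self.ears)
        yawning = bool(self.mars) and mar > self.yawn_frac * min(self.mars)
        self.ears.append(ear)
        self.mars.append(mar)
        return eye_closed, yawning

base = AdaptiveBaseline()
for _ in range(100):
    base.update(0.30, 0.20)        # calibrate on alert driving
print(base.update(0.05, 0.55))     # (True, True): eyes closed and yawning
```

Because the thresholds scale with each driver’s own recent extremes, drivers with naturally narrow eyes or small mouths are not penalized by a single global cutoff.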
Some researchers have utilized cutting-edge deep learning networks, significantly enhancing detection precision. Liu et al. [39] introduced a dual-stream neural network into the Multi-Task cascaded Convolutional Neural Network (MTCNN), combining the static and dynamic features of drivers to improve detection performance, achieving an accuracy rate of 97.06%. Moreover, by using Gamma correction, they enhanced the accuracy of nighttime detection, effectively overcoming the challenges of detection in night conditions. Ahmed et al. [40] incorporated two InceptionV3 modules into the MTCNN network to extract the features of eye and mouth sub-samples, significantly improving the detection precision of local features. Although this method performs well in local feature recognition, its efficiency in processing large-scale data remains a consideration.
In addressing delays caused by complex algorithms, Kim et al. [41] developed a lightweight driver state monitoring system that realized end-to-end detection, significantly improving detection efficiency. This method simplified the detection process, but may have sacrificed some recognition precision. He et al. [42] used near-infrared cameras instead of traditional RGB and infrared cameras and established an integrated deep learning network model, which ensured the algorithm’s accuracy while reducing the hardware computational requirements, offering a way to achieve efficient detection in resource-limited situations. Guo et al. [43] addressed the issue that most detection methods cannot detect distracted behaviors not included in the training set well by using a semi-supervised approach with a dual-stream backbone network design, ensuring the model’s lightweight structure while enhancing its generalization capability.
Table 1 presents a comparative analysis of recent studies on new detection methods based on facial features. Overall, such methods offer high accuracy and convenience under experimental conditions, making them the primary choice for developing driver state monitoring systems [44]. However, their actual application still faces several challenges, mainly as follows:
- The actual precision of facial feature extraction is easily affected by the environment. Facial appearance may vary significantly under different lighting conditions, angle changes, and facial expressions, making it difficult to maintain stability and accuracy in facial feature extraction. Additionally, face occlusions, wearing glasses, and makeup can also affect the effectiveness of facial feature extraction.
- Most detection methods based on facial features use parameters such as EAR and MAR to judge fatigue or distraction states. However, the thresholds are often set from subjective experience, lacking objectivity and unified standards.
- Detection methods based on facial features are strongly affected by individual driver differences. People may display facial states resembling fatigue or distraction in everyday life, such as yawning, blinking, or looking down, without actually being fatigued or distracted. Judging a driver’s state from facial features alone therefore carries a risk of misjudgment or missed detection.
3.2. Intelligent Detection Methods Based on Driver’s Head Posture
Detection methods based on a driver’s head posture work by calculating the relative positions of various feature points in a three-dimensional space to obtain head posture angles, thereby detecting whether the driver is engaged in distracted behaviors, such as diverting their gaze from the driving task. Detecting the driver’s head posture can also help to identify behaviors such as looking down at a phone or nodding off due to drowsiness.
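The angle-extraction step can be illustrated with a small sketch that recovers yaw and pitch from an estimated head rotation matrix and thresholds them. The rotation convention, the 30° and 20° limits, and the flagging rule are all illustrative assumptions; real systems typically obtain the rotation matrix from a PnP-style fit of facial landmarks to a 3D head model:

```python
import math

def head_angles(R):
    """Extract yaw and pitch (radians) from a 3x3 head rotation matrix,
    assuming the composition R = Ry(yaw) @ Rx(pitch); pose estimators
    may use a different convention."""
    pitch = -math.asin(R[1][2])
    yaw = math.atan2(R[0][2], R[2][2])
    return yaw, pitch

def is_distracted(yaw, pitch,
                  yaw_limit=math.radians(30), pitch_limit=math.radians(20)):
    """Flag gaze-off-road (large yaw) or looking down / nodding off
    (large downward pitch). Thresholds are illustrative, not from any
    cited study."""
    return abs(yaw) > yaw_limit or pitch < -pitch_limit

# Driver turned 45 degrees to the side:
y = math.radians(45)
R = [[math.cos(y), 0.0, math.sin(y)],
     [0.0,         1.0, 0.0        ],
     [-math.sin(y), 0.0, math.cos(y)]]
yaw, pitch = head_angles(R)
print(is_distracted(yaw, pitch))  # True
```

Because head rotation is continuous, production systems would apply such a check over a smoothed angle time series rather than single frames.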
Since head rotation is a continuous process, models that consider the relationship between frames often achieve better detection results than methods that rely solely on real-time detection using single frames. Zhao et al. [45] optimized the fully connected layers of a residual error network with a composite loss function and expanded the training dataset through transfer learning. This approach accurately and continuously monitors a driver’s distracted state in real driving environments but lacks an in-depth analysis of adaptability to environmental changes and real-time requirements. Ansari et al. [46] argued that a driver’s psychological fatigue and sleepiness are reflected in changes in head posture. They introduced an improved linear unit layer into a bidirectional Long Short-Term Memory (LSTM) network structure to effectively process a 3D time series of head angular acceleration data, allowing for the effective recognition of complex head movement states. However, this method requires extensive training data to accurately identify different driver states, and data labeling is challenging.
Detection methods based on driver head posture share similarities with those based on facial features, as they can be conducted using only camera sensors, making them low-cost and easy to integrate. However, the accuracy of their detection is easily affected by head occlusions and the diversity of postures.
3.3. Intelligent Detection Methods Based on Driver’s Behavioral Actions
During the driving process, the actions and behaviors of drivers can significantly reflect their attention states. Behaviors such as making phone calls, texting, smoking, or reaching for items can lead to driver distraction. Accordingly, monitoring these behaviors can effectively determine whether a driver is distracted. Xing et al. [47] utilized a deep convolutional neural network (CNN) to design a system capable of recognizing a variety of driving behaviors, including normal driving, looking at rearview mirrors in various directions, using the car radio, texting, and making phone calls.
However, given the small differences between normal and distracted driving behaviors and the high similarity of some actions, relying solely on a CNN might lead to recognition errors. Therefore, Ma et al. [48] introduced a bilinear fusion network and attention mechanism, proposing a convolutional neural network specifically designed for driving behavior recognition, BACNN, which showed a good performance on the State Farm dataset. To further improve recognition accuracy and real-time performance, Zhang et al. [49] developed a specialized intertwined deep convolutional neural network, InterCNN. Fasanmade et al. [50] established an expert knowledge rule system and a discrete dynamic Bayesian classification model to predict the level of distraction in video frame sequences, providing a quantitative assessment of driver behavior.
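The probabilistic side of such rule-based approaches can be illustrated with a toy naive-Bayes classifier over binary behavior cues. The cue set, priors, and conditional probabilities below are invented for illustration and are not taken from [50]; a real system would estimate them from labeled video frames:

```python
from math import log

# Hypothetical conditional probabilities P(cue observed | state).
P_CUE = {
    "attentive":  {"phone_visible": 0.02, "gaze_down": 0.05, "hand_off_wheel": 0.10},
    "distracted": {"phone_visible": 0.60, "gaze_down": 0.55, "hand_off_wheel": 0.45},
}
PRIOR = {"attentive": 0.9, "distracted": 0.1}

def classify(cues):
    """Naive Bayes over binary behavior cues: return the state with the
    highest posterior log-probability given which cues fired this frame."""
    scores = {}
    for state, probs in P_CUE.items():
        s = log(PRIOR[state])
        for cue, p in probs.items():
            s += log(p) if cues.get(cue) else log(1.0 - p)
        scores[state] = s
    return max(scores, key=scores.get)

print(classify({"phone_visible": 1, "gaze_down": 1}))  # distracted
print(classify({}))                                    # attentive
```

Accumulating such per-frame posteriors over a video sequence (as the dynamic Bayesian model in [50] does) yields a graded distraction level rather than a hard label.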
In addition to using cameras to recognize driver actions and behaviors through visual images, some researchers have used wearable devices to collect signals reflecting driver behavior. However, large wearable devices are not portable and are costly, making them suitable only for experimental data collection and difficult to use for real-time detection. Thus, researching and developing portable wearable devices for the real-time monitoring of driver states is a trend. Xie et al. [51] proposed a method to identify driver distraction using a wristband equipped with an Inertial Measurement Unit (IMU), collecting acceleration data from an IMU worn on the driver’s right wrist and building a Convolutional Long Short-Term Memory (ConvLSTM) deep neural network model for training. The results showed that ConvLSTM achieved a better detection accuracy than CNN and LSTM models. Wagner et al. [52] specifically targeted dangerous behaviors such as using phones, eating, and smoking by drivers. They proposed a deep-learning-based classification detection method by capturing images with left and right infrared cameras.
In summary, these methods offer high real-time capabilities. Once they detect dangerous driving behaviors that could distract drivers, immediate warnings can be issued without the need for cumulative time to continuously observe changes in driver characteristics. Although these technologies have shown efficiency in the real-time monitoring of driver states and can promptly identify and warn of potential distractions, they still face challenges such as scene limitations and individual behavior differences. Future research needs to continue exploring more accurate and personalized recognition methods to adapt to the diversity and complexity of driver behaviors. Additionally, developing more convenient, lower-cost tactile motion sensing systems for broader applications is an important research direction.
3.4. Intelligent Detection Methods Based on Driver’s Physiological Characteristics
Studies indicate that significant changes in physiological characteristics occur when a driver is fatigued or distracted [53]. Monitoring a driver’s physiological signals through physiological instrument devices can analyze their attention level, emotional state, and degree of physical fatigue. Common physiological signals include the Electroencephalogram (EEG), Electrocardiogram (ECG), Heart Rate (HR), Electrooculogram (EOG), and Electromyogram (EMG).
Bundele et al. [54] monitored drivers’ skin conductivity and blood oxygen saturation using wearable devices, designing a multilayer perceptron model for training to classify and detect drivers’ psychological fatigue and drowsiness states. Chaudhuri et al. [55], based on EEG signals from drivers’ scalps, used electrophysiological source imaging and EEG source localization techniques for signal processing and feature extraction, and established a Support Vector Machine (SVM) classifier to detect whether drivers were in a state of extreme fatigue. Li et al. [56] employed convolutional neural networks and gated recurrent units to map the relationship between drivers’ distraction states and EEG signals in the time domain, verifying the effectiveness of their method through simulation experiments. Fu et al. [57] developed a non-contact vehicle-mounted driver fatigue detection system to collect drivers’ biceps femoris electromyographic signals and ECG signals. Chen et al. [58] decomposed EEG signals into wavelet sub-bands to extract nonlinear features beyond the original signal and integrated them with eyelid movement information, using an Extreme Learning Machine (ELM) for state classification.
The use of full-head EEG monitoring devices often leads to issues such as computational and storage resource wastage and low real-time performance. In light of this, Fan et al. [59] optimized EEG instruments by monitoring the frontal EEG signals of drivers to extract features like the energy, entropy, and frontal EEG asymmetry ratio of EEG signals, proposing a time-series-based ensemble learning method for fatigue and distraction detection.
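The frontal-EEG features named above (band energy, entropy, and the asymmetry ratio) can be sketched roughly as follows. The band boundaries and the particular asymmetry formula are common conventions but are assumptions here, not necessarily those used in [59]; a naive DFT is used for brevity where a real system would use an FFT:

```python
import math, cmath

def band_energy(x, fs, lo, hi):
    """Energy of signal x (sampled at fs Hz) in the band [lo, hi) Hz,
    via a naive DFT over the positive-frequency bins."""
    n = len(x)
    energy = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f < hi:
            coef = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            energy += abs(coef) ** 2
    return energy

def spectral_entropy(x, fs, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    """Shannon entropy of the normalized delta/theta/alpha/beta energies;
    low entropy means energy concentrated in one band."""
    e = [band_energy(x, fs, lo, hi) for lo, hi in bands]
    total = sum(e) or 1.0
    p = [v / total for v in e if v > 0]
    return -sum(v * math.log(v) for v in p)

def asymmetry_ratio(e_left, e_right):
    """Frontal asymmetry ratio, one common form: (R - L) / (R + L)."""
    return (e_right - e_left) / (e_right + e_left)

fs, n = 128, 256
alpha = [math.sin(2 * math.pi * 10 * i / fs) for i in range(n)]  # 10 Hz alpha wave
print(band_energy(alpha, fs, 8, 13) > band_energy(alpha, fs, 13, 30))  # True
```

Feature vectors built from such per-window quantities are what the ensemble learner in [59] consumes.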
Overall, intelligent detection methods based on drivers’ physiological characteristics can accurately assess drivers’ states by monitoring changes in their physiological states. However, most of these methods rely on intrusive contact sensors, which may cause discomfort to drivers, and they suffer from drawbacks such as poor convenience, high cost, and detection accuracy that is easily affected by the environment, hindering their application.
3.5. Intelligent Detection Methods Based on Vehicle Travel Data
When drivers are fatigued or distracted, their driving skills and reaction times are compromised, leading to deviations in heading angle and difficulties in maintaining the correct direction of travel on the road. Furthermore, the emergence of fatigue and distraction can reduce a driver’s steering capability, potentially resulting in insufficient steering actions that affect their driving behavior. In severe cases, this may even lead to significant lane departures. Therefore, analyzing vehicle driving data can indirectly assess a driver’s fatigue and distraction levels.
Yang et al. [60] assessed the state and behavior of drivers based on vehicle GPS data and the Gaussian Mixture Model (GMM). Wang et al. [61] utilized a Bidirectional Long Short-Term Memory network (Bi-LSTM) with an attention mechanism to propose a method for indirectly detecting drivers using mobile phones based on vehicle performance parameters, achieving an accuracy rate of up to 91.2%. Sun et al. [62] also employed a Bi-LSTM network to process data on vehicle steering wheel angles, steering wheel angular velocity, vehicle yaw rate, lateral acceleration, and longitudinal acceleration. They used a wavelet packet analysis to extract characteristic frequency bands, thereby indirectly identifying driver fatigue and distraction.
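The frequency-band extraction step can be illustrated with a one-level-at-a-time Haar wavelet decomposition of a steering-angle trace. This is a simplified stand-in for the wavelet packet analysis used in [62] (which also recursively decomposes the detail branches); the signal values and the interpretation of the bands are illustrative assumptions:

```python
import math

def haar_step(x):
    """One level of the Haar wavelet transform: split a signal into an
    approximation (low-frequency) half and a detail (high-frequency) half."""
    approx = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def band_energies(x, levels=3):
    """Energies of successively lower-frequency detail bands, plus the
    final approximation energy (highest-frequency band first)."""
    energies = []
    for _ in range(levels):
        x, d = haar_step(x)
        energies.append(sum(v * v for v in d))
    energies.append(sum(v * v for v in x))
    return energies

# Smooth drift vs. the same drift with high-frequency micro-corrections:
smooth = [math.sin(i / 20) for i in range(64)]
jittery = [v + (0.2 if i % 2 == 0 else -0.2) for i, v in enumerate(smooth)]
print(band_energies(jittery)[0] > band_energies(smooth)[0])  # True
```

Shifts of steering energy between such bands (e.g., fewer micro-corrections during fatigue, bursts of abrupt correction after distraction) are the kind of characteristic frequency-band feature the cited Bi-LSTM consumes.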
Intelligent detection methods based on vehicle driving data generally involve data collection through a vehicle’s original onboard sensors, offering advantages in convenience and cost. However, this approach has difficulty in directly reflecting changes in drivers’ physiological and psychological states, especially under simple driving conditions, where vehicle driving data show minimal variations, making it challenging to reflect a driver’s state accurately.
3.6. Intelligent Detection Methods Based on Multimodal Fusion Feature
Driver fatigue or distraction is influenced by various physiological and psychological factors and exhibits individual differences, presenting diverse manifestations and behaviors. Therefore, it is challenging to make a comprehensive judgment based solely on single-modality signal features. Consequently, many scholars have integrated features such as driver image information, physiological signals, and vehicle data to study detection methods based on multimodal feature fusion, aiming to improve the accuracy and robustness of the detection and explore the potential connections between features and driver fatigue and distraction.
Wang et al. [63] integrated physiological signals collected by biometric instruments, such as heart rate and brain waves, with facial features and head postures extracted by cameras and trained with an RBF neural network, to achieve deployment and application at a lower cost. Du et al. [64] addressed the issue of distracted driving caused by drivers conversing with others, by introducing the analysis and detection of voice signals. They proposed a dataset comprising three modalities: driver facial features, voice recognition, and vehicle signals, applied to the training of a multi-layer fusion model, achieving better detection results than single-modality models. Abbas et al. [65] developed a multimodal detection system that integrates physiological and facial features for training a Deep Residual Neural Network (DRNN) model. They classified the driver states into five categories: normal, fatigued, visually distracted, cognitively distracted, and drowsy. The hardware setup is shown in Figure 6.
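At its simplest, multimodal integration can be a weighted late fusion of per-modality scores. The modality names, weights, and score values below are illustrative assumptions, not the fusion schemes of [63,64,65], which learn the combination with neural networks:

```python
def fuse(scores, weights):
    """Weighted late fusion of per-modality fatigue/distraction scores
    in [0, 1]; renormalizing over the modalities actually present lets
    the system degrade gracefully if a sensor drops out."""
    num = sum(weights[m] * s for m, s in scores.items())
    den = sum(weights[m] for m in scores)
    return num / den

WEIGHTS = {"face": 0.5, "physio": 0.3, "vehicle": 0.2}  # illustrative weights
print(round(fuse({"face": 0.9, "physio": 0.7, "vehicle": 0.4}, WEIGHTS), 2))  # 0.74
# Physiological sensor offline: fuse over the remaining modalities.
print(round(fuse({"face": 0.9, "vehicle": 0.4}, WEIGHTS), 2))
```

Learned fusion models generalize this idea by replacing the fixed weights with functions of the inputs themselves.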
Detection methods based on multimodal feature fusion, by integrating various modal characteristics, can establish a more comprehensive assessment system. This approach compensates for the shortcomings of single-modality methods, collectively enhancing the accuracy and stability of detection and enabling more flexible and refined state monitoring. However, the requirement for a greater variety and number of sensors leads to increased hardware and software costs, and the processing and fusion algorithms for data from different sensors become more complex. Overall, multimodal fusion represents a research trend in the detection of driver fatigue and distraction states.
4. Safety Warning and Response Strategies Based on Fatigue and Distraction Detection
4.1. Warning Prompts Based on the Driving Cockpit
With the rapid development of intelligent cockpits, new forms of human–machine interaction (HMI) are continually emerging, providing new warning formats for detecting driver fatigue and distraction [66]. Upon the preliminary detection of driver fatigue or distraction, the design of in-cabin HMI can provide sensory stimuli to the driver in visual, auditory, and tactile forms as warning prompts. Generally, warning prompts are categorized into normal and urgent. For lower levels of fatigue or distraction, normal warnings are employed using standard visual or auditory signals [67]. When the level of fatigue or distraction is high and persists, urgent warnings should be issued by expanding the range of visual cues, changing the color of visual alerts, increasing the decibel level of auditory signals, or altering the words used in voice prompts [68,69]. Contact reminders such as vibrations in the seat and steering wheel [70] can also be utilized to enhance sensory stimulation for the driver, thereby facilitating the timely recovery of vehicle control capabilities.
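The two-tier scheme described above maps naturally onto a small decision function. The score thresholds and persistence time below are illustrative assumptions, not values from [67,68,69]:

```python
def warning_level(score, duration_s, high=0.7, low=0.4, persist_s=10):
    """Map a detector's fatigue/distraction score in [0, 1] and how long
    it has persisted (seconds) onto the two-tier warning scheme:
    urgent warnings for high, persistent impairment; normal warnings
    for milder states. Thresholds are illustrative."""
    if score >= high and duration_s >= persist_s:
        return "urgent"   # expanded/brighter visuals, louder audio, seat vibration
    if score >= low:
        return "normal"   # standard visual or auditory cue
    return "none"

print(warning_level(0.8, 15))  # urgent
print(warning_level(0.5, 3))   # normal
```

Requiring persistence before escalating avoids startling the driver with urgent alerts on momentary detector noise.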
4.2. Safety Response Based on Advanced Driver Assistance Systems
When drivers exhibit signs of fatigue or distraction, beyond issuing early warnings, safety responses can also be implemented through Advanced Driver Assistance Systems (ADAS). If it is detected that a driver is unable to control the vehicle promptly due to fatigue or distraction, an ADAS can automatically engage emergency braking or perform other evasive maneuvers to prevent potential collisions or accidents, thereby affording the driver more reaction time [71]. For instance, an Automatic Emergency Braking (AEB) system can autonomously apply the brakes to significantly reduce the risk of an accident if a collision is predicted and the driver has not responded in time. The Lane Keeping Assist System (LKAS) plays a vital role in preventing potential deviation accidents caused by driver fatigue or distraction by automatically adjusting the steering to help the vehicle stay in its current lane. Adaptive Cruise Control (ACC) reduces the risk of collisions caused by excessive speed or following too closely by automatically adjusting the vehicle’s speed to maintain a safe distance. The integration of these ADAS features not only enhances driving safety, but also effectively reduces the accident risk associated with driver fatigue or distraction, creating a safer driving environment for both drivers and passengers.
4.3. Multi-Level Response Mechanism Combining Autonomous Driving Technology
As the process of automobile intelligence and connectivity accelerates, integrating autonomous driving technology to establish a multi-level safety response mechanism offers a new solution for reducing the driving risks associated with driver fatigue or distraction. Within this framework, the system first attempts to regain the driver’s attention through warning prompts and provides preliminary safety interventions using Advanced Driver Assistance Systems (ADAS) in emergency situations, playing a crucial role when the driver’s response capabilities are insufficient to avoid potential risks.
If these primary measures fail to effectively reduce the accident risk, or if the driver fails to respond in time, the Autonomous Driving System (ADS) stands ready to take over vehicle control. During this process, the ADS uses its advanced perception and decision-making capabilities to automatically navigate the vehicle to a safe area or take the most appropriate actions to ensure the safety of both the occupants and surrounding traffic participants. Despite rapid progress in autonomous driving technology, including capabilities demonstrated in experimental and limited commercial deployments, challenges to fully automated takeover remain, such as perception complexity, decision-making reliability, and legal and ethical issues [72,73].
Furthermore, it is critical to acknowledge that these intelligent vehicles are susceptible to potential cybersecurity threats from malicious actors. Such vulnerabilities necessitate the incorporation of cybersecurity measures within the autonomous driving framework to safeguard against cyberattacks that could compromise vehicle safety and functionality. Petit et al. [74] highlighted the importance of considering cybersecurity implications in cooperative automated vehicle systems, stressing the need for enhanced redundancy. Parkinson et al. [75] emphasized that, with increased connectivity and automation, vehicles face heightened risks of cybersecurity attacks, calling for a focus on addressing these risks in the autonomous vehicle sector.
In the context of cyberattacks during the vehicle handover process, the challenge of distinguishing between abnormal driving behaviors caused by driver issues, such as fatigue or distraction, and those induced by cyberattacks is significant. Addressing this challenge, several scholars have conducted in-depth studies aimed at clearly identifying the effects of cyberattacks on autonomous driving control systems. For instance, Petrillo et al. [76] explored secure adaptive control for autonomous vehicles under cyberattacks, while Guo et al. [77] focused on cyber-physical system-based path-tracking control, underscoring the necessity for sophisticated detection and mitigation strategies. These investigations reveal the complexities of maintaining autonomous vehicle safety in the face of cyber threats, emphasizing the covert nature of these threats and the critical need for advanced techniques in attack detection and response. Additionally, Sheehan et al. [78] proposed a proactive approach to cyber risk classification for CAVs, utilizing a Bayesian Network model predicated on known software vulnerabilities, representing a methodological advancement in effectively predicting and mitigating cyber risks.
Building on the understanding of cyberattacks’ impact on autonomous driving perception and control systems, numerous scholars have investigated real-time detection methods aimed at mitigating the effects of such attacks. Van Wyk et al. [79] enhanced sensor anomaly detection and identification capabilities significantly by combining Kalman filtering with multi-layered deep learning methods. Wang et al. [80] developed a comprehensive framework for detecting and isolating cyberattacks on autonomous driving systems, offering effective strategies to ensure that vehicle localization and navigation remain unaffected by cyber disturbances. Furthermore, Li et al. [81] introduced an anomaly detection model based on Generative Adversarial Networks (GANs), capable of detecting trajectory anomalies and sensor data injections in a timely manner using short-term data sequences.
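To make the residual-gating idea behind such Kalman-filter-based detectors concrete, the following minimal sketch flags sensor readings whose innovation is implausibly large. The noise parameters are hypothetical illustrations, and this is a simplified sketch of the general principle, not the implementation of [79].

```python
import math

def detect_sensor_anomalies(measurements, q=1e-3, r=0.25, gate=3.0):
    """Scalar Kalman filter assuming a locally constant signal; a reading
    whose innovation exceeds `gate` standard deviations of the predicted
    innovation variance is flagged and excluded from the state update.
    (q, r, and gate are hypothetical; real systems tune them per sensor.)"""
    x, p = measurements[0], 1.0          # initial state estimate and variance
    flags = [False]
    for z in measurements[1:]:
        p += q                           # predict step (random-walk model)
        s = p + r                        # innovation variance
        innovation = z - x
        anomalous = abs(innovation) > gate * math.sqrt(s)
        if not anomalous:                # update only with trusted readings
            k = p / s                    # Kalman gain
            x += k * innovation
            p *= (1.0 - k)
        flags.append(anomalous)
    return flags

speed = [20.0, 20.1, 19.9, 20.2, 35.0, 20.1]   # one injected spike (m/s)
print(detect_sensor_anomalies(speed))          # only the spike is flagged
```

Because the attacked reading is excluded from the update, the state estimate is not corrupted, which is the property that lets localization and navigation continue unaffected.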
Future research endeavors should not be limited to enhancing the sensory capabilities of autonomous driving systems and refining their decision-making algorithms; there is a pressing need to fortify cybersecurity defenses to prevent and diminish the ramifications of cyberattacks. Moreover, an intensified focus on distinguishing the origins of anomalies in autonomous vehicles is essential, a domain presently marked by scant research. Through interdisciplinary collaboration and technological innovation, autonomous driving technology is poised to fundamentally improve driving safety, promising a substantial reduction in traffic incidents attributed to driver fatigue and distraction.
5. Conclusions and Outlook
This paper comprehensively analyzes the current technological research progress on intelligent detection methods for driver fatigue and distraction. These methods are categorized into those based on image information, physiological characteristics, vehicle driving data, and multimodal feature fusion. Not only have these methods improved the accuracy and real-time capabilities of detection, but they have also provided technical support for a safer driving environment. Through the analysis and summary of various intelligent detection methods and research on safety response plans, the following conclusions are drawn:
Detection methods based on image information, especially those relying on machine learning and deep learning technologies, have significantly improved the accuracy of facial feature recognition, particularly in face detection and the analysis of eye and mouth movements. However, these methods are highly sensitive to environmental conditions such as lighting and occlusion of the driver’s head, and they often neglect individual differences among drivers, which limits their potential for widespread application.
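As an example of the eye-geometry features these methods extract, the widely used eye aspect ratio (EAR) can be computed directly from the six standard eye landmarks. The landmark coordinates below are synthetic, and the closed-eye threshold of about 0.2 is only a commonly cited starting point that, as noted above, should be adapted to the individual driver.

```python
import math

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(landmarks):
    """EAR over the six standard eye landmarks p1..p6:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). The value drops toward zero as
    the eye closes; a frame-wise threshold (e.g. ~0.2, ideally tuned per
    driver) commonly separates open from closed eyes."""
    p1, p2, p3, p4, p5, p6 = landmarks
    return (euclidean(p2, p6) + euclidean(p3, p5)) / (2.0 * euclidean(p1, p4))

open_eye   = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]      # synthetic
closed_eye = [(0, 0), (1, 0.3), (3, 0.3), (4, 0), (3, -0.3), (1, -0.3)]
print(eye_aspect_ratio(open_eye))    # larger value for an open eye
print(eye_aspect_ratio(closed_eye))  # smaller value as the eye closes
```

Counting the fraction of frames below the threshold over a time window yields a PERCLOS-style fatigue measure, linking this per-frame geometry to the features listed in the comparison table.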
Detection methods based on the physiological characteristics of drivers, by analyzing physiological signals like electroencephalograms (EEG), electrocardiograms (ECG), and heart rate (HR), provide more direct indicators for assessing the attention level and fatigue state of the driver. These methods can avoid external environmental interference to a certain extent and provide relatively stable detection results. Nonetheless, physiological signal detection often requires the use of invasive sensors, which may cause discomfort to the driver and have limited applicability in actual driving environments.
Intelligent detection methods based on vehicle driving data assess the driver’s level of attention distraction indirectly by analyzing the correlation between driving behavior and the vehicle operation status. These methods are easy to implement and cost-effective, but their detection accuracy is influenced by the complexity of the driving environment and the difficulty in directly reflecting changes in the driver’s physiological and psychological state.
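A minimal sketch of such an indirect indicator, assuming a hypothetical baseline of attentive steering variability, is a sliding-window test on steering-wheel angle; real systems fuse several such cues (lane position, pedal inputs, and so on) rather than relying on one.

```python
from statistics import stdev

def flag_erratic_steering(angles_deg, window=10, baseline_sd=1.5, factor=2.0):
    """Slide a window over steering-wheel angles and flag windows whose
    variability exceeds a multiple of a (hypothetical) attentive baseline."""
    flags = []
    for i in range(len(angles_deg) - window + 1):
        sd = stdev(angles_deg[i:i + window])
        flags.append(sd > factor * baseline_sd)
    return flags

attentive = [0.5, -0.3, 0.2, -0.4, 0.1, 0.3, -0.2, 0.4, -0.1, 0.2]
erratic   = [0.5, -6.0, 7.5, -5.0, 8.0, -7.0, 6.5, -8.0, 7.0, -6.0]
print(flag_erratic_steering(attentive))  # small corrections: not flagged
print(flag_erratic_steering(erratic))    # large oscillations: flagged
```

The example also makes the stated limitation visible: a sharp curve or evasive maneuver would raise steering variability just as distraction does, which is why such signals reflect the driver’s state only indirectly.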
To enhance the robustness, stability, and overall performance of detection systems, detection methods based on multimodal feature fusion have emerged, integrating features from image information, physiological signals, and vehicle data to improve the comprehensiveness and accuracy of detection. Although this approach effectively utilizes the advantages of different data sources and enhances the robustness of the detection system, it also introduces higher implementation costs, technical complexity, and computational demands.
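At its simplest, late multimodal fusion can be sketched as a weighted average of per-modality risk scores; the modality names and weights below are hypothetical, and learned fusion models replace the fixed weights in practice.

```python
def fuse_scores(scores: dict, weights: dict) -> float:
    """Late fusion by weighted averaging of per-modality risk scores in
    [0, 1]. Weights are hypothetical; in practice they (and the decision
    threshold on the fused score) are tuned on labelled driving data."""
    total_w = sum(weights.values())
    return sum(scores[m] * weights[m] for m in scores) / total_w

scores  = {"image": 0.8, "physiological": 0.6, "vehicle": 0.4}
weights = {"image": 0.5, "physiological": 0.3, "vehicle": 0.2}
print(fuse_scores(scores, weights))  # weighted mean of the modality scores
```

Even this toy version shows the trade-off noted above: robustness improves because no single modality decides alone, but every additional sensor stream adds acquisition, synchronization, and computational cost.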
In terms of safety warning research, current systems primarily rely on in-cabin human–machine interaction designs, issuing warnings to drivers through visual, auditory, and tactile means. These solutions can enhance driver alertness to some extent and reduce accidents caused by fatigue or distraction. However, the effectiveness of these systems is often limited by the driver’s subjective acceptance and real-time response capability. Moreover, safety response measures, such as the intervention of Advanced Driver Assistance Systems (ADAS) and the application of autonomous driving technology, can reduce risks to some extent, but their capability to handle complex traffic environments and the collaboration between drivers and systems require further research.
Looking forward to the future development of intelligent detection technology for driver fatigue and distraction, several technical dimensions can be envisioned:
Future research should focus on the adaptability of algorithms in complex environments, such as stability under different lighting conditions and changes in the driver’s posture, developing lightweight and efficient neural network models to ensure the rapidity of data processing and high accuracy of detection results.
The innovation of non-invasive sensors and related algorithms could be advanced by collecting physiological signals through non-contact or minimally invasive methods, reducing driver discomfort, and expanding application scenarios.
The application of machine learning and artificial intelligence technologies in analyzing the relationship between driving behavior and vehicle performance could be strengthened, precisely predicting driver states through detailed data collection and analysis.
Data fusion technologies and model integration strategies could be optimized, exploring effective feature fusion algorithms to enhance the complementarity and accuracy of analysis between different data sources.
Future safety warning schemes should pay more attention to personalization and intelligence, providing customized warning signals based on the driver’s behavior patterns and physiological state to enhance the effectiveness of warnings.
Research should be conducted on seamless switching mechanisms between advanced driver assistance systems and autonomous driving technology to improve the safety and flexibility of the system, and the fusion of data from in-vehicle and external environment perception systems should be explored to provide comprehensive decision support for ADAS and autonomous driving technology.
Overall, this comprehensive review critically evaluates the latest advancements in deep-learning-based intelligent detection technologies for driver fatigue and distraction over the past five years. It sheds light on their theoretical foundations, methodological innovations, and existing shortcomings. By synthesizing the research within this period, our analysis not only highlights the progress made in improving detection accuracy and real-time response capabilities, but also points out the persisting challenges and gaps, setting the stage for future research. Key contributions from our analysis include:
An integrated framework that categorizes the intelligent detection methods based on deep learning developed over the past five years and proposes a comprehensive safety warning and response scheme. This scheme features varied warning and response mechanisms tailored to the different levels of vehicle automation.
A critical evaluation of the limitations inherent in the current methodologies and the potential for leveraging emerging technologies such as AI and machine learning to address these challenges.
The identification of areas that could significantly benefit from further research, including non-invasive sensing techniques, the integration of multimodal data, and the development of adaptive, personalized detection systems.
Looking forward, we advocate for a concerted effort towards interdisciplinary research that bridges cognitive science, vehicular engineering, and computer science. Such collaborative endeavors could unlock new pathways for understanding driver behavior and developing more sophisticated, context-aware technologies capable of mitigating the risks associated with driver fatigue and distraction. Furthermore, our review underscores the necessity for the rigorous validation of detection technologies in real-world settings, ensuring their efficacy and reliability across diverse driving conditions and populations.
In conclusion, this review not only consolidates current knowledge in the domain of driver fatigue and distraction detection, but also acts as a catalyst for future research. By highlighting the academic and practical value of recent advancements, we lay the groundwork for the next generation of detection technologies that promise enhanced road safety and driver well-being. As the field continues to evolve, it is imperative that future research endeavors are guided by the dual principles of innovation and inclusivity, ensuring that technological progress translates into tangible safety benefits for all road users.
Author Contributions: Conceptualization, S.F., Z.Y. and Y.M.; formal analysis, S.F. and Z.Y.; investigation, S.F. and Z.Y.; resources, S.F. and Z.Y.; data curation, S.F. and Z.Y.; writing—original draft preparation, Z.Y. and Z.L.; writing—review and editing, S.F., Y.M. and L.X.; visualization, Y.M.; supervision, S.F. and H.Z. All authors have read and agreed to the published version of the manuscript.
The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 2. Classification of intelligent detection techniques for driver fatigue and distraction.
Figure 3. General flowchart of intelligent detection method based on facial features.
Figure 4. Extracting keyframes for subtle facial changes. (© (2020) IEEE. Reprinted, with permission, from [35]).
Figure 5. The architecture of detection methods considering individual differences among drivers [37].
Analysis and comparison of intelligent detection methods based on facial features from the last five years.
Author | Year | Methodological Innovations | Extraction of Features | Machine Learning Models | Dataset | Accuracy | Improvement over Traditional Methods |
---|---|---|---|---|---|---|---|
Zhang et al. | 2022 | Solving the problem of missed detection due to occlusion or misjudgment. | PERCLOS value, maximum eye closure time, number of yawns | Improved YOLOv3 + Kalman filter algorithm | Self-built real-vehicle dataset | 92.50% | Solves the problem of the low accuracy of recognizing face parts in traditional methods. |
Wang | 2019 | Proposing a bidirectional integral projection method to realize the precise localization of the human eye. | Blink frequency | KNN algorithm | Self-constructed simulator dataset | 87.82% | |
Yang | 2021 | Recognizing yawning behavior based on subtle facial movements for improved accuracy. | Subtle facial changes | 3D-LTS combining 3D convolution + Bi-LSTM | YawDD dataset | 92.10% | |
Li | 2020 | Offline construction of driver identity database to analyze driving status from driver features. | Facial features | Improved yolov3-tiny + improved dlib library | DSD dataset | 95.10% | Consideration of driver characteristics and their variability. |
You | 2019 | Analyzing changes in binocular aspect ratio using neural network training. | Eye aspect ratio | Deep cascaded convolutional neural network | FDDB dataset + self-built simulator dataset | 94.80% | |
Han | 2023 | Weakening environmental effects and individual differences, improving dlib method to enhance the accuracy of facial feature point extraction. | 64 feature points, EAR, MAR | ShuffleNet V2K16 neural network | Self-constructed real-vehicle dataset | 98.8% | |
Liu | 2019 | Introducing dual-stream neural network to combine static and dynamic image information for fatigue detection; utilizing gamma correction method to improve nighttime detection accuracy. | Static and dynamic image fusion information | Multi-task cascaded convolutional neural network | NTHU-DDD dataset | 97.06% | Research on high-performance deep learning models to improve detection accuracy. |
Ahmed | 2022 | Proposing a deep learning integration model and introducing the InceptionV3 module for feature extraction of eye and mouth subsamples. | Eye and mouth images | Multi-task cascaded convolutional neural network | NTHU-DDD dataset | 97.10% | |
Kim | 2019 | Reducing arithmetic requirements, realizing end-to-end detection, and improving detection efficiency. | Raw images | Multi-task lightweight neural network | Self-built simulator dataset | Face orientation: 96.40% | Research on lightweight models to promote technology application. |
He | 2019 | Building and integrating multiple lightweight deep learning models to recognize fatigue and risky driving behaviors. | Part recognition with extended range images | SSD-MobileNet model | 300 W + self-constructed validation dataset | 95.10% | |
Guo | 2024 | Adaptive detection of distracting behaviors not included in the training set to ensure lightweight and enhance generalization capability. | Full depth images | Visual Transformer model | Self-built real-vehicle dataset MAS | 98.98% |
References
1. National Bureau of Statistics of China. Statistical Communique of the People’s Republic of China on the 2022 National Economic and Social Development. Available online: https://www.stats.gov.cn/english/PressRelease/202302/t20230227_1918979.html (accessed on 1 March 2024).
2. Mohammed, A.A.; Ambak, K.; Mosa, A.M.; Syamsunur, D. A review of traffic accidents and related practices worldwide. Open Transp. J.; 2019; 13, pp. 65-83. [DOI: https://dx.doi.org/10.2174/1874447801913010065]
3. Sharma, B.R. Road traffic injuries: A major global public health crisis. Public Health; 2008; 122, pp. 1399-1406. [DOI: https://dx.doi.org/10.1016/j.puhe.2008.06.009]
4. Zhang, Y.; Jing, L.; Sun, C.; Fang, J.; Feng, Y. Human factors related to major road traffic accidents in China. Traffic Inj. Prev.; 2019; 20, pp. 796-800. [DOI: https://dx.doi.org/10.1080/15389588.2019.1670817] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31710507]
5. Ahmed, S.K.; Mohammed, M.G.; Abdulqadir, S.O.; El-Kader, R.G.A.; El-Shall, N.A.; Chandran, D.; Rehman, M.E.U.; Dhama, K. Road traffic accidental injuries and deaths: A neglected global health issue. Health Sci. Rep.; 2023; 6, e1240. [DOI: https://dx.doi.org/10.1002/hsr2.1240] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37152220]
6. Jie, Q.; Xin, X.; Chuanpan, L.; Junwei, Z.; Yongtao, L. A review of the influencing factors and intervention methods of drivers’ hazard perception ability. China Saf. Sci. J.; 2022; 32, pp. 34-41.
7. Khan, M.Q.; Lee, S. A comprehensive survey of driving monitoring and assistance systems. Sensors; 2019; 19, 2574. [DOI: https://dx.doi.org/10.3390/s19112574] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31174275]
8. Behzadi Goodari, M.; Sharifi, H.; Dehesh, P.; Mosleh-Shirazi, M.A.; Dehesh, T. Factors affecting the number of road traffic accidents in Kerman province, southeastern Iran (2015–2021). Sci. Rep.; 2023; 13, 6662. [DOI: https://dx.doi.org/10.1038/s41598-023-33571-8] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37095125]
9. Mesken, J.; Hagenzieker, M.P.; Rothengatter, T.; De Waard, D. Frequency, determinants, and consequences of different drivers’ emotions: An on-the-road study using self-reports, (observed) behaviour, and physiology. Transp. Res. Part F Traffic Psychol. Behav.; 2007; 10, pp. 458-475. [DOI: https://dx.doi.org/10.1016/j.trf.2007.05.001]
10. Hassib, M.; Braun, M.; Pfleging, B.; Alt, F. Detecting and influencing driver emotions using psycho-physiological sensors and ambient light. Proceedings of the IFIP Conference on Human-Computer Interaction; Paphos, Cyprus, 2–6 September 2019; Springer: Cham, Switzerland, 2019; pp. 721-742.
11. Dong, W.; Shu, Z.; Yutong, L. Research hotspots and evolution analysis of domestic traffic psychology. Adv. Psychol.; 2019; 9, 13.
12. Villán, A.F. Facial attributes recognition using computer vision to detect drowsiness and distraction in drivers. ELCVIA Electron. Lett. Comput. Vis. Image Anal.; 2017; 16, pp. 25-28. [DOI: https://dx.doi.org/10.5565/rev/elcvia.1134]
13. Moslemi, N.; Soryani, M.; Azmi, R. Computer vision-based recognition of driver distraction: A review. Concurr. Comput. Pract. Exp.; 2021; 33, e6475. [DOI: https://dx.doi.org/10.1002/cpe.6475]
14. Morooka, F.E.; Junior, A.M.; Sigahi, T.F.; Pinto, J.D.S.; Rampasso, I.S.; Anholon, R. Deep learning and autonomous vehicles: Strategic themes, applications, and research agenda using SciMAT and content-centric analysis, a systematic review. Mach. Learn. Knowl. Extr.; 2023; 5, pp. 763-781. [DOI: https://dx.doi.org/10.3390/make5030041]
15. Dash, D.P.; Kolekar, M.; Chakraborty, C.; Khosravi, M.R. Review of machine and deep learning techniques in epileptic seizure detection using physiological signals and sentiment analysis. ACM Trans. Asian Low-Resour. Lang. Inf. Process.; 2024; 23, pp. 1-29. [DOI: https://dx.doi.org/10.1145/3552512]
16. Hu, Y.; Qu, T.; Liu, J.; Shi, Z.; Zhu, B.; Cao, D.; Chen, H. Research status and prospect of human-machine collaborative control of intelligent vehicles. Acta Autom. Sin.; 2019; 45, 20.
17. Hatoyama, K.; Nishioka, M.; Kitajima, M.; Nakahira, K.; Sano, K. Perception of time in traffic congestion and drivers’ stress. Proceedings of the International Conference on Transportation and Development 2019; Alexandria, VA, USA, 9–12 June 2019; American Society of Civil Engineers: Reston, VA, USA, 2019; pp. 165-174.
18. Stutts, J.; Reinfurt, D.; Rodgman, E. The role of driver distraction in crashes: An analysis of 1995–1999 Crashworthiness Data System Data. Annu. Proc. Assoc. Adv. Automot. Med.; 2001; 45, pp. 287-301.
19. Zhang, H.; Ni, D.; Ding, N.; Sun, Y.; Zhang, Q.; Li, X. Structural analysis of driver fatigue behavior: A systematic review. Transp. Res. Interdiscip. Perspect.; 2023; 21, 100865. [DOI: https://dx.doi.org/10.1016/j.trip.2023.100865]
20. Al-Mekhlafi, A.B.A.; Isha, A.S.N.; Naji, G.M.A. The relationship between fatigue and driving performance: A review and directions for future research. J. Crit. Rev.; 2020; 7, pp. 134-141.
21. Li, W.; Huang, J.; Xie, G.; Karray, F.; Li, R. A survey on vision-based driver distraction analysis. J. Syst. Archit.; 2021; 121, 102319. [DOI: https://dx.doi.org/10.1016/j.sysarc.2021.102319]
22. Goodsell, R.; Cunningham, M.; Chevalier, A. Driver Distraction: A Review of Scientific Literature; ARRB Report Project National Transportation Commission: Melbourne, Australia, 2019; 013817.
23. Bioulac, S.; Micoulaud-Franchi, J.A.; Arnaud, M.; Sagaspe, P.; Moore, N.; Salvo, F.; Philip, P. Risk of motor vehicle accidents related to sleepiness at the wheel: A systematic review and meta-analysis. Sleep; 2017; 40, zsx134. [DOI: https://dx.doi.org/10.1093/sleep/zsx134]
24. Klauer, S.G.; Guo, F.; Simons-Morton, B.G.; Ouimet, M.C.; Lee, S.E.; Dingus, T.A. Distracted driving and risk of road crashes among novice and experienced drivers. New Engl. J. Med.; 2014; 370, pp. 54-59. [DOI: https://dx.doi.org/10.1056/NEJMsa1204142]
25. Öztürk, İ.; Merat, N.; Rowe, R.; Fotios, S. The effect of cognitive load on Detection-Response Task (DRT) performance during day-and night-time driving: A driving simulator study with young and older drivers. Transp. Res. Part F Traffic Psychol. Behav.; 2023; 97, pp. 155-169. [DOI: https://dx.doi.org/10.1016/j.trf.2023.07.002]
26. Bassani, M.; Catani, L.; Hazoor, A.; Hoxha, A.; Lioi, A.; Portera, A.; Tefa, L. Do driver monitoring technologies improve the driving behaviour of distracted drivers? A simulation study to assess the impact of an auditory driver distraction warning device on driving performance. Transp. Res. Part F: Traffic Psychol. Behav.; 2023; 95, pp. 239-250. [DOI: https://dx.doi.org/10.1016/j.trf.2023.04.013]
27. Philip, P.; Sagaspe, P.; Taillard, J.; Valtat, C.; Moore, N.; Åkerstedt, T.; Charles, A.; Bioulac, B. Fatigue, sleepiness, and performance in simulated versus real driving conditions. Sleep; 2005; 28, pp. 1511-1516. [DOI: https://dx.doi.org/10.1093/sleep/28.12.1511] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/16408409]
28. Large, D.R.; Pampel, S.M.; Merriman, S.E.; Burnett, G. A validation study of a fixed-based, medium fidelity driving simulator for human–machine interfaces visual distraction testing. IET Intell. Transp. Syst.; 2023; 7, pp. 1104-1117. [DOI: https://dx.doi.org/10.1049/itr2.12362]
29. Chen, H.; Zhe, M.; Hui, Z. The influence of driver fatigue on takeover reaction time in human-machine co-driving environment. Proceedings of the World Transport Congress; Beijing, China, 4 November 2022; China Association for Science and Technology: Beijing, China, 2022; pp. 599-604.
30. Yongfeng, M.; Xin, T.; Shuyan, C. The impact of distracted driving behavior on ride-hailing vehicle control under natural driving conditions. Proceedings of the 17th China Intelligent Transportation Annual Conference; Macau, China, 8–12 October 2022; pp. 46-47.
31. Ngxande, M.; Tapamo, J.-R.; Burke, M. Driver drowsiness detection using behavioral measures and machine learning techniques: A review of state-of-art techniques. Proceedings of the 2017 Pattern Recognition Association of South Africa and Robotics and Mechatronics (PRASA-RobMech); Bloemfontein, South Africa, 30 November–1 December 2017; pp. 156-161.
32. Dinges, D.F.; Grace, R. PERCLOS: A Valid Psychophysiological Measure of Alertness as Assessed by Psychomotor Vigilance; Tech. Rep. MCRT-98-006 Federal Motor Carrier Safety Administration: Washington, DC, USA, 1998; [DOI: https://dx.doi.org/10.21949/1502740]
33. Feng, Z.; Jian, C.; Jingxin, C. Driver fatigue detection based on improved Yolov3. Sci. Technol. Eng.; 2022; 23, pp. 11730-11738.
34. Wang, J.; Yu, X.; Liu, Q.; Yang, Z. Research on key technologies of intelligent transportation based on image recognition and anti-fatigue driving. EURASIP J. Image Video Process.; 2019; 2019, pp. 1-13. [DOI: https://dx.doi.org/10.1186/s13640-018-0403-6]
35. Yang, H.; Liu, L.; Min, W.; Yang, X.; Xiong, X. Driver yawning detection based on subtle facial action recognition. IEEE Trans. Multimed.; 2020; 23, pp. 572-583. [DOI: https://dx.doi.org/10.1109/TMM.2020.2985536]
36. Li, K.; Gong, Y.; Ren, Z. A fatigue driving detection algorithm based on facial multi-feature fusion. IEEE Access; 2020; 8, pp. 101244-101259. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2998363]
37. You, F.; Li, X.; Gong, Y.; Wang, H.; Li, H. A real-time driving drowsiness detection algorithm with individual differences consideration. IEEE Access; 2019; 7, pp. 179396-179408. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2958667]
38. Zheng, H.; Wang, Y.; Liu, X. Adaptive Driver Face Feature Fatigue Detection Algorithm Research. Appl. Sci.; 2023; 13, 5074. [DOI: https://dx.doi.org/10.3390/app13085074]
39. Liu, W.; Qian, J.; Yao, Z.; Jiao, X.; Pan, J. Convolutional two-stream network using multi-facial feature fusion for driver fatigue detection. Future Internet; 2019; 11, 115. [DOI: https://dx.doi.org/10.3390/fi11050115]
40. Ahmed, M.; Masood, S.; Ahmad, M.; Abd El-Latif, A.A. Intelligent driver drowsiness detection for traffic safety based on multi CNN deep model and facial subsampling. IEEE Trans. Intell. Transp. Syst.; 2021; 23, pp. 19743-19752. [DOI: https://dx.doi.org/10.1109/TITS.2021.3134222]
41. Kim, W.; Jung, W.-S.; Choi, H.K. Lightweight driver monitoring system based on multi-task mobilenets. Sensors; 2019; 19, 3200. [DOI: https://dx.doi.org/10.3390/s19143200] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31330770]
42. He, J.; Chen, J.; Liu, J.; Li, H. A lightweight architecture for driver status monitoring via convolutional neural networks. Proceedings of the 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO); Dali, China, 6–8 December 2019; pp. 388-394.
43. Guo, Z.; Liu, Q.; Zhang, L.; Li, Z.; Li, G. L-TLA: A Lightweight Driver Distraction Detection Method Based on Three-Level Attention Mechanisms. IEEE Trans. Reliab.; 2024; 99, pp. 1-12. [DOI: https://dx.doi.org/10.1109/TR.2023.3348951]
44. Gupta, K.; Choubey, S.; Yogeesh, N.; William, P.; Kale, C.P. Implementation of motorist weariness detection system using a conventional object recognition technique. Proceedings of the 2023 International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT); Bengaluru, India, 5–7 January 2023; pp. 640-646.
45. Zhao, Z.; Xia, S.; Xu, X.; Zhang, L.; Yan, H.; Xu, Y.; Zhang, Z. Driver distraction detection method based on continuous head pose estimation. Comput. Intell. Neurosci.; 2020; 2020, pp. 1-10. [DOI: https://dx.doi.org/10.1155/2020/9606908]
46. Ansari, S.; Naghdy, F.; Du, H.; Pahnwar, Y.N. Driver mental fatigue detection based on head posture using new modified reLU-BiLSTM deep neural network. IEEE Trans. Intell. Transp. Syst.; 2021; 23, pp. 10957-10969. [DOI: https://dx.doi.org/10.1109/TITS.2021.3098309]
47. Xing, Y.; Lv, C.; Wang, H.; Cao, D.; Velenis, E.; Wang, F.Y. Driver activity recognition for intelligent vehicles: A deep learning approach. IEEE Trans. Veh. Technol.; 2019; 68, pp. 5379-5390. [DOI: https://dx.doi.org/10.1109/TVT.2019.2908425]
48. Ma, C.; Wang, H.; Li, J. Driver behavior recognition based on attention module and bilinear fusion network. Proceedings of the Second International Conference on Digital Signal and Computer Communications (DSCC 2022); Changchun, China, 8–10 April 2022; pp. 381-386.
49. Zhang, C.; Li, R.; Kim, W.; Yoon, D.; Patras, P. Driver behavior recognition via interwoven deep convolutional neural nets with multi-stream inputs. IEEE Access; 2020; 8, pp. 191138-191151. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3032344]
50. Fasanmade, A.; He, Y.; Al-Bayatti, A.H.; Morden, J.N.; Aliyu, S.O.; Alfakeeh, A.S.; Alsayed, A.O. A fuzzy-logic approach to dynamic bayesian severity level classification of driver distraction using image recognition. IEEE Access; 2020; 8, pp. 95197-95207. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2994811]
51. Xie, Z.; Li, L.; Xu, X. Real-time driving distraction recognition through a wrist-mounted accelerometer. Hum. Factors; 2022; 64, pp. 1412-1428. [DOI: https://dx.doi.org/10.1177/0018720821995000]
52. Wagner, B.; Taffner, F.; Karaca, S.; Karge, L. Vision based detection of driver cell phone usage and food consumption. IEEE Trans. Intell. Transp. Syst.; 2021; 23, pp. 4257-4266. [DOI: https://dx.doi.org/10.1109/TITS.2020.3043145]
53. Shen, K.Q.; Li, X.P.; Ong, C.J.; Shao, S.Y.; Wilder-Smith, E.P. EEG-based mental fatigue measurement using multi-class support vector machines with confidence estimate. Clin. Neurophysiol.; 2008; 119, pp. 1524-1533. [DOI: https://dx.doi.org/10.1016/j.clinph.2008.03.012] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/18468483]
54. Bundele, M.M.; Banerjee, R. Detection of fatigue of vehicular driver using skin conductance and oximetry pulse: A neural network approach. Proceedings of the 11th International Conference on Information Integration and Web-Based Applications & Services; Kuala Lumpur, Malaysia, 14–16 December 2009; pp. 739-744.
55. Chaudhuri, A.; Routray, A. Driver fatigue detection through chaotic entropy analysis of cortical sources obtained from scalp EEG signals. IEEE Trans. Intell. Transp. Syst.; 2019; 21, pp. 185-198. [DOI: https://dx.doi.org/10.1109/TITS.2018.2890332]
56. Li, G.; Yan, W.; Li, S.; Qu, X.; Chu, W.; Cao, D. A temporal–spatial deep learning approach for driver distraction detection based on EEG signals. IEEE Trans. Autom. Sci. Eng.; 2021; 19, pp. 2665-2677. [DOI: https://dx.doi.org/10.1109/TASE.2021.3088897]
57. Fu, R.; Wang, H. Detection of driving fatigue by using noncontact EMG and ECG signals measurement system. Int. J. Neural Syst.; 2014; 24, 1450006. [DOI: https://dx.doi.org/10.1142/S0129065714500063] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24552510]
58. Chen, L.L.; Zhao, Y.; Zhang, J.; Zou, J.Z. Automatic detection of alertness/drowsiness from physiological signals using wavelet-based nonlinear features and machine learning. Expert Syst. Appl.; 2015; 42, pp. 7344-7355. [DOI: https://dx.doi.org/10.1016/j.eswa.2015.05.028]
59. Fan, C.; Peng, Y.; Peng, S.; Zhang, H.; Wu, Y.; Kwong, S. Detection of train driver fatigue and distraction based on forehead EEG: A time-series ensemble learning method. IEEE Trans. Intell. Transp. Syst.; 2021; 23, pp. 13559-13569. [DOI: https://dx.doi.org/10.1109/TITS.2021.3125737]
60. Yang, J.; Chang, T.N.; Hou, E. Driver distraction detection for vehicular monitoring. Proceedings of the IECON 2010–36th Annual Conference on IEEE Industrial Electronics Society; Glendale, AZ, USA, 7–10 November 2010; pp. 108-113.
61. Wang, X.; Xu, R.; Zhang, S.; Zhuang, Y.; Wang, Y. Driver distraction detection based on vehicle dynamics using naturalistic driving data. Transp. Res. Part C Emerg. Technol.; 2022; 136, 103561. [DOI: https://dx.doi.org/10.1016/j.trc.2022.103561]
62. Sun, Q.; Wang, C.; Guo, Y.; Yuan, W.; Fu, R. Research on a cognitive distraction recognition model for intelligent driving systems based on real vehicle experiments. Sensors; 2020; 20, 4426. [DOI: https://dx.doi.org/10.3390/s20164426]
63. Zhenyu, W.; Yong, L.; Jianxi, L. Fatigue detection system based on multi-source information source fusion. Electron. Des. Eng.; 2022; 19, 30.
64. Du, Y.; Raman, C.; Black, A.W.; Morency, L.P.; Eskenazi, M. Multimodal polynomial fusion for detecting driver distraction. arXiv; 2018; arXiv:1810.10565
65. Abbas, Q.; Ibrahim, M.E.; Khan, S.; Baig, A.R. Hypo-driver: A multiview driver fatigue and distraction level detection system. Comput. Mater. Contin.; 2022; 71, pp. 1999-2017.
66. Lu, J.; Peng, Z.; Yang, S.; Ma, Y.; Wang, R.; Pang, Z.; Feng, X.; Chen, Y.; Cao, Y. A review of sensory interactions between autonomous vehicles and drivers. J. Syst. Archit.; 2023; 141, 102932. [DOI: https://dx.doi.org/10.1016/j.sysarc.2023.102932]
67. Wogalter, M.S.; Mayhorn, C.B.; Laughery, K.R., Sr. Warnings and hazard communications. Handbook of Human Factors and Ergonomics; John Wiley & Sons: Hoboken, NJ, USA, 2021; pp. 644-667.
68. Sun, X.; Zhang, Y. Improvement of autonomous vehicles trust through synesthetic-based multimodal interaction. IEEE Access; 2021; 9, pp. 28213-28223. [DOI: https://dx.doi.org/10.1109/ACCESS.2021.3059071]
69. Murali, P.K.; Kaboli, M.; Dahiya, R. Intelligent In-Vehicle Interaction Technologies. Adv. Intell. Syst.; 2022; 4, 2100122. [DOI: https://dx.doi.org/10.1002/aisy.202100122]
70. Zhou, X.; Zheng, J.; Zhang, W. Intelligent Connected Vehicle Information System (CVIS) for Safer and Pleasant Driving. Human-Automation Interaction: Transportation; Springer: Cham, Switzerland, 2022; pp. 469-479.
71. Rosekind, M.R.; Michael, J.P.; Dorey-Stein, Z.L.; Watson, N.F. Awake at the wheel: How auto technology innovations present ongoing sleep challenges and new safety opportunities. Sleep; 2023; 47, zsad316. [DOI: https://dx.doi.org/10.1093/sleep/zsad316] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38109232]
72. Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A survey of autonomous driving: Common practices and emerging technologies. IEEE Access; 2020; 8, pp. 58443-58469. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2983149]
73. Geisslinger, M.; Poszler, F.; Betz, J.; Lütge, C.; Lienkamp, M. Autonomous driving ethics: From trolley problem to ethics of risk. Philos. Technol.; 2021; 34, pp. 1033-1055. [DOI: https://dx.doi.org/10.1007/s13347-021-00449-4]
74. Petit, J.; Shladover, S.E. Potential cyberattacks on automated vehicles. IEEE Trans. Intell. Transp. Syst.; 2014; 16, pp. 546-556. [DOI: https://dx.doi.org/10.1109/TITS.2014.2342271]
75. Parkinson, S.; Ward, P.; Wilson, K.; Miller, J. Cyber threats facing autonomous and connected vehicles: Future challenges. IEEE Trans. Intell. Transp. Syst.; 2017; 18, pp. 2898-2915. [DOI: https://dx.doi.org/10.1109/TITS.2017.2665968]
76. Petrillo, A.; Pescape, A.; Santini, S. A secure adaptive control for cooperative driving of autonomous connected vehicles in the presence of heterogeneous communication delays and cyberattacks. IEEE Trans. Cybern.; 2020; 51, pp. 1134-1149. [DOI: https://dx.doi.org/10.1109/TCYB.2019.2962601] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31995510]
77. Guo, J.; Li, L.; Wang, J.; Li, K. Cyber-physical system-based path tracking control of autonomous vehicles under cyber-attacks. IEEE Trans. Ind. Inform.; 2022; 19, pp. 6624-6635. [DOI: https://dx.doi.org/10.1109/TII.2022.3206354]
78. Sheehan, B.; Murphy, F.; Mullins, M.; Ryan, C. Connected and autonomous vehicles: A cyber-risk classification framework. Transp. Res. Part A Policy Pract.; 2019; 124, pp. 523-536. [DOI: https://dx.doi.org/10.1016/j.tra.2018.06.033]
79. Van Wyk, F.; Wang, Y.; Khojandi, A.; Masoud, N. Real-time sensor anomaly detection and identification in automated vehicles. IEEE Trans. Intell. Transp. Syst.; 2019; 21, pp. 1264-1276. [DOI: https://dx.doi.org/10.1109/TITS.2019.2906038]
80. Wang, Y.; Liu, Q.; Mihankhah, E.; Lv, C.; Wang, D. Detection and isolation of sensor attacks for autonomous vehicles: Framework, algorithms, and validation. IEEE Trans. Intell. Transp. Syst.; 2021; 23, pp. 8247-8259. [DOI: https://dx.doi.org/10.1109/TITS.2021.3077015]
81. Li, T.; Shang, M.; Wang, S.; Filippelli, M.; Stern, R. Detecting stealthy cyberattacks on automated vehicles via generative adversarial networks. Proceedings of the 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC); Macau, China, 8–12 October 2022; pp. 3632-3637.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Detecting the factors that affect drivers’ safe driving and issuing early warnings can effectively reduce the probability of automobile safety accidents and improve vehicle driving safety. Focusing on the two factors of driver fatigue and distraction, their influences on driver behavior are elaborated using both experimental data and an accident-database analysis. Organized into three modes and six types, intelligent detection methods for driver fatigue and distraction from the past five years are reviewed in detail. Given its wide range of applications, research on machine vision detection based on facial features over the past five years is analyzed, and the methods are classified and compared according to their innovations. Further, three safety warning and response schemes are proposed in light of the development of autonomous driving and intelligent cockpit technology. Finally, the paper summarizes the current state of research in the field, presents five conclusions, and discusses future trends.
Details
1 School of Mechanical-Electronic and Vehicle Engineering, Beijing University of Civil Engineering and Architecture, Beijing 102616, China; Department of Standard and Quota, Ministry of Housing and Urban-Rural Development, Beijing 100835, China
2 School of Mechanical-Electronic and Vehicle Engineering, Beijing University of Civil Engineering and Architecture, Beijing 102616, China
3 School of Transportation Science and Engineering, Beihang University, Beijing 102206, China