1. Introduction
Because of the uncertainty of the driver's state, the variability of road conditions, and the complexity of the traffic environment, driving behavior has become an important factor affecting traffic safety. Driver behavior states include fatigue driving, distracted driving, and drunk driving, while vehicle states mainly include acceleration, deceleration, turning, lane changing, and car-following. With the development of sensing and driver-assistance technologies, high-performance sensors of all kinds have been widely deployed in vehicles. They can generally be divided into vision sensors (such as cameras) and time-series sensors (such as speed and acceleration sensors). At the same time, with the popularization of the CAN bus, on-board terminals, and Internet of Vehicles technology, collecting driver-state and vehicle-state data over the on-board CAN bus under natural driving conditions has become the mainstream approach, and the resulting massive and accurate sensor data provide strong support for driving behavior analysis. By fusing these data, constructing models to identify or predict driving behaviors, and taking corrective measures against abnormal and dangerous behaviors, vehicle driving safety can be improved and road traffic accidents caused by drivers can be reduced.
Driving behavior recognition mainly covers regular behavior recognition and risky behavior recognition. Regular driving behaviors are those that appear in normal driving, such as car-following and routine lane changing, while risky driving behavior recognition targets nonstandard driving (e.g., distraction), abnormal driver actions (e.g., abrupt maneuvers or following too closely), and obvious changes in physiological parameters (e.g., fatigue). Driving behavior recognition is a pattern recognition process: the selection of an appropriate information fusion method is the basis of accurate identification, and the construction of a reasonable mathematical model is the key. Driving behavior is driven by multidimensional, complex, random, and uncertain external factors, and the coupling and multilevel characteristics of road conditions, traffic conditions, vehicle status, and driver behavior make the driving process chaotic and nonlinear; how to construct a driving behavior recognition model has therefore long been a technical difficulty and a research hotspot. There are many methods for constructing driving behavior recognition models (as shown in Figure 1). Most existing models are traditional models based on empirical rule inference and mathematical statistical analysis, such as the Hidden Markov Model (HMM) [1–3], Gaussian Mixture Model (GMM) [4–6], random forest (RF) [7–10], support vector machine (SVM) [11–13], Naive Bayes (NB) [14–16], Fuzzy Neural Network (FNN) [17], and K-Nearest Neighbor (KNN) [18, 19].
[figure(s) omitted; refer to PDF]
With the progress of big data and artificial intelligence technologies, deep learning models with many hidden layers have attracted increasing attention from researchers. A deep learning model is a neural network with multiple hidden layers and large-scale parameters such as neuron connection weights and thresholds. Such models can automatically extract deep temporal and spatial features from multidimensional, complex, uncertain, incomplete, and coupled time-series data, which gives them great advantages in feature learning; they can also be integrated organically with the classifier to realize end-to-end learning and significantly improve recognition accuracy. Some researchers have adopted the deep neural network (DNN) [20, 21], convolutional neural network (CNN) [22–29], and recurrent neural network (RNN) [30–37] to construct driving behavior recognition models, with good results. In recent years, with the widespread application of on-board sensors and CAN bus technology in cars, driving behavior data from natural driving have been collected and stored, providing massive data samples for building deep learning models, and driving behavior recognition methods are gradually evolving toward deep learning. However, deep learning algorithms require more data to establish a valid model, and their training time may be longer than that of traditional machine learning algorithms. Moreover, with the establishment of many driving behavior recognition models, both traditional and deep learning-based, it has become difficult and confusing to choose a suitable one when developing a driving behavior recognition system.
Previous reviews have addressed subsets of driving behaviors such as drowsiness [38], distraction [38], lane changing [39], and car-following [40], but their recommendations are limited to specific areas. For example, Kaplan et al. [38] presented a review of driving behavior analysis for safe driving covering drowsiness and distraction; they focus on identification technologies for drowsiness and distraction rather than comparisons of driving behavior recognition models. Koesdwiady et al. [39] and Saifuzzaman and Zheng [40] reviewed specific driving behaviors without going into the details of different models and performance comparisons. None of these studies specifically focuses on comparing driving behavior recognition methods. In the article most closely related to our work, Abou Elassad et al. [41] review four traditional machine learning algorithms (SVM, NN, EL, and EB) for driving behavior analysis; the authors compile statistics on how often models and performance indicators are used, analyze the distribution of commonly used evaluation indicators, and briefly analyze the performance differences among the four models. Mozaffari et al. [42] present a survey only of deep learning approaches such as RNN and CNN for autonomous vehicle behavior prediction; they mainly introduce the model structures and do not compare the performance and characteristics of traditional machine learning models. However, to the best of our knowledge, systematic and comparative reviews that survey the differences between traditional machine learning-based and deep learning-based methods are not yet available in this field.
Therefore, we present a review aimed at summarizing the differences between traditional machine learning-based and deep learning-based approaches applied in driving behavior studies in recent years. In addition, other content is also summarized: first, the common data processing and information fusion methods are introduced based on vehicle CAN bus data; then, several driving behavior recognition models based on machine learning and deep learning are reviewed; finally, the advantages and disadvantages of these models are compared and analyzed, and conclusions and prospects are given. The purpose of this paper is to provide a reference for researchers in the field of driving behavior recognition and theoretical support for the development of the traffic safety field.
The main contributions of this paper are given as follows:
(i) Elaboration of recent research on driving behavior recognition models and algorithms, to help researchers develop optimal methods and establish recognition models with higher accuracy and performance
(ii) A description of the characteristics of vehicle sensor information, to help researchers find the relevant sensor data, understand the main feature processing methods, and design multisensor data sampling systems for driving behavior data
(iii) Identification of gaps in current state-of-the-art recognition models that can be addressed by future driving behavior research, together with an overview of future research opportunities
The structure of the paper is as follows: Section 2 introduces vehicle sensor information and the vehicle CAN bus data acquisition system. Section 3 reviews the research progress of several driving behavior recognition models. Finally, conclusions and prospects are given.
2. The Information of Vehicle Sensors
With the development of intelligent, networked, and electric vehicle technologies, all kinds of on-board sensing devices such as GPS, gyroscopes, cameras, radar, and the CAN bus have been widely used in modern vehicles. These devices not only provide reliable information input for controlling the safe operation of the vehicle but also accurately record the behavior state of the driver and the vehicle. This sensor information can be divided into six categories along the driver and vehicle dimensions: driver status information, driver vehicle-control information, vehicle control state information, vehicle driving state information, road state information, and environment state information. This state information contains abundant characteristics of driving behavior; the data most strongly correlated with driving behavior characteristics can be mined, and accurate identification of different driving behaviors can then be achieved by information fusion methods. The various types of data are shown in Table 1, and the typical on-board sensors and CAN bus structure are shown in Figure 2.
Table 1
The main vehicle sensor information.
Category | The information of vehicle sensors | Main data | Data acquisition device or sensor
Drivers | Status information of drivers | Camera data on fatigue, distraction, drunk driving, or ill driving | Video surveillance system
Drivers | Control information of drivers | Accelerator pedal opening, brake pedal opening, hand brake, steering wheel angle, etc. | Vehicle-mounted terminals such as dashcam
Vehicles | Control status information | Switch signal, torque, speed, voltage, current, etc. | Vehicle data recorder
Vehicles | Vehicle driving information | Speed, acceleration, turn signal, brake signal, location, distance, etc. | Vehicle data recorder, gyroscope, GPS units, radar
Roads | Status information of road | Track, lane line, signal lamp, intersection, zebra crossing, and other data | Cameras
Environment | Environment information | Temperature, snow, rain | Vehicle data recorder
[figure(s) omitted; refer to PDF]
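As a concrete illustration of how such CAN bus signals might be sampled in practice, the following is a minimal Python sketch using the python-can library. The channel name, the arbitration IDs for the speed and steering-angle frames, and the idea of logging only those frames are illustrative assumptions; real IDs and signal encodings come from the vehicle manufacturer's DBC definition and differ between vehicles.

```python
import csv
import time

import can  # python-can library

# Hypothetical arbitration IDs; real values depend on the vehicle's DBC file.
SPEED_FRAME_ID = 0x244
STEERING_FRAME_ID = 0x025

def log_can_signals(duration_s=60.0, out_path="can_log.csv"):
    """Record raw speed/steering frames from the CAN bus into a CSV file."""
    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    start = time.time()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "frame_id", "raw_bytes"])
        while time.time() - start < duration_s:
            msg = bus.recv(timeout=1.0)  # blocking read with timeout
            if msg is None:
                continue
            if msg.arbitration_id in (SPEED_FRAME_ID, STEERING_FRAME_ID):
                writer.writerow([msg.timestamp,
                                 hex(msg.arbitration_id),
                                 msg.data.hex()])
    bus.shutdown()

if __name__ == "__main__":
    log_can_signals()
```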
Different on-board sensors perceive driver and vehicle states in different ways. Mining the on-board multisensor data that correlate strongly with driving behavior characteristics is the key to accurately identifying driving behaviors. Features relevant to driving behavior recognition can be extracted from these data by feature extraction methods [43], which obtain new feature representations by mapping the original features into a new space and include feature dimension reduction and feature selection. The commonly used feature extraction methods are shown in Figure 3.
[figure(s) omitted; refer to PDF]
Feature dimension reduction reduces the dimensionality of the data by mapping data points from a high-dimensional space to a low-dimensional space. Feature selection finds an optimal feature subset by eliminating irrelevant or redundant features and includes filter, wrapper, and embedded methods. Filter methods select features directly from the data set, whereas wrapper methods repeatedly select subsets from the initial feature set and train the learner on them until the best subset is found; their performance is better than that of filter methods, but the repeated training of the learner makes the computational cost high. Embedded methods perform feature selection automatically during training by closely combining the feature selection process with the training of the learner. For example, a convolutional neural network can automatically extract deep spatiotemporal features of the data through convolution and pooling operations and connect seamlessly with the classifier.
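To make these three ideas concrete, the following is a minimal scikit-learn sketch on synthetic data standing in for handcrafted sensor features; the feature counts and the specific estimators (PCA for dimension reduction, an ANOVA F-test filter, and a tree ensemble as the embedded selector) are illustrative choices, not methods prescribed by the studies reviewed here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-in for on-board sensor features (speed, acceleration,
# steering angle, pedal openings, ...): 500 samples, 20 candidate features.
X, y = make_classification(n_samples=500, n_features=20, n_informative=6,
                           random_state=0)

# Feature dimension reduction: project the data into an 8-dimensional subspace.
X_pca = PCA(n_components=8).fit_transform(X)

# Filter-style feature selection: keep the 8 features most related to the
# behavior label according to an ANOVA F-test.
X_filtered = SelectKBest(score_func=f_classif, k=8).fit_transform(X, y)

# Embedded selection: a tree ensemble ranks features while it trains.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top_features = np.argsort(rf.feature_importances_)[::-1][:8]

print(X_pca.shape, X_filtered.shape, top_features)
```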
3. The Recognition Model of Driving Behavior
Driving behavior recognition is a pattern recognition process. Accurate identification or prediction of driving behavior plays an important role in developing high-performance driver assistance systems and reducing driver-caused accidents. Driving behavior recognition models can generally be divided into two categories. The first is the traditional machine learning model, usually based on empirical rule reasoning and mathematical statistical analysis, such as HMM, GMM, SVM, RF, NB, FNN, and KNN; among these, RF and SVM are widely used for constructing driving behavior recognition models. The second is the deep learning model, a neural network with multiple hidden layers and large-scale parameters such as neuron connection weights and thresholds, including the deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN). CNN and RNN can automatically extract deep spatiotemporal features of driving behavior data and integrate feature extraction and recognition or prediction into one model to achieve end-to-end learning with high recognition accuracy, so they are widely used in constructing driving behavior recognition models. Deep learning models have become popular with the development of big data sampling technology and computer hardware. The basic principles, characteristics, and research status of random forest, support vector machine, convolutional neural network, and recurrent neural network are summarized as follows.
3.1. The Random Forest
The random forest model is a classification and regression method based on decision trees (the principle is shown in Figure 4). Multiple training sets are generated by bootstrap sampling with replacement (bagging), a decision tree is built for each training set, and the final output is determined by a vote over these trees. During the training of each tree, the optimal attribute for node splitting is selected from a randomly chosen subset of attributes, which reduces the variance of the model and helps avoid overfitting. The method is suitable for processing high-dimensional, multifeature data and is characterized by fast learning and high classification accuracy [44].
[figure(s) omitted; refer to PDF]
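As an illustration of how such a model might be applied to labeled driving behavior features, the following is a minimal scikit-learn sketch; the synthetic feature matrix, the behavior labels, and the hyperparameter values are placeholders standing in for real preprocessed CAN bus or image-derived features, not the settings of any cited study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data: rows are driving samples described by handcrafted features
# (e.g., mean speed, steering entropy); labels are behavior classes such as
# normal / fatigued / distracted driving.
X, y = make_classification(n_samples=1000, n_features=15, n_classes=3,
                           n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

# Bagging of decision trees with a random attribute subset at each split.
clf = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                             random_state=42)
clf.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```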
Random forest has been widely used in many fields with great success, and some researchers have used it to construct driving behavior recognition models. In terms of fatigue driving recognition, McDonald et al. [45] applied the random forest model to design an algorithm that detects fatigue driving from steering wheel angle data. Mårtensson et al. [46] pointed out that the random forest algorithm outperforms other mainstream classifiers in identifying tired driving, offering good accuracy, robustness, and computational efficiency, and that it can adapt to small-sample training environments. Cai et al. [47] adopted the random forest model to identify fatigue driving from vehicle running data using manually extracted features, with an accuracy of only 78.5%. Dong and Lin [48] used the random forest model to identify fatigue driving from facial image data, with an accuracy of 91%. Some scholars have also used random forest to build recognition models of distracted driving; for example, Ahangari et al. [7] used a random forest classifier to identify six common distracted driving behaviors with an accuracy of 76.5%, and Yao et al. [49] established a random forest model to identify distracted driving based on machine vision and behavioral data, with an accuracy of 90%. The above literature shows that the accuracy of driving behavior recognition varies widely, which is closely related to sensor data fusion and feature extraction. Other scholars have used the random forest model to identify driving behaviors such as car-following [50, 51], lane change [8], and being followed [9] or to evaluate driving safety risks [10] and identify drivers [52, 53], driving style [54], and driving posture [55]. Other researchers [56] have combined random forest with other methods to construct driving behavior recognition models.
In conclusion, the random forest method has been widely used to construct driving behavior recognition models. It can be applied to the identification of various driving behaviors, and the recognition accuracy is generally greater than 75%, although the results fluctuate considerably across application scenarios. Its application fields and corresponding advantages and disadvantages are summarized in Table 2.
Table 2
The characteristics analysis of the random forest model.
Types of driving behavior | Applications | Advantages and disadvantages |
Fatigue driving [45–48] | Used for identification of fatigue driving behavior, with large accuracy fluctuations | Advantages: it can be used for the identification of various driving behaviors; the algorithm is simple and easy to implement, the computational cost is small, and it is not prone to overfitting; it is suitable for classifying high-dimensional, multifeature, small-sample data, and the accuracy is usually above 75%.
Distracted driving [7, 49] | Used for distracted driving behavior recognition |
Car-following [50, 51] | Used for identification of car-following driving behavior |
Lane change [8, 56] | Used for lane change driving behavior recognition |
Others [9, 10, 52–55] | Used for acceleration and deceleration, turning, lane change, being followed, and driver, driving style, and driving posture recognition |
3.2. Support Vector Machines
The support vector machine is a supervised learning model that was first used to solve binary classification of linearly separable data. Its main goal is to find the hyperplane that best divides the data set into the desired classes; that is, it seeks an optimal hyperplane that maximizes the margin between the hyperplane and the two classes of sample points while ensuring classification accuracy [57]. The principle is shown in Figure 5.
[figure(s) omitted; refer to PDF]
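To make this concrete, the following is a minimal scikit-learn sketch of an SVM classifier with an RBF kernel applied to placeholder driving features; the synthetic data and the hyperparameter values are illustrative assumptions rather than settings used in the cited studies.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder binary task, e.g., fatigued vs. alert driving, described by a
# small number of handcrafted features.
X, y = make_classification(n_samples=600, n_features=10, n_informative=5,
                           random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=1)

# Feature scaling matters for SVMs; the RBF kernel handles nonlinear boundaries.
model = make_pipeline(StandardScaler(),
                      SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```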
SVM is characterized by strong learning ability on small samples and good model generalization and has been used by some scholars to build driving behavior recognition models. For example, Wang et al. [11] used an SVM model to identify driving behavior during car-following, with an accuracy of 80.8%. Savaş and Becerikli [58] established a fatigue driving recognition model using SVM, achieving a recognition rate of 97.93%, whereas You et al. [59] used SVM to establish a fatigue driving identification model with an accuracy of only 80.83%. Amsalu et al. [12] constructed an SVM model for identifying driving intentions at intersections, estimating driver intentions from natural driving data with a recognition accuracy of 97%. Tomar and Verma [60] established an SVM model to identify lane change and car-following behaviors by preprocessing and constructing features from real vehicle trajectory samples, with an accuracy of 98.41%. After comparing the performance of different models, Liao et al. [61] concluded that the support vector machine has certain advantages in identifying distracted driving behaviors, although its recognition results fluctuate greatly and its performance is unstable; they also pointed out that deep learning models identify distracted driving behaviors with high accuracy. Some researchers have improved recognition performance by combining SVM with other methods. For example, Yang et al. [13] classified and evaluated the safety levels of different driving behaviors by combining SVM and a decision tree model, and Wang et al. [62] proposed a fatigue driving classification method that uses a CNN to extract features followed by an SVM, with an accuracy of 94%.
To sum up, SVM has been widely used to build driving behavior recognition models. It can identify various driving behaviors such as fatigue, distraction, car-following, and lane changing, and the recognition accuracy is usually above 80%. Its application fields and corresponding advantages and disadvantages are analyzed in Table 3.
Table 3
The characteristics analysis of SVM.
Types of driving behavior | Applications | Advantages and disadvantages |
Fatigue driving [58, 59, 62] | Used to identify tired driving behavior | Advantages: suitable for high-dimensional, small-sample classification problems, with good model generalization performance; low memory usage and fast computation; kernel functions handle nonlinear decision boundaries well.
Distracted driving [61, 63, 64] | Used to identify distracted driving behavior |
Car-following [11, 60] | Used to identify car-following driving behavior |
Lane change [65, 66] | Used to identify lane change driving behavior |
Others [12, 13, 67, 68] | Used to identify sharp deceleration, sharp steering, lane departure, and other abnormal driving behavior |
3.3. Convolutional Neural Network
As a representative deep learning model, the convolutional neural network is a feedforward neural network with a deep structure that includes convolutional computation. Inspired by the biological receptive field mechanism, it generally consists of an input layer, convolutional layers, pooling layers, a fully connected layer, and an output layer (as shown in Figure 6). The convolutional layer extracts features from a local region, and the pooling layer selects features and reduces their number. A CNN extracts features from the input by stacking convolutional and pooling layers and then maps them to the output targets in the fully connected layer. Each convolutional layer contains multiple feature maps, and through multilayer processing the initial shallow feature representation is gradually transformed into a deep feature representation. CNN uses convolution instead of full connection, which gives it the properties of local connectivity, weight sharing, and pooling; compared with a fully connected feedforward neural network, it greatly reduces the number of parameters.
[figure(s) omitted; refer to PDF]
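As a sketch of what such a network might look like for driving behavior images, the following PyTorch module stacks two convolution/pooling blocks and a fully connected classifier; the input resolution (3x64x64) and the number of behavior classes are illustrative assumptions, not the architectures of the models reviewed below.

```python
import torch
import torch.nn as nn

class DrivingBehaviorCNN(nn.Module):
    """Small CNN for classifying driver images into behavior classes."""

    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # local feature extraction
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling halves the feature map
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 64x64 input -> two 2x2 poolings -> 16x16 feature maps
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, start_dim=1)
        return self.classifier(x)

# Forward pass on a dummy batch of 8 RGB frames at 64x64 resolution.
model = DrivingBehaviorCNN(num_classes=5)
logits = model(torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 5])
```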
CNN achieved its first large-scale successes in image recognition and speech processing [69, 70]. Xing et al. [23] built a CNN model for fatigue driving recognition that automatically extracts features from drivers' facial images and identifies the fatigue state, with an accuracy of 87.5%. Jabbar et al. [71] built a CNN model to automatically extract deep features of the driver's face with and without glasses to identify the driver's fatigue state, with recognition accuracies of 88% and 85%, respectively. There are also many successful cases of identifying distracted driving behaviors by extracting driver posture images with CNN. For example, Eraqi et al. [24] established a CNN model that uses driver posture images to identify distracted driving behaviors, with recognition accuracy reaching 90%. Baheti et al. [25] established a distracted driving recognition model within the improved VGG-16 convolutional framework, and the recognition accuracy reached 95.54%. Majdi et al. [72] applied CNN and a decision tree algorithm to construct Drive-Net, a behavior recognition model for distracted driving, with recognition accuracy reaching 95%. Liu et al. [73] jointly applied ResNet50, InceptionV3, and Xception for model pretraining and adopted transfer learning to extract driving behavior characteristics, constructing a hybrid CNN model for identifying distracted driving behavior with recognition accuracy reaching 96.74%. In recent years, many researchers have developed new convolutional network models based on the basic CNN model, such as VGG, ResNet, and InceptionNet, and begun to apply them to driving behavior recognition. For example, Srinivasan et al. [74] compared the recognition performance of CNN models such as VGG16, ResNet, and Xception on distracted driving and found that the ResNet model had the highest recognition accuracy. Some researchers have also built CNN models to extract vehicle-state features for predicting lane changing and car-following behaviors. For example, Azadani and Boukerche [26] designed a continuous-time convolutional network architecture to identify lane-changing behaviors, with a recognition accuracy of 95.3%. Xie et al. [75] constructed a CNN driving behavior recognition model to recognize lane change, braking, and other driving behaviors with an accuracy of 87.67%, better than KNN and RF. Lee et al. [76] adopted CNN for lane change behavior recognition, with an average accuracy of 89.87%, better than HMM (81.14%). For engineering applications, some researchers have produced lightweight CNN designs; for example, Qin et al. [77] constructed a lightweight CNN model for identifying distracted driving, and the recognition accuracy reached 95.59%. At present, CNNs tend to use smaller convolutional kernels, deeper structures, and fewer pooling layers and are gradually developing into fully convolutional networks.
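The transfer-learning pattern mentioned above (a pretrained backbone fine-tuned on driving images) can be sketched as follows with torchvision. The number of distraction classes and the choice to freeze the backbone are assumptions made for illustration, not details of the cited studies.

```python
import torch.nn as nn
from torchvision import models

def build_distraction_model(num_classes=10, freeze_backbone=True):
    """ResNet50 pretrained on ImageNet, with a new head for distraction classes."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for param in backbone.parameters():
            param.requires_grad = False  # reuse pretrained features as-is
    # Replace the final fully connected layer with a new classifier head.
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone

model = build_distraction_model(num_classes=10)
```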
In conclusion, driving behavior recognition models based on CNN have achieved good results in recognizing various driving behaviors such as fatigue, distraction, and lane change, with accuracies generally above 85%. The application fields and corresponding advantages and disadvantages are summarized in Table 4.
Table 4
The characteristic analysis of CNN.
Types of driving behavior | Applications | Advantages and disadvantages |
Fatigue driving [23, 69, 70, 78–82] | Used to identify tired driving behavior | Advantages: suitable for large, high-dimensional data samples; reduces the number of parameters through weight sharing and pooling; extracts features automatically and integrates with the classifier to achieve end-to-end learning; high accuracy and robust to noise.
Distracted driving [24, 25, 72–74, 77, 83–85] | Used to identify distracted driving behavior |
Lane change [26, 75, 76, 86–88] | Used to identify lane change driving behavior |
Others [83, 89–92] | Used to identify driving behavior, driver identity, driving style, and so on |
3.4. Recurrent Neural Network
The recurrent neural network (RNN) is a network structure built on the deep neural network that learns associations in time-series data by cyclically applying the hidden-layer weights along the time axis, which gives the network a certain short-term memory ability. A hidden-layer neuron of an RNN accepts not only information from other neurons but also its own previous output, forming a network structure with loops, as shown in Figure 7. By using self-feedback neurons, an RNN can process time-series data of any length and can theoretically approximate any nonlinear dynamic system. The internal structure of a simple recurrent neural network is shown in Figure 8.
[figure(s) omitted; refer to PDF]
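As a sketch of how recurrent models are typically applied to such sensor sequences, the following PyTorch module feeds a window of CAN-derived features through an LSTM and classifies the behavior from the final hidden state; the sequence length, feature count, and number of classes are illustrative assumptions rather than configurations from the studies cited below.

```python
import torch
import torch.nn as nn

class DrivingBehaviorLSTM(nn.Module):
    """LSTM classifier over a window of time-series sensor features."""

    def __init__(self, num_features=6, hidden_size=64, num_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_features, hidden_size=hidden_size,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, time_steps, num_features), e.g., speed, acceleration,
        # steering angle, pedal openings sampled over a sliding window.
        output, (h_n, c_n) = self.lstm(x)
        return self.head(h_n[-1])  # classify from the last layer's final hidden state

# Dummy batch: 16 windows of 50 time steps with 6 features each.
model = DrivingBehaviorLSTM()
logits = model(torch.randn(16, 50, 6))
print(logits.shape)  # torch.Size([16, 4])
```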
The driver and vehicle states collected by vehicle sensors are high-dimensional nonlinear data with time-series relationships, and RNN performs well on such data. Scholars have explored RNNs extensively for constructing driving behavior models and achieved good results. For example, Li et al. [31] and Wei et al. [93] used RNN to predict lane change behavior, with accuracies of 96% and 93.5%, respectively. EdDoughmi et al. [32] identified fatigued driving by feeding video streams of drivers' faces into a 3D-RNN model, with an accuracy of 92%. To solve the long-term dependency problem caused by exploding and vanishing gradients in RNNs, researchers have developed gated variants such as the Long Short-Term Memory (LSTM) network and the Gated Recurrent Unit (GRU) network. LSTM addresses long-range dependence by introducing an input gate, a forget gate, and an output gate to control the path of information transmission. Several researchers have used LSTM for driving behavior identification. For example, Griesbach et al. [34] applied LSTM to predict lane change behavior, with an accuracy of 90%. Zhang et al. [94] constructed a DeepConvLSTM model in an RNN framework to identify different driving behaviors by introducing the LSTM gating mechanism, with an accuracy of 95.19%, far better than the random forest model (87.39%). LSTM has also been used to identify fatigued and distracted driving: Yarlagadda et al. [95] used LSTM to predict driver fatigue state with a recognition accuracy of 97.25%, and Sun et al. [96] adopted an LSTM model to identify distracted driving with an accuracy of 91%, better than SVM. GRU differs from LSTM in principle: it handles long-term dependence with a gate that controls the balance between new input and forgetting. Some researchers have applied GRU to driving behavior identification; for example, Tang et al. [97] built a DeepConvGRU model that identifies driving behaviors with an accuracy of 95.08%, and Yan et al. [98] constructed a GRU model to predict lane change behavior with an accuracy of 94.76%, better than LSTM (93.47%).
To exploit the advantages of different deep learning models, some researchers combine RNN with other models to improve driving behavior recognition performance. For example, Mafeni et al. [99] jointly applied a Bi-LSTM and an InceptionV3 CNN to construct a distracted driving model (as shown in Figure 9) and identified distracted driving behaviors from driving posture pictures, with an accuracy of 93.1%. Wollmer et al. [33] combined RNN with random forest (RF) to identify distracted driving behavior, with an accuracy of 95%. Zhang et al. [100] proposed a composite model that combines a multiscale convolutional neural network (MSCNN) and a Bi-LSTM to identify driving behaviors from vehicle trajectory data, with a highest accuracy of 97.75%. Zhang et al. [101] built a vehicle driving behavior recognition model based on joint data augmentation with a multiview convolutional neural network, and its recognition accuracy was better than that of CNN, RNN, LSTM, CNN+LSTM, and other models. Xing et al. [102] jointly applied LSTM and Bi-RNN to establish a lane change behavior recognition model, reaching an accuracy of 96.1% and shortening the prediction time.
[figure(s) omitted; refer to PDF]
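A minimal sketch of such a hybrid architecture is given below: a 1D convolutional front end extracts local patterns from the sensor sequence, and a bidirectional LSTM models longer-range temporal dependencies. All layer sizes are illustrative assumptions rather than the configurations of the cited models.

```python
import torch
import torch.nn as nn

class ConvBiLSTM(nn.Module):
    """1D CNN feature extractor followed by a bidirectional LSTM classifier."""

    def __init__(self, num_features=6, num_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(num_features, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),                    # halves the time dimension
        )
        self.bilstm = nn.LSTM(input_size=32, hidden_size=64,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, num_classes)

    def forward(self, x):
        # x: (batch, time_steps, num_features)
        x = self.conv(x.transpose(1, 2))        # Conv1d expects (batch, channels, time)
        output, _ = self.bilstm(x.transpose(1, 2))
        return self.head(output[:, -1, :])      # last time step, both directions

model = ConvBiLSTM()
print(model(torch.randn(16, 50, 6)).shape)      # torch.Size([16, 4])
```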
Other researchers have further improved the performance of recurrent neural networks by increasing their depth or modifying their structure, for example, Stacked Recurrent Neural Networks (SRNN) [103–109], Bidirectional Recurrent Neural Networks (Bi-RNN) [100, 102, 110–112], Recursive Neural Networks [113], and Graph Neural Networks (GNN) [114, 115], which have been applied to driving behavior recognition and achieved high recognition accuracy.
The above studies are only typical cases; there are many other successful applications of RNN in building driving behavior recognition models [116–124]. RNN can be used for all kinds of driving behavior recognition, is suited to high-dimensional, large-sample learning, and can extract deep temporal and spatial features, with accuracies generally above 90%. However, a large data sample is required to train an RNN.
Although the above models have been shown by some studies to be effective for driving behavior recognition, no single model performs well on all driving behavior recognition tasks. There is no general-purpose model for driving behavior recognition, and a model's performance depends heavily on the task to which it is applied. Therefore, researchers need to understand not only the characteristics of candidate models but also the driving scenarios, and conduct corresponding tests to correctly select models and apply them effectively to practical driving behavior recognition tasks.
Testing mainly falls into three types: real-vehicle tests, simulation verification, and vehicle-in-the-loop tests. Real-vehicle tests are realistic and objective, but high-risk scene data cannot be obtained and the investment of manpower and material resources is heavy. Simulation verification makes it easy to test various complex operating conditions with high efficiency, but the scenes cannot fully match real situations. The vehicle-in-the-loop test is currently a popular method that combines the advantages of real-vehicle and simulation tests: it can reproduce all kinds of actual scenes as well as exhaustively cover high-risk and complex scenes. Driving behavior scene tests yield massive test data samples, which, after preprocessing, are fed into the deep learning model for training and computation to complete driving behavior recognition.
All in all, deep learning models could become the mainstream for their high accuracy and stable robustness when large data sets are available for training. Their advantages are strong learning ability and portability across many platform frameworks; with many layers and a wide width, a deep neural network can theoretically approximate any function and thus solve very complex problems. Deep learning is also highly data-dependent: the larger the amount of data, the better its performance. The disadvantages are that deep learning model design is very complex and requires large amounts of data and computational power, so the cost is high; deep learning also places high demands on computer hardware, and an ordinary CPU can no longer meet its requirements. However, with the development of big data, computing technology, and hardware, the application of deep learning to driving behavior will develop greatly and become more and more extensive.
4. Summaries and Prospects
This paper reviews and summarizes driving behavior recognition methods based on vehicle sensor information fusion. On-board sensor data contain abundant driving behavior information; based on the two main factors of driver behavior and vehicle control, this information can be divided into driver state information, driver vehicle-control information, vehicle control state information, vehicle driving state information, and road and environment state information, together with the corresponding data acquisition systems. The characteristics of common data-level, feature-level, and decision-level information fusion methods are analyzed to guide the selection of appropriate fusion methods, and the basic principles and main characteristics of feature extraction methods are described. Driving behavior recognition methods are classified into traditional machine learning methods and deep learning methods. Random forest and support vector machine, together with their applications in fatigue, distraction, car-following, lane change, and other driving behavior recognition, are introduced, and the application of the convolutional neural network (CNN) and recurrent neural network (RNN) in constructing driving behavior recognition models is analyzed. The characteristics of the four driving behavior recognition models described in this paper are briefly summarized in Table 5.
Table 5
The comparisons of characteristics of driving behavior recognition models.
The model’s name | RF | SVM | CNN | RNN
Characteristics | Can be used for all kinds of driving behavior recognition; features need manual selection | Can be used for all kinds of driving behavior recognition; features need manual selection | Can be used for all kinds of driving behavior recognition; deep features are automatically extracted by the convolutional layers | Can be used for all kinds of driving behavior recognition; automatic feature extraction from time-series samples
Recognition accuracy | >70%, scattered distribution | >75%, scattered distribution | >85% | >90%
Computing power demand | Middle | Low | High | High
Efficiency | Middle | High | Middle | Low
Adaptability | Static model | Static model | Static model | Dynamic adaptive model
Defects | Cannot capture time-series features; features are extracted manually and feature consistency is poor | Cannot capture time-series features; features are extracted manually and feature consistency is poor | Cannot capture time-series features; the sample input has fixed length; the features have no clear meaning | Features have no clear meaning
Applications | Suitable for pattern recognition and classification; applies to 1D time-series data and multidimensional data | Suitable for pattern recognition and classification; multifeature, small samples | Suitable for pattern recognition and classification; large samples of fixed length | Suitable for time-series data of different lengths; large samples
With the development of vehicle intelligence and networking technologies, research on driving behavior recognition based on vehicle sensor information fusion will have broad application prospects in the field of driving assistance. The development and large-scale deployment of high-performance on-board sensor technology enable real-time collection and storage of massive driving behavior data reflecting natural, real driving states, while the rapid development of big data and artificial intelligence technologies provides technical support for driving behavior recognition research. How to deeply mine the massive, multisource, heterogeneous on-board sensor data, fuse information at different levels with appropriate information fusion and processing technologies, and comprehensively exploit the characteristics of different recognition models to identify all types of driving behavior more accurately will be a research hotspot in driving safety technology; in particular, it will support the development of high-performance advanced driver assistance systems and autonomous driving technology. At the same time, how to combine the characteristics of driving behavior data in the natural driving state and jointly apply multiple classes of models to establish a composite, higher-precision driving behavior recognition model is also an important future research direction. Because abnormal driving behaviors may lead to accidents within an instant, a further direction is to improve identification efficiency and develop lightweight models that accurately identify driving behaviors while meeting the requirements of real-time online identification, low computational power, and engineering deployment. Driving behavior is strongly affected by the driver's state, and the behavior shown by the same driver may differ greatly; it is also susceptible to the influence of the road, environment, and vehicle, which introduces uncertainty. How to improve the robustness and generalization ability of driving behavior recognition models therefore remains to be further studied.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (62073298), partly supported by Key Scientific and Technological Project of Henan Province (222102240077 and 222102220106), and partly supported by Research Foundation of Zhengzhou University of Light Industry (2021BSJJ021).
[1] Q. Deng, D. Söffker, "A review of HMM-based approaches of driving behaviors recognition and prediction," IEEE Transactions on Intelligent Vehicles, vol. 7 no. 1, pp. 21-31, DOI: 10.1109/TIV.2021.3065933, 2022.
[2] S. Zhang, Y. Zhi, R. He, J. Li, "Research on traffic vehicle behavior prediction method based on game theory and HMM," IEEE Access, vol. 8, pp. 30210-30222, DOI: 10.1109/ACCESS.2020.2971705, 2020.
[3] D. Chao, C. Wu, N. Lyu, Z. Huang, "Driving style recognition method using braking characteristics based on hidden Markov model," PLoS One, vol. 12 no. 8, article e0182419,DOI: 10.1371/journal.pone.0182419, 2017.
[4] J. Carmona, M. A. de Miguel, D. Martin, F. Garcia, A. de la Escalera, "Embedded system for driver behavior analysis based on GMM," IEEE Intelligent Vehicles Symposium (IV), pp. 61-65, DOI: 10.1109/IVS.2016.7535365, .
[5] M. N. Shakib, M. Shamim, M. N. H. Shawon, M. K. F. Isha, M. M. A. Hashem, M. A. S. Kamal, "An adaptive system for detecting driving abnormality of individual drivers using Gaussian mixture model," 2021 5th International Conference on Electrical Engineering and Information Communication Technology (ICEEICT),DOI: 10.1109/ICEEICT53905.2021.9667850, .
[6] R. R. Mardi Putri, C. -H. Yang, C. -C. Chang, D. Liang, "Smartwatch-based open-set driver identification by using GMM-based behavior modeling approach," IEEE Sensors Journal, vol. 21 no. 4, pp. 4918-4926, DOI: 10.1109/JSEN.2020.3030810, 2021.
[7] S. Ahangari, M. Jeihani, A. Ardeshiri, M. M. Rahman, A. Dehzangi, "Enhancing the performance of a model to predict driving distraction with the random forest classifier," Transportation Research Record, vol. 2675 no. 11, pp. 612-622, DOI: 10.1177/03611981211018695, 2021.
[8] Q. Sun, C. Wang, R. Fu, Y. Guo, W. Yuan, Z. Li, "Lane change strategy analysis and recognition for intelligent driving systems based on random forest," Expert Systems with Applications, vol. 186, article 115781,DOI: 10.1016/j.eswa.2021.115781, 2021.
[9] X. Yueru, S. Bao, K. P. Anuj, "Modeling drivers’ reaction when being tailgated: a random forests method," Journal Of Safety Research, vol. 78, pp. 28-35, DOI: 10.1016/j.jsr.2021.05.004, 2021.
[10] S. Xiaolin, Y. Yin, H. Cao, S. Zhao, M. Li, B. Yi, "The mediating effect of driver characteristics on risky driving behaviors moderated by gender, and the classification model of driver’s driving risk," Accident Analysis & Prevention, vol. 153, article 106038,DOI: 10.1016/j.aap.2021.106038, 2021.
[11] H. Wang, M. Gu, S. Wu, C. Wang, "A driver’s car-following behavior prediction model based on multi-sensors data," EURASIP Journal on Wireless Communications and Networking, vol. 2020,DOI: 10.1186/s13638-020-1639-2, 2020.
[12] S. B. Amsalu, A. Homaifar, F. Afghah, S. Ramyar, A. Kurt, "Driver behavior modeling near intersections using support vector machines based on statistical feature extraction," IEEE Intelligent Vehicles Symposium (IV), pp. 1270-1275, DOI: 10.1109/IVS.2015.7225857, .
[13] K. Yang, C. Al Haddad, G. Yannis, C. Antoniou, "Driving behavior safety levels: classification and evaluation," 2021 7th International Conference on Models and Technologies for Intelligent Transportation Systems (MT-ITS),DOI: 10.1109/MT-ITS49943.2021.9529309, .
[14] W. Bouslimi, M. Kassaagi, D. Lourdeaux, P. Fuchs, "Augmented naive Bayesian network for driver behavior modeling," IEEE Proceedings. Intelligent Vehicles Symposium, pp. 236-242, DOI: 10.1109/IVS.2005.1505108, .
[15] Z. Chen, C. Wu, Z. Huang, N. Lyu, Z. Hu, M. Zhong, Y. Cheng, B. Ran, "Dangerous driving behavior detection using video-extracted vehicle trajectory histograms," Journal of Intelligent Transportation Systems, vol. 21 no. 5, pp. 409-421, DOI: 10.1080/15472450.2017.1305271, 2017.
[16] X. Wu, J. Zhou, J. An, Y. Yang, "Abnormal driving behavior detection for bus based on the Bayesian classifier," Tenth International Conference on Advanced Computational Intelligence (ICACI), pp. 266-272, DOI: 10.1109/ICACI.2018.8377618, .
[17] T. Jinjun, F. Liu, W. Zhang, R. Ke, Y. Zou, "Lane-changes prediction based on adaptive fuzzy neural network," Expert Systems with Applications, vol. 91, pp. 452-463, DOI: 10.1016/j.eswa.2017.09.025, 2018.
[18] L. Zhenlong, Q. Zhang, X. Zhao, "Performance analysis of K-nearest neighbor, support vector machine, and artificial neural network classifiers for driver drowsiness detection with different road geometries," International Journal of Distributed Sensor Networks, vol. 13 no. 9,DOI: 10.1177/1550147717733391, 2017.
[19] S. L. Karri, L. C. De Silva, D. T. C. Lai, S. Y. Yong, "Classification and prediction of driving behaviour at a traffic intersection using SVM and KNN," SN Computer Science, vol. 2 no. 3,DOI: 10.1007/s42979-021-00588-7, 2021.
[20] J. O. López, A. C. Pinilla, "Driver behavior classification model based on an intelligent driving diagnosis system," 2012 15th International IEEE Conference on Intelligent Transportation Systems, pp. 894-899, DOI: 10.1109/ITSC.2012.6338727, .
[21] M. S. Al-Rakhami, A. Gumaei, M. M. Hassan, A. Alamri, M. Alhussein, M. A. Razzaque, G. Fortino, "A deep learning-based edge-fog-cloud framework for driving behavior management," Computers & Electrical Engineering, vol. 96, article 107573,DOI: 10.1016/j.compeleceng.2021.107573, 2021.
[22] A. Krizhevsky, I. Sutskever, G. E. Hinton, "Imagenet classification with deep convolutional neural networks," Advances In Neural Information Processing Systems, vol. 25, 2012.
[23] J. Xing, G. Fang, J. Zhong, J. Li, "Application of face recognition based on CNN in fatigue driving detection," Proceedings of the 2019 International Conference on Artificial Intelligence and Advanced Manufacturing,DOI: 10.1145/3358331.3358387, .
[24] H. M. Eraqi, Y. Abouelnaga, M. H. Saad, M. N. Moustafa, "Driver distraction identification with an ensemble of convolutional neural networks," Journal of Advanced Transportation, vol. 2019,DOI: 10.1155/2019/4125865, 2019.
[25] B. Baheti, S. Gajre, S. Talbar, "Detection of distracted driver using convolutional neural network," Proceedings of the IEEE conference on computer vision and pattern recognition workshops, .
[26] M. N. Azadani, A. Boukerche, "Siamese temporal convolutional networks for driver identification using driver steering behavior analysis," IEEE Transactions on Intelligent Transportation Systems,DOI: 10.1109/TITS.2022.3151264, 2022.
[27] L. Ye, C. Chen, M. Wu, S. Nwobodo, A. A. Antwi, C. N. Muponda, K. D. Ernest, R. S. Vedaste, "Using CNN and channel attention mechanism to identify Driver’s distracted behavior," Transactions on Edutainment XVI, vol. 11782, pp. 175-183, DOI: 10.1007/978-3-662-61510-2_17, 2020.
[28] S. Yin, J. Duan, P. Ouyang, L. Liu, S. Wei, "Multi-CNN and decision tree based driving behavior evaluation," Proceedings of the Symposium on Applied Computing, pp. 1424-1429, DOI: 10.1145/3019612.3019649, .
[29] M. N. Azadani, A. Boukerche, "Performance evaluation of driving behavior identification models through can-bus data," 2020 IEEE Wireless Communications and Networking Conference (WCNC). IEEE,DOI: 10.1109/WCNC45663.2020.9120734, .
[30] W. Foland, J. H. Martin, "CU-NLP at SemEval-2016 task 8: AMR parsing using LSTM-based recurrent neural networks," Proceedings of the 10th international workshop on semantic evaluation, pp. 1197-1201, .
[31] L. Li, W. Zhao, C. Xu, C. Wang, Q. Chen, S. Dai, "Lane-change intention inference based on RNN for autonomous driving on highways," IEEE Transactions on Vehicular Technology, vol. 70 no. 6, pp. 5499-5510, DOI: 10.1109/TVT.2021.3079263, 2021.
[32] Y. EdDoughmi, N. Idrissi, Y. Hbali, "Real-time system for driver fatigue detection based on a recurrent neuronal network," Journal of Imaging, vol. 6 no. 3,DOI: 10.3390/jimaging6030008, 2020.
[33] M. Wollmer, C. Blaschke, T. Schindl, B. Schuller, B. Farber, S. Mayer, B. Trefflich, "Online driver distraction detection using long short-term memory," IEEE Transactions on Intelligent Transportation Systems, vol. 12 no. 2, pp. 574-582, DOI: 10.1109/TITS.2011.2119483, 2011.
[34] K. Griesbach, M. Beggiato, K. H. Hoffmann, "Lane change prediction with an echo state network and recurrent neural network in the urban area," IEEE Transactions on Intelligent Transportation Systems, vol. 23 no. 7, pp. 6473-6479, DOI: 10.1109/TITS.2021.3058035, 2021.
[35] J. Zhang, Z. Wu, F. Li, J. Luo, T. Ren, S. Hu, W. Li, W. Li, "Attention-based convolutional and recurrent neural networks for driving behavior recognition using smartphone sensor data," IEEE Access, vol. 7, pp. 148031-148046, DOI: 10.1109/ACCESS.2019.2932434, 2019.
[36] Y. Wang, N. Zhang, X. Chen, E. Sciubba, "A Short-Term Residential Load Forecasting Model Based on LSTM Recurrent Neural Network Considering Weather Features," Energies, vol. 14 no. 10, 2021.
[37] J. Hong, B. Sapp, J. Philbin, "Rules of the road: predicting driving behavior with a convolutional model of semantic interactions," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8454-8462, .
[38] S. Kaplan, M. A. Guvensan, A. G. Yavuz, Y. Karalurt, "Driver behavior analysis for safe driving: a survey," IEEE Transactions on Intelligent Transportation Systems, vol. 16 no. 6, pp. 3017-3032, DOI: 10.1109/TITS.2015.2462084, 2015.
[39] A. Koesdwiady, R. Soua, F. Karray, M. S. Kamel, "Recent trends in driver safety monitoring systems: state of the art and challenges," IEEE Transactions on Vehicular Technology, vol. 66 no. 6, pp. 4550-4563, DOI: 10.1109/TVT.2016.2631604, 2017.
[40] M. Saifuzzaman, Z. Zheng, "Incorporating human-factors in car-following models: a review of recent developments and research needs," Transportation Research Part C: Emerging Technologies, vol. 48 no. 8, pp. 379-403, DOI: 10.1016/j.trc.2014.09.008, 2014.
[41] Z. E. Abou Elassad, H. Mousannif, H. Al Moatassime, A. Karkouch, "The application of machine learning techniques for driving behavior analysis: a conceptual framework and a systematic literature review," Engineering Applications of Artificial Intelligence, vol. 87, article 103312,DOI: 10.1016/j.engappai.2019.103312, 2020.
[42] S. Mozaffari, O. Y. Al-Jarrah, M. Dianati, P. Jennings, A. Mouzakitis, "Deep learning-based vehicle behavior prediction for autonomous driving applications: a review," IEEE Transactions on Intelligent Transportation Systems, vol. 23 no. 1, pp. 33-47, DOI: 10.1109/TITS.2020.3012034, 2020.
[43] Y. Ma, W. Li, K. Tang, Z. Zhang, S. Chen, "Driving style recognition and comparisons among driving tasks based on driver behavior in the online car-hailing industry," Accident Analysis & Prevention, vol. 154,DOI: 10.1016/j.aap.2021.106096, 2021.
[44] G. Biau, E. Scornet, "A random forest guided tour," TEST, vol. 25 no. 2, pp. 197-227, DOI: 10.1007/s11749-016-0481-7, 2016.
[45] A. D. McDonald, J. D. Lee, C. Schwarz, "Steering in a random forest: ensemble learning for detecting drowsiness-related lane departures," Human factors, vol. 56 no. 5, pp. 986-998, DOI: 10.1177/0018720813515272, 2014.
[46] H. Mårtensson, O. Keelan, C. Ahlström, "Driver sleepiness classification based on physiological data and driving performance from real road driving," IEEE Transactions on Intelligent Transportation Systems, vol. 20 no. 2, pp. 421-430, DOI: 10.1109/TITS.2018.2814207, 2019.
[47] S. X. Cai, C. K. Du, S. Y. Zhou, "Fatigue driving state detection based on vehicle running data," Journal of Transportation Systems Engineering and Information Technology, vol. 20 no. 4, 2020.
[48] B. -T. Dong, H. -Y. Lin, "An on-board monitoring system for driving fatigue and distraction detection," 2021 22nd IEEE International Conference on Industrial Technology (ICIT), pp. 850-855, DOI: 10.1109/ICIT46573.2021.9453676, .
[49] Y. Yao, X. Zhao, H. du, Y. Zhang, J. Rong, "Classification of distracted driving based on visual features and behavior data using a random forest method," Transportation Research Record, vol. 2672 no. 45, pp. 210-221, DOI: 10.1177/0361198118796963, 2018.
[50] H. Shi, T. Wang, F. Zhong, H. Wang, J. Han, X. Wang, "A data-driven car-following model based on the random forest," World Journal of Engineering and Technology, vol. 9, pp. 503-515, DOI: 10.4236/wjet.2021.93033, 2021.
[51] F. Teimouri, M. Ghatee, "A real-time warning system for rear-end collision based on random forest classifier," ,DOI: 10.22115/scce.2020.217605.1172, 2018.
[52] M. N. Azadani, A. Boukerche, "DriverRep: driver identification through driving behavior embeddings," Journal of Parallel and Distributed Computing, vol. 162, pp. 105-117, DOI: 10.1016/j.jpdc.2022.01.010, 2022.
[53] D. Hallac, A. Sharang, R. Stahlmann, A. Lamprecht, M. Huber, M. Roehder, J. Leskovec, "Driver identification using automobile sensor data from a single turn," 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), pp. 953-958, DOI: 10.1109/ITSC.2016.7795670, .
[54] G. Li, S. E. Li, B. Cheng, P. Green, "Estimation of driving style in naturalistic highway traffic using maneuver transition probabilities," Transportation Research Part C: Emerging Technologies, vol. 74, pp. 113-125, DOI: 10.1016/j.trc.2016.11.011, 2017.
[55] C. H. Zhao, B. L. Zhang, J. He, J. Lian, "Recognition of driving postures by contourlet transform and random forests," IET Intelligent Transport Systems, vol. 6 no. 2, pp. 161-168, DOI: 10.1049/iet-its.2011.0116, 2012.
[56] M. Belgiu, L. Dragut, "Random forest in remote sensing: A review of applications and future directions," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 114, pp. 24-31, 2016.
[57] W. S. Noble, "What is a support vector machine?," Nature Biotechnology, vol. 24 no. 12, pp. 1565-1567, DOI: 10.1038/nbt1206-1565, 2006.
[58] B. K. Savaş, Y. Becerikli, "Real time driver fatigue detection based on SVM algorithm," 2018 6th International Conference on Control Engineering & Information Technology (CEIT),DOI: 10.1109/CEIT.2018.8751886, .
[59] Z. You, Y. Gao, J. Zhang, H. Zhang, M. Zhou, C. Wu, "A study on driver fatigue recognition based on SVM method," 2017 4th International Conference on Transportation Information and Safety (ICTIS), pp. 693-697, DOI: 10.1109/ICTIS.2017.8047842, .
[60] R. S. Tomar, S. Verma, "Trajectory predictions of lane changing vehicles using SVM," International journal of vehicle safety, vol. 5 no. 4, pp. 345-355, DOI: 10.1504/IJVS.2011.045775, 2011.
[61] Y. Liao, S. E. Li, G. Li, W. Wang, B. Cheng, F. Chen, "Detection of driver cognitive distraction: an SVM based real-time algorithm and its comparison study in typical driving scenarios," IEEE Intelligent Vehicles Symposium (IV), pp. 394-399, DOI: 10.1109/IVS.2016.7535416, .
[62] J. J. Wang, Y. K. Wang, F. Zhang, S. W. Zhang, Y. Dai, X. D. Yu, "Real-time detection for eye closure feature of fatigue driving based on CNN and SVM," Computer Systems & Applications, vol. 30 no. 6, pp. 118-126, 2021.
[63] M. A. Yan-li, G. U. Gao-feng, G. Yue-e, M. A. Yong, "Driver distraction judging model under in-vehicle information system operation based on driving performance," China Journal Of Highway And Transport, vol. 29 no. 4, 2016.
[64] Y. Liang, M. L. Reyes, J. D. Lee, "Real-time detection of driver cognitive distraction using support vector machines," IEEE Transactions on Intelligent Transportation Systems, vol. 8 no. 2, pp. 340-350, DOI: 10.1109/TITS.2007.895298, 2007.
[65] Y. Diange, H. Chang, L. Man, H. Qiguang, "Vehicle steering and lane-changing behavior recognition based on a support vector machine," Journal of Tsinghua University (Science and Technology), vol. 55 no. 10, pp. 1093-1097, 2015.
[66] C. Wang, J. Qin, R. Zhang, Y. Zhang, "Application of SVM in lane change recognitaion," Journal Of Theoretical & Applied Information Technology, vol. 48 no. 3, pp. 1449-1457, 2013.
[67] J. Yu, Z. Chen, Y. Zhu, Y. Chen, L. Kong, M. Li, "Fine-grained abnormal driving behaviors detection and identification with smartphones," IEEE Transactions on Mobile Computing, vol. 16 no. 8, pp. 2198-2212, DOI: 10.1109/TMC.2016.2618873, 2017.
[68] A. A. Albousefi, H. Ying, D. Filev, F. Syed, K. O. Prakah-Asante, F. Tseng, H. H. Yang, "A two-stage-training support vector machine approach to predicting unintentional vehicle lane departure," Journal of Intelligent Transportation Systems, vol. 21 no. 1, pp. 41-51, DOI: 10.1080/15472450.2016.1196141, 2017.
[69] Y. LeCun, Y. Bengio, "Convolutional networks for images, speech, and time series," The Handbook of Brain Theory and Neural Networks, vol. 3361 no. 10, 1995.
[70] J. Chen, Y. Liu, "Fatigue modeling using neural networks: a comprehensive review," Fatigue & Fracture of Engineering Materials & Structures, vol. 45 no. 4, pp. 945-979, DOI: 10.1111/ffe.13640, 2022.
[71] R. Jabbar, M. Shinoy, M. Kharbeche, K. Al-Khalifa, M. Krichen, K. Barkaoui, "Driver drowsiness detection model using convolutional neural networks techniques for android application," 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), pp. 237-242, DOI: 10.1109/ICIoT48696.2020.9089484, 2020.
[72] M. S. Majdi, S. Ram, J. T. Gill, J. J. Rodríguez, "Drive-Net: convolutional network for driver distraction detection," 2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI), DOI: 10.1109/SSIAI.2018.8470309, 2018.
[73] J. Liu, Y. Liu, J. Lin, D. Wei, X. Xia, W. Ni, X. Huang, L. Song, "One-dimensional convolutional neural network model for abnormal driving behaviors detection using smartphone sensors," 2021 International Conference on Networking Systems of AI (INSAI), pp. 143-150, DOI: 10.1109/INSAI54028.2021.00035, 2021.
[74] K. Srinivasan, L. Garg, D. Datta, A. A. Alaboudi, N. Z. Jhanjhi, R. Agarwal, A. Thomas, "Performance comparison of deep CNN models for detecting driver’s distraction," CMC-Computers, Materials & Continua, vol. 68, pp. 4109-4124, 2021.
[75] J. Xie, K. Hu, G. Li, Y. Guo, "CNN-based driving maneuver classification using multi-sliding window fusion," Expert Systems with Applications, vol. 169, DOI: 10.1016/j.eswa.2020.114442, 2021.
[76] D. Lee, Y. P. Kwon, S. McMains, J. K. Hedrick, "Convolution neural network-based lane change intention prediction of surrounding vehicles for ACC," 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), DOI: 10.1109/ITSC.2017.8317874, 2017.
[77] B. Qin, J. Qian, Y. Xin, B. Liu, Y. Dong, "Distracted driver detection based on a CNN with decreasing filter size," IEEE Transactions on Intelligent Transportation Systems, vol. 23, pp. 6922-6933, DOI: 10.1109/TITS.2021.3063521, 2022.
[78] G. Sikander, S. Anwar, "Driver fatigue detection systems: a review," IEEE Transactions on Intelligent Transportation Systems, vol. 20 no. 6, pp. 2339-2352, DOI: 10.1109/TITS.2018.2868499, 2019.
[79] W. H. Gu, Y. Zhu, X. D. Chen, L. F. He, B. B. Zheng, "Hierarchical CNN-based real-time fatigue detection system by visual-based technologies using MSP model," IET Image Processing, vol. 12 no. 12, pp. 2319-2329, DOI: 10.1049/iet-ipr.2018.5245, 2018.
[80] Z. Zhao, N. Zhou, L. Zhang, H. Yan, Y. Xu, Z. Zhang, "Driver fatigue detection based on convolutional neural networks using EM-CNN," Computational Intelligence and Neuroscience, vol. 2020, DOI: 10.1155/2020/7251280, 2020.
[81] F. Zhang, J. Su, L. Geng, Z. Xiao, "Driver fatigue detection based on eye state recognition," 2017 International Conference on Machine Vision and Information Technology (CMVIT), pp. 105-110, DOI: 10.1109/CMVIT.2017.25, 2017.
[82] X. Ma, L. Chau, K. Yap, "Depth video-based two-stream convolutional neural networks for driver fatigue detection," 2017 International Conference on Orange Technologies (ICOT), pp. 155-158, DOI: 10.1109/ICOT.2017.8336111, 2017.
[83] Y. Hu, M. Lu, X. Lu, "Driving behaviour recognition from still images by using multi-stream fusion CNN," Machine Vision and Applications, vol. 30 no. 5, pp. 851-865, DOI: 10.1007/s00138-018-0994-z, 2019.
[84] C. Huang, X. Wang, J. Cao, S. Wang, Y. Zhang, "HCF: a hybrid CNN framework for behavior detection of distracted drivers," IEEE Access, vol. 8, pp. 109335-109349, DOI: 10.1109/ACCESS.2020.3001159, 2020.
[85] Y. Abouelnaga, H. M. Eraqi, M. N. Moustafa, "Real-time distracted driver posture classification," arXiv preprint arXiv:1706.09498, 2017.
[86] A. Díaz-Álvarez, M. Clavijo, F. Jiménez, F. Serradilla, "Inferring the driver’s lane change intention through LiDAR-based environment analysis using convolutional neural networks," Sensors, vol. 21 no. 2, DOI: 10.3390/s21020475, 2021.
[87] R. Izquierdo, A. Quintanar, I. Parra, D. Fernández-Llorca, M. A. Sotelo, "Experimental validation of lane-change intention prediction methodologies based on CNN and LSTM," 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 3657-3662, DOI: 10.1109/ITSC.2019.8917331, 2019.
[88] K. Pavan, M. D. Teja, A. Pravin, T. P. Jacob, G. Nagarajan, "A unique adaptive framework for predicting lane changing intention based on CNN," International Conference on Emerging Trends and Advances in Electrical Engineering and Renewable Energy, vol. 691, pp. 577-584, 2020.
[89] M. Shahverdy, M. Fathy, R. Berangi, M. Sabokrou, "Driver behaviour detection using 1D convolutional neural networks," Electronics Letters, vol. 57 no. 3, pp. 119-122, DOI: 10.1049/ell2.12076, 2021.
[90] Y. Xun, J. Liu, N. Kato, Y. Fang, Y. Zhang, "Automobile driver fingerprinting: a new machine learning based authentication scheme," IEEE Transactions on Industrial Informatics, vol. 16 no. 2, pp. 1417-1426, DOI: 10.1109/TII.2019.2946626, 2020.
[91] H. Hu, J. Liu, Z. Gao, P. Wang, "Driver identification using 1D convolutional neural networks with vehicular CAN signals," IET Intelligent Transport Systems, vol. 14 no. 13, pp. 1799-1809, DOI: 10.1049/iet-its.2020.0105, 2020.
[92] M. M. Bejani, M. Ghatee, "Convolutional neural network with adaptive regularization to classify driving styles on smartphones," IEEE Transactions on Intelligent Transportation Systems, vol. 21 no. 2, pp. 543-552, DOI: 10.1109/TITS.2019.2896672, 2020.
[93] C. Wei, F. Hui, A. J. Khattak, "Driver lane-changing behavior prediction based on deep learning," Journal of Advanced Transportation, vol. 2021, DOI: 10.1155/2021/6676092, 2021.
[94] J. Zhang, Z. Wu, F. Li, C. Xie, T. Ren, J. Chen, L. Liu, "A deep learning framework for driving behavior identification on in-vehicle CAN-BUS sensor data," Sensors, vol. 19 no. 6, DOI: 10.3390/s19061356, 2019.
[95] V. Yarlagadda, S. G. Koolagudi, M. Kumar, S. Donepudi, "Driver drowsiness detection using facial parameters and RNNs with LSTM," 2020 IEEE 17th India Council International Conference (INDICON), DOI: 10.1109/INDICON49873.2020.9342348, 2020.
[96] J. Sun, Z. H. Yi-hao, J.-H. Wang, "Detecting distraction behavior of drivers using naturalistic driving data," China Journal of Highway and Transport, vol. 33 no. 9, DOI: 10.19721/j.cnki.1001-7372.2020.09.022, 2020.
[97] T. Q. Tang, Y. Gui, J. Zhang, T. Wang, "Car-following model based on deep learning and Markov theory," Journal of Transportation Engineering, Part A: Systems, vol. 146 no. 9, DOI: 10.1061/JTEPBS.0000430, 2020.
[98] Z. Yan, K. Yang, Z. Wang, B. Yang, T. Kaizuka, K. Nakano, "Time to lane change and completion prediction based on gated recurrent unit network," 2019 IEEE Intelligent Vehicles Symposium (IV), pp. 102-107, DOI: 10.1109/IVS.2019.8813838, 2019.
[99] X. Liu, J. Zhou, H. Qian, "Short-term wind power forecasting by stacked recurrent neural networks with parametric sine activation function," Electric Power Systems Research, vol. 192 no. 4, 2021.
[100] H. Zhang, Z. Nan, T. Yang, Y. Liu, N. Zheng, "A driving behavior recognition model with bi-LSTM and multi-scale CNN," 2020 IEEE Intelligent Vehicles Symposium (IV), pp. 284-289, DOI: 10.1109/IV47402.2020.9304772, 2020.
[101] Y. Zhang, J. Li, Y. Guo, C. Xu, J. Bao, Y. Song, "Vehicle driving behavior recognition based on multi-view convolutional neural network with joint data augmentation," IEEE Transactions on Vehicular Technology, vol. 68 no. 5, pp. 4223-4234, DOI: 10.1109/TVT.2019.2903110, 2019.
[102] Y. Xing, C. Lv, H. Wang, D. Cao, E. Velenis, "An ensemble deep learning approach for driver lane change intention inference," Transportation Research Part C: Emerging Technologies, vol. 115, DOI: 10.1016/j.trc.2020.102615, 2020.
[103] S. Monjezi Kouchak, A. Gaffar, "Detecting driver behavior using stacked long short term memory network with attention layer," IEEE Transactions on Intelligent Transportation Systems, vol. 22 no. 6, pp. 3420-3429, DOI: 10.1109/TITS.2020.2986697, 2021.
[104] K. Saleh, M. Hossny, S. Nahavandi, "Driving behavior classification based on sensor data fusion using LSTM recurrent neural networks," 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), DOI: 10.1109/ITSC.2017.8317835, 2017.
[105] M. A. Khodairy, G. Abosamra, "Driving behavior classification based on oversampled signals of smartphone embedded sensors using an optimized stacked-LSTM neural networks," IEEE Access, vol. 9, pp. 4957-4972, DOI: 10.1109/ACCESS.2020.3048915, 2021.
[106] Y. Ma, Z. Xie, S. Chen, Y. Wu, F. Qiao, "Real-time driving behavior identification based on multi-source data fusion," International Journal of Environmental Research and Public Health, vol. 19 no. 1, DOI: 10.3390/ijerph19010348, 2022.
[107] Y. Wu, H. Tan, X. Chen, B. Ran, "Memory, attention and prediction: a deep learning architecture for car-following," Transportmetrica B: Transport Dynamics, vol. 7 no. 1, pp. 1553-1571, DOI: 10.1080/21680566.2019.1650674, 2019.
[108] S. Patel, B. Griffin, K. Kusano, J. J. Corso, "Predicting future lane changes of other highway vehicles using RNN-based deep models," arXiv preprint arXiv:1801.04340, 2021.
[109] J. Mafeni Mase, P. Chapman, G. P. Figueredo, M. Torres Torres, "Benchmarking deep learning models for driver distraction detection," International Conference on Machine Learning, Optimization, and Data Science (LOD 2020), vol. 12566, DOI: 10.1007/978-3-030-64580-9_9, 2020.
[110] O. Olabiyi, E. Martinson, V. Chintalapudi, R. Guo, "Driver action prediction using deep (bidirectional) recurrent neural network," arXiv preprint arXiv:1706.02257, 2017.
[111] S. Monjezi Kouchak, A. Gaffar, "Using bidirectional long short term memory with attention layer to estimate driver behavior," 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 315-320, DOI: 10.1109/ICMLA.2019.00059, 2019.
[112] G. Du, Z. Wang, B. Gao, S. Mumtaz, K. M. Abualnaja, C. Du, "A convolution bidirectional long short-term memory neural network for driver emotion recognition," IEEE Transactions on Intelligent Transportation Systems, vol. 22 no. 7, pp. 4570-4578, DOI: 10.1109/TITS.2020.3007357, 2021.
[113] C.-S. Shih, P.-W. Huang, E.-T. Yen, P.-K. Tsung, "Vehicle speed prediction with RNN and attention model under multiple scenarios," 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp. 369-375, DOI: 10.1109/ITSC.2019.8917479, 2019.
[114] S. Casas, C. Gulino, R. Liao, R. Urtasun, "SpAGNN: spatially-aware graph neural networks for relational behavior forecasting from sensor data," 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 9491-9497, DOI: 10.1109/ICRA40945.2020.9196697, 2020.
[115] Z. Li, C. Lu, Y. Yi, J. Gong, "A hierarchical framework for interactive behaviour prediction of heterogeneous traffic participants based on graph neural network," IEEE Transactions on Intelligent Transportation Systems, vol. 23, DOI: 10.1109/TITS.2021.3090851, 2021.
[116] S. Bae, D. Saxena, A. Nakhaei, C. Choi, K. Fujimura, S. Moura, "Cooperation-aware lane change maneuver in dense traffic based on model predictive control with recurrent neural network," 2020 American Control Conference (ACC), pp. 1209-1216, DOI: 10.23919/ACC45564.2020.9147837, 2020.
[117] Z. Li, L. Chen, L. Nie, S. X. Yang, "A novel learning model of driver fatigue features representation for steering wheel angle," IEEE Transactions on Vehicular Technology, vol. 71 no. 1, pp. 269-281, DOI: 10.1109/TVT.2021.3130152, 2022.
[118] Y. Lin, P. Wang, Y. Zhou, F. Ding, C. Wang, H. Tan, "Platoon trajectories generation: a unidirectional interconnected LSTM-based car-following model," IEEE Transactions on Intelligent Transportation Systems, vol. 23 no. 3, pp. 2071-2081, DOI: 10.1109/TITS.2020.3031282, 2022.
[119] X. Huang, J. Sun, J. Sun, "A car-following model considering asymmetric driving behavior based on long short-term memory neural networks," Transportation Research Part C: Emerging Technologies, vol. 95, pp. 346-362, DOI: 10.1016/j.trc.2018.07.022, 2018.
[120] K. Yeon, K. Min, J. Shin, M. Sunwoo, M. Han, "Ego-vehicle speed prediction using a long short-term memory based recurrent neural network," International Journal of Automotive Technology, vol. 20 no. 4, pp. 713-722, 2019.
[121] A. Cura, H. Küçük, E. Ergen, İ. B. Öksüzoğlu, "Driver profiling using long short term memory (LSTM) and convolutional neural network (CNN) methods," IEEE Transactions on Intelligent Transportation Systems, vol. 22 no. 10, pp. 6572-6582, DOI: 10.1109/TITS.2020.2995722, 2021.
[122] M. Z. Liu, X. Xu, J. Hu, Q. N. Jiang, "Real time detection of driver fatigue based on CNN-LSTM," IET Image Processing, vol. 16 no. 2, pp. 576-595, DOI: 10.1049/ipr2.12373, 2022.
[123] F. Omerustaoglu, C. O. Sakar, G. Kar, "Distracted driver detection by combining in-vehicle and image data using deep learning," Applied Soft Computing, vol. 96, article 106657, DOI: 10.1016/j.asoc.2020.106657, 2020.
[124] A. Lambay, Y. Liu, P. Morgan, Z. Ji, "A data-driven fatigue prediction using recurrent neural networks," 2021 3rd International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), DOI: 10.1109/HORA52670.2021.9461377, 2021.
Copyright © 2022 Dengfeng Zhao et al. This work is licensed under http://creativecommons.org/licenses/by/4.0/ (the “License”).
Abstract
Frequent traffic accidents cause a large number of casualties and substantial financial losses every year. Among the many contributing factors, driving behavior is one of the most important to examine. Driving behaviors mainly include regular maneuvers such as car-following and lane changing, as well as risky behaviors such as distracted, fatigued, or aggressive driving, and their recognition supports a wide range of tasks in traffic engineering. Accurate and reliable driving behavior recognition is therefore of great significance for vehicle driving safety. This paper first summarizes vehicle multisensor information, CAN bus data acquisition systems, and typical feature extraction methods, and then reviews driving behavior recognition models based on machine learning and deep learning. A detailed analysis of the random forests, support vector machines, convolutional neural networks, and recurrent neural networks used to build such models leads to the following findings: models constructed with traditional machine learning are relatively mature, but their recognition accuracy depends heavily on feature extraction, data scale, and model structure; deep learning models based on neural networks have achieved high recognition accuracy and, with the development of big data, artificial intelligence technology, and computer hardware, may gradually become the mainstream approach to constructing driving behavior models. Finally, the paper identifies issues that require further exploration, providing reference and inspiration for researchers who wish to study driving behavior recognition models in greater depth.
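To make the contrast drawn in the abstract concrete, the short sketch below (not taken from the reviewed works; the signal names, window length, class definitions, and parameters are purely illustrative assumptions) trains a random forest on hand-crafted statistics computed from simulated driving windows, the classical route whose accuracy hinges on the chosen features, whereas a deep model such as a 1D-CNN or LSTM would consume the raw windows directly and learn its features end to end.

```python
# Minimal illustrative sketch (not from the reviewed papers): a feature-based
# random forest for driving behavior classification. All signal names, window
# sizes, and class definitions are assumptions made for this example only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_window(label):
    """Fake 5 s window at 50 Hz of [speed, longitudinal acceleration, steering angle]."""
    t = np.linspace(0.0, 5.0, 250)
    speed = 20.0 + rng.normal(0.0, 1.0, 250)        # m/s
    accel = rng.normal(0.0, 0.3, 250)               # m/s^2
    steer = rng.normal(0.0, 2.0, 250)               # deg
    if label == 1:                                  # "aggressive" class: stronger dynamics
        accel += rng.normal(0.0, 1.2, 250)
        steer += 10.0 * np.sin(2.0 * np.pi * 0.5 * t)
    return np.stack([speed, accel, steer], axis=1)  # shape (250, 3)

X_raw = np.stack([synthetic_window(lbl) for lbl in (0, 1) for _ in range(300)])
y = np.array([lbl for lbl in (0, 1) for _ in range(300)])

def handcrafted_features(window):
    """Per-channel mean, standard deviation, and mean absolute first difference."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

X_feat = np.array([handcrafted_features(w) for w in X_raw])
X_tr, X_te, y_tr, y_te = train_test_split(X_feat, y, test_size=0.3,
                                          random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("Random forest accuracy:", accuracy_score(y_te, clf.predict(X_te)))
# A deep model (e.g., a 1D-CNN or an LSTM) would instead take X_raw, of shape
# (n_windows, 250, 3), as input and learn the temporal features itself,
# typically at the cost of more training data and longer training time.
```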
Author Affiliations
1 Henan Provincial Key Laboratory of Intelligent Manufacturing of Mechanical Equipment, Mechanical and Electrical Engineering Institute, Zhengzhou University of Light Industry, Zhengzhou, 450002 Henan, China
2 Zhengzhou Senpeng Electronic Technology Co., LTD, Zhengzhou, 450052 Henan, China