1. Introduction
Urban rail transit is one of the most popular modes of urban transportation, and it is developing rapidly [1]. Fully automatic, driverless metro trains are a particularly active research topic in urban rail transit [2–4], and their most critical requirement is rapid state analysis and emergency handling when a train encounters an obstacle.
According to statistics on rail train safety accidents in recent years, many factors affect subway operation safety, chiefly management quality, equipment reliability, and obstacles on the track [5, 6]. Moreover, because the subway environment is largely enclosed and poorly lit, the operating and lighting conditions are insufficient for traditional detection methods to identify track obstacles. In addition, the high running speed of subway trains challenges safe and stable operation and creates potential safety hazards [7]. It is therefore particularly important to develop a sound and efficient method for perceiving and recognizing subway obstacles.
The traditional approach uses contact detection: the train brakes urgently only after an obstacle collides with the detection beam. A contact obstacle detection system can reliably find the target ahead and stop the train, but by the time the target is discovered and the train has stopped, the train may already have suffered a significant impact, so the safety of the subway and its passengers cannot be guaranteed [8, 9].
With the development of sensor technology, state data can be acquired by installing detection devices along specific tracks [10]. For example, a radar or RF device mounted at the front of a subway train can collect status data for the track ahead before the train contacts an obstacle and upload it to the monitoring platform for analysis and decision-making, enabling effective and stable braking and deceleration and greatly improving operational safety.
However, noncontact train detection still has problems. First, the detection device is only a state acquisition device: because sensors differ in nature and installation environment, interpreting objects with a single sensor cannot guarantee data reliability, which degrades detection accuracy [11]. Second, obstacle identification involves too many processing stages. Relying on the subway monitoring platform for detection and analysis can improve accuracy to some extent, but it cannot meet the speed requirements of track foreign-object identification.
2. Related Work
Because the line of sight in the subway operating environment is limited, foreign objects on the track are sometimes difficult to distinguish, and safety accidents caused by collisions with obstacles typically involve large losses and serious harm. A fast and accurate method for autonomous obstacle recognition is therefore particularly important for safe locomotive operation.
The traditional approach uses a contact obstacle detection system: a detection beam is installed at the bottom of the train head, and detection occurs when the beam touches an obstacle. Sensors detect the deformation of the beam, and the train system triggers an emergency brake accordingly [12]. However, such a system can only brake the train after the beam has already struck the obstacle. Since subway trains run very fast, even a detected obstacle can damage the train, and the safety of the train cannot be ensured.
With the development of sensor technology, rail trains began to use radar, radiofrequency, or stereo cameras to detect foreign objects. However, every single-sensor technology has shortcomings: infrared cameras perform poorly at high temperatures, stereo cameras can hardly collect data in bad weather, and radar returns degrade in harsh environments. Forming sensor clusters from heterogeneous sensors at the network edge and fusing their data can overcome the shortcomings of any single sensor, improve the detection results of the system, and support stable train operation.
Thanks to the development of intelligent algorithms and big data technology, deep network techniques have been applied to the analysis of subway operation status. Based on state data uploaded by sensors at the network edge, the continuous training of a multilayer network structure [13] realizes noncontact perception and recognition of obstacles on the track. Reference [14] proposed a deep learning segmentation algorithm for railway detection based on the RailNet network model, in which a multilayer network continuously extracts features from a sample dataset to achieve noncontact recognition of foreign objects. Reference [15] improved a deep convolutional neural network (CNN) to construct a subway operation detection network and used transfer learning on images of subway tunnel facilities to improve obstacle detection performance. Reference [16] proposed a CNN-based railway area detection method that achieves pixel-level classification of track areas. Reference [17] combined a semantic segmentation algorithm with a CNN to accurately recognize the track area and the train ahead. Reference [18] introduced a LeNet-5 CNN to detect rail transit obstacles and provide intelligent early-warning information to the train control system. These methods can perceive and identify obstacles before the train contacts them. However, relying only on single-source sensor data for decision analysis leaves the data samples unreliable and risks missing valid data. On the other hand, overreliance on the subway monitoring cloud platform can improve accuracy, but real-time performance suffers [19], which may delay braking when an obstacle appears and risk collisions and fatalities.
To solve these problems, this paper proposes a deep learning-based subway obstacle perception and identification method under a cloud-edge collaboration architecture. The innovations of this paper are as follows:
(1) Propose a track area identification method based on the Mask RCNN network model to meet the demand for autonomous and efficient identification of train running tracks in real scenarios

(2) Overcome the incompleteness of single-sensor data collection by performing feature-level fusion of sensor cluster data at the network edge, enhancing the credibility of the analysis data and thus the reliability of the entire detection system

(3) Based on this reliable dataset, use the YOLOv3 and DeepSort algorithms to train a detection network on the cloud analysis platform; the network is then executed at the edge for rapid perception and control, greatly improving the safety and reliability of train operation
3. Method Framework
3.1. Overall Framework
The architecture proposed in this paper combines cloud-side (metro monitoring cloud platform) decision-making with edge-side (train) monitoring. Through cooperation between the cloud and the edge, efficient perception and identification of subway obstacles can be achieved to support the safe and reliable operation of the rail subway [20]. Figure 1 shows the overall block diagram of the proposed method.
[figure omitted; refer to PDF]
As shown in Figure 1, the proposed method supports reliable subway operation through cloud-edge collaboration: decision analysis on the cloud side and real-time control on the edge side. First, the position of the rails is detected by cameras, and deep learning-based rail identification determines the obstacle detection area along the train's path. The edge layer fuses multisensor data and executes the trained detection model issued by the subway monitoring cloud platform to detect obstacles in real time. The cloud platform uses deep learning to train on track environments and obstacle characteristics in various scenarios, generates detection models, and periodically distributes them to the edge layer for execution.
3.2. Rail Perception Based on Deep Learning
Traditional analysis methods have limitations and can hardly support autonomous rail identification and dangerous-area division for rail trains. In this paper, the position of the railroad track is detected by cameras, and the track area of the subway train is delineated by a deep learning algorithm on the cloud side.
First, the features of the rail training samples are extracted with a CNN; then, a region proposal network (RPN) is used for training. Mask RCNN is responsible for rail detection and for identifying dangerous areas [21]. As shown in Figure 2, a region proposal network is used to extract candidate boxes efficiently.
[figure omitted; refer to PDF]
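Once Mask RCNN has produced a binary mask for the rails, the dangerous area can be delineated by widening that mask by a safety margin. The sketch below is an illustrative assumption, not the paper's implementation: it dilates a toy rail mask with a 4-neighbourhood using pure NumPy.

```python
import numpy as np

def expand_danger_zone(rail_mask: np.ndarray, margin: int) -> np.ndarray:
    """Dilate a binary rail mask by `margin` pixels (4-neighbourhood)
    to obtain an obstacle detection area around the track.
    Note: np.roll wraps at image borders; fine for this toy example."""
    zone = rail_mask.astype(bool)
    for _ in range(margin):
        zone = (zone |
                np.roll(zone, 1, axis=0) | np.roll(zone, -1, axis=0) |
                np.roll(zone, 1, axis=1) | np.roll(zone, -1, axis=1))
    return zone

# toy 5x5 mask with a single rail pixel in the centre
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
zone = expand_danger_zone(mask, 1)   # centre pixel plus its 4 neighbours
```

In practice the margin would be chosen from the train's loading gauge; here it is a hypothetical one-pixel value for readability.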
The RPN is a fully convolutional network dedicated to extracting candidate regions. It processes the previously extracted feature map, searches for candidate boxes that may contain the target region, and predicts a category score for each box.
The core idea of the RPN is to generate candidate boxes directly with a CNN that scans the image with a sliding window. The RPN produces two outputs for each anchor. The first is the anchor category: after screening and filtering all generated anchor boxes, a SoftMax classifier judges whether each anchor belongs to the foreground or the background, that is, whether or not it is a railroad track, thereby realizing track identification. The second is box refinement: a bounding-box regression function corrects the anchor box to form a more accurate candidate box. The feature map extracted by the CNN is fed into the RPN, as shown in Figure 3.
[figure omitted; refer to PDF]
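The two RPN outputs described above can be sketched numerically. The snippet below is a minimal illustration, with made-up score and regression values: a SoftMax over the two anchor classes (background vs. rail), and the standard R-CNN box parameterisation applied to an anchor.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decode_anchor(anchor, deltas):
    """Apply predicted refinements (dx, dy, dw, dh) to an anchor box
    (cx, cy, w, h) using the usual R-CNN box parameterisation."""
    cx, cy, w, h = anchor
    dx, dy, dw, dh = deltas
    return np.array([cx + dx * w, cy + dy * h,
                     w * np.exp(dw), h * np.exp(dh)])

# one anchor with hypothetical network outputs
scores = softmax(np.array([0.2, 2.3]))           # [background, rail]
box = decode_anchor(np.array([50., 60., 20., 40.]),
                    np.array([0.1, -0.05, 0.2, 0.0]))
```

Here the higher second score marks the anchor as foreground (rail), and the decoded box shifts and widens the anchor slightly.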
The input of the RPN is an image of arbitrary size, and the output is a series of candidate boxes of different sizes. For each candidate box, the RPN produces two outputs: the probability that it contains the target object and the position of the target object relative to the image.
[figure omitted; refer to PDF]
An anchor is marked as a positive sample if its prediction box has the largest intersection over union (IoU) with the ground-truth box, or if the IoU between the predicted box and the ground-truth box exceeds 0.33. If the IoU is below 0.33, the anchor is marked as a negative sample. The remaining anchors are neither positive nor negative and do not participate in training. The loss function is the cross-entropy objective, whose expression is
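The IoU computation and the labeling rule above can be sketched in a few lines (the boxes below are toy values; the 0.33 thresholds are the ones stated in the text):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def label_anchor(iou_val, pos_thr=0.33, neg_thr=0.33):
    """Labeling rule from the text: positive (1) above pos_thr,
    negative (0) below neg_thr, otherwise ignored (-1)."""
    if iou_val > pos_thr:
        return 1
    if iou_val < neg_thr:
        return 0
    return -1

pred = (0.0, 0.0, 2.0, 2.0)
truth = (1.0, 1.0, 3.0, 3.0)
v = iou(pred, truth)        # overlap 1, union 7
label = label_anchor(v)     # 1/7 < 0.33 -> negative sample
```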
Compared with the quadratic objective function, the gradient grows with the training error, so parameter updates speed up and training converges faster. The reason is as follows:
Taking the gradient with respect to the parameter:
The entire loss function is
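The speed-up argument can be made concrete for a single sigmoid unit $a=\sigma(z)$ with $z=wx+b$ (a standard textbook derivation, supplied here for clarity rather than taken verbatim from the paper):

```latex
% Quadratic cost: the gradient carries a factor \sigma'(z), which vanishes
% when the unit saturates, so learning slows down.
C_{\mathrm{q}} = \tfrac{1}{2}(a-y)^2, \qquad
\frac{\partial C_{\mathrm{q}}}{\partial w} = (a-y)\,\sigma'(z)\,x

% Cross-entropy cost: using \sigma'(z) = a(1-a), the \sigma' factor cancels
% and the gradient is proportional to the error (a-y) itself.
C_{\mathrm{ce}} = -\bigl[y\ln a + (1-y)\ln(1-a)\bigr], \qquad
\frac{\partial C_{\mathrm{ce}}}{\partial w} = (a-y)\,x
```

The larger the error $(a-y)$, the larger the cross-entropy gradient, which is exactly the "larger error, faster adjustment" behaviour described above.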
3.3. Edge-Side Multisensor Fusion
A single sensor has detection limitations. This paper uses sensor clusters to collect train status data when detecting rail train faults and tightly integrates the multiple status streams to achieve global situational awareness of the fault state. Multisensor feature-level data fusion greatly improves the system's ability to perceive the environment and thus the intelligence of the entire detection platform [22].
As shown in Figure 5, the feature-level fusion used in this paper is an intermediate level of data fusion. Feature vectors that reflect the attributes of the monitored physical quantities are extracted from the collected data; fusing them constitutes feature fusion of the monitored objects. In feature-level fusion, the representative features extracted from each sensor are combined into a single feature vector, which is then processed with pattern-recognition methods. Feature-level fusion compresses the information, which facilitates real-time processing. In this paper, the wavelet transform is used to fuse the heterogeneous sensor cluster datasets.
[figure omitted; refer to PDF]
After the images collected by the multisensor cluster are precleaned, the data sample set is decomposed into three bands
Take the low-frequency coefficients
RGB three-channel synthesis is applied to the three band images to obtain the fused, reliable dataset.
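A minimal sketch of wavelet-based fusion, under simplifying assumptions (a single-level Haar transform and a common fusion rule: average the low-frequency band, keep the larger-magnitude detail coefficients); the paper's exact wavelet and fusion rule are not specified:

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar transform: returns (LL, (LH, HL, HH))."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return (a + b + c + d) / 4, ((a + b - c - d) / 4,
                                 (a - b + c - d) / 4,
                                 (a - b - c + d) / 4)

def ihaar2d(ll, details):
    """Exact inverse of haar2d."""
    lh, hl, hh = details
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse(img1, img2):
    """Average low-frequency bands, take max-magnitude detail bands."""
    ll1, d1 = haar2d(img1)
    ll2, d2 = haar2d(img2)
    ll = (ll1 + ll2) / 2
    det = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                for x, y in zip(d1, d2))
    return ihaar2d(ll, det)

img = np.arange(16, dtype=float).reshape(4, 4)
fused = fuse(img, img)   # fusing an image with itself recovers it
```

Fusing two identical images returns the original, which is a quick sanity check that the transform pair is lossless.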
Sensor calibration is particularly important in multisensor data fusion. To simplify the computation, this paper takes the sensor coordinate system as the unified coordinate system. We obtain the external parameters from the joint calibration of the camera and LIDAR, unifying the two coordinate systems. The LIDAR point cloud is then mapped into the image coordinate system, completing the spatial synchronization of the sensors. Figure 6 is a schematic diagram of the joint calibration method.
[figure omitted; refer to PDF]
The conversion formula for the joint calibration of LIDAR and camera is as follows:
The relationship between the LIDAR coordinate system and pixel coordinates is as follows:
The joint calibration process is as follows:
(1) Run the camera and LIDAR nodes, start the two sensors, and record and save the joint camera-LIDAR file

(2) Restart the camera and LIDAR nodes and import the parameter file obtained from the previous calibration

(3) Adjust the point cloud viewing angle so that both the image and the point cloud show the complete calibration board, and capture multiple frames of images and point clouds

(4) Align the point cloud with the image, that is, extract corresponding points in both, and compute by calculation the external parameters of the joint camera-LIDAR calibration
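The LIDAR-to-pixel mapping used after calibration can be sketched in a few lines of NumPy. The intrinsic matrix K, rotation R, and translation t below are illustrative values, not the paper's calibration results:

```python
import numpy as np

def project_lidar_to_pixels(points, R, t, K):
    """Map LIDAR points (N, 3) to pixel coordinates:
    p ~ K (R X + t), followed by perspective division by depth."""
    cam = points @ R.T + t            # LIDAR frame -> camera frame
    uvw = cam @ K.T                   # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]   # divide by depth

# hypothetical calibration: identity rotation, zero offset,
# focal length 500 px, principal point (320, 240)
K = np.array([[500.,   0., 320.],
              [  0., 500., 240.],
              [  0.,   0.,   1.]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[1.0, 0.5, 10.0]])   # one point 10 m ahead of the camera
uv = project_lidar_to_pixels(pts, R, t, K)
```

With these toy extrinsics the point lands at pixel (370, 265): 500·1/10 + 320 horizontally and 500·0.5/10 + 240 vertically.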
3.4. Obstacle Recognition Based on Deep Learning on Cloud Edge
Based on the reliable dataset provided by the edge-side sensor cluster, this paper uses the YOLOv3 and DeepSort algorithms on the subway monitoring cloud platform to iteratively learn the rail train status data for each scene, constructing and refining the detection network model. The trained model is transferred to edge-side equipment so that the train can decelerate and avoid obstacles quickly in real time.
A traditional CNN requires long detection times when processing large amounts of data. The YOLOv3 network model processes data faster than a plain CNN and is widely used in real-time detection and analysis. The YOLOv3 network combines a multilayer convolutional backbone with pooling and fully connected layers. The input image size is expanded to
The YOLOv3 algorithm divides the input image into grids. If a detection target falls into a grid cell, that cell is responsible for detecting the object. Each grid cell predicts
If the target does not fall into the detection grid,
In the
In this way, the class-specific confidence score of each regression box is obtained. This product combines the probability of the class predicted within the box with whether the box contains an object and how accurate its coordinates are.
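The product described above is the standard YOLO class-specific confidence, Pr(class | object) · Pr(object) · IoU(pred, truth). A tiny numeric illustration with made-up probabilities:

```python
import numpy as np

def class_confidence(p_class_given_obj, p_obj, iou_pred):
    """YOLO-style class-specific confidence score:
    Pr(class | object) * Pr(object) * IoU(pred, truth)."""
    return p_class_given_obj * p_obj * iou_pred

# hypothetical grid cell: three classes, objectness 0.9, predicted IoU 0.75
scores = class_confidence(np.array([0.8, 0.15, 0.05]), 0.9, 0.75)
```

The first class keeps the highest score (0.8 · 0.9 · 0.75 = 0.54), so that box would be reported for class 0 if it survives thresholding.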
The steps of using YOLOv3 for target detection are shown in Figure 7:
[figure omitted; refer to PDF]
Step 1: after size transformation, feed the input left-eye image frame into the YOLOv3 network and divide it into

Step 2: after each grid cell is processed by the YOLOv3 network, two prediction boxes

Step 3: determine whether the object falls into the grid cell. If it does not, set Confidence to 0. If it does, output the predicted Confidence value, and the prediction box

Step 4: compare the predicted Confidence value with the threshold

Step 5: determine whether the target position from the previous module falls into the reserved position window. If it does, output the recognition result; otherwise, discard it.
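Steps 3-5 above amount to a two-stage filter: drop low-confidence boxes, then keep only boxes inside the reserved window. A minimal sketch with hypothetical box and window values:

```python
def filter_detections(detections, conf_thr=0.5, window=None):
    """Steps 3-5: discard grids without objects (Confidence below the
    threshold) and keep only boxes whose centre lies in the reserved
    position window. `detections`: list of (confidence, (x, y, w, h))."""
    kept = []
    for conf, (x, y, w, h) in detections:
        if conf < conf_thr:          # step 3/4: no object or weak score
            continue
        if window is not None:       # step 5: reserved position window
            wx1, wy1, wx2, wy2 = window
            if not (wx1 <= x <= wx2 and wy1 <= y <= wy2):
                continue
        kept.append((conf, (x, y, w, h)))
    return kept

# toy detections: confident inside window / weak / confident outside
dets = [(0.9, (10, 10, 5, 5)),
        (0.3, (12, 12, 4, 4)),
        (0.8, (100, 100, 5, 5))]
kept = filter_detections(dets, conf_thr=0.5, window=(0, 0, 50, 50))
```

Only the first detection survives both checks; the threshold 0.5 and the window are illustrative values, not the paper's settings.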
Note, however, that rail trains generally move at high speed. Adding the DeepSort framework to the obstacle recognition network and using a motion model together with appearance information for data association achieves end-to-end fast multitarget visual tracking. The vehicle target can thus be tracked well under complex conditions such as changing illumination, fast motion, and occlusion [23, 24].
The DeepSort algorithm adds deep association features to the Sort algorithm, and its tracking quality builds on accurate detection results. The prediction module uses Kalman filtering, and the update module matches detections to tracks using an IoU cost with the Hungarian algorithm. The tracking process is shown in Figure 8.
[figure omitted; refer to PDF]
To prevent one detection from covering multiple targets, or multiple detectors from firing on one target, in multitarget tracking, the DeepSort algorithm uses an eight-dimensional state space
The Mahalanobis distance indicates how far a detection deviates from the average position of the target trajectory; it measures the match between the target state predicted by the Kalman filter and the detected value. We use
The detected targets are screened by the Mahalanobis distance, and the threshold
When motion uncertainty is low, the Mahalanobis distance measures the relationship between detection and trajectory well. But when the camera shakes violently, this association fails, so a CNN is introduced for appearance-based association. We obtain the feature vector
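The Mahalanobis gating step can be sketched as follows. For readability this uses a hypothetical 2-D measurement (u, v); DeepSort itself gates in four dimensions against the chi-square 95% quantile, with the predicted mean and covariance coming from its Kalman filter:

```python
import numpy as np

def mahalanobis_sq(detection, mean, cov):
    """Squared Mahalanobis distance between a detection and the
    filter-predicted track state (mean, covariance)."""
    d = detection - mean
    return float(d @ np.linalg.inv(cov) @ d)

GATE = 5.9915                         # chi-square 95% quantile, 2 d.o.f.
mean = np.array([100.0, 50.0])        # predicted track position (toy)
cov = np.diag([4.0, 4.0])             # predicted uncertainty (toy)

d2 = mahalanobis_sq(np.array([102.0, 51.0]), mean, cov)
accepted = d2 <= GATE                 # association passes the gate
```

Here the detection lies 1.25 squared-Mahalanobis units from the prediction, well inside the gate, so it would be passed on to the Hungarian matching step.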
The YOLOv3 obstacle detection model trained for complex environments provides the detections: its abnormal-target detection results serve as the real-time input of the DeepSort tracker, compensating for DeepSort's own shortcomings.
4. Experiment and Comparative Analysis
To verify the feasibility and accuracy of the proposed method for detecting and identifying subway track obstacles, references [15], [17], and [18] are used as comparison methods. The proposed and comparison methods are run in the same experimental setting for simulation. The experimental environment is listed in Table 1.
Table 1
Operation scenarios of simulation experiment.
| Environment | Item | Value |
| Software | Operating system | Windows 10 |
| Software | Deep learning framework | PaddlePaddle |
| Software | Program editor | PyCharm |
| Hardware | CPU | Intel Core i7 9700 |
| Hardware | GPU | GeForce GTX-1650 |
| Hardware | Running memory | 32 GB |
The experimental dataset is the actual operation dataset of a subway in a Chinese city in 2020, from which the rail train operating status data of one day in July are randomly extracted. The samples are recorded at 30 frames/s, and the pixel size is
The main network parameters of the subway obstacle analysis method proposed in this paper are shown in Table 2.
Table 2
Network parameter setting.
Parameter | Value |
Weight decay | 0.0012 |
Momentum parameter | 0.97 |
Initial learning rate | 0.001 |
Maximum learning rate | 0.027 |
Training batch | 200 |
4.1. Accuracy Analysis of Track Recognition
To verify the feasibility of the proposed method for subway track recognition, we build the detection network with the above parameters and reproduce the methods of references [15], [17], and [18] in the same experimental scenario. Figure 9 shows the track detection results of each method.
[figures omitted; refer to PDF]
As shown in Figure 9, at the 45th iteration of the proposed method, the network loss drops to 0.06, while the track recognition accuracy rises to 98.9%, nearly 100%, and remains stable. References [15], [17], and [18] reach stable network performance at 120, 90, and 60 iterations, respectively. The comparison methods are thus not only slower to converge but also slightly less accurate: reference [15] is 11.6% below the proposed method, reference [17] is 21.8% below it, and reference [18] reaches an accuracy of only 72.3%.
4.2. Performance Analysis of Obstacle Detection
Detecting and handling obstacles before the subway reaches them is particularly important, so we also examine the obstacle recognition performance of each method. Figure 10 compares obstacle detection performance across the recognition methods.
[figure omitted; refer to PDF]
As shown in Figure 10, the proposed method distinguishes obstacles effectively by the 50th iteration, with a recognition accuracy of 98.9%. The accuracy of reference [15] is 11.2% lower than the proposed method, and that of reference [18] is 14.6% lower; reference [17] does not find an optimal solution during the iterations. The reason is that we perform feature-level fusion of the sensor cluster data on the edge side, providing reliable and complete data for the detection network, whereas the comparison methods apply only simple preprocessing to the collected samples. For a deep network, the quality of the dataset samples largely determines obstacle recognition accuracy. In addition, reference [17] combines a semantic segmentation network with a deep learning network; its complex structure can get stuck in local optima, which limits its recognition performance.
At the same time, we also analyzed the calculation efficiency of different methods, and the results are shown in Table 3.
Table 3
Statistical table of target detection experiment results.
Method | Analysis time (s) |
The proposed method | 1.43 |
Reference [15] | 2.79 |
Reference [17] | — |
Reference [18] | 5.42 |
According to Table 3, with fast, efficient action control at the network edge via edge computing, the proposed method detects obstacles on the track within 1.43 s, while the comparison methods all incur delays: reference [15] takes 2.79 s, reference [18] takes 5.42 s, and reference [17] does not complete reliable detection of track obstacles within the allotted time. Moreover, the YOLOv3 network used here is a one-stage detector that extracts features from the sample dataset directly and efficiently, whereas the CNNs in the comparison literature must first classify the samples and then extract features. This demonstrates that the proposed method analyzes obstacles quickly and efficiently.
4.3. Target Tracking Analysis
At the same time, we also analyze the performance of multitarget tracking. Table 4 shows the performance of multitarget tracking analysis under different methods.
Table 4
Statistical table of multitarget tracking analysis results.
Method | Actual number of obstacles | Number of obstacles detected | Accuracy (%) |
The proposed method | 100 | 96 | 96 |
Reference [15] | 100 | 91 | 91 |
Reference [17] | 100 | 72 | 61 |
Reference [18] | 100 | 64 | 64 |
As Table 4 shows, thanks to the DeepSort algorithm, the proposed method achieves fast multitarget visual tracking at the network edge with a recognition accuracy of 96%. The comparison methods fall clearly short: the accuracies of references [15], [17], and [18] are 91%, 61%, and 64%, respectively.
In summary, the proposed method meets the need for efficient obstacle identification in actual subway operation. Compared with current analysis methods, it has better image feature mining and analysis capabilities and thus reliably supports the stable operation of rail trains.
5. Conclusion
An efficient and accurate obstacle identification method is vital for the stable and safe operation of the subway. Based on a cloud-edge cooperation mode and deep learning technology, this paper proposes a fast and effective rail transit obstacle recognition method. The Mask RCNN algorithm is applied to route identification for metro rail transit, safeguarding the safe directional operation of trains. Using the fast local computing of edge computing, track state perception and foreign object recognition are realized on the network edge side with the YOLOv3 and DeepSort algorithms. Simulation shows that the proposed method achieves faster and more accurate track obstacle analysis in actual complex scenes.
Edge computing is, by nature, lightweight on-site computing, yet the memory and computing power of smart edge devices are severely constrained by hardware cost. To further reduce the computational burden, future work will study lightweight versions of the deep learning detection network, saving memory, reducing computational complexity, and enabling sensitive, efficient identification of track obstacles in complex real-world scenes.
Acknowledgments
The work described in this paper was fully supported by a grant from the Natural Science Foundation of colleges and universities of Jiangsu Province (No. 18KJD510009).
[1] C. Wu, X. Qiang, Y. Wang, C. Yan, G. Zhai, "Efficient detection of obstacles on tramways using adaptive multilevel thresholding and region growing methods," Proceedings of the Institution of Mechanical Engineers, Part F: Journal of Rail and Rapid Transit, vol. 232 no. 5, pp. 1375-1384, DOI: 10.1177/0954409717720840, 2018.
[2] D. Li, L. Deng, Z. Cai, "Design of traffic object recognition system based on machine learning," Neural Computing and Applications, vol. 33 no. 14, pp. 8143-8156, DOI: 10.1007/s00521-020-04912-9, 2021.
[3] S. Q. Guo, Y. Dong, "Research on obstacle detection method in front of train running on straight track based on radar," Journal of Railway Science and engineering, vol. 17 no. 1, pp. 224-231, 2020.
[4] P. Pavel, O. Andrey, "Autonomous train - the Russian perspective," Automation, Communication and Informatics, vol. 8 no. 1, 2019.
[5] X. K. Ding, X. D. Hu, Q. Wei, "High speed train line safety monitoring technology based on optical measurement," Journal of Railway Science and engineering, vol. 15 no. 9, pp. 2224-2231, 2018.
[6] J. Li, F. Zhou, T. Ye, "Real-world railway traffic detection based on faster better network," IEEE Access, vol. 6 no. 1, pp. 68730-68739, DOI: 10.1109/ACCESS.2018.2879270, 2018.
[7] S. Shi, "Review of active obstacle detection in rail transit system," Mechanical and electrical engineering technology, vol. 50 no. 6, pp. 212-216, 2021.
[8] G. R. Zhai, C. H. Hu, L. J. Zhang, "Reliability design of obstacle and derailment detection device," Heilongjiang sci tech information, vol. 34, pp. 155-155, 2014.
[9] H. Mukojima, D. Deguchi, Y. Kawanishi, "Moving camera background-subtraction for obstacle detection on railway tracks," 2016 IEEE International Conference on Image Processing (ICIP), 2016.
[10] X. Zhang, M. Zhou, P. Qiu, Y. Huang, J. Li, "Radar and vision fusion for the real-time obstacle detection and identification," Industrial Robot: the international journal of robotics research and application, vol. 46 no. 3, pp. 391-395, DOI: 10.1108/IR-06-2018-0113, 2019.
[11] L. B. Chang, S. B. Zhang, H. M. Du, "Position-aware lightweight object detectors with depthwise separable convolutions," Journal of Real-Time Image Processing, vol. 18 no. 3, pp. 857-871, 2021.
[12] S. T. Ding, S. R. Qu, "Traffic target region of interest detection based on deep learning," Chinese Journal of highways, vol. 31 no. 9, pp. 167-174, 2018.
[13] T. Ye, Z. Zhang, X. Zhang, F. Zhou, "Autonomous railway traffic object detection using feature-enhanced single-shot detector," IEEE Access, vol. 8 no. 1, pp. 145182-145193, DOI: 10.1109/ACCESS.2020.3015251, 2020.
[14] Y. Wang, L. Wang, Y. H. Hu, J. Qiu, "RailNet: a segmentation network for railroad detection," IEEE Access, vol. 7, no. 1, pp. 143772-143779, DOI: 10.1109/ACCESS.2019.2945633, 2019.
[15] D. He, Z. Jiang, J. Chen, J. Liu, J. Miao, A. Shah, "Classification of metro facilities with deep neural networks," Journal of Advanced Transportation, vol. 2019, no. 1, DOI: 10.1155/2019/6782803, 2019.
[16] Z. Wang, X. Wu, G. Yu, M. Li, "Efficient rail area detection using convolutional neural network," IEEE Access, vol. 6, no. 1, pp. 77656-77664, DOI: 10.1109/ACCESS.2018.2883704, 2018.
[17] Q. Zhang, F. Yang, B. Zhang, "Application of intelligent obstacle detection system in the automatic operation of Beijing new airport line," Railway Rolling Stock, vol. 39, no. 6, pp. 114-118, 2019.
[18] Y. Z. Deng, M. Lin, "Recognition method of rail transit obstacles based on improved LeNet-5," Industrial Control Computer, vol. 33, no. 1, pp. 63-66, 2020.
[19] M. A. Albreem, A. M. Sheikh, M. H. Alsharif, M. Jusoh, M. N. M. Yasin, "Green internet of things (GIoT): applications, practices, awareness, and challenges," IEEE Access, vol. 9, no. 1, pp. 38833-38858, DOI: 10.1109/ACCESS.2021.3061697, 2021.
[20] T. Qiu, J. Chi, X. Zhou, Z. Ning, M. Atiquzzaman, D. O. Wu, "Edge computing in industrial internet of things: architecture, advances and challenges," IEEE Communications Surveys & Tutorials, vol. 22, no. 4, pp. 2462-2488, DOI: 10.1109/COMST.2020.3009103, 2020.
[21] G. Han, J. P. Su, C. W. Zhang, "A method based on multi-convolution layers joint and generative adversarial networks for vehicle detection," KSII Transactions on Internet and Information Systems, vol. 13, no. 4, pp. 1795-1811, DOI: 10.3837/tiis.2019.04.003, 2019.
[22] P. F. Alcantarilla, S. Stent, G. Ros, R. Arroyo, R. Gherardi, "Street-view change detection with deconvolutional networks," Autonomous Robots, vol. 42, no. 7, pp. 1301-1322, DOI: 10.1007/s10514-018-9734-5, 2018.
[23] Z. X. Li, W. Sun, M. M. Liu, "Research on vehicle detection and tracking algorithm in traffic monitoring scene," Computer Engineering and Applications, vol. 57, no. 8, pp. 103-111, 2021.
[24] X. Lin, C.-T. Li, V. Sanchez, C. Maple, "On the detection-to-track association for online multi-object tracking," Pattern Recognition Letters, vol. 146, no. 9, pp. 200-207, DOI: 10.1016/j.patrec.2021.03.022, 2021.
Copyright © 2021 Li Feng et al. This work is licensed under http://creativecommons.org/licenses/by/4.0/ (the “License”).
Abstract
Traditional train-obstacle analysis methods use isomorphic sensors to acquire state information and complete detection and identification at the remote end of a network. A single type of data sample and the extra processing links reduce both the accuracy and the speed of obstacle analysis for subway trains. To solve this problem, this paper proposes a subway obstacle perception and identification method based on cloud-edge collaboration. The subway monitoring cloud platform handles the training and construction of the detection model, while the network edge performs situation awareness of the track state and takes real-time action when the train encounters an obstacle. First, the railroad track position is detected by cameras, and the subway running track is identified with the Mask RCNN algorithm to determine the obstacle detection area during train operation. At the network edge, feature-level fusion of the data collected by the sensor cluster provides reliable data support for detection. Then, based on the DeepSort and YOLOv3 network models, the subway obstacle detection model is constructed on the subway monitoring cloud platform, and the trained model is distributed to the network edge to realize fast and efficient perception of, and response to, obstacles. Finally, simulation verification is carried out on actually collected datasets. Experimental results show that the proposed method achieves good detection accuracy and efficiency, maintaining 98.9% obstacle detection accuracy and a 1.43 s recognition time in complex scenes.
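The tracking-by-detection pipeline summarized above pairs per-frame YOLOv3 detections with DeepSort tracks. The core of that pairing is detection-to-track association; below is a minimal pure-Python sketch of a greedy IoU-based association step. This is a simplified stand-in, not the paper's implementation: the full DeepSort also uses a Kalman-filter motion model and appearance embeddings in its cost, and the function and variable names here are illustrative assumptions.

```python
# Simplified detection-to-track association (greedy IoU matching).
# Boxes are (x1, y1, x2, y2) tuples in pixel coordinates.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match existing track boxes to new detection boxes by IoU.

    Returns (matches, unmatched_tracks, unmatched_detections); matches
    is a list of (track_index, detection_index) pairs. Unmatched tracks
    would be aged out; unmatched detections spawn new tracks.
    """
    # Score every track/detection pair, best overlap first.
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_threshold:
            break  # remaining pairs overlap too little to match
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    unmatched_t = [i for i in range(len(tracks)) if i not in used_t]
    unmatched_d = [i for i in range(len(detections)) if i not in used_d]
    return matches, unmatched_t, unmatched_d
```

In a production tracker the greedy loop is typically replaced by an optimal assignment (Hungarian algorithm) over a combined motion-plus-appearance cost matrix, which is what allows DeepSort to keep identities through short occlusions.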