Abstract: In this paper, a deep learning based approach for frontal gait recognition was investigated and presented. In today's world, gait recognition is an interesting method for person identification. Person identification is required in various fields of human life, and many different methods are used. Some of them are fingerprint recognition, methods based on different elements of the human eye such as the retina or iris, face recognition, speech or voice recognition, gait recognition, etc. It is important to note that some of them, such as gait or face recognition, are suitable for identification at a greater distance and without interaction with a device that captures the features on which they are based. Gait recognition is a method that uses the manner of human gait for identification. In recent years, many approaches to gait recognition have been developed and presented. In this work, a deep learning approach was developed based on a deep learning model and a gait recognition method called Gait Energy Image (GEI). The experiment was performed using the well-known Casia Dataset B, and the results were presented.
Keywords: Gait Recognition, Person Identification, Deep Learning, Gait Energy Image (GEI), Keras Model
1. INTRODUCTION
Person identification is important and widely used in various fields of human life. It is required in various security systems used by public organizations and private companies. When crossing a border or accessing a certain area of a company, person identification is also required. Nowadays, many cities in the world are equipped with cameras, and people can be tracked and identified in case of law violations and crimes.
Various methods are in use for person identification. These methods are usually based on human body characteristics and features derived from them. Accordingly, there are methods such as fingerprint recognition, methods based on eye elements (primarily the iris or retina), voice and speech recognition, keystroke dynamics, face recognition, gait recognition, etc. As can be concluded from the listed methods, some of them are more reliable than others. For example, methods based on the iris of the human eye are more reliable than methods based on keystroke dynamics. On the other hand, there are also methods that work at a greater distance and without interaction with the person to be identified. For example, in the case of fingerprint methods, the person must place a finger on a device for the features to be obtained. In the case of face or gait recognition, the features can be detected and extracted from a greater distance and without interaction with the person. This can be achieved by using a long-range RGB (Red, Green, Blue) camera or an RGB-D (Red, Green, Blue - Depth) device.
In the age of ubiquitous artificial intelligence and its rapid development, further methods for person identification are likely to be based largely on machine and deep learning. Many existing methods are already based on machine and deep learning. A person identification process based on artificial intelligence could be improved in terms of both speed of identification and reliability of identification.
In this paper, a deep learning approach to frontal gait recognition was analyzed and presented. Frontal gait recognition is needed in many areas of human life, especially in various security systems. In this type of identification, a person usually faces a camera or sensor. It is important to note that when a person is facing the camera or sensor, other human body parts (and features extracted from them), such as the face, can also be used. In this case, a kind of fusion between gait and face features can be realized and used. The well-known Gait Energy Image (GEI) [3] method and a deep learning model were used to realize the mentioned approach. An experiment was conducted on a known dataset with 99 subjects, and the results were presented.
2. A GAIT RECOGNITION APPROACH
Gait recognition is a method for person identification that exploits a person's gait patterns for the purpose of identification. In recent years, many methods have been developed and presented [1-5,8,9,11-16]. In general, there are two approaches to gait recognition. The mentioned approaches are model-based and appearance-based. The appearance-based approach is usually based on the silhouettes of the person, while the model-based approach is based on a defined model. In a defined model, various elements of the human body, such as the length of the arms or legs, can be used for identification.
The main advantage of gait recognition is that this type of identification can be performed without interaction with the person being identified. An identification process based on gait recognition can also be performed over a greater distance and without the person being aware that identification is underway. This makes gait recognition an interesting method for identifying people, especially in large spaces. An RGB camera or some kind of RGB-D device is usually sufficient for data acquisition, depending on the approach used. In the case of an appearance-based approach, for example, images in RGB format, but also depth images, can be used for silhouette extraction. In the past, there were many problems because RGB cameras, and especially RGB-D devices, were not as widely available as they are today. The price of such devices was also much higher, and their resolution was not up to today's level. Nowadays, such devices are widely used and offer state-of-the-art components, often with supporting software and tools, not infrequently based on artificial intelligence. This often facilitates image processing work, which is crucial in this field. An approach to gait recognition based on deep learning could be realized as depicted in Figure 1. As shown in Figure 1, the approach is divided into three parts and is based on a well-known gait recognition method called GEI. GEI is essentially an image containing the silhouettes of a person during a gait cycle.
The mentioned silhouettes are normalized, aligned, and temporally averaged [3]. Examples of GEI images from the Casia Dataset B [10,18,19] are shown in Figure 2. Other gait recognition methods may be used instead of GEI, such as the Backfilled Gait Energy Image (BGEI) [16]. The approach shown in Figure 1 can be modified to be based on gait recognition methods other than the aforementioned GEI or BGEI. In this study, the approach in Figure 1 was used, based on the GEI method.
So, the first part of the mentioned approach is the part where GEI images should be created. This means that silhouettes for all persons must be obtained first in order to create the GEI images for all persons. The person silhouettes can be obtained from RGB or depth images. In Figure 1, Image Acquisition refers to obtaining RGB or depth images with an RGB camera or some kind of RGB-D device.
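The GEI computation itself can be sketched concisely: given the binarized, size-normalized, and aligned silhouette frames of one gait cycle, the GEI is their pixel-wise temporal average [3]. The following is a minimal sketch in Python with NumPy; the function name and the (T, H, W) frame format are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def compute_gei(silhouettes):
    """Compute a Gait Energy Image from one gait cycle.

    silhouettes: array of shape (T, H, W) holding T binary
    silhouette frames (values 0 or 1), already size-normalized
    and aligned.
    Returns an (H, W) float array with values in [0, 1], where
    brighter pixels are covered by the silhouette in more frames.
    """
    frames = np.asarray(silhouettes, dtype=np.float64)
    if frames.ndim != 3:
        raise ValueError("expected a (T, H, W) stack of frames")
    # Temporal averaging over the gait cycle produces the GEI.
    return frames.mean(axis=0)
```

A pixel that belongs to the silhouette in every frame gets value 1, a pixel covered in half of the frames gets 0.5, and background pixels stay 0, which is exactly the averaged-silhouette image described above.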
The second part of the presented approach is the part where a deep learning model is to be built, trained and validated (in this case with the GEI images) and then used for the purpose of person identification. In this research, the TensorFlow platform [17] and Keras [6] were used to build the model, but other platforms can also be used.
The third part is the identification part. After the model is created, trained and validated with the GEI images and then stored, it can be used for identification. This means that RGB or depth images of a particular person should be obtained in real time, from a video or by other means. After that, the silhouettes should be extracted and a GEI image should be created for that particular person. Then the GEI image for the particular person will be passed to the model, and the model will classify the GEI image.
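The identification step above reduces to a single forward pass and an argmax over the per-subject scores. A minimal sketch follows; the `identify` function, the `model_predict` callable, and the `subject_ids` list are illustrative names, not the exact interface used in the paper:

```python
import numpy as np

def identify(model_predict, gei, subject_ids):
    """Classify one GEI image with a trained model.

    model_predict: callable returning per-class scores for a batch of
    GEI images (e.g. the predict method of a trained classifier).
    gei: (H, W) GEI array for the person to identify.
    subject_ids: list mapping class index to subject label.
    Returns (subject_id, score) for the top-scoring class.
    """
    # Add batch and channel dimensions: (H, W) -> (1, H, W, 1).
    batch = np.asarray(gei, dtype=np.float32)[np.newaxis, ..., np.newaxis]
    scores = np.asarray(model_predict(batch))[0]
    best = int(np.argmax(scores))
    return subject_ids[best], float(scores[best])
```

The same function works whether the scores are raw logits or softmax probabilities, since only the argmax matters for the predicted identity.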
3. EXPERIMENTAL SETUP
An experiment was performed using the specified approach (Figure 1). In the experiment, frontal gait recognition for person identification was investigated. The well-known Casia Dataset B [10,18,19] was used for the experimental evaluation. Casia Dataset B is a gait database of 124 subjects, captured from 11 viewpoints (e.g., 0, 18, 90 and 180 degrees), containing images (for this type of research, silhouette images and GEI images are available) in which subjects walk normally or with clothing and carrying condition changes [10]. For the experiment, 99 subjects with viewing angles of 0 and 18 degrees were used. Some examples of the GEI images used in the experiment are shown in Figure 3.
Images taken with a viewing angle of 0 degrees were used because the subject is facing the camera, which is appropriate for frontal gait recognition. Images taken with a viewing angle of 18 degrees were used for performance verification because the subject is slightly swiveled with respect to the camera.
The experiment was performed with four sets of GEI images. The first set of GEI images contains only the images taken with a viewing angle of 0 degrees. This means that for each of the 99 subjects there were 10 GEI images (6 GEI images with normal gait, 2 GEI images with carrying condition changes and 2 GEI images with clothing changes). Thus, there were 990 GEI images in total. The second set of GEI images contains the images taken at 0 and 18 degrees. In this case, there were 1980 GEI images, 20 GEI images for each of the 99 subjects (12 GEI images with normal gait, 4 GEI images with carrying condition changes and 4 GEI images with clothing changes).
The third set of GEI images contains only the images taken with a viewing angle of 0 degrees, and only the GEI images with the subject (i.e., the person) in normal gait (excluding the images with carrying condition and clothing changes). There were 6 GEI images for each of the 99 subjects. Thus, there were 594 GEI images. The fourth set of GEI images contains the GEI images taken with viewing angles of 0 and 18 degrees with the subject in normal gait (also excluding the images with carrying condition and clothing changes). In this case, there were 12 GEI images for each of the 99 subjects, for a total of 1188 GEI images.
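As a quick consistency check, the four set sizes follow directly from the per-subject image counts stated above (the set labels below are descriptive, not from the paper):

```python
# Per-subject GEI counts for the four experimental sets.
subjects = 99
per_subject = {
    "set 1 (0 deg, all conditions)": 10,
    "set 2 (0 and 18 deg, all conditions)": 20,
    "set 3 (0 deg, normal gait only)": 6,
    "set 4 (0 and 18 deg, normal gait only)": 12,
}
# Total images per set: 990, 1980, 594 and 1188 respectively.
totals = {name: subjects * count for name, count in per_subject.items()}
```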
The TensorFlow platform [17] and Keras [6] were used to create the model. To achieve this, the Keras Sequential model was used. The model consisted of a preprocessing layer, convolution layers, pooling layers, a reshaping layer and core layers. The sets of GEI images were divided in a ratio of 80 percent for training and 20 percent for validation. In the case of the first set, out of 990 images, 792 were used for training and 198 for validation. In the second set of GEI images (1980 GEI images), 1584 images were used for training and 396 images were used for validation. In the third set of 594 GEI images, 476 images were used for training and 118 for validation. The fourth set, with 1188 GEI images available, used 951 images for training and 237 for validation. The remaining settings were 30 epochs, and the Adaptive Moment Estimation (Adam) optimizer [7] was used.
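A model with the listed layer types could be sketched as follows with the Keras Sequential API. The concrete filter counts, kernel sizes, and input resolution are illustrative assumptions; the paper does not report them:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(num_subjects, input_shape=(128, 88, 1)):
    """Sequential CNN with preprocessing, convolution, pooling,
    reshaping (Flatten) and core (Dense) layers, mirroring the layer
    types listed in the text. Layer sizes are illustrative only."""
    model = tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),           # preprocessing layer
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),                      # reshaping layer
        layers.Dense(128, activation="relu"),  # core layers
        layers.Dense(num_subjects, activation="softmax"),
    ])
    model.compile(optimizer="adam",            # Adam optimizer [7]
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training then follows the described setup, e.g. `model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=30)` with the 80/20 split of the GEI images, where the final softmax layer has one unit per subject (99 in this experiment).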
4. RESULTS AND DISCUSSION
With the experiment defined and the settings described above, the following results were obtained. In the case of using the first set of GEI images (990 GEI images), the validation accuracy was 92.93%. In the case of the second set of GEI images, containing 1980 GEI images, the validation accuracy was 97.73%. In the case of the third set of GEI images containing 594 GEI images, the validation accuracy was 96.61%. The validation accuracy for the fourth set of GEI images (1188 GEI images) was 97.47%. The above results can be seen in Table 1 and Figure 4. Figures 5-8 show the training and validation accuracy for the GEI image sets used.
As shown in Table 1 and Figures 4-8, the highest validation accuracy of 97.73% was achieved when the second set of GEI images was used. That set contained the most images (1980 GEI images) of the subjects, with viewing angles of 0 and 18 degrees. The second highest validation accuracy of 97.47% was also obtained when using images with the same viewing angles (fourth set). That set was the second largest in terms of the number of GEI images (1188 GEI images). In the cases where the sets contained only GEI images with a viewing angle of 0 degrees, the validation accuracy was lower but exceeded 90% in both cases.
5. CONCLUSION
In this paper, an approach to frontal gait recognition based on deep learning was analyzed and described. Gait recognition is a promising method that uses a person's gait patterns for identification. The main advantage of gait recognition is the ability to work over a larger distance and without interaction with the person to be identified. Nowadays, two approaches are used for gait recognition: model-based and appearance-based. The appearance-based approach is usually based on the silhouette of the person, while the model-based approach is based on a defined model where different human features, such as the length of the arms or legs, can be used.
The research in this paper is based on the use of deep learning in gait recognition. To achieve this, a deep learning model was built using the TensorFlow platform and Keras, i.e., the Keras Sequential model. A well-known gait recognition method called Gait Energy Image (GEI) was used as the baseline method for gait recognition. An experiment was conducted using a well-known gait database called Casia Dataset B. The GEI images from this database were used, which are suitable for frontal gait recognition. The results obtained were promising, as the validation accuracy was above 90% in all cases.
References
[1] Arora, P. and Srivastava, S. (2015). Gait Recognition Using Gait Gaussian Image. In: 2nd International Conference on Signal Processing and Integrated Networks (SPIN), 791-794. IEEE.
[2] Chattopadhyay, P., Roy, A., Sural, S. and Mukhopadhyay, J. (2014). Pose Depth Volume Extraction from RGB-D Streams for Frontal Gait Recognition. Journal of Visual Communication and Image Representation, 25(1), 53-63. Elsevier.
[3] Han, J. and Bhanu, B. (2005). Individual Recognition Using Gait Energy Image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(2), 316-322. IEEE.
[4] Hofmann, M., Bachmann, S. and Rigoll, G. (2012). 2.5D Gait Biometrics Using the Depth Gradient Histogram Energy Image. In: 5th International Conference on Biometrics: Theory, Applications and Systems (BTAS), 399-403. IEEE.
[5] Iwashita, Y., Uchino, K. and Kurazume, R. (2013). Gait-based Person Identification Robust to Changes in Appearance. Sensors, 13(6), 7884-7901. MDPI.
[6] Keras. Link: https://keras.io/ [Accessed 05/6/2023]
[7] Kingma, D. P. and Ba, J. (2014). Adam: A Method for Stochastic Optimization. arXiv preprint arXiv: 1412.6980.
[8] Kumar, M. N. and Babu, R. V. (2012). Human Gait Recognition Using Depth Camera: A Covariance Based Approach. In: Proceedings of the 8th Indian Conference on Computer Vision, Graphics and Image Processing, 1-6.
[9] Lenac, K., Sušanj, D., Ramakić, A. and Pinčić, D. (2019). Extending Appearance Based Gait Recognition with Depth Data. Applied Sciences, 9(24), 5529. MDPI.
[10] Official Web Page of the Institute of Automation, Chinese Academy of Sciences. Link: http://www.cbsr.ia.ac.cn/english/Gait%20Databases.asp [Accessed 05/1/2023]
[11] Preis, J., Kessel, M., Werner, M. and Linnhoff-Popien, C. (2012). Gait Recognition with Kinect. In: 1st International Workshop on Kinect in Pervasive Computing, 1-4. New Castle, UK.
[12] Ramakić, A. and Bundalo, Z. (2023). Gait Recognition as an Approach for People Identification. In: International Symposium on Innovative and Interdisciplinary Applications of Advanced Technologies, 717-726. Springer.
[13] Ramakić, A., Bundalo, D. and Bundalo, Z. (2023). An Approach to Gait Recognition Using Deep Neural Network. Acta Technica Corviniensis - Bulletin of Engineering, 16(2), 1-6.
[14] Ramakić, A., Bundalo, Z. and Bundalo, D. (2020). A Method for Human Gait Recognition from Video Streams Using Silhouette, Height and Step Length. Journal of Circuits, Systems and Computers, 29(7), 2050101. World Scientific.
[15] Sivapalan, S., Chen, D., Denman, S., Sridharan, S. and Fookes, C. (2011). Gait Energy Volumes and Frontal Gait Recognition Using Depth Images. In: International Joint Conference on Biometrics (IJCB), 1-6. IEEE.
[16] Sivapalan, S., Chen, D., Denman, S., Sridharan, S. and Fookes, C. (2012). The Backfilled GEI - A Cross-capture Modality Gait Feature for Frontal and Side-view Gait Recognition. In: International Conference on Digital Image Computing Techniques and Applications (DICTA), 1-8. IEEE.
[17] Tensor Flow. Link: https://www.tensorflow.org/ [Accessed 05/6/2023]
[18] Yu, S., Tan, D. and Tan, T. (2006). A Framework for Evaluating the Effect of View Angle, Clothing and Carrying Condition on Gait Recognition. In: 18th International Conference on Pattern Recognition (ICPR), 441-444. IEEE.
[19] Zheng, S., Zhang, J., Huang, K., He, R. and Tan, T. (2011). Robust View Transformation Model for Gait Recognition. In: 18th International Conference on Image Processing, 2073-2076. IEEE.
© 2023. This work is published under http://annals.fih.upt.ro/index.html (the "License").
Affiliations
1 Technical Faculty, University of Bihać, Bihać, BOSNIA & HERZEGOVINA
2 Faculty of Electrical Engineering, University of Banja Luka, Banja Luka, BOSNIA & HERZEGOVINA
3 Faculty of Philosophy, University of Banja Luka, Banja Luka, BOSNIA & HERZEGOVINA
4 Faculty of Transport and Traffic Engineering, University of East Sarajevo, Doboj, BOSNIA & HERZEGOVINA