1. Introduction
Gait is a person’s pattern of walking, and the complete gait cycle is called a stride. A stride consists of two phases: swing and stance. Vision-based gait recognition is an emerging area of computer vision research due to its adaptability to low-resolution, remotely captured video data. Conventional biometric traits, including the face, iris, and fingerprint, require high-definition imagery for feature extraction; in contrast, surveillance videos are captured from a distance and without the consent or cooperation of the subject. While face, eye, and fingerprint biometrics can be obscured by face masks, oversized glasses, and gloves, gait biometric-based person identification remains the best option for visual surveillance in this scenario. Gait recognition for visual surveillance includes age group estimation [1,2,3], ethnicity classification [4], biometric identification [5,6,7], gender recognition [8,9,10], and suspect identification in forensics [11,12]. Moreover, gait biometrics are hard to spoof and impossible to hide [13,14,15], and gait is distinct enough for person identification, age group estimation, gender recognition, and prediction of ethnic affiliation [16,17,18].
Due to these advantages, researchers are investigating gait recognition-based person identification. The future of gait biometric-based visual surveillance relies on the robustness of vision-based gait recognition techniques to environment-related and subject-related constraints. The environment-related variables include illumination, scene depth (the distance between subject and camera), the viewing angle between subject and camera [19,20], the walking surface [21], static and dynamic occlusion [22,23], the spatial resolution of the camera, and noise [24,25]. The subject-related variables include clothing type [19,26], carried items [27,28] and shoes, walking speed [29,30,31], walking direction [32], age [1,2,33,34,35,36], gender [37,38], and physiological conditions. These variables are not controllable in real-world surveillance videos and must be addressed.
The proposed research contributes a vision-based gait recognition technique that is robust to appearance variance under different views. The spatio-temporal power spectral (STPS) gait features preserve the spatio-temporal pattern of the gait, and their adaptability to a quadratic support vector machine (SVM) yields significant gait recognition accuracy across different appearances and views. Section 2 discusses significant contributions to vision-based gait recognition and their limitations. Section 3 summarizes the methodology of the proposed work, Section 4 is dedicated to results and discussion, and Section 5 briefly indicates the future direction of the work.
2. Related Work
Several gait feature extraction techniques have been developed to address the challenge of appearance variance. These techniques include the gait entropy image [39,40,41,42], chrono gait image [43,44,45], frame difference energy image [46,47], and motion silhouette image [48,49].
Existing research primarily utilizes the gait energy image [39,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64]. The gait energy image (GEI) [50] is computed by aligning and size-normalizing the gait silhouette images of a gait cycle (sequence) and then averaging them. The resulting GEI is an intensity-based gait image that summarizes all key poses observed during the gait cycle, where the intensity of each pose reflects its frequency of occurrence [51].
Gait silhouettes and contours are utilized to extract frame-wise gait features such as angular displacement [65], skeletal movement, normal distance maps [66], and pose estimates [31,67,68,69]. The gait silhouette image has been adapted to analyze gait motion flow as a sequence of images [70,71], and in [72], the gait silhouette images of a complete gait cycle are combined into one “bulk motion gait silhouette image”. The contours of the gait silhouette have previously been adopted for gait recognition based on shape analysis [73,74], deformable active contours [75], normal distance maps [66], and posture and contour analysis [76]. The gait silhouette is also utilized as the primary gait feature in deep convolutional neural network-based gait recognition, including [77,78,79,80,81,82].
A few research contributions extract gait features from pixel-level motion estimation, referred to as optical flow [83]; such work includes [84,85,86,87]. The gait optical flow image has been further processed with LDA (linear discriminant analysis) and PCA (principal component analysis) for gait recognition robust to appearance and walking speed [88]. The gait flow image (GFI) is computed by applying optical flow to the images of the gait cycle, where the optical flow computed between two gait silhouette images reflects the relative motion of the human body. GFI-based gait recognition includes [86,89,90].
The effective adaptation of the GEI and the gait silhouette (GS) reflects their importance, but inherent limitations question their suitability for gait recognition robust to appearance variance. Both are spatial gait features that rely on the shape of the human body and are therefore highly affected by appearance and view variance. GEI-based techniques additionally collapse the spatio-temporal gait pattern into a single spatial pattern, retain redundant and trivial gait features, and depend on hand-crafted features. Using the gait silhouettes as an ordered sequence preserves the spatio-temporal pattern but exhibits high spatial variance under appearance change. In contrast, optical flow-based extraction of gait dynamics has low computational complexity and requires no prior body measurements, making it effective for gait feature extraction robust to appearance variance [84,85,91].
In our previous work, we developed dynamic gait features (DGF) that are robust to appearance variance. In this paper, we extend that work to retain the spatio-temporal pattern of the DGF and to evaluate their robustness to appearance and gradual view variance.
3. Materials and Method
STPS feature-based gait recognition extends our earlier experiments on the impact of appearance variance on gait recognition. We identified aspects of DGF-based gait recognition that needed improvement, including preserving spatio-temporal gait patterns, feature transformation, and evaluation under different appearances (clothing and carried items) and views.
3.1. Dataset
The proposed STPS feature-based gait recognition framework is evaluated on the locally collected South Asian Clothing Variance (SACV) gait dataset [92]. The STPS gait features are an extension of the DGF. The SACV gait dataset comprises multiple appearance-based gait subsets captured under gradual view variance; further details of the appearance use cases and views are given in Section 4, Table 1.
3.2. Spatio-Temporal Power Spectral Feature-Based Gait Recognition
The experiments conducted in [92] supported the adoption of the dynamic gait features, which were shown to resolve the high intra-class variance caused by appearance variance.
STPS feature-based gait recognition extends our DGF-based gait recognition technique, which is robust to appearance variance. Here, we utilize the DGF for STPS gait feature extraction and evaluate it on the SACV gait dataset. The pipeline takes greyscale gait video as input and extracts STPS gait features, followed by SVM-based gait recognition under different views and use cases.
The framework depicted in Figure 1 performs gait recognition in four steps. First, the gait data are preprocessed with an image differencing technique for foreground extraction, followed by gait cycle detection. Second, the dynamic gait features are extracted. Third, the dynamic gait features are processed to extract the spatio-temporal power spectral gait features. Fourth, the STPS gait features are fed to an SVM classifier for gait recognition.
3.2.1. Preprocessing
The preprocessing step consists of foreground extraction and gait cycle detection. Foreground extraction is performed with an image differencing technique [93], and the interval of the gait cycle is defined by successive foot-to-floor contacts of the same foot [24].
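A minimal sketch of this differencing step, assuming greyscale frames held as NumPy arrays; the threshold value is an illustrative assumption, not a parameter from the paper:

```python
import numpy as np

def extract_foreground(frame, background, threshold=25.0):
    """Binary silhouette: pixels whose difference from the background
    exceeds the threshold (value chosen for illustration only)."""
    diff = np.abs(frame.astype(np.float64) - background.astype(np.float64))
    return (diff > threshold).astype(np.uint8)
```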
We adopt temporal normalization so that a uniform number of gait images is analyzed for each subject in the dataset. Temporal normalization (a uniform number of gait images per gait cycle) is achieved by sub-sampling the spatial features, where the sub-sampling is based on the maximum information gained from consecutive images of the gait cycle. Spatial sampling means taking a subset of data drawn according to some specified rule and, based on this subset, making inferences about the spatial population from which the data were drawn [94].
Temporal normalization yields a uniform number of gait images in each gait cycle and reduces the total number of gait images, so temporal normalization and removal of redundant spatial features are achieved simultaneously. For this purpose, we extract the DGF of each class (subject) with uniform parameters (window size, search space), so that the DGF computed for subjects under different appearances lie in the same spatial coordinates.
Spatial normalization is performed by taking a subset of gait images according to the relative spatial displacement of the subject between two frames. In sub-sampling, gait images are down-sampled according to their significance for relative motion estimation between the poses of the gait cycle, as sketched below. A gait image is discarded if its spatial displacement (the magnitude of the two-directional motion vectors) is minimal and appears as point displacement rather than a vector field; thus, the proportion of relative motion observed determines which gait images are included for feature extraction. The spatially and temporally normalized gait data are then processed for gait cycle detection. Figure 2 depicts the gait cycle after preprocessing.
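The frame-selection rule can be sketched as follows. This hedged example uses OpenCV's Farnebäck optical flow as a stand-in for the paper's sub-pixel motion estimator, and the 0.1 cutoff ratio is an assumption:

```python
import numpy as np
import cv2

def subsample_cycle(frames, min_ratio=0.1):
    """Keep a frame only if its mean motion magnitude relative to the
    previous frame is a meaningful fraction of the cycle's maximum.
    frames: list of 8-bit greyscale images of one gait cycle."""
    magnitudes = []
    for prev, curr in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitudes.append(np.linalg.norm(flow, axis=2).mean())
    cutoff = min_ratio * max(magnitudes)
    # Frames whose displacement reads as point motion rather than a
    # vector field fall below the cutoff and are discarded.
    return [frames[0]] + [f for f, m in zip(frames[1:], magnitudes)
                          if m > cutoff]
```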
3.2.2. Dynamic Gait Feature Extraction
Dynamic gait feature extraction with sub-pixel motion estimation [95] is extended for spatio-temporal power spectral feature extraction. Figure 3 shows the DGF computed for consecutive steps of the gait cycle. The adaptation of sub-pixel motion estimation is given in Equations (1) and (2), and Equation (3) represents the DGF computation for different views and appearances.
(1)
(2)
(3)
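Since the equations above are reproduced only by number, the following hedged sketch illustrates block-wise sub-pixel motion estimation in the spirit of [95]; it uses scikit-image's phase correlation rather than the authors' exact estimator, and the block size is an assumption:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def dynamic_gait_features(prev_frame, curr_frame, block=16):
    """Per-block sub-pixel motion vectors between two gait images."""
    h, w = prev_frame.shape
    vectors = np.zeros((h // block, w // block, 2))
    for i in range(h // block):
        for j in range(w // block):
            ref = prev_frame[i*block:(i+1)*block, j*block:(j+1)*block]
            mov = curr_frame[i*block:(i+1)*block, j*block:(j+1)*block]
            # upsample_factor=10 resolves shifts to 1/10 of a pixel
            shift, _, _ = phase_cross_correlation(ref, mov,
                                                  upsample_factor=10)
            vectors[i, j] = shift
    return vectors
```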
3.2.3. Spatio-Temporal Power Spectral Feature Extraction
The STPS gait features are developed by processing the DGF with a HOG operator, power spectral analysis, and principal component analysis. We evaluate the STPS gait features for robustness against view and appearance variance; thus, the gait data of each subject under all appearances (use cases) and viewing angles (views) are considered.
HOG Computation of the Dynamic Gait Features
The dynamic gait features extracted through sub-pixel motion estimation of consecutive gait images are further analyzed to extract the significant flow of gait features across the complete gait cycle. The DGF are drawn as motion vectors, whose length indicates the magnitude of the optical flow computed in that region; the histogram of oriented gradients (HOG) therefore provides a localized summary of the dynamic gait features. HOG computation involves gradient computation and gradient orientation computation. For gradient computation, gradient operator filters of size 11 × 11 are utilized; this choice is based on heuristics and on visualizing the results obtained with different cell sizes during data analysis. Gradient computation is followed by orientation computation (0–180°) based on majority voting. The HOG features of the DGF-based images are computed to retain the two-dimensional motion vectors with significant directional displacement; for this purpose, a bin size of 9 with a cell size of 10 × 10 is utilized to optimize the gait dynamics in terms of HOG features.
The computation of the HOG features is given in Equations (4)–(6): Equation (4) describes the gradient magnitude, and Equations (5) and (6) summarize the orientation computation. The cell size for gradient computation was set to 11 × 11.
(4)
(5)
(6)
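A hedged sketch of this step with scikit-image, using the 9 orientation bins and 10 × 10 cells stated above; the block layout is an assumption, since the paper does not report it:

```python
from skimage.feature import hog

def hog_of_dgf(dgf_image):
    """HOG descriptor of one DGF image (2-D greyscale array)."""
    return hog(dgf_image,
               orientations=9,           # bins over 0-180 degrees
               pixels_per_cell=(10, 10),
               cells_per_block=(1, 1),   # assumed block layout
               feature_vector=True)
```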
The output of applying HOG to the DGF is a sequence of consecutive gait images that spatially represents the gait dynamics but is still high-dimensional; it needs to be composed into a time-varying signal that serves as the gait signature. For this purpose, we apply power spectral density estimation to the HOG of the DGF. Figure 4 presents the visual HOG descriptors of the DGF, and Figure 5 depicts the intensity-based HOG descriptor of the DGF, with the number of DGF motion vectors in each bin along the y-axis and the bins along the x-axis.
Power Spectral Density Analysis of Gait Features
Power spectral density (PSD) estimation transforms the time-varying spatial features into frequency spectra. As the HOG of the DGF is computed for consecutive gait images, the PSD is computed over the entire sequence of gait images representing the gait cycle; the PSD estimation of the stationary single-image gait data is performed with the Fast Fourier Transform (FFT). Transforming the spatial gait features into the frequency domain has a two-fold benefit: it identifies the frequency spectra (bands) of the significant spatial features, and it reduces the feature dimensionality. Figure 6 depicts the spectral features obtained by applying the FFT to the HOG response, with the PSD of the gait features along the y-axis and the normalized frequency along the x-axis.
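A minimal sketch of this step with SciPy's FFT-based periodogram, treating the concatenated HOG features of one cycle as the time-varying signal; the unit sampling rate is an assumption. (For reference, a one-sided spectrum of an n-point signal has n/2 + 1 points.)

```python
from scipy.signal import periodogram

def psd_of_gait_signature(hog_signature):
    """hog_signature: 1-D array of HOG features over the gait cycle.
    Returns the one-sided power spectral density and its frequencies."""
    freqs, psd = periodogram(hog_signature, fs=1.0)
    return freqs, psd
```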
Applying PSD estimation transforms the spatial DGF into frequency-based gait features, which are further processed for dimensionality reduction with principal component analysis. The dimensions of the gait data from the input image to the STPS gait signature are listed below. The input gait image and the DGF image are matrices holding the pixel values and the response of sub-pixel motion estimation, respectively. The HOG features and the HOG gait signature denote the per-image feature vector and the compilation of the feature matrices over the complete gait cycle; the PSD features and the PSD signature are defined analogously. The STPS gait signature is the feature matrix fed to the SVM classification model.
- Input image: I = [1200 × 451 × 3]
- DGF image: DGFI = [656 × 875 × 1]
- HOG features: Hog = [1 × 71,928]
- HOG features (gait signature): HOG = [1 × 359,640]
- PSD features: PSD = [81,938 × 1]
- PSD features (gait signature): PSD = [65,537 × 1]
- STPS features (gait signature): STPS = [65,537 × 5]
Principal Component Analysis
The principal component analysis (PCA) of the PSD gait features is performed to preserve the eigen-features and reduce dimensionality. PCA-based feature extraction has previously been applied to the GEI [56,96,97,98,99], as has eigenvalue-based SVD factorization [59,100]. We transform the PSD gait feature matrix into a principal component matrix and apply the SVM classification model for gait recognition. The HOG computation highlights the dominant areas of the DGF (areas of dense motion) for the power spectral analysis, and the visual analysis of the frequency spectra in Figures 5 and 6 indicates that the spread of meaningful STPS gait features can be transformed into a lower-dimensional feature space. Thus, the PCA of the STPS features extracts the eigen-matrix of the STPS features.
The PCA of the STPS gait features indicated that 96.6% of the variance lies in 75% of the features; we therefore use 75% of the frequency-based STPS gait features for the SVM-based classification aimed at gait recognition.
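A hedged sketch of this reduction with scikit-learn, keeping 75% of the feature dimensions as reported above; the row-per-signature data layout is an assumption:

```python
from sklearn.decomposition import PCA

def reduce_stps(stps_matrix):
    """stps_matrix: (n_signatures, n_features) array of STPS features."""
    n_keep = int(0.75 * stps_matrix.shape[1])
    n_keep = min(n_keep, min(stps_matrix.shape))  # PCA rank constraint
    pca = PCA(n_components=n_keep)
    return pca.fit_transform(stps_matrix), pca
```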
3.2.4. Gait Recognition with Support Vector Machine
A quadratic SVM based on cross-correlation is used for the classification. An SVM performs classification by defining decision boundaries between two classes, and the hyperplanes of the quadratic SVM kernel play a role comparable to the hyperparameters of a deep neural network; related techniques are used in [101,102,103,104] for gait recognition with deep learning approaches. The quadratic-kernel SVM is adapted for multiclass classification as described in Equation (7), where W represents the weight vector, X the input data points, and b the bias. Equations (8) and (9) define the decision boundaries for each class in binary SVM classification, and Equation (10) gives the quadratic SVM. The quadratic SVM kernel with a one-versus-one classification scheme is adopted for the STPS-based gait recognition. The dimensionality of the initial input image is 1200 × 451 × 3; after preprocessing and DGF extraction, the resulting image is 656 × 875 × 1. If the total number of images in one gait cycle is “n”, the number of DGF images is “n − 1”. The number of use cases (appearances) and the number of subjects are denoted “k” and “s”, respectively.
(7)
(8)
(9)
In the case of a quadratic SVM, the equation becomes
(10)
where a denotes the classes and s the features; the weight factor W is a polynomial whose degree depends on the kernel order, which is 2 for a quadratic SVM.
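A minimal sketch of the classifier with scikit-learn, whose SVC trains one-versus-one models for the multiclass case; the regularisation parameter C and the kernel constant coef0 are assumptions:

```python
from sklearn.svm import SVC

def train_quadratic_svm(X_train, y_train):
    """X_train: STPS feature matrix; y_train: subject labels."""
    clf = SVC(kernel="poly", degree=2,   # quadratic polynomial kernel
              coef0=1.0, C=1.0)          # assumed values
    clf.fit(X_train, y_train)
    return clf
```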
4. Results and Discussion
The proposed STPS feature and SVM classifier-based gait recognition is performed on the SACV gait data collected under three views and four appearance use cases. The accuracy achieved in the training phase with the 70/30 validation scheme is summarized in Figure 7, and the accuracy of the proposed technique in the testing phase is summarized in Table 1.
4.1. Results
“STPS features and SVM classification model”-based gait recognition is evaluated on a locally collected gait dataset that captures the significant shape variance caused by appearance variance. The experiments cover all four use cases of the SACV gait dataset captured under view 1 (45°), view 2 (90°), and view 3 (135°). The accuracy achieved in the four use cases under each view is given in Table 1, and Figure 7 depicts the confusion matrices of the training phase for use cases 1–4 and views 1–3.
4.2. Discussion
We evaluate the STPS gait feature-based gait recognition on the combinations of all three views and the use cases listed in Table 2. The results for the different combinations of views and use cases are expressed as three sets of comparative analyses: the first set covers use cases 1, 2, and 3 under view 1 (45°); the second set, the same use cases under view 2 (90°); and the third set, the same use cases under view 3 (135°). The evaluation across the three views and three appearance use cases indicates that the STPS gait features are robust to view and appearance variance and outperform gait silhouette-based techniques [68,77,105,106,107,108]. Figure 8 compares the spatio-temporal power spectral gait feature and support vector machine-based gait recognition with existing work under all three views and use cases. The accuracy achieved by the STPS gait recognition under the different views and use cases is significant compared with existing work, whereas the accuracy of the gait silhouette and DCNN-based gait recognition [108] declines in use cases 2 and 3 of views 1–3.
It is evident from Table 2 that the existing techniques achieve good accuracy for use case 1 under views 1–3 but that this accuracy declines in use cases 2 and 3, while the STPS gait features with the SVM classification model maintain consistent accuracy across all three views and use cases. Gait recognition based on the STPS features also achieves consistent accuracy in use case 4 under views 1–3, as shown in Table 1.
In [82], a set of three gait silhouette images is used as the gait feature for LSTM-CNN-based gait recognition; using the gait silhouettes as a set outperforms GEI-based gait recognition in rank-one accuracy.
Adopting the STPS gait features with the SVM yields good accuracy in all use cases and views. These results encourage us to analyze their adaptability to classical neural network-based gait recognition; for this purpose, we consider the STPS gait features of use case 1 at 90°. In existing work, CNN-based gait recognition is mainly addressed with deep CNNs and spatial features. The adaptability of the STPS features to neural network-based gait recognition is analyzed with different numbers of hidden layers, neurons, iterations, and activation functions. We achieved 98.4% accuracy with 10 neurons and 100 iterations, at a time cost of 3.65 s and a computation rate of 82,000 observations per second.
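A hedged sketch of this shallow-network experiment with scikit-learn; the solver and activation are assumptions, since the paper reports only the neuron count and iteration budget:

```python
from sklearn.neural_network import MLPClassifier

def train_shallow_mlp(X_train, y_train):
    """One hidden layer of 10 neurons, trained for up to 100 iterations."""
    mlp = MLPClassifier(hidden_layer_sizes=(10,),
                        max_iter=100,
                        activation="relu",   # assumed activation
                        random_state=0)
    mlp.fit(X_train, y_train)
    return mlp
```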
5. Conclusions
The proposed work addresses the challenges of vision-based gait recognition, including appearance variance and view variance. The existing state-of-the-art work utilizes spatial features, including the GEI and the gait silhouette. These spatial features are strongly affected by the significant spatial variance caused by varying clothing and carried items, which introduce challenges such as reduced visibility of the lower limbs, changes in the subject’s body shape, and dynamic noise caused by the back-and-forth motion of carried bags.
We addressed the problem of significant spatial variance across clothing and carried items by introducing sub-pixel motion estimation-based gait features named dynamic gait features. In this paper, we extended DGF-based feature extraction to the STPS gait features, which retain the spatio-temporal pattern of the gait features and are robust to appearance variance across different viewing angles. In the future, we will adapt the STPS gait features to neural network-based gait recognition and to fusion with spatial features.
Author Contributions: Conceptualization, H.M. and H.F.; methodology, H.M.; software, H.M.; validation, H.M.; formal analysis, H.M.; investigation, H.M.; resources, H.M.; data curation, H.M.; writing—original draft preparation, H.M.; writing—review and editing, H.M. and H.F.; visualization, H.M.; supervision, H.F.; project administration, H.F. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement: The dataset is available on request.
Conflicts of Interest: The authors declare no conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 1. The framework of STPS Gait Features and SVM Classification Based Gait Recognition.
Figure 7. STPS Gait Recognition Results for Use Cases 1, 2, 3, and 4 and Views 1, 2, and 3.
Table 1. Summary of STPS Gait Recognition Results for Use Cases and Views.
|  | View 1 | View 2 | View 3 |
| --- | --- | --- | --- |
| Use Case 1 | 98.7% | 99.87% | 99.5% |
| Use Case 2 | 98% | 99% | 99.27% |
| Use Case 3 | 99.4% | 97.15% | 99% |
| Use Case 4 | 99% | 99.3% | 99.35% |
Table 2. Comparison of the Presented Work with Existing Work under All Three Views and Use Cases.
| Research Work and Methodology | View 1, UC 1 | View 1, UC 2 | View 1, UC 3 | View 2, UC 1 | View 2, UC 2 | View 2, UC 3 | View 3, UC 1 | View 3, UC 2 | View 3, UC 3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GS + GaitSet [77] | 96.9% | 88.8% | 77.3% | 91.7% | 81% | 70.1% | 97.8% | 90% | 73.5% |
| Pose + LSTM [68] | 96.7% | 76.6% | 61.29% | 97.6% | 70.2% | 56.5% | 94.35% | 69.35% | 54.84% |
| GS + GLconv [105] | 97.9% | 95.5% | 87.1% | 95.4% | 89.3% | 79% | 98.9% | 96.5% | 87% |
| 3DCNN [108] | 99.3% | 97.5% | 89.2% | 96% | 91.7% | 80.5% | 99.1% | 96.5% | 84.3% |
| CSTL [106] | 98.4% | 96% | 87.2% | 95.2% | 90.5% | 81.5% | 98.9% | 96.8% | 88.4% |
| MSGG [107] | 99.3% | 97.6% | 93.8% | 97.5% | 91.6% | 89.4% | 99.1% | 96.6% | 93.8% |
| STPS (Ours) | 98.7% | 98% | 99.4% | 99.87% | 99% | 97.15% | 99.5% | 99.27% | 99% |
References
1. Xu, C.; Makihara, Y.; Ogi, G.; Li, X.; Yagi, Y.; Lu, J. The ou-isir gait database comprising the large population dataset with age and performance evaluation of age estimation. IPSJ Trans. Comput. Vis. Appl.; 2017; 9, 24. [DOI: https://dx.doi.org/10.1186/s41074-017-0035-2]
2. Li, X.; Makihara, Y.; Xu, C.; Yagi, Y.; Ren, M. Gait-based human age estimation using age group-dependent manifold learning and regression. Multimed. Tools Appl.; 2018; 77, pp. 28333-28354. [DOI: https://dx.doi.org/10.1007/s11042-018-6049-7]
3. Sakata, A.; Takemura, N.; Yagi, Y. Gait-based age estimation using multi-stage convolutional neural network. IPSJ Trans. Comput. Vis. Appl.; 2019; 11, 4. [DOI: https://dx.doi.org/10.1186/s41074-019-0054-2]
4. Zhang, D.; Wang, Y.; Bhanu, B. Ethnicity Classification Based on Gait Using Multi-View Fusion. Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops; San Francisco, CA, USA, 13–18 June 2010.
5. Masood, H.; Farooq, H. A Proposed Framework for Vision Based Gait Biometric System against Spoofing Attacks. Proceedings of the 2017 International Conference on Communication, Computing and Digital Systems (C-CODE); Islamabad, Pakistan, 8–9 March 2017.
6. Rida, I.; Almaadeed, N.; Almaadeed, S. Robust gait recognition: A comprehensive survey. IET Biom.; 2018; 8, pp. 14-28. [DOI: https://dx.doi.org/10.1049/iet-bmt.2018.5063]
7. Bouchrika, I. A Survey of Using Biometrics for Smart Visual Surveillance: Gait Recognition. Surveillance in Action; Springer: Cham, Switzerland, 2018; pp. 3-23.
8. Liu, T.; Ye, X.; Sun, B. Combining Convolutional Neural Network and Support Vector Machine for Gait-Based Gender Recognition. Proceedings of the 2018 Chinese Automation Congress (CAC); Xi’an, China, 30 November 2018.
9. Kitchat, K.; Khamsemanan, N.; Nattee, C. Gender Classification from Gait Silhouette Using Observation Angle-Based Geis. Proceedings of the 2019 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM); Bangkok, Thailand, 18–20 November 2019.
10. Isaac, E.R.; Elias, S.; Rajagopalan, S.; Easwarakumar, K. Multiview gait-based gender classification through pose-based voting. Pattern Recognit. Lett.; 2019; 126, pp. 41-50. [DOI: https://dx.doi.org/10.1016/j.patrec.2018.04.020]
11. Bouchrika, I.; Carter, J.N.; Nixon, M.S. Towards automated visual surveillance using gait for identity recognition and tracking across multiple non-intersecting cameras. Multimed. Tools Appl.; 2016; 75, pp. 1201-1221. [DOI: https://dx.doi.org/10.1007/s11042-014-2364-9]
12. van Mastrigt, N.M.; Celie, K.; Mieremet, A.L.; Ruifrok, A.C.; Geradts, Z. Critical review of the use and scientific basis of forensic gait analysis. Forensic Sci. Res.; 2018; 3, pp. 183-193. [DOI: https://dx.doi.org/10.1080/20961790.2018.1503579]
13. Hadid, A.; Ghahramani, M.; Kellokumpu, V.; Pietikäinen, M.; Bustard, J.; Nixon, M. Can Gait Biometrics Be Spoofed? Proceedings of the 21st International Conference on Pattern Recognition (ICPR 2012); Tsukuba Science City, Japan, 11 November 2012.
14. Hadid, A.; Ghahramani, M.; Bustard, J.; Nixon, M. Improving Gait Biometrics under Spoofing Attacks. Proceedings of the International Conference on Image Analysis and Processing; Naples, Italy, 9–13 September 2013; Springer: Berlin, Germany, 2013.
15. Jia, M.; Yang, H.; Huang, D.; Wang, Y. Attacking Gait Recognition Systems via Silhouette Guided GANs. Proceedings of the 27th ACM International Conference on Multimedia; Nice, France, 15 October 2019.
16. Yang, T.; Zeng, Z.; Chen, X. Gait Recognition Robust to Dress and Carrying Using Multi-Link Gravity Center Track. Proceedings of the 2015 IEEE International Conference on Information and Automation; Beijing, China, 8 August 2015.
17. Ng, H.; Tan, W.-H.; Abdullah, J.; Tong, H.-L. Development of vision based multiview gait recognition system with MMUGait database. Sci. World J.; 2014; 2014, 376569. [DOI: https://dx.doi.org/10.1155/2014/376569]
18. Towheed, M.A.; Kiyani, W.; Ummar, M.; Shanableh, T.; Dhou, S. Motion-Based Gait Recognition for Recognizing People in Traditional Gulf Clothing. Proceedings of the 2019 IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA); Abu Dhabi, United Arab Emirates, 3 November 2019.
19. Yu, S.; Tan, D.; Tan, T. Modelling the Effect of View Angle Variation on Appearance-Based Gait Recognition. Proceedings of the Asian Conference on Computer Vision; Hyderabad, India, 13–16 January 2006; Springer: Berlin, Germany, 2006.
20. Takemura, N.; Makihara, Y.; Muramatsu, D.; Echigo, T.; Yagi, Y. Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition. IPSJ Trans. Comput. Vis. Appl.; 2018; 10, pp. 1-14. [DOI: https://dx.doi.org/10.1186/s41074-018-0039-6]
21. Phillips, P.J.; Sarkar, S.; Robledo, I.; Grother, P.; Bowyer, K. The Gait Identification Challenge Problem: Data Sets and Baseline Algorithm. Proceedings of the 16th International Conference on Pattern Recognition (ICPR’02) 2002; Quebec City, Canada, 11 August 2002; Volume 1.
22. Hofmann, M.; Sural, S.; Rigoll, G. Gait Recognition in The Presence of Occlusion: A New Dataset and Baseline Algorithms. Proceedings of the 19th International Conference on Computer Graphics, Visualization and Computer Vision (WSCG); Plzen, Czech Republic, 31 January 2011.
23. Uddin, M.Z.; Muramatsu, D.; Takemura, N.; Ahad, M.A.R.; Yagi, Y. Spatio-temporal silhouette sequence reconstruction for gait recognition against occlusion. IPSJ Trans. Comput. Vis. Appl.; 2019; 11, pp. 1-18. [DOI: https://dx.doi.org/10.1186/s41074-019-0061-3]
24. Singh, J.P.; Jain, S.; Arora, S.; Singh, U.P. Vision-based gait recognition: A survey. IEEE Access.; 2018; 6, pp. 70497-70527. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2879896]
25. Makihara, Y.; Nixon, M.S.; Yagi, Y. Gait Recognition: Databases, Representations, and Applications. Computer Vision: A Reference Guide; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1-13.
26. Iwama, H.; Okumura, M.; Makihara, Y.; Yagi, Y. The ou-isir gait database comprising the large population dataset and performance evaluation of gait recognition. IEEE Trans. Inf. Forensics Secur.; 2012; 7, pp. 1511-1521. [DOI: https://dx.doi.org/10.1109/TIFS.2012.2204253]
27. Gross, R.; Shi, J. The Cmu Motion of Body (Mobo) Database; Princeton University Press: Princeton, NJ, USA, 2001.
28. Uddin, M.Z.; Ngo, T.T.; Makihara, Y.; Takemura, N.; Li, X.; Muramatsu, D.; Yagi, Y. The ou-isir large population gait database with real-life carried object and its performance evaluation. IPSJ Trans. Comput. Vis. Appl.; 2018; 10, pp. 1-11. [DOI: https://dx.doi.org/10.1186/s41074-018-0041-z]
29. Makihara, Y.; Mannami, H.; Tsuji, A.; Hossain, M.A.; Sugiura, K.; Mori, A.; Yagi, Y. The OU-ISIR gait database comprising the treadmill dataset. IPSJ Trans. Comput. Vis. Appl.; 2012; 4, pp. 53-62. [DOI: https://dx.doi.org/10.2197/ipsjtcva.4.53]
30. Xu, C.; Makihara, Y.; Li, X.; Yagi, Y.; Lu, J. Speed-invariant gait recognition using single-support gait energy image. Multimed. Tools Appl.; 2019; 78, pp. 26509-26536. [DOI: https://dx.doi.org/10.1007/s11042-019-7712-3]
31. Semwal, V.B.; Mazumdar, A.; Jha, A.; Gaud, N.; Bijalwan, V. Speed, Cloth and Pose Invariant Gait Recognition-Based Person Identification. Machine Learning: Theoretical Foundations and Practical Applications; Springer: Singapore, 2021; pp. 39-56.
32. Verlekar, T.T.; Correia, P.L.; Soares, L.D. View-invariant gait recognition system using a gait energy image decomposition method. IET Biom.; 2017; 6, pp. 299-306. [DOI: https://dx.doi.org/10.1049/iet-bmt.2016.0118]
33. Lu, J.; Tan, Y.-P. Gait-based human age estimation. IEEE Trans. Inf. Forensics Secur.; 2010; 5, pp. 761-770. [DOI: https://dx.doi.org/10.1109/TIFS.2010.2069560]
34. Makihara, Y.; Okumura, M.; Iwama, H.; Yagi, Y. Gait-Based Age Estimation Using a Whole-Generation Gait Database. Proceedings of the 2011 International Joint Conference on Biometrics (IJCB); Washington, DC, USA, 11 October 2011.
35. Chuen, B.K.Y.; Connie, T.; Song, O.T.; Goh, M. A Preliminary Study of Gait-Based Age Estimation Techniques. Proceedings of the 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA); Hong Kong, China, 16–19 December 2015.
36. Hong, J. Human Gait Identification and Analysis. Ph.D. Thesis; Brunel University School of Engineering and Design: London, UK, 2012.
37. Sudha, L.; Bhavani, R. An efficient spatio-temporal gait representation for gender classification. Appl. Artif. Intell.; 2013; 27, pp. 62-75. [DOI: https://dx.doi.org/10.1080/08839514.2013.747373]
38. Hassan, O.M.S.; Abdulazeez, A.M.; TİRYAKİ, V.M. Gait-Based Human Gender Classification Using Lifting 5/3 Wavelet and Principal Component Analysis. Proceedings of the 2018 International Conference on Advanced Science and Engineering (ICOASE); Duhok, Iraq, 9 October 2018.
39. Bashir, K.; Xiang, T.; Gong, S. Cross View Gait Recognition Using Correlation Strength. Proceedings of the British Machine Vision Conference, BMVC; Aberystwyth, UK, 31 August 2010.
40. Bashir, K.; Xiang, T.; Gong, S. Gait Recognition Using Gait Entropy Image. Proceedings of the 3rd International Conference on Imaging for Crime Detection and Prevention (ICDP 2009); London, UK, 3 December 2009.
41. Jeevan, M.; Jain, N.; Hanmandlu, M.; Chetty, G. Gait Recognition Based on Gait Pal and Pal Entropy Image. Proceedings of the 2013 IEEE International Conference on Image Processing; Melbourne, VIC, Australia, 15–18 September 2013.
42. Rokanujjaman, M.; Islam, M.S.; Hossain, M.A.; Islam, M.R.; Makihara, Y.; Yagi, Y. Effective part-based gait identification using frequency-domain gait entropy features. Multimed. Tools Appl.; 2015; 74, pp. 3099-3120. [DOI: https://dx.doi.org/10.1007/s11042-013-1770-8]
43. Wang, C.; Zhang, J.; Pu, J.; Yuan, X.; Wang, L. Chrono-Gait Image: A Novel Temporal Template for Gait Recognition. Proceedings of the European Conference on Computer Vision; Crete, Greece, 5–11 September 2010; Springer: Berlin/Heidelberg, Germany, 2010.
44. Wang, C.; Zhang, J.; Wang, L.; Pu, J.; Yuan, X. Human identification using temporal information preserving gait template. IEEE Trans. Pattern Anal. Mach. Intell.; 2011; 34, pp. 2164-2176. [DOI: https://dx.doi.org/10.1109/TPAMI.2011.260]
45. Liu, Y.; Zhang, J.; Wang, C.; Wang, L. Multiple Hog Templates for Gait Recognition. Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012); Tsukuba, Japan, 11–15 November 2012.
46. Chen, C.; Liang, J.; Zhao, H.; Hu, H.; Tian, J. Factorial HMM and parallel HMM for gait recognition. IEEE Trans. Syst. Man Cybern. Part C; 2008; 39, pp. 114-123. [DOI: https://dx.doi.org/10.1109/TSMCC.2008.2001716]
47. Chen, C.; Liang, J.; Zhao, H.; Hu, H.; Tian, J. Frame difference energy image for gait recognition with incomplete silhouettes. Pattern Recognit. Lett.; 2009; 30, pp. 977-984. [DOI: https://dx.doi.org/10.1016/j.patrec.2009.04.012]
48. Lam, T.H.; Lee, R.S. A New Representation for Human Gait Recognition: Motion Silhouettes Image (MSI). Proceedings of the International Conference on Biometrics; Hong Kong, China, 5–7 January 2006; Springer: Berlin, Germany, 2006.
49. Lee, H.; Hong, S.; Nizami, I.F.; Kim, E. A noise robust gait representation: Motion energy image. Int. J. Control. Autom. Syst.; 2009; 7, pp. 638-643. [DOI: https://dx.doi.org/10.1007/s12555-009-0414-2]
50. Kusakunniran, W.; Wu, Q.; Li, H.; Zhang, J. Multiple Views Gait Recognition Using View Transformation Model Based on Optimized Gait Energy Image. Proceedings of the 2009 IEEE 12th International Conference on Computer Vision Workshops; Kyoto, Japan, 27 September 2009.
51. Han, J.; Bhanu, B. Individual recognition using gait energy image. IEEE Trans. Pattern Anal. Mach. Intell.; 2005; 28, pp. 316-322. [DOI: https://dx.doi.org/10.1109/TPAMI.2006.38]
52. Kusakunniran, W.; Wu, Q.; Zhang, J.; Li, H. Support Vector Regression for Multi-View Gait Recognition Based on Local Motion Feature Selection. Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition; San Francisco, CA, USA, 13–18 June 2010.
53. Zheng, S.; Zhang, J.; Huang, K.; He, R.; Tan, T. Robust view transformation model for gait recognition. Proceedings of the 2011 18th IEEE International Conference on Image Processing; Brussels, Belgium, 11–14 September 2011.
54. Yang, X.; Zhou, Y.; Zhang, T.; Shu, G.; Yang, J. Gait recognition based on dynamic region analysis. Signal Processing; 2008; 88, pp. 2350-2356. [DOI: https://dx.doi.org/10.1016/j.sigpro.2008.03.006]
55. Abdullah, B.A.; El-Alfy, E.S.M. Statistical Gabor-Based Gait Recognition Using Region-Level Analysis. Signal Processing; 2008; 88, pp. 2350-2356.
56. Wang, X.; Wang, J.; Yan, K. Gait recognition based on Gabor wavelets and (2D) 2 PCA. Multimed. Tools Appl.; 2018; 77, pp. 12545-12561. [DOI: https://dx.doi.org/10.1007/s11042-017-4903-7]
57. Jia, N.; Sanchez, V.; Li, C.T. On view-invariant gait recognition: A feature selection solution. IET Biom.; 2018; 7, pp. 287-295. [DOI: https://dx.doi.org/10.1049/iet-bmt.2017.0151]
58. Choudhury, S.D.; Tjahjadi, T. Robust view-invariant multiscale gait recognition. Pattern Recognit.; 2015; 48, pp. 798-811. [DOI: https://dx.doi.org/10.1016/j.patcog.2014.09.022]
59. Xing, X.; Wang, K.; Yan, T.; Lv, Z. Complete canonical correlation analysis with application to multi-view gait recognition. Pattern Recognit.; 2016; 50, pp. 107-117. [DOI: https://dx.doi.org/10.1016/j.patcog.2015.08.011]
60. Alvarez, I.R.T.; Sahonero-Alvarez, G. Gait Recognition Based on Modified Gait Energy Image. Proceedings of the 2018 IEEE Sciences and Humanities International Research Conference (SHIRCON); Lima, Peru, 20–22 November 2018.
61. Rida, I. Towards human body-part learning for model-free gait recognition. arXiv; 2019; arXiv: 1904.01620
62. Wu, Z.; Huang, Y.; Wang, L.; Wang, X.; Tan, T. A comprehensive study on cross-view gait based human identification with deep cnns. IEEE Trans. Pattern Anal. Mach. Intell.; 2017; 39, pp. 209-226. [DOI: https://dx.doi.org/10.1109/TPAMI.2016.2545669] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27019478]
63. Yu, S.; Chen, H.; Wang, Q.; Shen, L.; Huang, Y. Invariant feature extraction for gait recognition using only one uniform model. Neurocomputing; 2017; 239, pp. 81-93. [DOI: https://dx.doi.org/10.1016/j.neucom.2017.02.006]
64. Elharrouss, O.; Almaadeed, N.; Al-Maadeed, S.; Bouridane, A. Gait recognition for person re-identification. J. Supercomput.; 2021; 77, pp. 3653-3672. [DOI: https://dx.doi.org/10.1007/s11227-020-03409-5]
65. Lu, H.; Plataniotis, K.N.; Venetsanopoulos, A.N. A full-body layered deformable model for automatic model-based gait recognition. EURASIP J. Adv. Signal Processing; 2007; 2008, pp. 1-13. [DOI: https://dx.doi.org/10.1155/2008/261317]
66. El-Alfy, H.; Mitsugami, I.; Yagi, Y. Gait recognition based on normal distance maps. IEEE Trans. Cybern.; 2017; 48, pp. 1526-1539. [DOI: https://dx.doi.org/10.1109/TCYB.2017.2705799]
67. Sokolova, A.; Konushin, A. Pose-based deep gait recognition. IET Biom.; 2019; 8, pp. 134-143. [DOI: https://dx.doi.org/10.1049/iet-bmt.2018.5046]
68. Liao, R.; Yu, S.; An, W.; Huang, Y. A model-based gait recognition method with body pose and human prior knowledge. Pattern Recognit.; 2020; 98, 107069. [DOI: https://dx.doi.org/10.1016/j.patcog.2019.107069]
69. Li, X.; Makihara, Y.; Xu, C.; Yagi, Y. End-to-End Model-Based Gait Recognition Using Synchronized Multi-View Pose Constraint. Proceedings of the IEEE/CVF International Conference on Computer Vision; Montreal, QC, Canada, 11–17 October 2021.
70. Wang, L.; Tan, T.; Ning, H.; Hu, W. Silhouette analysis-based gait recognition for human identification. IEEE Trans. Pattern Anal. Mach. Intell.; 2003; 25, pp. 1505-1518. [DOI: https://dx.doi.org/10.1109/TPAMI.2003.1251144]
71. Zeng, W.; Wang, C.; Yang, F. Silhouette-based gait recognition via deterministic learning. Pattern Recognit; 2014; 47, pp. 3568-3584. [DOI: https://dx.doi.org/10.1016/j.patcog.2014.04.014]
72. Tafazzoli, F.; Bebis, G.; Louis, S.; Hussain, M. Genetic feature selection for gait recognition. J. Electron. Imaging; 2015; 24, 013036. [DOI: https://dx.doi.org/10.1117/1.JEI.24.1.013036]
73. Liu, L.; Yin, Y.; Qin, W.; Li, Y. Gait recognition based on outermost contour. Int. J. Comput. Intell. Syst.; 2011; 4, pp. 1090-1099.
74. Choudhury, S.D.; Tjahjadi, T. Gait recognition based on shape and motion analysis of silhouette contours. Comput. Vis. Image Underst.; 2013; 117, pp. 1770-1785. [DOI: https://dx.doi.org/10.1016/j.cviu.2013.08.003]
75. Lee, C.P.; Tan, A.W.; Tan, S.C. Gait recognition via optimally interpolated deformable contours. Pattern Recognit. Lett.; 2013; 34, pp. 663-669. [DOI: https://dx.doi.org/10.1016/j.patrec.2013.01.013]
76. Ma, Y.; Wei, C.; Long, H. A Gait Recognition Method Based on the Combination of Human Body Posture and Human Body Contour. Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2020.
77. Chao, H.; He, Y.; Zhang, J.; Feng, J. GaitSet: Regarding Gait as a Set for Cross-View Gait Recognition. Proceedings of the AAAI Conference on Artificial Intelligence; Honolulu, HI, USA, 27 January 2019.
78. Deng, M.; Yang, H.; Cao, J.; Feng, X. View-Invariant Gait Recognition Based on Deterministic Learning and Knowledge Fusion. Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN); Budapest, Hungary, 14 July 2019.
79. Mu, Z.; Castro, F.M.; Marín-Jiménez, M.J.; Guil, N.; Li, Y.-R.; Yu, S. iLGaCo: Incremental Learning of Gait Covariate Factors. Proceedings of the 2020 IEEE International Joint Conference on Biometrics (IJCB); Houston, TX, USA, 28 September 2020.
80. Wang, Y.; Chen, Z.; Wu, Q.J.; Rong, X. Deep mutual learning network for gait recognition. Multimed. Tools Appl.; 2020; 79, pp. 22653-22672. [DOI: https://dx.doi.org/10.1007/s11042-020-09003-4]
81. Li, S.; Zhang, M.; Liu, W.; Ma, H.; Meng, Z. Appearance and Gait-Based Progressive Person Re-Identification for Surveillance Systems. Proceedings of the 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM); Xi’an, China, 13–16 September 2018.
82. Wang, X.; Zhang, J.; Yan, W.Q. Gait recognition using multichannel convolution neural networks. Neural Comput. Appl.; 2020; 32, pp. 14275-14285. [DOI: https://dx.doi.org/10.1007/s00521-019-04524-y]
83. Beauchemin, S.S.; Barron, J.L. The computation of optical flow. ACM Comput. Surv.; 1995; 27, pp. 433-466. [DOI: https://dx.doi.org/10.1145/212094.212141]
84. Castro, F.M.; Marín-Jiménez, M.J.; Guil, N.; López-Tapia, S.; de la Blanca, N.P. Evaluation of CNN Architectures for Gait Recognition Based on Optical Flow Maps. Proceedings of the 2017 International Conference of the Biometrics Special Interest Group (BIOSIG); Darmstadt, Germany, 20 September 2017.
85. Mahfouf, Z.; Merouani, H.F.; Bouchrika, I.; Harrati, N. Investigating the use of motion-based features from optical flow for gait recognition. Neurocomputing; 2018; 283, pp. 140-149. [DOI: https://dx.doi.org/10.1016/j.neucom.2017.12.040]
86. Arora, P.; Srivastava, S.; Singhal, S. Analysis of Gait Flow Image and Gait Gaussian Image Using Extension Neural Network for Gait Recognition. Deep Learning and Neural Networks: Concepts, Methodologies, Tools, and Applications; IGI Global: Hershey, PA, USA, 2020; pp. 429-449.
87. Yang, Y.; Tu, D.; Li, G. Gait Recognition Using Flow Histogram Energy Image. Proceedings of the 2014 22nd International Conference on Pattern Recognition; Montreal, QC, Canada, 24 August 2014.
88. Luo, Z.; Yang, T.; Liu, Y. Gait Optical Flow Image Decomposition for Human Recognition. Proceedings of the 2016 IEEE Information Technology, Networking, Electronic and Automation Control Conference; Chongqing, China, 20–22 May 2016.
89. Lam, T.H.; Cheung, K.H.; Liu, J.N. Gait flow image: A silhouette-based gait representation for human identification. Pattern Recognit.; 2011; 44, pp. 973-987. [DOI: https://dx.doi.org/10.1016/j.patcog.2010.10.011]
90. Wang, L.; Jia, S.; Li, X.; Wang, S. Human Gait Recognition Based on Gait Flow Image Considering Walking Direction. Proceedings of the 2012 IEEE International Conference on Mechatronics and Automation; Chengdu, China, 5–8 August 2012.
91. Hu, M.; Wang, Y.; Zhang, Z.; Zhang, D.; Little, J.J. Incremental learning for video-based gait recognition with LBP flow. IEEE Trans. Cybern.; 2012; 43, pp. 77-89.
92. Masood, H.; Farooq, H. An Appearance Invariant Gait Recognition Technique Using Dynamic Gait Features. Int. J. Opt.; 2021; 2021, pp. 1-15. [DOI: https://dx.doi.org/10.1155/2021/5591728]
93. Gong, S.; Liu, C.; Ji, Y.; Zhong, B.; Li, Y.; Dong, H. Advanced Image and Video Processing Using MATLAB; Springer: Berlin/Heidelberg, Germany, 2018; Volume 12.
94. Haining, R.P. Spatial autocorrelation and the quantitative revolution. Geogr. Anal.; 2009; 41, pp. 364-374. [DOI: https://dx.doi.org/10.1111/j.1538-4632.2009.00763.x]
95. Chan, S.H.; Võ, D.T.; Nguyen, T.Q. Subpixel Motion Estimation without Interpolation. Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing; Dallas, TX, USA, 14–19 March 2010.
96. Su, H.; Liao, Z.-W.; Chen, G.-Y. A Gait Recognition Method Using L1-PCA and LDA. Proceedings of the 2009 International Conference on Machine Learning and Cybernetics; Baoding, China, 12 July 2009.
97. Pushparani, M.; Sasikala, D. A survey of gait recognition approaches using pca and ica. Glob. J. Comput. Sci. Technol.; 2012; 12, pp. 1-5.
98. Ali, H.; Dargham, J.; Ali, C.; Moung, E.G. Gait Recognition Using Radon Transform With Principal Component Analysis. Proceedings of the 3rd International Conference on Machine Vision (ICMV); Hong Kong, China, 28 December 2010.
99. Liu, L.-F.; Jia, W.; Zhu, Y.-H. Gait Recognition Using Hough Transform and Principal Component Analysis. Proceedings of the International Conference on Intelligent Computing; Ulsan, Korea, 16–19 September 2009; Springer: Cham, Switzerland, 2009.
100. Kusakunniran, W.; Wu, Q.; Zhang, J.; Li, H. Gait recognition under various viewing angles based on correlated motion regression. IEEE Trans. Circuits Syst. Video Technol.; 2012; 22, pp. 966-980. [DOI: https://dx.doi.org/10.1109/TCSVT.2012.2186744]
101. Zhang, Z.; Tran, L.; Yin, X.; Atoum, Y.; Liu, X.; Wan, J.; Wang, N. Gait Recognition via Disentangled Representation Learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Long Beach, CA, USA, 20 June 2019.
102. Liao, R.; Cao, C.; Garcia, E.B.; Yu, S.; Huang, Y. Pose-Based Temporal-Spatial Network (Ptsn) for Gait Recognition with Carrying and Clothing Variations. Proceedings of the Chinese Conference on Biometric Recognition; Shenzhen, China, 28–29 October 2017; Springer: Berlin, Germany.
103. Martín-Félez, R.; Xiang, T. Gait Recognition by Ranking. Proceedings of the European Conference on Computer Vision; Florence, Italy, 7–13 October 2012; Springer: Berlin, Germany.
104. Liu, W.; Zhang, C.; Ma, H.; Li, S. Learning efficient spatial-temporal gait features with deep learning for human identification. Neuroinformatics; 2018; 16, pp. 457-471. [DOI: https://dx.doi.org/10.1007/s12021-018-9362-4]
105. Lin, B.; Zhang, S.; Yu, X. Gait Recognition via Effective Global-Local Feature Representation and Local Temporal Aggregation. Proceedings of the IEEE/CVF International Conference on Computer Vision; Montreal, QC, Canada, 11–17 October 2021.
106. Huang, X.; Zhu, D.; Wang, H.; Wang, X.; Yang, B.; He, B.; Liu, W.; Feng, B. Context-Sensitive Temporal Feature Learning for Gait Recognition. Proceedings of the IEEE/CVF International Conference on Computer Vision; Montreal, QC, Canada, 11–17 October 2021.
107. Peng, Y.; Hou, S.; Ma, K.; Zhang, Y.; Huang, Y.; He, Z. Learning Rich Features for Gait Recognition by Integrating Skeletons and Silhouettes. arXiv preprint; 2021; arXiv: 2110.13408
108. Huang, Z.; Xue, D.; Shen, X.; Tian, X.; Li, H.; Huang, J.; Hua, X.-S. 3D Local Convolutional Neural Networks for Gait Recognition. Proceedings of the IEEE/CVF International Conference on Computer Vision; Montreal, QC, Canada, 11–17 October 2021.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
This study develops a vision-based gait recognition system for person identification. Gait is a soft biometric trait that remains recognizable in low-resolution surveillance videos from which the face and other hard biometrics cannot be extracted. Gait is a cyclic pattern of human body locomotion consisting of two sequential phases: swing and stance. The gait features of the complete gait cycle, referred to as the gait signature, can be used for person identification. The proposed work utilizes gait dynamics for gait feature extraction: the spatio-temporal power spectral (STPS) gait features are computed from gait dynamics captured through sub-pixel motion estimation and are therefore little affected by the subject’s appearance. The STPS gait features preserve the spatio-temporal gait pattern and are fed to a quadratic support vector machine classifier for gait recognition aimed at person identification across different views and appearances. We evaluated the proposed technique on a locally collected gait dataset that captures the effect of view variance in high-scene-depth videos; it achieves significant accuracy across all appearances and views.