1. Introduction
Finger vein (FV) biometrics has emerged as an exceptionally secure and reliable technology for personal identity authentication. Finger veins are vascular patterns that are imperceptible to the naked eye but can be captured using near-infrared (NIR) light with wavelengths ranging from 700 nm to 1000 nm [1]. When NIR light passes through the finger, blood vessels absorb the light, producing a distinctive dark pattern in the image. Such unique vein patterns offer several advantages over other biometric traits, including:
High security. The intricate and distinctive patterns of FV are unique, rendering them exceedingly difficult to replicate or forge.
Non-contact. Finger vein recognition (FVR) does not require physical contact with the sensor, significantly reducing the risk of contamination and the transmission of germs.
User-friendly. The process of FVR is swift and straightforward, simply requiring the user to put their finger close to the sensor. Moreover, FVR is accessible to a wide range of individuals, regardless of age, gender, or complexion.
The cornerstone of FVR lies in the extraction of discriminative features from acquired images, which can be achieved through two primary types of methods: handcrafted and deep learning-driven. In the early stages of research, Miura et al. [2,3] pioneered curvature-based methods that captured the extent of curve bending at a particular point, albeit being susceptible to noise. Later, Gabor filtering-based methods [4,5] were introduced to enhance and extract FV features; although Gabor filters can be tuned to detect specific frequencies and orientations, finding optimal parameters for a given dataset remains challenging. Subsequently, curvature and Radon-like features (RLFs) were combined to effectively aggregate spatial information around vein structures [6], highlighting vein patterns and suppressing spurious non-boundary responses and noise; however, the obtained features are influenced by illumination variations. Recently, binary patterns of phase congruency (BPPCs) and pyramids of histograms of orientation gradients (PHOGs) have been incorporated for FV feature extraction [7], yet this method remains susceptible to local changes in scale, translation, and other factors. Overall, handcrafted methods rely heavily on expert experience rather than data; they are not always efficient, and their performance tends to vary across databases and scenarios.
On the contrary, deep learning-driven methods, which are inherently reliant on training data, have the potential to address some of these challenges. Various classical convolutional neural networks (CNNs), such as VGGNet [8,9], AlexNet [10,11], ResNet [12], DenseNet [13,14,15], Siamese networks [16,17], Xception [18], and generative adversarial networks (GANs) [19], have demonstrated robustness in a range of image recognition tasks and have also exhibited outstanding performance in FVR through fine-tuning and transfer learning [20]. In addition, self-attention mechanisms have been explored in FVR. Among them, a vein pattern constrained transformer (VPCFormer) [21] incorporates a self-attention mechanism to capture the correlations between different views of FV patterns, helping the model learn more discriminative features and improving its robustness. A large kernel and attention mechanism network (Let-Net) [22] likewise utilizes self-attention to enhance the feature representation; by incorporating large kernels and an attention mechanism, the network captures both local and global context information. SE-DenseNet-HP [23], on the other hand, combines squeeze-and-excitation (SE) channel attention with a hybrid pooling mechanism, allowing the model to dynamically recalibrate channel-wise feature responses and extract discriminative multi-scale features. The attention mechanism acquires attention weights by calculating the similarity between different units (channel-channel, pixel-pixel) in the feature maps, thus achieving a concentration of information.
It is noteworthy that the attention mechanism typically elevates the computational and storage requirements of the network, necessitating longer training and inference times. In certain scenarios, the attention mechanism might inadvertently concentrate on irrelevant features, potentially causing the model to overlook crucial information [24]. In contrast, the human visual system possesses a swift and dynamic ability to adjust its perception of external objects. When the visual range is optimally positioned, it can effortlessly capture intricate details. Conversely, for objects situated too far or too close, the visual system instinctively lowers its resolution to prioritize discernible features, given the challenges of distinguishing finer details.
To address these challenges and harness the strengths of both traditional visually guided handcrafted methods and deep learning (DL) methods, while minimizing their respective limitations, we propose a uniquely configured multi-scale and multi-orientation convolutional neural network. This architecture, coined the visual feature-guided diamond convolutional network (hereinafter dubbed 'VF-DCN'), has a deliberate three-layer configuration and a fully unsupervised training process, focusing on attaining simplicity and optimal performance. In all convolutional layers of VF-DCN, the convolutional kernels are tuned through multi-scale Log-Gabor filters, and an adaptive orientational filter learning strategy, drawing on human vision, is applied to the kernels across different scales. Remarkably, VF-DCN exhibits an innovative diamond-shaped convolutional structure that efficiently maintains a wider range of orientational kernels at medium scales. The main contributions of this work are summarized as follows:
Visual feature-guided convolutional kernels. The Log-Gabor filters, which closely mimic the frequency response of visual cells, are used to generate multi-scale Log-Gabor convolutional kernels. This ingenious design empowers the network to capture visual features with unprecedented effectiveness.
Diamond convolutional structure. Inspired by retinal imaging, where images become blurred at extreme focal lengths, a diamond convolutional structure is crafted to extract significant orientational information through training across multi-scale Log-Gabor filters.
Fully unsupervised learning network. The network is deliberately designed with just two Log-Gabor convolutional layers and a fully unsupervised training process, achieving a harmonious balance between simplicity and efficiency.
The remainder of this paper is organized as follows: Section 2 provides a summary review of Gabor and Log-Gabor filtering approaches for FVR. Section 3 details the design of Log-Gabor convolutional kernels. Section 4 elaborates on the entire recognition process of the proposed VF-DCN model. Section 5 discusses the experimental results to comprehensively assess the performance of the VF-DCN model. Four FV databases are adopted that contain images with varying qualities, resolutions, and dynamic ranges. Section 6 concludes the work with some remarks and hints at plausible future research lines.
2. Related Works
In this section, we provide a concise overview of Gabor-like filters, specifically Gabor and Log-Gabor, in the context of FVR applications. The Gabor filter family, inspired by the receptive fields of simple cells in the mammalian visual cortex, exhibits robustness to distortion in their coefficient magnitudes, rendering them ideally suited for pattern recognition tasks [25], including those pertaining to finger veins.
2.1. Gabor Filters
In the field of FVR, Gabor filters have been broadly used for feature enhancement and representation. Among them, a bank of even-symmetric Gabor filters with 8 orientations was used to exploit vein information in the images [4]. Then, Yang et al. [26] extended the Gabor filter bank to 2 scales and 8 orientations, and Wang et al. [27] used a bank of 24 Gabor filters covering 4 scales and 6 orientations. Moreover, fusion schemes were introduced to exploit the complementarity of various feature extraction methods. Specifically, a fuzzy-based fusion method was proposed in [28] that integrated Gabor filters with Retinex filters, resulting in enhanced visibility and recognition capabilities for FV images. In [29], adaptive Gabor filters were combined with SIFT/SURF feature extractors to enhance vein patterns. In [30], the concept of point grouping was incorporated into Gabor filters to effectively capture local vein patterns. The above Gabor filtering technologies primarily extract texture and orientation features from FV images, and these features are susceptible to image blurring, translation, rotation, and noise. To address these issues, Shi et al. [31] incorporated scattering removal techniques with Gabor filters to improve the clarity and reliability of FV patterns, alleviating the interference of noise and blurring artifacts. Li et al. [32] proposed a histogram of competitive Gabor directional binary statistics (HCGDBS) approach to improve the discriminant ability of features and the robustness to variations in image quality.
In recent years, numerous efforts have been directed towards integrating Gabor filters with deep learning networks, aimed at eliminating the constraints of manual parameter tuning and the limited representation capacity of Gabor filters. In [33], Gabor filters were employed as a preprocessing step, where Gabor-filtered images served as the input of the network. Further, in [34], the first layer of the network used Gabor kernels for feature learning, leaving the rest of the layers unchanged. Notably, the parameters of Gabor kernels are learned by backpropagation. In [35], a few of the early convolutional layers were substituted by a parameterized Gabor convolutional layer. Moreover, Luan et al. [36] adopted Gabor filters to modulate learnable convolutional kernels, allowing the network to capture more robust features across orientation and scale variations, without incurring additional computational burden. Similarly, Yao et al. [17] introduced Gabor orientation filters (GoFs) to modulate conventional convolutional kernels and constructed a Siamese network for FV verification.
It is crucial to acknowledge that Gabor filters possess two prominent limitations. First, the maximum bandwidth of a Gabor filter is constrained to approximately one octave, which restricts its ability to cover a wide range of frequencies. Second, Gabor filters are not the preferred choice when seeking broad-spectrum information while requiring optimal spatial localization. Both limitations hinder their efficiency in FV feature extraction.
2.2. Log-Gabor Filters
The Log-Gabor filter, proposed by Field [25], serves as an alternative to the Gabor filter with several distinct advantages. In the frequency domain, the Log-Gabor filter exhibits an attenuation rate that aligns more closely with the human visual system, making it more sensitive to low-frequency information and less sensitive to high-frequency information. As a result, the Log-Gabor filter demonstrates stronger anti-interference ability and is more accurate and reliable in extracting multi-scale image features. For instance, Gao et al. [37] pioneered the use of Log-Gabor filters to decompose input images into multiple scales and orientations. Arróspide et al. [38] demonstrated the superiority of Log-Gabor filters over Gabor filters in the context of image-based vehicle verification. Yang et al. [39] employed phase congruency and Log-Gabor energy for multimodal medical image fusion, showcasing the filters' versatility in fusing diverse image modalities. Bounneche et al. [40] proposed an oriented multi-scale Log-Gabor filter tailored for multispectral palmprint recognition. Lv et al. [41] utilized an odd-symmetric 2D Log-Gabor filter to analyze the phase and amplitude of iris textures across different frequencies and orientations. Shams et al. [42] combined a diffusion-coherence filter with a 2D Log-Gabor filter to enhance fingerprint images. Beyond these applications, Log-Gabor filters have also found their niche in motion estimation [43], remote sensing [44], and numerous other domains.
Overall, Log-Gabor filters exhibit superior performance compared to Gabor filters across various image processing and computer vision applications, particularly in multi-scale feature extraction, frequency feature matching, and noise resilience. Given that Log-Gabor has not yet been harnessed in FVR, we propose to incorporate Log-Gabor filters into the design of a lightweight FVR network. In the following, we will delve into the formulation of Log-Gabor convolutional kernels and the recognition process of our proposed VF-DCN model.
3. Log-Gabor Convolutional Kernels
In this section, 1D and 2D Log-Gabor filtering kernels are presented, and their corresponding parameter selection is discussed.
3.1. Log-Gabor Function
As described in [25], the transfer function of a Log-Gabor filter is a Gaussian function on a logarithmic frequency scale; the corresponding 1D Log-Gabor function is defined in Equation (1):
$G(f)=\exp\!\left(-\dfrac{\left[\log\left(f/f_{0}\right)\right]^{2}}{2\left[\log\left(\sigma_{f}/f_{0}\right)\right]^{2}}\right)$ (1)
where f0 is the central frequency of the filter, and σf is the standard deviation that determines the filter bandwidth. It can be observed from Equation (1) that the frequency response of a Log-Gabor filter is symmetric on a logarithmic axis. When extending the 1D Log-Gabor filter to 2D, the filter f in Equation (1) should be constructed in the polar coordinate system of the frequency domain due to the singularity of the log function at the origin. Specifically, the 2D Log-Gabor filter is decomposed into two components, a radial filter and an angular filter, so that the bandwidth of each component can be adjusted independently to facilitate analysis. Among these, the radial filter provides a frequency response that determines the frequency band, as described by Equation (2):
$G_{r}(r)=\exp\!\left(-\dfrac{\left[\log\left(r/f_{0}\right)\right]^{2}}{2\left[\log\left(\sigma_{r}/f_{0}\right)\right]^{2}}\right)$ (2)
and the angular filter is used to determine the orientation, as described by Equation (3):
$G_{\theta}(\theta)=\exp\!\left(-\dfrac{\left(\theta-\theta_{0}\right)^{2}}{2\sigma_{\theta}^{2}}\right)$ (3)
Then, these two components are multiplied together to construct the overall 2D Log-Gabor filter, as shown in Equation (4):
$G(r,\theta)=G_{r}(r)\,G_{\theta}(\theta)$ (4)
where (r, θ) are the polar coordinates, with r representing the radial coordinate and θ the angular coordinate; θ0 is the orientation angle of the filter, and σr and σθ are used to determine the radial and angular bandwidths, respectively. Table 1 shows the parameter settings required to build the two components of the 2D Log-Gabor filter, and the selection of specific parameters is discussed below.
3.2. Radial Parameters Selection
In Equation (2), the ratio σr/f0 determines the radial filter bandwidth: the smaller the value of σr/f0, the larger the radial filter bandwidth. Empirically, when σr/f0 is approximately 0.74, the radial filter bandwidth is approximately one octave, and when σr/f0 is approximately 0.55, the bandwidth is approximately two octaves. Figure 1 shows the results of the radial filters under different values of σr/f0. In our experiments, σr/f0 is set to 0.55 for balancing purposes.
In addition, the filter's central frequency f0 is calculated by Equation (5) as the reciprocal of the wavelength:
$f_{0}=\dfrac{1}{\lambda_{s}}$ (5)
Here, the wavelength λs is calculated by Equation (6):
$\lambda_{s}=\lambda_{\min}\,M^{\,s-1},\quad s=1,2,\ldots,S$ (6)
where λmin is the wavelength of the smallest scale filter, M is the radial scaling factor, which controls the successive wavelengths of the radial filters, and s indexes the radial filter scales, varying from 1 to S. When the wavelength is set to the minimum value λmin, the frequency attains its maximum value fmax = 1/λmin. In Section 5.4.1, we discuss the influence of different λmin values on the recognition performance and set λmin = 2 pixels.
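For concreteness, substituting the Table 1 settings (λmin = 2 pixels, M = 2.2, S = 4) into Equations (5) and (6) yields the following wavelengths and central frequencies:

```latex
\lambda_s = 2 \cdot 2.2^{\,s-1}
  \;\Rightarrow\; \{\lambda_s\} = \{2,\ 4.4,\ 9.68,\ 21.296\}\ \text{pixels},
\qquad
f_0 = \frac{1}{\lambda_s}
  \;\Rightarrow\; \{f_0\} \approx \{0.5,\ 0.227,\ 0.103,\ 0.047\}\ \text{cycles/pixel}.
```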
3.3. Angular Parameters Selection
In Equation (3), θ0 is the orientation angle of the filter, as defined by Equation (7):
$\theta_{0}(o)=\dfrac{(o-1)\pi}{O},\quad o=1,2,\ldots,O$ (7)
Similarly, the angular bandwidth of the filter is determined by the parameter σθ, which is calculated by Equation (8):
$\sigma_{\theta}=\dfrac{\Delta\theta}{T}=\dfrac{\pi}{O\,T}$ (8)
The angular bandwidth determines the directionality of the filter: a narrower bandwidth results in stronger directionality. Moreover, the angular interval between filter orientations is fixed by Δθ = π/O. In the frequency domain, the spread of the 2D Log-Gabor filter in the angular direction is a Gaussian with respect to the polar angle around the center. The angular overlap of the filter transfer functions is controlled by the angular interval between filter orientations and the angular scaling factor T. Figure 2 shows the resulting angular filters when O = 10, under different angular scaling factors T. It can be observed that the larger the value of T, the smaller the angular overlap. In the following experiments, T is set to 1.3 to achieve approximately minimal overlap; by Equation (8), this gives σθ = π/(10 × 1.3) ≈ 0.242 rad (about 13.8°).
3.4. Bank of Log-Gabor Filtering Kernels
With the radial and angular parameter settings in place, we can obtain a bank of 2D Log-Gabor filters by Equation (4). Following the parameter settings in Table 1, the resulting bank of Log-Gabor filters is presented in Figure 3.
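To make the construction concrete, the following is a minimal NumPy sketch of such a bank of frequency-domain Log-Gabor transfer functions, following Equations (2)-(4) and the Table 1 settings; the grid conventions, the function name log_gabor_bank, and the ROI size in the usage line are our illustrative assumptions rather than the paper's exact implementation:

```python
import numpy as np

def log_gabor_bank(rows, cols, n_scales=4, n_orient=10,
                   lambda_min=2.0, mult=2.2, sigma_on_f=0.55, t=1.3):
    """Bank of 2D Log-Gabor transfer functions in the frequency domain,
    following Equations (2)-(4) with the settings of Table 1."""
    # Centered normalized-frequency grid in polar coordinates.
    u = np.fft.fftshift(np.fft.fftfreq(cols))
    v = np.fft.fftshift(np.fft.fftfreq(rows))
    U, V = np.meshgrid(u, v)
    r = np.sqrt(U**2 + V**2)
    theta = np.arctan2(V, U)
    r[rows // 2, cols // 2] = 1.0          # avoid log(0) at the DC component

    sigma_theta = (np.pi / n_orient) / t   # Eq. (8): angular bandwidth
    bank = np.empty((n_scales, n_orient, rows, cols))
    for s in range(n_scales):
        wavelength = lambda_min * mult**s  # Eq. (6): lambda_s
        f0 = 1.0 / wavelength              # Eq. (5): central frequency
        radial = np.exp(-(np.log(r / f0))**2 /
                        (2 * np.log(sigma_on_f)**2))       # Eq. (2)
        radial[rows // 2, cols // 2] = 0.0 # zero response at DC
        for o in range(n_orient):
            theta0 = o * np.pi / n_orient  # Eq. (7): orientation angle
            # Wrapped angular distance keeps the Gaussian periodic in theta.
            dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
            angular = np.exp(-dtheta**2 / (2 * sigma_theta**2))  # Eq. (3)
            bank[s, o] = radial * angular  # Eq. (4): product of components
    return bank

bank = log_gabor_bank(64, 128)  # hypothetical ROI size
```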
4. VF-DCN Model for Finger Vein Recognition
As previously discussed, the human visual system exhibits nonlinear logarithmic characteristics. In this regard, the Log-Gabor filter is consistent with the human visual system, potentially enabling it to encode natural images more efficiently than ordinary Gabor functions. Given the remarkable performance gains achieved by Gabor filters integrated with CNNs in the field of FVR, it is reasonable to hypothesize that incorporating Log-Gabor filters into CNNs could bring further improvements. Motivated by this premise, we integrated Log-Gabor filters with a CNN architecture to devise a uniquely configured multi-scale and multi-orientational finger vein recognition network, namely 'VF-DCN'.
In this section, the overall framework of our VF-DCN and its processing flow specific to FVR are firstly elaborated. Then, an adaptive orientational filter selection and retention mechanism for Log-Gabor convolutional kernels across various scales is implemented. This stands as the cornerstone of our VF-DCN model, ensuring optimal utilization of Log-Gabor filters for capturing intricate vein patterns across different orientations and scales. Finally, the output feature vectors of image samples are extracted from the well-trained VF-DCN and serve as inputs for downstream recognition or verification tasks.
4.1. Framework of VF-DCN Model
The overall framework of the VF-DCN is depicted in Figure 4. It is known as a lightweight network, consisting of a preprocessing stage and an unsupervised training process. Here, the unsupervised training aims to learn the convolutional kernels within its two convolutional layers. By utilizing multi-scale Log-Gabor filters and incorporating the human visual system’s sensitivity to orientation at varying scales, the optimal orientational filters are adaptively identified and function as the final convolutional kernels. For detailed unsupervised training strategies, refer to Section 4.2. Upon completion of the thorough training process, the VF-DCN model transforms into a feature extractor, generating feature vectors that can be directly employed in downstream recognition or verification tasks.
4.1.1. Preprocessing Stage
In the preprocessing step, we employed a synergistic approach that integrates the 3σ criterion dynamic threshold strategy [1] with the Kirsch detector [45] to localize the region of interest (ROI). Compared to Sobel, Canny, and similar operators, the Kirsch detector exhibits a superior balance between identifying weak edges and minimizing false edges, yielding a clearer binary edge gradient image. Nonetheless, when FV image quality is hindered by uneven illumination and noise, edges may exhibit pronounced discontinuities, and some weak edges may remain undetected. To address this issue, the 3σ criterion offers three-level dynamic thresholds that automatically adjust to varying image qualities. This ensures the generation of more complete boundary lines, thereby facilitating the efficacy of the ROI extraction process. For illustration, Figure 7c,d show examples of ROIs extracted from two FV databases.
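For reference, the following is a minimal sketch of the Kirsch compass operator [45] used in this step; the thresholding and its fusion with the 3σ dynamic thresholds [1] are omitted, and the function name is ours:

```python
import numpy as np
from scipy.ndimage import convolve

def kirsch_response(img):
    """Maximum response of the eight Kirsch compass kernels."""
    base = np.array([[ 5,  5,  5],
                     [-3,  0, -3],
                     [-3, -3, -3]], dtype=np.float64)
    kernels = []
    k = base.copy()
    for _ in range(8):
        kernels.append(k.copy())
        # Rotate the outer ring of the 3x3 mask by one position (45 degrees)
        # to obtain the next compass direction.
        ring = [k[0, 0], k[0, 1], k[0, 2], k[1, 2],
                k[2, 2], k[2, 1], k[2, 0], k[1, 0]]
        ring = ring[-1:] + ring[:-1]
        (k[0, 0], k[0, 1], k[0, 2], k[1, 2],
         k[2, 2], k[2, 1], k[2, 0], k[1, 0]) = ring
    responses = [convolve(img.astype(np.float64), kk) for kk in kernels]
    return np.max(responses, axis=0)
```

The maximum of the eight directional responses is subsequently thresholded (in our pipeline, by the three-level dynamic thresholds) to obtain the binary edge gradient image.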
4.1.2. Unsupervised Training Process of VF-DCN
In this section, we initially illustrate the network topology of VF-DCN, followed by a detailed exposition of its specific training process.
The backbone of VF-DCN is a deliberate three-layer CNN architecture consisting of two consecutive Log-Gabor convolutional layers, followed by a binary hashing and block-wise histogram layer, as shown in Figure 5.
The input layer comprises the ROI samples derived from the preprocessing stage. Assume that the i-th input ROI sample Ii has dimensions of m × n. For the two consecutive Log-Gabor convolutional layers, Log-Gabor filters over S scales with adaptively selected orientations are constructed, comprising banks of K1 and K2 filtering kernels in the first and second convolutional layers, respectively. In the first convolutional layer, each of the K1 filtering kernels Wk(1) is convolved with the input sample Ii, forming a total of K1 output feature maps with dimensions of m × n, as mathematically expressed in Equation (9):
$F_{i}^{(1,k)}=I_{i}\ast W_{k}^{(1)},\quad k=1,2,\ldots,K_{1}$ (9)
where ∗ signifies the 2D Log-Gabor convolution operation. After the completion of the first convolutional layer, each feature map Fi(1,k) undergoes a convolution operation with every convolution kernel Wl(2) of the second layer, resulting in a total of K1K2 output feature maps with dimensions of m × n. This transformation is concisely encapsulated in Equation (10):
$F_{i}^{(2,k,l)}=F_{i}^{(1,k)}\ast W_{l}^{(2)},\quad k=1,\ldots,K_{1},\; l=1,\ldots,K_{2}$ (10)
Subsequently, binary hashing is performed on the acquired feature maps, and the final histogram features are distilled through block-wise histogram encoding. In this process, the binary layer serves as a nonlinear transformer, leveraging a straightforward binary hashing quantization method to remap the feature maps into a binary representation, as expressed in Equation (11).
$T_{i}^{(k)}=\sum_{l=1}^{K_{2}}2^{\,l-1}\,H\!\left(F_{i}^{(2,k,l)}\right),\quad k=1,2,\ldots,K_{1}$ (11)
where H(·) is a Heaviside step function that outputs 1 when its argument is positive and 0 otherwise, and the summation denotes the weighted sum of the binary images, yielding encoded feature maps Ti(k) with integer values. The block-wise histogram layer plays the role of feature pooling. It uses simple block-wise histograms of the binary encoding to generate the final 1D feature vector. First, each encoded feature map Ti(k) is partitioned into B non-overlapping blocks. Then, the histogram of the decimal values in each block is computed, and all block histograms are concatenated into a 1D vector, as expressed in Equation (12):
$V_{i}=\left[\mathrm{Bhist}\!\left(T_{i}^{(1)}\right),\ \ldots,\ \mathrm{Bhist}\!\left(T_{i}^{(K_{1})}\right)\right]$ (12)
where Bhist(·) is the block-wise histogram operation, and Vi is the learned feature vector corresponding to the input image sample Ii. In short, VF-DCN innovatively incorporates Log-Gabor convolutional kernels to extract multi-scale and multi-orientation human-like visual features, which mitigates overfitting and simplifies the training process. It can be seen as a simple unsupervised deep convolutional network, allowing for random sample selection during network training without the need to tune or optimize various regularization parameters. Moreover, the block-wise histogram of VF-DCN implicitly encodes spatial information in the image, effectively approximating the probability distribution function of image features within each block.
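For illustration, the following is a minimal NumPy sketch of the encoding in Equations (11) and (12); the 4 × 4 block grid and the function name are our assumptions:

```python
import numpy as np

def encode_features(maps_2nd, grid=(4, 4)):
    """Binary hashing (Eq. (11)) and block-wise histograms (Eq. (12)).

    maps_2nd: second-layer feature maps of one sample, shaped
    (K1, K2, H, W), i.e., K2 maps per first-layer channel."""
    k1, k2, h, w = maps_2nd.shape
    features = []
    for k in range(k1):
        # Heaviside step, then a weighted sum of the K2 binary maps:
        # one integer-valued encoded map T_i^(k) in [0, 2^K2).
        bits = (maps_2nd[k] > 0).astype(np.int64)
        weights = (2 ** np.arange(k2)).reshape(k2, 1, 1)
        encoded = (bits * weights).sum(axis=0)
        # Block-wise histograms over non-overlapping blocks (Bhist).
        bh, bw = h // grid[0], w // grid[1]
        for bi in range(grid[0]):
            for bj in range(grid[1]):
                block = encoded[bi * bh:(bi + 1) * bh, bj * bw:(bj + 1) * bw]
                hist, _ = np.histogram(block, bins=2 ** k2, range=(0, 2 ** k2))
                features.append(hist)
    return np.concatenate(features)  # the 1D feature vector V_i
```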
4.2. Adaptive Orientational Filtering Selection
As previously mentioned, the key training objective revolves around determining the optimal Log-Gabor convolutional kernels across two consecutive convolutional layers. To achieve this, we devised an adaptive orientational filter selection and retention strategy across multiple scales, tailored to extract multi-scale features while dynamically selecting the most suitable orientational filters for diverse FV datasets. The learning process of the adaptive filter consists of three main steps:
Firstly, a candidate bank of Log-Gabor filters is constructed, comprising 4 scales and 10 orientations. Specifically, the radial filter scale s (as denoted in Equation (6)) takes the values 1, 2, 3, and 4, and the orientation angle θ0 (as denoted in Equation (7)) spans [0, π) in steps of π/10.
Secondly, for each scale, we carry out a histogram-based statistical analysis of the most pertinent orientational filters. Each scale is treated separately because of the nature of retinal imaging: fine details become harder to discern at extreme distances due to declining detail resolution, and the visual system adjusts to varying focal lengths and perspectives when analyzing objects at different scales. Likewise, in the convolutional layers of VF-DCN, it becomes imperative to dynamically adjust the number of orientational filters based on each scale's suitability for extracting features. To this end, we select the orientational filters within each scale in turn. Specifically, each training ROI image is convolved with the 10 candidate orientational filters of the given scale, resulting in 10 filtered complex images (denoted as Co, o = 1, 2, ..., 10). Subsequently, we take the absolute value of the real part of each filtered complex image to generate the corresponding power map (denoted as Po = |Re(Co)|). Next, the magnitude responses of the pixels in these power maps serve as a metric for assessing each filter's impact on the image. We then sort these magnitude responses in descending order across all pixels and all power maps, simultaneously recording the index of the power map as well as the corresponding spatial row and column coordinates. This enables us to identify the most prominent orientations, namely the filters most frequently utilized, by analyzing the statistical histogram of high magnitude responses among the candidate orientational filters.
Finally, we retain the filters with the highest count of such high-magnitude responses, effectively fine-tuning the number of orientations at each scale. This strategy ensures that the convolutional filters better reflect the inherent characteristics of the image and the scale’s contribution to feature extraction. By mirroring the adaptability of the human visual system in processing objects at varying distances, this mechanism enhances the efficiency and realism of the convolutional filters.
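A minimal NumPy sketch of this per-scale selection follows, under the assumption that a fixed fraction of the strongest responses is counted (the exact counting rule is given by Algorithms 1-3; the function names are ours):

```python
import numpy as np

def select_orientations(train_rois, scale_filters, n_keep, top_frac=0.1):
    """Per-scale orientational filter selection (cf. Algorithm 1).

    scale_filters: (O, H, W) centered frequency-domain Log-Gabor filters
    of a single scale; n_keep: number of orientations to retain."""
    n_orient = scale_filters.shape[0]
    votes = np.zeros(n_orient, dtype=np.int64)
    filters = np.fft.ifftshift(scale_filters, axes=(-2, -1))
    for roi in train_rois:
        spectrum = np.fft.fft2(roi)
        # Power maps: absolute value of the real part of each filtered image.
        power = np.stack([np.abs(np.real(np.fft.ifft2(spectrum * f)))
                          for f in filters])
        # Sort all magnitude responses (across orientations and pixels)
        # in descending order and keep the strongest fraction.
        n_top = int(top_frac * power.size)
        top = np.argsort(power, axis=None)[::-1][:n_top]
        # Histogram of which orientation produced each top response.
        orient_idx = np.unravel_index(top, power.shape)[0]
        votes += np.bincount(orient_idx, minlength=n_orient)
    # Retain the most frequently winning orientational filters.
    return np.sort(np.argsort(votes)[::-1][:n_keep])
```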
In order to better understand the whole process of orientational filtering selection, we provide a pseudo-code description in Algorithm 1.
In Algorithm 1, a Log-Gabor construction function generates the filters tailored to specific scales and orientations as dictated by Equation (4). To efficiently perform the Log-Gabor image convolutions, the algorithm leverages the two-dimensional discrete Fourier transform and its inverse. Following the convolution operations, the real part of the transformed data is isolated to build the power maps. A sorting function, whose pseudo-code is detailed in Algorithm 2, sorts the magnitude responses of each pixel across all orientational power maps. Subsequently, a counting function, whose pseudo-code is given in Algorithm 3, performs the statistical analysis, counting the frequency of occurrence of each candidate orientational filter across all pixel positions. Finally, a selection step directly identifies and retains the most frequently used orientational filters from the pool of candidates. This streamlined approach ensures that the most representative filters are prioritized for further analysis or application.
Algorithm 1 Pseudo-code of the orientational filter selection algorithm
[Algorithm omitted. See PDF]
As illustrated in Figure 3, filters corresponding to the extreme scales, specifically S = 1 and S = 4, are overly large or small, respectively. Conversely, filters at the intermediate scales, notably S = 2 and S = 3, contribute more significantly to capturing crucial features. Consequently, for the extreme scales (S = 1 and S = 4), we strategically select relatively few orientational filters (e.g., n1 = n4 = 2), while for the intermediate scales (S = 2 and S = 3), we retain a comparatively higher number of orientational filters (e.g., n2 = n3 = 7).
Surprisingly, the acquired convolutional kernel structure resembles a diamond shape, aptly modeling the human eye’s adaptability to varying focal lengths and perspectives when observing objects at different distances. This feature not only brings a bio-plausible mechanism but also significantly enhances the robustness of a computer vision model when processing real-world images. Figure 6 depicts the adaptive orientational filter learning strategy applied to the convolutional kernels across diverse scales. This strategy enables the model to dynamically refine its orientation selection, optimizing its performance based on the intricacies of the data it encounters.
Algorithm 2 Pseudo-code for the sorting function
[Algorithm omitted. See PDF]
Algorithm 3 Pseudo-code for the counting function
[Algorithm omitted. See PDF]
4.3. Recognition
Following the aforementioned procedures, we have learned the respective feature vectors for each training image through the VF-DCN framework. These feature vectors exhibit versatility, capable of being applied in both classification and verification scenarios.
Under the classification paradigm, the ensemble of feature vectors {Vi} extracted from the FV ROIs serves as the foundational input for determining the class label (or identity) correlated with each feature vector. To assess the proficiency of VF-DCN in extracting highly discriminative feature vectors, we have opted for a simple yet effective classifier: the k-nearest neighbor (k-NN) classifier based on Euclidean distance, with k = 1 (denoted as 1-NN in the following). This choice is advantageous due to its absence of training requirements and the lack of tunable parameters, ensuring a direct evaluation of the feature vectors' discriminative power.
Figure 6. Adaptive orientational filter learning strategy for the convolutional kernels across different scales.
[Figure omitted. See PDF]
Shifting to the verification mode, a crucial matching step ensues. Here, two biometric templates, encapsulated in their respective feature vectors V1 and V2, are compared to yield a corresponding distance metric d(V1, V2), where d is the Euclidean distance used as a quantitative measure of the similarity between the two feature vectors.
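Both modes thus reduce to simple distance computations over the learned feature vectors; a minimal sketch (the function names are ours) is:

```python
import numpy as np

def match_distance(v1, v2):
    """Verification: Euclidean distance between two template vectors
    (a smaller distance means a more similar pair)."""
    return np.linalg.norm(v1 - v2)

def classify_1nn(query, gallery, labels):
    """Classification: 1-NN over the gallery of enrolled feature vectors."""
    dists = np.linalg.norm(gallery - query, axis=1)
    return labels[np.argmin(dists)]
```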
5. Experimental Analysis
This section presents the experimental analysis to evaluate the performance of the proposed VF-DCN model. First, Section 5.1 provides the details of the experimental FV databases. Then, Section 5.2 and Section 5.3 present the experimental setting and corresponding evaluation metrics. After that, some key parameters are analyzed in Section 5.4, and the ablation study of the VF-DCN model is presented in Section 5.5. Finally, computational complexity is discussed in Section 5.6, and the comparison with some state-of-the-art methods is presented in Section 5.7.
5.1. Experimental Databases
In our experiments, four distinct finger vein databases, namely MMCBNU_6000 [46], FV_USM [47], HKPU [5], and our self-made ZSC_FV [1], are employed to facilitate a fair and comprehensive comparison. These databases capture FV images under diverse conditions and with heterogeneous acquisition devices, thereby ensuring the robustness and representativeness of our evaluation for real-world applications. Table 2 shows the pertinent characteristics of the four FV databases, and Figure 7 visually depicts the ROIs of each database.
5.1.1. MMCBNU_6000 [46]
The publicly available MMCBNU_6000 database contains 6000 FV images captured from 100 subjects in a single session; each subject provided 10 samples of the index, middle, and ring fingers of both hands, yielding 600 finger classes. ROI images are provided with the dataset.
Figure 7. ROI images of four FV databases, in which ROIs in (a,b) are provided by the datasets themselves, while ROIs in (c,d) are extracted by the 3σ criterion [1].
[Figure omitted. See PDF]
5.1.2. FV_USM [47]
The publicly available FV_USM database contains 5904 FV images captured from 123 subjects over two sessions; each subject provided 12 samples of the index and middle fingers of both hands, yielding 492 finger classes. ROI images are provided with the dataset.
5.1.3. HKPU [5]
The publicly available HKPU database contains 3132 FV images captured from 156 subjects over two sessions; the index and middle fingers of the left hand were imaged, yielding 312 finger classes with 6 or 12 samples per finger. ROIs are extracted using the 3σ criterion [1].
5.1.4. ZSC_FV [1]
The ZSC_FV database, created by our team, contains 37,080 FV images collected from 1030 undergraduate students, all within the age range of 18 to 22 years. Each student contributed 36 images: six samples from the index, middle, and ring fingers of both hands. The acquisition process was conducted indoors under varying illumination conditions, enriching its analytical potential. The capturing device was manufactured by Beijing YanNan Tech Co., Ltd. (Beijing, China). All finger vein images are saved in bitmap (.bmp) format at a fixed resolution. Prior to analysis or use in FVR, these images undergo preprocessing that includes ROI segmentation [1] (as shown in Figure 7d). Statistical analysis using the 3σ criterion [1] reveals that approximately 94.6% of the images (totaling 35,090 samples) are of good quality, while about 4.8% (1778 samples) are classified as poor quality, and the remainder falls into the medium-quality category. ZSC_FV thus provides a substantial and diverse dataset of FV images from a young population, captured under varying conditions, and offers a demanding benchmark for demonstrating the superiority of our proposed method.
5.2. Experimental Setting
Our experiments were carried out in a computing environment with a 3.6 GHz Intel Core i7 CPU (Intel Corporation, Santa Clara, CA, USA) and 32 GB RAM. We adopted an open-set protocol, ensuring that the training and testing sets were entirely non-overlapping. Specifically, for each database, a fixed proportion of the fingers was randomly selected for training, with the remainder reserved for testing. Notably, in scenarios where a finger was captured across two sessions, we consolidated the images to simulate a realistic data collection scenario, maintaining the distinctiveness between training and testing fingers. The classification and verification tasks were executed solely on the testing set, and the final results were averaged over five runs for reliability. In the verification phase, the Euclidean distance served as the metric for similarity assessment.
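A minimal sketch of the finger-disjoint open-set split, with the training proportion left as a parameter since it is database-dependent:

```python
import numpy as np

def open_set_split(finger_ids, train_ratio, seed=0):
    """Finger-disjoint split for the open-set protocol: all images of a
    finger go either to training or to testing, never both."""
    rng = np.random.default_rng(seed)
    fingers = np.unique(finger_ids)
    rng.shuffle(fingers)
    n_train = int(train_ratio * len(fingers))
    train_mask = np.isin(finger_ids, fingers[:n_train])
    return train_mask, ~train_mask  # boolean masks over the image list
```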
5.3. Evaluation Metrics
As performance metrics, we focused on the equal error rate (EER), accuracy (ACC), and the receiver operating characteristic (ROC) curve, which are widely recognized standards for evaluating the performance of FVR [17].
The EER signifies the optimal balance between the False Acceptance Rate (FAR) and the False Rejection Rate (FRR), with a lower EER indicating superior verification performance. Among these, the FAR quantifies the error rate at which unenrolled FV images are accepted as enrolled images; the corresponding formula is shown in Equation (13):
$\mathrm{FAR}=\dfrac{N_{\mathrm{FA}}}{N_{\mathrm{IA}}}\times 100\%$ (13)
where NFA is the number of falsely accepted impostor comparisons and NIA is the total number of impostor comparisons,
while the FRR represents the error rate at which enrolled FV images are rejected as unenrolled images, as shown in Equation (14):
$\mathrm{FRR}=\dfrac{N_{\mathrm{FR}}}{N_{\mathrm{GA}}}\times 100\%$ (14)
where NFR is the number of falsely rejected genuine comparisons and NGA is the total number of genuine comparisons.
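For completeness, a minimal sketch that sweeps a decision threshold over the matching distances to trace FAR and FRR and approximate the EER at their crossing (the function name is ours):

```python
import numpy as np

def far_frr_eer(genuine, impostor):
    """FAR (Eq. (13)) and FRR (Eq. (14)) over a threshold sweep, and the
    EER at their crossing. genuine/impostor: arrays of match distances."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    # A comparison is accepted when its distance falls below the threshold.
    far = np.array([(impostor < t).mean() for t in thresholds])
    frr = np.array([(genuine >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return far, frr, (far[i] + frr[i]) / 2.0
```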
5.4. Key Parameters Analysis
In this experiment, we analyzed some key parameters of the VF-DCN model, allowing us to understand the specific impact of each parameter on the overall performance. As discussed in Section 3, several key parameters, including λmin (the wavelength of the smallest-scale filter), M (the radial scaling factor), and T (the angular scaling factor), affect the representation ability of the Log-Gabor filters, so we chose these parameters for testing. Once these three parameters are set, the central frequency f0 of the filter and the angular standard deviation σθ are also determined by Equations (5) and (8). It should be noted that each sub-experiment evaluates one parameter while keeping the others fixed according to Table 1, and the FV database adopted is MMCBNU_6000.
By systematically varying each parameter and observing the changes in recognition performance, we can gain insights into how these parameters influence the filter’s effectiveness. Specifically, the diamond convolution structure utilized is [2,7,7,2].
5.4.1. Wavelength of the Smallest-Scale Filter (λmin)
This sub-experiment explores the impact of adjusting λmin on recognition performance. Upon setting λmin, the maximum frequency is derived using Equations (5) and (6). Table 3 presents the recognition performance, and Figure 8a visually illustrates the trend of the EER as λmin varies. Notably, when λmin is set to 2, a relatively superior performance is achieved.
5.4.2. Radial Scaling Factor (M)
This sub-experiment investigates the effect of varying the radial scaling factor (M) on recognition performance. By adjusting M, a sequence of wavelengths and corresponding frequencies is generated, adhering to Equations (5) and (6). Our findings in Table 4 reveal that while variations in M have a relatively minor influence on ACC, they significantly impact the EER. Specifically, as M increases from 1.4 to 2.2, the EER continuously decreases, indicating enhanced recognition performance. Figure 8b illustrates this trend, showing how the EER improves with increasing M.
5.4.3. Angular Scaling Factor (T)
This section investigates the impact of varying T (the angular scaling factor) on recognition performance. As elaborated in Section 3.3, Equation (8) underscores the role of T in determining σθ. Table 5 presents the recognition performance under various T values, and Figure 8c visually depicts the trend of the EER as T varies. When T is set to 1.3, a relatively superior performance is observed, indicating an optimal setting for maximizing recognition accuracy. This setting ensures a smooth and effective balance of the angular scaling, thereby enhancing the overall recognition performance.
5.5. Ablation Study
In this section, we conduct ablation studies to gain insights into the individual contributions of different scales to the discriminative features and to identify the optimal diamond-shaped convolutional structure that maximizes performance. It is important to note that, for this study, we utilize the parameter settings detailed in Table 1, specifically M = 2.2, σr/f0 = 0.55, T = 1.3, and λmin = 2.0, and all ROIs are resized to the same dimensions.
Firstly, we test the contribution degree of the four scales to the discriminative features. To do this, we choose 10 orientations from a single scale at a time. In the first column of Table 6, [10,0,0,0] indicates that 10 orientations are chosen from scale S = 1, with no orientations selected from the other scales; similar interpretations apply to [0,10,0,0], [0,0,10,0], and [0,0,0,10]. From Table 6, the EERs at the two extreme scales are by far the highest, revealing that using only the smallest (S = 1) or largest (S = 4) scale results in unacceptably high EERs, akin to the visual blurring that occurs when observing objects at extreme distances or proximities. Conversely, the scales S = 2 and S = 3 demonstrate relatively lower EERs, suggesting that intermediate scales contribute more effectively to the discriminative features.
Secondly, we explore the effectiveness of various diamond-shaped convolutional structures. In the first column of Table 6, [2,7,7,2] signifies that the two most predominant orientations are selected at scales S = 1 and S = 4, while the seven most predominant orientations are selected at scales S = 2 and S = 3. From Table 6, this diamond convolutional structure consistently outperforms the other configurations across the four databases, as evident from the EER values reported in Table 6 and further illustrated in Figure 9. The optimal structure effectively balances the orientation selection across scales, leading to improved recognition performance.
5.6. Feature Extraction Time
In this experiment, we conducted a comprehensive analysis of the feature extraction time for various diamond-shaped convolutional structures. Table 7 presents the feature extraction times (in seconds) for these structures across the four FV databases. A clear trend emerges from the results: the fewer orientations selected within a given structure, the lower the time required for feature extraction. Although structures with more orientations inevitably take longer, it is noteworthy that the time cost of our proposed method remains exceptionally low, on the order of hundredths of a second per image. This is a testament to the efficiency of our VF-DCN model, especially when compared to other DL methods [14], which often incur significantly higher computational overheads. Therefore, our VF-DCN model not only achieves superior recognition accuracy but also maintains an acceptable feature extraction time, making it suitable for real-time applications. This balance between effectiveness and efficiency underscores the practicality and value of the proposed diamond-shaped convolutional structure.
5.7. Comparison Experiment
In this experiment, we conducted a thorough comparison of our proposed VF-DCN against the following typical and recent FV feature representation and recognition methods in terms of EER and ACC.
-
(1). RLF [6]: RLF is a handcrafted method that combines curvature and Radon-like features; it can effectively aggregate the dispersed spatial information around vein structures, highlighting vein patterns, suppressing spurious non-boundary responses and noise, and yielding a smoother vein structure image. From Table 8, the performance of RLF, a recent handcrafted method, is better than that of GCN but worse than that of the other DL methods, showing that handcrafted methods close to human vision also have their advantages.
-
(2). GCN [36]: GCN (the source code is available at https://github.com/jxgu1016/Gabor_CNN_PyTorch, accessed on 12 January 2024) is a Gabor convolutional network with Gabor filters incorporated into DCNNs. The network is composed of four Gabor convolutional layers, each followed by max-pooling and ReLU, with a dropout layer after the fully connected layer. From Table 8, although GCN is a DL method, its performance is limited by the depth of the network.
-
(3). PalmNet [48]: PalmNet (the source code is available at https://github.com/AngeloUNIMI/PalmNet, accessed on 12 January 2024) is a three-layer CNN with two Gabor convolutional layers and one binarization layer; it uses an innovative unsupervised training algorithm and can tune filters based on a limited quantity of data. PalmNet is a hybrid method comprising Gabor filters and a shallow convolutional network. From Table 8, its performance is better than that of the other DL methods, supporting the idea that fusing handcrafted and DL approaches is feasible.
-
(4). SNGR [17]: SNGR was constructed based on a Siamese framework and embedded with a pair of eight-layer tiny ResNets as the backbone branch network. We chose the EER and ACC when the ratio of training and testing data is 9:1, as reported in [17].
-
(5). SC-SDCN [14]: SC-SDCN is a DL method that proposes a sparsified densely connected network with separable convolution; the more training data, the better its performance. For a fair comparison, we chose the EER and ACC when the ratio of training to testing data is 5:5. As reported in [14], the performance improves further as the training data increase, showing that DL methods depend heavily on training data, whereas our proposed VF-DCN requires little data.
-
(6). DenseNet161 [49]: DenseNet161 (the source code is available at https://github.com/ridvansalihkuzu/vein-biometrics, accessed on 12 January 2024) is a DL method. We chose the EER and ACC when the ratio of training to testing data is 9:1, as reported in [17].
Despite the unique strengths exhibited by all the methods under consideration, the proposed VF-DCN model demonstrates superior performance across the four databases, as shown in Table 8, achieving the lowest EERs and the highest ACCs on MMCBNU_6000, FV_USM, HKPU, and ZSC_FV. This achievement validates the feasibility of our innovative approach, which integrates simulated retinal imaging with a combination of Log-Gabor filters and a diamond-shaped convolutional structure. The successful integration of these components not only enhances the network's ability to capture intricate FV features but also showcases the potential of this novel approach in advancing the field of FV technology.
Table 8. Comparison with other methods on four FV databases.
Methods | MMCBNU_6000 | FV_USM | HKPU | ZSC_FV | ||||
---|---|---|---|---|---|---|---|---|
EER | ACC | EER | ACC | EER | ACC | EER | ACC | |
RLF [6] | - | - | - | - | ||||
GCN [36] | - | - | - | - | ||||
PalmNet [48] | ||||||||
SNGR [17] | - | - | - | - | ||||
SC-SDCN [14] | - | - | - | - | ||||
DenseNet161 [49] | - | - | - | - | ||||
VF-DCN |
6. Conclusions
In this paper, we carried out a hybrid exploration of Log-Gabor and a diamond convolutional structure. The advantages of the proposed VF-DCN are as follows:
(1). Integration of Log-Gabor Filters: Log-Gabor filters are well suited for natural image processing due to their ability to capture the statistical properties of natural scenes. By incorporating Log-Gabor filters into our network architecture, we effectively leverage their benefits for improved image feature extraction and representation.
(2). Diamond Convolutional Structure: This structure enables the network to capture spatial information in a more efficient and effective manner, leading to improved performance.
(3). Simulating Retinal Imaging: By combining Log-Gabor filters and diamond convolutions, we created a network that simulates the processes of the human retina. This approach results in a network that represents and processes visual information in a way similar to the human visual system.
(4). Improved Performance: The fact that VF-DCN achieves the best performance compared to other methods clearly indicates that our approach is effective. This not only validates our idea but also demonstrates the potential of combining Log-Gabor filters and diamond convolutions for visual information processing tasks.
(5). Potential for Further Applications: The success of VF-DCN in achieving superior performance suggests that this approach can be applied to a wide range of image processing and computer vision tasks, such as object detection, image segmentation, and visual recognition.
While the VF-DCN excels as an efficient lightweight network model, featuring just two convolutional layers, it is prudent to acknowledge its inherent limitations in extracting deeper, more abstract features. Consequently, there is a pressing need to delve deeper into extending this network model, exploring ways to transform it into a deeper, more comprehensive architecture. Furthermore, although the adaptive learning strategy of orientational filters is indeed inspired by the intricate workings of the human visual system, it is imperative to undertake rigorous research to determine the optimal number of orientational filters at each scale. Looking ahead, we plan to continue this line of research and endeavor to integrate VF-DCN with self-attention mechanisms, thereby enhancing the network’s ability to mimic the fundamental principles underlying biological visual imaging systems even more closely.
Methodology, Q.Y., D.S. and X.X.; writing—original draft preparation, Q.Y. and X.X.; writing—review and editing, Q.Y., D.S., X.X. and K.Z. All authors have read and agreed to the published version of the manuscript.
The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding authors.
The authors would like to take this opportunity to thank the Editors and anonymous reviewers for their detailed comments and suggestions, which greatly helped us to improve the clarity and presentation of our manuscript.
The authors declare no conflict of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 3. Bank of Log-Gabor filters. Each row in (c) contains filters computed at the same scale; for each scale, 10 orientations are sampled.
Figure 9. ROC curves of various diamond-shaped convolutional structures on four finger vein databases.
Table 1. Parameter settings of the 2D Log-Gabor filter.
Component | Parameter | Value | Description
---|---|---|---
Radial | λmin | 2 | Wavelength of the smallest scale filter
Radial | σr/f0 | 0.55 | Radial standard deviation (as a fraction of f0)
Radial | S | 4 | Number of radial filter scales
Radial | M | 2.2 | Radial scaling factor
Angular | σθ | – | Angular standard deviation (derived from Equation (8))
Angular | O | 10 | Number of filter orientations
Angular | T | 1.3 | Angular scaling factor
Table 2. Details of four FV databases (for column 'Fingers', i: index, m: middle, r: ring; for column 'Hands', l: left hand, r: right hand).
Databases | Total Images | Num of Fingers | Num of Subjects | Fingers | Hands | Samples per Finger | Sessions | ROI
---|---|---|---|---|---|---|---|---
MMCBNU_6000 | 6000 | 600 | 100 | i, m, r | l, r | 10 | 1 | provided
FV_USM | 5904 | 492 | 123 | i, m | l, r | 12 | 2 | provided
HKPU | 3132 | 312 | 156 | i, m | l | 6/12 | 2 | 3σ [1]
ZSC_FV | 37,080 | 6189 | 1030 | i, m, r | l, r | 6 | 1 | 3σ [1]
Table 3. Varying λmin results on recognition performance.
λmin | 1.0 | 1.5 | 2.0 | 2.5 | 3.0
EER | | | | | |
ACC | | | | | |
Table 4. Different M (radial scaling factor) results on recognition performance.
M | 1.4 | 1.5 | 1.6 | 1.7 | 1.8 | 1.9 | 2.0 | 2.1 | 2.2 | 2.3
EER | | | | | | | | | | |
ACC | | | | | | | | | | |
Table 5. Different T (angular scaling factor) results on recognition performance.
T | 1.0 | 1.1 | 1.2 | 1.3 | 1.4 | 1.5
EER | | | | | | |
ACC | | | | | | |
Table 6. Recognition results of different convolutional structures (numbers of orientations per scale) on four FV databases.
Number of Convolution | MMCBNU_6000 | FV_USM | HKPU | ZSC_FV | ||||
---|---|---|---|---|---|---|---|---|
EER | ACC | EER | ACC | EER | ACC | EER | ACC | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
| | | | | | | | |
Table 7. Feature extraction time (s) of various diamond structures on four FV databases.
Diamond-Shape | MMCBNU_6000 | FV_USM | HKPU | ZSC_FV |
---|---|---|---|---|
| 0.0030 | 0.0031 | 0.0024 | 0.0059 |
| 0.0030 | 0.0031 | 0.0023 | 0.0058 |
| 0.0025 | 0.0026 | 0.0019 | 0.0043 |
| 0.0019 | 0.0019 | 0.0017 | 0.0030 |
| 0.0068 | 0.0066 | 0.0273 | 0.0117 |
| 0.0111 | 0.0103 | 0.0272 | 0.0386 |
| 0.0111 | 0.0109 | 0.0282 | 0.0178 |
| 0.0356 | 0.0344 | 0.0285 | 0.0408 |
References
1. Yao, Q.; Song, D.; Xu, X. Robust Finger-vein ROI Localization Based on the 3σ Criterion Dynamic Threshold Strategy. Sensors; 2020; 20, 3997. [DOI: https://dx.doi.org/10.3390/s20143997] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32708410]
2. Miura, N.; Nagasaka, A.; Miyatake, T. Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification. Mach. Vis. Appl.; 2004; 15, pp. 194-203. [DOI: https://dx.doi.org/10.1007/s00138-004-0149-2]
3. Miura, N.; Nagasaka, A.; Miyatake, T. Extraction of finger-vein patterns using maximum curvature points in image profiles. IEICE-Trans. Inf. Syst.; 2007; E90-D, pp. 1185-1194. [DOI: https://dx.doi.org/10.1093/ietisy/e90-d.8.1185]
4. Yang, J.; Yang, J.; Shi, Y. Finger-Vein Segmentation Based on Multi-channel Even-symmetric Gabor Filters. Proceedings of the 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems; Shanghai, China, 20–22 November 2009; Volume 4, pp. 500-503.
5. Kumar, A.; Zhou, Y. Human Identification Using Finger Images. IEEE Trans. Image Process.; 2012; 21, pp. 2228-2244. [DOI: https://dx.doi.org/10.1109/TIP.2011.2171697]
6. Yao, Q.; Song, D.; Xu, X.; Zou, K. A Novel Finger Vein Recognition Method Based on Aggregation of Radon-Like Features. Sensors; 2021; 21, 1885. [DOI: https://dx.doi.org/10.3390/s21051885]
7. Lv, W.; Ma, H.; Li, Y. A finger vein authentication system based on pyramid histograms and binary pattern of phase congruency. Infrared Phys. Technol.; 2023; 132, 104728. [DOI: https://dx.doi.org/10.1016/j.infrared.2023.104728]
8. Huang, H.; Liu, S.; Zheng, H.; Ni, L.; Zhang, Y.; Li, W. DeepVein: Novel finger vein verification methods based on Deep Convolutional Neural Networks. Proceedings of the 2017 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA); New Delhi, India, 22–24 February 2017; pp. 1-8.
9. Anas Bilal, G.S.; Mazhar, S. Finger-vein recognition using a novel enhancement method with convolutional neural network. J. Chin. Inst. Eng.; 2021; 44, pp. 407-417. [DOI: https://dx.doi.org/10.1080/02533839.2021.1919561]
10. Fairuz, S.; Habaebi, M.H.; Elsheikh, E.M.A. Finger Vein Identification Based on Transfer Learning of AlexNet. Proceedings of the 7th International Conference on Computer and Communication Engineering (ICCCE); Kuala Lumpur, Malaysia, 19–20 September 2018; pp. 465-469.
11. Lu, Y.; Xie, S.; Wu, S. Exploring Competitive Features Using Deep Convolutional Neural Network for Finger Vein Recognition. IEEE Access; 2019; 7, pp. 35113-35123. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2902429]
12. Kim, W.; Song, J.M.; Park, K.R. Multimodal Biometric Recognition Based on Convolutional Neural Network by the Fusion of Finger-Vein and Finger Shape Using Near-Infrared (NIR) Camera Sensor. Sensors; 2018; 18, 2296. [DOI: https://dx.doi.org/10.3390/s18072296]
13. Song, J.M.; Kim, W.; Park, K.R. Finger-Vein Recognition Based on Deep DenseNet Using Composite Image. IEEE Access; 2019; 7, pp. 66845-66863. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2918503]
14. Yao, Q.; Xu, X.; Li, W. A Sparsified Densely Connected Network with Separable Convolution for Finger-Vein Recognition. Symmetry; 2022; 14, 2686. [DOI: https://dx.doi.org/10.3390/sym14122686]
15. Noh, K.J.; Choi, J.; Hong, J.S.; Park, K.R. Finger-Vein Recognition Based on Densely Connected Convolutional Network Using Score-Level Fusion with Shape and Texture Images. IEEE Access; 2020; 8, pp. 96748-96766. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2996646]
16. Tang, S.; Zhou, S.; Kang, W.; Wu, Q.; Deng, F. Finger vein verification using a Siamese CNN. IET Biom.; 2019; 8, pp. 306-315. [DOI: https://dx.doi.org/10.1049/iet-bmt.2018.5245]
17. Yao, Q.; Chen, C.; Song, D.; Xu, X.; Li, W. A Novel Finger Vein Verification Framework Based on Siamese Network and Gabor Residual Block. Mathematics; 2023; 11, 3190. [DOI: https://dx.doi.org/10.3390/math11143190]
18. Shaheed, K.; Mao, A.; Qureshi, I.; Kumar, M.; Hussain, S.; Ullah, I.; Zhang, X. DS-CNN: A pre-trained Xception model based on depth-wise separable convolutional neural network for finger vein recognition. Expert Syst. Appl.; 2022; 191, 116288. [DOI: https://dx.doi.org/10.1016/j.eswa.2021.116288]
19. Hou, B.; Yan, R. Triplet-Classifier GAN for Finger-Vein Verification. IEEE Trans. Instrum. Meas.; 2022; 71, pp. 1-12. [DOI: https://dx.doi.org/10.1109/TIM.2022.3154834]
20. Kuzu, R.S.; Maiorana, E.; Campisi, P. Vein-based Biometric Verification using Transfer Learning. Proceedings of the 43rd International Conference on Telecommunications and Signal Processing (TSP); Milan, Italy, 7–9 July 2020; pp. 403-409.
21. Zhao, P.; Song, Y.; Wang, S.; Xue, J.H.; Zhao, S.; Liao, Q.; Yang, W. VPCFormer: A transformer-based multi-view finger vein recognition model and a new benchmark. Pattern Recognit.; 2024; 148, 110170. [DOI: https://dx.doi.org/10.1016/j.patcog.2023.110170]
22. Li, M.; Gong, Y.; Zheng, Z. Finger Vein Identification Based on Large Kernel Convolution and Attention Mechanism. Sensors; 2024; 24, 1132. [DOI: https://dx.doi.org/10.3390/s24041132]
23. Devkota, N.; Kim, B.W. Finger Vein Recognition Using DenseNet with a Channel Attention Mechanism and Hybrid Pooling. Electronics; 2024; 13, 501. [DOI: https://dx.doi.org/10.3390/electronics13030501]
24. Li, X.; Feng, J.; Cai, J.; Lin, G. FV-MViT: Mobile Vision Transformer for Finger Vein Recognition. Sensors; 2024; 24, 1331. [DOI: https://dx.doi.org/10.3390/s24041331]
25. Field, D.J. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A Opt. Image Sci.; 1987; 4, pp. 2379-2394. [DOI: https://dx.doi.org/10.1364/JOSAA.4.002379] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/3430225]
26. Yang, J.; Shi, Y.; Yang, J. Finger-Vein Recognition Based on a Bank of Gabor Filters. Proceedings of the Computer Vision—ACCV 2009; Xi’an, China, 23–27 September 2009; Springer: Berlin/Heidelberg, Germany, 2010; pp. 374-383.
27. Wang, R.; Wang, G.; Chen, Z.; Zeng, Z.; Wang, Y. A palm vein identification system based on Gabor wavelet features. Neural Comput. Appl.; 2014; 24, pp. 161-168. [DOI: https://dx.doi.org/10.1007/s00521-013-1514-8]
28. Shin, K.Y.; Park, Y.H.; Nguyen, D.T.; Park, K.R. Finger-Vein Image Enhancement Using a Fuzzy-Based Fusion Method with Gabor and Retinex Filtering. Sensors; 2014; 14, pp. 3095-3129. [DOI: https://dx.doi.org/10.3390/s140203095] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24549251]
29. Kovač, I.; Marák, P. Finger vein recognition: Utilization of adaptive gabor filters in the enhancement stage combined with SIFT/SURF-based feature extraction. Signal Image Video Process.; 2023; 17, pp. 635-641. [DOI: https://dx.doi.org/10.1007/s11760-022-02270-8]
30. Yang, L.; Yang, G.; Wang, K.; Liu, H.; Xi, X.; Yin, Y. Point Grouping Method for Finger Vein Recognition. IEEE Access; 2019; 7, pp. 28185-28195. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2901017]
31. Shi, Y.; Yang, J. Image restoration and enhancement for finger-vein recognition. Proceedings of the 2012 IEEE 11th International Conference on Signal Processing; Beijing, China, 21–25 October 2012; Volume 3, pp. 1605-1608.
32. Li, M.; Wang, H.; Li, L.; Zhang, D.; Tao, L. Finger Vein Recognition Based on a Histogram of Competitive Gabor Directional Binary Statistics. J. Database Manag.; 2023; 34, pp. 1-19. [DOI: https://dx.doi.org/10.4018/JDM.321547]
33. Calderon, A.F.L.; Roa, S.; Victorino, J. Handwritten Digit Recognition using Convolutional Neural Networks and Gabor filters. Proceedings of the 2003 International Congress on Computational Intelligence; Medellín, Colombia, 6–8 November 2003.
34. Alekseev, A.; Bobe, A. GaborNet: Gabor filters with learnable parameters in deep convolutional neural network. Proceedings of the 2019 International Conference on Engineering and Telecommunication (EnT); Dolgoprudny, Russia, 20–21 November 2019; pp. 1-4.
35. Pérez, J.C.; Alfarra, M.; Jeanneret, G.; Bibi, A.; Thabet, A.; Ghanem, B.; Arbeláez, P. Gabor Layers Enhance Network Robustness. Proceedings of the Computer Vision—ECCV 2020; Glasgow, UK, 23–28 August 2020; Springer International Publishing: Cham, Switzerland, 2020; pp. 450-466.
36. Luan, S.; Chen, C.; Zhang, B.; Han, J.; Liu, J. Gabor Convolutional Networks. IEEE Trans. Image Process.; 2018; 27, pp. 4357-4366. [DOI: https://dx.doi.org/10.1109/TIP.2018.2835143]
37. Gao, X.; Sattar, F.; Venkateswarlu, R. Multiscale Corner Detection of Gray Level Images Based on Log-Gabor Wavelet Transform. IEEE Trans. Circuits Syst. Video Technol.; 2007; 17, pp. 868-875.
38. Arróspide, J.; Salgado, L. Log-Gabor Filters for Image-Based Vehicle Verification. IEEE Trans. Image Process.; 2013; 22, pp. 2286-2295. [DOI: https://dx.doi.org/10.1109/TIP.2013.2249080]
39. Yang, Y.; Tong, S.; Huang, S.; Lin, P. Log-Gabor energy based multimodal medical image fusion in NSCT domain. Comput. Math. Methods Med.; 2014; 2014, 835481. [DOI: https://dx.doi.org/10.1155/2014/835481]
40. Bounneche, M.D.; Boubchir, L.; Bouridane, A.; Nekhoul, B.; Ali-Chérif, A. Multi-spectral palmprint recognition based on oriented multiscale log-Gabor filters. Neurocomputing; 2016; 205, pp. 274-286. [DOI: https://dx.doi.org/10.1016/j.neucom.2016.05.005]
41. Lv, L.; Yuan, Q.; Li, Z. An algorithm of Iris feature-extracting based on 2D Log-Gabor. Multimed. Tools Appl.; 2019; 78, pp. 22643-22666. [DOI: https://dx.doi.org/10.1007/s11042-019-7551-2]
42. Shams, H.; Jan, T.; Ali, A.; Ahmad, N.; Munir, A.; Khalil, R.A. Fingerprint image enhancement using multiple filters. PeerJ Comput. Sci.; 2023; 9, e1183. [DOI: https://dx.doi.org/10.7717/peerj-cs.1183] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37346560]
43. Wang, Y.; Lu, H.; Qin, X.; Guo, J. Residual Gabor convolutional network and FV-Mix exponential level data augmentation strategy for finger vein recognition. Expert Syst. Appl.; 2023; 223, 119874. [DOI: https://dx.doi.org/10.1016/j.eswa.2023.119874]
44. Zhu, B.; Yang, C.; Dai, J.; Fan, J.; Qin, Y.; Ye, Y. R2FD2: Fast and Robust Matching of Multimodal Remote Sensing Images via Repeatable Feature Detector and Rotation-Invariant Feature Descriptor. IEEE Trans. Geosci. Remote Sens.; 2023; 61, pp. 1-15. [DOI: https://dx.doi.org/10.1109/TGRS.2023.3264610]
45. Kirsch, R.A. Computer determination of the constituent structure of biological images. Comput. Biomed. Res.; 1971; 4, pp. 315-328. [DOI: https://dx.doi.org/10.1016/0010-4809(71)90034-6]
46. Lu, Y.; Xie, S.J.; Yoon, S.; Wang, Z.; Park, D.S. An available database for the research of finger vein recognition. Proceedings of the 2013 6th International Congress on Image and Signal Processing (CISP); Hangzhou, China, 16–18 December 2013; Volume 1, pp. 410-415.
47. Mohd Asaari, M.S.; Suandi, S.A.; Rosdi, B.A. Fusion of Band Limited Phase Only Correlation and Width Centroid Contour Distance for finger based biometrics. Expert Syst. Appl.; 2014; 41, pp. 3367-3382. [DOI: https://dx.doi.org/10.1016/j.eswa.2013.11.033]
48. Genovese, A.; Piuri, V.; Plataniotis, K.N.; Scotti, F. PalmNet: Gabor-PCA Convolutional Networks for Touchless Palmprint Recognition. IEEE Trans. Inf. Forensics Secur.; 2019; 14, pp. 3160-3174. [DOI: https://dx.doi.org/10.1109/TIFS.2019.2911165]
49. Kuzu, R.S.; Piciucco, E.; Maiorana, E.; Campisi, P. On-the-Fly Finger-Vein-Based Biometric Recognition Using Deep Neural Networks. IEEE Trans. Inf. Forensics Secur.; 2020; 15, pp. 2641-2654. [DOI: https://dx.doi.org/10.1109/TIFS.2020.2971144]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Finger vein (FV) biometrics has garnered considerable attention due to its inherent non-contact nature and high security, exhibiting tremendous potential in identity authentication and beyond. Nevertheless, challenges pertaining to the scarcity of training data and inconsistent image quality continue to impede the effectiveness of finger vein recognition (FVR) systems. To tackle these challenges, we introduce the visual feature-guided diamond convolutional network (dubbed 'VF-DCN'), a uniquely configured multi-scale and multi-orientation convolutional neural network. VF-DCN offers three pivotal innovations. First, it tunes its convolutional kernels with multi-scale Log-Gabor filters. Second, it adopts a distinctive diamond-shaped convolutional kernel architecture inspired by human visual perception: more orientational filters are allocated to the medium scales, which inherently carry richer information, whereas at the extreme scales the number of orientational filters is minimized to simulate the natural blurring of objects at extreme focal lengths. Third, the network uses a deliberately shallow three-layer configuration and a fully unsupervised training process, prioritizing simplicity and optimal performance. Extensive experiments are conducted on four FV databases: MMCBNU_6000, FV_USM, HKPU, and ZSC_FV. The experimental results reveal that VF-DCN achieves remarkable improvement, with equal error rates (EERs) of
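The diamond-shaped, multi-scale Log-Gabor filter bank summarized above can be sketched in a few lines of NumPy. The following is a minimal illustration, not the authors' released code: the kernel size, the five center frequencies, the 2–4–8–4–2 orientation allocation, and the names log_gabor_kernel and diamond_bank are all illustrative assumptions chosen to mirror the "more orientations at medium scales, fewer at the extremes" idea.

```python
import numpy as np

def log_gabor_kernel(size, f0, theta0, sigma_f=0.55, sigma_theta=0.4):
    """One Log-Gabor filter built in the frequency domain; the real part
    of its inverse FFT serves as an even-symmetric spatial conv kernel."""
    half = size // 2
    coords = (np.arange(size) - half) / size        # normalized frequencies
    x, y = np.meshgrid(coords, coords)
    radius = np.hypot(x, y)
    radius[half, half] = 1e-9                       # avoid log(0) at DC
    theta = np.arctan2(-y, x)
    # Radial term: Gaussian on a log-frequency axis (zero response at DC).
    radial = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_f) ** 2))
    # Angular term: Gaussian around the preferred orientation theta0.
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-dtheta ** 2 / (2 * sigma_theta ** 2))
    freq_resp = radial * angular                    # DC sits at the array center
    kernel = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(freq_resp)))
    return np.real(kernel)

def diamond_bank(size=31,
                 scales=(0.30, 0.15, 0.075, 0.0375, 0.019),   # assumed values
                 orients_per_scale=(2, 4, 8, 4, 2)):          # diamond shape
    """Diamond allocation: the middle scale gets the most orientations,
    tapering off toward the finest and coarsest scales."""
    bank = [log_gabor_kernel(size, f0, k * np.pi / n)
            for f0, n in zip(scales, orients_per_scale)
            for k in range(n)]
    return np.stack(bank)                           # (20, size, size) here

if __name__ == "__main__":
    filters = diamond_bank()
    print(filters.shape)                            # (20, 31, 31)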