1. Introduction
As a relatively young discipline, machine vision technology has gradually been integrated into everyday life in recent years and has made great progress in both research and application. Target recognition is a typical industrial application of machine vision. In the industrial field, robots equipped with machine vision have gradually begun to replace traditional robots, and accurately identifying the workpiece is both the focus and the difficulty for a vision robot [1,2,3], as well as the basis of its grasping operation. In 1999, David Lowe proposed the scale-invariant feature transform (SIFT) [4], a feature matching algorithm built on earlier invariance-based feature detection methods. SIFT operates on local feature points of objects: image scaling and rotation do not affect the detection results, and it is highly robust to noise and other disturbances. In 2006, Bay et al. [5] built on SIFT to propose Speeded-Up Robust Features (SURF), a fast and robust feature extraction and registration algorithm. Owing to its Haar-like features and the introduction of the integral image, SURF outperforms SIFT in speed.
Rosten et al. [6] proposed the FAST corner detector in 2006, which can quickly locate feature points. Calonder et al. proposed the BRIEF descriptor to describe the image region around each feature point [7]. Rublee et al. improved the FAST corner and the BRIEF descriptor and proposed the ORB feature, which can effectively replace SIFT and SURF [8]. The ORB algorithm trades a modest loss in feature-point accuracy and robustness for faster computation, achieving a good compromise between feature quality and performance [9].
Owing to the good performance of the ORB feature [10,11,12,13,14,15], many scholars at home and abroad have proposed different improvements to the ORB algorithm [16,17,18,19]. Hong et al. [20] combined ORB feature-point matching with an eight-parameter rotation model, improving feature-point detection speed; Bing et al. [21] improved the rotation handling in ORB feature-point matching, which enhanced matching accuracy. However, studies of the ORB algorithm under poor illumination conditions remain rare. To address this problem, this paper first converts the template image and each input frame of the video stream to grayscale. Second, adaptive histogram equalization is applied to both the input image and the template image to improve their quality. Third, feature descriptors are extracted with the ORB algorithm. Finally, the KNN matching algorithm is used: by comparing the distances of the best and second-best feature matches, a reasonable ratio threshold is set to eliminate mismatches. Compared with the traditional ORB matching algorithm, the improved ORB matching algorithm significantly improves matching performance under poor lighting conditions. Finally, by comparing the number of correct matches between the input image's features and each template image's features against a reasonable threshold, target classification with small samples is effectively realized.
2. ORB Algorithm Principle (Oriented FAST and Rotated BRIEF)
Oriented FAST and Rotated BRIEF (ORB) combines the well-known FAST feature detector with the BRIEF feature descriptor.
2.1. Feature Point Detection
Image feature points can be simply understood as the more salient points in an image, such as contour points, bright spots in dark areas, and dark spots in bright areas. The ORB algorithm uses the FAST algorithm [22,23,24,25] to find feature points. The core idea of FAST is to find points that stand out: compare a point with its surrounding points, and if it differs from most of them, it can be selected as a feature point [26].
$$N = \sum_{x \in \operatorname{circle}(P)} \big( |I(x) - I(P)| > \varepsilon_d \big) \tag{1}$$
where I(x) is the grayscale of any point on the circle, I(P) is the grayscale of the center, and ε_d is the threshold on the gray-value difference. If N is greater than the given threshold, generally three-quarters of the circle points, P is considered a feature point.

The specific FAST procedure is as follows. First, select a pixel P (Figure 1) and judge whether it can be a feature point. Assume its gray value is M, and set a suitable threshold V (for example, 20% of M): two points are considered different when the absolute difference of their gray values exceeds V. Then, taking P as the center, select 16 pixels on a circle of radius 3. P is regarded as a corner if the gray levels of L consecutive points among these 16 are all larger than M + V or all smaller than M − V. Here, L is set to 12: if at least 12 consecutive points exceed the threshold, P is considered a feature point; otherwise, it is not. To obtain results faster, an additional acceleration step is adopted: test the four circle points at 90-degree intervals first. At least three of them must differ sufficiently from the candidate's gray value; otherwise, the candidate is rejected without testing the remaining points. Figure 1 is a schematic diagram of FAST feature-point extraction.
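The segment test described above can be sketched in a few lines of Python. This is an illustrative simplification, not a production FAST implementation (which would add the 4-point pre-test and non-maximum suppression); the circle offsets are the standard radius-3 Bresenham circle.

```python
# The 16 offsets of the radius-3 Bresenham circle around the candidate pixel.
CIRCLE16 = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
            (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, x, y, threshold, n_required=12):
    """FAST-12 segment test: (x, y) is a corner if n_required CONTIGUOUS circle
    pixels are all brighter than center+threshold or all darker than
    center-threshold. img is a 2D list of gray values."""
    center = img[y][x]
    # Classify each circle pixel: +1 brighter, -1 darker, 0 similar.
    states = []
    for dx, dy in CIRCLE16:
        v = img[y + dy][x + dx]
        if v > center + threshold:
            states.append(1)
        elif v < center - threshold:
            states.append(-1)
        else:
            states.append(0)
    # Longest contiguous run of identical nonzero states, allowing wrap-around.
    doubled = states + states
    run = best = 0
    prev = 0
    for s in doubled:
        run = run + 1 if (s != 0 and s == prev) else (1 if s != 0 else 0)
        prev = s
        best = max(best, run)
    return best >= n_required
```

For example, a dark pixel surrounded by a uniformly bright circle passes the test, while a pixel in a flat region does not.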
Because the FAST algorithm cannot provide direction information for feature points, the ORB algorithm builds a three-level Gaussian image pyramid to add scale invariance [27] and uses the gray (intensity) centroid method [28] to give feature points rotation invariance. That is, a coordinate system is established with the feature point as the origin, the centroid position is calculated within its neighborhood S, and a vector is constructed from the feature point to the centroid. The moment of the neighborhood S is
$$m_{pq} = \sum_{x, y \in S} x^{p} y^{q} I(x, y) \tag{2}$$
where I(x, y) is the gray value of the image, p, q ∈ {0, 1}, r is the radius of the feature-point neighborhood, and C is the centroid position of the neighborhood:

$$C = \left( \frac{m_{10}}{m_{00}},\; \frac{m_{01}}{m_{00}} \right) \tag{3}$$
The orientation of the FAST feature point is then

$$\theta = \arctan\!\left( \frac{m_{01}}{m_{10}} \right) \tag{4}$$
To preserve the rotation invariance of the feature point, it is necessary to ensure that x and y remain in the circular area of radius r, that is, x, y ∈ [−r, r], where r is the neighborhood radius.
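Equations (2)–(4) can be sketched directly: accumulate the moments over a circular neighborhood and take the angle of the centroid vector. This is an illustrative implementation of the intensity-centroid idea, not the exact ORB code.

```python
import math

def orientation(img, cx, cy, radius):
    """Orientation (radians) of the patch centered at (cx, cy), via the
    intensity centroid: theta = atan2(m01, m10). img is a 2D list of grays."""
    m10 = m01 = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx * dx + dy * dy > radius * radius:
                continue  # keep x, y inside the circular region of radius r
            v = img[cy + dy][cx + dx]
            m10 += dx * v  # x-coordinate relative to the keypoint
            m01 += dy * v  # y-coordinate relative to the keypoint
    return math.atan2(m01, m10)
```

On a patch whose intensity grows to the right, the centroid lies to the right of the keypoint and the orientation is 0; on one growing downward, it is π/2.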
2.2. Calculate Feature Point Descriptors
ORB uses an improved BRIEF algorithm to compute the feature-point descriptor, resolving BRIEF's primary defect of having no rotation invariance. The core idea is to select point pairs in a specific pattern around the feature point and combine the results of comparing these point pairs into a descriptor.
The BRIEF descriptor is simple and fast; it is based on the idea that an image neighborhood can be represented by a relatively small number of intensity comparisons.
Define the binary test τ on the image neighborhood P:
$$\tau(P; x, y) = \begin{cases} 1, & P(x) < P(y) \\ 0, & P(x) \ge P(y) \end{cases} \tag{5}$$
Here, P(x) is the pixel intensity at point x of the neighborhood P after the smoothing (filtering) process. Choosing n position pairs (x, y) uniquely defines a set of binary tests, and the BRIEF descriptor is the n-dimensional binary bit string:
$$f_n(P) = \sum_{i=1}^{n} 2^{\,i-1}\, \tau(P; x_i, y_i) \tag{6}$$
The value of n can be 128, 256, 512, etc.; different values trade off speed, storage efficiency, and discriminability.
Because the binary test in BRIEF considers only a single pixel, it is sensitive to noise. To address this defect, each test point in ORB uses a 5 × 5 sub-window inside a 31 × 31 pixel neighborhood; the sub-window positions obey a Gaussian distribution, and the integral image is used to accelerate the computation.
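Equations (5) and (6), with the 5 × 5 sub-window smoothing just described, can be sketched as follows. The random pair pattern here is an illustrative assumption; ORB uses a learned pattern, and real implementations accelerate the 5 × 5 means with an integral image.

```python
def patch_mean5(img, x, y):
    """Mean of the 5x5 sub-window around (x, y), standing in for smoothing."""
    return sum(img[y + dy][x + dx]
               for dy in range(-2, 3) for dx in range(-2, 3)) / 25.0

def brief_descriptor(img, cx, cy, pairs):
    """Binary descriptor for the keypoint (cx, cy): bit i is 1 iff the smoothed
    intensity at the first test point of pair i is below that of the second."""
    bits = []
    for (x1, y1), (x2, y2) in pairs:
        a = patch_mean5(img, cx + x1, cy + y1)
        b = patch_mean5(img, cx + x2, cy + y2)
        bits.append(1 if a < b else 0)
    return bits

def hamming(d1, d2):
    """Number of differing bits -- the distance used to match ORB descriptors."""
    return sum(b1 != b2 for b1, b2 in zip(d1, d2))
```

The same pair pattern must be reused for every keypoint so that descriptors are comparable; two descriptors of the same patch then have Hamming distance 0.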
BRIEF itself is undirected and has no rotation invariance; ORB's solution is to add a direction to BRIEF. For the set of n binary tests at positions (x_i, y_i), define the 2 × n matrix:
$$S = \begin{pmatrix} x_1 & \cdots & x_n \\ y_1 & \cdots & y_n \end{pmatrix} \tag{7}$$
Using the neighborhood orientation θ and the corresponding rotation matrix R_θ, a steered version of S is built: S_θ = R_θ S. The steered BRIEF descriptor is then
$$g_n(P, \theta) = f_n(P) \mid (x_i, y_i) \in S_\theta \tag{8}$$
After obtaining steered BRIEF, a greedy search [29] finds the 256 test pairs with the lowest correlation among all candidate pixel-block pairs, yielding the final descriptor. Figure 2 is a schematic diagram of the descriptor calculation.
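The steering step of Equations (7) and (8) amounts to rotating every test point by R_θ before the binary tests are evaluated. A minimal sketch (rounding to integer pixel offsets, as an illustrative simplification; ORB precomputes the rotated patterns for 30 discretized angles):

```python
import math

def steer_pattern(pairs, theta):
    """Rotate every test point of the sampling pattern by the matrix R_theta."""
    c, s = math.cos(theta), math.sin(theta)
    rotated = []
    for (x1, y1), (x2, y2) in pairs:
        rotated.append((
            (round(c * x1 - s * y1), round(s * x1 + c * y1)),
            (round(c * x2 - s * y2), round(s * x2 + c * y2)),
        ))
    return rotated
```

For θ = π/2 the point (1, 0) maps to (0, 1), so the whole test pattern turns with the keypoint and the resulting descriptor becomes rotation invariant.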
2.3. The Flowchart of Basic ORB Algorithm
According to the principle of the ORB algorithm, the flow chart shown in Figure 3 is obtained:
3. Feature Extraction and Matching Based on an Improved ORB Algorithm
The traditional ORB algorithm matches poorly on underexposed or overexposed images caused by illumination and is somewhat sensitive to image noise. Before matching, the template images and the images to be matched in the video stream are converted to grayscale and filtered to suppress noise. On this basis, the template image and the input image are processed by adaptive histogram equalization, which increases both the number of feature points in the two images and the number of correctly matched feature points.
3.1. Histogram Equalization
When the gray values of an image are distributed too narrowly, histogram equalization [30,31,32,33] makes the gray-level probability distribution of the image uniform, flattening its histogram as much as possible. The transform is:
$$s = T(r) \tag{9}$$
The relationship between the transform function T and the probability density function p_r(r) of the original image is:
$$s = T(r) = \int_{0}^{r} p_r(w)\, dw \tag{10}$$
The discrete form is as follows:
$$s_k = T(r_k) = \sum_{j=0}^{k} \frac{n_j}{n} = \sum_{j=0}^{k} p_r(r_j), \quad k = 0, 1, \ldots, L-1 \tag{11}$$
Applying this transformation to every gray level yields the equalized image.
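The discrete transform of Equation (11) is simply the cumulative distribution function of the gray levels, scaled back to the gray range. A minimal sketch on a flat list of pixel values:

```python
def equalize(pixels, levels=256):
    """Histogram equalization: map gray level r_k to s_k = (L-1) * CDF(r_k)."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution: fraction of pixels at or below each level.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total / n)
    lut = [round((levels - 1) * c) for c in cdf]  # s_k = (L-1) * sum(n_j / n)
    return [lut[p] for p in pixels]
```

For instance, pixels concentrated in the narrow band 100–103 are spread across the full range, which is exactly the flattening effect described above.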
3.2. Adaptive Histogram Equalization
Due to shooting or environmental problems, an image may suffer from uneven brightness, low contrast, high noise, and so on. The feature matches obtained on the original image concentrate in high-contrast areas, while relatively few features are extracted elsewhere, so the resulting feature points cannot describe the whole image. The histogram distributions of the original images are shown in the upper-left and bottom-left panels of Figure 4. Although ORB feature matching works well under good illumination, its performance degrades greatly under insufficient illumination or overexposure.
Adaptive histogram equalization (AHE) is an image processing technique used to improve image contrast. Unlike the ordinary histogram equalization algorithm, AHE changes the contrast by computing local histograms of the image and then redistributing the brightness. The algorithm can therefore improve the local contrast of the image and recover more image detail.
After adaptive histogram equalization, details in the dark part of the original image become clearer, and the set of candidate points with a high Harris response changes during feature-point detection, so high-quality matching pairs can also be obtained in other areas of the same image. The histogram distributions after adaptive histogram equalization are shown in the upper-right and bottom-right panels of Figure 4.
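A minimal sketch of the tile-wise idea behind AHE: each block of the image is equalized with its own local histogram. This is a deliberate simplification for illustration; practical variants such as CLAHE additionally clip each local histogram and interpolate between neighboring tile mappings to suppress block artifacts and noise amplification.

```python
def equalize_tile(tile, levels=256):
    """Plain histogram equalization of one flat list of gray values."""
    n = len(tile)
    hist = [0] * levels
    for p in tile:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total / n)
    lut = [round((levels - 1) * c) for c in cdf]
    return [lut[p] for p in tile]

def adaptive_equalize(img, tile=8):
    """Equalize each tile x tile block of a 2D image with its local histogram."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            ys = range(ty, min(ty + tile, h))
            xs = range(tx, min(tx + tile, w))
            block = [img[y][x] for y in ys for x in xs]
            eq = equalize_tile(block)
            i = 0
            for y in ys:
                for x in xs:
                    out[y][x] = eq[i]
                    i += 1
    return out
```

Because each tile uses only its own histogram, a dark region and a bright region are both stretched over the full gray range, which is why detail emerges in underexposed areas.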
3.3. KNN Matching Algorithm to Eliminate Mismatching
The core idea of the K-nearest-neighbor (KNN) algorithm [34,35,36,37,38,39,40,41] is to search, for each feature point, the k most similar feature points in the other feature space as candidate matches.
In this paper, the KNN algorithm with k = 2 is adopted, and the ratio between the distances to the nearest and second-nearest neighbors is computed:
$$\text{Ratio} = \frac{d(D_p, D_{q_1})}{d(D_p, D_{q_2})} \tag{12}$$
where D_p is the feature vector of feature point p, D_{q_1} is the feature vector of its nearest neighbor in the other image, D_{q_2} is the feature vector of the second-nearest neighbor, and d(·, ·) is the distance between the vectors.

For each feature point, the optimal and suboptimal feature matches are obtained and their distances are recorded as m and n, respectively. Each candidate matching pair is screened by the ratio m/n (0.7 in this paper); matches whose ratio is too large are treated as mismatches.
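The screening step of Equation (12) can be sketched directly over binary descriptors and the Hamming distance. This is an illustrative brute-force version; a practical matcher would use an indexed search.

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit lists."""
    return sum(x != y for x, y in zip(a, b))

def ratio_test_matches(query_descs, train_descs, ratio=0.7):
    """k=2 nearest-neighbor matching with the ratio test: keep a match only
    when the best distance m is clearly below the second-best distance n.
    Returns (query_index, train_index) pairs that survive."""
    good = []
    for qi, q in enumerate(query_descs):
        dists = sorted((hamming(q, t), ti) for ti, t in enumerate(train_descs))
        (m, best_ti), (n, _) = dists[0], dists[1]
        if m < ratio * n:  # optimal vs. suboptimal distance
            good.append((qi, best_ti))
    return good
```

An unambiguous descriptor (one clear nearest neighbor) survives the test, while a descriptor equidistant from several candidates is rejected as a likely mismatch.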
Based on the above, this paper combines image enhancement with the ORB algorithm, preprocessing the input images with adaptive histogram equalization. The experimental results show that the ORB algorithm combined with image enhancement improves feature extraction and matching. The specific process is shown in Figure 5.
4. Experimental Results and Analysis
Under natural conditions, illumination has a great influence on collected images; different illumination conditions may cause underexposure or overexposure, degrading image quality and hindering subsequent matching. This paper redistributes the brightness of the image with adaptive histogram equalization to reduce the influence of light on the input picture; the methods are compared by the number of feature points, the number of matching points, and the running time.
To verify the feasibility of the improved algorithm, this paper compares the traditional ORB algorithm with the improved ORB algorithm. The experimental environment is PyCharm 2021 with OpenCV 4.5.2. The recognition target is a common book in the laboratory, and two groups of images are verified. The first group is an overexposed experimental scene, 523 × 481 pixels in PNG format (Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10). The second group is an underexposed experimental scene, 526 × 489 pixels in PNG format (Figure 11, Figure 12, Figure 13, Figure 14 and Figure 15). The experimental results of the ORB algorithm and the ORB algorithm based on image enhancement are compared respectively.
A comparison of the data in Table 1 and Table 2 shows that the improved ORB algorithm not only retains the advantages of ORB itself but also improves the matching effect under both overexposure and underexposure. Under overexposure, the number of feature points increased by 78% and the number of matching points by 45%; under underexposure, feature points increased by 23% and matching points by 34%. These experiments and comparisons demonstrate the effectiveness of the algorithm.
5. Conclusions
Due to shooting or environmental problems, an image may suffer from uneven brightness, low contrast, and noise. Features matched on the original image concentrate in higher-contrast areas, while relatively few are extracted elsewhere, so the resulting feature points cannot describe the entire image. Although matching works well under good lighting, the matching effect is greatly reduced when lighting is insufficient or excessive, and false matches may even occur. This paper proposes an improved ORB feature extraction algorithm that combines image enhancement techniques with the ORB algorithm. The adaptive histogram equalization step computes local histograms of the input image and redistributes brightness, improving the local contrast of the input image and recovering more image detail. The results show that the improved ORB algorithm retains the advantages of ORB itself while significantly improving the matching effect under underexposure or overexposure.
Conceptualization, Y.X. and Q.W.; methodology, Y.X. and Q.W.; validation, Y.X.; formal analysis, Y.X. and Q.W.; investigation, Y.C. and X.Z.; resources, Y.X.; writing—original draft preparation, Q.W.; writing—review and editing, Y.X. All authors have read and agreed to the published version of the manuscript.
This research was funded by Beijing Natural Science Foundation (grant nos. 4192023 and 4202024).
The authors declare no conflict of interest.
Figure 4. In the upper left is a histogram of overexposed images; in the upper right is a histogram of an overexposed image after adaptive histogram equalization; in the bottom left is a histogram of an underexposed image; in the bottom right is a histogram of an underexposed image after adaptive histogram equalization.
Figure 5. Traditional ORB feature matching (left) and improved ORB feature matching (right).
Figure 7. Feature points extracted by the traditional ORB algorithm under overexposure conditions.
Figure 8. Feature matching image extracted by the traditional ORB algorithm under overexposure conditions.
Figure 9. Feature points extracted by the improved ORB algorithm under overexposure conditions.
Figure 10. The improved ORB algorithm is used to extract feature matching images under overexposure conditions.
Figure 12. Feature points extracted by the traditional ORB algorithm under underexposed conditions.
Figure 13. Feature matching image extracted by the traditional ORB algorithm under underexposed conditions.
Figure 14. Feature points extracted by the improved ORB algorithm under underexposure conditions.
Figure 15. The improved ORB algorithm is used to extract feature matching images under underexposure conditions.
Table 1. Matching data of overexposed images.

Method | Feature Points of the Left Graph | Feature Points of the Right Graph | Matching Points | Running Time (ms)
---|---|---|---|---
Traditional ORB algorithm | 3067 | 3112 | 734 | 304
SIFT algorithm | 722 | 650 | 297 | 109
Improved ORB algorithm | 5468 | 5519 | 1066 | 372
Table 2. Matching data of underexposed images.

Method | Feature Points of the Left Graph | Feature Points of the Right Graph | Matching Points | Running Time (ms)
---|---|---|---|---
Traditional ORB algorithm | 2475 | 2315 | 210 | 273
SIFT algorithm | 376 | 346 | 165 | 87
Improved ORB algorithm | 3130 | 2866 | 282 | 295
References
1. Yang, Z.H. Application of machine vision technology in the field of industrial control. China Comput. Commun.; 2018; 17, pp. 87-88.
2. Li, Z. Application in machine vision technology and its automation in mechanical manufacturing. Sci. Technol. Innov. Inf.; 2018; 25, pp. 171-172.
3. Wang, F. Development of machine vision technology and its industrial applications. Electron. Technol. Softw. Eng.; 2018; 16, 246.
4. Lowe, D.G. Distinctive image features from scale-invariant key points. Int. J. Comput. Vis.; 2004; 60, pp. 91-110. [DOI: https://dx.doi.org/10.1023/B:VISI.0000029664.99615.94]
5. Bay, H.; Tuytelaars, T.; Gool, L.V. Surf: Speeded up robust features. Proceedings of the 9th European Conference on Computer Vision; Graz, Austria, 7–13 May 2006; Springer: Berlin/Heidelberg, Germany, 2006; Volume Part I.
6. Rosten, E.; Tom, D. Machine learning for high-speed corner detection. Proceedings of the European Conference on Computer Vision; Graz, Austria, 7–13 May 2006; Springer: Berlin/Heidelberg, Germany, 2006.
7. Calonder, M. Brief: Binary robust independent elementary features. Proceedings of the European Conference on Computer Vision; Crete, Greece, 5–11 September 2010; Springer: Berlin/Heidelberg, Germany, 2010.
8. Yang, B.K.; Cheng, S.Y.; Zheng, Y. Improved ORB feature matching algorithm. Transducer Microsyst. Technol.; 2020; 39, pp. 141-144. [DOI: https://dx.doi.org/10.13873/J.1000-9787(2020)02-0136-04]
9. Yang, H.F.; Li, H. Image feature points extraction and matching method based on improved ORB algorithm. J. Graph.; 2020; 41, pp. 548-555. [DOI: https://dx.doi.org/10.11996/JG.j.2095-302X.2020040548]
10. Yao, J.; Zhang, P.; Wang, Y.; Luo, Z.; Ren, X. An adaptive uniform distribution ORB based on improved quadtree. IEEE Access; 2019; 7, pp. 143471-143478. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2940995]
11. Shao, C.; Zhang, C.; Fang, Z.; Yang, G. A deep learning-based semantic filter for ransac-based fundamental matrix calculation and the ORB-slam system. IEEE Access; 2020; 8, pp. 3212-3223. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2962268]
12. Wang, X.; Zou, J.B.; Shi, D.S. An Improved ORB Image Feature Matching Algorithm Based on SURF. Proceedings of the 2018 3rd International Conference on Robotics and Automation Engineering (ICRAE); Guangzhou, China, 17–19 November 2018; pp. 218-222. [DOI: https://dx.doi.org/10.1109/ICRAE.2018.8586755]
13. Wang, Z.; Li, Z.; Cheng, L.; Yan, G. An improved ORB feature extraction and matching algorithm based on affine transformation. Proceedings of the 2020 Chinese Automation Congress (CAC); Shanghai, China, 6–8 November 2020; pp. 1511-1515. [DOI: https://dx.doi.org/10.1109/CAC51589.2020.9327165]
14. Zhao, Y.; Xiong, Z.; Duan, S.; Zhou, S.; Cui, Y. Improved ORB based image registration acceleration algorithm in visual-inertial navigation system. Proceedings of the 2020 Chinese Automation Congress (CAC); Shanghai, China, 6–8 November 2020; pp. 5714-5718. [DOI: https://dx.doi.org/10.1109/CAC51589.2020.9326928]
15. Sun, H.; Wang, P.; Zhang, D.; Ni, C.; Zhang, H. An improved ORB algorithm based on optimized feature point extraction. Proceedings of the 2020 IEEE 3rd International Conference on Automation, Electronics and Electrical Engineering (AUTEEE); Shenyang, China, 20–22 November 2020; pp. 389-394.
16. Zhang, L. Image matching algorithm based on ORB and k-means clustering. Proceedings of the 2020 5th International Conference on Information Science, Computer Technology and Transportation (ISCTT); Shenyang, China, 13–15 November 2020; pp. 460-464. [DOI: https://dx.doi.org/10.1109/ISCTT51595.2020.00088]
17. Feng, Y.; Li, S. Research on an image mosaic algorithm based on improved ORB feature combined with surf. Proceedings of the 2018 Chinese Control and Decision Conference (CCDC); Shenyang, China, 9–11 June 2018; pp. 4809-4814. [DOI: https://dx.doi.org/10.1109/CCDC.2018.8407963]
18. Yao, H.F.; Guo, B.L. An ORB-based feature matching algorithm. Electron. Des. Eng.; 2019; 27, pp. 175-179. [DOI: https://dx.doi.org/10.3969/j.issn.1674-6236.2019.16.038]
19. Dai, X.M.; Lang, L.; Chen, M.Y. Research of image feature point matching based on improved ORB algorithm. J. Electron. Meas. Instrum.; 2016; 30, pp. 233-240. [DOI: https://dx.doi.org/10.13382/j.jemi.2016.02.009]
20. Li, X.H.; Xie, C.M.; Jia, Y.H. Rapid moving object detection algorithm based on ORB features. J. Electron. Meas. Instrum.; 2013; 27, pp. 455-460. [DOI: https://dx.doi.org/10.3724/SP.J.1187.2013.00455]
21. Bai, X.B. Improved feature points matching algorithm based on speed-up robust feature and oriented fast and rotated brief. J. Comput. Appl.; 2016; 36, pp. 1923-1926. [DOI: https://dx.doi.org/10.11772/j.issn.1001-9081.2016.07.1923]
22. Yan, P.; An, R. Improved fast corner detection algorithm based on fast. Infrared Laser Eng.; 2009; 38, pp. 1104-1108. [DOI: https://dx.doi.org/10.3969/j.issn.1007-2276.2009.06.033]
23. Zhou, L.L.; Jiang, F. Image matching algorithm based on fast and brief. Comput. Eng. Des.; 2015; 5, pp. 1269-1273. [DOI: https://dx.doi.org/10.16208/j.issn1000-7024.2015.05.030]
24. Ding, Y.L.; Wang, J.D.; Qiu, Y.J. Fast feature detection algorithm based on self-adaptive threshold selection. Command. Control Simul.; 2013; 35, pp. 53-59. [DOI: https://dx.doi.org/10.3969/j.issn.1673-3819.2013.02.012]
25. Rosten, E.; Tom, D. Fusing points and lines for high performance tracking. Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1; Beijing, China, 17–21 October 2005; Volume 2.
26. Chen, S.C.; Liu, J.H.; He, L.Y. Improved brisk algorithm for image splicing. Chin. J. Liq. Cryst. Disp.; 2016; 31, pp. 324-330. [DOI: https://dx.doi.org/10.3788/YJYXS20163103.0324]
27. Pu, X.C.; Tan, S.F.; Zhang, Y. Research on the navigation of mobile robots based on the improved fast algorithm. CAAI Trans. Intell. Syst.; 2014; 9, pp. 419-424.
28. Fan, X.N.; Gu, Y.F.; Ni, J.J. Application of improved ORB algorithm in image matching. Comput. Mod.; 2019; 282, pp. 1-6. [DOI: https://dx.doi.org/10.3969/j.issn.1006-2475.2019.02.001]
29. Wang, S.; Wang, H.Y.L.; Wang, X.F. An improved mcmc particle filter based on greedy algorithm for video object tracking. Proceedings of the 2011 IEEE 13th International Conference on Communication Technology; Jinan, China, 25–28 September 2011.
30. Yelmanov, S.; Olena, H.; Yuriy, R. A new approach to the implementation of histogram equalization in image processing. Proceedings of the 2019 3rd International Conference on Advanced Information and Communications Technologies (AICT); Lviv, Ukraine, 2–6 July 2019.
31. Gangolli, S.H.; Arnold, J.L.F.; Reena, S. Image enhancement using various histogram equalization techniques. Proceedings of the 2019 Global Conference for Advancement in Technology (GCAT); Bangaluru, India, 18–20 October 2019; [DOI: https://dx.doi.org/10.1109/GCAT47503.2019.8978413]
32. Tan, S.F.; Nor, A.M.I. Exposure based multi-histogram equalization contrast enhancement for non-uniform illumination images. IEEE Access; 2019; 7, pp. 70842-70861. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2918557]
33. Dubey, V.; Rahul, K. Adaptive histogram equalization based approach for sar image enhancement: A comparative analysis. Proceedings of the 2021 5th International Conference on Intelligent Computing and Control Systems (ICICCS); Madurai, India, 6–8 May 2021.
34. Wang, J.; Yu, M.; Ren, H.Z. An improved ORB algorithm for image stitching. Chin. J. Liq. Cryst. Disp.; 2018; 33, pp. 520-527. [DOI: https://dx.doi.org/10.3788/YJYXS20183306.0520]
35. Chen, L.; Li, M.; Su, W.; Wu, M.; Hirota, K.; Pedrycz, W. Adaptive feature selection-based AdaBoost-KNN with direct optimization for dynamic emotion recognition in human–robot interaction. IEEE Trans. Emerg. Top. Comput. Intell.; 2021; 5, pp. 205-213. [DOI: https://dx.doi.org/10.1109/TETCI.2019.2909930]
36. Tu, B.; Wang, J.; Kang, X.; Zhang, G.; Ou, X.; Guo, L. KNN-Based representation of super pixels for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2018; 11, pp. 4032-4047. [DOI: https://dx.doi.org/10.1109/JSTARS.2018.2872969]
37. Ab Wahab, M.N.; Nazir, A.; Ren, A.T.; Noor, M.H.; Akbar, M.F.; Mohamed, A.S. Efficientnet-lite and hybrid CNN-KNN implementation for facial expression recognition on raspberry pi. IEEE Access; 2021; 9, pp. 134065-134080. [DOI: https://dx.doi.org/10.1109/ACCESS.2021.3113337]
38. Zhang, S.; Li, X.; Zong, M.; Zhu, X.; Wang, R. Efficient KNN classification with different numbers of nearest neighbors. IEEE Trans. Neural Netw. Learn. Syst.; 2018; 29, pp. 1774-1785. [DOI: https://dx.doi.org/10.1109/TNNLS.2017.2673241] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28422666]
39. Su, J.; Wang, M.; Wu, Z.; Chen, Q. Fast plant leaf recognition using improved multiscale triangle representation and KNN for optimization. IEEE Access; 2020; 8, pp. 208753-208766. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3037649]
40. Liu, L.; Su, J.; Liu, X.; Chen, R.; Huang, K.; Deng, R.H.; Wang, X. Toward highly secure yet efficient KNN classification scheme on outsourced cloud data. IEEE Internet Things J.; 2019; 6, pp. 9841-9852. [DOI: https://dx.doi.org/10.1109/JIOT.2019.2932444]
41. Li, C.; Liu, M.; Cai, J.; Yu, Y.; Wang, H. Topic detection and tracking based on windowed dbscan and parallel KNN. IEEE Access; 2021; 9, pp. 3858-3870. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3047458]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
A novel fast target recognition algorithm is proposed for moving-target recognition in dynamic scenes. To address the poor matching of the traditional Oriented FAST and Rotated BRIEF (ORB) algorithm on underexposed or overexposed images caused by illumination, adaptive histogram equalization is combined with the ORB algorithm to obtain better feature-point quality and matching efficiency. First, the template image and each frame of the video stream are converted to grayscale. Second, the template image and the input image from the video stream are processed by adaptive histogram equalization. Third, the ORB feature-point descriptors are compared using the Hamming distance. Finally, the K-nearest-neighbor (KNN) matching algorithm is used to match and screen feature points. A reasonable threshold on the number of good matches is established, and the target is classified accordingly. Comparison and verification are carried out by experiment. The results show that the algorithm not only retains the advantages of ORB itself but also significantly improves its performance under underexposure or overexposure. The matching is robust to illumination, the target to be detected can be accurately identified in real time, and targets can be accurately classified in small-sample scenes, meeting actual production requirements.
Details
1 Institute of Information and Communication, Beijing Information Science & Technology University, Beijing 100101, China;
2 Institute of Information and Communication, Beijing Information Science & Technology University, Beijing 100101, China;
3 Beijing Tellhow Intelligent Engineering Co., Ltd., Beijing 100176, China;