COMPARATIVE STUDY OF FEATURE DETECTORS AND FILTERING METHODS IN IMAGE MATCHING
Abstract
Background. In modern computer vision, the accuracy and reliability of image matching depend primarily on the quality of local feature processing. False correspondences, which arise from changes in scale, illumination, or repetitive structures, can distort a scene's geometric model. Filtering algorithms that distinguish informative matches from noise are therefore an essential step. Despite significant progress, most research focuses only on specific combinations of detectors and filtering methods, which prevents a comprehensive understanding of their interaction.
Materials and Methods. To investigate this issue, a series of experiments was conducted on a representative subset of the Photo Tourism dataset. Keypoint detection and description were performed, followed by matching, outlier filtering, and quantitative evaluation. The comparison covered the SIFT, SURF, KAZE, AKAZE, ORB, and BRISK detectors combined with the RANSAC, LMedS, RHO, GMS, VFC, and LPM filtering methods. Evaluation relied on the Fisher Criterion, IQR Separability, Whisker Gap, and a custom-developed metric called SMS.
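As a minimal sketch of this pipeline, assuming an OpenCV/Python implementation with SIFT as the detector and RANSAC-based homography filtering (the file names, thresholds, and parameters below are illustrative assumptions, not the exact configuration used in the study):

```python
import cv2
import numpy as np

# Hypothetical image pair from a Photo Tourism-style scene.
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Keypoint detection and description (SIFT shown; SURF, KAZE, AKAZE, ORB, BRISK are analogous,
#    binary descriptors would use cv2.NORM_HAMMING instead of cv2.NORM_L2).
detector = cv2.SIFT_create()
kp1, des1 = detector.detectAndCompute(img1, None)
kp2, des2 = detector.detectAndCompute(img2, None)

# 2. Nearest-neighbour matching with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = []
for pair in knn:
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])

# 3. Outlier filtering via a robust homography estimate. cv2.RANSAC can be swapped for
#    cv2.LMEDS or cv2.RHO; GMS is available as cv2.xfeatures2d.matchGMS in opencv-contrib,
#    while VFC and LPM require separate implementations.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

# 4. Quantitative evaluation starts from the inlier/outlier split returned by the filter.
if mask is not None:
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    print(f"{len(good)} tentative matches, {len(inliers)} retained after filtering")
```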
Results and Discussion. The investigation revealed that performance varies significantly among the detectors: binary descriptors offer much higher processing speed, whereas floating-point descriptors are more informative but require more computational resources. The hierarchy of filtering methods was consistent across all setups: VFC achieved the highest quality on the separability metrics, while LPM produced the largest gap between the distribution boundaries. RANSAC and LMedS remain the classic benchmarks, while GMS and RHO serve as fast compromise alternatives.
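The separability metrics can be illustrated with a short sketch, assuming that retained and rejected matches are characterized by score distributions (e.g., reprojection errors). The formulas below are common textbook and Tukey-boxplot variants used for illustration only; the paper's exact definitions of IQR Separability, Whisker Gap, and SMS may differ.

```python
import numpy as np

def fisher_criterion(inlier_scores, outlier_scores):
    """Classic two-class Fisher ratio: squared mean gap over the sum of variances."""
    m1, m2 = np.mean(inlier_scores), np.mean(outlier_scores)
    v1, v2 = np.var(inlier_scores), np.var(outlier_scores)
    return (m1 - m2) ** 2 / (v1 + v2)

def iqr_separability(inlier_scores, outlier_scores):
    """Gap between the interquartile ranges of the two distributions (assumed definition);
    positive when the middle 50% of the two groups do not overlap."""
    inlier_q3 = np.percentile(inlier_scores, 75)
    outlier_q1 = np.percentile(outlier_scores, 25)
    return outlier_q1 - inlier_q3

def whisker_gap(inlier_scores, outlier_scores, k=1.5):
    """Distance between Tukey whiskers of the two distributions (assumed definition)."""
    q1, q3 = np.percentile(inlier_scores, [25, 75])
    upper_whisker = q3 + k * (q3 - q1)
    q1o, q3o = np.percentile(outlier_scores, [25, 75])
    lower_whisker = q1o - k * (q3o - q1o)
    return lower_whisker - upper_whisker
```

These scores all grow as the two distributions pull apart, which is how a filter that "achieves the highest separability" can be ranked against one that "produces the largest gap between distribution boundaries".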
Conclusion. The results show that image matching effectiveness depends on the combination of detector, number of keypoints, and filtering method. A comprehensive approach enables the selection of an appropriate strategy for a specific task, from applications that require fast processing to scenarios that demand maximum separability or strict control of boundary errors. The analysis and metrics used provide a basis for future research and for improvements in practical computer vision systems.
Keywords: feature detection, keypoint descriptors, image matching, match filtering.
DOI: http://dx.doi.org/10.30970/eli.31.7
