Characterising local feature descriptors for face sketch to photo matching

Bibliographic Details
Published in:International Journal of Computational Vision and Robotics
Main Authors: Setumin, S.; Suandi, S.A.
Format: Article
Language:English
Published: Inderscience Publishers 2020
Online Access:https://www.scopus.com/inward/record.uri?eid=2-s2.0-85094909544&doi=10.1504%2fijcvr.2020.10031566&partnerID=40&md5=6f3ffd4f58200a293ee536476aec885d
Description
Summary:Sketches and photos come from different modalities. Inter-modality matching requires the right feature representation for both images so that the modality gap can be neglected; improper feature selection may result in a low recognition rate. Many local descriptors have been proposed in the literature, but it is unclear which are more appropriate for inter-modality matching. In this paper, we attempt to characterise local feature descriptors for face sketch to photo matching. The characterisation is evaluated with the cumulative match curve (CMC), comparing seven descriptors: LBP, MLBP, HOG, PHOG, SIFT, SURF and DAISY. The evaluation focuses only on viewed sketches. Based on the experiments, we observed that gradient-based descriptors gave higher accuracy than the others, and out of the five popular distance metrics evaluated, L1 gives better results than the other similarity distance measures. Copyright © 2020 Inderscience Enterprises Ltd.
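
Illustration (a minimal sketch only, not the authors' implementation): the evaluation protocol summarised above can be approximated in a few lines of Python/NumPy by ranking each probe descriptor against a gallery with the L1 distance and accumulating rank-k hits into a CMC. The random vectors below are placeholders for real descriptor features such as LBP, HOG or SIFT.

import numpy as np

def cmc_curve(probe_feats, gallery_feats, max_rank=10):
    # Rank every probe descriptor against the gallery using the L1
    # (Manhattan) distance; probe i and gallery i share the same identity.
    n_probes = probe_feats.shape[0]
    hits = np.zeros(max_rank)
    for i in range(n_probes):
        dists = np.abs(gallery_feats - probe_feats[i]).sum(axis=1)
        order = np.argsort(dists)               # gallery indices, best match first
        rank = int(np.where(order == i)[0][0])  # position of the true match
        if rank < max_rank:
            hits[rank:] += 1                    # a rank-k hit counts for all ranks >= k
    return hits / n_probes                      # cumulative match scores, ranks 1..max_rank

# Toy usage: 100 random 128-D vectors stand in for real descriptors;
# the "probes" are noisy copies of the gallery entries.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 128))
probes = gallery + 0.1 * rng.normal(size=(100, 128))
print(cmc_curve(probes, gallery, max_rank=5))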
ISSN:1752-9131
DOI:10.1504/ijcvr.2020.10031566