Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11779/1346
Title: An efficient framework for visible-infrared cross modality person re-identification
Authors: Başaran, Emrah
Gökmen, Muhittin
Kamasak, Mustafa E.
Keywords: Person re-identification
Cross modality person re-identification
Local Zernike moments
Publisher: Elsevier
Source: Basaran, E., Gökmen, M., & Kamasak, M. E. (2020). An efficient framework for visible-infrared cross modality person re-identification. Signal Processing: Image Communication, 87, 1-11.
Abstract: Visible-infrared cross-modality person re-identification (VI-ReId) is an essential task for video surveillance in poorly illuminated or dark environments. Despite many recent studies on person re-identification in the visible domain (ReId), few studies deal specifically with VI-ReId. Besides the challenges common to both ReId and VI-ReId, such as pose/illumination variations, background clutter, and occlusion, VI-ReId poses additional challenges because color information is not available in infrared images. As a result, the performance of VI-ReId systems is typically lower than that of ReId systems. In this work, we propose a four-stream framework to improve VI-ReId performance. We train a separate deep convolutional neural network in each stream using different representations of the input images, expecting different and complementary features to be learned from each stream. In our framework, grayscale and infrared input images are used to train the ResNet in the first stream. In the second stream, RGB and three-channel infrared images (created by repeating the infrared channel) are used. In the remaining two streams, we use local pattern maps as input images. These maps are generated using the local Zernike moments transformation: from grayscale and infrared images in the third stream, and from RGB and three-channel infrared images in the last stream. We further improve the performance of the proposed framework by employing a re-ranking algorithm for post-processing. Our results indicate that the proposed framework outperforms the current state of the art by a large margin, improving Rank-1/mAP by 29.79%/30.91% on the SYSU-MM01 dataset and by 9.73%/16.36% on the RegDB dataset.
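The stream inputs described in the abstract can be sketched in a few lines: the three-channel infrared image is simply the IR channel repeated three times, and a local Zernike moments (LZM) pattern map can be approximated by correlating the image with a Zernike polynomial filter over sliding windows. The paper's exact LZM formulation (moment orders, normalization, number of maps) is not given in this record, so the function names, the chosen order (n=2, m=2), and the kernel size below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from math import factorial

def to_three_channel_ir(ir):
    """Repeat a single-channel (H, W) infrared image into (H, W, 3)."""
    return np.repeat(ir[:, :, None], 3, axis=2)

def rgb_to_grayscale(rgb):
    """Standard luminance conversion for an (H, W, 3) RGB image."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def zernike_filter(n, m, k):
    """k x k complex filter sampling the Zernike polynomial V_nm on the unit disk."""
    assert abs(m) <= n and (n - abs(m)) % 2 == 0
    ys, xs = np.mgrid[-1:1:k * 1j, -1:1:k * 1j]
    rho, theta = np.hypot(xs, ys), np.arctan2(ys, xs)
    # Radial polynomial R_nm(rho)
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + abs(m)) // 2 - s)
                * factorial((n - abs(m)) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    V = R * np.exp(1j * m * theta)
    V[rho > 1] = 0  # Zernike polynomials are defined on the unit disk
    return V

def local_zernike_map(gray, n=2, m=2, k=5):
    """Magnitude of the local Zernike moment over every k x k window."""
    V = zernike_filter(n, m, k)
    win = np.lib.stride_tricks.sliding_window_view(gray, (k, k))
    return np.abs(np.tensordot(win, np.conj(V), axes=([2, 3], [0, 1])))
```

Note that for m != 0 the filter has zero response on constant regions, so the resulting map highlights local texture and edge structure, which is what makes it usable as a color-free input representation for both modalities.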
URI: https://hdl.handle.net/20.500.11779/1346
DOI: https://doi.org/10.1016/j.image.2020.115933
ISSN: 0923-5965; 1879-2677
Appears in Collections: Computer Engineering Department Collection
Scopus Indexed Publications Collection
WoS Indexed Publications Collection

Files in This Item:
An efficient framework for visible.pdf (Full Text, 1.27 MB, Adobe PDF)




Scopus Citations: 32 (checked on Aug 1, 2024)
Web of Science Citations: 23 (checked on Jun 23, 2024)



Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.