An Efficient Framework for Visible-Infrared Cross Modality Person Re-Identification


Date

2020

Journal Title

Journal ISSN

Volume Title

Publisher

Elsevier

Open Access Color

HYBRID

Green Open Access

Yes

OpenAIRE Downloads

OpenAIRE Views

Publicly Funded

No
Impulse
Top 10%
Influence
Top 10%
Popularity
Top 10%

Research Projects

Journal Issue

Abstract

Visible-infrared cross-modality person re-identification (VI-ReId) is an essential task for video surveillance in poorly illuminated or dark environments. Despite many recent studies on person re-identification in the visible domain (ReId), there are only a few studies dealing specifically with VI-ReId. Besides challenges that are common to both ReId and VI-ReId, such as pose/illumination variations, background clutter, and occlusion, VI-ReId poses additional challenges because color information is not available in infrared images. As a result, the performance of VI-ReId systems is typically lower than that of ReId systems. In this work, we propose a four-stream framework to improve VI-ReId performance. We train a separate deep convolutional neural network in each stream using a different representation of the input images, with the expectation that each stream learns different and complementary features. In our framework, grayscale and infrared input images are used to train the ResNet in the first stream. In the second stream, RGB and three-channel infrared images (created by repeating the infrared channel) are used. In the remaining two streams, we use local pattern maps as input images. These maps are generated using the local Zernike moments transformation. Local pattern maps are obtained from grayscale and infrared images in the third stream and from RGB and three-channel infrared images in the last stream. We improve the performance of the proposed framework by employing a re-ranking algorithm for post-processing. Our results indicate that the proposed framework outperforms the current state of the art by a large margin, improving Rank-1/mAP by 29.79%/30.91% on the SYSU-MM01 dataset and by 9.73%/16.36% on the RegDB dataset.
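As a rough illustration of the input preparation described in the abstract, the sketch below is a minimal, hypothetical PyTorch example, not the authors' code. It shows how the four input representations might be built on the visible side: grayscale images for the first stream, RGB and three-channel infrared images (the infrared channel repeated) for the second, and local pattern maps for the remaining two streams. The local Zernike moments transform is stubbed out because its exact formulation is not given in the abstract; all function and variable names are illustrative assumptions.

# Hypothetical sketch of the four-stream input preparation and feature
# extraction described in the abstract; not the authors' implementation.
import torch
import torch.nn as nn
from torchvision.models import resnet50


def rgb_to_gray3(rgb):
    """Convert an RGB batch (B, 3, H, W) to three-channel grayscale."""
    gray = (0.299 * rgb[:, 0] + 0.587 * rgb[:, 1] + 0.114 * rgb[:, 2]).unsqueeze(1)
    return gray.repeat(1, 3, 1, 1)


def ir_to_3ch(ir):
    """Repeat a single infrared channel (B, 1, H, W) three times."""
    return ir.repeat(1, 3, 1, 1)


def lzm_pattern_map(x):
    """Placeholder for the local Zernike moments (LZM) transformation.
    The real transform produces local pattern maps; an identity stand-in
    keeps this sketch runnable."""
    return x


def make_backbone():
    """One ResNet-50 feature extractor per stream (classifier removed)."""
    net = resnet50(weights=None)
    net.fc = nn.Identity()
    return net


streams = nn.ModuleList([make_backbone() for _ in range(4)])

# Dummy visible (RGB) and infrared (single-channel) probe images.
rgb = torch.rand(2, 3, 256, 128)
ir = torch.rand(2, 1, 256, 128)

inputs = [
    rgb_to_gray3(rgb),                   # stream 1: grayscale (the IR side would use ir_to_3ch(ir))
    rgb,                                 # stream 2: RGB (IR side: three-channel infrared)
    lzm_pattern_map(rgb_to_gray3(rgb)),  # stream 3: LZM maps of grayscale images
    lzm_pattern_map(rgb),                # stream 4: LZM maps of RGB images
]

with torch.no_grad():
    feats = [net(x) for net, x in zip(streams, inputs)]
features = torch.cat(feats, dim=1)       # combined descriptor for matching / re-ranking
print(features.shape)                    # torch.Size([2, 8192])

In this sketch each stream has its own backbone and the per-stream descriptors are concatenated into a single feature vector, which could then be used for distance-based matching followed by the re-ranking step mentioned in the abstract.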

Description

Keywords

Cross modality person re-identification, Person re-identification, Local Zernike moments, FOS: Computer and information sciences, Computer Vision and Pattern Recognition (cs.CV), Computer Science - Computer Vision and Pattern Recognition

Turkish CoHE Thesis Center URL

Fields of Science

0202 electrical engineering, electronic engineering, information engineering, 02 engineering and technology

Citation

Basaran, E., Gökmen, M., & Kamasak, M. E. (2020). An efficient framework for visible-infrared cross modality person re-identification. Signal Processing: Image Communication, 87, 1-11.

WoS Q

Q2

Scopus Q

Q1
OpenCitations Citation Count
30

Source

Signal Processing: Image Communication

Volume

87

Issue

Start Page

1

End Page

11
PlumX Metrics
Citations

CrossRef : 32

Scopus : 41

Captures

Mendeley Readers : 32

SCOPUS™ Citations

41

checked on Feb 03, 2026

Web of Science™ Citations

29

checked on Feb 03, 2026

Page Views

256

checked on Feb 03, 2026

Downloads

7766

checked on Feb 03, 2026

OpenAlex FWCI
3.35895417

Sustainable Development Goals

SDG data is not available