Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.11779/1346
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Gökmen, Muhittin | - |
dc.contributor.author | Başaran, Emrah | - |
dc.contributor.author | Kamasak, Mustafa E. | - |
dc.date.accessioned | 2020-08-07T04:42:16Z | |
dc.date.available | 2020-08-07T04:42:16Z | |
dc.date.issued | 2020 | - |
dc.identifier.citation | Basaran, E., Gökmen, M., & Kamasak, M. E. (September 01, 2020). An efficient framework for visible-infrared cross modality person re-identification. Signal Processing: Image Communication, 87, 1-11. | en_US |
dc.identifier.issn | 0923-5965 | - |
dc.identifier.issn | 1879-2677 | - |
dc.identifier.uri | https://hdl.handle.net/20.500.11779/1346 | - |
dc.identifier.uri | https://doi.org/10.1016/j.image.2020.115933 | - |
dc.description.abstract | Visible-infrared cross-modality person re-identification (VI-ReId) is an essential task for video surveillance in poorly illuminated or dark environments. Despite many recent studies on person re-identification in the visible domain (ReId), there are few studies dealing specifically with VI-ReId. Besides the challenges common to both ReId and VI-ReId, such as pose/illumination variations, background clutter, and occlusion, VI-ReId faces an additional difficulty: color information is not available in infrared images. As a result, the performance of VI-ReId systems is typically lower than that of ReId systems. In this work, we propose a four-stream framework to improve VI-ReId performance. We train a separate deep convolutional neural network in each stream using a different representation of the input images, with the expectation that each stream learns different and complementary features. In our framework, grayscale and infrared input images are used to train the ResNet in the first stream. In the second stream, RGB and three-channel infrared images (created by repeating the infrared channel) are used. In the remaining two streams, we use local pattern maps, generated with the local Zernike moments transformation, as input images: these maps are obtained from grayscale and infrared images in the third stream, and from RGB and three-channel infrared images in the fourth stream. We further improve the performance of the proposed framework by employing a re-ranking algorithm for post-processing. Our results indicate that the proposed framework outperforms the current state of the art by a large margin, improving Rank-1/mAP by 29.79%/30.91% on the SYSU-MM01 dataset and by 9.73%/16.36% on the RegDB dataset. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Elsevier | en_US |
dc.relation.ispartof | Signal Processing: Image Communication | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | Cross modality person re-identification | en_US |
dc.subject | Local Zernike moments | en_US |
dc.subject | Person re-identification | en_US |
dc.title | An Efficient Framework for Visible-Infrared Cross Modality Person Re-Identification | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1016/j.image.2020.115933 | - |
dc.identifier.scopus | 2-s2.0-85087420174 | - |
dc.authorid | Muhittin Gökmen / 0000-0001-7290-199X | - |
dc.description.woscitationindex | Science Citation Index Expanded | en_US |
dc.identifier.wosquality | Q2 | - |
dc.description.WoSDocumentType | Article | - |
dc.description.WoSInternationalCollaboration | Not produced through international collaboration - NO | en_US |
dc.description.WoSPublishedMonth | September | en_US |
dc.description.WoSIndexDate | 2020 | en_US |
dc.description.WoSYOKperiod | YÖK - 2020-21 | en_US |
dc.identifier.scopusquality | Q1 | - |
dc.relation.publicationcategory | Article - International Refereed Journal - Institution Faculty Member | en_US |
dc.identifier.endpage | 11 | en_US |
dc.identifier.startpage | 1 | en_US |
dc.identifier.volume | 87 | en_US |
dc.department | Faculty of Engineering, Department of Computer Engineering | en_US |
dc.identifier.wos | WOS:000551127300017 | - |
dc.institutionauthor | Gökmen, Muhittin | - |
item.languageiso639-1 | en | - |
item.fulltext | With Fulltext | - |
item.grantfulltext | open | - |
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | - |
item.cerifentitytype | Publications | - |
item.openairetype | Article | - |
crisitem.author.dept | 02.02. Department of Computer Engineering | - |
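To make the pipeline in the abstract concrete, the following is a minimal Python sketch (not the authors' code) of how the four input streams could be assembled: grayscale plus infrared, RGB plus three-channel infrared obtained by repeating the infrared channel, and local pattern maps of both. All function names here are illustrative assumptions; in particular, lzm_pattern_maps is a hypothetical placeholder for the local Zernike moments transformation described in the paper, which is not reproduced here.

```python
# Minimal sketch of the four-stream input preparation described in the abstract.
# Not the authors' implementation; helper names are hypothetical.
import numpy as np

def to_grayscale(rgb):
    """Convert an HxWx3 RGB image to a single-channel grayscale image."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def repeat_infrared(ir):
    """Stack a single-channel infrared image into three identical channels,
    matching the input shape expected by an RGB-pretrained ResNet."""
    return np.repeat(ir[..., np.newaxis], 3, axis=-1)

def lzm_pattern_maps(img):
    """Hypothetical placeholder for the local Zernike moments transformation
    that produces the local pattern maps used in streams 3 and 4."""
    raise NotImplementedError("See the paper for the LZM transformation.")

def build_streams(rgb_visible, ir_single_channel):
    """Return one (visible, infrared) input pair per network stream."""
    gray = to_grayscale(rgb_visible)
    ir3 = repeat_infrared(ir_single_channel)
    return {
        "stream1": (gray, ir_single_channel),   # grayscale + infrared
        "stream2": (rgb_visible, ir3),          # RGB + 3-channel infrared
        "stream3": (lzm_pattern_maps(gray), lzm_pattern_maps(ir_single_channel)),
        "stream4": (lzm_pattern_maps(rgb_visible), lzm_pattern_maps(ir3)),
    }
```

In this reading of the abstract, a separate ResNet is trained on each of the four pairs so that the streams learn complementary features, and their outputs are combined before the re-ranking post-processing step.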
Appears in Collections: | Computer Engineering Department Collection (Bilgisayar Mühendisliği Bölümü Koleksiyonu)
Scopus Indexed Publications Collection
WoS Indexed Publications Collection
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
An efficient framework for visible.pdf | Full Text | 1.27 MB | Adobe PDF | View/Open |
Scopus Citations: 36 (checked on Jan 18, 2025)
Web of Science Citations: 25 (checked on Jan 18, 2025)
Page view(s): 94 (checked on Jan 13, 2025)
Download(s): 10 (checked on Jan 13, 2025)
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.