Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11779/2133
Full metadata record
DC Field | Value | Language
dc.contributor.author | Fux, Michal | -
dc.contributor.author | Arslan, Şuayb Şefik | -
dc.contributor.author | Jang, Hojin | -
dc.contributor.author | Boix, Xavier | -
dc.contributor.author | Cooper, Avi | -
dc.contributor.author | Groth, Matt J | -
dc.contributor.author | Sinha, Pawan | -
dc.date.accessioned | 2023-11-21T12:39:54Z | -
dc.date.available | 2023-11-21T12:39:54Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Fux, M., Arslan, S. S., Jang, H., Boix, X., Cooper, A., Groth, M. J., & Sinha, P. (2023). Comparing humans and deep neural networks on face recognition under various distance and rotation viewing conditions. Journal of Vision, 23(9), 5916. | en_US
dc.identifier.uri | https://doi.org/10.1167/jov.23.9.5916 | -
dc.identifier.uri | https://hdl.handle.net/20.500.11779/2133 | -
dc.description.abstract | Humans possess impressive skills for recognizing faces even when viewing conditions are challenging, such as long range, non-frontal regard, variable lighting, and atmospheric turbulence. We sought to characterize the effects of such viewing conditions on the face recognition performance of humans, and compared the results to those of deep neural networks (DNNs). In an online verification-task study, we used a 100-identity face database, with images captured at five distances (2 m, 5 m, 300 m, 650 m, and 1000 m), three pitch values (0 degrees, i.e., straight ahead, and +/- 30 degrees), and three levels of yaw (0, 45, and 90 degrees). Participants were presented with 175 trials (5 distances x 7 yaw-and-pitch combinations, with 5 repetitions). Each trial included a query image, from a given combination of range x yaw x pitch, and five options, all frontal short-range (2 m) faces. One was of the same identity as the query; the rest were the most similar identities, chosen according to a DNN-derived similarity matrix. Participants ranked the top three most similar target images to the query image. The collected data reveal the functional relationship between human performance and multiple viewing parameters. Nine state-of-the-art pre-trained DNNs were tested for their face recognition performance on precisely the same stimulus set. Strikingly, DNN performance was significantly diminished by variations in range and rotated viewpoints. Even the best-performing network achieved below 65% accuracy at the closest distance with a profile view of faces, with results dropping to near chance for longer ranges. The confusion matrices of the DNNs were generally consistent across networks, indicating systematic errors induced by viewing parameters. Taken together, these data not only help characterize human performance as a function of key ecologically important viewing parameters, but also enable a direct comparison of humans and DNNs in this parameter regime. | en_US
dc.language.iso | en | en_US
dc.publisher | Journal of Vision | en_US
dc.rights | info:eu-repo/semantics/openAccess | en_US
dc.title | Comparing Humans and Deep Neural Networks on Face Recognition Under Various Distance and Rotation Viewing Conditions | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1167/jov.23.9.5916 | -
dc.authorid | Şuayb Şefik Arslan / 0000-0003-3779-0731 | -
dc.description.PublishedMonth | August | en_US
dc.relation.publicationcategory | Article - National Refereed Journal - Institutional Faculty Member | en_US
dc.identifier.issue | 9 | en_US
dc.identifier.volume | 23 | en_US
dc.department | Faculty of Engineering, Department of Computer Engineering | en_US
dc.relation.journal | Vision Sciences Society Annual Meeting Abstract | en_US
dc.institutionauthor | Arslan, Şuayb Şefik | -
item.grantfulltext | none | -
item.fulltext | No Fulltext | -
item.languageiso639-1 | en | -
item.openairetype | Article | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.cerifentitytype | Publications | -
crisitem.author.dept | 02.02. Department of Computer Engineering | -
Appears in Collections: Bilgisayar Mühendisliği Bölümü Koleksiyonu (Computer Engineering Department Collection)
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.