Authors: Akarun, Lale; Aktas, Mujde; Gokberk, Berk
Date Available: 2026-04-03
Date Issued: 2019
ISBN: 9781728139753
ISSN: 2154-512X
Handle: https://hdl.handle.net/20.500.11779/3300
DOI: https://doi.org/10.1109/ipta.2019.8936081
Abstract: Recognition of non-manual components in sign language has been a neglected topic, partly due to the absence of annotated non-manual sign datasets. We have collected a dataset of videos with non-manual signs, displaying facial expressions and head movements, and prepared frame-level annotations. In this paper, we present the Turkish Sign Language (TSL) non-manual signs dataset and provide a baseline system for non-manual sign recognition. A deep learning based recognition system is proposed, in which a pre-trained ResNet Convolutional Neural Network (CNN) is employed to recognize question, negation (side-to-side), negation (up-down), affirmation, and pain movements and expressions. Our subject-independent method achieves 78.49% overall frame-level accuracy on 483 TSL videos performed by six subjects, who are native TSL signers. Prediction results of consecutive frames are filtered for analyzing the qualitative results.
Language: en
Rights: info:eu-repo/semantics/closedAccess
Keywords: Facial Expression Recognition; Sign Language Recognition; Non-Manual Sign Analysis
Title: Recognizing Non-Manual Signs in Turkish Sign Language
Type: Conference Object
DOI: 10.1109/ipta.2019.8936081
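The abstract states that prediction results of consecutive frames are filtered, but does not specify the filter. A minimal sketch of one common choice, a sliding-window majority vote over per-frame class labels, is shown below; the function name and the class labels are hypothetical and not taken from the paper.

```python
from collections import Counter

def smooth_predictions(labels, window=5):
    """Hypothetical temporal filter: replace each frame's predicted
    label with the majority label in a centered window of `window`
    frames, suppressing isolated single-frame misclassifications."""
    half = window // 2
    smoothed = []
    for i in range(len(labels)):
        lo = max(0, i - half)
        hi = min(len(labels), i + half + 1)
        # Majority vote over the window around frame i.
        smoothed.append(Counter(labels[lo:hi]).most_common(1)[0][0])
    return smoothed

# Example: a single stray "question" frame inside a "negation" run
# is replaced by the surrounding majority label.
frames = ["negation", "negation", "question", "negation", "negation"]
print(smooth_predictions(frames))  # → ['negation'] * 5
```

Such a filter only changes the qualitative, per-video readability of the output; the reported 78.49% figure in the record is a frame-level accuracy.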