Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11779/2402
Full metadata record
DC Field | Value | Language
dc.contributor.author | Demir, Seniz | -
dc.contributor.author | Akdag, Hanifi Ibrahim | -
dc.date.accessioned | 2024-11-05T19:50:45Z | -
dc.date.available | 2024-11-05T19:50:45Z | -
dc.date.issued | 2024 | -
dc.identifier.issn | 1300-0632 | -
dc.identifier.issn | 1303-6203 | -
dc.identifier.uri | https://search.trdizin.gov.tr/en/yayin/detay/1264635/mention-detection-in-turkish-coreference-resolution | -
dc.identifier.uri | https://doi.org/10.55730/1300-0632.4095 | -
dc.identifier.uri | https://hdl.handle.net/20.500.11779/2402 | -
dc.description.abstract | A crucial step in understanding natural language is detecting mentions that refer to real-world entities in a text and correctly identifying their boundaries. Mention detection is commonly considered a preprocessing step in coreference resolution, which has been shown to be helpful in several language processing applications such as machine translation and text summarization. Despite recent efforts on Turkish coreference resolution, no standalone neural solution to mention detection has been proposed yet. In this article, we present two models designed for detecting Turkish mentions by using feed-forward neural networks. Both models extract all spans up to a fixed length from the input text as candidates and classify them as mentions or non-mentions. The models differ in how candidate text spans are represented. The first model represents a span by focusing on its first and last words, whereas in the second model the representation also covers the words preceding and following a span. Mention span representations are formed by using contextual embeddings, part-of-speech embeddings, and named-entity embeddings of the words of interest, where contextual embeddings are obtained from pretrained Turkish language models. In our evaluation studies, we not only assess the impact of mention representation strategies on system performance but also demonstrate the usability of different pretrained language models in the resolution task. We argue that our work offers useful insights into the existing literature and a first step in understanding the effectiveness of neural architectures in Turkish mention detection. | en_US
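The abstract describes enumerating all text spans up to a fixed length and scoring each with a feed-forward network over a span representation built from its first and last words. The sketch below is a minimal illustration of that general approach, not the authors' implementation; all function names, dimensions, and the random weights are assumptions for demonstration only.

```python
import numpy as np

def enumerate_spans(n_tokens, max_len):
    # All candidate spans (start, end), end inclusive, up to a fixed length.
    return [(i, j) for i in range(n_tokens)
            for j in range(i, min(i + max_len, n_tokens))]

def span_representation(embeddings, span):
    # First model's strategy: concatenate the first and last word embeddings.
    start, end = span
    return np.concatenate([embeddings[start], embeddings[end]])

def ffnn_score(x, w1, b1, w2, b2):
    # One-hidden-layer feed-forward scorer: a mention/non-mention logit.
    h = np.maximum(0.0, x @ w1 + b1)  # ReLU hidden layer
    return float(h @ w2 + b2)

# Toy example: 5 tokens, 4-dim "contextual" embeddings, spans of length <= 3.
rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 4))          # stand-in for pretrained embeddings
spans = enumerate_spans(5, 3)
w1, b1 = rng.normal(size=(8, 6)), np.zeros(6)   # untrained toy weights
w2, b2 = rng.normal(size=6), 0.0
scores = {s: ffnn_score(span_representation(emb, s), w1, b1, w2, b2)
          for s in spans}
```

In a trained system the weights would be learned and the embeddings would come from a pretrained Turkish language model, with part-of-speech and named-entity embeddings concatenated in; the second model described in the abstract would additionally concatenate embeddings of the words surrounding the span.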
dc.language.iso | en | en_US
dc.publisher | Tubitak Scientific & Technological Research Council Turkey | en_US
dc.rights | info:eu-repo/semantics/closedAccess | en_US
dc.subject | Coreference resolution | en_US
dc.subject | mention detection | en_US
dc.subject | neural network | en_US
dc.subject | language model | en_US
dc.subject | Turkish | en_US
dc.title | Mention Detection in Turkish Coreference Resolution | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.55730/1300-0632.4095 | -
dc.identifier.scopus | 2-s2.0-85205146511 | en_US
dc.authorid | Şeniz Demir / 0000-0003-4897-4616 | en_US
dc.authorscopusid | 14044928200 | -
dc.authorscopusid | 59346454800 | -
dc.description.woscitationindex | Science Citation Index Expanded | -
dc.identifier.wosquality | Q4 | -
dc.identifier.scopusquality | Q3 | -
dc.relation.publicationcategory | Article - International Peer-Reviewed Journal - Institutional Faculty Member | en_US
dc.identifier.issue | 5 | en_US
dc.identifier.volume | 32 | en_US
dc.department | MEF University | en_US
dc.department | Faculty of Engineering, Department of Computer Engineering | en_US
dc.identifier.trdizinid | 1264635 | -
dc.identifier.wos | WOS:001321123900002 | en_US
dc.institutionauthor | Demir, Şeniz | -
item.fulltext | With Fulltext | -
item.openairetype | Article | -
item.cerifentitytype | Publications | -
item.grantfulltext | open | -
item.languageiso639-1 | en | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
Appears in Collections:Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
Files in This Item:
File | Size | Format
Mention detection in Turkish coreference resolution.pdf | 462.21 kB | Adobe PDF
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.