Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11779/2360
Full metadata record
DC Field | Value | Language
dc.contributor.authorDemir, Şeniz-
dc.contributor.authorAkdağ, Hanifi İbrahim-
dc.date.accessioned2024-10-05T18:38:43Z-
dc.date.available2024-10-05T18:38:43Z-
dc.date.issued2024-
dc.identifier.issn1300-0632-
dc.identifier.urihttps://doi.org/10.55730/1300-0632.4095-
dc.identifier.urihttps://search.trdizin.gov.tr/tr/yayin/detay/1264635/mention-detection-in-turkish-coreference-resolution-
dc.identifier.urihttps://hdl.handle.net/20.500.11779/2360-
dc.description.abstractA crucial step in understanding natural language is detecting mentions that refer to real-world entities in a text and correctly identifying their boundaries. Mention detection is commonly considered a preprocessing step in coreference resolution, which has been shown to be helpful in several language processing applications such as machine translation and text summarization. Despite recent efforts on Turkish coreference resolution, no standalone neural solution to mention detection has been proposed yet. In this article, we present two models designed for detecting Turkish mentions by using feed-forward neural networks. Both models extract all spans up to a fixed length from the input text as candidates and classify them as mentions or not mentions. The models differ in how candidate text spans are represented. The first model represents a span by focusing on its first and last words, whereas in the second model the representation also covers the words preceding and following a span. Mention span representations are formed by using contextual embeddings, part-of-speech embeddings, and named-entity embeddings of the words of interest, where contextual embeddings are obtained from pretrained Turkish language models. In our evaluation studies, we not only assess the impact of mention representation strategies on system performance but also demonstrate the usability of different pretrained language models in the resolution task. We argue that our work provides useful insights into the existing literature and constitutes a first step toward understanding the effectiveness of neural architectures in Turkish mention detection.en_US
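The candidate-generation step described in the abstract (extracting all spans up to a fixed length, to be classified later as mention or not mention) can be sketched as follows. This is an illustrative sketch only, not the authors' code; the function name and the `max_span_len` parameter are hypothetical.

```python
# Illustrative sketch of the candidate-generation step described in the
# abstract: enumerate every contiguous token span up to a fixed maximum
# length. Each span would later be scored by a feed-forward classifier
# as mention / not mention. Names here are hypothetical, not the paper's.

def enumerate_spans(tokens, max_span_len=5):
    """Return all (start, end) index pairs (end exclusive) whose span
    length is at most max_span_len, in left-to-right order."""
    spans = []
    for start in range(len(tokens)):
        last = min(start + max_span_len, len(tokens))
        for end in range(start + 1, last + 1):
            spans.append((start, end))
    return spans

tokens = ["Ali", "okula", "gitti", "."]
candidates = enumerate_spans(tokens, max_span_len=3)
# For 4 tokens and max length 3: 4 + 3 + 2 = 9 candidate spans.
```

Note that the number of candidates grows roughly linearly in sentence length for a fixed maximum span length, which keeps exhaustive enumeration tractable.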
dc.language.isoenen_US
dc.relation.ispartofTurkish Journal of Electrical Engineering and Computer Sciencesen_US
dc.rightsinfo:eu-repo/semantics/openAccessen_US
dc.titleMention Detection in Turkish Coreference Resolutionen_US
dc.typeArticleen_US
dc.identifier.doi10.55730/1300-0632.4095-
dc.description.PublishedMonthJulyen_US
dc.identifier.wosqualityQ4-
dc.identifier.scopusqualityQ3-
dc.relation.publicationcategoryArticle - National Peer-Reviewed Journal - Institutional Faculty Memberen_US
dc.identifier.endpage697en_US
dc.identifier.startpage682en_US
dc.identifier.issue5en_US
dc.identifier.volume32en_US
dc.departmentFaculty of Engineering, Department of Computer Engineeringen_US
dc.identifier.trdizinid1264635en_US
dc.institutionauthorDemir, Şeniz-
item.fulltextWith Fulltext-
item.openairetypeArticle-
item.cerifentitytypePublications-
item.grantfulltextembargo_restricted_20400101-
item.languageiso639-1en-
item.openairecristypehttp://purl.org/coar/resource_type/c_18cf-
crisitem.author.dept02.02. Department of Computer Engineering-
Appears in Collections:TR-Dizin İndeksli Yayınlar Koleksiyonu / TR Dizin Indexed Publications Collection
Files in This Item:
File: Full Text - Article.pdf (Restricted Access)
Size: 553.24 kB
Format: Adobe PDF
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.