Bilgisayar Mühendisliği Bölümü Koleksiyonu
Permanent URI for this collection: https://hdl.handle.net/20.500.11779/1940
Browsing Bilgisayar Mühendisliği Bölümü Koleksiyonu by Institution Author "Demir, Şeniz"
Now showing 1 - 9 of 9
Article | Citations: WoS 3, Scopus 2
A Benchmark Dataset for Turkish Data-to-Text Generation (Elsevier, 2022). Demir, Şeniz; Öktem, Seza.
In recent decades, data-to-text (D2T) systems that learn directly from data have gained considerable attention in natural language generation. These systems need data of high quality and large volume, but some natural languages suffer from a lack of readily available generation datasets. This article describes our efforts to create a new Turkish dataset (Tr-D2T) that consists of meaning representation and reference sentence pairs without fine-grained word alignments. We utilize Turkish web resources and existing datasets in other languages to produce meaning representations, and we collect reference sentences by crowdsourcing native speakers. We particularly focus on the generation of single-sentence biographies and dining venue descriptions. To motivate future Turkish D2T studies, we present detailed benchmarking results of different sequence-to-sequence neural models trained on this dataset. To the best of our knowledge, this work is the first of its kind to provide preliminary findings and lessons learned from the creation of a new Turkish D2T dataset. Moreover, it is the first extensive study to report the generation performance of transformer and recurrent neural network models from meaning representations in this morphologically rich language.

Article | Citations: WoS 20, Scopus 28
An Evaluation of Recent Neural Sequence Tagging Models in Turkish Named Entity Recognition (Elsevier, 2021). Makaroğlu, Didem; Demir, Şeniz; Aras, Gizem; Çakır, Altan.
Named entity recognition (NER) is an extensively studied task that extracts and classifies named entities in a text. NER is crucial not only in downstream language processing applications such as relation extraction and question answering, but also in large-scale big data operations such as real-time analysis of online digital media content.
Recent research efforts on Turkish, a less studied language with a morphologically rich nature, have demonstrated the effectiveness of neural architectures on well-formed texts and yielded state-of-the-art results by formulating the task as a sequence tagging problem. In this work, we empirically investigate the use of recent neural architectures (bidirectional long short-term memory (BiLSTM) and transformer-based networks) proposed for Turkish NER tagging in the same setting. Our results demonstrate that transformer-based networks, which can model long-range context, overcome the limitations of BiLSTM networks in which different input features at the character, subword, and word levels are utilized. We also propose a transformer-based network with a conditional random field (CRF) layer that achieves the state-of-the-art result (95.95% f-measure) on a common dataset. Our study contributes to the literature that quantifies the impact of transfer learning on processing morphologically rich languages.

Conference Object | Citations: Scopus 3
An XML Parser for Turkish Wikipedia (IEEE, 2019). Demir, Şeniz; Vardar, Uluç Furkan; Devran, İlkay Tevfik.
Nowadays, the visual and written data that can be easily accessed over the internet have enabled research advances in many different fields. However, the availability of data is not sufficient by itself; it is of great importance that these data can be effectively utilized and interpreted in accordance with the requirements. Written content from the Wikipedia encyclopedia, which is used increasingly often in Turkish natural language processing, can be accessed via XML dumps. In this study, our aim is to develop an XML parser for processing Turkish Wikipedia dumps and to demonstrate its applicability.
The use of the open-source parser, which allows information extraction at different levels of granularity, is demonstrated on pages containing biography infoboxes and textual content.

Conference Object
Does Prompt Engineering Help Turkish Named Entity Recognition? (Institute of Electrical and Electronics Engineers Inc., 2024). Pektezol, A. S.; Ulugergerli, A. B.; Öztoklu, V.; Demir, Şeniz.
The extraction of entity mentions in a text (named entity recognition) has traditionally been formulated as a sequence labeling problem. In recent years, this approach has evolved from recognizing entities to answering formulated questions related to entity types. The questions, constructed as prompts, are used to elicit the desired entity mentions and their types from large language models. In this work, we investigated prompt engineering for Turkish named entity recognition and studied two prompting strategies to guide pretrained language models toward correctly identifying mentions. In particular, we examined the impact of zero-shot and few-shot prompting on the recognition of Turkish named entities by conducting experiments on two large language models. Our evaluations using different prompt templates revealed promising results and demonstrated that carefully constructed prompts can achieve high accuracy in entity recognition, even in languages with complex morphology.

Article | Citations: WoS 6, Scopus 12
Graph-Based Turkish Text Normalization and Its Impact on Noisy Text Processing (Elsevier, 2022). Topçu, Berkay; Demir, Şeniz.
User-generated texts on the web are freely available and valuable sources of data for language technology researchers. Unfortunately, these texts are often dominated by informal writing styles, and the language used in user-generated content poses processing difficulties for natural language tools.
The resulting performance drops and processing issues can be addressed either by adapting language tools to user-generated content or by normalizing noisy texts before they are processed. In this article, we propose a Turkish text normalizer that maps non-standard words to their appropriate standard forms using a graph-based methodology and a context-tailoring approach. Our normalizer benefits from both contextual and lexical similarities between normalization pairs, as identified by a graph-based subnormalizer and a transformation-based subnormalizer. The performance of our normalizer is demonstrated on a tweet dataset in the most comprehensive intrinsic and extrinsic evaluations reported so far for Turkish. We present the first graph-based solution to Turkish text normalization with a novel context-tailoring approach, which advances the state of the art by outperforming other publicly available normalizers. For the first time in the literature, we measure the extent to which the accuracy of a Turkish language processing tool is affected by normalizing noisy texts before they are processed. An analysis of these extrinsic evaluations, which cover more than one Turkish NLP task (i.e., part-of-speech tagging and dependency parsing), reveals that Turkish language tools are not robust to noisy texts and that a normalizer leads to remarkable performance improvements once used as a preprocessing tool in this morphologically rich language.

Article
Mention Detection in Turkish Coreference Resolution (TÜBİTAK Scientific & Technological Research Council of Turkey, 2024). Demir, Şeniz; Akdağ, Hanifi İbrahim.
A crucial step in understanding natural language is detecting mentions that refer to real-world entities in a text and correctly identifying their boundaries. Mention detection is commonly considered a preprocessing step in coreference resolution, which has been shown to be helpful in several language processing applications such as machine translation and text summarization.
Despite recent efforts on Turkish coreference resolution, no standalone neural solution to mention detection has been proposed yet. In this article, we present two models designed for detecting Turkish mentions using feed-forward neural networks. Both models extract all spans up to a fixed length from the input text as candidates and classify them as mentions or non-mentions. The models differ in how candidate text spans are represented: the first model represents a span by focusing on its first and last words, whereas in the second model the representation also covers the words preceding and following a span. Mention span representations are formed using contextual embeddings, part-of-speech embeddings, and named-entity embeddings of the words of interest, where the contextual embeddings are obtained from pretrained Turkish language models. In our evaluation studies, we not only assess the impact of mention representation strategies on system performance but also demonstrate the usability of different pretrained language models in the resolution task. We argue that our work provides useful insights for the existing literature and a first step in understanding the effectiveness of neural architectures in Turkish mention detection.

Article
Neural Coreference Resolution for Turkish (2023). Demir, Şeniz.
Coreference resolution deals with resolving mentions of the same underlying entity in a given text. This challenging task is an indispensable aspect of text understanding and has important applications in various language processing systems such as question answering and machine translation. Although a significant number of studies are devoted to coreference resolution, research on Turkish is scarce and mostly limited to pronoun resolution. To the best of our knowledge, this article presents the first neural Turkish coreference resolution study, in which two learning-based models are explored. Both models follow the mention-ranking approach while forming clusters of mentions.
The first model uses a set of hand-crafted features, whereas the second coreference model relies on embeddings learned from large-scale pretrained language models to capture similarities between a mention and its candidate antecedents. Several language models trained specifically for Turkish are used to obtain mention representations, and their effectiveness is compared in experiments using automatic metrics. We argue that the results of this study shed light on the possible contributions of neural architectures to Turkish coreference resolution.

Article
Ön Eğitimli Dil Modelleriyle Duygu Analizi [Sentiment Analysis with Pretrained Language Models] (İstanbul Sabahattin Zaim Üniversitesi Fen Bilimleri Enstitüsü, 2023). Yürütücü, Ömer Yiğit; Demir, Şeniz.
Sentiment analysis is one of the methods used to examine, analyze, and interpret the thoughts, feelings, or attitudes expressed about a topic on various platforms. Machine learning and deep learning models are frequently employed in sentiment analysis, where texts on different topics can be classified according to their subjective content. In this study, sentiment analysis was performed on Covid-19 tweets using pretrained language models. In addition to a Naive Bayes classifier, different classifiers were trained using the BERT, RoBERTa, and BERTweet language models, and the results obtained on the tweet dataset were compared. The study reported here is expected to provide a basis for future research in this area.

Research Project
Özyinelemeli Sinir Ağları ile Türkçe Doğal Dil Üretimi [Turkish Natural Language Generation with Recurrent Neural Networks] (TÜBİTAK, 2018). Demir, Şeniz; Gökmen, Muhittin.
Natural languages, the medium of communication among humans, have over time also been adopted by systems and software in order to interact with people effectively and in a user-friendly manner. Language-based technologies that, much like humans, can understand spoken or written natural language expressions and then meet users' expectations (e.g., search engines, computer-assisted tutoring systems, and dialogue systems) emerged from this motivation. In these efforts, success rates have been affected not only by the nature of the problem and the challenges posed by the structure of the target language, but also by the limitations of modeling how humans learn and use natural languages. Although language-based technologies are widely used today (e.g., Google Search and Apple Siri), the technological level reached varies with the target language. With its agglutinative and rich linguistic structure, Turkish lags behind many natural languages in terms of the technological solutions developed and the data resources produced. Moreover, work on Turkish language technologies to date has mainly focused on processing, understanding, and analyzing the language (e.g., morphological analysis of words, named entity recognition, dependency parsing, text classification, and text summarization). There are only a few studies on Turkish language generation; these have limited capabilities and were not continued beyond the academic level. Furthermore, they did not go beyond converting content representations expressed in rather complex linguistic theories into sentences, and they were not tested in integration with other applications. This project aims at the automatic generation of Turkish with a deep-learning-based system (a language tool). The system is expected to convert content representations given as input into understandable sentences that conform to the rules of Turkish. In this work, which is planned to produce the most comprehensive Turkish language generation system in the literature, recurrent neural network architectures that can learn sequence-to-sequence mappings (e.g., from one word sequence to another word sequence), whose success has been proven in many language technologies in recent years, will be used. With the flexibility these networks provide, different variants (e.g., long short-term memory and gated recurrent units) and extensions (e.g., attention mechanisms) will be tried, and the best-performing neural architecture will be identified. In addition, the use of neural networks will make it possible to integrate certain factors (e.g., contextual information and user preferences) into the system and to examine their effects on the generation process.
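Several of the entries above (the Tr-D2T dataset and the TÜBİTAK generation project) describe feeding meaning representations to sequence-to-sequence models. A common first step in such pipelines is to linearize an attribute-value record into a flat token sequence the encoder can consume. The sketch below illustrates that step only; the attribute names, the tag format, and the example record are invented for illustration and are not taken from the works listed here.

```python
# Minimal sketch of meaning-representation linearization for a D2T pipeline.
# The <field> ... </field> tagging scheme is a hypothetical choice, not the
# format used by the Tr-D2T dataset.

def linearize_mr(attributes):
    """Flatten attribute-value pairs into a single token sequence that a
    sequence-to-sequence model could consume as its source side."""
    parts = []
    for key, value in attributes:
        # Wrap each field in explicit boundary tags so the encoder can
        # distinguish attribute names from their values.
        parts.append(f"<{key}> {value} </{key}>")
    return " ".join(parts)

# A single-sentence biography record (attribute names invented for this sketch):
mr = [("name", "Aziz Sancar"), ("birth_year", "1946"), ("occupation", "biochemist")]
source = linearize_mr(mr)
# → "<name> Aziz Sancar </name> <birth_year> 1946 </birth_year> <occupation> biochemist </occupation>"
# A seq2seq model would then be trained to map `source` to a crowdsourced
# reference sentence, as described for the Tr-D2T dataset above.
```

The design choice of explicit boundary tags (rather than plain concatenation) is one common way to let the model recover field structure from a flat sequence.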
