Arısoy Saraçlar, Ebru

Name Variants: Arısoy, Ebru
Email Address: saraclare@mef.edu.tr
Main Affiliation: 02.05. Department of Electrical and Electronics Engineering
Status: Current Staff

Sustainable Development Goals

SDG 4 (QUALITY EDUCATION): 1 research product
SDGs 1-3 and 5-17: 0 research products each
Documents: 42
Citations: 1380
h-index: 14

Documents: 29
Citations: 633

Scholarly Output: 19
Articles: 0
Views / Downloads: 3765 / 2351
Supervised MSc Theses: 3
Supervised PhD Theses: 0
WoS Citation Count: 83
Scopus Citation Count: 47
WoS h-index: 3
Scopus h-index: 5
Patents: 0
Projects: 3
WoS Citations per Publication: 4.37
Scopus Citations per Publication: 2.47
Open Access Source: 4
Supervised Theses: 3

Journal Counts
2020 28th Signal Processing and Communications Applications Conference (SIU): 3
Turkish Natural Language Processing: 2
2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Main Conference Proceedings, 20-25 May 2024, Hybrid, Torino: 1
16th Annual Conference of the International Speech Communication Association (INTERSPEECH 2015), Dresden, Germany, 6-10 September 2015: 1
17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016), San Francisco, CA, 8-12 September 2016: 1
Current Page: 1 / 3


Scholarly Output Search Results

Now showing 1 - 10 of 19
  • Conference Object
    Evaluating Large Language Models in Data Generation for Low-Resource Scenarios: A Case Study on Question Answering
    (International Speech Communication Association, 2025) Arisoy, Ebru; Menevse, Merve Unlu; Manav, Yusufcan; Ozgur, Arzucan
    Large Language Models (LLMs) are powerful tools for generating synthetic data, offering a promising solution to data scarcity in low-resource scenarios. This study evaluates the effectiveness of LLMs in generating question-answer pairs to enhance the performance of question answering (QA) models trained with limited annotated data. While synthetic data generation has been widely explored for text-based QA, its impact on spoken QA remains underexplored. We specifically investigate the role of LLM-generated data in improving spoken QA models, showing performance gains across both text-based and spoken QA tasks. Experimental results on subsets of the SQuAD, Spoken SQuAD, and a Turkish spoken QA dataset demonstrate significant relative F1 score improvements of 7.8%, 7.0%, and 2.7%, respectively, over models trained solely on restricted human-annotated data. Furthermore, our findings highlight the robustness of LLM-generated data in spoken QA settings, even in the presence of noise.
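The F1 improvements reported above refer to answer-span F1, and SQuAD-style evaluation conventionally uses token-overlap F1; the sketch below shows that metric plus the relative-gain arithmetic (the exact metric and the numbers in the comment are assumptions for illustration, not taken from the paper):

```python
from collections import Counter

def qa_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, the usual SQuAD-style answer metric."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return float(pred_tokens == ref_tokens)
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# A "relative F1 gain" compares two scores; values here are illustrative.
baseline, improved = 0.50, 0.539
relative_gain = (improved - baseline) / baseline  # 7.8% relative
```

Relative (rather than absolute) gains are what the abstract reports, so a small absolute change over a modest baseline can still be a sizeable relative improvement.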
  • Conference Object
    Citation - WoS: 3
    Citation - Scopus: 5
    Developing an Automatic Transcription and Retrieval System for Spoken Lectures in Turkish
    (2017) Arısoy, Ebru
    With the increase in online video lectures, using speech and language processing technologies for education has become quite important. This paper presents an automatic transcription and retrieval system developed for processing spoken lectures in Turkish. The main steps in the system are automatic transcription of Turkish video lectures using a large vocabulary continuous speech recognition (LVCSR) system and finding keywords on the lattices obtained from the LVCSR system using a speech retrieval system based on keyword search. While developing this system, first a state-of-the-art LVCSR system was developed for Turkish using advanced acoustic modeling methods, then keywords were extracted automatically from word sequences in the reference transcriptions of video lectures, and a speech retrieval system was developed for searching these keywords in the lattice output of the LVCSR system. The spoken lecture processing system yields a 14.2% word error rate and a 0.86 maximum term weighted value on the test data.
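Word error rate, the headline metric above, is the word-level Levenshtein distance between the reference and the ASR hypothesis, normalized by the reference length. A minimal sketch (standard dynamic-programming alignment, not code from the paper):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

So a 14.2% WER means roughly one word in seven of the reference transcript is wrong after alignment.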
  • Master Thesis
    E-Commerce Customer Churn Prediction Based on Machine Learning Algorithms
    (MEF Üniversitesi, Fen Bilimleri Enstitüsü, 2018) Eser, Ahmet Yetkin; Arısoy Saraçlar, Ebru
    With the development and popularization of the digital world, human behavior has changed remarkably. Many sectors have been affected by this change, and one of the most affected is the retail sector. People have left their regular shopping habits and started shopping on e-commerce sites. Thanks to the increasing variety and volume of collected data and the speed of modern machines, companies can run sophisticated algorithms efficiently on their data. In this paper, we discuss how companies can predict potentially churned customers with machine learning methods.
  • Master Thesis
    Clustering of News in Publications
    (MEF Üniversitesi, Fen Bilimleri Enstitüsü, 2018) Sülün, Erhan; Arısoy Saraçlar, Ebru
    In today's world, a high volume of text is produced and stored continuously with the help of computer systems and the Internet, and that same Internet makes this huge amount of text data accessible to everyone. Given the size of the produced text, however, it is very hard for people to analyze it all and discover the meaningful information it contains. Machine learning techniques and computing power come into play at this point, analyzing the data and surfacing meaningful information so that people can access summarized content. The first step in analyzing text data is to represent it in a numerical format, since machine learning techniques can only use numerical inputs. There are several methods for data representation, such as TF-IDF (Term Frequency - Inverse Document Frequency), Bag of Words, Word2Vec and Doc2Vec. The second step is to apply machine learning algorithms using the numerical representation of the text as input; supervised or unsupervised techniques are chosen according to the structure of the problem and the data. In this study, news documents published in United States outlets such as The New York Times, Reuters and The Washington Post are clustered into topics in order to categorize them and ease their investigation. Three data representation methods are examined in detail and used: Bag of Words, TF-IDF and Doc2Vec. Finally, as the news data is an unlabeled set of documents, the K-Means clustering algorithm, an unsupervised learning technique, is used with both Euclidean Distance and Cosine Similarity metrics. Categorization is performed multiple times with different category counts (different K values), and the most meaningful category count is determined after examining the clustering results.
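The pipeline described above (numerical text representation fed to K-Means) can be sketched with TF-IDF vectors; the toy corpus and parameters are illustrative, not taken from the thesis:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# A tiny corpus standing in for the news articles (illustrative only).
docs = [
    "the team won the football match last night",
    "the striker scored twice in the football game",
    "stocks fell as the central bank raised interest rates",
    "the bank reported record profits and rising interest income",
]

# TfidfVectorizer l2-normalizes each row by default, so Euclidean K-Means
# on these vectors behaves like clustering by cosine similarity.
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

On this corpus the two sports articles land in one cluster and the two finance articles in the other; choosing K in practice is done by inspecting results at several values, as the thesis describes.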
  • Master Thesis
    Predicting Yelp Stars Based on Business Attributes
    (MEF Üniversitesi, Fen Bilimleri Enstitüsü, 2018) Tek, Ahmet; Arısoy Saraçlar, Ebru
    Yelp is a business review website where consumers can comment on a business from their point of view, giving other consumers prior knowledge of the business. Whenever we search for something, we hope to get the most relevant results, and recommender systems can achieve this. Review websites such as Yelp and TripAdvisor allow users to post online reviews for various businesses, products and services, and have recently been shown to have a significant influence on consumer shopping behavior [1]. This paper aims to predict restaurant ratings using attributes such as alcohol, noise level, Wi-Fi, music and a smoking area, and to find the most important attributes for higher ratings. The Yelp dataset contains rich information about businesses and consumer behavior and is free for academic use, which is why it was selected for this project. Machine learning models were executed for two-star-label classification. Since we aim to find the most important features for a higher rating, we chose only 4- and 5-star labels from the dataset. In our research, restaurant rating prediction is implemented as binary classification where the class labels are the star ratings and the restaurant attributes are the input features of the classifier. We investigate Decision Trees, Naive Bayes, Two-Class Decision Forest, Two-Class Boosted Decision Trees, Two-Class Neural Network, Two-Class Support Vector Machine and Two-Class Logistic Regression, and choose the 10 most important attributes resulting in high ratings.
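The attribute-ranking step described above can be sketched with a tree ensemble's feature importances; the data, the attribute semantics, and the model choice here are hypothetical stand-ins, not the thesis's actual Yelp experiment:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical binary restaurant attributes; imagine the columns as
# alcohol, wifi, music, smoking_area, outdoor_seating (names assumed).
X = rng.integers(0, 2, size=(500, 5))
# Toy target: "high rating" driven only by the first two attributes.
y = ((X[:, 0] + X[:, 1]) >= 1).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]  # most important first
```

Because the toy label depends only on the first two columns, those two dominate the importance ranking; on real data the same readout identifies which attributes drive higher ratings.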
  • Conference Object
    Citation - WoS: 59
    Bidirectional Recurrent Neural Network Language Models for Automatic Speech Recognition
    (2015) Chen, Stanley; Sethy, Abhinav; Ramabhadran, Bhuvana; Arısoy, Ebru
    Recurrent neural network language models have enjoyed great success in speech recognition, partially due to their ability to model longer-distance context than word n-gram models. In recurrent neural networks (RNNs), contextual information from past inputs is modeled with the help of recurrent connections at the hidden layer, while Long Short-Term Memory (LSTM) neural networks are RNNs that contain units that can store values for arbitrary amounts of time. While conventional unidirectional networks predict outputs from only past inputs, one can build bidirectional networks that also condition on future inputs. In this paper, we propose applying bidirectional RNNs and LSTM neural networks to language modeling for speech recognition. We discuss issues that arise when utilizing bidirectional models for speech, and compare unidirectional and bidirectional models on an English Broadcast News transcription task. We find that bidirectional RNNs significantly outperform unidirectional RNNs, but bidirectional LSTMs do not provide any further gain over their unidirectional counterparts.
  • Conference Object
    Highlighting of Lecture Video Closed Captions
    (IEEE, 2020) Yıldırım, Göktuğ; Öztufan, Huseyin Efe; Arısoy, Ebru
    The main purpose of this study is to automatically highlight important regions of lecture video subtitles. Even though watching videos is an effective way of learning, the main disadvantage of video-based education is the limited interaction between the learner and the video. With the developed system, important regions that are automatically determined in lecture subtitles are highlighted with the aim of increasing the learner's attention to these regions. In this paper, the lecture videos are first converted into text using an automatic speech recognition system. Then continuous space representations for sentences or word sequences in the transcriptions are generated using Bidirectional Encoder Representations from Transformers (BERT). Important regions of the subtitles are selected using a clustering method based on the similarity of these representations. The developed system is applied to the lecture videos, and using word sequence representations to determine the important regions of subtitles gives higher performance than using sentence representations. This result is encouraging for the automatic highlighting of speech recognition outputs, where sentence boundaries are not defined explicitly.
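One common way to turn "clustering by representation similarity" into a selection rule is to keep the segment closest to each cluster centroid; the sketch below shows that heuristic, with small vectors standing in for BERT embeddings (the paper's exact selection rule is not specified in the abstract and may differ):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def select_representatives(embeddings, n_clusters):
    """Cluster segment vectors and return the index of the segment
    nearest each centroid (one highlight candidate per topic)."""
    emb = normalize(np.asarray(embeddings, dtype=float))  # unit norm ~ cosine
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(emb)
    picks = []
    for c in range(n_clusters):
        members = np.flatnonzero(km.labels_ == c)
        dists = np.linalg.norm(emb[members] - km.cluster_centers_[c], axis=1)
        picks.append(int(members[np.argmin(dists)]))
    return sorted(picks)
```

With real BERT vectors, each pick would index a subtitle region to highlight; the word-sequence vs. sentence comparison in the paper changes only what each row of `embeddings` represents.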
  • Conference Object
    Citation - Scopus: 2
    Dealing With Data Scarcity in Spoken Question Answering
    (European Language Resources Association (ELRA), 2024) Arısoy, Ebru; Özgür, Arzucan; Ünlü Menevşe, Merve; Manav, Yusufcan
    This paper focuses on dealing with data scarcity in spoken question answering (QA) using automatic question-answer generation and a carefully selected fine-tuning strategy that leverages limited annotated data (paragraphs and question-answer pairs). Spoken QA is a challenging task due to its use of spoken documents, i.e., erroneous automatic speech recognition (ASR) transcriptions, and the scarcity of spoken QA data. We propose a framework for utilizing limited annotated data effectively to improve spoken QA performance. To deal with data scarcity, we train a question-answer generation model with annotated data and then produce large amounts of question-answer pairs from unannotated data (paragraphs). Our experiments demonstrate that incorporating limited annotated data and the automatically generated data through a carefully selected fine-tuning strategy leads to a 5.5% relative F1 gain over the model trained only with annotated data. Moreover, the proposed framework remains effective under high ASR error rates. © 2024 ELRA Language Resource Association: CC BY-NC 4.0.
  • Conference Object
    Citation - WoS: 1
    Citation - Scopus: 2
    Improving the Usage of Subword-Based Units for Turkish Speech Recognition
    (IEEE, 2020) Çetinkaya, Gözde; Saraçlar, Murat; Arısoy, Ebru
    Subword units are often utilized to achieve better performance in speech recognition because of the high number of observed words in agglutinative languages. In this study, the proper use of subword units in recognition is explored by reconsidering details such as silence modeling and position-dependent phones. A modified lexicon is implemented with finite-state transducers to represent the subword units correctly. We also experiment with different types of word boundary markers and achieve the best performance by adding a marker to both the left and right side of a subword unit. In our experiments on a Turkish broadcast news dataset, the subword models outperform word-based models and naive subword implementations. Results show that using proper subword units leads to a relative word error rate (WER) reduction of 2.4% compared with the word-level automatic speech recognition (ASR) system for Turkish.
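The best-performing marking scheme above, a marker on both the left and right side of each word-internal subword boundary, amounts to simple string decoration of the segmented word; in this sketch the '+' symbol and the Turkish segmentation are illustrative assumptions, not necessarily the paper's exact notation:

```python
def mark_subwords(word_pieces):
    """Attach a '+' marker on both sides of word-internal boundaries,
    e.g. 'evlerde' segmented as ['ev', 'ler', 'de'] becomes
    ['ev+', '+ler+', '+de'], so word starts and ends stay distinguishable."""
    marked = []
    for i, piece in enumerate(word_pieces):
        left = "+" if i > 0 else ""
        right = "+" if i < len(word_pieces) - 1 else ""
        marked.append(left + piece + right)
    return marked
```

Keeping the boundary information in the unit labels lets the lexicon and language model tell a word-initial subword from a word-internal one, which is what the modified finite-state lexicon exploits.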
  • Conference Object
    Citation - WoS: 1
    Citation - Scopus: 1
    Domain Adaptation Approaches for Acoustic Modeling
    (IEEE, 2020) Arısoy, Ebru; Fakhan, Enver
    In recent years, with the development of neural network based models, ASR systems have achieved a tremendous performance increase. However, this performance increase mostly depends on the amount of training data and the computational power. In a low-resource data scenario, publicly available datasets can be utilized to overcome data scarcity. Furthermore, using a pre-trained model and adapting it to the in-domain data can help with computational constraints. In this paper we leverage two different publicly available datasets and investigate various acoustic model adaptation approaches. We show that a 4% word error rate can be achieved using very limited in-domain data.