Elektrik Elektronik Mühendisliği Bölümü Koleksiyonu
Permanent URI for this collection: https://hdl.handle.net/20.500.11779/1941
Browsing Elektrik Elektronik Mühendisliği Bölümü Koleksiyonu by Institution Author "Arısoy, Ebru"
Now showing 1 - 14 of 14
Conference Object | Citation - Scopus: 2
A Decade of Discriminative Language Modeling for Automatic Speech Recognition (2015)
Arısoy, Ebru; Saraçlar, Murat; Dikici, Erinc
This paper summarizes the research on discriminative language modeling, focusing on its application to automatic speech recognition (ASR). A discriminative language model (DLM) is typically a linear or log-linear model consisting of a weight vector associated with a feature vector representation of a sentence. This flexible representation can include linguistically and statistically motivated features that incorporate morphological and syntactic information. At test time, DLMs are used to rerank the output of an ASR system, represented as an N-best list or lattice. During training, both negative and positive examples are used with the aim of directly optimizing the error rate. Various machine learning methods, including the structured perceptron, large-margin methods, and maximum regularized conditional log-likelihood, have been used for estimating the parameters of DLMs. Typically, positive examples for DLM training come from the manual transcriptions of acoustic data, while the negative examples are obtained by processing the same acoustic data with an ASR system. Recent research generalizes DLM training by either using automatic transcriptions for the positive examples or simulating the negative examples.

Conference Object | Citation - Scopus: 6
A Framework for Automatic Generation of Spoken Question-Answering Data (Association for Computational Linguistics (ACL), 2022)
Manav, Y.; Menevşe, M.Ü.; Özgür, A.; Arısoy, Ebru
This paper describes a framework to automatically generate a spoken question answering (QA) dataset. The framework consists of a question generation (QG) module to generate questions automatically from given text documents, a text-to-speech (TTS) module to convert the text documents into spoken form, and an automatic speech recognition (ASR) module to transcribe the spoken content.
The final dataset contains question-answer pairs for both the reference text and the ASR transcriptions, as well as the audio files corresponding to each reference text. For the QG and ASR systems we used pre-trained multilingual encoder-decoder transformer models and fine-tuned these models using a limited amount of manually generated QA data and TTS-based speech data, respectively. As a proof of concept, we investigated the proposed framework for Turkish and generated the Turkish Question Answering (TurQuAse) dataset using Wikipedia articles. Manual evaluation of the automatically generated question-answer pairs and QA performance evaluation with state-of-the-art models on TurQuAse show that the proposed framework is efficient for automatically generating spoken QA datasets. To the best of our knowledge, TurQuAse is the first publicly available spoken question answering dataset for Turkish. The proposed framework can be easily extended to other languages where a limited amount of QA data is available. © 2022 Association for Computational Linguistics.

Conference Object | Citation - WoS: 59
Bidirectional Recurrent Neural Network Language Models for Automatic Speech Recognition (2015)
Chen, Stanley; Sethy, Abhinav; Ramabhadran, Bhuvana; Arısoy, Ebru
Recurrent neural network language models have enjoyed great success in speech recognition, partially due to their ability to model longer-distance context than word n-gram models. In recurrent neural networks (RNNs), contextual information from past inputs is modeled with the help of recurrent connections at the hidden layer, while Long Short-Term Memory (LSTM) neural networks are RNNs that contain units that can store values for arbitrary amounts of time. While conventional unidirectional networks predict outputs from only past inputs, one can build bidirectional networks that also condition on future inputs. In this paper, we propose applying bidirectional RNNs and LSTM neural networks to language modeling for speech recognition.
We discuss issues that arise when utilizing bidirectional models for speech, and compare unidirectional and bidirectional models on an English Broadcast News transcription task. We find that bidirectional RNNs significantly outperform unidirectional RNNs, but bidirectional LSTMs do not provide any further gain over their unidirectional counterparts.

Conference Object | Citation - WoS: 2 | Citation - Scopus: 5
Compositional Neural Network Language Models for Agglutinative Languages (2016)
Saraçlar, Murat; Arısoy, Ebru
Continuous space language models (CSLMs) have been proven to be successful in speech recognition. With proper training of the word embeddings, words that are semantically or syntactically related are expected to be mapped to nearby locations in the continuous space. In agglutinative languages, words are formed by concatenating stems and suffixes, and as a result compositional modeling is important. However, when trained on word tokens, CSLMs do not explicitly consider this structure. In this paper, we explore compositional modeling of stems and suffixes in a long short-term memory neural network language model. Our proposed models jointly learn distributed representations for stems and endings (concatenations of suffixes) and predict the probability of stem and ending sequences. Experiments on the Turkish Broadcast News transcription task show that further gains on top of a state-of-the-art stem-ending-based n-gram language model can be obtained with the proposed models.

Conference Object | Citation - WoS: 3 | Citation - Scopus: 5
Developing an Automatic Transcription and Retrieval System for Spoken Lectures in Turkish (2017)
Arısoy, Ebru
With the increase of online video lectures, using speech and language processing technologies for education has become quite important. This paper presents an automatic transcription and retrieval system developed for processing spoken lectures in Turkish.
The main steps in the system are automatic transcription of Turkish video lectures using a large vocabulary continuous speech recognition (LVCSR) system and finding keywords in the lattices obtained from the LVCSR system using a speech retrieval system based on keyword search. While developing this system, first a state-of-the-art LVCSR system was developed for Turkish using advanced acoustic modeling methods; then keywords were extracted automatically from word sequences in the reference transcriptions of the video lectures, and a speech retrieval system was developed for searching these keywords in the lattice output of the LVCSR system. The spoken lecture processing system yields a 14.2% word error rate and a 0.86 maximum term weighted value on the test data.

Conference Object | Citation - WoS: 1 | Citation - Scopus: 1
Domain Adaptation Approaches for Acoustic Modeling (IEEE, 2020)
Arısoy, Ebru; Fakhan, Enver
In recent years, with the development of neural network based models, ASR systems have achieved a tremendous performance increase. However, this performance increase mostly depends on the amount of training data and the computational power. In a low-resource data scenario, publicly available datasets can be utilized to overcome data scarcity. Furthermore, using a pre-trained model and adapting it to the in-domain data can help with computational constraints. In this paper we leverage two different publicly available datasets and investigate various acoustic model adaptation approaches. We show that a 4% word error rate can be achieved using very limited in-domain data.

Conference Object
Highlighting of Lecture Video Closed Captions (IEEE, 2020)
Yıldırım, Göktuğ; Öztufan, Huseyin Efe; Arısoy, Ebru
The main purpose of this study is to automatically highlight important regions of lecture video subtitles. Even though watching videos is an effective way of learning, the main disadvantage of video-based education is the limited interaction between the learner and the video.
With the developed system, important regions that are automatically determined in lecture subtitles are highlighted with the aim of increasing the learner's attention to these regions. In this paper, the lecture videos are first converted into text using an automatic speech recognition system. Then, continuous space representations for sentences or word sequences in the transcriptions are generated using Bidirectional Encoder Representations from Transformers (BERT). Important regions of the subtitles are selected using a clustering method based on the similarity of these representations. The developed system is applied to the lecture videos, and it is found that using word sequence representations in determining the important regions of subtitles gives higher performance than using sentence representations. This result is encouraging in terms of automatic highlighting of speech recognition outputs, where sentence boundaries are not defined explicitly.

Conference Object | Citation - WoS: 1 | Citation - Scopus: 2
Improving the Usage of Subword-Based Units for Turkish Speech Recognition (IEEE, 2020)
Çetinkaya, Gözde; Saraçlar, Murat; Arısoy, Ebru
Subword units are often utilized to achieve better performance in speech recognition because of the high number of observed words in agglutinative languages. In this study, the proper use of subword units in recognition is explored by reconsidering details such as silence modeling and position-dependent phones. A lexicon modified via finite-state transducers is implemented to represent the subword units correctly. We also experiment with different types of word boundary markers and achieve the best performance by adding a marker to both the left and the right side of a subword unit. In our experiments on a Turkish broadcast news dataset, the subword models outperform word-based models and naive subword implementations.
Results show that using proper subword units leads to a relative word error rate (WER) reduction of 2.4% compared with the word-level automatic speech recognition (ASR) system for Turkish.

Book Part
Language Modeling for Turkish Text and Speech Processing (Springer, 2018)
Arısoy, Ebru; Saraçlar, Murat
This chapter presents an overview of language modeling, followed by a discussion of the challenges in Turkish language modeling. Sub-lexical units are commonly used to reduce the high out-of-vocabulary (OOV) rates of morphologically rich languages. These units are obtained either by morphological analysis or by unsupervised statistical techniques. For Turkish, morphological analysis yields word segmentations at both the lexical and surface forms, which can be used as sub-lexical language modeling units. Discriminative language models, which outperform generative models for various tasks, allow for easy integration of morphological and syntactic features into language modeling. The chapter provides a review of both generative and discriminative approaches for Turkish language modeling.

Conference Object | Citation - WoS: 4 | Citation - Scopus: 4
Multi-Stream Long Short-Term Memory Neural Network Language Model (2015)
Saraçlar, Murat; Arısoy, Ebru
Long Short-Term Memory (LSTM) neural networks are recurrent neural networks that contain memory units that can store contextual information from past inputs for arbitrary amounts of time. A typical LSTM neural network language model is trained by feeding an input sequence, i.e., a stream of words, to the input layer of the network, and the output layer predicts the probability of the next word given the past inputs in the sequence. In this paper we introduce a multi-stream LSTM neural network language model where multiple asynchronous input sequences are fed to the network as parallel streams while predicting the output word sequence.
For our experiments, we use a sub-word sequence in addition to a word sequence as the input streams, which allows joint training of the LSTM neural network language model using both information sources.

Conference Object | Citation - WoS: 10 | Citation - Scopus: 13
Question Answering for Spoken Lecture Processing (Institute of Electrical and Electronics Engineers (IEEE), 2019)
Ünlü, Merve; Saraçlar, Murat; Arısoy, Ebru
This paper presents a question answering (QA) system developed for spoken lecture processing. The questions are presented to the system in written form and the answers are returned from lecture videos. In contrast to the widely studied reading comprehension style of QA, where the machine understands a passage of text and answers questions related to that passage, our task introduces the challenge of searching for answers in longer text, where the text corresponds to the erroneous transcripts of the lecture videos. Our initial experiments show that searching for answers in longer text degrades the performance of the QA system drastically. Therefore, we propose splitting the transcriptions of lecture videos into short passages and determining passage-question matching using question-aware passage representations. The proposed approach lets us utilize competitive neural network-based reading comprehension models for our task and improves the performance of the developed QA system.

Conference Object | Citation - WoS: 2 | Citation - Scopus: 2
Turkish Broadcast News Transcription Revisited (2018)
Saraçlar, Murat; Arısoy, Ebru
In this study, the automatic speech recognition based transcription system for Turkish broadcast news implemented about ten years ago is updated with current methods, and its performance is measured on the same data. In recent years, deep learning methods based on artificial neural networks have provided a marked improvement in speech recognition error rates and are now in widespread use. In this paper, neural networks are used for the acoustic and language models, the main components of the developed speech recognition system. For acoustic modeling, deep neural networks are optimized with both cross-entropy and discriminative sequence objective functions. In addition, to model long-term dependencies, time-delay neural networks are used, which perform similarly to recurrent neural networks but can be trained more quickly. The lowest error rates are then reached by optimizing these networks with discriminative training. For the language model, recurrent neural networks are used. With these new neural network based models, word error rates are observed to be halved, falling below 10%.

Book Part
Turkish Speech Recognition (2018)
Arısoy, Ebru; Saraçlar, Murat
Automatic speech recognition (ASR) is one of the most important applications of speech and language processing, as it forms the bridge between spoken and written language processing. This chapter presents an overview of the foundations of ASR, followed by a summary of Turkish language resources for ASR and a review of various Turkish ASR systems. Language resources include acoustic and text corpora as well as linguistic tools such as morphological parsers, morphological disambiguators, and dependency parsers, discussed in more detail in other chapters. Turkish ASR systems vary in the type and amount of data used for building the models. The focus of most of the research on Turkish ASR is the language modeling component covered in Chap. 4.

Conference Object | Citation - WoS: 1 | Citation - Scopus: 5
Uncertainty-Aware Representations for Spoken Question Answering (Institute of Electrical and Electronics Engineers Inc., 2021)
Ünlü, Merve; Arısoy, Ebru
This paper describes a spoken question answering system that utilizes the uncertainty in automatic speech recognition (ASR) to mitigate the effect of ASR errors on question answering.
Spoken question answering is typically performed by transcribing spoken content with an ASR system and then applying text-based question answering methods to the ASR transcriptions. Question answering on spoken documents is more challenging than question answering on text documents, since ASR transcriptions can be erroneous, which degrades system performance. In this paper, we propose integrating confusion networks with word confidence scores into an end-to-end neural network-based question answering system that works on ASR transcriptions. The integration is performed by generating uncertainty-aware embedding representations from the confusion networks. The proposed approach improves the F1 score in a question answering task developed for spoken lectures by providing tighter integration of ASR and question answering.
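The idea behind uncertainty-aware embeddings in the last abstract can be illustrated with a minimal sketch: each slot of an ASR confusion network holds several competing word hypotheses with confidence scores, and one simple way to collapse a slot into a single vector is a confidence-weighted average of the competing word embeddings. The toy vocabulary, embedding dimension, and weighting scheme below are illustrative assumptions, not the exact method of the cited paper.

```python
import numpy as np

# Toy word embeddings; in practice these would come from a trained model.
# Names and the dimension EMB_DIM are illustrative assumptions.
EMB_DIM = 4
rng = np.random.default_rng(0)
vocab = {
    "recognition": rng.normal(size=EMB_DIM),
    "wreck": rng.normal(size=EMB_DIM),
}

def uncertainty_aware_embedding(slot, vocab):
    """Collapse one confusion-network slot, a list of (word, confidence)
    pairs, into a single vector: the confidence-weighted average of the
    competing word embeddings."""
    total = sum(conf for _, conf in slot)  # normalize confidences
    vec = np.zeros(EMB_DIM)
    for word, conf in slot:
        vec += (conf / total) * vocab[word]
    return vec

# One slot where the ASR system is unsure between two hypotheses.
slot = [("recognition", 0.7), ("wreck", 0.3)]
vec = uncertainty_aware_embedding(slot, vocab)
```

The resulting vector can then be fed to a downstream question answering model in place of a single-word embedding, so that low-confidence regions of the transcript contribute a blend of the competing hypotheses rather than committing to a possibly wrong 1-best word.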

