Electrical and Electronics Engineering Department Collection (Elektrik Elektronik Mühendisliği Bölümü Koleksiyonu)
Permanent URI for this collection: https://hdl.handle.net/20.500.11779/1941
Browsing the collection by Publisher "Isca-INT Speech Communication Assoc"
Conference Object (Citations: WoS 2, Scopus 5)
Compositional Neural Network Language Models for Agglutinative Languages (Isca-INT Speech Communication Assoc, 2016). Saraçlar, Murat; Arısoy, Ebru.
Continuous space language models (CSLMs) have proven successful in speech recognition. With proper training of the word embeddings, words that are semantically or syntactically related are expected to be mapped to nearby locations in the continuous space. In agglutinative languages, words are formed by concatenating stems and suffixes, so compositional modeling is important. However, when trained on word tokens, CSLMs do not explicitly consider this structure. In this paper, we explore compositional modeling of stems and suffixes in a long short-term memory (LSTM) neural network language model. Our proposed models jointly learn distributed representations for stems and endings (concatenations of suffixes) and predict the probability of stem and ending sequences. Experiments on the Turkish broadcast news transcription task show that the proposed models yield further gains on top of a state-of-the-art stem-ending-based n-gram language model.

Conference Object (Citations: WoS 4, Scopus 4)
Multi-Stream Long Short-Term Memory Neural Network Language Model (Isca-INT Speech Communication Assoc, 2015). Saraçlar, Murat; Arısoy, Ebru.
Long short-term memory (LSTM) neural networks are recurrent neural networks that contain memory units able to store contextual information from past inputs for arbitrary amounts of time. A typical LSTM neural network language model is trained by feeding an input sequence, i.e., a stream of words, to the input layer of the network, and the output layer predicts the probability of the next word given the past inputs in the sequence. In this paper, we introduce a multi-stream LSTM neural network language model in which multiple asynchronous input sequences are fed to the network as parallel streams while it predicts the output word sequence. For our experiments, we use a sub-word sequence in addition to the word sequence as the input streams, which allows joint training of the LSTM neural network language model on both information sources.
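To make the first abstract concrete, below is a minimal PyTorch sketch of a compositional LSTM language model over stem/ending pairs. It is an illustration of the idea only, not the authors' exact architecture; the class name, vocabulary sizes, and dimensions are placeholders chosen for the example.

```python
# A minimal sketch of a compositional LSTM LM over stems and endings.
# Illustrative only; not the architecture from the 2016 paper. All sizes
# and names are placeholders.
import torch
import torch.nn as nn

class StemEndingLSTMLM(nn.Module):
    def __init__(self, n_stems, n_endings, emb_dim=128, hidden_dim=256):
        super().__init__()
        # Separate embedding tables: representations for stems and endings
        # are learned jointly with the rest of the network.
        self.stem_emb = nn.Embedding(n_stems, emb_dim)
        self.ending_emb = nn.Embedding(n_endings, emb_dim)
        # The LSTM sees the concatenated stem+ending embedding per position.
        self.lstm = nn.LSTM(2 * emb_dim, hidden_dim, batch_first=True)
        # Two output heads: predict the next stem and the next ending.
        self.stem_out = nn.Linear(hidden_dim, n_stems)
        self.ending_out = nn.Linear(hidden_dim, n_endings)

    def forward(self, stems, endings):
        # stems, endings: (batch, seq_len) integer indices
        x = torch.cat([self.stem_emb(stems), self.ending_emb(endings)], dim=-1)
        h, _ = self.lstm(x)
        # Logits over next stems and next endings at every position.
        return self.stem_out(h), self.ending_out(h)

model = StemEndingLSTMLM(n_stems=10000, n_endings=2000)
stems = torch.randint(0, 10000, (4, 12))
endings = torch.randint(0, 2000, (4, 12))
stem_logits, ending_logits = model(stems, endings)
print(stem_logits.shape, ending_logits.shape)  # (4, 12, 10000) (4, 12, 2000)
```

Factoring the vocabulary into stems and endings keeps both softmax layers far smaller than a full-word vocabulary would require, which is one reason this decomposition is attractive for agglutinative languages such as Turkish.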
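Similarly, the second abstract can be sketched as a two-stream LSTM language model, where a word stream and a sub-word stream feed the network in parallel while it predicts the next word. For simplicity this sketch assumes the sub-word stream has been pre-aligned to word positions; the paper's streams are asynchronous, which the sketch does not model. All names and sizes are placeholders.

```python
# A minimal two-stream LSTM LM sketch: word + sub-word input streams,
# word-level output. Illustrative only; the 2015 paper handles asynchronous
# streams, which this simplified version does not.
import torch
import torch.nn as nn

class MultiStreamLSTMLM(nn.Module):
    def __init__(self, n_words, n_subwords, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, emb_dim)
        self.subword_emb = nn.Embedding(n_subwords, emb_dim)
        # Both streams are combined before the recurrent layer, so the
        # LSTM state carries information from both sources at once.
        self.lstm = nn.LSTM(2 * emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_words)

    def forward(self, words, subwords):
        # words, subwords: (batch, seq_len) indices, aligned per position
        x = torch.cat([self.word_emb(words), self.subword_emb(subwords)], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # logits over the next word at each position

model = MultiStreamLSTMLM(n_words=20000, n_subwords=5000)
words = torch.randint(0, 20000, (4, 12))
subwords = torch.randint(0, 5000, (4, 12))
print(model(words, subwords).shape)  # (4, 12, 20000)
```

Because both embedding tables are trained against the same word-prediction loss, the word and sub-word representations are learned jointly, which is the core point of the multi-stream formulation.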
