Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.11779/705
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Stanley | - |
dc.contributor.author | Sethy, Abhinav | - |
dc.contributor.author | Ramabhadran, Bhuvana | - |
dc.contributor.author | Arısoy, Ebru | - |
dc.date.accessioned | 2019-02-28T13:04:26Z | |
dc.date.accessioned | 2019-02-28T11:08:19Z | |
dc.date.available | 2019-02-28T13:04:26Z | |
dc.date.available | 2019-02-28T11:08:19Z | |
dc.date.issued | 2015 | - |
dc.identifier.citation | Arisoy, E., Sethy, A., Ramabhadran, B., & Chen, S. (2015, April 19-24). Bidirectional recurrent neural network language models for automatic speech recognition. 40th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brisbane, Australia, 5421-5425. | en_US |
dc.identifier.issn | 1520-6149 | - |
dc.identifier.uri | https://hdl.handle.net/20.500.11779/705 | - |
dc.description | ##nofulltext## | en_US |
dc.description | Ebru Arısoy (MEF Author) | en_US |
dc.description.abstract | Recurrent neural network language models have enjoyed great success in speech recognition, partially due to their ability to model longer-distance context than word n-gram models. In recurrent neural networks (RNNs), contextual information from past inputs is modeled with the help of recurrent connections at the hidden layer, while Long Short-Term Memory (LSTM) neural networks are RNNs that contain units that can store values for arbitrary amounts of time. While conventional unidirectional networks predict outputs from only past inputs, one can build bidirectional networks that also condition on future inputs. In this paper, we propose applying bidirectional RNNs and LSTM neural networks to language modeling for speech recognition. We discuss issues that arise when utilizing bidirectional models for speech, and compare unidirectional and bidirectional models on an English Broadcast News transcription task. We find that bidirectional RNNs significantly outperform unidirectional RNNs, but bidirectional LSTMs do not provide any further gain over their unidirectional counterparts. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartof | 40th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brisbane, Australia, April 19-24, 2015 | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | en_US |
dc.subject | Long short-term memory | en_US |
dc.subject | Bidirectional neural networks | en_US |
dc.subject | Language modeling | en_US |
dc.subject | Recurrent neural networks | en_US |
dc.title | Bidirectional Recurrent Neural Network Language Models for Automatic Speech Recognition | en_US |
dc.type | Conference Object | en_US |
dc.description.woscitationindex | Conference Proceedings Citation Index - Science | - |
dc.description.WoSDocumentType | Proceedings Paper | |
dc.description.WoSPublishedMonth | April | en_US |
dc.description.WoSIndexDate | 2015 | en_US |
dc.description.WoSYOKperiod | YÖK - 2014-15 | en_US |
dc.relation.publicationcategory | Conference Item - International - Institutional Faculty Member | en_US |
dc.identifier.endpage | 5425 | en_US |
dc.identifier.startpage | 5421 | en_US |
dc.department | Faculty of Engineering, Department of Electrical and Electronics Engineering | en_US |
dc.identifier.wos | WOS:000427402905108 | en_US |
dc.institutionauthor | Arısoy, Ebru | - |
item.grantfulltext | none | - |
item.fulltext | No Fulltext | - |
item.languageiso639-1 | en | - |
item.openairetype | Conference Object | - |
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | - |
item.cerifentitytype | Publications | - |
crisitem.author.dept | 02.05. Department of Electrical and Electronics Engineering | - |
Appears in Collections: | Elektrik Elektronik Mühendisliği Bölümü Koleksiyonu; WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
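The abstract above describes language models in which each word is conditioned on both its left (past) and right (future) context. As a rough illustration of that idea, below is a minimal sketch of a bidirectional LSTM language model in PyTorch. It is not the authors' implementation; all names, layer sizes, and the two-LSTM output-shifting scheme are assumptions made for the example. The point it shows is that the prediction at position t combines a forward state over words 1..t-1 with a backward state over words t+1..T, so the target word is excluded from its own context.

```python
# Illustrative sketch only (assumed architecture, not the paper's implementation):
# a bidirectional LSTM language model where word w_t is predicted from the
# forward state over w_1..w_{t-1} and the backward state over w_{t+1}..w_T.
import torch
import torch.nn as nn


class BiLSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Two separate unidirectional LSTMs let us shift their outputs
        # independently; nn.LSTM(bidirectional=True) would align the forward
        # and backward states at the same position, leaking the target word.
        self.fwd_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.bwd_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) word indices, including sentence boundaries.
        emb = self.embed(tokens)
        fwd_out, _ = self.fwd_lstm(emb)                    # left-to-right states
        bwd_out, _ = self.bwd_lstm(torch.flip(emb, [1]))   # right-to-left states
        bwd_out = torch.flip(bwd_out, [1])
        batch, seq_len, hidden = fwd_out.shape
        zeros = fwd_out.new_zeros(batch, 1, hidden)
        # Shift so position t sees the forward state up to t-1 and the
        # backward state from t+1 onward.
        left_ctx = torch.cat([zeros, fwd_out[:, :-1]], dim=1)
        right_ctx = torch.cat([bwd_out[:, 1:], zeros], dim=1)
        return self.out(torch.cat([left_ctx, right_ctx], dim=-1))  # (batch, seq_len, vocab)


if __name__ == "__main__":
    model = BiLSTMLanguageModel(vocab_size=1000)
    dummy = torch.randint(0, 1000, (2, 7))   # two toy sentences of length 7
    print(model(dummy).shape)                # torch.Size([2, 7, 1000])
```

Because a model of this form needs the entire sentence before it can score any word, it cannot be used for incremental left-to-right prediction; this is one of the practical issues the abstract alludes to when applying bidirectional models to speech recognition.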
Web of Science™ citations: 51 (checked on Nov 16, 2024)
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.