Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11779/705
Full metadata record
DC Field | Value | Language
dc.contributor.author | Arısoy, Ebru | -
dc.contributor.author | Sethy, Abhinav | -
dc.contributor.author | Ramabhadran, Bhuvana | -
dc.contributor.author | Chen, Stanley | -
dc.date.accessioned | 2019-02-28T13:04:26Z | -
dc.date.accessioned | 2019-02-28T11:08:19Z | -
dc.date.available | 2019-02-28T13:04:26Z | -
dc.date.available | 2019-02-28T11:08:19Z | -
dc.date.issued | 2015 | -
dc.identifier.citation | Arisoy, E., Sethy, A., Ramabhadran, B., & Chen, S. (2015, April 19-24). Bidirectional recurrent neural network language models for automatic speech recognition. 40th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brisbane, Australia, pp. 5421-5425. | en_US
dc.identifier.issn | 1520-6149 | -
dc.identifier.uri | https://hdl.handle.net/20.500.11779/705 | -
dc.description | Ebru Arısoy (MEF Author) | en_US
dc.description | ##nofulltext## | en_US
dc.description.abstract | Recurrent neural network language models have enjoyed great success in speech recognition, partially due to their ability to model longer-distance context than word n-gram models. In recurrent neural networks (RNNs), contextual information from past inputs is modeled with the help of recurrent connections at the hidden layer, while Long Short-Term Memory (LSTM) neural networks are RNNs that contain units that can store values for arbitrary amounts of time. While conventional unidirectional networks predict outputs from only past inputs, one can build bidirectional networks that also condition on future inputs. In this paper, we propose applying bidirectional RNNs and LSTM neural networks to language modeling for speech recognition. We discuss issues that arise when utilizing bidirectional models for speech, and compare unidirectional and bidirectional models on an English Broadcast News transcription task. We find that bidirectional RNNs significantly outperform unidirectional RNNs, but bidirectional LSTMs do not provide any further gain over their unidirectional counterparts. | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | 40th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brisbane, Australia, April 19-24, 2015 | en_US
dc.rights | info:eu-repo/semantics/closedAccess | en_US
dc.subject | Language modeling | en_US
dc.subject | Recurrent neural networks | en_US
dc.subject | Long short-term memory | en_US
dc.subject | Bidirectional neural networks | en_US
dc.title | Bidirectional recurrent neural network language models for automatic speech recognition | en_US
dc.type | Conference Object | en_US
dc.description.woscitationindex | Conference Proceedings Citation Index - Science | -
dc.description.WoSDocumentType | Proceedings Paper
dc.description.WoSPublishedMonth | April | en_US
dc.description.WoSIndexDate | 2015 | en_US
dc.description.WoSYOKperiod | YÖK - 2014-15 | en_US
dc.relation.publicationcategory | Conference Item - International - Institutional Faculty Member | en_US
dc.identifier.endpage | 5425 | en_US
dc.identifier.startpage | 5421 | en_US
dc.department | Faculty of Engineering, Department of Electrical and Electronics Engineering | en_US
dc.identifier.wos | WOS:000427402905108 | en_US
dc.institutionauthor | Arısoy, Ebru | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.grantfulltext | none | -
item.languageiso639-1 | en | -
item.cerifentitytype | Publications | -
item.fulltext | No Fulltext | -
item.openairetype | Conference Object | -
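The abstract above describes bidirectional recurrent architectures that predict each word from both its left (past) and right (future) context. As a rough illustration only, and not the paper's implementation, the sketch below builds a bidirectional LSTM language model, assuming PyTorch; the class name, layer sizes, and the zero-padded shifting of the forward and backward states (so that the target word is excluded from its own context) are all hypothetical choices made here for illustration.

# Minimal sketch of a bidirectional LSTM language model (illustrative only;
# not the authors' implementation). Assumes PyTorch; all sizes are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMLanguageModel(nn.Module):  # hypothetical name
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # bidirectional=True runs a second LSTM over the reversed sequence,
        # so every position also gets a summary of the future words.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) word indices
        out, _ = self.lstm(self.embed(tokens))   # (batch, seq_len, 2*hidden)
        fwd, bwd = out.chunk(2, dim=-1)          # forward / backward states
        # Predict word t from the forward state at t-1 (past words only) and
        # the backward state at t+1 (future words only), padding the edges
        # with zeros so the word being predicted never sees itself.
        fwd = F.pad(fwd, (0, 0, 1, 0))[:, :-1]   # shift right along time
        bwd = F.pad(bwd, (0, 0, 0, 1))[:, 1:]    # shift left along time
        return self.proj(torch.cat([fwd, bwd], dim=-1))  # unnormalized scores

# Usage: score two random 5-word "sentences".
model = BiLSTMLanguageModel()
scores = model(torch.randint(0, 10000, (2, 5)))
print(scores.shape)  # torch.Size([2, 5, 10000])

The shifting shown here is just one way to keep the predicted word out of its own context; the paper's own handling of the issues that arise when applying bidirectional models to speech recognition may differ.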
Appears in Collections: Electrical and Electronics Engineering Department Collection
WoS Indexed Publications Collection
Web of Science citations: 48 (checked on Jun 23, 2024)

Page views: 8 (checked on Jun 26, 2024)

Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.