Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11779/686
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Saraçlar, Murat
dc.contributor.author: Arısoy, Ebru
dc.date.accessioned: 2019-02-28T13:04:26Z
dc.date.accessioned: 2019-02-28T11:08:18Z
dc.date.available: 2019-02-28T13:04:26Z
dc.date.available: 2019-02-28T11:08:18Z
dc.date.issued: 2016
dc.identifier.citation: Arisoy, E., Saraclar, M., Compositional Neural Network Language Models for Agglutinative Languages. p. 3494-3498. (en_US)
dc.identifier.issn: 2308-457X
dc.identifier.uri: http://dx.doi.org/10.21437/Interspeech.2016-1239
dc.identifier.uri: https://hdl.handle.net/20.500.11779/686
dc.description: Ebru Arısoy (MEF Author) (en_US)
dc.description.abstract: Continuous space language models (CSLMs) have been proven to be successful in speech recognition. With proper training of the word embeddings, words that are semantically or syntactically related are expected to be mapped to nearby locations in the continuous space. In agglutinative languages, words are formed by concatenating stems and suffixes; as a result, compositional modeling is important. However, when trained on word tokens, CSLMs do not explicitly consider this structure. In this paper, we explore compositional modeling of stems and suffixes in a long short-term memory neural network language model. Our proposed models jointly learn distributed representations for stems and endings (concatenations of suffixes) and predict the probability for stem and ending sequences. Experiments on the Turkish Broadcast News transcription task show that further gains on top of a state-of-the-art stem-ending-based n-gram language model can be obtained with the proposed models. (en_US)
dc.language.iso: en (en_US)
dc.relation.ispartof: Conference: 17th Annual Conference of the International Speech Communication Association (INTERSPEECH 2016); Location: San Francisco, CA; Date: SEP 08-12, 2016 (en_US)
dc.rights: info:eu-repo/semantics/closedAccess (en_US)
dc.subject: Agglutinative languages (en_US)
dc.subject: Sub-word-based language modeling (en_US)
dc.subject: Long short-term memory (en_US)
dc.subject: Language modeling (en_US)
dc.subject: Author information (en_US)
dc.title: Compositional Neural Network Language Models for Agglutinative Languages (en_US)
dc.type: Conference Object (en_US)
dc.identifier.doi: 10.21437/Interspeech.2016-1239
dc.identifier.scopus: 2-s2.0-84994336850 (en_US)
dc.authorid: Ebru Arısoy / 0000-0002-8311-3611
dc.description.woscitationindex: Conference Proceedings Citation Index - Science; Conference Proceedings Citation Index - Social Science & Humanities
dc.description.WoSDocumentType: Proceedings Paper
dc.description.WoSPublishedMonth: Eylül (September) (en_US)
dc.description.WoSIndexDate: 2016 (en_US)
dc.description.WoSYOKperiod: YÖK - 2016-17 (en_US)
dc.relation.publicationcategory: Konferans Öğesi - Uluslararası - Kurum Öğretim Elemanı (Conference Item - International - Institutional Faculty Member) (en_US)
dc.identifier.endpage: 3498 (en_US)
dc.identifier.startpage: 3494 (en_US)
dc.department: Mühendislik Fakültesi, Elektrik Elektronik Mühendisliği Bölümü (Faculty of Engineering, Department of Electrical and Electronics Engineering) (en_US)
dc.identifier.wos: WOS:000409394402080 (en_US)
dc.institutionauthor: Arısoy, Ebru
item.grantfulltext: embargo_20890214
item.fulltext: With Fulltext
item.languageiso639-1: en
item.openairetype: Conference Object
item.openairecristype: http://purl.org/coar/resource_type/c_18cf
item.cerifentitytype: Publications
crisitem.author.dept: 02.05. Department of Electrical and Electronics Engineering
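To make the approach described in the abstract above more concrete, the following is a minimal, hypothetical sketch of a compositional stem-and-ending LSTM language model written in PyTorch. The layer sizes, the concatenation-based composition of stem and ending embeddings, the two separate softmax outputs, and the toy vocabulary sizes are illustrative assumptions for this sketch; they are not taken from the paper.

# Hypothetical sketch of a compositional (stem + ending) LSTM language model.
# Dimensions, the way stem/ending vectors are composed, and the output
# factorization are assumptions for illustration, not the paper's exact model.
import torch
import torch.nn as nn

class StemEndingLSTMLM(nn.Module):
    def __init__(self, n_stems, n_endings, emb_dim=256, hidden_dim=512):
        super().__init__()
        self.stem_emb = nn.Embedding(n_stems, emb_dim)    # distributed stem representations
        self.end_emb = nn.Embedding(n_endings, emb_dim)   # distributed ending representations
        # Assumed composition: concatenate stem and ending embeddings per word.
        self.lstm = nn.LSTM(2 * emb_dim, hidden_dim, batch_first=True)
        self.stem_out = nn.Linear(hidden_dim, n_stems)    # scores for the next stem
        self.end_out = nn.Linear(hidden_dim, n_endings)   # scores for the next ending

    def forward(self, stems, endings, state=None):
        # stems, endings: (batch, seq_len) index tensors for a stem/ending sequence
        x = torch.cat([self.stem_emb(stems), self.end_emb(endings)], dim=-1)
        h, state = self.lstm(x, state)
        return self.stem_out(h), self.end_out(h), state

# Toy usage: score one random batch (vocabulary sizes and shapes are made up).
model = StemEndingLSTMLM(n_stems=10000, n_endings=2000)
stems = torch.randint(0, 10000, (4, 20))
endings = torch.randint(0, 2000, (4, 20))
stem_logits, end_logits, _ = model(stems, endings)
loss = (nn.functional.cross_entropy(stem_logits[:, :-1].reshape(-1, 10000), stems[:, 1:].reshape(-1))
        + nn.functional.cross_entropy(end_logits[:, :-1].reshape(-1, 2000), endings[:, 1:].reshape(-1)))

In this sketch the joint probability of a word is factored into a stem term and an ending term predicted from the same LSTM state, which is one plausible reading of "predict the probability for stem and ending sequences"; the published model may compose and factor these differently.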
Appears in Collections: Elektrik Elektronik Mühendisliği Bölümü Koleksiyonu / Electrical and Electronics Engineering Department Collection
Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
Files in This Item:
File: Compositional Neural Network Language Models for Agglutinative Languages.PDF
Description: Konferans Dosyası (Conference File)
Size: 367.92 kB
Format: Adobe PDF
Access: Embargoed until 2089-02-14 (available by request)
Scopus citations: 4 (checked on Nov 16, 2024)
Web of Science citations: 2 (checked on Nov 16, 2024)
Page view(s): 24 (checked on Nov 18, 2024)
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.