Automatic sub-word unit discovery and pronunciation lexicon induction for automatic speech recognition with application to under-resourced languages
dc.contributor.advisor | Niesler, T. R. | en_ZA |
dc.contributor.author | Agenbag, Wiehan | en_ZA |
dc.contributor.other | Stellenbosch University. Faculty of Engineering. Dept. of Electrical and Electronic Engineering. | en_ZA |
dc.date.accessioned | 2020-02-25T07:33:35Z | |
dc.date.accessioned | 2020-04-28T12:21:14Z | |
dc.date.available | 2020-02-25T07:33:35Z | |
dc.date.available | 2020-04-28T12:21:14Z | |
dc.date.issued | 2020-04 | |
dc.description | Thesis (PhD)--Stellenbosch University, 2020. | en_ZA |
dc.description.abstract | ENGLISH ABSTRACT: Automatic speech recognition is an increasingly important mode of human-computer interaction. However, its implementation requires a sub-word unit inventory to be designed and an associated pronunciation lexicon to be crafted, a process that requires linguistic expertise. This step represents a significant bottleneck for most of the world’s under-resourced languages, for which such resources are not available. We address this challenge by developing techniques to automate both the discovery of sub-word units and the induction of corresponding pronunciation lexicons. Our first attempts at sub-word unit discovery made use of a shift- and scale-invariant convolutional sparse coding and dictionary learning framework. After initial investigations showed that this model exhibits significant temporal overlap between units, the sparse codes were constrained to prohibit overlap and the sparse coding basis functions were further globally optimised using a metaheuristic search procedure. The result was a unit inventory with a strong correspondence with reference phonemes, but highly variable associated transcriptions. To reduce transcription variability, two lattice-constrained Viterbi training strategies were developed. These involved jointly training either a bigram sub-word unit language model or a unique pronunciation model for each word type along with the unit inventory. Taking this direction made it necessary to abandon sparse coding in favour of a more conventional HMM-GMM approach. However, the resulting strategies yielded inventories with a higher degree of correspondence with reference phonemes, and led to more consistent transcriptions. The strategies were further refined by introducing a novel sub-word unit discovery approach based on self-organising HMM-GMM states that incorporates orthographic knowledge during discovery. Furthermore, a more sophisticated pronunciation modelling approach and a two-stage pruning process were introduced. We demonstrate that the proposed methods are able to discover sub-word units and associated lexicons that perform as well as expert-developed systems in terms of automatic speech recognition performance for Acholi, and close to this level for Ugandan English. The worst-performing language among those evaluated was Luganda, whose highly agglutinating vocabulary was observed to make automatic lexicon induction challenging. As a final step, we addressed this by introducing a data-driven morphological segmentation step that is applied before lexicon induction is performed. This is demonstrated to close the gap with the expert lexicon for Luganda. The techniques developed in this thesis demonstrate that it is possible to develop an automatic speech recognition system in an under-resourced setting using an automatically induced lexicon without sacrificing performance, even in the case of a highly agglutinating language. | en_ZA |
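To make the first stage described in the abstract more concrete, the following is a minimal, illustrative sketch of greedy, non-overlapping convolutional matching pursuit over a fixed dictionary of unit templates, in the spirit of the shift-invariant sparse coding mentioned above. It is not the thesis implementation: the feature matrix, dictionary and all names are hypothetical, and dictionary learning, scale invariance and the metaheuristic refinement of the basis functions are omitted.

```python
import numpy as np

def nonoverlap_matching_pursuit(X, templates, n_units):
    """Greedily explain the feature matrix X (T x D) with at most n_units
    non-overlapping placements of fixed-length unit templates (K x L x D)."""
    T, D = X.shape
    K, L, _ = templates.shape
    norms = (templates ** 2).sum(axis=(1, 2)) + 1e-12   # template energies
    residual = X.copy()
    occupied = np.zeros(T, dtype=bool)                   # frames already explained
    activations = []                                     # (unit index, start frame, scale)
    for _ in range(n_units):
        best_score, best = 0.0, None
        for k in range(K):
            for t in range(T - L + 1):
                if occupied[t:t + L].any():              # enforce the no-overlap constraint
                    continue
                inner = float((residual[t:t + L] * templates[k]).sum())
                score = inner * inner / norms[k]         # residual energy this placement removes
                if score > best_score:
                    best_score, best = score, (k, t, inner / norms[k])
        if best is None:                                 # nothing left worth explaining
            break
        k, t, scale = best
        residual[t:t + L] -= scale * templates[k]        # subtract the scaled template
        occupied[t:t + L] = True
        activations.append((k, t, scale))
    return activations, residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.standard_normal((200, 13))               # stand-in for MFCC-like features
    dictionary = rng.standard_normal((8, 10, 13))        # 8 hypothetical 10-frame unit templates
    acts, _ = nonoverlap_matching_pursuit(feats, dictionary, n_units=15)
    print(acts[:3])
```

Marking frames as occupied is one simple way to realise the non-overlap constraint the abstract refers to; the constraint and optimisation used in the thesis are more sophisticated.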
dc.description.abstract | AFRIKAANSE OPSOMMING: Outomatiese spraakherkenning is ’n toenemend belangrike manier van interaksie tussen mens en rekenaar. Die implementering daarvan vereis egter dat ’n inventaris van subwoordeenhede ontwerp word en dat daar ’n gepaardgaande uitspraakleksikon geskep moet word, ’n proses wat taalkundige deskundigheid vereis. Hierdie stap is ’n belangrike bottelnek vir die meeste van die wêreld se hulpbron-beperkte tale, waarvoor sulke hulpbronne nie beskikbaar is nie. Ons pak hierdie uitdaging aan deur tegnieke te ontwikkel om sowel die ontdekking van subwoordeenhede as die induksie van ooreenstemmende uitspraakleksikons te outomatiseer. Ons eerste pogings tot die ontdekking van subwoordeenhede het gebruik gemaak van ’n skuif- en skaalinvariante konvolusionêre ylkodering- en woordeboekleerraamwerk. Nadat aanvanklike ondersoeke getoon het dat hierdie model ’n beduidende temporale oorvleueling tussen eenhede tot gevolg het, is die ylkodes beperk om oorvleueling te verbied, en die ylkoderingsbasisfunksies verder globaal geoptimiseer met behulp van ’n metaheuristiese soekprosedure. Die resultaat was ’n eenheidsinventaris wat ’n sterk ooreenstemming met verwysingsfoneme toon, maar met hoogs wisselvallige gepaardgaande transkripsies. Om die transkripsiewisselvalligheid te verminder, is twee tralie-beperkte Viterbi-opleidingstrategieë ontwikkel. Dit behels gesamentlike opleiding van óf ’n bigram-subwoordeenheidstaalmodel óf ’n unieke uitspraakmodel vir elke woordsoort, tesame met die eenheidsinventaris. Deur hierdie rigting in te slaan, was dit nodig om die ylkodering te laat vaar ten gunste van ’n meer konvensionele HMM-GMM benadering. Die gevolglike strategieë het egter subwoordeenheidinventarisse gelewer met ’n hoër mate van korrespondensie met verwysingsfoneme, en het gelei tot meer konsekwente transkripsies. Die strategieë is verder verfyn deur ’n nuwe benadering tot die ontdekking van subwoordeenhede in te stel, gebaseer op selforganiserende HMM-GMM toestande wat ortografiese kennis insluit tydens die ontdekking van subwoordeenhede. Verder is ’n meer gesofistikeerde benadering tot uitspraakmodelering en ’n twee-fase snoeiproses ingestel. Ons demonstreer dat die voorgestelde metodes subwoordeenhede en gepaardgaande leksikons kan ontdek wat so goed presteer soos stelsels ontwerp deur deskundiges in terme van outomatiese spraakherkenning vir Acholi, en naby aan hierdie vlak vir Ugandese Engels. Die taal wat die swakste gevaar het onder die wat beoordeel was, was Luganda, wat ’n uiters agglutinerende woordeskat het wat waargeneem is om outomatiese leksikoninduksie uitdagend te maak. As ’n laaste stap het ons dit aangespreek deur ’n datagedrewe morfologiese segmenteringsstap in te stel wat toegepas word voordat leksikon-induksie uitgevoer word. Dit word getoon om die gaping met die deskundige leksikon vir Luganda te sluit. Die tegnieke wat in hierdie proefskrif ontwikkel is, demonstreer dat dit moontlik is om ’n outomatiese spraakherkenningstelsel te ontwikkel in ’n hulpbronbeperkte omgewing met behulp van ’n outomaties geïnduseerde leksikon, sonder om prestasie in te boet, selfs in die geval van ’n uiters agglutinerende taal. | af_ZA |
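The lattice-constrained Viterbi training strategies summarised in both abstracts couple acoustic scores with a bigram sub-word unit language model during decoding. The sketch below is a toy, frame-level Viterbi decoder under that kind of constraint, assuming pre-computed per-frame unit log-likelihoods; it is not the thesis's lattice-constrained HMM-GMM implementation, and all arrays, names and parameters are hypothetical.

```python
import numpy as np

def bigram_constrained_viterbi(frame_loglik, bigram_logprob, stay_logprob=-0.1):
    """Return the best per-frame unit sequence given per-frame unit
    log-likelihoods (T x K) and a bigram unit language model (K x K),
    where bigram_logprob[i, j] = log P(unit j follows unit i)."""
    T, K = frame_loglik.shape
    delta = frame_loglik[0].copy()                  # best score ending in each unit at frame 0
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        switch = delta[:, None] + bigram_logprob    # score of switching from unit i to unit j
        np.fill_diagonal(switch, -np.inf)           # self-transitions are handled by 'stay'
        stay = delta + stay_logprob                 # remain in the same unit for another frame
        best_prev = switch.argmax(axis=0)
        best_switch = switch.max(axis=0)
        use_stay = stay > best_switch
        delta = np.where(use_stay, stay, best_switch) + frame_loglik[t]
        back[t] = np.where(use_stay, np.arange(K), best_prev)
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                   # trace the best path backwards
        path.append(int(back[t, path[-1]]))
    return path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    loglik = rng.standard_normal((50, 6))           # hypothetical per-frame unit scores
    lm = np.log(rng.dirichlet(np.ones(6), size=6))  # hypothetical bigram unit language model
    print(bigram_constrained_viterbi(loglik, lm))
```

Penalising unit switches with the bigram log-probabilities is what discourages the highly variable transcriptions mentioned in the abstract; in the thesis this constraint is applied within lattice-constrained training rather than in a standalone decoder like this one.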
dc.description.version | Doctoral | en_ZA |
dc.format.extent | xvi, 126 leaves : illustrations (some color) | |
dc.identifier.uri | http://hdl.handle.net/10019.1/108134 | |
dc.language.iso | en | en_ZA |
dc.publisher | Stellenbosch : Stellenbosch University | en_ZA |
dc.rights.holder | Stellenbosch University | en_ZA |
dc.subject | Automatic sub-word unit discovery | en_ZA |
dc.subject | Automatic speech recognition | en_ZA |
dc.subject | Speech processing system | en_ZA |
dc.subject | UCTD | en_ZA |
dc.title | Automatic sub-word unit discovery and pronunciation lexicon induction for automatic speech recognition with application to under-resourced languages | en_ZA |
dc.type | Thesis | en_ZA |