Browsing by Author "Kamper, Herman"
- The impact of accent identification errors on speech recognition of South African English (Academy of Science of South Africa, 2014). Kamper, Herman; Niesler, Thomas R.
  For successful deployment, a South African English speech recognition system must be capable of processing the prevalent accents in this variety of English. Previous work on the accents of South African English has considered the case in which the accent of the input speech is known. Here we focus on the practical scenario in which the accent of the input speech is unknown and accent identification must occur at recognition time. By means of a set of contrastive experiments, we determine the effect that accent identification errors have on speech recognition performance. We focus on the specific configuration in which a set of accent-specific speech recognisers operate in parallel, thereby delivering both a recognition hypothesis and an identified accent in a single step. We find that, despite their considerable number, the accent identification errors do not degrade speech recognition performance. We conclude that, for our South African English data, there is no benefit to including a more complex explicit accent identification component in the overall speech recognition system.
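The parallel configuration the abstract describes can be sketched as follows: each accent-specific recogniser decodes the same utterance, and the single highest-scoring hypothesis is kept, yielding both a transcription and an implicitly identified accent in one step. The recognisers, accents, and scores below are illustrative placeholders, not the paper's actual systems.

```python
def recognise_parallel(utterance, recognisers):
    """Run every accent-specific recogniser on the utterance and keep
    the hypothesis with the highest log-likelihood score."""
    return max(
        (rec(utterance) for rec in recognisers.values()),
        key=lambda hyp: hyp["log_likelihood"],
    )

# Toy stand-ins for trained accent-specific recognisers; each returns a
# hypothesis together with the accent of the model that produced it.
recognisers = {
    "accent_a": lambda u: {"accent": "accent_a", "text": "hello world",
                           "log_likelihood": -310.2},
    "accent_b": lambda u: {"accent": "accent_b", "text": "hello word",
                           "log_likelihood": -325.7},
}

result = recognise_parallel("utt001.wav", recognisers)
print(result["accent"], result["text"])  # best-scoring recogniser wins
```

Note that accent identification here is a by-product of decoding: no separate accent classifier runs before recognition, which is exactly the configuration whose identification errors the paper analyses.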
- Learning dynamics of linear denoising autoencoders (PMLR, 2018). Pretorius, Arnu; Kroon, Steve; Kamper, Herman
  Denoising autoencoders (DAEs) have proven useful for unsupervised representation learning, but a thorough theoretical understanding of how the input noise influences learning is still lacking. Here we develop theory for how noise influences learning in DAEs. By focusing on linear DAEs, we are able to derive analytic expressions that exactly describe their learning dynamics. We verify our theoretical predictions with simulations as well as experiments on MNIST and CIFAR-10. The theory illustrates how, when tuned correctly, noise allows DAEs to ignore low-variance directions in the inputs while learning to reconstruct them. Furthermore, in a comparison of the learning dynamics of DAEs to standard regularised autoencoders, we show that noise has a similar regularisation effect to weight decay, but with faster training dynamics. We also show that our theoretical predictions approximate learning dynamics on real-world data and qualitatively match observed dynamics in nonlinear DAEs.
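The "ignore low-variance directions" effect can be illustrated with a minimal sketch (not the paper's derivation verbatim): for a linear DAE x̂ = W(x + ε) with isotropic noise ε ~ N(0, s²I) and zero-mean data with covariance Σ, the expected reconstruction loss is minimised by W = Σ(Σ + s²I)⁻¹, which shrinks the gain along each principal direction from 1 to λ/(λ + s²). High-variance directions pass through almost unchanged; low-variance directions are suppressed. The covariance and noise level below are assumed toy values.

```python
import numpy as np

s2 = 1.0  # assumed input-noise variance

# Toy data covariance: one high-variance and one low-variance direction.
Sigma = np.diag([10.0, 0.1])

# Closed-form minimiser of E||x - W(x + eps)||^2 for a linear DAE:
# W = Sigma (Sigma + s^2 I)^{-1}.
W = Sigma @ np.linalg.inv(Sigma + s2 * np.eye(2))

gains = np.diag(W)  # per-direction gain lambda / (lambda + s^2)
print(gains)        # ~[0.909, 0.091]: low-variance direction mostly ignored
```

Setting s² = 0 recovers the identity map (plain linear autoencoding), while increasing s² pushes all gains toward zero, mirroring the weight-decay-like regularisation effect the abstract mentions.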
- Multilingual and Unsupervised Subword Modeling for Zero-Resource Languages (Elsevier Ltd, 2021-04). Hermann, Enno; Kamper, Herman; Goldwater, Sharon
  Subword modeling for zero-resource languages aims to learn low-level representations of speech audio without using transcriptions or other resources from the target language (such as text corpora or pronunciation dictionaries). A good representation should capture phonetic content and abstract away from other types of variability, such as speaker differences and channel noise. Previous work in this area has primarily focused on unsupervised learning from target language data only, and has been evaluated only intrinsically. Here we directly compare multiple methods, including some that use only target language speech data and some that use transcribed speech from other (non-target) languages, and we evaluate using two intrinsic measures as well as on a downstream unsupervised word segmentation and clustering task. We find that combining two existing target-language-only methods yields better features than either method alone. Nevertheless, even better results are obtained by extracting target language bottleneck features using a model trained on other languages. Cross-lingual training using just one other language is enough to provide this benefit, but multilingual training helps even more. In addition to these results, which hold across both intrinsic measures and the extrinsic task, we discuss the qualitative differences between the different types of learned features.
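Bottleneck feature extraction, as described above, can be sketched as follows: a network is trained as a frame classifier on transcribed speech from other (non-target) languages, and target-language frames are then passed through it, taking the activations of a narrow hidden ("bottleneck") layer as features. The layer sizes are hypothetical, and the random weights below merely stand in for a trained multilingual model.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical shapes: 39-dim acoustic input -> 256 hidden -> 40-dim bottleneck.
# In practice these weights would come from a classifier trained on
# transcribed speech from one or more non-target languages.
W1, b1 = rng.standard_normal((39, 256)) * 0.1, np.zeros(256)
W2, b2 = rng.standard_normal((256, 40)) * 0.1, np.zeros(40)  # bottleneck layer

def bottleneck_features(frames):
    """Map acoustic frames of shape (n, 39) to 40-dim bottleneck features
    by reading out the narrow hidden layer's activations."""
    return relu(relu(frames @ W1 + b1) @ W2 + b2)

frames = rng.standard_normal((100, 39))  # stand-in for target-language frames
feats = bottleneck_features(frames)
print(feats.shape)  # (100, 40)
```

The classifier's output layer (and its non-target-language labels) is discarded at extraction time; only the bottleneck activations are kept as target-language features.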
- Speech recognition of South African English accents (Stellenbosch : Stellenbosch University, 2012-03). Kamper, Herman; Niesler, T. R.; Stellenbosch University. Faculty of Engineering. Dept. of Electrical and Electronic Engineering.
  ENGLISH ABSTRACT: Several accents of English are spoken in South Africa. Automatic speech recognition (ASR) systems should therefore be able to process the different accents of South African English (SAE). In South Africa, however, system development is hampered by the limited availability of speech resources. In this thesis we consider different acoustic modelling approaches and system configurations in order to determine which strategies take best advantage of a limited corpus of the five accents of SAE for the purpose of ASR. Three acoustic modelling approaches are considered: (i) accent-specific modelling, in which accents are modelled separately; (ii) accent-independent modelling, in which acoustic training data is pooled across accents; and (iii) multi-accent modelling, which allows selective data sharing between accents. For the latter approach, selective sharing is enabled by extending the decision-tree state clustering process normally used to construct tied-state hidden Markov models (HMMs) to allow accent-based questions. In a first set of experiments, we investigate phone and word recognition performance achieved by the three modelling approaches in a configuration where the accent of each test utterance is assumed to be known. Each utterance is therefore presented only to the matching model set. We show that, in terms of best recognition performance, the decision of whether to separate or to pool training data depends on the particular accents in question. Multi-accent acoustic modelling, however, allows this decision to be made automatically in a data-driven manner. When modelling the five accents of SAE, multi-accent models yield a statistically significant improvement of 1.25% absolute in word recognition accuracy over accent-specific and accent-independent models. In a second set of experiments, we consider the practical scenario where the accent of each test utterance is assumed to be unknown. Each utterance is presented simultaneously to a bank of recognisers, one for each accent, running in parallel. In this setup, accent identification is performed implicitly during the speech recognition process. A system employing multi-accent acoustic models in this parallel configuration is shown to achieve slightly improved performance relative to the configuration in which the accents are known. This demonstrates that accent identification errors made during the parallel recognition process do not affect recognition performance. Furthermore, the parallel approach is also shown to outperform an accent-independent system obtained by pooling acoustic and language model training data. In a final set of experiments, we consider the unsupervised reclassification of training-set accent labels. Accent labels are assigned by human annotators based on a speaker's mother tongue or ethnicity, and these might not be optimal for modelling purposes. By classifying the accent of each utterance in the training set using first-pass acoustic models and then retraining the models, reclassified acoustic models are obtained. We show that the proposed relabelling procedure does not lead to any improvements and that training on the originally labelled data remains the best approach.