Browsing by Author "Lotz, Susan"
Now showing 1 - 3 of 3
- Item: It may be copyrighted, but it still needs help: improving research questionnaires by means of intralingual translation (University of Stellenbosch, Department of General Linguistics, 2017). Lotz, Susan.
  Clear research questionnaires ultimately help to ensure the reliability and comparability of the data that they gather (Fowler 1992; Lenzner 2012; Moroney and Cameron 2016). This paper explores the intersection of best practices in the fields of questionnaire design and intralingual translation as a means to ensure clarity and comprehensibility in research questionnaires. The questionnaire design perspective on comprehensibility (as represented by the 2010, 2011 and 2012 studies by Lenzner and colleagues, and work done by Knäuper et al. (1997) and Krosnick (1991)) essentially requires intralingual translation for questionnaires that do not meet the clarity requirement. To illustrate how intralingual translation in the form of plain language practice can operationalise comprehensibility (Nisbeth Jensen 2015), a short case study is presented. It chronicles a case of interlingual translation that evolved into an intralingual translation endeavour. A client had a copyrighted medical research questionnaire, originally in American English, translated into Afrikaans and isiXhosa. Initially, the language service provider was not allowed to make any interventions in the source text. Testing of this questionnaire and its translations then revealed that the questionnaires were incomprehensible to their respondents. In this paper, the intralingual interventions required to improve the comprehensibility of the questionnaire are classified in terms of the four parameters that Zethsen (2009) has identified in this regard, namely knowledge, time, culture and space. In addition, a fourfold text assessment checklist for ensuring clarity in questionnaires is proposed. This checklist may prove valuable for highlighting areas in questionnaires that need intralingual translation, whether used as motivation for a client or as a starting point for an intralingual intervention itself.
- Item: Omission and other sins: tracking the quality of online machine translation output over four years (University of Stellenbosch, Department of General Linguistics, 2016). Lotz, Susan; Van Rensburg, Alta.
  Online machine translation (MT) has empowered ordinary language users to have texts translated all by themselves. But are these users aware of the pitfalls? This article draws on a longitudinal study that explored the quality of output by the online MT application Google Translate in the language combination Afrikaans–English. We investigated the distribution of errors in two sets of translations (slide-show text and news report text) that we had Google Translate produce annually over a period of four years, 2010–2013. Omission, Mistranslation, Non-translation and Grammar were error categories that scored high in the analyses. In addition, we found that although the quality of the translations seemed to improve up to 2012, the pattern of improvement levelled off, with some of the 2013 output containing more errors than that of the previous year. We believe users should be made aware of the risks they unknowingly take when using online MT.
- Item: Translation technology explored: has a three-year maturation period done Google Translate any good? (Stellenbosch University, Department of Linguistics, 2014). Lotz, Susan; Van Rensburg, Alta.
  Language users in multilingual environments who are trying to make sense of the linguistic challenges they face may well regard the advent of online machine translation (MT) applications as a welcome intervention. Such applications have made it possible for virtually anyone to try their hand at translation, with minimum effort at that. However, the usefulness of the output of these translation applications varies. The empirical research described in this article is a continuation of an investigation into the usefulness of MT in a higher education context. In 2010, Afrikaans and English translations generated by Google Translate and two human translators, based on the same set of source texts, were evaluated by a panel of raters by means of a holistic assessment tool. In 2011 and 2012, the same set of source texts was translated again with Google Translate, and those translations have since been evaluated in exactly the same manner. The results show that the quality of Google Translate's output has improved over the three years. Subsequently, an error analysis was performed on the translation set of one text type by means of a second assessment tool. Despite an overall improvement in quality, we found that the 2012 translation contained unexpected new errors. In addition, the error analysis showed that mistranslation posed the largest risk when using this MT application. Users of MT should therefore understand the risks involved in their choice, and recognise that some text types and contexts are better suited to MT than others. Armed with this knowledge, translators and multilingual communities can make informed decisions regarding MT and translation technology in general.