Professor Tony Hartley on Strengthening Student Competency in Translation Technologies
Last April, Tony Hartley, Emeritus Professor of Translation Studies at the University of Leeds, embarked on a journey to Renmin University of China as a visiting professor. With the aim of bolstering their proficiency in translation technologies, students were tasked with adapting a neural machine translation engine and evaluating its performance. Read on to learn about the aims and outcomes of the project, and how Memsource played a role in its success.
The month of April this year saw me relocated from Tokyo to Beijing. More precisely to the campus of Renmin University of China, resplendent with plum blossoms, peonies, wisteria, and fresh willow leaves. And mostly under a smog-free sky. The purpose? To strengthen the competences in translation technologies of students of the School of Foreign Languages. My experience at Leeds (UK) and Rikkyo (Japan) had taught me that Chinese students generally are both hard-working and smart, so the knowledge that RUC ranks among the very top universities in China made me doubly nervous about the value I could add to my new students’ profiles.
In the event, there were 17 of them – 13 first-year Masters of Translation and Interpreting students and four PhD students. They brought diverse backgrounds and interests – from marine engineering to classical poetry – and their familiarity with translation technologies ranged from zero to having worked as a project manager (PM) in a small translation company. The Masters students were, in fact, taking an introduction to another well-known translation management tool that same semester, taught by a lecturer from nearby Peking University.
My focus, however, was not tool-centered but project-centered. The students assigned themselves to four groups, each with the goal of adapting a neural machine translation (NMT) engine to a different specific domain and evaluating its performance against the baseline and one or more other engines. Such a project requires a suite of complementary tools. Apart from Memsource, we were grateful for the support of Sketch Engine (for their corpus building and analysis tool), NICT (for their adaptable TexTra English-Chinese NMT engine), and TAUS (for their DQF interface that enables human evaluation of adequacy and fluency).
The course was scheduled in a computer lab in three-hour blocks on Mondays and Thursdays, with an additional block timetabled into the fourth and final week. This is a pattern that allows students to make obvious, rewarding progress within a single session and to rehearse procedures or prepare data for the next session. The first two weeks were a rehearsal phase, and the second two weeks were the production and evaluation phase.
When the tools are seen as a means to an end and the end is clear, it’s stunning how rapid progress can be. By the conclusion of the first session the student PMs (i.e., every student) could create projects, translation memories (TMs), term bases (TBs), and linguist accounts; assign linguists to split jobs; translate; understand match rates; and export all resources. They got there through a mixture of demos by me, intuition aided by the clean Memsource interface, online manuals and videos, peer support and, no doubt, transfer from their experience of the other TM tool in their wider curriculum. By the conclusion of the second session they had used Sketch Engine to build small (circa 1M words) comparable corpora in English and Chinese for specialized domains and to extract key single-word and multiword terms. After processing in Excel, these were incorporated into Memsource TBs. Reformatted, these same TBs were imported into the NICT engine for an initial customization, and the customized engines were activated within Memsource to enable post-editing. All of this was achieved in just two sessions.
By the end of the second week, the students had rehearsed all the functions of the four tools essential to the completion of the project. This was structured on my part by a checklist – in more or less the required order – of these fine-grained functions, together with a Gantt-type representation of the tasks and dependencies entailed by the project, whose purpose was to motivate and map out the whole scenario. Of course, many individual functions were left untried, but a now self-confident learner can explore these as need or curiosity dictates. The data volumes had also been small. But if you have mastered the mechanics of, say, customizing NICT by creating and uploading a .csv file with 10 term pairs or of setting up a TAUS adequacy evaluation with just 10 segments, then you can do the same with 1,000 terms and 250 segments. The second two weeks were devoted to creating and exploiting larger volumes of training and test data within the chosen domains, which were: shipbuilding, global warming, Belt and Road Initiative, and Civilization documentary subtitles.
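The glossary upload step can be sketched in a few lines. The snippet below is a minimal illustration, assuming a two-column source–target CSV layout (the exact format the NICT engine expects should be checked against its documentation) and an invented three-entry sample from the shipbuilding domain:

```python
import csv

# Illustrative validated English-Chinese term pairs; a real glossary
# for engine customization would hold hundreds or thousands of entries.
term_pairs = [
    ("shipbuilding", "造船"),
    ("hull", "船体"),
    ("ballast water", "压载水"),
]

def write_glossary(path, pairs):
    """Write term pairs to a two-column CSV (source, target per row).

    The two-column layout is an assumption for illustration; the
    engine's required column order and encoding may differ.
    """
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for src, tgt in pairs:
            writer.writerow([src, tgt])

write_glossary("glossary.csv", term_pairs)
```

The point the students grasped quickly is that this mechanical step is identical whether the list holds 10 pairs or 1,000 – only the validation effort scales.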
The NICT engine permits ‘customization’ with term lists and ‘adaptation’ with aligned sentences. Customization can fix problems but also cause them, as a couple of groups established. The high-level activities of the second phase were: build larger (2.5M word) corpora in each language and domain, create validated bilingual glossaries to customize NICT, connect the customized engine to Memsource and post-edit large volumes of sentence training data, adapt NICT with the resultant TM, translate the test data, and evaluate.
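Once the human judgments are in, comparing engines comes down to averaging scores per criterion. Here is a minimal sketch, assuming the 1–4 adequacy and fluency scales commonly used in such evaluation and invented sample judgments (real scores would come from the DQF interface’s export):

```python
from statistics import mean

# Hypothetical segment-level judgments on 1-4 scales, keyed by engine.
scores = {
    "baseline": {"adequacy": [3, 2, 3, 4], "fluency": [2, 3, 3, 3]},
    "customized": {"adequacy": [4, 3, 4, 4], "fluency": [3, 3, 4, 4]},
}

def summarize(scores):
    """Average each criterion per engine so engines can be compared."""
    return {
        engine: {crit: round(mean(vals), 2) for crit, vals in crits.items()}
        for engine, crits in scores.items()
    }

summary = summarize(scores)
```

With 250 evaluated segments per engine, as in the project, the same aggregation applies unchanged; only the lists grow.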
Working in groups allows for specialization, sharing, and support internally while encouraging competition externally. Moreover, groups can be encouraged to experiment with contrasting approaches ‘to see what happens’. For this, they need to be reassured that – provided the initial hypothesis is reasonable and the method sound – a project that fails to improve machine translation performance can still earn a good grade. For example, one group aligned legacy data to boost the size of their TM, while another resorted to back translation for the same purpose (with even greater benefits). The four final reports were made available to the whole cohort in order to share the lessons learned.
Let’s return to the choice of project goals, which can be open to question. Yes, we neglected many of the functions of Memsource and of the other tools we used. But it has never been my aim as an educator to make students experts in a particular tool. That’s for the tool providers to take care of by creating intuitive interfaces and adequate, accessible training materials. And by and large they do a good job of it, Memsource certainly. My aim is to foster reflective users capable of principled evaluation of all the tools of their trade, which can involve comparing tools within the same class, investigating interoperability across classes, or assessing other attributes. But even before that, the teacher/animator needs to impart a sense of excitement. And at a time when many would-be translators appear demotivated by a fear of being enslaved by the NMT ‘beast’, it can be exciting for them to discover that the beast can be trained and that they themselves can be the trainers. That was, indeed, the positive reaction of many of the participants in this course. Moreover, they said they would ask the School to sign an academic partnership agreement with Memsource.
Finally, I would like to express my gratitude to Professor Frank Zhu, Director of the MTI Education Center of the School of Foreign Languages at RUC, and his academic and administrator colleagues for their efficiency and hospitality before and during my visit. Also my thanks to every one of the students for their assiduity, curiosity, patience, good humor, and challenging questions. They made it a memorable experience.