"Why Not Treat MT as TM?" - Memsource Case Study Presented at LocWorld
Memsource and a leading Danish LSP, Oversætterhuset, presented a joint case study on the challenges organizations face when integrating machine translation (MT) at the recent Localization World conference in Barcelona. (Memsource provided its translation technology, Memsource Cloud, and Oversætterhuset supplied the data for the case study.)
If an LSP or any other organization invests in a new technology like MT, one would expect it to generate savings in return. The goal is clear, but the path leading to it is not. A major problem is that we do not know the quality of the machine translation output for a specific file, or even for a specific segment.
This is actually a much-cited problem: the lack of a precise MT quality measurement tool that would allow translators and post-editors to be fairly rewarded for translation jobs supported by, or pre-translated with, machine translation.
In fact this problem has been aptly described by Kirti Vashee of Asia Online on his blog:
“…to be fair to translators there needs to be an assessment of the scope of the PEMT (post-editing machine translation) task. An MT system that produces 50% usable output should be compensated differently from one that produces 75% usable output, assuming the content has to be raised to the same target quality level. Logically, the greater the quality gap that needs to be filled, the greater the pay rate for doing the work. To do this accurately and fairly, we require rapid and widely accepted MT engine quality measures which do not exist today.”
Kirti wrote this back in February 2011, and **we think we have found a solution** to this problem.
We propose to treat MT as a TM of sorts. What does this mean? It is a very common practice to run an analysis of the source segments against a TM to identify any matches. Why not run a similar analysis against the machine translation as well? The only difference is that such an analysis is generated from the target segments, after translation, by comparing the raw machine translation output with the final (post-edited) translation for each segment.
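To make the idea concrete, here is a minimal sketch of scoring each MT suggestion against its post-edited final translation, analogous to a fuzzy-match score against a TM. The scoring method (`difflib.SequenceMatcher` character similarity) and the sample segments are illustrative assumptions; the article does not specify the metric Memsource actually uses.

```python
# Illustrative sketch: TM-style match scoring of MT output against the
# post-edited final translation, per segment.
# NOTE: SequenceMatcher is an assumed stand-in metric, not Memsource's own.
from difflib import SequenceMatcher


def mt_match_score(mt_output: str, post_edited: str) -> int:
    """Return a TM-style match percentage (0-100) for one segment."""
    ratio = SequenceMatcher(None, mt_output, post_edited).ratio()
    return round(ratio * 100)


# Hypothetical segment pairs: (raw MT output, final post-edited translation)
segments = [
    ("The engine is running.", "The engine is running."),        # untouched
    ("He go to the store.", "He went to the store."),            # light edit
    ("Colorless ideas sleep.", "The proposal makes no sense."),  # heavy edit
]

for mt, final in segments:
    print(mt_match_score(mt, final))
```

A segment the post-editor left untouched scores 100%, while a heavily rewritten segment scores low, giving a per-segment measure of how much the MT engine actually helped.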
We have developed this idea into a full-blown feature in Memsource Cloud and Memsource Server, where users can precisely analyze machine translation quality in much the same way as they analyze a document against a translation memory. In this way, LSPs, or anyone else using the feature for that matter, gain additional leverage from this huge new "translation memory", which is in fact a machine translation engine.
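Such an analysis can then be summarized the way a TM analysis is: per-segment scores grouped into match bands. The sketch below assumes band boundaries that mirror common CAT-tool conventions; these are not Memsource's documented categories.

```python
# Sketch: group per-segment MT match scores into TM-style analysis bands.
# Band boundaries are an assumption modeled on common CAT-tool conventions.
from collections import Counter

BANDS = [(100, "100%"), (95, "95-99%"), (85, "85-94%"),
         (75, "75-84%"), (50, "50-74%"), (0, "0-49%")]


def band(score: int) -> str:
    """Map a 0-100 match score to its analysis band label."""
    for floor, label in BANDS:
        if score >= floor:
            return label
    return "0-49%"


def mt_analysis(scores):
    """Count segments per match band, like a TM analysis report."""
    return Counter(band(s) for s in scores)


# Hypothetical per-segment scores from a post-edited job:
print(mt_analysis([100, 97, 88, 40, 100]))
```

The resulting counts read like a familiar fuzzy-match breakdown, so existing TM-based pricing grids can be applied to the MT portion of a job.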
A screenshot of a combined TM and MT analysis generated from Memsource Cloud: