The quality of machine translation continues to improve, and linguists are increasingly being asked to edit, or post-edit, machine translation output. But MT quality still varies considerably depending on factors such as content type and language pair. Linguists have to sift through a lot of low-quality machine translation, only occasionally coming across output that requires little or no editing. This wastes a lot of time.
We know that linguists save time with quality scores for translation memory matches, so why not provide the same for machine translation? This is where our AI team came in.
Machine Translation Quality Estimation
Eliminate MT guesswork
Assess MT engine quality
Forecast post-editing efforts
The feature can be used with over 70 language pairs and is available for all MT engines supported in Memsource and in all Memsource editions.
Initial testing indicates that for the supported language pairs, MTQE can assign quality scores of between 85% and 100% to up to 14% of machine-translated segments. This could mean savings of up to 7% on post-editing costs.
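As a rough back-of-the-envelope illustration of how figures like these could relate (this model is our own assumption; the article does not state how the 7% was derived): if 14% of segments score highly, and those segments are assumed to need about half the usual post-editing effort, the overall saving works out as follows.

```python
# Hypothetical savings model -- the assumptions below are illustrative,
# not Memsource's actual methodology.
high_score_share = 0.14       # share of segments scoring 85-100% (from the article)
assumed_effort_saved = 0.5    # assumed effort reduction on those segments (our guess)

total_saving = high_score_share * assumed_effort_saved
print(f"Estimated post-editing cost saving: {total_saving:.0%}")
```

Under these assumptions the estimate comes out at 7%; with different per-segment effort assumptions the same 14% share would yield a different overall figure.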
The MTQE quality scores are available in the Memsource Web Editor and Desktop Editor. In MTQE Version 1, there are four scoring categories:
Excellent machine translation quality, probably no need for post-editing
Very good quality machine translation, possibly minor post-editing required
Good match, but likely to require some post-editing
No score: MTQE cannot determine the quality (it may be high or low), so the output needs to be checked by a linguist
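The four categories above can be sketched as a simple lookup. Note that the numeric thresholds below are illustrative assumptions only; the article does not specify the exact score ranges behind each category.

```python
# Hypothetical sketch of mapping an MTQE quality score to the four
# scoring categories. Thresholds are assumptions, not Memsource's actual cutoffs.
def mtqe_category(score):
    """Return post-editing guidance for an MTQE score (0-100),
    or the no-score case when no score is returned (None)."""
    if score is None:
        return "No score: quality unknown, linguist review required"
    if score >= 95:  # assumed cutoff for the top category
        return "Excellent quality, probably no post-editing needed"
    if score >= 85:  # assumed cutoff for the second category
        return "Very good quality, possibly minor post-editing"
    return "Good match, likely to require some post-editing"

print(mtqe_category(98))
print(mtqe_category(None))
```

A linguist-facing tool could use a mapping like this to sort segments by expected editing effort, handling the no-score case explicitly rather than treating it as low quality.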