You Need to Avoid These Common Machine Translation Obstacles
Machine translation usage is on the rise, but with new technology comes new challenges to be aware of and avoid. Keep reading to discover how Memsource can help you overcome common machine translation pitfalls.
Haven’t implemented MT yet? Not to worry – we’ve laid out steps to successfully implement machine translation for you.
Otherwise, read on, and enjoy peace of mind knowing that with Memsource, you’re leveraging MT to its full potential.
Machine translation quality continues to improve, and linguists are increasingly asked to edit, or rather post-edit, MT output. But the quality of that output still varies considerably. Linguists have to sift through a lot of low-quality MT, occasionally coming across something that requires only light editing or none at all. And neural MT output has improved so much that the remaining errors are often small and can be quite difficult to spot, even for a seasoned post-editor.
When you go on holiday, do you book a hotel without reading the reviews or checking the hotel rating? If yes, you are braver than most, and we’re willing to bet you’ve ended up in some less than desirable places. If not, you will understand why machine translation quality estimation (MTQE) is a valuable feature. Using MTQE, you can determine whether MT is the best approach for your content, establish a more accurate estimate of the post-editing effort and time required, and assess the quality of different MT engines. It’s like reading hotel reviews before choosing the best one.
Want to hear how others have been using MTQE to optimize their MT strategy? Check out this webinar.
Another way Memsource can help improve the quality of MT output is by enabling you to select a different MT engine for each language pair. When working on projects with multiple language pairs, it is not always possible to find one MT engine that caters to all the combinations you need. By choosing specific engines for specific language pairs, and by testing the engines (which you can do with MTQE), you can ensure the best output for each language combination.
Monitoring MT Usage
Keeping track of how MT is working for you can be a challenge, especially when you use multiple engines. Rather than accessing data from each MT provider separately, with Memsource, you can get all the information in one place. Dashboards allow you to view MT leverage compared to translation memory (TM) leverage, see your MT savings as a financial value, and monitor the performance of various engines.
You can also get an overview of your MT character usage for the last 30 days with one quick glance – and see how many remaining characters you have for each engine set up in your account without leaving the Memsource platform.
Calculating Payment for Post-editing
When first integrating MT, it is important to choose a payment model that takes MT post-editing into account and ensures you get as much value as possible from your MT integration. Traditionally, you may have calculated quotes using just the analysis run at the start of a translation project (the so-called default analysis), which identifies matches from the TM as well as non-translatables (NT): content, such as numbers, that does not require translation. You may then have applied a weighted score based on the quality of those results. These weighted scores, called a net rate scheme in Memsource, allow you to automatically calculate quotes based on your price lists, applying discounts to post-editing rates depending on the quality of the TM and NT matches.
Once you have implemented MT, MT quality should also be factored into these payment calculations.
There are two ways you can easily do this in Memsource:
Machine Translation Quality Estimation with Memsource Translate
With MTQE, you can run the same default analysis mentioned above and see a breakdown of the MT quality scores (85-100%) as well as the TM and NT quality. You can apply a net rate scheme to this analysis to calculate a more accurate quote for the post-editing.
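To make the idea concrete, here is a minimal sketch of how a net rate scheme turns an analysis breakdown into a quote. The category names, word counts, and discount rates below are hypothetical examples for illustration only, not Memsource’s actual values or API.

```python
# Hypothetical sketch: applying a net rate scheme to an analysis breakdown.
# All rates and category names are illustrative assumptions.

FULL_RATE_PER_WORD = 0.10  # hypothetical base rate per word, in your currency

# Fraction of the full rate charged per match category (the "net rate scheme").
NET_RATES = {
    "tm_100": 0.20,      # 100% translation memory match
    "mtqe_100": 0.25,    # high-confidence MT suggestion
    "mtqe_85_94": 0.60,  # MT suggestion needing more editing
    "no_match": 1.00,    # translated from scratch, full rate
}

def quote(analysis: dict) -> float:
    """analysis maps a match category to its word count from the default analysis."""
    return sum(
        words * NET_RATES[category] * FULL_RATE_PER_WORD
        for category, words in analysis.items()
    )

# 500 TM-matched words, 300 high-quality MT words, 200 words from scratch:
print(quote({"tm_100": 500, "mtqe_100": 300, "no_match": 200}))
```

The point of the scheme is that the same word count costs less when high-quality TM or MT matches reduce the post-editing effort, and the discount is applied automatically from your price list rather than negotiated per job.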
Post-editing Analysis
In addition to the initial default analysis, you should run a post-editing analysis once a job is complete. It provides a clear overview of the post-editing effort based on edit distance. For example, if an MT suggestion was accepted without any editing, it will appear as a 100% match in the analysis. If, on the other hand, the linguist made changes, the match rate will fall into one of the lower percentage categories, depending on the level of editing required. The post-editing analysis also includes edit distance results for TM and NT output. As with the default analysis, you should then apply the net rate scheme to calculate an accurate payment for the post-editing effort.
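To illustrate the principle, here is a small sketch of how a match rate can be derived from edit distance. It uses a normalized Levenshtein distance; the scoring formula is a common approach, not Memsource’s exact implementation.

```python
# Sketch: deriving a post-editing match rate from edit distance.
# The normalization formula is illustrative, not Memsource's exact method.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def post_edit_match(mt_output: str, final_translation: str) -> int:
    """Match rate (%) reflecting how much the linguist changed the MT suggestion."""
    if mt_output == final_translation:
        return 100  # accepted as-is: no post-editing effort
    dist = levenshtein(mt_output, final_translation)
    similarity = 1 - dist / max(len(mt_output), len(final_translation))
    return int(similarity * 100)

print(post_edit_match("The cat sat.", "The cat sat."))  # unchanged suggestion
```

An unchanged suggestion scores 100%, while heavier edits push the segment into lower match-rate bands, which is exactly what lets the net rate scheme pay fairly for the actual effort.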