Perform quick human assessment with Adequacy-Fluency
Deeply analyze MT output with MQM error annotation
Run post-editing “lab tests” for Edit Distance
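Edit Distance in this context usually means the minimum number of word-level insertions, deletions, and substitutions separating the raw MT output from its post-edited version. As a rough illustration (a minimal sketch of the classic Levenshtein algorithm, not ContentQuo's actual implementation), it can be computed like this:

```python
def edit_distance(mt: str, post_edited: str) -> int:
    """Minimum number of word insertions, deletions, and
    substitutions turning the MT output into the post-edit."""
    a, b = mt.split(), post_edited.split()
    # DP table: dist[i][j] = edits to turn a[:i] into b[:j]
    dist = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dist[i][0] = i
    for j in range(len(b) + 1):
        dist[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,          # deletion
                             dist[i][j - 1] + 1,          # insertion
                             dist[i - 1][j - 1] + cost)   # substitution
    return dist[len(a)][len(b)]

print(edit_distance("the cat sat on mat", "the cat sat on the mat"))  # → 1
```

A low edit distance means the post-editor changed little, suggesting higher raw MT quality; normalizing by the length of the post-edited text gives a comparable score across segments.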
Human evaluation is valuable indeed, but it is neither fast nor cheap if you outsource it to trained professionals, which is what we do at CPSL. In this sense, moving from spreadsheets to ContentQuo has been a big leap for us.
All our professional evaluators prefer ContentQuo over spreadsheets, our customers benefit from the speed with which we can send our proposals, and the platform's scalability helps us keep evaluation costs within budget.
Most companies implementing Machine Translation run human quality evaluation once they are satisfied with automatic quality metric scores — e.g. on shortlisted engines when selecting an engine mix, on newly retrained engines when assessing training results, or on post-edited translations on a regular basis to identify recurring errors that could be fixed with training. ContentQuo Evaluate MT makes this easy, fast, and efficient.
ContentQuo Evaluate MT is not tied to any specific Language Service Provider: you choose your own suppliers! Many LSPs from the Global Top 100 already use our platform. Here are some of them: