The TaraXÜ Corpus of Human-Annotated Machine Translations - 4th evaluation round

TaraXÜ corpus round 4

The corpus was created in the framework of the TaraXÜ project. The approach arises from the need to detach Machine Translation (MT) evaluation from a purely research-oriented development scenario and to bring it closer to end users. To this end, three evaluation rounds were performed in close co-operation with the translation industry. The evaluation process was designed to answer specific questions closely related to the applicability of MT within a real-time professional translation environment. All evaluation tasks were performed by qualified professional translators.

The evaluation rounds, which resulted in the corpus presented here, built on one another in a logical progression: the first round established baseline results, while each subsequent round was concerned with more elaborate measurement methods and more specific factors affecting translation quality. Findings from these evaluation rounds have been published in Avramidis et al. (2012) and Popović et al. (2013). Parts of the corpus have more recently been used in the QTLaunchPad project [http://www.qt21.eu/launchpad], where they served as the basis for a more detailed error analysis.

Note that this is one part of the corpus; further parts appear in separate entries.
