A well-known challenge in machine translation (MT) is the accurate translation of domain-specific terminology. While various methods have been proposed to address this challenge, they all come with limitations and increase the user's dependence on a specific MT engine. Recently, large language models (LLMs) have attracted significant attention for various natural language processing tasks, including automated translation, prompting the need to investigate their potential for terminology translation. We therefore compare ChatGPT, an LLM-based chatbot that converses with a user, to DeepL, a sequence-to-sequence MT system. We use both systems to perform translations with and without glossaries, and we also combine them by post-editing MT output with the chatbot. Automated and manual evaluations indicate that the overall translation quality of MT is better than or on par with that of the chatbot with a glossary, but that the latter excels in terminological accuracy, whether used for translation or for post-editing. While such post-editing avoids user dependence on a specific MT engine, it sometimes introduces new translation issues, such as shifts in meaning, suggesting the need for future improvements. Our experiments cover two language pairs, English-Russian and English-French, and two domains (COVID-19 and legal documents).
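The glossary-constrained post-editing setup described in this abstract can be sketched as a simple prompt-construction step. The function, prompt wording, and glossary entry below are illustrative assumptions, not the prompts or data actually used in the study:

```python
# Illustrative sketch: building a chatbot prompt that asks an LLM to
# post-edit MT output while enforcing a domain glossary. The prompt
# wording and the glossary entry are hypothetical examples.

def build_postediting_prompt(source, mt_output, glossary,
                             src_lang="English", tgt_lang="French"):
    """Return a prompt instructing a chatbot to post-edit MT output
    while using the given source->target term translations."""
    # Render the glossary as "source term -> required target term" lines.
    terms = "\n".join(f"- {s} -> {t}" for s, t in glossary.items())
    return (
        f"Post-edit the following {src_lang}-to-{tgt_lang} machine "
        f"translation. Use exactly these term translations:\n{terms}\n\n"
        f"Source: {source}\n"
        f"MT output: {mt_output}\n"
        f"Post-edited translation:"
    )

# Hypothetical COVID-19 glossary entry for illustration.
glossary = {"booster dose": "dose de rappel"}
prompt = build_postediting_prompt(
    "A booster dose is recommended.",
    "Une dose de renfort est recommandée.",
    glossary,
)
```

The resulting prompt would then be sent to the chatbot; note that, as the abstract cautions, such post-editing can still introduce new issues such as shifts in meaning.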
Professionalizing translator education programs must strive to prepare graduates for a rapidly evolving field in which technologies are constantly changing and, in turn, affecting workflows, tasks, and required competences. In doing so, such programs face the dual challenge of updating curricula quickly enough and of keeping up with the evolving perceptions and expectations of stakeholders, from prospective students to employers. This case study describes some of the data gathered during a 2023 market study conducted in the context of program reform at the University of Ottawa's School of Translation and Interpretation. By exploring stakeholders' priorities, we hope to provide insights into curriculum design and recruitment that may be useful for other programs with similar goals.
This paper introduces the Fairslator API, a software solution for gender rewriting and form-of-address rewriting of translations. Starting with a review of bias (including but not limited to gender bias) in machine translation and a brief introduction to rewriting as a method for addressing the problem, the paper demonstrates how the Fairslator API can be used to rewrite biased translations into alternative genders and forms of address. It then surveys the ways in which this technology can be integrated into the translation workflow, for example as a step in machine translation post-editing.
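As a rough illustration of how such a rewriting step might slot into a post-editing workflow, the sketch below models an ambiguous translation with user-selectable readings. The data structure and function names are hypothetical assumptions for illustration only; they do not reflect the actual Fairslator API:

```python
# Hypothetical sketch of a gender-rewriting step in a translation
# workflow. Names and data are illustrative assumptions, not the
# real Fairslator API.

# An ambiguous translation carries alternative realizations keyed by
# the reading the user selects (here, the gender of "the doctor").
ALTERNATIVES = {
    "Le médecin est arrivé.": {
        "masculine": "Le médecin est arrivé.",
        "feminine": "La médecin est arrivée.",
    },
}

def rewrite(translation, reading):
    """Return the translation rewritten for the selected reading,
    or the original text unchanged if no alternative is known."""
    return ALTERNATIVES.get(translation, {}).get(reading, translation)
```

In a post-editing pipeline, such a step would run between the MT engine and the human editor, letting the user pick the intended reading rather than accepting the engine's default.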
Given the continuous refinement of neural machine translation (NMT) systems, post-editing (PE) is increasingly present in the translation world. Furthermore, new educational realities have led to revolutionary learning techniques, such as gamification, which are already being tested in translation teaching. Against this background, the GAMETRAPP project is framed within the multilingual context of research dissemination and the need for scholars to communicate science in English irrespective of their discipline and L1 background. The project is funded by the Spanish Ministry for Science and Innovation (TED2021-129789B-I00), and its main goal is to bring NMT plus full PE of research abstracts closer to non-professional translators and scholars through a gamified environment. The project setup is based on the Iberian Spanish > American English language direction. After defining the concepts of NMT, PE, and gamification, this article presents the methodology of the project, in particular the collection and processing of research abstracts. The design of the future gamified environment is also briefly explained. Finally, the conclusions reached in the first year of the project are detailed, along with future steps.