08:30 | Registration and welcome coffee in Gallery and Marble Hall |
Morning session
Lecture Theatre | Education Room | |
Chair: Olaf-Michael Stefanov
|
||
09:00 | When is Less Actually More? A Usability Comparison of Two CAT Interfaces The body of evidence is growing that CAT tools have fundamentally altered the tasks that most non-literary translators engage in and possibly also their cognitive processing. Recent research suggests that translators may be exposing themselves to unnecessary cognitive friction by the way they use their tools (O’Brien et al. 2017). If tool settings and features do not align with translators’ ways of working, then flow can be interrupted and cognitive load increased. Fatigue and reduced attention are two consequences of cognitive overload over extended periods, both of which have been associated with an increase in errors and lower productivity. We report on a usability comparison of two interfaces for translation work that differ with respect to the information and functions available on the screen when the factory default settings are used (i.e. one interface has several fields with supporting functions visible and the other has a simpler look). Eye tracking measures and indicators from retrospective commentaries and interviews highlight how novices interact with the two interfaces and various features. We consider the implications of our findings in light of recent calls for less cluttered user interfaces and open the discussion of how cognitive load can be reduced. Martin Kappus and Martin Schuler (Zürcher Hochschule für Angewandte Wissenschaften - Institut für Übersetzen und Dolmetschen Zürich)
Co-authors
Erik Angelone is professor of translation studies at the ZHAW Institute of Translation and Interpreting. His research interests are in process-oriented translator training, translation pedagogy and curricular design, and empirical translation studies.
|
09:00 – 10:00 Workshop Terminology Management Tools for Conference Interpreters – Current Tools and How They Address the Specific Needs of Interpreters Ever since the 1990s, sophisticated terminology management systems have offered a plethora of data fields and management functions to translators, terminologists, and conference interpreters. Nevertheless, interpreter-specific tools have been developed in parallel to suit their special needs. They were mostly inspired, at least initially, by one or very few users and developed by a single developer or a very small team. The intention of this workshop is to give interpreters and other interested participants an overview of which tools are available for their terminology work, highlighting the pros and cons of each of them. Providers and developers of terminology management systems will gain valuable insight into the specific needs of conference interpreters and the reasons why, if they use terminology management systems at all, conference interpreters tend not to use the sophisticated term databases translators or terminologists use. Due to time restrictions, only the most relevant aspects of terminology management in conference interpreting will be addressed. Which solution is best for filtering and categorising my terminology? Which one offers the best search function for the booth? Which one is best for sharing glossaries and online collaboration, or most convenient for mobile use? Information on price models and supported operating systems will also be provided.
If time allows, generic solutions like Microsoft Excel/Access and Google Sheets will also be discussed as an alternative to interpreter-specific tools. Anja Rütten (freelance interpreter)
|
09:30 | When Google Translate is better than some Human Colleagues, those People are no longer Colleagues Expressing discomfort with machine translation is a recurrent theme in translator forums on social media. We analyse posts from Facebook, LinkedIn, and Twitter, showing that translators spend considerable time and effort on mocking and discussing machine translation failures. Research has shown a disconnect between translation technology researchers and practitioners, while at the same time indicating the benefits of collaboration between them. Reflecting on an analysis of posts and engagement of translators in social media, we outline three suggestions to bridge this gap: (i) identify and report patterns rather than isolated errors, (ii) organise or participate in evaluation campaigns, and (iii) engage in cross-disciplinary discourse. Rather than pointing out each other’s deficiencies, we call for computer scientists, translation scholars, and professional translators to advance translation technology by acting in concert. Samuel Läubli (Universität Zürich) and David Orrego-Carmona (Aston University)
|
Workshop (continued) |
10:00 |
Keynote Address A World Without Language Barriers After centuries of separation and misunderstandings, we are lucky to be living in the generation that will see an end to language barriers between the peoples of our planet. Automatic translation of text is now becoming ubiquitous on the internet, and even communication by voice between people speaking different languages is now becoming a reality for everyone. Early breakthroughs in large-vocabulary speech recognition, machine translation and neural networks prepared the way for the development of the first speech-to-speech translation systems in the early 1990s. Over the 25 years of research that followed, what seemed a crazy idea at first blossomed into an array of practical interpreting systems that revolutionize modern human communication today: cross-language interpretation systems that bring people closer together than ever before. In this talk, I will review the technologies and deployed interpreting solutions available today:
I will review algorithmic advances and progress in performance and usability, and discuss remaining scientific challenges. And we will speculate on a future without language barriers that involves human and machine interpretation. Alexander Waibel (Carnegie Mellon University and Karlsruher Institut für Technologie)
Dr. Waibel founded C-STAR, the Consortium for Speech Translation Advanced Research, in 1991 and served as its chairman. Since then, he has directed many research programs in speech, translation, multimodal interfaces and machine learning in the US, Europe and Asia. He served as director of EU-Bridge (2012-2015) and CHIL (2004-2007), two large-scale European multi-site Integrated Project initiatives on intelligent assistants and speech translation services. He also served as co-director of IMMI, a joint venture between KIT, CNRS & RWTH. Dr. Waibel is an IEEE Fellow and has received many awards for pioneering work on multilingual and multimodal speech communication and translation. He has published extensively (>700 publications, >24,000 citations, h-index 80) in the field and received/filed numerous patents. During his career, Dr. Waibel founded and built 10 successful companies. Following the acquisition of Jibbigo by Facebook, Waibel served as founding director of the Language Technology Group at Facebook. He also deployed speech translation technologies in humanitarian and disaster relief missions. His team recently deployed the first simultaneous interpretation service for lectures at universities and interpretation tools at the European Parliament. Dr. Waibel received his BS, MS and PhD degrees at MIT and CMU, respectively. |
|
11:00 | Health Break in Gallery and Marble Hall |
11:10 – 11:25 Poster Web Accessibility Compliance in Localisation: the Missing Link for an Optimal End-user Experience In an increasingly competitive business landscape, the ever-evolving localisation industry is now striving for differentiation. One of the strategies adopted, particularly by the largest multinationals, has been to expand their service coverage beyond traditional localisation and the provision of translation technology to satisfy new digital market needs. Concretely, we have observed a considerable increase in the number of companies showcasing knowledge and know-how in Digital Marketing and User Experience Design, always with a clear goal: enhancing the final end-user experience when interacting with multilingual web content. But are we really ensuring an optimal experience for all? If the localisation industry is looking to consolidate this strengthened service portfolio, awareness of key human-computer interaction aspects and best practices, including web accessibility standards, could be crucial for success. Drawing upon the data collected through a series of interviews with representatives of six world-leading multinational companies from the localisation industry and one of their clients, this paper will report on the readiness of current localisation workflows and professionals to deliver more accessible multilingual websites for all. We will also review the overlaps between responsive design, SEO and current web accessibility guidelines, and present how their compliance could bring competitive advantage to localisation businesses. Silvia Rodríguez Vázquez (Université de Genève)
|
11:30 | Speech Recognition in the Interpreter Workstation In recent years, computer-assisted interpreting (CAI) programs have been used by professional interpreters to prepare assignments, to organize terminological information, and to share event-related information among colleagues (Fantinuoli, 2016, 2017). One of the key features of such tools is the ability to support users in accessing terminology during simultaneous interpretation (SI). With state-of-the-art CAI tools, interpreters need to manually input a term or part of it in order to query the database. The main drawback of this approach is that, in the booth, manual querying is both time-consuming and, to some extent, distracting during an activity that requires concentration and rapid information processing (Tripepi Winteringham, 2010). However, initial empirical studies on the use of such tools seem to support the idea that interpreters on the job may have the time and the cognitive ability to look up terms. Furthermore, CAI tools seem to contribute to improving terminology and overall interpreting performance (Prandi 2015; Biagini 2016). With this in mind, the automatization of the querying system would represent a step forward in reducing the additional cognitive effort needed to perform this human-machine interaction. With more cognitive capacity at their disposal, it is reasonable to assume that interpreters using a CAI tool equipped with an automatic look-up system would achieve further improvements in terminology and overall performance during the simultaneous interpretation of specialized texts. Speech Recognition (SR) has been proposed as a methodology and technology to automatize the querying system of CAI tools (Fantinuoli 2016; Hansen-Schirra 2012). In the past, the difficulty of building SR systems that were accurate enough to be useful outside of a carefully controlled environment hindered their deployment in the interpreting setting.
However, recent advances in Artificial Intelligence, especially since the dissemination of deep learning and neural networks, have considerably increased the quality of SR (Yu and Deng, 2015). In order to be successfully integrated into an interpreter workstation, both SR and CAI tools must fulfil a series of specific requirements. For example, SR must be truly speaker-independent, have a short reaction time, and be accurate in the recognition of specialized vocabulary. On the other hand, CAI tools need to overcome some shortcomings of current implementations and need, for instance, to handle morphological variants in the selection of results and to offer new ways of presenting extracted terminology. In the first part of the paper, a framework for the integration of SR in CAI tools will be defined. In particular, much attention will be devoted to the analysis of state-of-the-art SR and the problems that may arise with its integration into an interpreter workstation. Secondly, the adaptation of traditional querying systems used in CAI tools to allow for keyword spotting will be discussed and a prototype will be presented. Finally, general advantages and shortcomings of SR-CAI integration will be highlighted, and prospective developments of the use of SR to support access to terminological data will be introduced, i.e. the recognition of numbers and entities. Claudio Fantinuoli (Johannes Gutenberg Universität Mainz in Germersheim)
|
11:30 – 12:30 Silver Sponsor Workshop Building Artificial Intelligence on Top of a Translation Waste Mountain The Translation department of KU Leuven researches the revision and correction of translations. Recently we joined forces with Televic to build our own tools for smart translation revision: we are interested in the “waste”, in the errors that translators make. From a didactic point of view, the analysis of those errors is as interesting as the correct translation. We will show which (game-changing) teaching and learning conclusions we can draw from the analysis of the “waste mountain”. TranslationQ and RevisionQ are two tools to evaluate and score translations. Translation evaluation is an important and labour-intensive task in the training and selection of good translators. Most of this work is done by human evaluators and has to be repeated for every single translation. Our academic experiments have shown that both tools are as accurate as, and even more objective than, human evaluation. TranslationQ and RevisionQ are especially useful for evaluating large groups of candidates. Finally, the language correction algorithms have been developed to be language-independent, making the tools usable for many language combinations. Bert Wylin and Hendrik Kockaert (Katholieke Universiteit Leuven/Televic)
|
12:00 | Designing a Multimethod Study on the Use of CAI Tools during Simultaneous Interpreting Even though studies on computer-assisted interpreting still represent a very small percentage of the body of research, the topic is starting to gain attention in the interpreting community. So far, only a handful of studies have focused on the use of CAI tools in the interpreting booth (Gacek, 2015; Biagini, 2015; Prandi, 2015a, 2015b). While they did shed some light on the usability and the reception of CAI tools, as well as on the terminological quality of simultaneous interpreting performed with the support of such tools, these studies were only product-oriented. We still lack process-oriented, empirical research on computer-aided interpreting. A pilot study currently underway at the University of Mainz/Germersheim (Prandi, 2016, 2017) aims at bridging this gap by combining process- and product-oriented methods. After discussing the theoretical models adopted to date in CAI research, this paper will suggest how an adaptation of Seeber’s (2011) Cognitive Load Model may be better suited than Gile’s (1988, 1997, 1999) Effort Model to operationalize hypotheses on the use of CAI tools in the booth. The paper will then introduce the experimental design adopted in the study, with a focus on the features of the texts used and on the rationale behind their creation. Bianca Prandi (Johannes Gutenberg Universität Mainz in Germersheim)
|
Workshop (continued) |
12:30 | Buffet Lunch in Gallery and Marble Hall |
13:25 – 13:40 Poster VIP: Voice-text Integrated System for Interpreters Interpreting is an extremely strenuous task, since interpreters must devote much effort to decoding, memorising and encoding a message. Interpreters should, as translators and other language professionals do, benefit from the development of technology and, thereby, enjoy considerable improvement of their working conditions. However, their work currently relies by and large on traditional or manual methods, and technological advances in interpreting have been extremely slow. Unlike translators, for whom a myriad of computer-assisted tools are available, interpreters have not benefited from the same level of automation or innovation. Fortunately, there is a growing interest in developing tools aimed at interpreters as end users, although the number of these technology tools is still very low and they are not intended to cover all interpreters’ needs. The goal of the VIP project is to revolutionise the interpreting industry by creating an interpreting workbench tool which will have the same effect that language technologies for translators have had in the translation industry in recent decades. To this end, we intend to (a) identify the Gloria Corpas Pastor (Universidad de Málaga)
Visiting Professor in Translation Technology at the Research Institute in Information and Language Processing (RIILP) of the University of Wolverhampton, UK (since 2007), and Professor in Translation and Interpreting (2008). Published and cited extensively, member of several international and national editorial and scientific committees. Spanish delegate for AEN/CTN 174 and CEN/BTTF 138, actively involved in the development of UNE-EN 15038:2006 and currently involved in the future ISO standard (ISO TC37/SC2-WG6 “Translation and Interpreting”). Regular evaluator of university programmes and curriculum design for the Spanish Agency for Quality Assessment and Accreditation (ANECA) and various research funding bodies. President of AIETI (Iberian Association of Translation and Interpreting Studies), member of the Advisory Council of EUROPHRAS (European Society of Phraseology) and Vice-President of AMIT-A (Association of Women in Science and Technology of Andalusia).
13:40 – 13:55 Poster Using Online and/or Mobile Virtual Communication Tools in Interpreter and Translator Training: Pedagogical Advantages and Drawbacks In this paper we discuss some preliminary results of a comparative study into the use of online and/or mobile virtual communication tools in the master programmes of interpreting and translation at Vrije Universiteit Brussel. Both master programmes are based on a situated learning model, which is generally understood as a didactic method in which translation and interpreting students learn the profession and acquire professional skills through hands-on experience, by exposing them to simulated or real work environments, situations and tasks. In recent years, this learning-by-doing approach (or authentic experiential learning) has gained quite some traction in translator and interpreter education.
In creating authentic learning contexts for student translators and interpreters, technology has become an important factor to take into consideration, given the unmistakable impact that it has on professional translation and interpreting practices. After a review of previous studies dealing with the use of virtual technologies in translator and interpreter training, several virtual communication tools will be tested and evaluated both from the trainers’ and the trainees’ perspectives. Finally, we will reflect on the tools’ pedagogical advantages and drawbacks in order to formulate recommendations for using these technologies in translator and interpreter training contexts. Koen Kerremans and Helene Stengers (Vrije Universiteit Brussel)
Helene Stengers is professor in Spanish proficiency, translation and interpreting at the Department of Linguistics and Literary Studies of Vrije Universiteit Brussel, where she obtained her PhD in 2009. Her research interests lie in applied comparative linguistics (especially English and Spanish), cognitive linguistics, phraseology and foreign language acquisition (mainly vocabulary acquisition) from a multilingual and intercultural perspective, with a view to optimizing foreign language pedagogy as well as translation and interpreter training. She is the research director of the Brussels Institute for Applied Linguistics. |
Afternoon session
Lecture Theatre | Education Room | |
Chair: Joanna Drugan
|
||
14:00 | A Comparative User Evaluation of Tablets and Tools for Consecutive Interpreters Since the release of the first modern tablets, practicing interpreters have begun to consider how tablets could be used to support their interpreting practice. The first phase of a recent mixed-methods study assessed the pros and cons of different tablets, applications and styluses, finding that professional interpreters were effectively using tablets for consecutive interpreting in a wide range of settings. Results also indicated that certain types of tablets, applications and styluses were especially appreciated by practitioners (Goldsmith & Holley, 2015). This paper presents the second phase of that study, building on previous conclusions to derive an instrument for carrying out a comparative user evaluation of these tablet interpreting tools. Using this instrument, it compares and contrasts the different tablets and accessories currently available on the market. Its conclusions are expected to serve as a useful guide to allow interpreters to pick the tablets, applications and styluses which best meet their needs. Joshua Goldsmith (Interpreter and Université de Genève)
|
Silver Sponsor Workshop Setting up SDL Trados Studio for best match scenarios This workshop will cover “Setting up SDL Trados Studio 2017 to make the most of your available resources and maximise productivity in the most common translation scenarios” with the following agenda:
Moderated by Neil Ferguson
As Product Marketing Manager at SDL for Translation Productivity solutions, including SDL Trados Studio, SDL MultiTerm and SDL Studio GroupShare, Neil is a firm believer that even though today’s technology has dramatically aided the delivery and management of localised content, the next few years are going to be even more exciting and dramatic, with trends such as IoT and on-demand digital experiences only serving to accelerate the demand for content in local languages. So the need to be ready is paramount! |
14:30 | Panel Discussion Moderated by Danielle D’Hayer (London Metropolitan University)
New Frontiers in Interpreting Technology Technology has the potential to shape the future of interpreting. Indeed, it has already begun to do so. From tools that assist interpreters to devices that may replace them altogether, and from technologies for delivering interpreting services to tools for teaching interpreting, this panel discussion will span the gamut of technology in interpreting, considering current developments and future innovations. Anja Rütten, Alexander Drechsel, Joshua Goldsmith, Marcin Feder, and Barry Olsen
|
15:00 – 16:00 Workshop
The Localization Industry Word Count Standard: GMX-V - Slaying the Word Count Dragon Word and character counts are the basis of virtually all metrics relating to costs in the L10N industry. An enduring problem with these metrics has been the lack of consistency between various computer-assisted translation (CAT) tools and translation management systems (TMS). Notwithstanding these inconsistencies, there are also issues with common word counts generated by word processing systems such as Microsoft Word. Not only do different CAT and TMS systems generate differing word and character counts, but there is also a complete lack of transparency as to how these counts are arrived at: specifications aren’t published and systems can produce quite widely different metrics. To add clarity, consistency and transparency to the issue of word and character counts, the Global Information Management Metrics eXchange – Volume (GMX-V) standard was created. Starting with version 1.0 and continuing with version 2.0, GMX-V addresses the problem of counting words and characters in a localization task, and how to exchange such data electronically. This workshop goes through the details of how to identify and count words and characters using a standard canonical form, including documents in Chinese, Japanese and Thai, as well as how to exchange such data between systems. Andrzej Zydroń (XTM)
CTO @ XTM International, Andrzej Zydroń is one of the leading IT experts on localization and related open standards. Zydroń sits or has sat on the following open standard technical committees: 1. LISA OSCAR GMX Zydroń has been responsible for the architecture of the essential word and character count GMX-V (Global Information Management Metrics eXchange – Volume) standard, as well as the revolutionary xml:tm (XML-based text memory) standard, which will change the way in which we view and use translation memory. Zydroń is also chair of the OASIS OAXAL (Open Architecture for XML Authoring and Localization) reference architecture technical committee, which provides an automated environment for authoring and localization based on open standards. Specific areas of specialization: |
16:00 | Health Break in Gallery and Marble Hall |
16:10 – 16:25 Poster Crowdsourcing for NMT Evaluation: Professional Translators versus the Crowd The use of machine translation (MT) has become widespread in many areas, from household users to the translation and localization industries. Recently, the great interest shown in neural machine translation (NMT) models by the research community has made more detailed evaluation of this new paradigm essential, since several comparative studies using human and automatic evaluation of statistical and neural MT have shown that results concerning the improvements of NMT are not yet conclusive (e.g. Castilho et al. 2017). Crowdsourcing has become a frequently-employed option to evaluate MT output. In the field of translation, such crowds may consist of translation professionals, bilingual fans or amateurs, or a combination thereof. Crowdsourcing activities are at the heart of the European-funded research and innovation project TraMOOC (Translation for Massive Open Online Courses). In this presentation, we will focus on the MT evaluation crowdsourcing activities performed by professional translators and amateur crowd contributors. We will present the results of this evaluation based on automated metrics and post-editing effort and compare how translators and the general crowd assessed the quality of the NMT output. Sheila Castilho, Joss Moorkens, Yota Georgakopoulou and Maria Gialama (Dublin City University) Sheila Castilho is a post-doc researcher in the ADAPT Centre in Dublin City University. Her research interests include human and usability evaluation of machine translation, translation technology and audio-visual translation.
|
16:30 | Learning from Sparse Data - Breaking the Big Data Stranglehold The Bible Societies have for many years built systems to help translators working in scores of languages. Their focus is on linguistic analysis rather than synthesis. But there is a problem, shared by all MT systems: until there is enough text to train the system, the output is limited. By the time there is enough training data, much of the task may already be complete. To address this, United Bible Societies has begun to re-imagine what we expect from our translation support systems. A system is now in development which begins learning about the target language at the very start of a Bible translation project. Rather than building stand-alone morphology analysers, glossing engines and aligners, the project constructs a learning framework within which all of these machines, and more, can operate with very small amounts of text, using outputs from one context to strengthen a hypothesis from another. This paper describes a framework within which such processing might take place, how that framework enables learning to take place from very small amounts of data, how that learning is gradually aggregated into a coherent language model, and how this model is used to support the translator in their work.
Jon Riding and Neil Boulton (United Bible Societies)
|
SCATE Prototype: A Smart Computer-Aided Translation Environment We present SCATE: a Smart Computer-Aided Translation Environment developed in the SCATE research project. It is a carefully designed prototype of the user interface of a translation environment that displays translation suggestions coming from different resources in an intelligible and interactive way. Our environment contains carefully designed representations that show relevant context to clarify why certain suggestions are given. In addition, several relationships between the source text and the suggestions are made explicit, to help the user understand how a suggestion can be used and select the most appropriate one. Well-designed interaction techniques are included that improve the efficiency of the user interface. The suggestions are generated through different web services, such as translation memory (TM) fuzzy matching, machine translation (MT) and support for terminology. A lookup mechanism highlights terms in the source segment that are available with their translation equivalents in the bilingual glossary. Tom Vanallemeersch and Sven Coppers (Katholieke Universiteit Leuven)
|
17:00 | Closing Ceremony: AsLing President and TC39 Organising Committee | |
17:30 | End of TC39 |