Friday 17 November

08:30 Registration and welcome coffee in Gallery and Marble Hall

Morning session

Lecture Theatre / Education Room
Chair: Olaf-Michael Stefanov


 09:00
When is Less Actually More? A Usability Comparison of Two CAT Interfaces

A growing body of evidence indicates that CAT tools have fundamentally altered the tasks that most non-literary translators engage in, and possibly also their cognitive processing. Recent research suggests that translators may be exposing themselves to unnecessary cognitive friction through the way they use their tools (O’Brien et al. 2017). If tool settings and features do not align with translators’ ways of working, flow can be interrupted and cognitive load increased. Fatigue and reduced attention are two consequences of prolonged cognitive overload, both of which have been associated with an increase in errors and lower productivity. We report on a usability comparison of two interfaces for translation work that differ with respect to the information and functions available on the screen when the factory default settings are used (i.e. one interface has several fields with supporting functions visible, while the other has a simpler look). Eye-tracking measures and indicators from retrospective commentaries and interviews highlight how novices interact with the two interfaces and various features. We consider the implications of our findings in light of recent calls for less cluttered user interfaces and open the discussion of how cognitive load can be reduced.

Martin Kappus and Martin Schuler (Zürcher Hochschule für Angewandte Wissenschaften - Institut für Übersetzen und Dolmetschen Zürich)

Martin Kappus is a lecturer in the ZHAW Institute of Translation and Interpreting. Before joining the ZHAW faculty, he worked for a CAT tool manufacturer and a large language service provider. His research and teaching interests are language technology in general, translation technology in particular, and barrier-free communication.

Martin Schuler is a research associate and head of the usability lab at the ZHAW School of Applied Linguistics. He has a BA in technical communication and an MA in Human Computer Interaction Design. He has been involved in several types of usability projects for a variety of clients.

Co-authors

Maureen Ehrensberger-Dow is professor of translation studies in the ZHAW Institute of Translation and Interpreting. She has been the (co)investigator in several interdisciplinary projects investigating the reality and ergonomics of professional translation.

Romina Schaub-Torsello is a research assistant at the ZHAW Institute of Translation and Interpreting. She is a trained translator and graduated from the ZHAW MA program in Applied Linguistics with a thesis on the impact that disturbances can have on translators’ cognitive flow and behaviour.

Erik Angelone is professor of translation studies at the ZHAW Institute of Translation and Interpreting. His research interests are in process-oriented translator training, translation pedagogy and curricular design, and empirical translation studies.


09:00 – 10:00 Workshop

Terminology Management Tools for Conference Interpreters – Current Tools and How They Address the Specific Needs of Interpreters

Ever since the 1990s, sophisticated terminology management systems have offered a plethora of data fields and management functions to translators, terminologists, and conference interpreters. Nevertheless, interpreter-specific tools have been developed in parallel to suit interpreters’ special needs. They were mostly inspired, at least initially, by one or very few users and developed by a single developer or a very small team.

The intention of this workshop is to give interpreters and other interested participants an overview of the tools available for their terminology work, highlighting the pros and cons of each of them. Providers and developers of terminology management systems will gain valuable insight into the specific needs of conference interpreters and the reasons why, if they use terminology management systems at all, conference interpreters tend not to use the sophisticated term databases that translators or terminologists use.
Some of the tools presented are no longer being developed and no support is offered for them. As they still run perfectly well in the versions at hand, they will nevertheless be shown in order to complete the picture. Due to time restrictions, some tools will be shown “live” and others with the help of screenshots.

In the time available, only the most relevant aspects of terminology management in conference interpreting will be addressed. Which solution is best for filtering and categorising my terminology? Which one offers the best search function for the booth? Which one is best for sharing glossaries and online collaboration, or most convenient for mobile use? Information on price models and supported operating systems will also be provided.
The programs presented will be:

  • Glossarmanager by Glossarmanager GbR/Frank Brempe, Bonn, Germany
  • Glossary Assistant by Reg Martin, Switzerland
  • Interplex by Peter Sand, Eric Hartner, Geneva, Switzerland
  • InterpretBank by Claudio Fantinuoli, Germersheim, Germany
  • Interpreters’ Help by Benoît Werner/Yann Plancqueel, Berlin/Paris, Germany/France
  • Intragloss by Dan Kenig and Daniel Pohoryles, Paris, France
  • Lookup by Christoph Stoll, Heidelberg, Germany
  • Terminus by Nils Wintringham, Zürich, Switzerland

If time allows, generic solutions like Microsoft Excel/Access and Google Sheets will also be discussed as an alternative to interpreter-specific tools.

Anja Rütten (freelance interpreter)

Dr Anja Rütten (Sprachmanagement.net) has been a freelance conference interpreter for German A, Spanish B, English and French C based in Düsseldorf, Germany, since 2001. Apart from the private market, she works for the EU institutions and is a lecturer at the TH Cologne. She holds a degree in conference interpreting as well as a PhD from the University of Saarbrücken (doctoral thesis on Information and Knowledge Management in Conference Interpreting, 2006). As a member of AIIC, the international conference interpreters’ association, she is actively involved in the German region’s working group on profitability. She has specialised in knowledge management since the mid-1990s and shares her insights in her blog at www.dolmetscher-wissen-alles.de.

 09:30
When Google Translate is better than some Human Colleagues, those People are no longer Colleagues

Expressing discomfort with machine translation is a recurrent theme in translator forums on social media. We analyse posts from Facebook, LinkedIn, and Twitter, showing that translators spend considerable time and effort on mocking and discussing machine translation failures. Research has shown a disconnect between translation technology researchers and practitioners, while at the same time indicating the benefits of collaboration between them. Reflecting on an analysis of posts and engagement of translators in social media, we outline three suggestions to bridge this gap: (i) identify and report patterns rather than isolated errors, (ii) organise or participate in evaluation campaigns, and (iii) engage in cross-disciplinary discourse. Rather than pointing out each other’s deficiencies, we call for computer scientists, translation scholars, and professional translators to advance translation technology by acting in concert.

Samuel Läubli (Universität Zürich) and David Orrego-Carmona (Aston University)

Samuel Läubli
I am a PhD Student in Machine Translation at the University of Zurich (CH). I obtained a BA in Computational Linguistics and Language Technology from the University of Zurich (CH) in 2012, and a Master of Science in Artificial Intelligence from the University of Edinburgh (UK) in 2014. From 2014 to 2016, I implemented machine translation systems for post-editing as a Senior Computational Linguist at Autodesk, Inc. My research focus is the intersection of Machine Translation, Translation Process Research, and Human–Computer Interaction.

David Orrego-Carmona
I am a Lecturer in Translation Studies at Aston University (UK) and an associate research fellow at the University of the Free State (SA). After completing a BA in Translation at the Universidad de Antioquia (Colombia) and working as an in-house translator, I gained an MA (2011) and a PhD (2015) in Translation and Intercultural Studies from the Universitat Rovira i Virgili in Tarragona (Spain). In 2016 I was a post-doctoral research fellow at the University of the Free State. My areas of expertise include audiovisual translation and translation technologies. In particular, my research analyses how translation technologies empower users and how the democratization of technology allows these users to become translators.
Other interests include translation process research and the cognitive exploration of translation production and reception, mainly using eye tracking technologies.

 Workshop (continued)
10:00

Keynote Address

A World Without Language Barriers

After centuries of separation and misunderstandings, we are lucky to be living in the generation that will see an end to language barriers between the peoples of our planet. Automatic translation of text is now becoming ubiquitous on the internet, and even communication by voice between people speaking different languages is becoming a reality for everyone.

Early breakthroughs in large-vocabulary speech recognition, machine translation and neural networks prepared the way for the development of the first speech-to-speech translation systems in the early 1990s. Over the 25 years of research that followed, what seemed a crazy idea at first blossomed into an array of practical interpreting systems that revolutionize modern human communication today: cross-language interpretation systems that bring people closer together than ever before.

In this talk, I will review the technologies and deployed interpreting solutions available today:

  • Speech translators running on servers, laptops and smartphones for tourists, medical doctors and international relief workers
  • Communication on tablets in Humanitarian and Government Missions
  • Road sign interpreters that translate road signs while traveling abroad
  • Multilingual subtitling and translation of TV broadcasts
  • Automatic simultaneous Interpretation of lectures given in foreign languages
  • Tools and Technology that facilitate and support human interpreters at the European Parliament

I will review algorithmic advances, progress in performance and usability, and discuss remaining scientific challenges. We will also speculate on a future without language barriers that involves both human and machine interpretation.

Alexander Waibel (Carnegie Mellon University and Karlsruher Institut für Technologie)

Dr. Alexander Waibel is a Professor of Computer Science at Carnegie Mellon University, Pittsburgh and at the Karlsruhe Institute of Technology, Germany. He is the director of the International Center for Advanced Communication Technologies (interACT). The Center works in a network with eight of the world’s top research institutions. The Center’s mission is to develop multimodal and multilingual human communication technologies based on advanced machine learning algorithms to improve human-human and human-machine communication. Prof. Waibel and his team developed many statistical and neural network learning algorithms that made a number of communication breakthroughs possible. These included early multimodal interfaces, the first neural network speech and language processing systems, the first speech translation systems in Europe and the USA (1990/1991), the world’s first simultaneous lecture translation system (2005), and Jibbigo, the world’s first commercial speech translator on a phone (2009).

Dr. Waibel founded and served as chairman of C-STAR, the Consortium for Speech Translation Advanced Research, in 1991. Since then he has directed many research programs in speech, translation, multimodal interfaces and machine learning in the US, Europe and Asia. He served as director of EU-Bridge (2012-2015) and CHIL (2004-2007), two large-scale European multi-site Integrated Project initiatives on intelligent assistants and speech translation services. He also served as co-director of IMMI, a joint venture between KIT, CNRS & RWTH.

Dr. Waibel is an IEEE Fellow and has received many awards for pioneering work on multilingual and multimodal speech communication and translation. He has published extensively (>700 publications, >24,000 citations, h-index 80) in the field and has received/filed numerous patents.

During his career, Dr. Waibel founded and built 10 successful companies. Following the acquisition of Jibbigo by Facebook, Waibel served as founding director of the Language Technology Group at FB. He also deployed speech translation technologies in humanitarian and disaster relief missions. His team recently deployed the first simultaneous interpretation service for lectures at universities and interpretation tools at the European Parliament.

Dr. Waibel received his BS, MS and PhD degrees at MIT and CMU, respectively.

11:00  Health Break in Gallery and Marble Hall

11:10 – 11:25 Poster

Web Accessibility Compliance in Localisation: the Missing Link for an Optimal End-user Experience

In an increasingly competitive business landscape, the ever-evolving localisation industry is now striving for differentiation. One of the strategies adopted, particularly by the largest multinationals, has been to expand their service coverage beyond traditional localisation and the provision of translation technology to satisfy new digital market needs. Concretely, we have observed a considerable increase in the number of companies showcasing knowledge and know-how in Digital Marketing and User Experience Design, always with a clear goal: enhancing the end-user experience when interacting with multilingual web content. But are we really ensuring an optimal experience for all?

If the localisation industry is looking to consolidate this strengthened service portfolio, awareness of key human-computer interaction aspects and best practices, including web accessibility standards, could be crucial for success. Drawing upon data collected through a series of interviews with representatives of six world-leading multinational companies from the localisation industry and one of their clients, this paper will report on the readiness of current localisation workflows and professionals to deliver more accessible multilingual websites for all. We will also review the overlaps between responsive design, SEO and current web accessibility guidelines, and show how compliance with them could bring a competitive advantage to localisation businesses.

Silvia Rodríguez Vázquez (Université de Genève)

Dr Silvia Rodríguez Vázquez is a postdoctoral researcher at the Department of Translation Technology (TIM) of the University of Geneva, Switzerland. Silvia’s research interests include web accessibility, localisation, universal design and usability, and the accessibility of translation technologies. In recent years, she has been a strong advocate of an accessible Multilingual Web for all, disseminating her research through both academic and industry-focused publications, including an article for MultiLingual magazine on the relevance of considering accessibility best practices in web localisation processes.

Dr Sharon O’Brien is a senior lecturer in the School of Applied Language and Intercultural Studies, Dublin City University, Ireland. She is also a Funded Investigator in the Science Foundation Ireland funded research centre, ADAPT, and was Director of the Centre for Translation and Textual Studies at DCU. Her research interests include translator-computer interaction, localisation, cognitive ergonomics in translation and translation quality assessment. She previously worked in the localisation sector as a language technology specialist.

11:30
Speech Recognition in the Interpreter Workstation

In recent years, computer-assisted interpreting (CAI) programs have been used by professional interpreters to prepare assignments, to organize terminological information, and to share event-related information among colleagues (Fantinuoli, 2016, 2017).

One of the key features of such tools is the ability to support users in accessing terminology during simultaneous interpretation (SI). With state-of-the-art CAI tools, interpreters need to manually input a term, or part of it, in order to query the database. The main drawback of this approach is that, in the booth, it is considered both time-consuming and, to some extent, distracting during an activity that requires concentration and rapid information processing (Tripepi Winteringham, 2010). However, initial empirical studies on the use of such tools seem to support the idea that interpreters on the job may have the time and the cognitive capacity to look up terms. Furthermore, CAI tools seem to contribute to improving terminology and overall interpreting performance (Prandi 2015; Biagini 2016). With this in mind, automating the querying system would represent a step forward in reducing the additional cognitive effort needed to perform this human-machine interaction. With more cognitive capacity at their disposal, it is reasonable to assume that a CAI tool equipped with an automatic look-up system would contribute to further improving interpreters’ terminology and overall performance during the simultaneous interpretation of specialized texts.

Speech recognition (SR) has been proposed as a methodology and technology to automate the querying system of CAI tools (Fantinuoli 2016; Hansen-Schirra 2012). In the past, the difficulty of building SR systems accurate enough to be useful outside of a carefully controlled environment hindered their deployment in the interpreting setting. However, recent advances in artificial intelligence, especially since the spread of deep learning and neural networks, have considerably increased the quality of SR (Yu and Deng, 2015). In order to be successfully integrated into an interpreter workstation, both SR and CAI tools must fulfil a series of specific requirements. For example, SR must be truly speaker-independent, have a short reaction time, and be accurate in the recognition of specialized vocabulary. CAI tools, on the other hand, need to overcome some shortcomings of current implementations: for instance, they need to handle morphological variants in the selection of results and offer new ways to present extracted terminology.

In the first part of the paper, a framework for the integration of SR in CAI tools will be defined. In particular, much attention will be devoted to the analysis of state-of-the-art SR and the problems that may arise from its integration into an interpreter workstation. Secondly, the adaptation of traditional querying systems used in CAI tools to allow for keyword spotting will be discussed and a prototype will be presented. Finally, general advantages and shortcomings of SR-CAI integration will be highlighted and prospective developments of the use of SR to support access to terminological data will be introduced, i.e. the recognition of numbers and named entities.
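To make the idea of an automatic look-up concrete, the sketch below shows how a recognised speech segment could be matched against a bilingual glossary by simple keyword spotting. It is a minimal illustration in Python under stated assumptions, not the prototype described in the abstract: the names (Glossary, spot_terms), the matching strategy and the sample entries are invented for this example, and morphological variants are deliberately not handled, which is precisely one of the shortcomings discussed above.

import re

class Glossary:
    """Illustrative keyword-spotting lookup over a source-to-target term list."""

    def __init__(self, entries):
        # entries: mapping of source-language terms to target-language equivalents
        self.entries = {source.lower(): target for source, target in entries.items()}
        self.max_len = max(len(source.split()) for source in self.entries)

    def spot_terms(self, transcript_segment):
        # Scan the recognised segment and return (source, target) pairs,
        # preferring the longest glossary match at each token position.
        tokens = re.findall(r"\w+", transcript_segment.lower())
        hits = []
        for i in range(len(tokens)):
            for n in range(self.max_len, 0, -1):
                candidate = " ".join(tokens[i:i + n])
                if candidate in self.entries:
                    hits.append((candidate, self.entries[candidate]))
                    break
        return hits

glossary = Glossary({"monetary policy": "Geldpolitik", "exchange rate": "Wechselkurs"})
print(glossary.spot_terms("The central bank adjusted its monetary policy yesterday"))
# [('monetary policy', 'Geldpolitik')]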

Claudio Fantinuoli (Johannes Gutenberg Universität Mainz in Germersheim)

Claudio Fantinuoli is a lecturer at the Johannes Gutenberg University Mainz in Germersheim and at the Institute for Translation Studies in Innsbruck. His research and teaching areas are language technologies in translation and interpreting.

11:30 – 12:30 Silver Sponsor Workshop

Building Artificial Intelligence on Top of a Translation Waste Mountain

The Translation department of KU Leuven researches the revision and correction of translations. Recently we joined forces with Televic to build our own tools for smart translation revision: we are interested in the “waste”, in the errors that translators make. From a didactic point of view, the analysis of those errors is as interesting as the correct translation. We will show which (game-changing) teaching and learning conclusions we can draw from the analysis of the “waste mountain”.

TranslationQ and RevisionQ are two tools to evaluate and score translations. Translation evaluation is an important and labour-intensive task in the training and selection of good translators. Most of this work is done by human evaluators and has to be repeated for every single translation. Our academic experiments have proven that both tools are as accurate as, and even more objective than, human evaluation. TranslationQ and RevisionQ are especially useful for evaluating large groups of candidates. Finally, the language correction algorithms have been developed to be language-independent, making the tools usable for many language combinations.

Bert Wylin and Hendrik Kockaert (Katholieke Universiteit Leuven/Televic)

Bert Wylin (MA Applied Linguistics – Romance languages) has both an academic and a business profile. He has worked at KU Leuven since 1993, doing research in the fields of technology-supported teaching and (language) learning. In 2001, he founded a spin-off company, now merged into Televic Education, developing and servicing educational e-learning and e-assessment projects worldwide and specializing in languages and computer-assisted language learning (CALL).

 12:00
Designing a Multimethod Study on the Use of CAI Tools during Simultaneous Interpreting

Even though studies on computer-assisted interpreting still represent a very small percentage of the body of research, the topic is starting to gain attention in the interpreting community. So far, only a handful of studies have focused on the use of CAI tools in the interpreting booth (Gacek, 2015; Biagini, 2015; Prandi, 2015a, 2015b). While they did shed some light on the usability and reception of CAI tools, as well as on the terminological quality of simultaneous interpreting performed with the support of such tools, these studies were only product-oriented. We still lack process-oriented, empirical research on computer-aided interpreting. A pilot study currently underway at the University of Mainz/Germersheim (Prandi, 2016, 2017) aims at bridging this gap by combining process- and product-oriented methods. After discussing the theoretical models adopted to date in CAI research, this paper will suggest how an adaptation of Seeber’s (2011) Cognitive Load Model can be better suited than Gile’s (1988, 1997, 1999) Effort Model to operationalize hypotheses on the use of CAI tools in the booth. The paper will then introduce the experimental design adopted in the study, with a focus on the features of the texts used and on the rationale behind their creation.

Bianca Prandi (Johannes Gutenberg Universität Mainz in Germersheim)

Bianca Prandi is a doctoral student at the University of Mainz/Germersheim. She holds a BA in Intercultural Linguistic Mediation and an MA in Interpreting from the University of Bologna/Forlì. She graduated with a dissertation on the integration of the CAI tool InterpretBank in the curriculum of interpreting students. She is currently working on her doctoral dissertation at Mainz University under the supervision of Prof. Dr. Hansen-Schirra. Her main research interests are new technologies in interpreting and cognition.

Workshop (continued)
 12:30 Buffet Lunch in Gallery and Marble Hall

13:25 – 13:40 Poster

VIP: Voice-text Integrated System for Interpreters

Interpreting is an extremely strenuous task, since interpreters must devote much effort to decoding, memorising and encoding a message. Interpreters should, as translators and other language professionals do, benefit from the development of technology and thereby enjoy considerable improvements in their working conditions. However, their work currently relies by and large on traditional or manual methods, and technological advances in interpreting have been extremely slow.

Unlike translators, for whom a myriad of computer-assisted tools are available, interpreters have not benefited from the same level of automation or innovation. Fortunately, there is a growing interest in developing tools aimed at interpreters as end users, although the number of these technology tools is still very low and they are not intended to cover all interpreters’ needs.

The goal of the VIP project is to revolutionise the interpreting industry by creating an interpreting workbench tool which will have the same effect that language technologies for translators have had in the translation industry in recent decades. To this end, we intend to (a) identify the real needs of interpreters and how and to what extent their work can be automated, (b) survey existing interpreting technologies, and (c) develop the first integrated system to enhance the productivity of the work of interpreters (professionals, trainers and trainees), both during the interpretation process and in the preparation of various interpretation tasks.

Gloria Corpas Pastor (Universidad de Málaga)

Gloria Corpas Pastor holds a BA in German Philology (English) from the University of Malaga and a PhD in English Philology from the Universidad Complutense de Madrid (1994).

She is Visiting Professor in Translation Technology at the Research Institute in Information and Language Processing (RIILP) of the University of Wolverhampton, UK (since 2007), and Professor in Translation and Interpreting (2008). She has published and been cited extensively and is a member of several international and national editorial and scientific committees. She is the Spanish delegate for AEN/CTN 174 and CEN/BTTF 138, was actively involved in the development of UNE-EN 15038:2006 and is currently involved in the future ISO standard (ISO TC37/SC2-WG6 “Translation and Interpreting”).

She is a regular evaluator of university programmes and curriculum design for the Spanish Agency for Quality Assessment and Accreditation (ANECA) and various research funding bodies.

She is President of AIETI (Iberian Association of Translation and Interpreting Studies), a member of the Advisory Council of EUROPHRAS (European Society of Phraseology) and Vice-President of AMIT-A (Association of Women in Science and Technology of Andalusia).

13:40 – 13:55 Poster

Using Online and/or Mobile Virtual Communication Tools in Interpreter and Translator Training: Pedagogical Advantages and Drawbacks

In this paper we discuss some preliminary results of a comparative study into the use of online and/or mobile virtual communication tools in the master programmes of interpreting and translation at Vrije Universiteit Brussel. Both master programmes are based on a situated learning model, generally understood as a didactic method in which translation and interpreting students learn the profession and acquire professional skills through hands-on experience, by being exposed to simulated or real work environments, situations and tasks. In recent years, this learning-by-doing approach (or authentic experiential learning) has gained considerable traction in translator and interpreter education. In creating authentic learning contexts for student translators and interpreters, technology has become an important factor to take into consideration, given the unmistakable impact that it has on professional translation and interpreting practices. After a review of previous studies dealing with the use of virtual technologies in translator and interpreter training, several virtual communication tools will be tested and evaluated from both the trainers’ and the trainees’ perspectives. Finally, we will reflect on the tools’ pedagogical advantages and drawbacks in order to formulate recommendations for using these technologies in translator and interpreter training contexts.

Koen Kerremans and Helene Stengers (Vrije Universiteit Brussel)

Koen Kerremans is a professor of terminology, specialised translation and translation technology at the Department of Linguistics and Literary Studies of Vrije Universiteit Brussel, where he obtained his PhD in 2014. His research interests pertain to applied linguistics, language technologies, ontologies, terminology, special language and translation studies. He is the coordinator of VUB’s master programme in translation and teaches courses on terminology, technical translation and technologies for translators in the master programmes of translation and interpreting.

Helene Stengers is a professor of Spanish proficiency, translation and interpreting at the Department of Linguistics and Literary Studies of Vrije Universiteit Brussel, where she obtained her PhD in 2009. Her research interests lie in applied comparative linguistics (especially English and Spanish), cognitive linguistics, phraseology and foreign language acquisition (mainly vocabulary acquisition) from a multilingual and intercultural perspective, with a view to optimizing foreign language pedagogy as well as translation and interpreter training. She is the research director of the Brussels Institute for Applied Linguistics.

Afternoon session

Lecture Theatre / Education Room
Chair: Joanna Drugan


 14:00
A Comparative User Evaluation of Tablets and Tools for Consecutive Interpreters

Since the release of the first modern tablets, practicing interpreters have begun to consider how tablets could be used to support their interpreting practice. The first phase of a recent mixed-methods study assessed the pros and cons of different tablets, applications and styluses, finding that professional interpreters were effectively using tablets for consecutive interpreting in a wide range of settings. Results also indicated that certain types of tablets, applications and styluses were especially appreciated by practitioners (Goldsmith & Holley, 2015). This paper presents the second phase of that study, building on previous conclusions to derive an instrument for carrying out a comparative user evaluation of these tablet interpreting tools. Using this instrument, it compares and contrasts the different tablets and accessories currently available on the market. Its conclusions are expected to serve as a useful guide to allow interpreters to pick the tablets, applications and styluses which best meet their needs.

Joshua Goldsmith (Interpreter and Université de Genève)

Josh Goldsmith is an EU-accredited interpreter working from Spanish, French, Italian and Catalan into English. He splits his time between interpreting and working as a trainer and researcher at the University of Geneva, where he focuses on the intersection between interpreting, technology and education. A lover of all things tech, Josh shares tips about technology and interpreting in conferences and workshops, the Interpreter’s Toolkit column (https://aiic.net/search/tags/the-interpreter’s-toolkit), and on Twitter (@Goldsmith_Josh).

Silver Sponsor Workshop

Setting up SDL Trados Studio for best match scenarios

This workshop will cover “Setting up SDL Trados Studio 2017 to make the most of your available resources and maximise productivity in the most common translation scenarios” with the following agenda:

  • The realities of the translation industry today
  • The life of a CAT
  • Everyday translation scenarios
  • Four key Productivity Aids
  • Best Match Service
  • What’s Next?
Moderated by Neil Ferguson

Over the last 20 years Neil has worked in a variety of European Product Management & Marketing Management positions for international companies and is well versed in the challenges that come with localising content for multiple European markets.

As Product Marketing Manager at SDL for Translation Productivity solutions, including SDL Trados Studio, SDL MultiTerm and SDL Studio GroupShare, Neil is a firm believer that even though today’s technology has dramatically aided the delivery and management of localised content, the next few years are going to be even more exciting and dramatic: trends such as IoT and on-demand digital experiences will only serve to accelerate the demand for content in local languages. So the need to be ready is paramount!

 14:30 Panel Discussion Moderated by Danielle D’Hayer (London Metropolitan University)

New Frontiers in Interpreting Technology

Technology has the potential to shape the future of interpreting. Indeed, it has already begun to do so. From tools that assist interpreters to devices that may replace them altogether, and from technologies for delivering interpreting services to tools for teaching interpreting, this panel discussion will span the gamut of technology in interpreting, considering current developments and future innovations.
A panel of leading practitioners, researchers and trainers with experience in the private and institutional markets will invite audience members to engage with the state of the art of technology in our industry.
Topics will include:
● enhancements to conference technology, including digital consoles, bone-conduction headphones, and efforts to leverage mobile phone technology for simultaneous interpreting;
● advances in collaborative glossary management;
● the impact of remote interpreting delivery platforms on different market segments;
● how software, apps and mobile devices can be used in all phases of the interpreting process;
● tools for online and technology-assisted interpreter training, including SmartPens and online communities like Speechpool and InterpretimeBank; and
● innovations in virtual reality.
Join us for this interactive conversation about the present and future of technology for interpreters, and consider how these technologies may shape your personal practice and the industry as a whole.

Anja Rütten, Alexander Drechsel, Joshua Goldsmith, Marcin Feder, and Barry Olsen

Danielle D’Hayer is an associate professor in interpreting studies at London Metropolitan University. She is the course director of the MA Conference Interpreting, the MA Interpreting, MA Public Service Interpreting and interpreting short courses that include a Training the Trainers for Interpreting Studies programme and a portfolio of Continuous Professional Development (CPD) activities. These courses, which she developed single-handedly, have attracted both professional interpreters and novices from the UK and abroad.
Danielle researches communities of practice for interpreting studies. Her main interests include innovative ways to enhance formal and informal blended learning using social media, new technologies and online platforms. You can follow her on Twitter @DDhayer.

Dr Anja Rütten (Sprachmanagement.net) has been a freelance conference interpreter for German A, Spanish B, English and French C based in Düsseldorf, Germany, since 2001. Apart from the private market, she works for the EU institutions and is a lecturer at the TH Cologne. She holds a degree in conference interpreting as well as a PhD from the University of Saarbrücken (doctoral thesis on Information and Knowledge Management in Conference Interpreting, 2006). As a member of AIIC, the international conference interpreters’ association, she is actively involved in the German region’s working group on profitability. She has specialised in knowledge management since the mid-1990s and shares her insights in her blog at www.dolmetscher-wissen-alles.de.

Alexander Drechsel has been a staff interpreter with the European Commission’s Directorate-General for Interpretation since 2007. He studied at universities in Germany, Romania and Russia, and his working languages are German (A), English (B), French and Romanian (C). Alexander is also a bit of a ‘technology geek’ with a special interest in tablets and other mobile devices, and regularly shares his passion and knowledge with fellow interpreters during training sessions and on the web at http://www.tabletinterpreter.eu/.

Joshua Goldsmith is an EU-accredited interpreter working from Spanish, French, Italian and Catalan into English. He splits his time between interpreting and working as a trainer and researcher at the University of Geneva, where he focuses on the intersection between interpreting, technology and education. A lover of all things tech, Josh shares tips about technology and interpreting in conferences and workshops, the Interpreter’s Toolkit column (https://aiic.net/search/tags/the-interpreter’s-toolkit), and on Twitter (@Goldsmith_Josh).

Marcin Feder has been an interpreter at the European Parliament since 2003 and was the Head of the Polish Interpretation Unit from 2012 to 2016. He is now the Head of the Interpreter Support and Training Unit and the acting Head of the Multilingualism and Succession Planning Unit. He studied at Adam Mickiewicz University in Poznan, Poland (MA in English and PhD in Linguistics, focusing on Computer-Assisted Translation) and the Monterey Institute of International Studies, USA (Junior Fulbright Scholarship). These days, apart from his regular managerial duties, his main interests are the use of tablets in the booth, new technologies to support interpreters in their daily work and all things paper-smart. He is also an avid runner.

Barry Slaughter Olsen is a veteran conference interpreter and technophile with over two decades of experience interpreting, training interpreters and organizing language services. He is an associate professor at the Middlebury Institute of International Studies at Monterey (MIIS), the founder and co-president of InterpretAmerica, and General Manager of Multilingual Operations at ZipDX. He is a member of the International Association of Conference Interpreters (AIIC). Barry is the author of “The Tech-Savvy Interpreter”, a monthly column and video series published in Jost Zetzsche’s Tool Box Journal focusing on interpreting technology. For updates on interpreting, technology and training, follow him on Twitter @ProfessorOlsen.

15:00 – 16:00 Workshop

The Localization Industry Word Count Standard: GMX-V - Slaying the Word Count Dragon

Word and character counts are the basis of virtually all metrics relating to costs in the L10N industry. An enduring problem with these metrics has been the lack of consistency between various computer-assisted translation (CAT) tools and translation management systems (TMS). Notwithstanding these inconsistencies, there are also issues with the common word counts generated by word processing systems such as Microsoft Word. Not only do different CAT and TMS systems generate differing word and character counts, but there is also a complete lack of transparency as to how these counts are arrived at: specifications aren’t published and systems can produce quite widely differing metrics. To add clarity, consistency and transparency to the issue of word and character counts, the Global Information Management Metrics eXchange Volume (GMX-V) standard was created. First as version 1.0 and now as version 2.0, GMX-V addresses the problem of counting words and characters in a localization task and how to exchange such data electronically. This workshop goes through the details of how to identify and count words and characters using a standard canonical form, including documents in Chinese, Japanese and Thai, as well as how to exchange such data between systems.
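As a rough illustration of why a standard canonical form matters, the Python sketch below normalises a segment before counting, so that any two systems applying the same rules arrive at the same totals. It is a minimal sketch of the underlying idea only and does not implement the GMX-V specification: the function name and tokenisation rules are assumptions made for this example, and scripts without word delimiters (such as Chinese, Japanese or Thai) would rely on character counts instead.

import re
import unicodedata

def canonical_counts(text):
    # Reduce the text to a canonical form first (Unicode NFC normalisation,
    # collapsed whitespace) so that every system counts the same string.
    text = unicodedata.normalize("NFC", text)
    text = re.sub(r"\s+", " ", text).strip()
    # Treat a word as a run of letters/digits, optionally joined by hyphens
    # or apostrophes; count characters excluding whitespace.
    words = re.findall(r"[^\W_]+(?:[-'][^\W_]+)*", text)
    characters = len(text.replace(" ", ""))
    return {"words": len(words), "characters": characters}

print(canonical_counts("L10N word counts  vary\tbetween   tools."))
# {'words': 6, 'characters': 31}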

Andrzej Zydroń (XTM)

Andrzej Zydroń MBCS CITP

CTO @ XTM International, Andrzej Zydroń is one of the leading IT experts on Localization and related Open Standards. Zydroń sits or has sat on the following Open Standard Technical Committees:

1. LISA OSCAR GMX
2. LISA OSCAR xml:tm
3. LISA OSCAR TBX
4. W3C ITS
5. OASIS XLIFF
6. OASIS Translation Web Services
7. OASIS DITA Translation
8. OASIS OAXAL
9. ETSI LIS
10. DITA Localization
11. Interoperability Now!
12. Linport

Zydroń has been responsible for the architecture of the essential word and character count GMX-V (Global Information Management Metrics eXchange) standard, as well as the revolutionary xml:tm (XML based text memory) standard which will change the way in which we view and use translation memory. Zydroń is also chair of the OASIS OAXAL (Open Architecture for XML Authoring and Localization) reference architecture technical committee which provides an automated environment for authoring and localization based on Open Standards.
Zydroń has worked in IT since 1976 and has been responsible for major successful projects at Xerox, SDL, Oxford University Press, Ford of Europe, DocZone and Lingo24 in the fields of document imaging, dictionary systems and localization. Zydroń is currently working on new advances in localization technology based on XML and linguistic methodology.
Highlights of his career include:
1. The design and architecture of the European Patent Office patent data capture system for Xerox Business Services.
2. Writing a system for the automated optimal typographical formatting of generically encoded tables (1989).
3. The design and architecture of the Xerox Language Services XTM translation memory system.
4. Writing the XML and SGML filters for SDL International’s SDLX Translation Suite.
5. Assisting Oxford University Press, the British Council and Oxford University in work on the New Dictionary of National Biography.
6. Design and architecture of Ford’s revolutionary CMS Localization system and workflow.
7. Technical Architect of XTM International’s revolutionary Cloud based CAT and translation workflow system: XTM.

Specific areas of specialization:
1. Advanced automated localization workflow
2. Author memory
3. Controlled authoring
4. Advanced Translation memory systems
5. Terminology extraction
6. Terminology Management
7. Translation Related Web Services
8. XML based systems
9. Web 2.0 Translation related technology

16:00 Health Break in Gallery and Marble Hall

16:10 – 16:25 Poster

Crowdsourcing for NMT Evaluation: Professional Translators versus the Crowd

The use of machine translation (MT) has become widespread in many areas, from household users to the translation and localization industries. Recently, the great interest shown in neural machine translation (NMT) models by the research community has made more detailed evaluation of this new paradigm essential, since several comparative studies using human and automatic evaluation of statistical and neural MT have shown that results concerning the improvements of NMT are not yet conclusive (e.g. Castilho et al. 2017). Crowdsourcing has become a frequently-employed option to evaluate MT output. In the field of translation, such crowds may consist of translation professionals, bilingual fans or amateurs, or a combination thereof. Crowdsourcing activities are at the heart of the European-funded research and innovation project TraMOOC (Translation for Massive Open Online Courses). In this presentation, we will focus on the MT evaluation crowdsourcing activities performed by professional translators and amateur crowd contributors. We will present the results of this evaluation based on automated metrics and post-editing effort and compare how translators and the general crowd assessed the quality of the NMT output.

Sheila Castilho, Joss Moorkens, Yota Georgakopoulou and Maria Gialama (Dublin City University)

Sheila Castilho is a post-doc researcher in the ADAPT Centre in Dublin City University. Her research interests include human and usability evaluation of machine translation, translation technology and audio-visual translation.

 

Co-authors

Joss Moorkens is an Assistant Professor and researcher in the ADAPT Centre, within the School of Applied Languages and Intercultural Studies at Dublin City University (DCU), with interests in human evaluation of translation technology, ethics and translation technology, and translation evaluation.

Federico Gaspari teaches English linguistics and translation studies at the University for Foreigners “Dante Alighieri” of Reggio Calabria (Italy) and is a postdoctoral researcher at the ADAPT Centre in Dublin City University, where he works on EU projects focusing on machine translation evaluation.

Andy Way is a Professor of Computing at Dublin City University (DCU) and Deputy Director of the ADAPT Centre. He is a former President of the European Association for Machine Translation and edits the journal Machine Translation.

Panayota (Yota) Georgakopoulou holds a PhD in translation and subtitling and is a seasoned operations executive in the subtitling and translation industries, with significant experience in translation academia as well. She is currently Senior Director, Research & Int’l Development at Deluxe Media, leading translation initiatives and research on language technologies and tools, and their application in subtitling workflows.

Maria Gialama is currently working as Account Manager, R&D at Deluxe Media, focusing on the application of language technologies in subtitling. Maria received her MA in translation and subtitling from the University of Surrey and has extensive experience in translation operations management.

Vilelmini Sosoni is a Lecturer at the Ionian University in Greece. She has taught Specialised Translation in the UK and Greece and has extensive industrial experience. Her research interests lie in the areas of the translation of institutional texts, translation technology and audiovisual translation.

Rico Sennrich is a research associate at the University of Edinburgh. His main research interest is machine learning, especially in the area of machine translation and natural language processing.

 

16:30
Learning from Sparse Data - Breaking the Big Data Stranglehold

The Bible Societies have for many years built systems to help translators working in scores of languages. Their focus is on linguistic analysis rather than synthesis. But there is a problem, shared by all MT systems: until there is enough text to train the system, the output is limited. By the time there is enough training data, much of the task may already be complete.

To address this, United Bible Societies has begun to re-imagine what we expect from our translation support systems. A system is now in development which begins learning about the target language at the very start of a Bible translation project. Rather than building stand-alone morphology analysers, glossing engines and aligners, the project constructs a learning framework within which all of these machines, and more, can operate with very small amounts of text, using outputs from one context to strengthen a hypothesis from another.

This paper describes a framework within which such processing might take place, how that framework enables learning to take place from very small amounts of data, how that learning is gradually aggregated into a coherent language model and how this model is used to support the translator in their work.

 

Jon Riding and Neil Boulton (United Bible Societies)

Jon Riding leads the Glossing Technologies Project for United Bible Societies. The project develops language independent NLP systems to assist Bible translators by automatically analysing elements of natural languages. He is a Visiting Researcher at Oxford Brookes University.
In addition to his work in computational linguistics for UBS, Jon teaches Koine Greek, Classical Hebrew and Biblical Studies for various institutions in the UK, including Sarum College (where he is an associate lecturer).
His research interests include the automatic analysis of complex non-concatenative structures in natural language, the development of the New Testament text and the writings of the early Church Fathers.
E: jonriding@biblesocieties.org

 

 

Neil Boulton works as part of the Glossing Technologies Project for United Bible Societies. The project develops language independent NLP systems to assist Bible translators by automatically analysing elements of natural languages. Previously, most of his working life was spent in various IT roles for the British and Foreign Bible Society, based in Swindon, UK.
E: neilboulton@biblesocieties.org

SCATE Prototype: A Smart Computer-Aided Translation Environment

We present SCATE, a Smart Computer-Aided Translation Environment developed in the SCATE research project. It is a carefully designed prototype of the user interface of a translation environment that displays translation suggestions coming from different resources in an intelligible and interactive way. Our environment contains carefully designed representations that show relevant context to clarify why certain suggestions are given. In addition, several relationships between the source text and the suggestions are made explicit to help the user understand how a suggestion can be used and select the most appropriate one. Well-designed interaction techniques are included that improve the efficiency of the user interface. The suggestions are generated through different web services, such as translation memory (TM) fuzzy matching, machine translation (MT) and support for terminology. A lookup mechanism highlights terms in the source segment that are available with their translation equivalents in the bilingual glossary.
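To give a flavour of two of the suggestion sources mentioned above, the sketch below pairs a naive fuzzy translation memory lookup with a simple glossary highlighter. It is an illustrative Python sketch only, not the SCATE implementation: the sample data, the similarity threshold and the function names (fuzzy_matches, highlight_terms) are assumptions made for this example.

from difflib import SequenceMatcher

# Toy resources; an actual environment would query web services instead.
translation_memory = {
    "The committee approved the annual budget.": "Het comité keurde de jaarbegroting goed.",
}
glossary = {"annual budget": "jaarbegroting", "committee": "comité"}

def fuzzy_matches(segment, threshold=0.7):
    # Return TM entries whose source side is similar enough to the new segment.
    results = []
    for source, target in translation_memory.items():
        score = SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score >= threshold:
            results.append((round(score, 2), source, target))
    return sorted(results, reverse=True)

def highlight_terms(segment):
    # Mark glossary terms found in the source segment with their equivalents.
    for term, equivalent in glossary.items():
        segment = segment.replace(term, "[" + term + " -> " + equivalent + "]")
    return segment

segment = "The committee approved the annual budget for 2018."
print(fuzzy_matches(segment))
print(highlight_terms(segment))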

Tom Vanallemeersch and Sven Coppers (Katholieke Universiteit Leuven)

Dr. Vincent Vandeghinste is a post-doctoral researcher at KU Leuven and has been working on natural language processing and translation technologies since 1998. He is the project coordinator of the SCATE project (Smart Computer-Aided Translation Environment), a 3-million-euro Flemish project to improve the translation environment of professional translators, and has (co-)authored about 70 publications in the areas of corpus building, treebanking, machine translation, augmented alternative communication and text-to-pictograph translation. He teaches Natural Language Processing, Language Engineering Applications and Linguistics and Artificial Intelligence in the advanced masters program for Artificial Intelligence at KU Leuven, as well as Computational Linguistics to students of Linguistics.

Sven Coppers studied computer science at Hasselt University (UHasselt) and is interested in various aspects of Human-Computer Interaction, such as 2D and 3D visualizations, user-centered software engineering, context-awareness and intelligibility (comprehensibility). Currently, he is doing a PhD on making context-aware Internet-of-Things applications more understandable and controllable for end users. In addition, he is working on user interfaces for the translation environment within the SCATE project, with a focus on usability, intelligibility, customization and collaboration.

Dr. Jan Van den Bergh is a post-doctoral researcher and research assistant at Hasselt University and member of the HCI group in the research institute Expertise Centre for Digital Media. His research is situated in user-centred engineering of context-aware, mobile or collaborative systems. His recent research is focused on how interactive technology can support knowledge workers and/or end users in specific domains, including professional translation and human-robot collaboration in manufacturing. He obtained a PhD in computer science (human-computer interaction) from Hasselt University. He co-organized several scientific workshops and served as PC member for several conferences and workshops. He is a member of the IFIP working group 13.2 on User-Centred Systems Design.

Tom Vanallemeersch is a researcher in the field of translation technology at KU Leuven. After his studies in translation and in language technology during the early nineties, he focused his attention on various forms of translation software, including translation memories, automated alignment and machine translation. His career involves both academia and industry. He worked in two Belgian translation agencies (Xplanation, LNE International), a French MT development company (Systran), and the MT team of the Commission’s DG Translation in Luxembourg. In academia, he taught the ins and outs of TM and MT to Applied Linguistics students at Lessius Hogeschool (now KU Leuven), then started working at the University’s Centre for Computational Linguistics. He currently performs research in SCATE (Smart Computer-Aided Translation Environment), an extensive, six-team project coordinated by the Centre. While Tom is passionate about translation technology, his career has sporadically shifted to other types of natural language processing, such as terminology extraction (coordination of the TermTreffer project at the Dutch Language Union).

Dr. Els Lefever is an assistant professor in the LT3 language and translation technology team at Ghent University. She started her career as a computational linguist at the R&D department of Lernout & Hauspie Speech Products. She holds a PhD in computer science from Ghent University on ParaSense: Parallel Corpora for Word Sense Disambiguation (2012). She has strong expertise in machine learning of natural language and multilingual natural language processing, with a special interest in computational semantics, cross-lingual word sense disambiguation, event extraction and multilingual terminology extraction. She is currently involved in the SCATE project (work package on bilingual terminology extraction from comparable corpora) and the Multilingual IsA project (multilingual database of hypernym relations) and supervises PhD projects on terminology extraction from comparable corpora, semantic interoperability of medical terminology, irony detection and disambiguation of terminology in a cross-disciplinary context. She teaches Terminology and Translation Technology, Language Technology and Digital Humanities courses.

Ayla Rigouts Terryn is a PhD researcher at the Language and Translation Technology Team (LT3) research group, at the Department of Translation, Interpreting and Communication of Ghent University. She graduated from the University of Antwerp in 2014 with a Master’s in Translation and worked there as a scientific fellow on a one-year research project about translation revision competence. In 2015, she joined the LT3 research group to work on the terminology work package of the SCATE project. She is currently working as an FWO scholar on her PhD about bilingual terminology extraction from comparable corpora.

Bram Bulté obtained an MA in linguistics and literature (2005) and a PhD in linguistics (2013) from Brussels University, and an MA in statistics (2016) and in artificial intelligence, option speech and language technology (2017) from KU Leuven. He worked as a translator for the European Parliament (2007-2015) and as a guest professor at Brussels University (2015-2017). He currently works for the Centre for Computational Linguistics at KU Leuven. His research focuses on second language acquisition, multilingual education and natural language processing.

Iulianna van der Lek is passionate about language technologies and is always looking for ways to improve translators’ efficiency. She is currently working as a Research Associate and teaching assistant at KU Leuven, Faculty of Arts, Campus Antwerp. Her research focuses on computer-assisted translation tools and their impact on the translation process, usability, and methods of acquiring domain-specific terminology. As a certified memoQ, Memsource and SDL trainer, she teaches computer-assisted translation tools to both students and freelance translators. Besides her research and teaching activities, she also coordinates the Postgraduate Programme in Specialised Translation, developing new modules on language technologies and training programmes for professional translators.

Prof. Dr. Karin Coninx is a full professor at Hasselt University (UHasselt), Belgium. She obtained a PhD in sciences (computer science) for a study of human-computer interaction (HCI) in immersive virtual environments. Her research interests include user-centred methodologies, persuasive applications in the context of eHealth, technology-supported rehabilitation, serious games, (multimodal) interaction in virtual environments, haptic feedback, intelligibility, mobile and context-sensitive systems, interactive work spaces, and the model-based realisation of user interfaces.
Karin Coninx has co-authored more than 300 international publications in scientific journals and conference proceedings. She teaches several courses on computer science and specific HCI subjects at Hasselt University, initiated a Master in HCI and co-initiated master profiles in Health Informatics and Engineering Interactive Systems. She presides the Interfaculty Board of the School for Educational Studies at Hasselt University (since 2017).

Prof. Dr. Frieda Steurs is a full professor at the KU Leuven, Faculty of Arts, campus Antwerpen. She works in the field of terminology, language technology, specialized translation and multilingual document management. She is a member of the research group Quantitative Lexicology and Variation Linguistics (QLVL). Her research includes projects with industrial partners and public institutions. She is the founder and former president of NL-TERM, the Dutch terminology association for both the Netherlands and Flanders, and the head of the ISO TC/37 standardization committee for Flanders and the Netherlands. She is also the president of TermNet, the International Network for Terminology (Vienna). Since 2016, she has been the head of research of the INT, the Dutch Language Institute in Leiden. In this capacity, she is responsible for the collection, development and hosting of all digital language resources for the Dutch language. The INT is the CLARIN centre for Flanders, Belgium.

17:00 Closing Ceremony: AsLing President and TC39 Organising Committee
17:30 End of TC39