Short Bio

I graduated in Computer Science Engineering from the Technical University of Catalonia - Barcelona TECH (UPC) in 2003.

During my bachelor's degree, I worked at a software development company, first developing websites, then designing them, and finally managing my first projects.

I then moved to the Chemical Engineering Department, where I developed my Degree Project. I obtained a research support grant to work on several research projects. In the European project VIP-NET, I developed a tool for managing and forecasting data series related to chemical processes. I also contributed to the development of a tool for generating Gantt diagrams as part of the MOPP project. Finally, I contributed to the development of the CAPE-OPEN standard and a tool to certify compliance with the standard (ESCAPE-14).

After these research experiences in the Chemical Engineering Department, I started a PhD in Artificial Intelligence. In 2007, I obtained the DEA degree and presented my thesis project under the supervision of Prof. Marta Gatius.

During that time, I participated in the European project HOPS (IST-2002-507967, Enabling an Intelligent Natural Language based Hub for the Deployment of Advanced Semantically Enriched Multi-channel Mass-scale Online Public Services). In this project, I focused on the design of a multi-channel, multilingual dialogue system for accessing web services. In particular, I developed a flexible dialogue manager and the natural language generator, as well as the language resources for Catalan and Spanish.

In 2009, together with Juan Roldán, a doctor and researcher at the Hospital Universitario Germans Trias i Pujol, I released the first online DNA biobank for respiratory diseases in Spain: http://www.lungnome.com

From September 2009 to April 2010, I joined the Department of Computer Science and Engineering at the University of Trento for an internship within the ADAMACH project, under the supervision of Dr. Silvia Quarteroni and Prof. Giuseppe Riccardi.

In October 2010, I defended my PhD thesis, which addressed the design of plans for representing dialogue tasks, the management of those tasks, and the adaptive generation of responses.

After my PhD, I was involved in a pilot project on the use of natural language technologies for learning in classrooms. The pilot was sponsored by the Telefónica Chair at UPC and focused on virtual assistants for language learning, especially English Language Learning (ELL). As a result, we developed ALICE, a system for the Acquisition of Language through an Interactive Comprehension Environment.

From May 2011 to December 2014, I was a member of the MT group at TALP-GPLN, along with Prof. Lluís Màrquez, Dr. Cristina España-Bonet, Dr. Daniele Pighin and Dr. Alberto Barrón-Cedeño.

I was involved in the TACARDI project (TIN2012-38523-C02-00), where I worked on document-level machine translation evaluation.

We also conducted three projects in different MT areas. I was involved in the MOLTO project (FP7-ICT-247914, Multilingual Online Translation), where I worked on patent machine translation and patent retrieval systems. I also participated in the OPENMT-2 project, where I developed tools for MT evaluation and error analysis. Finally, I collaborated on the FAUST project, where we dealt with translation quality estimation and the use of user feedback to develop adaptive machine translation systems.

From January 2015 to December 2016, I was a Language Technologist at Oxford University Press. The Dictionaries division was launching the Oxford Global Languages (OGL) programme, which develops digital lexical resources with communities across a wide range of languages. I was a member of the team building LEAP (Lexical Engine and Platform), the platform that makes lexical data available and serves as the fundamental infrastructure for the OGL initiative.

From January 2017 to December 2018, I was an NLP developer at Artificial Solutions, where I contributed to the development of Teneo.

Teneo is a platform for developing and analysing virtual assistants. It enables the rapid development of virtual assistants and helps business users and developers collaborate on creating sophisticated natural language applications without the need for specialist linguistic skills.

Since January 2019, I have been back at Oxford Languages (OUP), specializing in Natural Language Processing approaches applied to linguistic content. I am now focused on leveraging the vast amount of corpus data at Oxford Languages to create linguistic resources that unlock the power of language for learning and for life.