Building multi-domain conversational systems from single domain resources

publication date

  • January 2018

start page

  • 59

end page

  • 69

volume

  • 271

international standard serial number (ISSN)

  • 0925-2312

electronic international standard serial number (EISSN)

  • 1872-8286

abstract

  • Current advances in the development of mobile and smart devices have generated a growing demand for natural human-machine interaction and favored the intelligent assistant metaphor, in which a single interface gives access to a wide range of functionalities and services. Conversational systems constitute an important enabling technology in this paradigm. However, they are usually designed to interact in semantically restricted domains in which users are offered a limited number of options and functionalities. The design of multi-domain systems implies that a single conversational system is able to assist the user in a variety of tasks. In this paper we propose an architecture for the development of multi-domain conversational systems that allows: (1) integrating available multi- and single-domain speech recognition and understanding modules, (2) combining available systems in the different domains involved, so that it is not necessary to generate expensive new resources for the multi-domain system, (3) achieving better domain recognition rates in order to select the appropriate interaction management strategies. We have evaluated our proposal by combining three systems in different domains to show that the proposed architecture can satisfactorily handle multi-domain dialogs. (C) 2017 Elsevier B.V. All rights reserved.
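  The core dispatch step described in the abstract — recognizing the domain of an utterance and handing it to the corresponding single-domain subsystem — can be sketched as follows. This is a minimal illustrative sketch, not the paper's method: the domain names, vocabularies, and the toy word-overlap scorer are all invented for illustration, whereas the paper relies on statistical and neural-network methodologies for domain recognition.

  ```python
  # Hypothetical sketch: a domain recognizer routes each user utterance to the
  # dialog manager of the matching single-domain system. The vocabularies and
  # the overlap score below are illustrative assumptions, not the paper's model.

  # Illustrative vocabulary for each single-domain system (invented).
  DOMAIN_VOCAB = {
      "travel": {"flight", "train", "ticket", "depart", "arrive"},
      "weather": {"weather", "forecast", "rain", "temperature", "sunny"},
      "banking": {"account", "balance", "transfer", "deposit", "card"},
  }

  def recognize_domain(utterance: str) -> str:
      """Pick the domain whose vocabulary overlaps most with the utterance."""
      tokens = set(utterance.lower().split())
      scores = {d: len(tokens & vocab) for d, vocab in DOMAIN_VOCAB.items()}
      return max(scores, key=scores.get)

  def route(utterance: str) -> str:
      """Hand the utterance to the dialog manager of the recognized domain."""
      domain = recognize_domain(utterance)
      # In the proposed architecture, this is where the domain-specific
      # interaction management strategy would take over.
      return f"[{domain}] handling: {utterance}"

  if __name__ == "__main__":
      print(route("I need a ticket for the next train"))
  ```

  The point of the architecture is that each entry in the dispatch table reuses an existing single-domain system unchanged; only the domain recognizer is new.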

keywords

  • dialog systems; multi-domain; spoken interaction; human-machine interaction; neural networks; statistical methodologies