
Abstract

User simulation is an important research area in the field of spoken dialogue systems (SDSs) because collecting and annotating real human-machine interactions is often expensive and time-consuming, yet such data are generally required for designing, training and assessing dialogue systems. User simulations are especially needed when machine learning methods such as Reinforcement Learning are used to optimize dialogue management strategies, since the amount of data required for training exceeds the size of existing corpora. The quality of the user simulation is therefore of crucial importance, because it dramatically influences both the results of SDS performance analysis and the learnt strategy. Assessing the quality of simulated dialogues and of user simulation methods remains an open issue and, although assessment metrics are required, no metric is commonly adopted. In this paper, we survey user simulation metrics in the literature, propose some extensions and discuss these metrics in terms of a list of desired features.
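
To make the notion of a simulation-quality metric concrete, below is a minimal sketch (not taken from the paper itself) of one metric family commonly discussed in the user-simulation evaluation literature: comparing the dialogue-act distributions of a real corpus and a simulated corpus with Kullback-Leibler divergence. The toy corpora, act labels and smoothing constant are illustrative assumptions.

# A minimal sketch of a distribution-based user-simulation metric:
# KL divergence between dialogue-act frequencies of real vs. simulated
# dialogues. Corpora and act labels below are hypothetical toy data.
from collections import Counter
import math

def act_distribution(dialogues, vocabulary, smoothing=1e-3):
    """Smoothed probability distribution over the dialogue-act vocabulary."""
    counts = Counter(act for dialogue in dialogues for act in dialogue)
    total = sum(counts.values()) + smoothing * len(vocabulary)
    return {act: (counts[act] + smoothing) / total for act in vocabulary}

def kl_divergence(p, q):
    """KL(p || q) in nats; shared support is guaranteed by the smoothing."""
    return sum(p[a] * math.log(p[a] / q[a]) for a in p)

# Hypothetical toy corpora of user dialogue-act sequences.
real = [["greet", "inform", "inform", "confirm", "bye"],
        ["greet", "request", "inform", "bye"]]
simulated = [["greet", "inform", "bye"],
             ["greet", "inform", "inform", "bye"]]

vocab = {"greet", "inform", "request", "confirm", "bye"}
p_real = act_distribution(real, vocab)
p_sim = act_distribution(simulated, vocab)

print(kl_divergence(p_real, p_sim))
# KL is asymmetric, so a symmetrised variant is often reported instead:
print(0.5 * (kl_divergence(p_real, p_sim) + kl_divergence(p_sim, p_real)))

A lower value indicates that the simulated user produces dialogue acts with frequencies closer to those observed in real data; note that such corpus-level statistics say nothing about sequential consistency, which is one reason the survey discusses several complementary metric families.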

Details

Title
A survey on metrics for the evaluation of user simulations
Publication title
The Knowledge Engineering Review
Volume
28
Issue
1
Pages
59-73
Number of pages
15
Publication year
2013
Publication date
Mar 2013
Publisher
Cambridge University Press
Place of publication
Cambridge
Country of publication
United Kingdom
ISSN
0269-8889
e-ISSN
1469-8005
Source type
Scholarly Journal
Language of publication
English
Document type
Feature
Document feature
References
ProQuest document ID
1289469692
Document URL
https://www.proquest.com/scholarly-journals/survey-on-metrics-evaluation-user-simulations/docview/1289469692/se-2?accountid=208611
Copyright
Copyright © Cambridge University Press 2012
Last updated
2024-12-03
Database
ProQuest One Academic