A Dialogue-Management Evaluation Study
Abstract
We present a highly portable and cooperative dialogue-manager component of a Slovenian and Croatian spoken dialogue system for weather-information retrieval that is currently under development. To evaluate the performance of this component, two Wizard-of-Oz experiments were carried out. The only difference between the two experimental settings was the manner of dialogue management: in the first experiment dialogue management was performed by a human, the wizard, whereas in the second it was performed by the newly implemented dialogue-manager component. The data from both Wizard-of-Oz experiments were evaluated with the PARADISE evaluation framework, a potential general methodology for evaluating and comparing different versions of spoken-language dialogue systems. The study reveals considerable differences in the performance functions when different satisfaction-measure sums, or even individual scores, are taken as the prediction target; it demonstrates the need for the introduced database-parameter dialogue costs; and it confirms the dialogue manager's cooperativity with respect to the incorporated knowledge representation.
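For context, the PARADISE framework (Walker et al., 1997) derives each performance function as a weighted combination of a task-success measure and a set of dialogue-cost measures, with the weights estimated by multiple linear regression against user-satisfaction scores. The sketch below shows only the general form; the particular cost measures c_i used in this study (including the database parameters mentioned above) are defined in the full paper, not here:

\[
\mathrm{Performance} = \alpha \,\mathcal{N}(\kappa) \;-\; \sum_{i=1}^{n} w_i \,\mathcal{N}(c_i),
\qquad
\mathcal{N}(x) = \frac{x - \bar{x}}{\sigma_x}
\]

where \(\kappa\) is the Kappa coefficient measuring task success, the \(c_i\) are dialogue costs (efficiency and quality measures), \(\mathcal{N}\) is z-score normalization, and \(\alpha\) and the \(w_i\) are the regression weights obtained by predicting user satisfaction.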
Full Text: PDF
DOI: https://doi.org/10.2498/cit.1000795
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.