ACHI 2014, The Seventh International Conference on Advances in Computer-Human Interactions
Modeling User's State During Dialog Turn Using HMM for Multi-modal Spoken Dialog System
Authors:
Yuya Chiba
Masashi Ito
Akinori Ito
Keywords: multi-modal information processing; user's state; spoken dialog system
Abstract:
Conventional spoken dialog systems cannot estimate the user's state while waiting for input, because the estimation process is triggered only by observing the user's utterance. This is a problem when, for some reason, the user cannot produce an utterance in response to the system's prompt. To help such users before they give up, the system should respond to the requests they express unconsciously. Based on this assumption, we have examined a method to estimate the user's state before an utterance is made, using the user's non-verbal behavior. The present paper proposes an automatic discrimination method based on time-sequential non-verbal information about the user. In this method, the user's internal state is estimated from multi-modal information such as speech, facial expression, and gaze, modeled using a Hidden Markov Model (HMM).
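The abstract's approach (one HMM per user state, classification by sequence likelihood over time-sequential observations) can be illustrated with a minimal sketch. This is not the authors' implementation: the state labels, parameters, and discretized observation symbols below are hypothetical placeholders, and a real system would use continuous multi-modal features rather than a toy discrete alphabet.

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """P(obs | HMM) via the forward algorithm.
    pi: initial state probabilities (N,)
    A:  state transition matrix (N, N)
    B:  discrete emission probabilities (N, M)
    obs: sequence of observation-symbol indices."""
    alpha = pi * B[:, obs[0]]          # initialize with first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate and absorb next observation
    return alpha.sum()

# Hypothetical two-class setup: one HMM per user state
# ("ready" to speak vs. "confused"); all parameters are illustrative only.
hmms = {
    "ready": (np.array([0.8, 0.2]),
              np.array([[0.9, 0.1], [0.2, 0.8]]),
              np.array([[0.7, 0.3], [0.1, 0.9]])),
    "confused": (np.array([0.3, 0.7]),
                 np.array([[0.5, 0.5], [0.5, 0.5]]),
                 np.array([[0.2, 0.8], [0.6, 0.4]])),
}

def classify(obs):
    # Pick the user-state model under which the observed
    # non-verbal sequence is most likely.
    return max(hmms, key=lambda k: forward_likelihood(*hmms[k], obs))
```

In practice each HMM would be trained on labeled pre-utterance behavior, and classification would run continuously while the system waits for the user's input.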
Pages: 343 to 346
Copyright: Copyright (c) IARIA, 2014
Publication date: March 23, 2014
Published in: conference
ISSN: 2308-4138
ISBN: 978-1-61208-325-4
Location: Barcelona, Spain
Dates: from March 23, 2014 to March 27, 2014