
A plan-based model of misunderstandings in cooperative dialogue

This paper is to appear in the International Journal of Human-Computer Studies (Special Issue on Detecting, Repairing, and Preventing Human-Machine Miscommunication).

LILIANA ARDISSONO, GUIDO BOELLA AND ROSSANA DAMIANO
Dipartimento di Informatica, Università di Torino, Corso Svizzera n.185, 10149 Torino, Italy. email: {liliana, guido}@di.unito.it

We describe a plan-based agent architecture that models misunderstandings in cooperative natural-language agent communication. It exploits a notion of dialogue coherence based on the idea that the explicit and implicit goals identified by interpreting a conversational turn can be related to the previous explicit or implicit goals of the interactants. A misunderstanding is hypothesized when the coherence of the interaction is lost, i.e. when an unrelated utterance occurs. The analysis and treatment of a misunderstanding are modeled as rational behavior caused by the acquisition of a supplementary goal when an incoherent turn occurs: the agent detecting the incoherence commits to restoring the intersubjectivity of the dialogue, so he either restructures his own contextual interpretation or induces the partner to restructure his, according to who seems to have made the mistake. This commitment leads him to produce a repair turn, which initiates a subdialogue aimed at restoring the common interpretation ground. Since we model speech acts uniformly with respect to the other actions (the domain-level actions), our model is general and covers misunderstandings occurring at the linguistic level as well as in the underlying domain activities of the interactants.
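To make the control flow summarized above concrete, the following is a minimal sketch, not taken from the paper, of the detection-and-repair loop it describes; all names (DialogueAgent, interpret, relates, own_mistake_likely, repair_turn) are hypothetical placeholders standing in for the plan-based machinery.

    # Hypothetical sketch of the misunderstanding detection/repair loop.
    class DialogueAgent:
        def __init__(self):
            self.context_goals = []              # explicit/implicit goals identified so far

        def process_turn(self, turn):
            goals = self.interpret(turn)         # goals conveyed by the new turn
            if not self.context_goals or any(
                self.relates(g, c) for g in goals for c in self.context_goals
            ):
                self.context_goals.extend(goals) # coherent: the common ground grows
                return None
            # Coherence lost: adopt the goal of restoring intersubjectivity.
            if self.own_mistake_likely(turn):
                self.restructure_interpretation(turn)   # revise own contextual interpretation
                return None
            return self.repair_turn(turn)        # otherwise open a repair subdialogue

        # Placeholders for the plan-based components described in the paper.
        def interpret(self, turn):
            return getattr(turn, "goals", [])

        def relates(self, goal, context_goal):
            return goal == context_goal          # stand-in for the coherence relation

        def own_mistake_likely(self, turn):
            return False

        def restructure_interpretation(self, turn):
            pass

        def repair_turn(self, turn):
            return f"Repair: could you clarify '{turn}'?"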




