Next: References Up: A plan-based model of Previous: Related work

Conclusions

 

In this paper, we have described an agent model that provides a computational treatment of misunderstandings in dialogue. We have focused on third and fourth position repairs, and we have described how they can be detected when a turn occurs that is incoherent with the dialogue context. In contrast, we have set aside first and second position repairs, because they are rather different phenomena. Third and fourth position repairs are caused by the recognition that one of the interactants has committed to a wrong interpretation of the previous part of the dialogue; they concern interpretation mistakes that occurred before the repair turn. First position repairs, instead, are performed by a speaker in order to prevent the hearer from having interpretation problems. Finally, second position repairs are typically performed by the receiver of a turn when he experiences problems in interpreting his partner's turn, so that he has not yet committed to a specific interpretation (they are usually related to misspellings and similar phenomena).

Our dialogue model consists of a plan-based representation of an agent's knowledge about how to reach his own goals. This knowledge covers the normal planning behavior of an agent, as well as his capability to recover from problems in the interaction with other agents. Misunderstandings are treated as one of the specific problems that can occur when interacting with other agents, and they are dealt with by adopting the goal of maintaining the safety of intersubjectivity in dialogue. In summary, the main ideas underlying our work are:

1) Misunderstandings in dialogue are explained in the same way as problems occurring in non-linguistic interaction. In fact, in our agent model, linguistic actions are means to achieve one's goals exactly like the other (domain-level) actions. Maintaining such a general interaction context allows us to model misunderstandings without imposing any limitation on the object of the misunderstanding or on the distance between the misunderstood turns and the repair turns performed by the agents.

2) Different levels of interpretation have been identified as the possible objects of a misunderstanding, from the utterance level (which concerns the syntactic and semantic interpretation) to the pragmatic level, which covers misunderstandings about the illocutionary force of speech acts and about the domain-level activity underlying the linguistic behavior of agents.

3) The recognition of misunderstandings follows from an agent's attempt to restore intersubjectivity after he has failed to interpret the last turn coherently. The identification of the mistake and the reinterpretation of the dialogue are fully embedded in his goal-directed behavior, in that the agents participating in an interaction share the common goal of maintaining their intersubjectivity and act to restore it when it is threatened. Restoring intersubjectivity is a goal shared among the interactants because intersubjectivity is the basis for any social action [Schegloff1992].

4) There is a clear distinction between repairs to misunderstandings (i.e., requests for repair) and notifications that one of the interactants has just adjusted his dialogue context to recover from a misunderstanding.

5) Repairs can themselves be the object of a misunderstanding; moreover, an agent can believe that he has been misunderstood by another agent, thereby generating a repair, but he might be wrong (in which case the other agent could produce a further repair). Our model takes these phenomena into account, as can be seen in the example described in Section 4.
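The uniform treatment of linguistic and domain-level actions underlying point 1 can be illustrated with a minimal sketch. This is not the authors' Common Lisp implementation; all names, predicates and the operator structure are illustrative assumptions, showing only how a speech act and a physical action can share one plan-operator representation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PlanOperator:
    """A generic plan operator (hypothetical encoding, for illustration only)."""
    name: str
    preconditions: frozenset = field(default_factory=frozenset)
    effects: frozenset = field(default_factory=frozenset)

    def applicable(self, state: set) -> bool:
        # An operator is applicable when all its preconditions hold in the state.
        return self.preconditions <= state

    def apply(self, state: set) -> set:
        # Applying an operator adds its effects to the state.
        return state | self.effects

# A speech act and a domain action share the same operator structure:
request_open = PlanOperator(
    name="request(open-door)",
    preconditions=frozenset({"hearer-can-open-door"}),
    effects=frozenset({"hearer-knows-goal(open-door)"}),
)
open_door = PlanOperator(
    name="open-door",
    preconditions=frozenset({"at-door"}),
    effects=frozenset({"door-open"}),
)

state = {"hearer-can-open-door", "at-door"}
for op in (request_open, open_door):
    if op.applicable(state):
        state = op.apply(state)
print(sorted(state))
```

Because both kinds of action are planned, recognized and repaired with the same machinery, a misunderstanding about a speech act needs no special-purpose treatment.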

Our agent model is implemented in Common Lisp and runs on workstations. The interpretation of utterances is fully implemented, from the natural language (Italian) form to the construction of the dialogue context of the interaction; the generation of agent behavior is under development.
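The coherence-driven recognition of misunderstandings summarized in point 3 can also be sketched schematically. The sketch below is a toy reconstruction, not the implemented system: interpretations are reduced to sets of propositions, a contradiction stands in for incoherence, and the search simply backtracks over alternative readings of earlier turns.

```python
from itertools import product

def contradicts(props):
    # A proposition set is contradictory if it contains both p and ("not", p).
    return any(("not", p) in props for p in props if not isinstance(p, tuple))

def find_coherent_reading(turn_readings, new_turn_props):
    """Search the space of readings of earlier turns for a combination
    consistent with the new turn; return the first such combination."""
    for combo in product(*turn_readings):
        props = set().union(*combo) | new_turn_props
        if not contradicts(props):
            return combo
    return None

# Turn 1, "Take it to the bank", is ambiguous between two readings:
turn_readings = [
    [frozenset({"dest-financial-bank"}), frozenset({"dest-river-bank"})],
]
# The hearer committed to the first reading; the new turn reveals it was wrong,
# so the incoherence triggers a reinterpretation of the earlier turn.
new_turn = frozenset({("not", "dest-financial-bank"), "dest-river-bank"})

combo = find_coherent_reading(turn_readings, new_turn)
print(combo)
```

In the full model the "readings" span all the interpretation levels of point 2, and the repair behavior itself is driven by the shared goal of restoring intersubjectivity.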

Some aspects of our model remain to be fully developed and represent avenues for future work:

1) A deeper study of the structure of dialogue, as mentioned in Section 1.1, in order to analyze subdialogues and topic shifts. The cases where a topic shift is an acceptable interpretation hypothesis, even in a cooperative environment, can be determined by the satisfaction of the previous goals or by the urgency of newly introduced goals (e.g., consider the problem of managing interactions in highly dynamic environments, as studied in reactive planning research).

2) The development of linguistic strategies to perform a repair, in terms of sequences of speech acts [Schegloff1992]. An interesting related problem is finding the minimal information necessary to disambiguate the meaning of the trouble-source turn when a misunderstanding occurs.

The idea of finding discriminating information for disambiguation purposes recalls the strategies used in plan recognition to deal with ambiguous hypotheses about the observed agent's plans (e.g., the initiation of clarification dialogues to disambiguate the partner's contribution [van Beek & Cohen1991, Cohen et al.1994]).

3) The ability of speakers to talk about what is happening in the dialogue, by means of anaphora and of verbs and nouns referring to speech acts (such as ``to criticize'', ``to order'', etc.); [Goy & Lesmo1997] provide a first analysis of the lexical semantics of communication verbs.

4) The development of a method to analyze the ``weaknesses'' of one's own utterances (in terms of ambiguities and underspecification).

Empirical data show that humans identify the causes of a misunderstanding very easily, and that they repair them effectively by ``filling the gaps'' of their previous turns. We believe that the type of failure arising when an agent tries to relate the incoherent turn to the previous context could be exploited for an efficient search for the alternative interpretation of the misunderstood turn.

5) The ability to prevent misunderstandings by performing early repairs, like first-position repairs. While we have not worked on this problem, there are dialogue models which try to predict how the hearer, given his mental state, will interpret an utterance. For example, [Ndiaye & Jameson1996] use the concept of anticipation feedback loops (introduced in [Wahlster & Kobsa1989]) to choose the ``most promising'' utterance for conveying a piece of information to the user. Although this does not model the occurrence of utterances immediately followed by repairs (it is a way to plan what to say), the idea underlying anticipation feedback loops could be exploited to recognize first-position repairs in the interpretation of a turn.

6) The ability of a speaker to identify a misconception of the partner. Usually, misconceptions make it impossible to interpret a turn locally (e.g., consider the problems studied in [McCoy1986] and [McCoy1988]); however, if they are not promptly recognized, they can lead to misunderstandings. When an agent looks for an alternative interpretation of the previous dialogue, he should also take into account the possibility that a misconception of the partner led the partner to a different understanding of what has been said.

7) It must also be noted that our analysis of coherence does not take into account the role of cognitive load in dialogue interpretation. As is evident from the analyzed corpora, speakers often misinterpret complex utterances, or they forget what has been previously said [Jordan & Thomason1996]. However, this problem is currently outside the scope of our work.
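Among the directions above, the anticipation feedback loop of point 5 lends itself to a small illustration. The following is a hypothetical sketch under strong simplifying assumptions (lexical ambiguity as the only source of trouble; `hearer_lexicon`, `simulate_hearer` and the candidate utterances are invented names): before uttering, the speaker simulates the hearer's interpretation and prefers the candidate predicted to be least ambiguous.

```python
def simulate_hearer(utterance, hearer_lexicon):
    """Return the readings the (toy) hearer model assigns to the utterance."""
    return [reading
            for word in utterance.split()
            for reading in hearer_lexicon.get(word, [])]

def choose_utterance(candidates, hearer_lexicon):
    # Anticipation feedback: prefer the candidate with the fewest
    # predicted readings, i.e. the least risk of misunderstanding.
    return min(candidates, key=lambda u: len(simulate_hearer(u, hearer_lexicon)))

hearer_lexicon = {
    "bank": ["financial-bank", "river-bank"],  # ambiguous for this hearer
    "riverside": ["river-bank"],
}
candidates = ["meet me at the bank", "meet me at the riverside"]
print(choose_utterance(candidates, hearer_lexicon))
```

Run in the other direction, the same hearer model could help an interpreter recognize that an incoming turn contains a first-position repair, since the speaker's rephrasing targets exactly the readings the model predicts to be ambiguous.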

Acknowledgements

We are very grateful to Leonardo Lesmo, Carla Bazzanella and Morena Danieli for the fruitful discussions and good advice they provided. We also thank them and the anonymous reviewers for their careful comments on the first version of this paper, which helped us improve it. Finally, we thank Sandra Carberry for her comments on the agent modeling architecture which forms the conceptual framework of this work. This work was partially supported by MURST 60% and by the Italian National Research Council (CNR), project ``Pianificazione e riconoscimento di piani nella comunicazione''. In particular, the analysis of the corpora which guided us in the construction of our model of misunderstanding was carried out within this project by Carla Bazzanella and her students, and we warmly thank them for their collaboration.






Guido Boella Dottorando
Fri Aug 29 11:33:46 MET DST 1997