
RECOVERY FROM A MISUNDERSTANDING

 

After the agent has identified the trouble-source turn of the dialogue (i.e. the first misunderstood turn), he can react in two ways, depending on who is the speaker of that turn:

1) If he has been misunderstood, he can try to persuade his partner to restructure his interpretation by performing the ``Restructure'' action.[1] As described in section 3.1.1, the (object level) action ``Get-to-do'' is used to induce other agents to perform actions for one's sake; so, in this case, a request to restructure the context is performed (see T4 in Example 1, section 1.2).

[1] In some cases, agents recognize that their interlocutors have misunderstood them but, for politeness reasons, let the interaction go on without making any repair. In general, any planned action (in this case ``Restructure'') can be discarded if it conflicts with other goals of the agents: their behavior is influenced by several factors, such as the relationship between the interactants and how much the misunderstanding can affect the subsequent talk. We do not deal with these side behaviors here.

2) If, instead, he has misunderstood his partner, he has to execute the ``Restructure'' action himself. As a result of this execution, his model of the previous dialogue is changed and he can go on with the interaction, having reestablished the interpretation context; in this new context, both agents share the same interpretation. Note that, before continuing the dialogue, the agent still has something to do: he notifies the partner that he has succeeded in realigning the interpretation.
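The choice between these two cases can be sketched as follows; this is only an illustrative toy (all names, such as `Turn` and `choose_recovery`, are ours, not part of the model):

```python
# Illustrative sketch of the two recovery cases; not the paper's actual plan formalism.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # who uttered the trouble-source turn
    content: str

def choose_recovery(agent: str, trouble_source: Turn) -> str:
    """Select the recovery action once the trouble-source turn is known."""
    if trouble_source.speaker == agent:
        # Case 1: the agent was misunderstood -> induce the partner to repair.
        return "Get-to-do(partner, Restructure)"
    else:
        # Case 2: the agent himself misunderstood -> repair his own interpretation,
        # then notify the partner of the realignment.
        return "Restructure(self)"
```

For instance, if agent A uttered the trouble-source turn, `choose_recovery("A", ...)` yields the ``Get-to-do'' request of case 1; otherwise it yields case 2.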

In both cases, the recovery goal is shared by the speakers: when everything ends well, the agent informs the partner of the success of their aims; otherwise, if no repair is feasible, the agent warns his partner that the intersubjectivity is unrecoverable. Our model directly supports these (positive and negative) notifications to the partner.[2] The notification goals are managed by conditional steps of the ``Satisfy'' action (see the rightmost steps in Figure 1): ``Satisfy(A, A, Know(B, done(Restructure(...))))'' and ``Satisfy(A, A, Know(B, achievable(inter(A, ..., ctx) ∧ inter(B, ..., ctx') ∧ equal(ctx, ctx'))))''. In fact, the hearer is committed to the goal that the two dialogue interpretations meet, and this goal is naturally shared with the partner, in that speakers involved in a cooperative interaction want their intersubjectivity to be maintained.

[2] In principle, a notification is necessary only when the agent has produced some turn that could mislead his partner. It is a limitation of our model that it always prescribes an explicit notification.
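The conditional notification step can be rendered as a small sketch; the function name and the string forms of the messages are our own hypothetical shorthand for the two ``Satisfy'' steps, not notation from the model:

```python
def notify_after_repair(restructure_succeeded: bool) -> str:
    """Return the notification the agent owes his partner after a repair attempt.

    Mirrors the two conditional "Satisfy" steps: on success the agent informs
    the partner that the interpretations have been realigned; on failure he
    warns that the intersubjectivity is unrecoverable.
    """
    if restructure_succeeded:
        return "Inform(partner, done(Restructure))"
    return "Warn(partner, unrecoverable(intersubjectivity))"
```

Only one of the two conditional steps fires, depending on the outcome of the repair.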

Moreover, when, after a repair, the interactants have finally restored the intersubjectivity, they may find that some utterances expressed between the trouble-source turn and the repair turn are no longer relevant. The pending intentions created by those utterances should therefore be considered irrelevant too, and no reply to them is expected.
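The pruning of these pending intentions can be sketched as follows (the pair representation and function name are illustrative assumptions, not the model's data structures):

```python
def prune_pending_intentions(pending, trouble_source_idx, repair_idx):
    """Drop intentions raised between the trouble-source and the repair turn.

    Each pending intention is a (turn_index, description) pair; those created
    in the misunderstood stretch of dialogue no longer call for a reply.
    """
    return [(i, d) for (i, d) in pending
            if not (trouble_source_idx <= i < repair_idx)]

pending = [(1, "answer question"), (3, "confirm booking"), (5, "reply to repair")]
# The intention raised at turn 3 falls inside the misunderstood stretch (3..4)
# and is discarded; the others survive.
print(prune_pending_intentions(pending, 3, 5))
```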

A few more words are needed on the recognition of turns expressing a request for a repair: unlike the recognition of a misunderstanding, which is triggered by a failure in the interpretation of the last turn, this type of recognition takes place within the standard interpretation process. The interpreting agent B accepts this topic shift, since the dialogue could not otherwise go on and the partner is still collaborating with him in some way.
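The contrast between the two recognition routes can be sketched schematically; the turn encoding and topic test below are toy assumptions of ours, not part of the interpretation process described in the paper:

```python
def turn_topic(turn: str) -> str:
    # Toy topic extraction for the sketch: the first word of the turn.
    return turn.split()[0]

def interpret_turn(turn: str, expected_topic: str) -> str:
    """Standard interpretation step, with repair requests as an accepted shift.

    A request for a repair is recognized during normal interpretation and the
    topic shift is accepted, whereas a misunderstanding is only detected when
    interpretation of the last turn fails outright.
    """
    if turn.startswith("Restructure-request"):
        return "accept topic shift: perform requested repair"
    if turn_topic(turn) == expected_topic:
        return "continue dialogue"
    return "interpretation failure: start misunderstanding analysis"
```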






Guido Boella Dottorando
Fri Aug 29 11:33:46 MET DST 1997