Guido Boella, Rossana Damiano, Jelle Gerbrandy, Roberto Grenna, Serena Villata

During 2002, our research on the theory of autonomous agents focused on the reactivity of agents to changes in their environment and on the formalization of norms in multi-agent systems.

Using a hierarchical planning paradigm, we developed a replanning strategy for updating the current intentions of an agent when a new situation occurs. The replanning process makes the plan more partial and then refines it again, without restarting the planning process from first principles. After implementing this algorithm, we started an evaluation phase to compare the performance of the replanning strategy with that of a planner that must build an entirely new plan; the evaluation is yielding promising results.
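The partialize-and-refine idea can be sketched as follows. This is only an illustrative reading of the strategy, not the implemented algorithm: the `Step` hierarchy and the `refinable`/`refine` callbacks are assumptions introduced for the example.

```python
# Sketch: instead of replanning from scratch, retract the most specific
# refinements of a hierarchical plan until the remaining abstract steps
# can be refined in the new situation, then refine them again.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass(eq=False)
class Step:
    name: str
    parent: Optional["Step"] = None   # the more abstract step this refines

def partialize(plan: List[Step]) -> List[Step]:
    """Make the plan 'more partial' by one level: replace each step
    with its more abstract parent (merging siblings)."""
    seen, abstract = set(), []
    for s in plan:
        p = s.parent or s
        if id(p) not in seen:
            seen.add(id(p))
            abstract.append(p)
    return abstract

def replan(leaves: List[Step],
           refinable: Callable[[Step], bool],
           refine: Callable[[Step], List[Step]]) -> List[Step]:
    """Climb the hierarchy until every step admits a refinement in the
    new situation, then refine top-down again."""
    plan = leaves
    while not all(refinable(s) for s in plan):
        more_abstract = partialize(plan)
        if more_abstract == plan:     # already at the roots: give up
            raise RuntimeError("no refinable abstraction found")
        plan = more_abstract
    return [child for s in plan for child in refine(s)]
```

For instance, if a concrete step "walk" becomes impossible, the planner retracts it to the abstract step "go" and refines that step differently, keeping the rest of the plan intact.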

As for the formalization of norms for multi-agent systems, we proposed a definition of obligation based on sanctions: an agent who is subject to an obligation, in deciding what to do, has to take into account that if he does not fulfil the obligation he may be sanctioned by a normative agent. We use this definition both in a framework based on a decision- and game-theoretic planner and in a logical framework; the latter work is carried out in cooperation with the Computer Science Department of the Vrije Universiteit Amsterdam.
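In the decision-theoretic reading, the bearer of the obligation weighs the cost of compliance against the expected sanction. The following one-liner is a hedged illustration of that trade-off; the function name, the numeric utilities, and the detection probability are assumptions for the example, not the paper's formalization.

```python
def fulfils_obligation(cost_of_compliance: float,
                       sanction: float,
                       p_detection: float) -> bool:
    """Illustrative expected-utility test: a rational bearer complies
    only when compliance is cheaper than the expected sanction that
    the normative agent may apply on detecting a violation."""
    expected_sanction = p_detection * sanction
    return cost_of_compliance < expected_sanction
```

A sanction of 10 detected half the time deters an agent whose compliance costs 2, but not one whose compliance costs 6.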


Current Research



When the user utters a new turn, the system looks for a coherence link with the previous dialog context. Coherence links are based on the participants' underlying intentions, at both the linguistic and the domain level [Ardissono, Boella and Lesmo 2000]. The goals underlying a turn can be related to the interlocutor's pending goals (goal adherence and goal adoption), or to a plan that the speaker is carrying out.



By exploiting the above-mentioned notion of coherence, we built a model of misunderstandings [Ardissono, Boella and Damiano 1997; Ardissono, Boella and Damiano 1998]. When a participant detects a loss of coherence in the interaction, a misunderstanding is hypothesized and the goal to repair it is set. In order to restore the common ground and realign the diverging interpretations, the repairing agent looks for an alternative, coherent interpretation of the past turns, up to the misunderstood one. He then repairs either his own interpretation (self-repair) or the other's (other-repair), depending on who is the misunderstanding agent.
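The search for an alternative interpretation of past turns can be sketched as a backward search over candidate readings. This is a toy approximation under stated assumptions: turns, readings, and the coherence test are plain Python values, whereas the model works on intention structures.

```python
# Sketch: when the dialog becomes incoherent, walk back through earlier
# turns, substitute an alternative reading of one turn at a time, and
# keep the first reinterpretation that restores coherence.
from typing import Callable, List, Optional

def find_repair(interpretations: List[List[str]],
                chosen: List[int],
                coherent: Callable[[List[str]], bool]) -> Optional[List[str]]:
    """interpretations[i] lists candidate readings of turn i;
    chosen[i] indexes the reading currently adopted."""
    for i in reversed(range(len(chosen))):          # most recent turn first
        for alt in range(len(interpretations[i])):
            if alt == chosen[i]:
                continue                            # already tried
            candidate = [interpretations[t][chosen[t]] for t in range(i)]
            candidate.append(interpretations[i][alt])
            candidate += [interpretations[t][chosen[t]]
                          for t in range(i + 1, len(chosen))]
            if coherent(candidate):
                return candidate                    # turn i was misunderstood
    return None                                     # no repair found
```

Whether the repair is then realized as self-repair or other-repair depends on whose interpretation had to change, as described above.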



We are currently developing a utility-based approach to cooperation among agents [Boella, Damiano and Lesmo 1999, Boella, Damiano and Lesmo 1999b, Boella 2000]. Agents, we assume, are rational, and they choose among alternative courses of action by exploiting a multi-attribute utility function that expresses the extent to which a certain course of action contributes to the achievement of the agent's goals. Most interactions among agents, however, involve some form of cooperation: when a group of agents is cooperating on a shared plan, a combination of individual utility and group utility should be used. Each socially responsible agent, when evaluating alternative courses of action, also considers the consequences on the others' choices, in the light of this hybrid measure of group and individual utility. The effort to maximize it leads to some interesting behaviors, including goal adoption, helpfulness, and appropriate generation of communicative acts: all these phenomena contribute to improving coordination and reducing conflicts among the individuals' actions.
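A minimal sketch of the hybrid measure, assuming a simple linear combination: the weight, the action names, and the numeric utilities are all illustrative assumptions, not the model's actual multi-attribute function.

```python
def hybrid_utility(individual: float, group: float,
                   social_weight: float = 0.5) -> float:
    """Blend individual and group utility; social_weight = 0 gives a
    purely selfish agent, 1 a purely group-oriented one."""
    return (1 - social_weight) * individual + social_weight * group

def choose(actions: dict) -> str:
    """actions maps an action name to (individual_utility, group_utility);
    a socially responsible agent picks the best hybrid score."""
    return max(actions, key=lambda a: hybrid_utility(*actions[a]))
```

With equal weights, an agent prefers a helpful action of moderate personal benefit over a selfish one that hurts the group, which is the kind of behavior (goal adoption, helpfulness) described above.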



A dialog agent for interpreting Natural Language [Ardissono, Boella and Lesmo 2000], whose knowledge is encoded in plan libraries, can be extended to create a system that understands narratives containing descriptions of domain and linguistic actions [Boella, Damiano and Lesmo 1999c, Boella, Damiano and Lesmo 1999d]. The system incrementally reconstructs coherence links among the described actions by attributing intentions to the described agents.
Moreover, the intention attribution activity can be extended to the narrator's linguistic planning, by identifying the communicative intentions underlying the structure of the narrative. At the same time, the dialog agent architecture lends itself to building a system that produces a description of its own activity.


Previous Research

The research in the plan recognition and pragmatics area is concerned with the study and definition of techniques for modeling cooperative interaction among agents [Ardissono, Boella and Sestero 1996]. We are developing a prototype consultation system for a restricted domain (the University domain) [Ardissono, Lesmo, Lombardo and Sestero 1993].

The consultation system has a plan-based representation of the knowledge about actions; it uses the knowledge about how people act when they try to obtain their goals in order to interpret the behavior of an observed agent, and to identify the intentions underlying his communication with other agents.

Our system is based on the GULL semantic interpreter for the Italian language and accepts Italian NL sentences as input. The system reasons about the input sentences to identify the plans of the speaker; this identification is important in order to select the contents of a cooperative response to the user's questions. The input sentences undergo a sequence of interpretation steps: syntactic, semantic and contextual interpretation; speech-act interpretation and local domain-level analysis (where the domain-level actions addressed by the speech acts are identified); and integration of the local interpretation of the input into the previous pragmatic context. We are currently working on the answer generation phase, deciding which contents should be included in the system's answer in order to make it as cooperative as possible.
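The sequence of interpretation steps can be sketched as a simple pipeline. The stage functions below are placeholders named after the steps in the text; the real stages operate on parse trees and plan structures, not on strings.

```python
# Sketch: thread the input turn through the interpretation stages in
# order (syntactic/semantic, speech-act, domain-level, contextual).
from typing import Any, Callable, List

def interpret(sentence: str, stages: List[Callable[[Any], Any]]) -> Any:
    """Apply each interpretation stage to the output of the previous one."""
    result: Any = sentence
    for stage in stages:
        result = stage(result)
    return result
```

For example, with two trivial placeholder stages (normalization and tokenization standing in for the real syntactic analysis), `interpret("Quando apre la segreteria", [str.lower, str.split])` yields the token list of the lower-cased sentence.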

The system processes the input by exploiting different knowledge sources; among them is the User Model.

The system maintains a User Model which stores the description of a specific user and is updated during the analysis of his input sentences [Ardissono, Lesmo and Sestero 1994], [Ardissono and Sestero 1996]. The contents of the User Model are represented by means of semantic nets, encoding assumptions about the user's beliefs, goals, knowledge, properties, preferences and intentions. Stereotypical information about users is exploited to predict typical features of the user, on the basis of the class of agents to which he belongs. The contents of the User Model are exploited both to reduce the number of hypotheses on the user's intentions (when interpretation ambiguities arise) and to decide which contents should be added to the answers of the system in order to make them suited to the background knowledge of the user [Ardissono and Cohen 1995], [Ardissono, Boella, Lesmo, Rizzo and Sestero 1996].
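The two uses of the User Model can be illustrated with a small sketch. The stereotype names, the attribute `knows_enrolment_procedure`, and the dictionary representation are assumptions made for the example; the actual model uses semantic nets.

```python
# Sketch: stereotype defaults fill in the User Model, direct observations
# override them, and the model then filters intention hypotheses.
STEREOTYPES = {
    "student":   {"knows_enrolment_procedure": False},
    "professor": {"knows_enrolment_procedure": True},
}

def build_user_model(stereotype: str, observed: dict) -> dict:
    model = dict(STEREOTYPES.get(stereotype, {}))
    model.update(observed)        # observations override stereotype defaults
    return model

def prune_hypotheses(hypotheses: list, model: dict) -> list:
    """Keep only intention hypotheses consistent with the user's
    assumed beliefs and knowledge."""
    return [h for h in hypotheses
            if all(model.get(k) == v for k, v in h["requires"].items())]
```

A user classified as a student is assumed not to know the enrolment procedure, so hypotheses presupposing that knowledge are discarded when ambiguities arise.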

Bibliography on dialog and agents

Last updated 05.2009