
Download free ebooks at bookboon.com 44

The term proto-agent (North and Macal, 2007) is often used in agent modelling and simulation to cover ‘lower-level’ agents, such as turtle agents in NetLogo, and to distinguish them from agents that adhere to stronger definitions such as Nwana’s. North and Macal define a proto-agent as an entity used in modelling and simulation that maintains a set of properties and behaviours but need not exhibit learning behaviour. If proto-agents gain learning behaviour, they become agents.

For the purposes of these books, rather than making an arbitrary distinction between a proto-agent and an agent, we consider all the examples above to have some degree of agent-hood as defined in Section 2.1. We will therefore use the term agent throughout rather than proto-agent, since in reality no agent-oriented system currently exists that has achieved the full set of properties in Nwana’s definition, or the set of desirable properties described in more detail in the next section.
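North and Macal’s distinction can be made concrete in code. The sketch below is purely illustrative – the class names, the movement behaviour and the learning rule are our own inventions, not taken from any agent framework – but it shows a proto-agent as an entity with properties and behaviours, which becomes an agent in their sense once it gains a learning rule:

```python
# Hypothetical sketch of North and Macal's proto-agent/agent distinction.
# All names and behaviours here are illustrative, from no particular framework.

class ProtoAgent:
    """An entity that maintains properties and behaviours, but no learning."""
    def __init__(self, position):
        self.position = position          # a property

    def step(self):                       # a fixed behaviour
        self.position += 1                # e.g. move forward, turtle-style

class LearningAgent(ProtoAgent):
    """A proto-agent that gains learning behaviour becomes an agent."""
    def __init__(self, position):
        super().__init__(position)
        self.step_size = 1                # a parameter the agent can adapt

    def step(self):
        self.position += self.step_size

    def learn(self, reward):
        # Adjust behaviour from experience: take bigger steps if rewarded.
        if reward > 0:
            self.step_size += 1
        elif self.step_size > 1:
            self.step_size -= 1

proto = ProtoAgent(0)
proto.step()                              # behaviour never changes

agent = LearningAgent(0)
agent.learn(reward=1)                     # experience alters future behaviour
agent.step()
```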

2.4 Desirable Properties of Agents

The concept of an agent can be defined by listing the desirable properties that we wish agents to exhibit (see Tables 2.4 to 2.6). Russell and Norvig (2004) identified the first four properties in Table 2.4 as key attributes of an agent from the AI perspective: autonomy (acting on one’s own behalf without intervention); reactivity (reacting to stimuli); proactivity (acting in anticipation of future events); and social ability (being able to communicate in some manner with other agents). Autonomy, in particular, is a key attribute – indeed, the term ‘autonomous agents’ is often used in the literature as a synonym for agent-oriented systems to emphasize this point. The six properties in Table 2.4 are often designated as belonging to a weak notion of an agent; Wooldridge and Jennings (1995) add the ability to set goals and temporal continuity as two further key attributes. The properties in Table 2.5 are associated with a strong notion of an agent, as they are properties usually attributed to humans (Wooldridge and Jennings, 1995; Etzioni and Weld, 1995). Taskin et al. (2006) list three further properties, shown in Table 2.6, that are combinations of the basic properties: coordination, cooperative ability and planning ability.

Property Description

Autonomy The agent exercises control over its own actions; it runs asynchronously.

Reactivity The agent responds in a timely fashion to changes in the environment and decides for itself when to act.

Proactivity The agent anticipates future events and responds to them in the best possible way.

Social ability (Ability to communicate) The agent has the ability to communicate in a complex manner with other agents, including people, in order to obtain information or elicit help in achieving its goals.

Ability to set goals The agent has a purpose.

Temporal continuity The agent is a continually running process.

Table 2.4 Properties associated with the weak notion of an agent. Based on Russell and Norvig (2004) and Wooldridge and Jennings (1995).
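As a rough illustration only – the thermostat-like scenario, the environment dictionary and all names below are invented for this sketch, not part of any standard agent API – the six weak-notion properties map naturally onto a simple control loop: the agent runs continually (temporal continuity), decides for itself when to act (autonomy), responds to sensed changes (reactivity), pursues a goal it holds (ability to set goals, proactivity), and handles messages from other agents (social ability):

```python
# Illustrative sketch only: maps the weak-notion properties of Table 2.4
# onto a minimal agent loop. The heating scenario is invented for the example.

class WeakAgent:
    def __init__(self, goal_temperature):
        self.goal = goal_temperature          # ability to set goals
        self.inbox = []                       # social ability: message queue

    def perceive(self, environment):
        return environment["temperature"]

    def act(self, environment):
        temp = self.perceive(environment)
        if temp < self.goal:                  # reactivity: respond to the environment
            environment["temperature"] += 1   # proactivity: act towards the goal
        for msg in self.inbox:                # social ability: answer other agents
            if msg == "report":
                print(f"temperature is {temp}")
        self.inbox.clear()

def run(agent, environment, steps):
    # Temporal continuity: the agent is a continually running process
    # (bounded here so the example terminates). Autonomy: nothing outside
    # the loop tells the agent what to do at each step.
    for _ in range(steps):
        agent.act(environment)

env = {"temperature": 18}
agent = WeakAgent(goal_temperature=21)
run(agent, env, steps=5)
```

Once the goal temperature is reached, further steps leave the environment unchanged: the agent keeps running but decides for itself not to act.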


Property Description

Mobility The agent is able to transport itself around its environment.

Adaptivity The agent has the ability to learn; it is able to change its behaviour on the basis of its previous experience.

Benevolence The agent performs its actions for the benefit of others.

Rationality The agent makes rational, informed decisions.

Collaborative ability The agent collaborates with other agents or humans to perform its tasks.

Flexibility The agent is able to respond dynamically to the external state of the environment by choosing its own actions.

Personality The agent has a well-defined, believable personality and emotional state.

Cognitive ability The agent is able to reason explicitly about its own intentions or about the state and plans of other agents.

Versatility The agent is able to pursue multiple goals at the same time.

Veracity The agent will not knowingly communicate false information.

Persistency The agent will continue steadfastly in pursuit of any plan.

Table 2.5 Properties associated with the strong notion of an agent. Based on Wooldridge and Jennings (1995) and Etzioni and Weld (1995).

Property Description

Coordination The agent has the ability to manage resources when they need to be distributed or synchronised.

Cooperation The agent makes use of interaction protocols beyond simple dialogues – for example, negotiating to find a common position, resolving conflicts or distributing tasks.

Ability to plan The agent has the ability to plan proactively and to coordinate its reactive behaviour within the dynamic environment formed by the other acting agents.

Table 2.6 Further properties associated with an agent. Based on Taskin et al. (2006).

These definitions are interesting from a philosophical point of view, but their meaning is often vague and imprecise. For example, if one were to attempt to classify existing agent-based systems using these labels, one would find the task fraught with difficulties and inconsistencies.

A simple exercise in applying these properties to classify examples of agent-oriented systems will demonstrate some of the shortcomings of such a classification system. For example, Googlebot, the Web crawler used by Google to construct its index of the Web, has the properties of autonomy, reactivity, temporal continuity, mobility and benevolence, but whether it exhibits the other properties is unclear – for example, it does not have the rationality property (the informed decisions it makes are not its own). It therefore exhibits both weak and strong properties, but could be construed to be neither. A chatbot, on the other hand, exhibits all of these properties in varying strengths.

Perhaps the strangest of the properties is benevolence. It is not clear why this should be a necessary property of an agent-oriented system: computer viruses are clearly not benevolent, and interaction between multiple competing agents may also not be benevolent (as in the wolf-sheep predation model that comes with the NetLogo Models Library, described in Chapter 4), yet it can lead to stability in the overall system.
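The classification exercise can itself be expressed as a small program. In the sketch below the weak and strong property sets follow Tables 2.4 and 2.5, and the property assignments for Googlebot follow the discussion above; the decision rule in `classify` is our own simplification, and its point is only that set membership gives no clean verdict:

```python
# Sketch of the classification exercise. Property sets follow Tables 2.4
# and 2.5; Googlebot's properties follow the discussion in the text.

WEAK = {"autonomy", "reactivity", "proactivity", "social ability",
        "ability to set goals", "temporal continuity"}
STRONG = {"mobility", "adaptivity", "benevolence", "rationality",
          "collaborative ability", "flexibility", "personality",
          "cognitive ability", "versatility", "veracity", "persistency"}

googlebot = {"autonomy", "reactivity", "temporal continuity",
             "mobility", "benevolence"}

def classify(properties):
    if WEAK <= properties:                 # exhibits every weak property
        return "strong agent" if STRONG <= properties else "weak agent"
    if properties & STRONG:                # some strong, but not all weak
        return "mixed: neither clearly weak nor strong"
    return "not an agent under these definitions"

print(classify(googlebot))                 # mixed: neither clearly weak nor strong
```

Googlebot lands in the awkward middle case: it has some strong properties without satisfying the full weak set, which is exactly the inconsistency noted above.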


Also, underlying many of these properties is the implicit assumption that the agent has some degree of consciousness – for example, that it consciously makes rational decisions, that it consciously sets goals and makes plans to achieve them, that it does not consciously communicate false information, and so on. It may therefore not be possible to build a computational agent with these properties without first having the capabilities that a human agent with full consciousness has.

The other failing is that these are qualitative attributes rather than quantitative ones. An engineer would prefer attributes that were defined more precisely – for example, what does it mean, exactly, for an agent to be rational? However, the classification does have merit in that it highlights some of the attributes we may wish to design into our systems. For example, we can use the first three properties in Table 2.4 – autonomy, reactivity and proactivity – as a starting point to suggest some minimal design principles that a system must adhere to before it can be deemed an agent-oriented system:

At a minimum, an agent-oriented system should adhere to the following agent design objectives: it is autonomous, it is reactive, and it is proactive:

Design Principle 2.1: An agent-oriented system should be autonomous.

Design Principle 2.2: An agent-oriented system should be reactive.

Design Principle 2.3: An agent-oriented system should be proactive.
