ent agent types and the other party can
be any one of these types.
Human-Agent Negotiations
The issue of automated negotiation is too broad to cover in a short review paper. We have therefore decided to concentrate on adversarial bilateral
bargaining in which the automated
agent is matched with people. The
challenges in this area could motivate
readers to pursue this field (note that
this sets the focus and leaves most auction settings outside the scope of this
article, even though automated agents
that bid in auctions competing with
humans have been proposed and evaluated in the literature; for example,
Grossklags and Schmidt11).
Automated Negotiator Agents. The
problem of developing an automated
agent for negotiations is not new for
researchers in the fields of multiagent
systems and game theory (for example,
Kraus20 and Muthoo26). However, designing an automated agent that can
successfully negotiate with a human
counterpart is quite different from
negotiating with another automated
agent. Although an automated agent
that played in the Diplomacy game
with other human players was introduced by Kraus and Lehmann22 some
20 years ago, the difficulties of designing proficient automated negotiators
have not been resolved.
In essence, most research makes assumptions that do not necessarily hold in genuine negotiations with humans, such as complete information or the rationality of the opponent negotiator. That is, both parties are assumed to behave rationally (for example, the agents are modeled as expected-utility-maximizing agents that cannot deviate from their prescribed behavior).
Yet, when dealing with human counterparts, one must take into consideration the fact that humans do not
necessarily maximize expected utility or behave rationally. In particular,
results from social sciences suggest
that people do not follow equilibrium
strategies.6, 25 Moreover, when playing
with humans, the theoretical equilibrium strategy is not necessarily the
optimal strategy.38 In this respect,
equilibrium-based automated agents
that play with people must incorporate heuristics to allow for “unknown”
deviations in the behavior of the other
party. Moreover, when people are the ones who design agents, they do not always design them to follow equilibrium strategies.12 Nonetheless, some assumptions are still made: while the other party will not necessarily maximize its expected utility, if given two offers, it will prefer the one with the higher utility value.
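This relaxed assumption can be sketched in code. The model below is purely illustrative (none of the functions, weights, or thresholds come from the cited work): the opponent does not globally maximize expected utility — here it merely satisfices against an aspiration level — yet between any two concrete offers it still prefers the one with the higher utility.

```python
# Hypothetical opponent model illustrating the relaxed rationality
# assumption: no global expected-utility maximization (the party
# satisfices), but offers are still ordered by utility.
# All names and numbers are illustrative, not from the cited agents.

def utility(offer):
    """Toy additive utility over two issues: price share and delivery speed."""
    price_share, fast_delivery = offer
    return 0.7 * price_share + 0.3 * fast_delivery

def accepts(offer, aspiration=0.5):
    """Satisficing: accept anything meeting the aspiration level,
    even if a better offer might arrive later (not utility-maximizing)."""
    return utility(offer) >= aspiration

def prefer(offer_a, offer_b):
    """Monotonicity assumption: of two offers, pick the higher-utility one."""
    return offer_a if utility(offer_a) >= utility(offer_b) else offer_b

generous = (0.9, 0.8)   # most of the price share, fast delivery
modest   = (0.5, 0.6)

assert accepts(modest)                       # a "good enough" deal is taken
assert prefer(generous, modest) == generous  # but offers are still ranked by utility
```

An agent designed under this weaker assumption can still reason about which of its candidate offers the other party would rather have, without predicting the opponent's full decision process.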
Lastly, it has been shown that whether
the opponent is oblivious or has full
knowledge that its counterpart is a
computer agent can change the overall result. For example, Grossklags and
Schmidt11 showed that efficient market prices were achieved when human
subjects knew that computer agents
existed in a double auction market environment. Sanfey34 matched humans
with other humans and with computer agents in the Ultimatum Game and
showed that people rejected unfair offers made by humans at significantly higher rates than those made by a computer agent.
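The Ultimatum Game referenced above is simple to state: a proposer offers a split of a fixed sum, and the responder either accepts (each side gets its share) or rejects (both get nothing). The sketch below expresses the behavioral finding as a hypothetical rejection threshold that is stricter toward human proposers; the thresholds and pot size are invented to mirror the direction of the result, not its actual numbers.

```python
# Minimal Ultimatum Game sketch. The subgame-perfect equilibrium has the
# responder accept any positive offer, but human responders routinely
# reject "unfair" splits. The thresholds below are hypothetical, chosen
# only to mirror the direction of the finding (higher rejection rates
# against human proposers), not its reported data.

POT = 10.0  # fixed sum to be divided

def responder_accepts(offer_to_responder, proposer_is_human):
    # Hypothetical fairness thresholds: stricter toward human proposers.
    threshold = 0.3 * POT if proposer_is_human else 0.15 * POT
    return offer_to_responder >= threshold

def payoffs(offer_to_responder, proposer_is_human):
    if responder_accepts(offer_to_responder, proposer_is_human):
        return POT - offer_to_responder, offer_to_responder
    return 0.0, 0.0  # rejection leaves both parties with nothing

# The same "unfair" offer of 2 out of 10 is rejected from a human
# but accepted from a computer agent under these illustrative thresholds.
assert payoffs(2.0, proposer_is_human=True) == (0.0, 0.0)
assert payoffs(2.0, proposer_is_human=False) == (8.0, 2.0)
```

The sketch makes the design implication concrete: an automated proposer facing humans cannot rely on the equilibrium prediction that any positive offer will be accepted.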
Automated Agents Negotiating with
People. Researchers have tried to take
some of these issues into consideration when designing agents that are
capable of proficiently negotiating
with people. For example, to address the bounded rationality of the opponent, several researchers have suggested new notions of equilibria (for example, the trembling-hand equilibrium described in Rasmusen30). Approximately 10 years ago, Chavez and Maes5 presented Kasbah, a seminal virtual marketplace in which negotiations are carried out by agents designed by humans. Here, the agent’s behavior
was fully controlled by human players. The main idea was to help users
in the negotiation process between
buyers and sellers by using automated
negotiators. Chavez and Maes’s main
innovation was not so much the sophisticated design of the automated
negotiators but rather the creation of a
multiagent negotiation environment.
Kraus21 describes an automated agent that negotiates proficiently with humans. Although this work also deals with negotiation with humans, its settings assume complete information.
Other researchers have suggested a
shift from quantitative decision theory
to qualitative decision theory.36 In using such a model it is not necessary to
assume that the opponent will follow
the equilibrium strategy or try to be a
utility maximizer. Another approach
was to develop heuristics for negotiations motivated by the behavior of
people in negotiations.22 However, the
fundamental question of whether it
is possible to build automated agents
for negotiations with humans in open
environments has not been fully addressed by these researchers.
Another direction being pursued is
the development of virtual humans to
train people in interpersonal skills (for
example, Kenny19). Achieving this goal
requires cognitive and emotional modeling, natural language processing,
speech recognition, knowledge representation, as well as the construction
and implementation of the appropriate
logic for the task at hand (for example,
negotiation), in order to make the virtual human a good trainer. An
example of the researchers’ prototype,
in which trainees conduct real-time negotiations with a virtual human doctor
and a village elder to move a clinic to
another part of the town out of harm’s way, is given in Figure 2.
Commercial companies and
schools have also displayed interest
in automated negotiation technologies. Many courses and seminars are
offered for the public and for institutions. These courses often guarantee
that upon completion you will “know
many strategies on which to base the
negotiation,” “Discover the negotiation secrets and techniques,” “Learn
common rival’s tactics and how to neutralize them” and “Be able to apply an
efficient negotiation strategy.”1, 27 Yet,
in many of these courses, the agents
are restricted to one domain and cannot be generalized. Some of the automated agents cannot be adapted to
the user and are restricted to a single
attribute negotiation with no time
constraints. Nonetheless, human factors and results of laboratory and field
experiments reviewed in esteemed
publications9, 29 provide guidelines for
the design of automated negotiators.
Yet, it is still a great challenge to incorporate these guidelines in the inherent design of an agent to allow it to
proficiently negotiate with people.