
2. THEORETICAL BACKGROUND

2.6. TRUST AND ADVICE UTILIZATION

When making important decisions, people often rely on advice from various sources in the expectation that the advice will reduce their uncertainty and improve their judgement. In doing so, decision-makers make themselves vulnerable to the competence and intentions of the advisor (Van Swol & Sniezek, 2005). Consequently, relying on advisors and utilizing advice is often associated with trust. Indeed, Doney, Cannon and Mullen (1998, p. 604) define trust as a 'willingness to rely on another party and to take action in circumstances where such action makes one vulnerable to the other party.' Moreover, previous research on advice utilization reports a strong relationship between trust and the degree to which advice is taken into account. For example, Sniezek and Van Swol (2001) find that trust increases the likelihood of taking advice, Jungermann and Fischer (2005) note that people largely rely on their trust in the advisor when deciding whether to accept or reject advice, and Prahl and Van Swol (2017) argue that advice utilization is a behavioral measure of trust.

2.6.1. The Judge-Advisor System

To examine how people utilize advice, researchers on judgement and decision-making have often employed the 'Judge-Advisor System' (hereafter, 'JAS'). A typical JAS study consists of a judge (the decision-maker) and an advisor. First, the judge provides an initial decision before being presented with a recommendation from the advisor. Next, the judge must decide whether or not to follow the advice. Importantly, judges are under no obligation to follow the advisor's recommendation and can therefore choose whether to take it into consideration (Bonaccio & Dalal, 2006). In some studies, the advice is dichotomous (accept or reject), while in others, the judge can adjust their initial decision towards the advisor's recommendation (e.g. in forecasting tasks) (Bonaccio & Dalal, 2006). Adjusting the final decision towards the advice is referred to as advice utilization, while advice discounting occurs when a judge chooses not to follow the advice and instead follows their own instincts (Bonaccio & Dalal, 2006). A sketch of this measurement logic is given below.
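To make the adjustment logic concrete, the following is a minimal sketch of how advice utilization is commonly quantified in estimation-based JAS studies, assuming the standard 'weight of advice' ratio discussed in this literature (e.g. Bonaccio & Dalal, 2006). The function name and the trial numbers are illustrative, not taken from any specific study cited here.

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Weight of Advice (WOA) = (final - initial) / (advice - initial).

    WOA = 0 means the judge ignored the advice entirely,
    WOA = 1 means the judge fully adopted the advice, and
    values below 0.5 are typically read as advice discounting.
    """
    if advice == initial:
        # When the advice matches the initial estimate, the shift is
        # undefined; such trials are usually excluded from analysis.
        raise ValueError("WOA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# Hypothetical forecasting trial: the judge first estimates 100,
# receives an advisor's recommendation of 140, and settles on 110.
woa = weight_of_advice(initial=100, advice=140, final=110)
print(f"WOA = {woa:.2f}")  # WOA = 0.25 -> the judge discounted the advice
```

On this measure, egocentric advice discounting (discussed next) shows up as average WOA values well below 0.5, i.e. judges staying closer to their own initial estimate than to the advice.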

Several findings are worth noting from the JAS literature (see Bonaccio & Dalal, 2006 for a complete review). Despite the fact that following advice generally helps judges make better decisions, multiple studies have found evidence of 'egocentric advice discounting', a phenomenon where people 'overweigh their own opinion relative to that of their advisor' (Bonaccio & Dalal, 2006, p. 129). Harvey and Fischer (1997) claim that egocentric advice discounting occurs because people are overconfident in their own abilities and anchored to their initial estimates, while Yaniv and Kleinberger (2000) note that people have full access to their own thoughts and reasoning but less information about the advisor's. Another finding is that advice utilization increases with the advisor's perceived expertise (Bonaccio & Dalal, 2006; Jungermann & Fischer, 2005). Relatedly, people are more likely to follow what they perceive to be good advice than what they see as poor advice (Yaniv & Kleinberger, 2000). Moreover, Gino and Moore (2006) find that advice utilization increases with the complexity of the task (see also Schrah, Dalal & Sniezek, 2006), while Bonaccio and Dalal (2006) note that it decreases if the judge questions the intentions of the advisor. Hence, it is argued that trust in the advisor is an important determinant of advice utilization (Sniezek & Van Swol, 2001; Van Swol & Sniezek, 2005; Prahl & Van Swol, 2017).

Finally, Heath and Gonzalez (1995) find that receiving advice increases judges' confidence in their final decisions, and Van Swol (2009) reports that judges' confidence is strongly correlated with how much they trust the advice.

The majority of these studies have investigated how people react to advice from human sources. However, as algorithms, computers and expert systems have evolved into alternatives to human advisors, researchers have started to examine the degree to which people rely on advice from non-human sources.

2.6.2. Relying on advice from non-human sources

Researchers studying advice utilization have also investigated how people rely on advice that emanates from non-human advisors. Several domains have been investigated, ranging from medical recommendations (e.g. Promberger & Baron, 2006) and financial recommendations (e.g. Önkal et al., 2009) to more subjective domains like humor and attractiveness (e.g. Yeomans, Shah, Mullainathan & Kleinberg, 2019; Logg et al., 2019). Results are ambiguous and seem to depend on the task under investigation (Lee, 2018). In the medical domain, people seem to prefer advice from a medical professional over advice from a computer program, even though the computer program is more likely to provide better advice (Promberger & Baron, 2006).

Yeomans et al. (2019) also found algorithm depreciation when studying joke recommendations: people relied more on advice from friends than from algorithms. In contrast, Logg et al. (2019) found that people utilize advice more when it comes from an algorithm than when it comes from a person. They studied advice utilization across several domains, including estimates of people's weight, the popularity of songs and attractiveness. In all their experiments, Logg et al. (2019) found evidence of algorithm appreciation.

In the domain of financial forecasting, Önkal et al. (2009) studied how subjects utilized advice from human experts versus statistical methods when presented with a financial forecasting task. The findings indicate that people rely more on the advice given by human experts. For forecasting tasks in other domains, Dietvorst et al. (2015) investigated how people utilized advice after seeing the advisor perform. The findings suggest that after seeing an algorithmic advisor err, judges punish the algorithm harder than they would punish a human advisor. Consequently, people seem to tolerate mistakes from human advisors more than from algorithmic advisors. In fact, the results showed that even after observing an algorithmic advisor outperform a human advisor, people were still more willing to depend on the human advisor (Dietvorst et al., 2015).

One suggested explanation of the tendency to rely more on human advisors, despite the fact that non-human advisors like statistical methods, computer programs and algorithms are often more accurate than human experts (e.g. Meehl, 1954; Dawes, 1979), is that human advisors can be held accountable for their recommendations. Relying on a human's advice therefore shifts the responsibility for the decision, as human advisors can be blamed for inaccurate recommendations (Harvey & Fischer, 1997).
