Research Article

Austin J Bus Adm Manage. 2017; 1(2): 1006.

Anchoring and Adjustment Effects on Audit Judgments: Experimental Evidence from Switzerland

Henrizi PE*, Hunziker S and Himmelsbach D

Institute of Financial Services, Lucerne Business School, Switzerland

*Corresponding author: Philipp Henrizi, Institute of Financial Services, Lucerne Business School, Grafenauweg 10, 6302 Zug, Switzerland

Received: May 08, 2017; Accepted: May 19, 2017; Published: May 30, 2017

Abstract

Auditors must form opinions on financial statements while maintaining an attitude of professional skepticism, using their professional judgment to determine the type and amount of information to collect, the timing and manner of its collection, and the implications of the information collected. The anchoring and adjustment heuristic describes estimates that start from an initial value, which is then adjusted to yield the final answer. Such adjustments are typically insufficient, leaving the estimate biased toward the initial value, which is therefore called ‘the anchor’. According to the literature, heuristics and biases in auditing arise mostly in the course of analytical audit procedures and interviews with the client. This study reports the results of an experimental research design analyzing the audit judgment of 85 auditors in Switzerland. Four of the five experimental cases used in this paper are adapted from earlier studies in order to ensure comparability of findings. The results of the experiment provide evidence of the anchoring and adjustment heuristic in Swiss audit judgments. Further, we identify an influence of audit firm size (Big4 vs. Non-Big4), the auditors’ experience, and the auditors’ knowledge of behaviorism and the anchoring heuristic on the anchoring and adjustment effect on audit judgment. The purpose of this paper is to provide background on the heuristics and biases approach to decision making and to increase auditors’ awareness of the anchoring and adjustment effects that can adversely affect audit judgments.

Keywords: Audit judgment; Heuristics and biases; Professional skepticism; Decision making

Introduction

Behaviorism is nowadays a pervasive topic in many disciplines of science and practice. Over the last few decades, scientists and practitioners have applied behavioral models to fields such as economics, medicine, psychology, sociology, and politics. Research designs and methodologies from numerous studies in these fields were adapted to other areas of research such as auditing, giving rise to the topic of behavioral audit. In the 1970s, US psychologists such as Tversky and Nobel laureate Kahneman assessed human behavior and developed heuristic models that have dominated the judgment and decision-making literature ever since. They argued that humans make use of cognitive heuristics, which reduce the complexity of making probabilistic judgments [1]. Based on these models, several behavioral studies have been conducted pursuing behavioral theory and methodology. Studies on behavioral audit date back to the early 1980s, when research on heuristics and biases already seemed to have peaked. Nevertheless, in the 1980s and 1990s several studies investigated behavioral influences on audit judgments, which helped to improve decision making in auditing [2-11]. According to the studies of Presutti [10] and Cheng [12], auditors are subject to the same heuristics and cognitive pitfalls as other people, having limited cognitive capacity for dealing with probabilistic data in complex environments. It is thus important for auditors to understand behavioral aspects of auditing in order to make appropriate judgments. Professional skepticism is an essential component of every audit and the foundation for detecting fraud and maintaining an independent attitude [13]. While the concept of professional skepticism has been part of auditing standards for decades, there is increasing recognition of both the importance of professional skepticism and deficiencies in its application [14,15]. However, only a few notable studies on anchoring and adjustment effects on audit judgments have been published since.

Accordingly, the main purpose of this study is to illustrate the potentially detrimental effects on audit decision making of certain judgmental heuristics, or rules of thumb, which can lead to systematic judgmental biases. The aim is to increase auditors’ awareness of some of the subtle biases that can adversely affect audit decision making and, hopefully, to improve their ability to avoid such biases and maintain an attitude of professional skepticism in audit judgments. The current study adds to the existing literature in several ways. First, to the best of our knowledge, this is the first study analyzing the impact of auditors’ experience and of their knowledge of behaviorism and the anchoring heuristic on the anchoring and adjustment effect in audit judgment. Second, the analysis extends prior research by distinguishing between Big4 and Non-Big4 auditors. Both findings are useful to auditing educators, suggesting that knowledge of behaviorism and the anchoring heuristic, as well as more highly standardized audit procedures, may lead to less biased audit judgments. Third, this is the first paper to analyze anchoring and adjustment effects on audit judgments in Europe, which contributes to reducing the lack of knowledge and awareness of this issue in Europe.

The remainder of the paper is organized as follows. The next section provides some theoretical background and a literature review on the bias of anchoring effects in financial audit. The third section addresses the research methodology, including the presentation of the hypotheses and the approach to data collection. The fourth section presents the empirical results, while the final section provides a discussion of the results and draws conclusions.

Background and Literature Review

Heuristics and biases

According to the Oxford English Dictionary, the word ‘heuristic’ originates from the Greek heuriskein, which means ‘to find’. It describes heuristic as “proceeding to a solution by trial and error or by rules that are only loosely defined” (Oxford English Dictionary, online). In a scientific context, Simon [16] described heuristics as a consequence of bounded rationality arising from cognitive limitations. Simon argued that heuristics are simplified models of a complex world that enable people to make decisions more efficiently. Tversky and Kahneman [1] see heuristics as ‘mental shortcuts’ that allow people to form judgments from incomplete information. Heuristics are therefore not restricted to laymen; experts (e.g. partners in audit firms) are also prone to biases. Nevertheless, the authors note that heuristics (also known as rules of thumb) are quite useful but sometimes lead to systematic errors, which may occur when judgments are based on subjective assessments. Tversky and Kahneman provide an example concerning the estimation of distance: the more sharply an object is seen, the closer it appears to be. This rule of thumb has some validity, since nearer objects are usually seen more sharply. Under conditions of poor visibility, however, contours are blurred, and a subject relying on this heuristic will overestimate the distance of the object. The subject’s judgment thus suffers from systematic error.

The word ‘bias’ is defined as a “systematic distortion of a statistical result due to a factor not allowed for in its derivation” (Oxford English Dictionary, online). Gilovich and Griffin [17] define biases as deviations from rational thinking. These deviations stem from the assumptions underlying a model or rule, and the authors point out that most of the observed biases violate basic laws of statistics and probability. As a result, people may draw irrational or false conclusions from their model (i.e. heuristic). Returning to the example of distance estimation, the bias here would be, among others, the omission of the visibility condition from the model. In sum, decisions are often based on beliefs about the outcomes of uncertain events.

Kahneman and Tversky [1] define heuristics and biases as follows: “In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction. Instead, they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic errors” [1]. They identified three heuristics that people use when predicting values and assessing probabilities:

Representativeness: This heuristic applies to probability judgments in which people must assess the probability that an instance belongs to a class [1]. A later work by Kahneman demonstrated that in such judgments people often replace the probability calculation with an assessment of resemblance [18]. The idea behind this heuristic is that a difficult question, such as the computation of a probability, is answered by a substantially easier one. Tversky and Kahneman [1] provided the following example: “Steve is very shy and withdrawn, invariably helpful, but with little interest in people, or in the world of reality” [1]. They then asked US participants to assess the probability of various occupations being Steve’s: a) farmer, b) salesman, c) airline pilot, or d) librarian. Unsurprisingly, many participants chose ‘librarian’, as the description most closely resembles the stereotype of that occupation. From a statistical perspective, however, librarian may well be the least probable answer, since the job is less common in the United States than the other professions, as the worked example below illustrates.
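
To see why the base rate can dominate resemblance, consider Bayes’ rule applied to the Steve vignette. The figures below are purely illustrative assumptions, not data from Tversky and Kahneman: suppose the description fits 90% of librarians but only 5% of farmers, and that farmers outnumber librarians 20 to 1. In odds form,

\[
\frac{P(\text{librarian}\mid D)}{P(\text{farmer}\mid D)}
= \frac{P(D\mid \text{librarian})}{P(D\mid \text{farmer})}\cdot\frac{P(\text{librarian})}{P(\text{farmer})}
= \frac{0.90}{0.05}\cdot\frac{1}{20} = 0.9 .
\]

Even though the description fits a librarian eighteen times better than a farmer, the 20:1 base-rate disadvantage leaves ‘farmer’ the slightly more probable occupation. Participants who answer ‘librarian’ are, in effect, weighting only the likelihood term and ignoring the prior.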

Availability: This heuristic is used to assess “the frequency or probability by the ease with which instances or occurrences come to mind” [1]. In other words, people tend to judge by the salience of information. One may assess, for example, the risk of lung cancer by recalling its occurrence among family and friends. Even though availability can be a useful heuristic for assessing frequency and probability, judging by the ease of retrieving instances can lead to severe over- and underestimation of the truth [1]. Underestimation, for instance, has been observed in the demand for disaster insurance, e.g. flood or hurricane insurance. Richter, Schiller and Schlesinger argue that people’s willingness to buy disaster insurance is low because such catastrophes are unlikely to have affected them personally. However, the authors also note that people tend to buy more insurance in the aftermath of such incidents, even though the probability of a catastrophe remains unchanged.

Anchoring and adjustment: This heuristic is often used when people are given an anchor (i.e. an initial value or starting point) and must make estimates. Whether consciously or not, people adjust their estimate away from the anchor until they reach an acceptable value. Supporting evidence comes from biases in the evaluation of conjunctive and disjunctive events, insufficient revision of probabilities relative to Bayes’ Theorem [1], and framing (problem restatement) effects [19]. It does not matter whether the anchor is random or the result of a previous calculation: the adjustments are typically insufficient, so the final estimates are biased toward the anchor [20]. An illustrative example is the estimation of Gandhi’s age at death: given an anchor of 114 years, people overestimated his age at death, while participants given an anchor of 35 years markedly underestimated it [21]. The sketch following this paragraph illustrates the insufficient-adjustment mechanism.
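
A minimal simulation sketch of this mechanism, under the ‘adjust until plausible’ account of anchoring (the plausible range and step size below are hypothetical assumptions chosen for illustration, not parameters from the studies cited above): the respondent starts at the anchor and adjusts in steps until the estimate first falls inside a privately held plausible range, so the answer ends up at whichever edge of that range is closer to the anchor.

```python
# Minimal sketch of insufficient adjustment from an anchor.
# Assumptions (hypothetical, for illustration only): respondents hold a
# private "plausible range" for the quantity being estimated and adjust
# from the anchor in fixed steps, stopping at the first plausible value.

def anchored_estimate(anchor: float, plausible_low: float,
                      plausible_high: float, step: float = 1.0) -> float:
    """Adjust from the anchor until the estimate enters the plausible range."""
    estimate = anchor
    while estimate < plausible_low:   # anchor below the range: adjust upward
        estimate += step
    while estimate > plausible_high:  # anchor above the range: adjust downward
        estimate -= step
    return estimate

# Suppose a respondent privately considers 60-90 years plausible for
# Gandhi's age at death (he died at 78).
print(anchored_estimate(35, 60, 90))    # -> 60.0: low anchor, underestimate
print(anchored_estimate(114, 60, 90))   # -> 90.0: high anchor, overestimate
```

Because adjustment stops at the first plausible value rather than at the respondent’s best point estimate, estimates made from different anchors bracket the truth, reproducing the pattern of insufficient adjustment reported in [20,21].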

As the anchoring and adjustment heuristic is the subject of this study, the underlying theory is discussed in more detail. Following Wilson, Houston, Etling and Brekke [22], the anchoring and adjustment process is illustrated in Figure 1 and described as follows: