More than four decades of research have revealed the power of numerical anchors—numerical values to which people’s judgments often assimilate. In their initial study, for example, Tversky and Kahneman (1974) asked some participants if the percentage of African countries in the United Nations was smaller or larger than 10, whereas they asked others if the percentage was smaller or larger than 65. When subsequently asked to estimate the actual percentage of African countries, participants exposed to the low anchor of 10 gave lower estimates than participants exposed to the high anchor of 65. This procedure has since become the standard anchoring paradigm, in which participants first answer a comparative question containing the numerical anchor and then make an absolute numerical estimate of a target value.
The extent of anchoring effects is impressive (for a review, see Furnham & Boo, 2011). Indeed, numerical anchors can influence judgments ranging from estimates of probability (e.g., Plous, 1989) to estimates of appropriate food portion sizes (Marchiori, Papies, & Klein, 2014), and even random, implausible, or clearly irrelevant anchors can influence people’s judgments (e.g., Ariely, Loewenstein, & Prelec, 2003; Cheek, Coe-Odess, & Schwartz, 2015a; Englich, Mussweiler, & Strack, 2006; Tversky & Kahneman, 1974). Anchoring has potentially more serious consequences as well, because anchors can shape judgments in important domains such as real estate purchases (Bucchianeri & Minson, 2013) and negotiations (e.g., Galinsky & Mussweiler, 2001), and even experts such as judges (Englich et al., 2006) and doctors (Brewer, Chapman, Schwartz, & Bergus, 2007) exhibit standard anchoring effects. In a review of the substantial anchoring literature, Kahneman (2011) concluded that anchoring effects are “one of the most reliable and robust results of experimental psychology” (p. 119).
Recently, researchers have become more interested in identifying contextual and individual difference moderators of anchoring effects (the “third wave” of anchoring research; Epley & Gilovich, 2010). Several promising moderators have been identified, such as subjective power (Lammers & Burgmer, 2017), knowledge (e.g., Smith & Windschitl, 2015), anchor precision (e.g., Janiszewski & Uy, 2008), debiasing training experience (e.g., Adame, 2016), and mood (Bodenhausen, Gabriel, & Lineberger, 2000). However, there have also been many null results in moderator studies (e.g., Epley & Gilovich, 2006; Furnham, Boo, & McClelland, 2012; Oechssler, Roider, & Schmitz, 2009; Stanovich & West, 2008; Welsh, Delfabbro, Burns, & Begg, 2014). Indeed, Furnham and Boo (2011) argued that literature on individual difference and contextual moderators is inconsistent and that, overall, it has been difficult for researchers to identify reliable moderators.
The identification of reliable moderators of anchoring effects (a) has important theoretical implications for the understanding of anchoring effects (e.g., Chapman & Johnson, 2002; Englich, 2008; Simmons et al., 2010); (b) has proven to be surprisingly difficult in previous research (e.g., Furnham & Boo, 2011); (c) has become an increasingly prominent goal for anchoring researchers (Epley & Gilovich, 2010); and (d) has clear applications for understanding and enhancing everyday judgments (e.g., Chapman & Johnson, 2002; Furnham & Boo, 2011; Kahneman, 2011). In this article, we argue that one possible reason for past difficulty in identifying reliable moderators may be the tendency to ignore estimate direction when analyzing participants’ estimates, an issue which we elaborate below. We draw attention to this limitation in the analysis strategy of previous anchoring studies and suggest that improving analyses of anchoring data may subsequently improve researchers’ ability to detect moderators of interest. We outline our argument about the importance of estimate direction in the next section, with the main goal of highlighting the potential drawbacks to traditional analyses of moderators in anchoring studies.
In the standard anchoring paradigm, participants first answer a comparative question containing an anchor value (e.g., “Is the population of Chicago larger or smaller than 15 million?”), after which they are asked to make an absolute estimate (e.g., “What is the population of Chicago?”). When participants make an absolute estimate after exposure to an anchor value, their estimate can be in one of two directions: it can be higher or lower than the anchor value. The vast majority of anchoring research interested in explanatory mechanisms employs the standard anchoring paradigm, yet studies rarely take estimate direction into account. We seek to make a methodological contribution by illustrating the potentially negative implications of ignoring estimate direction.
Consider, for example, a hypothetical study of the role of shyness in anchoring susceptibility (a study that, to our knowledge, has never been conducted). A traditional analytic method for investigating the influence of shyness would be to see if trait shyness and anchoring condition (i.e., high- and low-anchor conditions) interact to predict people’s absolute estimates. If we assume that, in this hypothetical example, shyness predicts stronger anchoring, it may be that an interaction emerges between shyness and anchoring condition. However, it may also be that no interaction emerges as a result of ignoring estimate direction, despite the true influence of shyness on anchoring susceptibility.
If shyness increases anchoring, then shy participants exposed to a high anchor should give higher estimates than non-shy participants when they make estimates in what we will call the inner direction (i.e., lower than the high anchor). However, shy participants exposed to a high anchor should give lower estimates than non-shy participants when they make estimates in what we will call the outer direction (i.e., higher than the high anchor), because shy participants’ estimates should be closer to the anchor value than non-shy participants’ estimates in both directions. In other words, in the high-anchor condition, shyness would potentially predict both higher and lower estimates, depending on whether participants made estimates in the inner or outer direction. The same is true for participants in the low-anchor condition: shy participants should give lower estimates than non-shy participants when they make estimates in the inner direction (i.e., higher than the low anchor), whereas they should give higher estimates than non-shy participants when they make estimates in the outer direction (i.e., lower than the low anchor). In all cases, however, estimates from shy participants should be closer to the anchor value (whether they are above or below that value) than the estimates of non-shy participants (see Figure 1).
In this example, it is quite possible that even if shyness did predict stronger anchoring, no significant anchor condition × shyness interaction would emerge, because the influence of shyness on participants’ estimates when participants made estimates in the inner direction would be cancelled out by the influence of shyness on estimates when participants made estimates in the outer direction. In other words, ignoring estimate direction may result in failure to detect moderators of anchoring effects (or, at least, underestimation of the moderator’s influence, depending on the percentage of estimates in either direction). In fact, previous research has found that up to a third of participants often provide estimates in the outer direction, despite researchers’ assumptions that the vast majority of participants provide estimates in the inner direction (Grau & Bohner, 2014; Jacowitz & Kahneman, 1995). This is particularly important because studies in psychology tend to be underpowered even when using optimal analytic methods, and because moderator effects in anchoring research are likely to be relatively small (e.g., because there are so many different psychological routes to anchoring; Chapman & Johnson, 2002; for recent examples of potentially meaningful but small effects see Brandt, Evans, & Crawford, 2015; Cheek & Norem, 2017), such that researchers could fail to detect a small effect. It is in the interest of researchers investigating moderator effects to use the most appropriate and sensitive analyses in order to avoid false negatives and reach better estimates of effect size, and thus we argue that failing to take estimate direction into account in moderator studies of anchoring represents an important and problematic issue in the existing anchoring literature.
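A small numerical sketch makes this cancellation concrete. All numbers below are hypothetical, chosen only to mimic a high-anchor condition in which shy participants’ estimates cluster near the anchor in both directions while non-shy participants’ estimates fall farther away in both directions:

```python
# Hypothetical illustration of the cancellation problem described above.
# All values are invented; HIGH_ANCHOR mimics a high-anchor condition.
HIGH_ANCHOR = 264

# Shy estimates sit close to the anchor in BOTH directions;
# non-shy estimates sit farther from it in BOTH directions.
shy     = [244, 250, 258, 270, 276]
non_shy = [190, 200, 210, 320, 330]

def mean(xs):
    return sum(xs) / len(xs)

# Traditional analysis on raw estimates: the group means are nearly
# identical, so an anchor condition x shyness interaction test would
# have little to detect.
print(round(mean(shy), 1), round(mean(non_shy), 1))

# Gap-score analysis: |estimate - anchor| exposes the moderation clearly.
def mean_gap(xs):
    return mean([abs(x - HIGH_ANCHOR) for x in xs])

print(round(mean_gap(shy), 1), round(mean_gap(non_shy), 1))
```

The raw group means are nearly indistinguishable (259.6 vs. 250.0) even though shy estimates cluster far more tightly around the anchor; only the gap scores (11.6 vs. 62.8) reveal the difference.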
The overall effect of ignoring estimate direction will depend on how many participants answer in different directions—in some studies, only a few participants may answer in a given direction. Yet, as long as some participants do answer in different directions, it is still analytically inappropriate to ignore estimate direction: the logic of analyzing the interaction between anchoring condition and a potential moderator variable without taking into account estimate direction does not hold if participants answer in both inner and outer directions, even if that interaction is significant. Moreover, even if researchers do successfully detect a true interaction, ignoring estimate direction will likely result in an underestimation of a moderator’s effects. Thus, researchers will benefit from attending to estimate direction because it will increase the sensitivity of their analyses, thereby reducing the chance of Type II errors and increasing the accuracy of estimates of effect size.
There are at least three ways to address the role of estimate direction. First, researchers can plan a priori to exclude estimates that are more extreme than the anchor values (i.e., in the outer direction; see, e.g., Brandt et al., 2015). This method successfully avoids the risks of a Type II error caused by ignoring estimate direction (assuming the study is adequately powered), but potentially results in the exclusion of a substantial percentage of otherwise valid data. That is, we do not think that participants who make estimates that are more extreme than anchor values have provided invalid estimates; they are not outliers simply because their estimates are not between two anchor values. Moreover, excluding estimates in the outer direction may result in vastly different exclusion rates based on anchor condition—for example, more people may make estimates that are higher than the high anchor than estimates that are lower than the low anchor. Thus, simply excluding estimates in the outer direction seems, to us, an imperfect solution to the estimate direction problem, particularly when many participants provide estimates in different directions.
A second potential strategy is to include estimate direction as a factor in analyses (e.g., Grau & Bohner, 2014). For instance, if a study involved a high and low anchor and positive and negative mood manipulation, researchers could conduct a 2 (anchor condition: high vs. low) × 2 (mood condition: positive vs. negative) × 2 (estimate direction: inner vs. outer) ANOVA instead of just a 2 (anchor condition) × 2 (mood condition) ANOVA. This solution, however, requires more statistical power to detect a potential three-way interaction. Furthermore, the estimate direction factor is not independent from participants’ estimates: estimates will be more extreme in the outer direction by definition. Thus, adding estimate direction as a factor may also partially solve researchers’ problems, but it is not without limitations.
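Coding the direction factor is straightforward. The sketch below (function and variable names are ours, not from any published analysis script) classifies each estimate as inner or outer given the anchor the participant saw:

```python
# Hypothetical helper for adding estimate direction as a factor.
# "Inner" means toward the middle of the plausible range (below a high
# anchor, above a low anchor); "outer" means more extreme than the anchor.
def estimate_direction(estimate, anchor, anchor_condition):
    if anchor_condition == "high":
        return "inner" if estimate < anchor else "outer"
    if anchor_condition == "low":
        return "inner" if estimate > anchor else "outer"
    raise ValueError("anchor_condition must be 'high' or 'low'")

# Examples using the weight-task anchors from the study (14 vs. 42 pounds):
print(estimate_direction(30, 42, "high"))  # inner: below the high anchor
print(estimate_direction(50, 42, "high"))  # outer: above the high anchor
print(estimate_direction(10, 14, "low"))   # outer: below the low anchor
```

Note that this sketch arbitrarily classifies estimates exactly equal to the anchor as “outer”; a real analysis plan would need an explicit a priori rule for such ties.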
A third potential strategy offers a solution without such drawbacks. Instead of analyzing participants’ estimates as the dependent variable, researchers can analyze the gap between participants’ estimates and the anchor value to which they were exposed (i.e., the absolute value of the difference between estimates and their respective anchor values; Epley & Gilovich, 2001; Simmons et al., 2010). This solution prevents estimate direction from undermining analyses because if, returning to the shyness example, shy participants anchor more, their estimates will be closer to anchor values whether they make estimates above or below them. Accordingly, shy participants will have smaller anchor-estimate gaps—reflecting stronger anchoring—than non-shy participants. Thus, researchers can detect moderation by observing that shyness is negatively related to anchor-estimate gaps, overcoming the limitations of ignoring estimate direction. In what follows, we provide a secondary analysis of an anchoring data set that illustrates the potential benefit of considering estimate direction by analyzing anchor-estimate gaps.
We re-examined data collected by Cheek, Coe-Odess, and Schwartz (2015b), who examined the influence of numerical anchors on physical judgments (i.e., judgments of numerosity and weight). Participants in this study made two psychophysical judgments—the number of M&Ms in a jar and the weight of a bag—after exposure to anchors in the standard anchoring paradigm. For each judgment, all participants received the same second question prompting them to make absolute estimates, but the initial comparative question was varied in order to explore the influence of different anchoring processes. Hence, for the present study, the content of the comparative question—i.e., the object of judgment—served as the moderator of interest.
Cheek et al. (2015b) predicted that the variation in the content of the comparative question would produce different degrees of anchoring. In the same-object condition, the object of judgment and the scale of judgment were the same in the comparative question and the absolute judgment. For the numerosity judgment, participants in the same-object condition first considered whether the number of M&Ms in a jar was more or less than an anchor value, and then made an absolute estimate of the number of M&Ms. For the weight judgment task, participants first considered whether a bag weighed more or less than an anchor value, and then made an absolute estimate of the bag’s weight. In the different-object condition, however, the object of judgment differed between the comparative question and the absolute judgment. For the numerosity judgment task, participants in the different-object condition considered whether the number of pennies in a clear bag was more or less than an anchor value, after which they made an absolute estimate of the number of M&Ms. For the weight judgment task, participants in the different-object condition first considered whether a box weighed more or less than an anchor value, and then made an absolute estimate of a bag’s weight. Anchor values for the numerosity and weight judgments are presented in Table 1. Additional details about the methods of this study are available in the online supplemental material.
Table 1. Anchor values and objects of judgment for the numerosity and weight tasks.

| Judgment | Condition | Low Anchor | High Anchor | Comparative Question Object | Absolute Estimate Object |
| --- | --- | --- | --- | --- | --- |
| Numerosity | Same-Object | 88 | 264 | Jar of M&Ms | Jar of M&Ms |
| Numerosity | Different-Object | 88 | 264 | Bag of Pennies | Jar of M&Ms |
| Weight | Same-Object | 14 | 42 | Duffle Bag | Duffle Bag |
| Weight | Different-Object | 14 | 42 | Cardboard Box | Duffle Bag |
This design was modelled after previous research exploring different theories of anchoring (e.g., Strack & Mussweiler, 1997; Wong & Kwong, 2000; see also Cheek, 2016). For example, Strack and Mussweiler (1997, Study 2) asked participants to first answer a comparative question about whether the mean winter temperature in Antarctica was higher or lower than an anchor value, after which participants either estimated the mean winter temperature of Antarctica (same-object condition) or the mean temperature of Hawaii (different-object condition). Strack and Mussweiler argued that because the comparative question is thought to activate anchor-consistent information (a process they formalized as the Selective Accessibility Model of anchoring), anchoring effects should be stronger when the object of judgment is the same in the comparative question and absolute estimate. A similar prediction derives from pragmatic accounts of anchoring (e.g., Zhang & Schwarz, 2013), which argue that anchor values should be perceived as less informative in the different-object condition, and therefore exert a weaker influence on participants’ estimates. Thus, based on previous research, Cheek et al. (2015b) predicted that although anchoring effects may emerge in both object conditions, they should be stronger in the same-object condition than in the different-object condition.
Surprisingly, however, Cheek et al.’s (2015b) analyses, which followed the traditional method of conducting a 2 (anchor condition: high vs. low) × 2 (object condition: same-object vs. different-object) ANOVA on participants’ estimates, did not provide evidence that the content of the comparative question moderated the effect of anchors on participants’ estimates. Here, we sought to re-analyze these data while taking estimate direction into account, with the prediction that the content of the comparative question would indeed moderate the anchoring effect, as in previous research, but that this moderation would be detected only when estimate direction was considered.
To illustrate the consequences of ignoring estimate direction, we analyzed the data in two ways. First, we followed Cheek et al. (2015b) by conducting a 2 (anchor condition: high vs. low) × 2 (object condition: same-object vs. different-object) ANOVA on participants’ estimates. This method can be interpreted as ignoring estimate direction; it focuses only on participants’ estimates regardless of whether estimates are higher or lower than the anchor value. For example, if anchoring is stronger in the same-object than in the different-object condition, then estimates should be higher in the same-object condition when participants in the high-anchor condition make estimates in the inner direction, but lower when participants in the high-anchor condition make estimates in the outer direction. The ANOVA on participants’ estimates fails to account for this pattern, because it pools all estimates into one dependent variable regardless of their direction relative to anchor values. Accordingly, this method of analysis may not reveal any differences in the strength of anchoring across conditions, though a significant main effect of anchor condition should emerge.
For the second analysis, we took estimate direction into account by calculating anchor-estimate gap scores (e.g., Epley & Gilovich, 2001; Simmons et al., 2010). To do so, we took the absolute value of the difference between participants’ estimates and the anchor value to which they were exposed. Thus, a participant exposed to the low anchor of 14 pounds for the weight judgment who gave an estimate of 10 pounds would have the same anchor-estimate gap score as a participant who gave an estimate of 18 pounds. Using gap scores therefore eliminates the problem with the first analytical method—now, we have a measure of the distance between estimates and the anchor values regardless of the direction of participants’ estimates. In this case, if anchoring is stronger in the same-object condition than in the different-object condition, anchor-estimate gap scores should be lower in the former condition (reflecting the fact that participants’ judgments were closer to the anchor value and thus that anchoring was stronger).
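The gap-score transformation itself is a one-liner; the sketch below reproduces the 14-pound example just described:

```python
# Anchor-estimate gap: absolute distance between an estimate and the anchor
# the participant was exposed to. Smaller gaps reflect stronger anchoring.
def anchor_estimate_gap(estimate, anchor):
    return abs(estimate - anchor)

# Two participants exposed to the 14-pound low anchor in the weight task:
print(anchor_estimate_gap(10, 14))  # 4 (outer direction)
print(anchor_estimate_gap(18, 14))  # 4 (inner direction) -- identical gaps
```

Both estimates receive the same gap score of 4, so the transformation measures anchoring strength independently of estimate direction.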
Table 2 presents the percentage of participants who gave inner and outer estimates for the numerosity and weight judgments. Overall, participants gave answers in the outer direction about one fifth of the time.
| Condition | Numerosity Judgment | Weight Judgment |
Following Cheek et al. (2015b), we first conducted a 2 (anchor condition: high vs. low) × 2 (object condition: same-object vs. different-object) ANOVA on numerosity estimates. Analyzing these estimates is crucial to demonstrating that there is actually an anchoring effect for a particular task or question. There was a significant main effect of anchor condition, F(1, 171) = 28.23, p < .001, ηp2 = .14, indicating that participants gave higher estimates in the high-anchor condition (M = 153.45, SD = 81.65) than in the low-anchor condition (M = 100.65, SD = 41.09). Thus, anchors clearly influenced participants’ estimates. There was no main effect of object condition, F(1, 171) = .08, p = .783, ηp2 = .00, and no interaction, F(1, 171) = 2.06, p = .154, ηp2 = .01. As found by Cheek et al. (2015b), this ANOVA suggested that object condition did not influence the degree of anchoring.
We then conducted a 2 (anchor condition) × 2 (object condition) ANOVA on anchor-estimate gap scores. There was a main effect of anchor condition, F(1, 171) = 189.72, p < .001, ηp2 = .53, indicating that, consistent with previous research (e.g., Jacowitz & Kahneman, 1995), anchor-estimate gap scores were higher in the high-anchor condition (M = 124.08, SD = 58.76) than in the low-anchor condition (M = 30.90, SD = 29.75). Importantly, as predicted, there was also a significant main effect of object condition, F(1, 171) = 10.30, p = .002, ηp2 = .06, indicating that there was a difference in the strength of anchoring between the two object conditions, with the same-object condition resulting in smaller anchor-estimate gap scores (M = 66.46, SD = 58.09), and thus stronger anchoring, than the different-object condition (M = 88.57, SD = 72.12). The interaction between anchor condition and object condition was not significant, F(1, 171) = 2.59, p = .109, ηp2 = .02.
As with the numerosity judgment, we first conducted a 2 (anchor condition) × 2 (object condition) ANOVA on weight estimates, which yielded a significant main effect of anchor condition, F(1, 164) = 14.98, p < .001, ηp2 = .09, indicating that weight estimates were higher in the high-anchor condition (M = 41.80, SD = 17.52) than in the low-anchor condition (M = 32.34, SD = 13.89). There was not a significant interaction, F(1, 164) = .48, p = .489, ηp2 = .00, which was taken by Cheek et al. (2015b) to suggest that there was no influence of object condition on anchoring strength. Interestingly, however, there was an unexpected main effect of object condition, F(1, 164) = 10.97, p = .001, ηp2 = .06, such that estimates were higher in the different-object condition than in the same-object condition for both the low and the high anchor conditions (M = 41.36, SD = 18.01 vs. M = 33.36, SD = 14.06). Although not predicted, one possibility is that holding the lighter box before the heavier bag created a contrast effect (e.g., Bevan & Darby, 1955; Sherif, Taub, & Hovland, 1958) in the different-object condition, but because this result is not the focus of the present research, we will not discuss it further.
Next, we conducted a 2 × 2 ANOVA on anchor-estimate gap scores, which again yielded a main effect of anchor condition, F(1, 164) = 11.08, p = .001, ηp2 = .06, indicating that, in this case, gap scores were higher in the low-anchor condition (M = 18.66, SD = 13.45) than in the high-anchor condition (M = 12.37, SD = 12.33). Importantly, the predicted main effect of object condition was also significant, F(1, 164) = 11.92, p = .001, ηp2 = .07, indicating that, as with the numerosity judgment, anchor-estimate gap scores were larger in the different-object condition (M = 19.05, SD = 14.94) than in the same-object condition (M = 12.39, SD = 10.71), reflecting stronger anchoring in the latter condition. The interaction between object condition and anchor condition was not significant, F(1, 164) = 1.40, p = .239, ηp2 = .01.
The results of our analyses provide an illustration of our argument about the importance of taking the direction of participants’ estimates relative to anchor values into account when investigating moderators of anchoring in the standard anchoring paradigm. Indeed, Cheek et al.’s (2015b) analyses using raw estimates as the dependent variable, which mirror the analytic strategy of most studies that examine moderators of anchoring, provided no evidence that the object of judgment in the comparative question moderates the strength of anchoring effects. That the content of the comparative question moderates anchoring effects has been a key finding in the development of theories of anchoring (e.g., Strack & Mussweiler, 1997), and this important pattern would have gone undetected had we ignored the direction of participants’ estimates.
It is important to note that both sets of analyses conducted above provide information necessary to fully interpret the pattern of anchoring effects observed. The first ANOVA on raw estimates, although it did not reveal the moderating effect of object condition, did reveal that anchoring effects emerged, a result not fully addressed when considering only anchor-estimate gap scores. Indeed, one could calculate anchor-estimate gap scores even when no anchoring effects emerged, and in analyzing gap scores, fail to realize the—perhaps most important—fact that there are no anchoring effects. Thus, we argue that when conducting moderation analyses, future researchers should examine both estimates and anchor-estimate gap scores to completely interpret both standard anchoring effects and potential moderation. Of course, some investigations may not need to consider the direction of participants’ estimates—for example, studies interested only in whether or not anchoring effects emerge at all.
Although, in our view, analyzing anchor-estimate gap scores is often the simplest way to take estimate direction into account, it is not without its own limitations. For example, as suggested to us by a reviewer, whether participants provide an estimate in the inner or outer direction may itself be relevant to research questions. Indeed, it seems likely that less knowledgeable participants are more likely to provide extreme estimates in the outer direction, indicating a reduced understanding of the plausible range of target values (see Smith & Windschitl, 2015). In these cases, researchers may find it useful to analyze anchor-estimate gap scores separately based on whether estimates are in the inner or outer direction in addition to analyzing a simple average anchor-estimate gap score.
Considering extreme values also raises another methodological question for researchers: what to do about outlier estimates. To our knowledge, researchers typically follow one of two strategies—they either exclude estimates that are two or three standard deviations away from the mean (e.g., Englich & Soder, 2009), or they rank-transform estimates (e.g., Brandt et al., 2015). Prescribing methods to handle outliers is outside the scope of the present article, but we tentatively suggest that as long as researchers decide a priori (and ideally pre-register) what they will do—and present analyses with and without outlier correction (e.g., in a footnote)—the particular method may be less important in the case of outliers than in the case of estimate direction. Future work, however, should further explore optimal methods of considering outlier estimates, as well as methods for considering estimate direction. More broadly, both the question of how to handle outliers and the role of estimate direction underline the importance of paying close attention to the analysis and interpretation of data in anchoring studies. Anchoring effects have potentially powerful influences on judgments, and researchers should study them with equally powerful methods.
Data analyzed in this article are available through the Open Science Framework: https://osf.io/e37xk/.
We thank the editor and two reviewers for helpful comments on an earlier version of this article.
The first author was supported by a Graduate Research Fellowship from the National Science Foundation while working on this article. The second author received research support from the Margaret Hamm Research Fund.
The authors have no competing interests to declare.
Drafted the article: NC, JK.
Revised the article: NC, JK.
Adame, B. J. (2016). Training in the mitigation of anchoring bias: A test of the consider-the-opposite strategy. Learning and Motivation, 53, 36–48. DOI: https://doi.org/10.1016/j.lmot.2015.11.002
Ariely, D., Loewenstein, G., & Prelec, D. (2003). “Coherent arbitrariness”: Stable demand curves without stable preferences. The Quarterly Journal of Economics, 118, 73–105. DOI: https://doi.org/10.1162/00335530360535153
Bevan, W., & Darby, C. L. (1955). Patterns of experience and the constancy of an indifference point for perceived weight. The American Journal of Psychology, 68, 575–584. DOI: https://doi.org/10.2307/1418785
Bodenhausen, G. V., Gabriel, S., & Lineberger, M. (2000). Sadness and susceptibility to judgmental bias: The case of anchoring. Psychological Science, 11, 320–323. DOI: https://doi.org/10.1111/1467-9280.00263
Brandt, M. J., Evans, A., & Crawford, J. T. (2015). The unthinking or confident extremist? Political extremists are more likely than moderates to reject experimenter-generated anchors. Psychological Science, 26, 189–202. DOI: https://doi.org/10.1177/0956797614559730
Brewer, N. T., Chapman, G. B., Schwartz, J. A., & Bergus, G. R. (2007). The influence of irrelevant anchors on the judgments and choices of doctors and patients. Medical Decision Making, 27, 203–211. DOI: https://doi.org/10.1177/0272989X06298595
Bucchianeri, G. W., & Minson, J. A. (2013). A homeowner’s dilemma: Anchoring in residential real estate transactions. Journal of Economic Behavior & Organization, 89, 76–92. DOI: https://doi.org/10.1016/j.jebo.2013.01.010
Chapman, G. B., & Johnson, E. J. (2002). Incorporating the irrelevant: Anchors in judgments of belief and value. In Gilovich, T., Griffin, D., & Kahneman, D. (Eds.), Heuristics and biases: The psychology of intuitive judgment. Cambridge, United Kingdom: Cambridge University Press, pp. 120–138. DOI: https://doi.org/10.1017/CBO9780511808098.008
Cheek, N. N. (2016). Semantic versus numeric priming and the consider-the-opposite strategy: Comment on Adame (2016). Learning and Motivation, 53, 49–51. DOI: https://doi.org/10.1016/j.lmot.2016.03.001
Cheek, N. N., Coe-Odess, S., & Schwartz, B. (2015b, March). Explaining the effect of numerical anchors on physical judgments: Testing two theories. Poster presented at the 86th Annual Meeting of the Eastern Psychological Association, Philadelphia, PA.
Cheek, N. N., & Norem, J. K. (2017). Holistic thinkers anchor less: Exploring the roles of self-construal and thinking styles in anchoring susceptibility. Personality and Individual Differences, 115, 174–176. DOI: https://doi.org/10.1016/j.paid.2016.01.034
Englich, B. (2008). When knowledge matters—differential effects of available knowledge in standard and basic anchoring tasks. European Journal of Social Psychology, 38, 896–904. DOI: https://doi.org/10.1002/ejsp.479
Englich, B., Mussweiler, T., & Strack, F. (2006). Playing dice with criminal sentences: The influence of irrelevant anchors on experts’ judicial decision making. Personality and Social Psychology Bulletin, 32, 188–200. DOI: https://doi.org/10.1177/0146167205282152
Epley, N., & Gilovich, T. (2001). Putting adjustment back in the anchoring and adjustment heuristic: Differential processing of self-generated and experimenter-provided anchors. Psychological Science, 12, 391–396. DOI: https://doi.org/10.1111/1467-9280.00372
Epley, N., & Gilovich, T. (2006). The anchor-and-adjustment heuristic: Why the adjustments are insufficient. Psychological Science, 17, 311–318. DOI: https://doi.org/10.1111/j.1467-9280.2006.01704.x
Epley, N., & Gilovich, T. (2010). Anchoring unbound. Journal of Consumer Psychology, 20, 20–24. DOI: https://doi.org/10.1016/j.jcps.2009.12.005
Furnham, A., & Boo, H. C. (2011). A literature review of the anchoring effect. The Journal of Socio-Economics, 40, 35–42. DOI: https://doi.org/10.1016/j.socec.2010.10.008
Furnham, A., Boo, H. C., & McClelland, A. (2012). Individual differences and the susceptibility to the influence of anchoring cues. Journal of Individual Differences, 33, 89–93. DOI: https://doi.org/10.1027/1614-0001/a000076
Galinsky, A. D., & Mussweiler, T. (2001). First offers as anchors: The role of perspective-taking and negotiator focus. Journal of Personality and Social Psychology, 81, 657–669. DOI: https://doi.org/10.1037/0022-3514.81.4.657
Grau, I., & Bohner, G. (2014). Anchoring revisited: The role of the comparative question. PLoS ONE, 9, e86056. DOI: https://doi.org/10.1371/journal.pone.0086056
Jacowitz, K. E., & Kahneman, D. (1995). Measures of anchoring in estimation tasks. Personality and Social Psychology Bulletin, 21, 1161–1166. DOI: https://doi.org/10.1177/01461672952111004
Lammers, J., & Burgmer, P. (2017). Power increases anchoring effects on judgment. Social Cognition, 35, 40–53. DOI: https://doi.org/10.1521/soco.2017.35.1.40
Marchiori, D., Papies, E. K., & Klein, O. (2014). The portion size effect on food intake. An anchoring and adjustment process? Appetite, 81, 108–115. DOI: https://doi.org/10.1016/j.appet.2014.06.018
Oechssler, J., Roider, A., & Schmitz, P. W. (2009). Cognitive abilities and behavioral biases. Journal of Economic Behavior & Organization, 72, 147–152. DOI: https://doi.org/10.1016/j.jebo.2009.04.018
Plous, S. (1989). Thinking the unthinkable: The effects of anchoring on likelihood estimates of nuclear war. Journal of Applied Social Psychology, 19, 67–91. DOI: https://doi.org/10.1111/j.1559-1816.1989.tb01221.x
Sherif, M., Taub, D., & Hovland, C. I. (1958). Assimilation and contrast effects of anchoring stimuli on judgments. Journal of Experimental Psychology, 55, 150–155. DOI: https://doi.org/10.1037/h0048784
Simmons, J. P., LeBoeuf, R. A., & Nelson, L. D. (2010). The effect of accuracy motivation on anchoring and adjustment: Do people adjust away from provided anchors? Journal of Personality and Social Psychology, 99, 917–932. DOI: https://doi.org/10.1037/a0021540
Smith, A. R., & Windschitl, P. D. (2015). Resisting anchoring effects: The roles of metric and mapping knowledge. Memory & Cognition, 43, 1071–1084. DOI: https://doi.org/10.3758/s13421-015-0524-4
Stanovich, K. E., & West, R. F. (2008). On the relative independence of thinking biases and cognitive ability. Journal of Personality and Social Psychology, 94, 672–695. DOI: https://doi.org/10.1037/0022-3514.94.4.672
Strack, F., & Mussweiler, T. (1997). Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology, 73, 437–446. DOI: https://doi.org/10.1037/0022-3514.73.3.437
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131. DOI: https://doi.org/10.1126/science.185.4157.1124
Welsh, M. B., Delfabbro, P. H., Burns, N. R., & Begg, S. H. (2014). Individual differences in anchoring: Traits and experience. Learning and Individual Differences, 29, 131–140. DOI: https://doi.org/10.1016/j.lindif.2013.01.002
Wong, K. F. E., & Kwong, J. Y. Y. (2000). Is 7300 m equal to 7.3 km? Same semantics but different anchoring effects. Organizational Behavior and Human Decision Processes, 82, 314–333. DOI: https://doi.org/10.1006/obhd.2000.2900
Zhang, Y. C., & Schwarz, N. (2013). The power of precise numbers: A conversational logic analysis. Journal of Experimental Social Psychology, 49, 944–946. DOI: https://doi.org/10.1016/j.jesp.2013.04.002
The author(s) of this paper chose the Open Review option, and the peer review comments are available at: http://doi.org/10.1525/collabra.125.pr