Publications of Mercier, Hugo
Bounded reason in a social world
Standard dual process theories see reason (System 2) as an individual cognitive mechanism able to correct the mistakes of intuition (System 1) and to perform as a classical rational system, imperfect but not bounded in Simon’s sense. This chapter suggests instead that reason is another intuitive cognitive mechanism, with a specific domain—reasons—and specific functions—to produce and evaluate justifications and arguments in social settings. In this interactionist perspective, reason is a clear instance of bounded rationality, making the best of its social and cognitive environment to find and evaluate reasons. It performs these tasks well enough in the right social and cognitive context, but not otherwise.
Utilizing simple cues to informational dependency
Studies have shown that participants can adequately take into account several cues regarding the weight they should grant majority opinions, such as the absolute and relative size of the majority. However, participants do not seem to consistently take into account cues about whether the members of the majority have formed their opinions independently of each other. Using an evolutionary framework, we suggest that these conflicting results can be explained by distinguishing evolutionarily valid cues (i.e. they were present and reliable during human evolution) from other cues. We use this framework to derive and test five hypotheses (H1 to H5). Our first three experiments reveal that participants discount majority opinion when the members of the majority owe their opinions to the same hearsay (H1), owe their opinions to having perceived the same event (H2), or owe their opinions to a common motivation (H3). Experiment 4 suggests that, by contrast, participants do not discount majority opinion when the members of the majority owe their opinions to sharing similar cognitive traits (H4). Finally, Experiment 5 suggests that participants adequately discount majority opinion when one of the members of the majority is untrustworthy (H5). This set of experiments shows that participants can be quite skilled at dealing with informational dependency, and that an evolutionary framework helps make sense of their strengths and weaknesses in this domain.
Why a modular approach to reason?
In their reviews, Chater and Oaksford, Dutilh Novaes, and Sterelny are critical of our modularist approach to reason. In this response, we clarify our claim that reason is one of many cognitive modules that produce intuitive inferences, each in its own domain; the reason module produces intuitions about reasons. We argue that in‐principle objections to the idea of massive modularity based on Fodor's peculiar approach are not effective against other interpretations that have led to insightful uses of the notion in psychology and biology. We explain how the reason module evaluates reasons on the basis of their metacognitive properties. We show how the module fulfils a social function: that of producing reasons to justify oneself and convince others, and of evaluating the reasons others produce to convince us.
Willingness to transmit and the spread of pseudoscientific beliefs
Pseudoscientific beliefs are widespread and can be damaging. While several studies have examined the factors leading people to accept pseudoscientific beliefs, no attention has been paid to the factors contributing to people's willingness to transmit these beliefs. To test whether the willingness to transmit pseudoscientific beliefs contributes to their spread, independent of their believability, we asked participants to rate statements corresponding either to pseudoscientific beliefs (Myths), or to their (correct) negations (Non‐Myths). Statements were rated on believability, on how willing participants would be to transmit them, and on how knowledgeable they would make someone who produces them. Results revealed that participants who believed in Myths were more willing to transmit them than participants who believed in Non‐Myths were willing to transmit Non‐Myths. A potential factor driving the increased willingness to transmit both Myths and Non‐Myths might be participants' belief that holding these beliefs makes one seem more knowledgeable.
Scientists' Argumentative Reasoning
Reasoning, defined as the production and evaluation of reasons, is a central process in science. The dominant view of reasoning, both in the psychology of reasoning and in the psychology of science, is of a mechanism with an asocial function: bettering the beliefs of the lone reasoner. Many observations, however, are difficult to reconcile with this view of reasoning; in particular, reasoning systematically searches for reasons that support the reasoner's initial beliefs, and it only evaluates these reasons cursorily. By contrast, reasoners are well able to evaluate others' reasons: accepting strong arguments and rejecting weak ones. The argumentative theory of reasoning accounts for these traits of reasoning by postulating that the evolved function of reasoning is to argue: to find arguments to convince others and to change one's mind when confronted with good arguments. Scientific reasoning, however, is often described as being at odds with such an argumentative mechanism: scientists are supposed to reason objectively on their own, and to be pigheaded when their theories are challenged, even by good arguments. In this article, we review evidence showing that scientists, when reasoning, are subject to the same biases as laypeople, while being able to change their minds when confronted with good arguments. We conclude that the argumentative theory of reasoning explains well key features of scientists' reasoning, and that differences in the way scientists and laypeople reason result from the institutional framework of science.
The place of evolved cognition in scientific thinking
There are three ways in which scientific cognition can be more "natural" than McCauley suggests in his book 'Why Religion is Natural and Science is Not': (1) reasoning, which is at the heart of scientific cognition, is a very "natural" activity when it is conducted in a dialogic context; (2) even when reasoning is used in a slower, more effortful manner, it can be recruited in "natural" ways; (3) unintuitive scientific beliefs are built on a scaffold of more intuitive beliefs and core knowledge.
Epistemic Vigilance
Humans massively depend on communication with others, but this leaves them open to the risk of being accidentally or intentionally misinformed. To ensure that, despite this risk, communication remains advantageous, humans have, we claim, a suite of cognitive mechanisms for epistemic vigilance. Here we outline this claim and consider some of the ways in which epistemic vigilance works in mental and social life by surveying issues, research and theories in different domains of philosophy, linguistics, cognitive psychology and the social sciences.