Publications of Csibra, G.

Images of objects are interpreted as symbols: A case study of automatic size measurement

Are photographs of objects presented on a screen in an experimental context treated as the objects themselves or are they interpreted as symbols standing for objects? We addressed this question by investigating the size Stroop effect—the finding that people take longer to judge the relative size of two pictures when the real-world size of the depicted objects is incongruent with their display size. In Experiment 1, we replicated the size Stroop effect with new stimulus pairs (e.g., a zebra and a watermelon). In Experiment 2, we replaced the large objects in Experiment 1 with small toy objects that usually stand for them (e.g., a toy zebra), and found that the Stroop effect was driven by what the toys stood for, not by the toys themselves. In Experiment 3, we showed that the association between an image of a toy and the object the toy typically stands for is not automatic: when toys were pitted against the objects they typically represent (e.g., a toy zebra versus a zebra), images of toys were interpreted as representations of small objects, unlike in Experiment 2. We argue that participants interpret images as discourse-bound symbols and automatically compute what the images stand for in the discourse context of the experimental situation.

The co-evolution of cooperation and communication: alternative accounts

We challenge the proposal that partner-choice ecology explains the evolutionary emergence of ostensive communication in humans. The good fit between these domains might be due to the opposite relation (ostensive communication promotes the evolution of cooperation) or to the dependence of both these human-specific traits on a more ancient contributor to human cognitive evolution: the use of technology.

Infants’ representation of asymmetric social influence

In social groups, some individuals have more influence than others, for example because they are learnt from, or because they coordinate collective actions. Identifying these influential individuals is crucial to learn about one’s social environment. Here we tested whether infants represent asymmetric social influence among individuals from observing the imitation of movements, in the absence of any observable coercion or order. We defined social influence in terms of Granger causality: if A influences B, then past behaviors of A contain information that predicts the behaviors and mental states of B above and beyond the information contained in the past behaviors and mental states of B alone. Twelve-, fifteen- and eighteen-month-old infants were familiarized with agents (imitators) influenced by the actions of another one (target). During the test, the infants observed either an imitator who was no longer influenced by the target (incongruent test) or the target, who was not influenced by an imitator (neutral test). The participants looked significantly longer at the incongruent than at the neutral test. This result shows that infants represent and generalize individuals' potential to influence others' actions, and that they are sensitive to the asymmetric nature of social influence: upon learning that A influences B, they expect that the influence of A over B will remain stronger than the influence of B over A in a novel context. Because of the pervasiveness of social influence in many social interactions and relationships, its representation during infancy is fundamental to understanding and predicting others' behaviors.
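The Granger-style criterion of influence invoked above can be sketched numerically. In this minimal illustration (the simulated data, single lag, and 10% improvement threshold are assumptions for the example, not the authors' analysis), agent B imitates agent A with a one-step delay:

```python
import numpy as np

def granger_influence(a, b, lag=1):
    """Return True if past values of `a` improve the prediction of `b`
    beyond b's own past (a minimal, single-lag sketch without a formal
    significance test)."""
    y = b[lag:]
    # Restricted model: predict b[t] from b[t-1] only.
    X_r = np.column_stack([np.ones(len(y)), b[:-lag]])
    # Full model: predict b[t] from b[t-1] and a[t-1].
    X_f = np.column_stack([np.ones(len(y)), b[:-lag], a[:-lag]])
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    # Influence if adding a's past cuts residual error by >10% (arbitrary).
    return rss(X_f) < 0.9 * rss(X_r)

rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = np.zeros(500)
for t in range(1, 500):
    b[t] = 0.8 * a[t - 1] + 0.1 * rng.normal()  # B imitates A with a delay

asymmetric = granger_influence(a, b) and not granger_influence(b, a)
```

With such data the asymmetry emphasized in the abstract falls out directly: A's past improves the prediction of B, while B's past adds nothing to the prediction of A.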

A short history of theories of intuitive theories

Intuitive theories are sets of integrated concepts and causal laws that people adopt to comprehend, explain, and predict certain phenomena they encounter in the world. These theories are ‘intuitive’ because they are thought to drive our intuitions about how the physical and biological world, the mental life of people, and the society we live in work, without meeting the standards of explicit scientific theorizing. The proposal that people adopt such theories has been around at least since the 1970s. However, how psychologists think about intuitive theories has been changing since they were first proposed. In this chapter, we provide a short overview of the approaches to the function of intuitive theories and belief-forming systems more generally. While early characterizations of intuitive theories emphasized their epistemic function, later attempts took an evolutionary view, claiming that they serve adaptive functions that are not always aligned with the goal of accurately tracking environmental states. A recent twist in this story is the proposal that shared intuitive theories may also serve social functions by providing a ‘theoretical common ground’ on which people interpret unobservable entities, such as memories, character traits, entitlements, and obligations. Such shared theories might be essential for social coordination via communication.

Nonverbal action interpretation guides novel word disambiguation in 12-month-olds

Whether young infants can exploit socio-pragmatic information to interpret new words is a matter of debate. Based on findings and theories from the action interpretation literature, we hypothesized that 12-month-olds should distinguish communicative object-directed actions expressing reference from instrumental object-directed actions indicative of one’s goals, and selectively use the former to identify referents of novel linguistic expressions. This hypothesis was tested across four eye-tracking experiments. Infants watched pairs of unfamiliar objects, one of which was first targeted by either a communicative or an instrumental action and then labeled with a novel word. As predicted, infants fast-mapped the novel words onto the targeted object after pointing (Experiments 1 and 4) but not after grasping (Experiment 2) unless the grasping action was preceded by an ostensive signal (Experiment 3). We also found that whenever infants mapped a novel word onto the object indicated by the action, they tended to map a different novel word onto the distractor object, displaying a mutual exclusivity effect. In sum, communicative actions enabled infants both to rapidly map labels onto highlighted objects and to carry out mutual-exclusivity inferences about other words. This reliance on nonverbal action interpretation in the disambiguation of novel words indicates that socio-pragmatic inferences about reference likely supplement associative and statistical learning mechanisms from the outset of word learning.

Young domestic chicks spontaneously represent the absence of objects

Absence is a notion that is usually captured by language-related concepts like zero or negation. Whether nonlinguistic creatures encode similar thoughts is an open question, as everyday behavior marked by absence (of food, of social partners) can be explained solely by expecting presence somewhere else. We investigated 8-day-old chicks’ looking behavior in response to events violating expectations about the presence or absence of an object. We found different behavioral responses to violations of presence and absence, suggesting distinct underlying mechanisms. Importantly, chicks displayed an avian signature of novelty detection to violations of absence, namely a sex-dependent left-eye bias. Follow-up experiments excluded accounts that would explain this bias by perceptual mismatch or by representing the object at different locations. These results suggest that the ability to spontaneously form representations about the absence of objects likely belongs to the initial cognitive repertoire of vertebrate species.

Structural asymmetries in the representation of giving and taking events

Across languages, GIVE and TAKE verbs have different syntactic requirements: GIVE mandates a patient argument to be made explicit in the clause structure, whereas TAKE does not. Experimental evidence suggests that this asymmetry is rooted in prelinguistic assumptions about the minimal number of event participants that each action entails. The present study provides corroborating evidence for this proposal by investigating whether the observation of giving and taking actions modulates the inclusion of patients in the represented event. Participants were shown events featuring an agent (A) transferring an object to, or collecting it from, an animate target (B) or an inanimate target (a rock), and their sensitivity to changes in pair composition (AB vs. AC) and action role (AB vs. BA) was measured. Change sensitivity was affected by the type of target approached when the agent transferred the object (Experiment 1), but not when she collected it (Experiment 2), or when an outside force carried out the transfer (Experiment 3). Although these object-displacing actions could be equally interpreted as interactive (i.e., directed towards B), this construal was adopted only when B could be perceived as a putative patient of a giving action. This evidence buttresses the proposal that structural asymmetries in giving and taking, as reflected in their syntactic requirements, may originate from prelinguistic assumptions about the minimal event participants required for each action to be teleologically well-formed.

Infants expect agents to minimize the collective cost of collaborative actions

This paper argues that human infants address the challenges of optimizing, recognizing, and interpreting collaborative behaviors by assessing their collective efficiency. This hypothesis was tested in a looking-time study. Fourteen-month-olds (N = 32) were familiarized with agents performing a collaborative action in computer animations. During the test phase, the looking times were measured while the agents acted with various efficiency parameters. In the critical condition, the agents’ actions were individually efficient, but their combination was either collectively efficient or inefficient. Infants looked longer at test events that violated expectations of collective efficiency (p = .006, d = 0.79). Thus, preverbal infants apply expectations of collective efficiency to actions involving multiple agents.

Three cognitive mechanisms for knowledge tracking

We welcome Phillips et al.’s proposal to separate the understanding of ‘knowledge’ from that of ‘beliefs’. We argue that this distinction is best specified at the level of the cognitive mechanisms. Three distinct mechanisms are discussed: tagging one’s own representations with those who share the same reality; representing others’ representations (metarepresenting knowledge); and attributing dispositions to provide useful information.

Can infants adopt underspecified contents into attributed beliefs? Representational prerequisites of theory of mind

Recent evidence suggests that young infants, as well as nonhuman apes, can anticipate others’ behavior based on their false beliefs. While such behaviors have been proposed to be accounted for by simple associations between agents, objects, and locations, human adults are undoubtedly endowed with sophisticated theory of mind abilities. For example, they can attribute mental contents about abstract or non-existing entities, or beliefs whose content is poorly specified. While such endeavors may be human-specific, it is unclear whether the representational apparatus that allows for encoding such beliefs is present early in development. In four experiments we asked whether 15-month-old infants are able to attribute beliefs with underspecified content, update their content later, and maintain attributed beliefs that are unknown to be true or false. In Experiment 1, infants observed as an agent hid an object at an unspecified location. This location was later revealed in the absence or presence of the agent, and the object was then hidden again at an unspecified location. Then the infants could search for the object while the agent was away. Their search was biased to the revealed location (which could be represented as the potential content of the agent’s belief when she had not witnessed the re-hiding), suggesting that they (1) first attributed an underspecified belief to the agent, (2) later updated the content of this belief, and (3) were primed by this content in their own action even though its validity was unknown. This priming effect was absent when the agent witnessed the re-hiding of the object, and thus her belief about the earlier location of the object did not have to be sustained. The same effect was observed when infants searched for a different toy (Experiment 2) or when an additional spatial transformation was introduced (Experiment 4), but not when the spatial transformation disrupted belief updating (Experiment 3).
These data suggest that infants’ representational apparatus is prepared to efficiently track other agents’ beliefs online, encode underspecified beliefs and define their content later, possibly reflecting a crucial characteristic of mature theory of mind: using a metarepresentational format for ascribed beliefs.

Twelve-month-olds disambiguate new words using mutual-exclusivity inferences

Representing objects in terms of their kinds enables inferences based on the long-term knowledge made available through kind concepts. For example, children readily use lexical knowledge linked to familiar kind concepts to disambiguate new words (e.g., “find the toma”): they exclude members of familiar kinds falling under familiar kind labels (e.g., a ball) as potential referents and link new labels to available unfamiliar objects (e.g., a funnel), a phenomenon dubbed ‘mutual exclusivity’. Younger infants’ failure in mutual exclusivity tasks has been commonly interpreted as a limitation of early word-learning or inferential abilities. Here, we investigated an alternative explanation, according to which infants do not spontaneously represent familiar objects under kind concepts, hence lacking access to the information necessary for rejecting them as referents of novel labels. Building on findings about conceptual development and communication, we hypothesized that nonverbal communication could prompt infants to set up kind-based representations which, in turn, would promote mutual exclusivity inferences. This hypothesis was tested in a looking-while-listening task involving novel word disambiguation. Twelve-month-olds saw pairs of objects, one familiar and one unfamiliar, and heard familiar kind labels or novel words. Across two experiments providing a cross-lab replication in two different languages, infants successfully disambiguated novel words when the familiar object had been pointed at before labeling, but not when it had been highlighted in a non-communicative manner (Experiment 1) or not highlighted at all (Experiment 2). Nonverbal communication induced infants to recruit kind-based representations of familiar objects that they failed to recruit in its absence and that, once activated, supported mutual-exclusivity inferences.
Developmental changes in children’s appreciation of communicative contexts may modulate the expression of early inferential and word learning competences.

The effect of disagreement on children’s source memory performance

Source representations play a role both in the formation of individual beliefs and in the social transmission of such beliefs. Both of these functions suggest that source information should be particularly useful in the context of interpersonal disagreement. Three experiments with an identical design (one original study and two replications) with 3- to 4-year-old children (N = 100) assessed whether children’s source memory performance would improve in the face of disagreement and whether such an effect interacts with different types of sources (first- vs. second-hand). In a 2 x 2 repeated-measures design, children found out about the contents of a container either by looking inside or being told (IV1). Then they were questioned about the contents of the container by an interlocutor puppet who either agreed or disagreed with their answer (IV2). We measured children’s source memory performance in response to a free recall question (DV1) followed by a forced-choice question (DV2). Four-year-olds (but not three-year-olds) performed better in response to the free recall source memory question (but not the forced-choice question) when their interlocutor had disagreed with them compared to when it had agreed with them. Children were also better at recalling ‘having been told’ than ‘having seen’. These results demonstrate that by four years of age, source memory capacities are sensitive to the communicative context of assertions and serve social functions.

The effect of source claims on statement believability and speaker accountability

What is the effect of source claims (such as “I saw it” or “Somebody told me”) on the believability of statements, and what mechanisms are responsible for this effect? In this study, we tested the idea that source claims impact statement believability by modulating the extent to which a speaker is perceived to be committed to (and thereby accountable for) the truth of her assertion. Across three experiments, we presented participants with statements associated with different source claims, asked them to judge how much they believed the statements, and how much the speaker was responsible if the statement turned out to be false. We found that (1) statement believability predicted speaker accountability independently of a statement’s perceived prior likelihood or associated source claim; (2) being associated with a claim to first-hand (“I saw that…”) or second-hand (“Somebody told me that…”) evidence strengthened this association; (3) bare assertions about specific circumstances were commonly interpreted as claims to first-hand evidence; and (4) (everything else being equal) claims to first-hand evidence increased while claims to second-hand evidence decreased both statement believability and speaker accountability. These results support the idea that the believability of a statement is closely related to how committed to its truth the speaker is perceived to be and that source claims modulate the extent of this perceived commitment.

For 19-month-olds, what happens on-screen stays on-screen

Humans rely extensively on external representations such as drawings, maps, and animations. While animations are widely used in infancy research, little is known about how infants interpret them. In this study, we asked whether 19-month-olds take what they see on a screen to be happening here and now, or whether they think that on-screen events are decoupled from the immediate environment. In Experiments 1-3, we found that infants did not expect a falling animated ball to end up in boxes below the screen, even though they could track the ball (i) when the ball was real or (ii) when the boxes were also part of the animation. In Experiment 4, we tested whether infants think of screens as spatially bounded physical containers that do not allow objects to pass through. When two location cues were pitted against each other, infants individuated the protagonist of an animation by its virtual location (the animation to which it belonged), not by its physical location (the screen on which the animation was presented). Thus, 19-month-olds reject animation-reality crossovers but accept the depiction of the same animated environment on multiple screens. These results are consistent with the possibility that 19-month-olds interpret animations as external representations.

Computing joint action costs: Co-actors minimize the aggregate individual costs in an action sequence

Successful performance in cooperative activities relies on efficient task distribution between co-actors. Previous research found that people often forgo individual efficiency in favor of co-efficiency (i.e., joint-cost minimization) when planning a joint action. The present study investigated the cost computations underlying co-efficient decisions. We report a series of experiments that tested the hypothesis that people compute the joint costs of a cooperative action sequence by summing the individual action costs of their co-actor and themselves. We independently manipulated the parameters quantifying individual and joint action costs and tested their effects on decision-making by fitting and comparing Bayesian logistic regression models. Our hypothesis was confirmed: people weighed their own and their partner’s costs similarly to estimate the joint action costs as the sum of the two individual parameters. Participants minimized the aggregate cost to ensure co-efficiency. The results provide empirical support for behavioral economics and computational approaches that formalize cooperation as joint utility maximization based on a weighted sum of individual action costs.
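The summed-cost hypothesis tested above can be expressed compactly. In the sketch below, the function name, cost values, and softmax choice rule are illustrative assumptions, not the study's fitted Bayesian model; it shows how a weighted sum of individual costs yields co-efficient choices when the weights are equal:

```python
import math

def p_choose_a(own_a, partner_a, own_b, partner_b,
               w_own=1.0, w_partner=1.0, beta=1.0):
    """Probability of choosing option A under a softmax over joint costs.

    The joint cost of an option is a weighted sum of the two individual
    action costs; equal weights (w_own == w_partner) correspond to the
    aggregate-cost minimization described above."""
    joint_a = w_own * own_a + w_partner * partner_a
    joint_b = w_own * own_b + w_partner * partner_b
    # Softmax: the option with the lower joint cost is chosen more often.
    return math.exp(-beta * joint_a) / (
        math.exp(-beta * joint_a) + math.exp(-beta * joint_b))

# Option A is cheap for me (1) but costly for my partner (5); option B is
# costlier for me (3) but cheaper in aggregate (joint cost 5 vs. 6).
p_coefficient = p_choose_a(1.0, 5.0, 3.0, 2.0)              # equal weights
p_selfish = p_choose_a(1.0, 5.0, 3.0, 2.0, w_partner=0.0)   # ignores partner
```

With equal weights the agent prefers option B (the aggregate-cost minimizer), whereas a purely selfish weighting (w_partner = 0) flips the preference to option A.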

A two-lab direct replication attempt of Southgate, Senju, & Csibra (2007)

The study by Southgate, V., Senju, A., and Csibra, G. (Southgate et al., 2007) has been widely cited as evidence for young children’s ability to attribute false beliefs. Recent replication attempts of this paradigm have yielded mixed results: several studies were unable to replicate the original finding, raising doubts about the suitability of the paradigm to assess non-verbal action prediction and Theory of Mind. In a preregistered collaborative study including two of the original authors, we tested 160 24- to 26-month-olds across two locations following the original stimuli, procedure, and analyses as closely as possible. We found no evidence for action anticipation: only about half of the infants correctly anticipated the protagonist’s actions when action prediction did not require taking into account the agent’s beliefs. In addition, even those who appeared to anticipate failed to do so when a false belief was involved. These findings indicate that the paradigm of Southgate et al. (2007) cannot reliably elicit anticipatory action prediction and is unsuitable for testing false belief understanding in 2-year-olds.

Joint action planning: co-actors minimize the aggregate individual costs of actions

Successful cooperative activities rely on the efficient distribution of sub-tasks between co-actors. Previous research has found that people often forgo individual efficiency in favor of group-level efficiency (i.e., joint cost minimization) when planning a joint action. The present study investigated the cost computations underlying such "co-efficient" decisions: We tested the hypothesis that people compute the joint costs of a shared action sequence by summing the individual costs of their own and their co-actor's actions. We independently manipulated the parameters quantifying individual and joint action costs and tested their effects on decision-making. Participants weighed their own and their partner’s costs equally to estimate the joint action costs as the sum of the two individual parameters. The results provide empirical support for computational approaches that formalize cooperation as joint utility maximization based on a sum of individual action costs.

Ten-month-olds infer relative costs of different goal-directed actions

While it is straightforward to compare the costs of different variants of the same action (e.g., walking to a coffeeshop at the end of the block will always be less costly than walking to a coffeeshop three blocks away), the relative costs of different actions are not directly comparable (e.g., would it be easier to jump over or walk around a fence?). Across two experiments we demonstrate that 10-month-old infants spontaneously encode the manner of different goal-directed actions (jumping over an obstacle vs. detouring around it, Experiment 1) and use the principle of cost-efficiency to infer their relative costs (jumping is less costly to bypass low walls, but detouring is less costly to bypass high walls, Experiment 2). By relating action choices to the physical parameters of the environment, infants identify the least costly actions given the circumstances, which allows them to make behavioral predictions in new environments and may also enable them to infer others’ motor competence.

For 19-month-olds, what happens on the screen stays on the screen

Fictional entities in animations and puppet shows are widely used in infancy research, and there is plenty of evidence suggesting that infants are able to make inferences about them (e.g., ascribing agency to self-propelled 2-D figures). In the present set of experiments, we asked whether 19-month-olds take what they see on the screen to be happening in the here and now, or whether they think that on-screen events are spatiotemporally decoupled from the immediate environment. We found that infants do not expect an animated ball falling on a screen to end up in real boxes below the screen, even though they can track the ball (i) when the ball is real, and (ii) when the boxes are also part of the animation. These findings indicate that infants separate animations from the surrounding environment and cast doubt on the assumption that infants are naïve realists about iconic representations.

Do infants think that agents choose what’s best?

The naïve utility calculus theory of early social cognition argues that by relating an agent’s incurred effort to the expected value of a goal state, young children and infants can reason about observed behaviors. Here we report a series of experiments that tested the scope of such utility-based reasoning applied to choice situations in the first year of life. We found that 10-month-olds (1) did not expect an agent to prefer a higher quantity of goal objects, given equal action cost (Experiment 1) and (2) did not expect an agent to prefer a goal item that can be reached at lower cost, given equal rewards (Experiments 2a and 2b). Our results thus suggest that young infants’ utility calculus for action understanding may be more limited than previously thought in situations where an agent faces a choice between outcome options.

Witnessing, remembering and testifying: Why the past is special for human beings

The past is undeniably special for human beings. To a large extent, both individuals and collectives define themselves through history. Moreover, humans seem to have a special way of cognitively representing the past: episodic memory. As opposed to other ways of representing knowledge, remembering the past in episodic memory brings with it the ability to become a witness. Episodic memory allows us to determine what of our knowledge about the past comes from our own experience and thereby what parts of the past we can give testimony about. In this article, we aim to give an account of the special status of the past by asking why humans have developed the ability to give testimony about it. We argue that the past is special for human beings because it is regularly, and often principally, the only thing that can determine present social realities like commitments, entitlements, and obligations. Since the social effects of the past often do not leave physical traces behind, remembering the past, and the ability to bear testimony that it brings, are necessary in order to coordinate social realities with other individuals.

Giving, but not taking, actions are spontaneously represented as social interactions: Evidence from modulation of lower alpha oscillations

Unlike taking, which can be redescribed in non-social and object-directed terms, acts of giving are invariably expressed across languages in a three-argument structure relating agent, patient, and object. Developmental evidence suggests this difference in the syntactic entailment of the patient role to be rooted in a prelinguistic understanding of giving as a patient-directed, hence obligatorily social, action. We hypothesized that minimal cues of possession transfer, known to induce this interpretation in preverbal infants, should similarly encourage adults to perceive the patient of giving, but not taking, actions as an integral participant of the observed event, even without cues of overt involvement in the transfer. To test this hypothesis, we measured a known electrophysiological correlate of action understanding (the suppression of alpha-band oscillations) during the observation of giving and taking events, under the assumption that the functional grouping of agent and patient should induce greater suppression than the representation of individual object-directed actions. As predicted, the observation of giving produced stronger lower alpha suppression than superficially similar acts of object disposal, whereas no difference emerged between taking from an animate patient or an inanimate target. These results suggest that the participants spontaneously represented giving, but not kinematically identical taking actions, as social interactions, and crucially restricted this interpretation to transfer events featuring animate patients. This evidence gives empirical traction to the idea that such asymmetry, rather than being an interpretive propensity circumscribed to the first year of life, is attributable to an ontogenetically stable system dedicated to the efficient identification of interactions based on active transfer.

Do 15-month-old infants prefer helpers? A replication of Hamlin et al. (2007)

Hamlin et al. found in 2007 that preverbal infants displayed a preference for helpers over hinderers. The robustness of this finding and the conditions under which infant sociomoral evaluation can be elicited have since been debated. Here, we conducted a replication of the original study, in which we tested 14- to 16-month-olds using a familiarization procedure with 3D-animated video stimuli. Unlike previous replication attempts, ours uniquely benefitted from detailed procedural advice by Hamlin. In contrast to the original results, only 16 out of 32 infants (50%) in our study reached for the helper; thus, we were not able to replicate the findings. A possible reason for this failure is that infants’ preference for prosocial agents may not be reliably elicited with the procedure and stimuli adopted. Alternatively, the effect size of infants’ preference may be smaller than originally estimated. The study addresses ongoing methodological debates on the replicability of influential findings in infant cognition.

Electrophysiological investigation of infants’ understanding of understanding

Social cognition might play a critical role in language acquisition and comprehension, as mindreading may be necessary to infer the intended meaning of linguistic expressions uttered by communicative partners. In three electrophysiological experiments, we explored the interplay between belief attribution and language comprehension of 14-month-old infants. First, we replicated our earlier finding: infants produced an N400 effect to correctly labelled objects when the labels did not match a communicative partner’s beliefs about the referents. Second, we observed no N400 when we replaced the object with another category member. Third, when we named the objects incorrectly for infants, but congruently with the partner’s false belief, we observed large N400 responses, suggesting that infants retained their own perspective in addition to that of the partner. We thus interpret the observed social N400 effect as a communicational expectancy indicator because it was contingent not on the attribution of false beliefs but on semantic expectations by both the self and the communicative partner. Additional exploratory analyses revealed an early, frontal, positive-going electrophysiological response in all three experiments, which was contingent on infants’ computing the comprehension of the social partner based on attributed beliefs.

Fourteen-month-old infants track the language comprehension of communicative partners

Infants employ sophisticated mechanisms to acquire their first language, including some that rely on taking the perspective of adults as speakers or listeners. When do infants first show awareness of what other people understand? We tested 14-month-old infants in two experiments measuring event-related potentials. In Experiment 1, we established that infants produce the N400 effect, a brain signature of semantic violations, in a live object naming paradigm in the presence of an adult observer. In Experiment 2, we induced false beliefs about the labelled objects in the adult observer to test whether infants keep track of the other person’s comprehension. The results revealed that infants reacted to the semantic incongruity heard by the other as if they encountered it themselves: they exhibited an N400-like response, even though labels were congruous from their perspective. This finding demonstrates that infants track the linguistic understanding of social partners.

Rationality in joint action: Maximizing co-efficiency in coordination

When people perform simple actions, they often behave efficiently, minimizing the costs of movement for the expected benefit. The present study addressed the question whether this efficiency scales up to dyads working together to achieve a shared goal: do people act efficiently as a group, or do they minimize their own or their partner’s individual costs even if this increases the overall cost for the group? We devised a novel, touchscreen-based, sequential object transfer task to measure how people choose between different paths to coordinate with a partner. Across multiple experiments, we found that participants did not simply minimize their own or their partner’s movement costs but made co-efficient decisions about paths, which ensured that the aggregate costs of movement for the dyad were minimized. These results suggest that people are able and motivated to make co-efficient, collectively rational decisions when acting together.

Minimal cues of possession transfer compel infants to ascribe the goal of giving

Human infants’ readiness to interpret impoverished object-transfer events as acts of giving suggests the existence of a dedicated action schema for identifying interactions based on active object transfer. Here we investigated the sensitivity of this giving schema by testing whether 15-month-olds would interpret the displacement of an object as an agent’s goal even if it could be dismissed as a side effect of a different goal. Across two looking-time experiments, we showed that, when the displacement only resulted in a change of object location, infants expected the agent to pursue the other goal. However, when the same change of location resulted in a transfer of object possession, infants reliably adopted this outcome as the agent’s goal. The interpretive shift that the mere presence of a potential recipient induced is testament to the infants’ susceptibility to cues of benefit delivery: an action efficiently causing a transfer of object possession appeared sufficient to induce the interpretation of goal-directed giving even if the transfer was carried out without any interaction between Giver and Givee and was embedded in an event affording an alternative goal interpretation.

Why do we remember? The communicative function of episodic memory

Episodic memory has been analyzed in a number of different ways in both philosophy and psychology, and most controversy has centered on its self-referential, ‘autonoetic’ character. Here, we offer a comprehensive characterization of episodic memory in representational terms, and propose a novel functional account on this basis. We argue that episodic memory should be understood as a distinctive epistemic attitude taken towards an event simulation. On this view, episodic memory has a metarepresentational format and should not be equated with beliefs about the past. Instead, empirical findings suggest that the contents of human episodic memory are often constructed in the service of the explicit justification of such beliefs. Existing accounts of episodic memory function that have focused on explaining its constructive character through its role in ‘future-oriented mental time travel’ do justice neither to its capacity to ground veridical beliefs about the past nor to its representational format. We provide an account of the metarepresentational structure of episodic memory in terms of its role in communicative interaction. The generative nature of recollection allows us to represent and communicate the reasons for why we hold certain beliefs about the past. In this process, autonoesis corresponds to the capacity to determine when and how to assert epistemic authority in making claims about the past. One domain where such claims are indispensable is that of human social engagements. Such engagements commonly require the justification of entitlements and obligations, which is often possible only by explicit reference to specific past events.

Longitudinal development of attention and inhibitory control during the first year of life

Executive functions (EFs) are key abilities that allow us to control our thoughts and actions. Research suggests that two EFs, inhibitory control (IC) and working memory (WM), emerge around 9 months. Little is known about IC earlier in infancy and whether basic attentional processes form the “building blocks” of emerging IC. These questions were investigated longitudinally in 104 infants tested behaviorally on two screen-based attention tasks at 4 months, and on IC tasks at 6 and 9 months. Results provided no evidence that basic attention formed precursors for IC. However, there was full support for coherence in IC at 9 months and partial support for stability in IC from 6 months. This suggests that IC emerges earlier than previously assumed.

Retrospective attribution of false beliefs in 3-year-old children.

A current debate in psychology and cognitive science concerns the nature of young children’s ability to attribute and track others’ beliefs. Beliefs can be attributed in at least two different ways: prospectively, during the observation of belief-inducing situations, and in a retrospective manner, based on episodic retrieval of the details of the events that brought about the beliefs. We developed a task in which only retrospective attribution, but not prospective belief tracking, would allow children to correctly infer that someone had a false belief. Eighteen- and 36-month-old children observed a displacement event, which was witnessed by a person wearing sunglasses (Experiment 1). Having later discovered that the sunglasses were opaque, 36-month-olds correctly inferred that the person must have formed a false belief about the location of the objects and used this inference in resolving her referential expressions. They successfully performed retrospective revision in the opposite direction as well, correcting a mistakenly attributed false belief when this was necessary (Experiment 3). Thus, children can compute beliefs retrospectively, based on episodic memories, well before they pass explicit false-belief tasks. Eighteen-month-olds failed in such a task, suggesting that they cannot retrospectively attribute beliefs or revise their initial belief attributions. However, an additional experiment provided evidence for prospective tracking of false beliefs in 18-month-olds (Experiment 2). Beyond identifying two different modes for tracking and updating others’ mental states early in development, these results also provide clear evidence of episodic memory retrieval in young children.

Motor activation during action perception depends on action interpretation

Since the discovery of motor mirroring, the involvement of the motor system in action interpretation has been widely discussed. While some theories proposed that motor mirroring underlies human action understanding, others suggested that it is a corollary of action interpretation. We put these two accounts to the test by employing superficially similar actions that invite radically different interpretations of the underlying intentions. Using an action-observation task, we assessed motor activation (as indexed by the suppression of the EEG mu rhythm) in response to actions typically interpreted as instrumental (e.g., grasping) or referential (e.g., pointing) towards an object. Only the observation of instrumental actions resulted in enhanced mu suppression. In addition, the exposure to grasping actions failed to elicit mu suppression when they were preceded by speech, suggesting that the presence of communicative signals modulated the interpretation of the observed actions. These results suggest that the involvement of sensorimotor cortices during action processing is conditional on a particular (instrumental) action interpretation, and that action interpretation relies on inferential processes and top-down mechanisms that are implemented outside of the motor system.

Statistical treatment of looking-time data

Looking times (LTs) are frequently measured in empirical research on infant cognition. We analyzed the statistical distribution of LTs across participants in order to develop recommendations for their treatment in infancy research. Our analyses focused on a common within-subject experimental design, in which longer looking to novel or unexpected stimuli is predicted. We analyzed data from two sources: an in-house set of LTs that included data from individual participants (47 experiments, 1584 observations), and a representative set of published papers reporting group-level LT statistics (149 experiments from 33 papers). We established that LTs are log-normally distributed across participants, and therefore should always be log-transformed before parametric statistical analyses. We estimated the typical size of significant effects in LT studies, which allowed us to make recommendations about setting sample sizes. We show how our estimate of the distribution of effect sizes of LT studies can be used to design experiments to be analyzed by Bayesian statistics, where the experimenter is required to determine in advance the predicted effect size rather than the sample size. We demonstrate the robustness of this method in both sets of LT experiments.
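The core recommendation above — log-transform looking times before running parametric tests, because LTs are log-normally distributed across participants — can be sketched in a few lines. The data below are illustrative values invented for the example (not from the paper), and the t statistic is computed by hand in pure Python so the sketch stays self-contained:

```python
import math

# Hypothetical looking times (seconds) from a within-subject design:
# each infant contributes one LT per condition. Illustrative values only.
expected   = [4.2, 6.1, 3.8, 9.5, 5.0, 7.3, 4.9, 6.6]
unexpected = [6.0, 8.4, 5.1, 14.2, 6.8, 9.9, 6.2, 9.1]

def log_transform(lts):
    """Log-transform LTs, as recommended for log-normally
    distributed looking-time data."""
    return [math.log(x) for x in lts]

def paired_t(a, b):
    """Paired t statistic on the per-infant differences b - a."""
    d = [y - x for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

t_raw = paired_t(expected, unexpected)
t_log = paired_t(log_transform(expected), log_transform(unexpected))
print(f"t (raw LTs): {t_raw:.2f}")
print(f"t (log LTs): {t_log:.2f}")
```

In practice one would use a library routine (e.g. a paired t-test from a statistics package) on the log-transformed values; the point of the sketch is only that the transform precedes the parametric test.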

An object memory bias induced by communicative reference

In humans, a good proportion of knowledge, including knowledge about objects and object kinds, is acquired via social learning by direct communication from others. If communicative signals raise the expectation of social learning about objects, intrinsic (permanent) features that support object recognition are relevant to store into memory, while extrinsic (accidental) object properties can be ignored. We investigated this hypothesis by instructing participants to memorise shape-colour associations that constituted either an extrinsic object property (the colour of the box that contained the object, Experiment 1) or an intrinsic one (the colour of the object, Experiment 2). Compared to a non-communicative context, communicative presentation of the objects impaired participants’ performance when they recalled extrinsic object properties, while their incidental memory of the intrinsic shape-colour associations was not affected. Communicative signals had no effect on performance when the task required the memorisation of intrinsic object properties. The negative effect of communicative reference on the memory of extrinsic properties was also confirmed in Experiment 3, where this property was object location. Such a memory bias suggests that referent objects in communication tend to be seen as representatives of their kind rather than as individuals.

Seeing behind the surface: Communicative demonstration boosts category disambiguation in 12-month-olds

In their first years, infants acquire an incredible amount of information regarding the objects present in their environment. While often it is not clear what specific information should be prioritized in encoding from the many characteristics of an object, different types of object representations facilitate different types of generalizations. We tested the hypotheses that one-year-old infants distinctively represent familiar objects as exemplars of their kind, and that ostensive communication plays a role in determining kind membership for ambiguous objects. In the training phase of our experiment, infants were exposed to movies displaying an agent sorting objects from two categories (cups and plates) into two locations (left or right). Afterwards, different groups of infants saw either an ostensive or a non-ostensive demonstration performed by the agent revealing that a new object that looked like a plate can be transformed into a cup. A third group of infants experienced no demonstration regarding the new object. During test, infants were presented with the ambiguous object in the plate format, and we measured generalization by coding anticipatory looks to the plate or the cup side. While infants looked equally often towards the two sides when the demonstration was non-ostensive, and more often to the plate side when there was no demonstration, they performed more anticipatory eye movements to the cup side when the demonstration was ostensive. Thus, ostensive demonstration likely highlighted the hidden dispositional properties of the target object as kind-relevant, guiding infants’ categorization of the foldable cup as a cup, even though it looked like a plate. These results suggest that infants likely encode familiar objects as exemplars of their kind and that ostensive communication can play a crucial role in disambiguating what kind an object belongs to, even when this requires disregarding salient surface features.

Predictive action tracking without motor experience in 8-month-old infants

A popular idea in cognitive neuroscience is that to predict others’ actions, observers need to map those actions onto their own motor repertoire. If this is true, infants with a relatively limited motor repertoire should be unable to predict actions with which they have no previous motor experience. We investigated this idea by presenting pre-walking infants with videos of upright and inverted stepping actions that were briefly occluded from view, followed by either a correct (time-coherent) or an incorrect (time-incoherent) continuation of the action (Experiment 1). Pre-walking infants looked significantly longer to the still frame after the incorrect compared to the correct continuations of the upright, but not the inverted stepping actions. This demonstrates that motor experience is not necessary for predictive tracking of action kinematics. In a follow-up study (Experiment 2), we investigated sensorimotor cortex activation as a neural indication of predictive action tracking in another group of pre-walking infants. Infants showed significantly more sensorimotor cortex activation during the occlusion of the upright stepping actions that the infants in Experiment 1 could predictively track, than during the occlusion of the inverted stepping actions that the infants in Experiment 1 could not predictively track. Taken together, these findings are inconsistent with the idea that motor experience is necessary for the predictive tracking of action kinematics, and suggest that infants may be able to use their extensive experience with observing others’ actions to generate real-time action predictions.

Nonverbal generics: Human infants interpret objects as symbols of object kinds

Human infants are involved in communicative interactions with others well before they start to speak or understand language. It is generally thought that this communication is useful for establishing interpersonal relations and supporting joint activities, but, in the absence of the symbolic functions that language provides, these early communicative contexts do not allow infants to learn about the world. However, recent studies suggest that when someone demonstrates something using an object as the medium of instruction, infants can conceive the object as an exemplar of the whole class of objects of the same kind. Thus, an object, just like a word, can play the role of a symbol that stands for something other than itself, and infants can learn general knowledge about a kind of object from non-verbal communication about a single item of that kind. This rudimentary symbolic capacity may be one of the roots of the development of symbolic understanding in children.

Learning in and about opaque worlds

We argue that direct active teaching in humans exhibits at least two properties (open-endedness and content opacity) that make the recognition of teaching episodes without ostension untenable. Thus, while we welcome Kline’s functional approach to the analysis of teaching, we think that she ignores important features of the socio-environmental niche in which human teaching likely evolved.

Infants learn enduring functions of novel tools from action demonstrations

According to recent theoretical proposals, one function of infant goal attribution is to support early social learning of artifact functions from instrumental actions, and one function of infant sensitivity to communication is to support early acquisition of generic knowledge about enduring, kind-relevant properties of the referents. The present study tested two hypotheses, derived from these proposals, about the conditions that facilitate the acquisition of enduring functions for novel tools in human infancy. Using a violation-of-expectation paradigm, we show that 13.5-month-old infants encode arbitrary end-states of action-sequences in relation to the novel tools employed to bring them about. These mappings are not formed if the same end states of action sequences cannot be interpreted as action goals. Moreover, the tool-goal mappings acquired from infant-directed communicative demonstrations are more resilient to counter-evidence than those acquired from non-infant-directed presentations, and thus show similarities to generic rather than episodic representations. These findings suggest that the acquisition of tool functions in infancy is guided by both teleological action interpretation mechanisms and the expectation that communicative demonstrations reveal enduring dispositional properties of tools.

Giving and taking: Representational building blocks of active resource-transfer events in human infants

Active resource transfer is a pervasive and distinctive feature of human sociality. We hypothesized that humans possess an action schema of GIVING specific for representing social interactions based on material exchange, and specified the set of necessary assumptions about giving events that this action schema should be equipped with. We tested this proposal by investigating how 12-month-old infants interpret abstract resource-transfer events. Across eight looking-time studies using a violation-of-expectation paradigm we found that infants were able to distinguish between kinematically identical giving and taking actions. Despite the surface similarity between these two actions, only giving was represented as an object-mediated social interaction. While we found no evidence that infants expected the target of a giving or taking action to reciprocate, the present results suggest that infants interpret giving as an inherently social action, which they can possibly use to map social relations via observing resource-transfer episodes.

Toddlers favor communicatively presented information over statistical reliability in learning about artifacts

Observed associations between events can be validated by statistical information of reliability or by testament of communicative sources. We tested whether toddlers learn from their own observation of efficiency, assessed by statistical information on reliability of interventions, or from communicatively presented demonstration, when these two potential types of evidence of validity of interventions on a novel artifact are contrasted with each other. Eighteen-month-old infants observed two adults, one operating the artifact by a method that was more efficient (2/3 probability of success) than that of the other (1/3 probability of success). Compared to the Baseline condition, in which communicative signals were not employed, infants tended to choose the less reliable method to operate the artifact when this method was demonstrated in a communicative manner in the Experimental condition. This finding demonstrates that, in certain circumstances, communicative sanctioning of reliability may override statistical evidence for young learners. Such a bias can serve fast and efficient transmission of knowledge between generations.

Are you talking to me? Neural activations in 6-month-old infants in response to being addressed during natural interactions

Human interactions are guided by continuous communication among the parties involved, in which verbal communication plays a primary role. However, speech does not necessarily reveal to whom it is addressed, especially for young infants who are unable to decode its semantic content. To overcome such difficulty, adults often explicitly mark their communication as infant-directed. In the present study we investigated whether ostensive signals, which would disambiguate the infant as the addressee of a communicative act, would modulate the brain responses of 6-month-old infants to speech and gestures in an ecologically valid setting. In Experiment 1, we tested whether the gaze direction of the speaker modulates cortical responses to infant-directed speech. To provide a naturalistic environment, two infants and their parents participated at the same time. In Experiment 2, we tested whether a similar modulation of the cortical response would be obtained by varying the intonation (infant- versus adult-directed speech) of the speech during face-to-face communication, one on one. The results of both experiments indicated that only the combination of ostensive signals (infant-directed speech and direct gaze) led to enhanced brain activation. This effect was indicated by responses localized in regions known to be involved in processing auditory and visual aspects of social communication. This study also demonstrated the potential of fNIRS as a tool for studying neural responses in naturalistic scenarios, and for simultaneous measurement of brain function in multiple participants.

Concept-based word learning in human infants

It is debated whether infants initially learn object labels by mapping them onto similarity-defining perceptual features or onto concepts of object kinds. We addressed this question by attempting to teach infants words for behaviorally defined action roles. In a series of experiments, we found that 14-month-olds could rapidly learn a label for the role the chaser plays in a chasing scenario, even when the different instances of chasers did not share perceptual features. Furthermore, when infants could choose, they preferred to interpret a novel label as expressing the actor’s role within the observed interaction rather than as being associated with the actor’s appearance. These results demonstrate that infants can learn labels for concepts identified by abstract behavioral characteristics as easily as, or even more easily than, for concepts identified by perceptual features. Thus, already at early stages of word learning, infants expect that novel words express concepts.

Probing the strength of infants' preference for helpers over hinderers: Two replication attempts of Hamlin and Wynn (2011)

Several studies indicate that infants prefer individuals who act prosocially over those who act antisocially toward unrelated third parties. In the present study, we focused on a paradigm published by Kiley Hamlin and Karen Wynn in 2011. In this study, infants were habituated to a live puppet show in which a protagonist tried to open a box to retrieve a toy placed inside. The protagonist was either helped by a second puppet (the “Helper”), or hindered by a third puppet (the “Hinderer”). At test, infants were presented with the Helper and the Hinderer, and encouraged to reach for one of them. In the original study, 75% of 9-month-olds selected the Helper, arguably demonstrating a preference for prosocial over antisocial individuals. We conducted two studies with the aim of replicating this result. Each attempt was performed by a different group of experimenters. Study 1 followed the methods of the published study as faithfully as possible. Study 2 introduced slight modifications to the stimuli and the procedure following the guidelines generously provided by Kiley Hamlin and her collaborators. Yet, in our replication attempts, 9-month-olds’ preference for helpers over hinderers did not differ significantly from chance (62.5% and 50%, respectively, in Studies 1 and 2). Two types of factors could explain why our results differed from those of Hamlin and Wynn: minor methodological dissimilarities (in procedure, materials, or the population tested), or the effect size being smaller than originally assumed. We conclude that fine methodological details that are crucial to infants’ success in this task need to be identified to ensure the replicability of the original result.

Neural signatures for sustaining object representations attributed to others in preverbal human infants

A major feat of social beings is to encode what their conspecifics see, know or believe. While various nonhuman animals show precursors of these abilities, humans perform uniquely sophisticated inferences about other people’s mental states. However, it is still unclear how these possibly human-specific capacities develop and whether preverbal infants, similarly to adults, form representations of other agents’ mental states, specifically metarepresentations. We explored the neuro-cognitive bases of 8-month-olds’ ability to encode the world from another person’s perspective, using gamma-band EEG activity over the temporal lobes, an established neural signature for sustained object representation after occlusion. We observed such gamma-band activity when an object was occluded from the infants’ perspective, as well as when it was occluded only from the other person (Study 1), and also when subsequently the object disappeared but the person falsely believed the object to be present (Study 2). These findings suggest that the cognitive systems involved in representing the world from infants’ own perspective are also recruited for encoding others’ beliefs. Such results point to an early developing, powerful apparatus suitable to deal with multiple concurrent representations; and suggest that infants can have a metarepresentational understanding of other minds even before the onset of language.

Pointing as epistemic request: 12-month-olds point to receive new information

Infants start pointing systematically to objects or events around their first birthday. It has been proposed that infants point to an event in order to share their appreciation of it with others. In the current study, we tested another hypothesis, according to which infants' pointing could also serve as an epistemic request directed to the adult. Thus, infants' motivation for pointing could include the expectation that adults would provide new information about the referent. In two experiments, an adult reacted to 12-month-olds’ pointing gestures by exhibiting 'informing' or 'sharing' behavior. In response, infants pointed more frequently across trials in the informing than in the sharing condition. This suggests that the feedback that contained new information matched infants' expectations more than mere attention sharing. Such a result is consistent with the idea that not just the comprehension but also the production of early communicative signals is tuned to assist infants' learning from others.

Are all beliefs equal? Implicit belief attributions recruiting core brain regions of Theory of Mind.

Humans possess efficient mechanisms to behave adaptively in social contexts. They ascribe goals and beliefs to others and use these for behavioural predictions. Researchers have argued for two separate mental attribution systems: an implicit and automatic one involved in online interactions, and an explicit one mainly used in offline deliberations. However, the underlying mechanisms of these systems and the types of beliefs represented in the implicit system are still unclear. Using neuroimaging methods, we show that the right temporo-parietal junction and the medial prefrontal cortex, brain regions consistently found to be involved in explicit mental state reasoning, are also recruited by spontaneous belief tracking. While the medial prefrontal cortex was more active when both the participant and another agent believed an object to be at a specific location, the right temporo-parietal junction was selectively activated while tracking the false beliefs of another agent about the presence, but not the absence, of objects. While humans can explicitly attribute to a conspecific any possible belief they themselves can entertain, implicit belief tracking seems to be restricted to beliefs with specific contents, a content selectivity that may reflect a crucial functional characteristic and signature property of implicit belief attribution.

Gergely G, Csibra G. Natural pedagogy. In: Banaji MR, Gelman SA, editors. Navigating the Social World: What Infants, Children, and Other Species Can Teach Us. Oxford University Press; 2013. p. 127-32.

Natural pedagogy

This chapter proposes that the mechanism of natural pedagogy is ostensive communication, which incorporates evolved interpretive biases that allow and foster the transmission of generic and culturally shared knowledge to others. Such communication is not necessarily linguistic but always referential. There is extensive evidence that infants and children are especially sensitive to being communicatively addressed by adults, and that even newborns attend to and show preference for ostensive signals, such as eye contact, infant-directed speech, or infant-induced contingent reactivity. Such ostensive cues generate referential expectations in infants, triggering a tendency to gaze-follow the other's subsequent orientation responses (such as gaze-shifts) to their referential target, which may contribute to learning about referential signals such as deictic gestures and words. The chapter also addresses some of the most frequently asked questions about natural pedagogy in order to resolve some typical misunderstandings about what is and what is not claimed by the theory.

Csibra G, Gergely G. Teleological understanding of actions. In: Banaji MR, Gelman SA, editors. Navigating the Social World: What Infants, Children, and Other Species Can Teach Us. Oxford University Press; 2013. p. 38-43.

Teleological understanding of actions

An observed behavior is interpreted as an action directed to a particular end state if it is judged to be the most efficient means available to the agent for achieving this goal in the given environment. When such an interpretation is established, it creates a teleological representation of the action, which is held together by the principle of efficiency. The paradigmatic situation in which the functioning of teleological interpretation can be tested is when one observes a behavior (e.g., an agent jumps into the air while moving in a certain direction) leading to an end state (e.g., the agent stops next to another object). If, and only if, the behavior (jumping) is justified by environmental factors (by the presence of a barrier over which the jumping occurs) will this behavior be interpreted as a means action to achieve the end state as the goal of the action (to get in contact with the other object). Researchers have published extensive evidence that infants from at least six months of age form this kind of teleological representations of actions. This chapter attempts to clarify commonly raised issues about this theory in a question-and-answer format.

Electrophysiological evidence for the understanding of maternal speech by 9-month-old infants

Early word learning in infants relies on statistical, prosodic, and social cues that support speech segmentation and the attachment of meaning to words. It is debated whether such early word knowledge represents mere associations between sound patterns and visual object features, or reflects referential understanding of words. By using event-related brain potentials, we demonstrate that 9-month-old infants detect the mismatch between an object appearing from behind an occluder and a preceding label with which their mother introduces it. The N400 effect has been shown to reflect semantic priming in adults, and its absence in infants has been interpreted as a sign of associative word learning. By setting up a live communicative situation for referring to objects, we demonstrate that a similar priming effect also occurs in young infants. This finding may indicate that word meaning is referential from the outset, and it drives, rather than results from, vocabulary acquisition in humans.

Representation of stable social dominance relations by human infants

What are the origins of humans’ capacity to represent social relations? We approached this question by studying human infants’ understanding of social dominance as a stable relation. We presented infants with interactions between animated agents in conflict situations. Studies 1 and 2 targeted expectations of stability of social dominance. They revealed that 15-month-olds (and to a lesser extent 12-month-olds) expect an asymmetric relationship between two agents to remain stable from one conflict to another. To do so, infants need to infer that one of the agents (the dominant) will consistently prevail when her goals conflict with those of the other (the subordinate). Studies 3 and 4 targeted the format of infants’ representation of social dominance. In these studies, we found that 12- and 15-month-olds did not extend their expectations of dominance to unobserved relationships, even when they could have been established by transitive inference. This suggests that infants' expectation of stability originates from their representation of social dominance as a relationship between two agents rather than as an individual property. Infants’ demonstrated understanding of social dominance reflects the cognitive underpinning of humans’ capacity to represent social relations, which may be evolutionarily ancient, and may be shared with non-human species.

Near-infrared spectroscopy: A report from the McDonnell Infant Methodology Consortium

Near-infrared spectroscopy (NIRS) is a new and increasingly widespread brain imaging technique, particularly suitable for young infants. The laboratories of the McDonnell Consortium have contributed to the technological development and research applications of this technique for nearly a decade. The present paper provides a general introduction to the technique as well as a detailed report of the methodological innovations developed by the Consortium. The basic principles of NIRS and some of the existing developmental studies are reviewed. Issues concerning technological improvements, parameter optimization, possible experimental designs and data analysis techniques are discussed and illustrated by novel empirical data.

Natural pedagogy as evolutionary adaptation

We propose that the cognitive mechanisms that enable the transmission of cultural knowledge by communication between individuals constitute a system of 'natural pedagogy' in humans, and represent an evolutionary adaptation along the hominin lineage. We discuss three kinds of arguments that support this hypothesis. First, natural pedagogy is likely to be human-specific: while social learning and communication are both widespread in non-human animals, we know of no example of social learning by communication in any other species apart from humans. Second, natural pedagogy is universal: despite the huge variability in child-rearing practices, all human cultures rely on communication to transmit to novices a variety of different types of cultural knowledge, including information about artefact kinds, conventional behaviours, arbitrary referential symbols, cognitively opaque skills, and know-how embedded in means-end actions. Third, the data available on early hominin technological culture are more compatible with the assumption that natural pedagogy was an independently selected adaptive cognitive system than with the view that it was a by-product of some other human-specific adaptation, such as language. By providing a qualitatively new type of social learning mechanism, natural pedagogy is not only the product but also one of the sources of the rich cultural heritage of our species.

Automated gaze-contingent objects elicit orientation following in 8-month-old infants

The current study tested whether the purely amodal cue of contingency elicits orientation following behaviour in 8-month-old infants. We presented 8-month-old infants with automated objects without human features that did or did not react contingently to the infants' fixations recorded by an eye-tracker. We found that an object's occasional orientation towards peripheral targets was reciprocated by a congruent visual orientation following response by infants only when it had displayed gaze-contingent interactivity. Our finding demonstrates that infants' gaze following behaviour does not depend on the presence of a human being. The results are consistent with the idea that the detection of contingent reactivity, like other communicative signals, can itself elicit the illusion of being addressed in 8-month-old infants.

Motor system activation reveals infants’ on-line prediction of others’ goals

Despite much research demonstrating infants’ abilities to attribute goals to others’ actions, it is unclear whether infants can generate on-line predictions about action outcomes, an ability crucial for the human propensity to cooperate and collaborate with others. This lack of evidence is mainly due to methodological limitations restricting the interpretation of behavioral data. Here, we exploited the fact that observers’ motor systems are recruited during the observation of goal-directed actions. We presented 9-month-old infants with part of an action. For this action to be interpreted as goal directed, the infants would need to predict an outcome for the action. Measuring the attenuation of the sensorimotor alpha signal during observation of action, we found that infants exhibited evidence of motor activation only if the observed action permitted them to infer a likely outcome. This result provides evidence for on-line goal prediction in infancy, and our method offers a new way to explore infants’ cognitive abilities.

Polymorphisms in dopamine system genes are associated with individual differences in attention in infancy

Knowledge about the functional status of the frontal cortex in infancy is limited. This study investigated the effects of polymorphisms in four dopamine system genes on performance in a task developed to assess such functioning, the Freeze-Frame task, at 9 months of age. Polymorphisms in the catechol-O-methyltransferase (COMT) and the dopamine D4 receptor (DRD4) genes are likely to impact directly on the functioning of the frontal cortex, whereas polymorphisms in the dopamine D2 receptor (DRD2) and dopamine transporter (DAT1) genes might influence frontal cortex functioning indirectly via strong frontostriatal connections. A significant effect of the COMT valine158methionine (Val158Met) polymorphism was found. Infants with the Met/Met genotype were significantly less distractible than infants with the Val/Val genotype in Freeze-Frame trials presenting an engaging central stimulus. In addition, there was an interaction with the DAT1 3′ variable number of tandem repeats polymorphism; the COMT effect was present only in infants who did not have two copies of the DAT1 10-repeat allele. These findings indicate that dopaminergic polymorphisms affect selective aspects of attention as early as infancy and further validate the Freeze-Frame task as a frontal cortex task.

Recognizing communicative intentions in infancy

I make three related proposals concerning the development of receptive communication in human infants. First, I propose that the presence of communicative intentions can be recognized in others' behaviour before the content of these intentions is accessed or inferred. Second, I claim that such recognition can be achieved by decoding specialized ostensive signals. Third, I argue on empirical bases that, by decoding ostensive signals, human infants are capable of recognizing communicative intentions addressed to them. Thus, learning about actual modes of communication benefits from, and is guided by, infants' preparedness to detect infant-directed ostensive communication.

Absence of spontaneous action anticipation by false belief attribution in children with autism spectrum disorder

Recently, a series of studies demonstrated false belief understanding in young children through completely nonverbal measures. These studies have revealed that children younger than 3 years of age, who consistently fail the standard verbal false belief test, can anticipate others’ actions based on their attributed false beliefs. The current study examined whether children with autism spectrum disorder (ASD), who are known to have difficulties in the verbal false belief test, may also show such action anticipation in a nonverbal false belief test. We presented video stimuli of an actor watching an object being hidden in a box. The object was then displaced while the actor was looking away. We recorded children’s eye movements and coded whether they spontaneously anticipated the actor’s subsequent behavior, which could only have been predicted if they had attributed a false belief to her. Although typically developing children correctly anticipated the action, children with ASD failed to show such action anticipation. The results suggest that children with ASD have an impairment in false belief attribution, which is independent of their verbal ability.

Verbal labels modulate perceptual object processing in one-year-old children

It has been debated whether acquiring verbal labels helps infants' visual processing and categorization of objects. Using electroencephalography, we investigated whether possessing or learning verbal labels for objects directly enhances one-year-old infants' neural processes underlying the perception of those objects. We found enhanced gamma-band (20 to 60 Hz) oscillatory activity over the visual cortex in response to seeing objects whose names one-year-old infants knew (Experiment 1), or for which they had just been taught a label (Experiment 2). No such effect was observed for objects with which the infants were simply familiar without having a label for them. These results demonstrate that learning verbal labels modulates how the visual system processes the images of the associated objects, and suggest a possible route of top-down influence of semantic knowledge on object perception.

Communicative function demonstration induces kind-based artifact representation in preverbal infants

Human infants grow up in environments populated by artifacts. In order to acquire knowledge about different kinds of human-made objects, children have to be able to focus on the information that is most relevant for sorting artifacts into categories. Traditional theories emphasize the role of superficial, perceptual features in object categorization. In the case of artifacts, however, it is possible that abstract, non-obvious properties, like functions, may form the basis of artifact kind representations from an early age. Using an object individuation paradigm we addressed the question whether non-verbal communicative demonstration of the functional use of artifacts makes young infants represent such objects in terms of their kinds. When two different functions were sequentially demonstrated on two novel objects as they emerged one-by-one from behind a screen, 10-month-old infants inferred the presence of two objects behind the occluder. We further show that both the presence of communicative signals and causal intervention are necessary for 10-month-olds to generate such a numerical expectation. We also found that communicative demonstration of two different functions of a single artifact generated the illusion of the presence of two objects. This suggests that information on artifact function was used as an indicator of kind membership, and infants expected one specific function to define one specific artifact kind. Thus, contrary to previous accounts, preverbal infants' specific sensitivity to object function underlies, guides, and supports their learning about artifacts.

Seventeen-month-olds appeal to false beliefs to interpret others' referential communication

Recent studies have demonstrated infants’ pragmatic abilities for resolving the referential ambiguity of non-verbal communicative gestures, and for inferring the intended meaning of a communicator's utterances. These abilities are difficult to reconcile with the view that it is not until around four years that children can reason about the internal mental states of others. In the current study, we tested whether 17-month-old infants are able to track the status of a communicator's epistemic state and use this to infer what she intends to refer to. Our results show that manipulating whether or not a communicator has a false belief leads infants to different interpretations of the same communicative act, and demonstrate early mental state attribution in a pragmatic context.

Southgate V, Gergely G, Csibra G. Does the mirror neuron system and its impairment explain human imitation and autism? In: Pineda JA, editor. Mirror Neuron Systems: The Role of Mirroring Processes in Social Cognition. Berlin: Springer; 2009. p. 331-54.

Does the mirror neuron system and its impairment explain human imitation and autism?

The proposal that the understanding and imitation of observed actions is made possible through the ‘mirror neuron system’ (Rizzolatti, Fogassi & Gallese, 2001) has led to much speculation that a dysfunctional mirror system may be at the root of the social deficits characteristic of autism (e.g. Ramachandran & Oberman, 2006). This chapter will critically examine the hypothesis that those with ASD may be in possession of a 'broken' mirror neuron system (MNS) and propose that the deficits seen in imitation in individuals with ASD reflect not a dysfunctional MNS, but a lack of sensitivity to those cues that would help them identify what to imitate. In doing this, we will also argue that imitation in typically developing children cannot be explained by appealing to a direct-matching mechanism, and that the process by which young children imitate involves a far more complex but effortless analysis of the communication of those from whom they learn.

Functional understanding facilitates learning about tools in human children

Human children benefit from a possibly unique set of adaptations facilitating the acquisition of knowledge about material culture. They represent artifacts (man-made objects) as tools with specific functions and seek functional information about novel objects. Even young infants pay attention to functionally relevant features of objects, and learn tool use and infer tool functions from others’ goal-directed actions and demonstrations. Children tend to imitate causally irrelevant elements of tool use demonstrations, which helps them to acquire means actions even before they fully understand their causal role in bringing about the desired goal. Although non-human animals use and make tools, and recognize causally relevant features of objects in a given task, they - unlike human children - do not appear to form enduring functional representations of tools as being for achieving particular goals when they are not in use.

Neural correlates of eye gaze processing in the infant broader autism phenotype

Background: Studies of infant siblings of children diagnosed with autism have allowed for a prospective approach to study the emergence of autism in infancy and revealed early behavioral characteristics of the broader autism phenotype. In view of previous findings of atypical eye gaze processing in children and adults with autism, the aim of this study was to examine the early autism phenotype in infant siblings of children diagnosed with autism spectrum disorder (sib-ASD), focusing on the neural correlates of direct compared with averted gaze. Methods: A group of 19 sib-ASD was compared with 17 control infants with no family history of ASD (mean age = 10 months) on their response to direct versus averted gaze in static stimuli. Results: Relative to the control group, the sib-ASD group showed prolonged latency of the occipital P400 event-related potentials component in response to direct gaze, but they did not differ in earlier components. Similarly, time-frequency analysis of high-frequency oscillatory activity in the gamma band showed group differences in response to direct gaze, where induced gamma activity was late and less persistent over the right temporal region in the sib-ASD group. Conclusion: This study suggests that a broader autism phenotype, which includes an atypical response to direct gaze, is manifest early in infancy.

Visual orienting in the early broader autism phenotype: disengagement and facilitation

Recent studies of infant siblings of children diagnosed with autism have allowed for a prospective approach to examine the emergence of symptoms and revealed behavioral differences in the broader autism phenotype within the early years. In the current study we focused on a set of functions associated with visual attention, previously reported to be atypical in autism. We compared performance of a group of 9-10-month-old infant siblings of children with autism to a control group with no family history of autism on the 'gap-overlap task', which measures the cost of disengaging from a central stimulus in order to fixate a peripheral one. Two measures were derived on the basis of infants' saccadic reaction times. The first is the Disengagement effect, which measures the efficiency of disengaging from a central stimulus to orient to a peripheral one. The second is the Facilitation effect, which arises when the infant is cued by a temporal gap preceding the onset of the peripheral stimulus and consequently orients faster after its onset. Infant siblings of children with autism showed longer Disengagement latencies as well as less Facilitation relative to the control group. The findings are discussed in relation to how differences in visual attention may relate to characteristics observed in autism and the broader phenotype.

Natural pedagogy

We propose that human communication is specifically adapted to allow the transmission of generic knowledge between individuals. Such a communication system, which we call 'natural pedagogy', enables fast and efficient social learning of cognitively opaque cultural knowledge that would be hard to acquire relying on purely observational learning mechanisms alone. We argue that human infants are prepared to be at the receptive side of natural pedagogy (i) by being sensitive to ostensive signals that indicate that they are being addressed by communication, (ii) by developing referential expectations in ostensive contexts and (iii) by being biased to interpret ostensive-referential communication as conveying information that is kind-relevant and generalizable.

One-year-old infants appreciate the referential nature of deictic gestures and words

One-year-old infants have a small receptive vocabulary and follow deictic gestures, but it is still debated whether they appreciate the referential nature of these signals. Demonstrating understanding of the complementary roles of symbolic (word) and indexical (pointing) reference provides evidence of referential interpretation of communicative signals. We presented 13-month-old infants with video sequences of an actress indicating the position of a hidden object while naming it. The infants looked longer when the named object was revealed not at the location indicated by the actress's gestures, but on the opposite side of the display. This finding suggests that infants expect that concurrently occurring communicative signals co-refer to the same object. Another group of infants, who were shown video sequences in which the naming and the deictic cues were provided concurrently but by two different people, displayed no evidence of expectation of co-reference. These findings suggest that a single communicative source, and not simply co-occurrence, is required for mapping the two signals onto each other. By 13 months of age, infants appreciate the referential nature of words and deictic gestures alike.

Rapid orienting toward face-like stimuli with gaze-relevant contrast information

Human faces under natural illumination, and human eyes in their unique morphology, include specific contrast polarity relations that face-detection mechanisms could capitalise on. Newborns have been shown to preferentially orient to simple face-like patterns only when they contain face- or gaze-relevant contrast. We investigated whether human adults show similar preferential orienting towards schematic face-like stimuli, and whether this effect depends on the contrast polarity of the stimuli. In two experiments we demonstrate that upright schematic face-like patterns elicit faster eye movements in adult humans than inverted ones, and that this occurs only if they contain face- or gaze-relevant contrast information in the whole stimulus or in the eye region only. These results suggest that primitive mechanisms underlying the orienting bias towards faces and eyes influence and modulate social cognition not just in infants but in adults as well.

Differential sensitivity to human communication in dogs, wolves and human infants

Ten-month-old infants search for a hidden object persistently at its initial hiding place even after observing it being hidden at another location. Recent evidence suggests that communicative cues from the experimenter contribute to the emergence of this perseverative search error. Here we replicate these results with dogs, who also commit more search errors in ostensive-communicative (in 75% of the total trials) than in non-communicative (39%) or non-social (17%) hiding contexts. However, comparative investigations suggest that communicative signals serve different functions for dogs and infants, while human-reared wolves do not show dog-like context-dependent differences of search errors. We propose that shared sensitivity to human communicative signals stems from convergent social evolution of the Homo and the Canis genera.

Temporal-nasal asymmetry of rapid orienting to face-like stimuli

Recent work suggests that a subcortical visual route may mediate rapid orienting towards facial configuration in the visual periphery and not only to visual threat in faces. We demonstrate that the orienting bias towards faces shows a temporal-nasal visual field asymmetry of responses, suggesting its extrageniculate mediation. An upright schematic face-like pattern elicited faster behavioural responses than an inverted one in the temporal but not in the nasal hemifield of each eye, and this effect occurred for saccades but not for manual responses. The presence of a similar asymmetry of the orienting bias in newborns supports the role of extrageniculate pathways in face detection in both neonates and adults.

Sensitivity to communicative relevance tells young children what to imitate

How do children decide which elements of an action demonstration are important to reproduce in the context of an imitation game? We tested whether selective imitation of a demonstrator’s actions may be based on the same search for relevance that drives adult interpretation of ostensive communication. Three groups of 18-month-old infants were shown a toy animal either hopping or sliding (action style) into a toy house (action outcome), but the communicative relevance of the action style differed depending on the group. For the no prior information group, all the information in the demonstration was new and so equally relevant. However, for infants in the ostensive prior information group, the potential action outcome was already communicated to the infant prior to the main demonstration, rendering the action style more relevant. Infants in the ostensive prior information group imitated the action style significantly more than infants in the no prior information group, suggesting that the relevance manipulation modulated their interpretation of the action demonstration. A further condition (non-ostensive prior information) confirmed that this sensitivity to new information is only present when the 'old' information had been communicated, and not when infants discovered this information for themselves. These results indicate that, like adults, human infants expect communication to contain relevant content, and imitate action elements that, relative to their current knowledge state or to the common ground with the demonstrator, are identified as most relevant.

Inferring the outcome of an ongoing novel action at 13 months

Many studies have demonstrated that infants can attribute goals to observed actions, whether they are presented live by familiar agents, or on a computer screen by abstract figures. However, because most, if not all, of these studies rely on the repeated action presentations typical of infant studies, it is not clear whether infants are simply recognizing the completed action as goal-directed, or whether they can productively infer a not-yet-achieved outcome from an ongoing action. We investigated this question by presenting 13-month-old infants with a single animated chasing event. Infants looked longer at the outcome of this action when, given the opportunity, the chaser did not catch the chasee, than when it did so. Crucially, this result was dependent on whether the chasing behaviour could be construed as an efficient action with regards to this goal state. This finding demonstrates predictive goal attribution to an ongoing novel action, and illustrates the productivity of one-year-olds' action understanding.

Predictive motor activation during action observation in human infants

Certain regions of the human brain are activated both during action execution and action observation. This so-called ‘mirror neuron system’ has been proposed to enable an observer to understand an action through a process of internal motor simulation. Although there has been much speculation about the existence of such a system from early in life, to date there is little direct evidence that young infants recruit brain areas involved in action production during action observation. To address this question, we identified the individual frequency range in which sensorimotor alpha-band activity was attenuated in nine-month-old infants’ electroencephalographs (EEGs) during elicited reaching for objects, and measured whether activity in this frequency range was also modulated by observing others’ actions. We found that observing a grasping action resulted in motor activation in the infant brain, but that this activity began prior to observation of the action, once it could be anticipated. These results demonstrate not only that infants, like adults, display overlapping neural activity during execution and observation of actions, but that this activation, rather than being directly induced by the visual input, is driven by infants’ understanding of a forthcoming action. These results provide support for theories implicating the motor system in action prediction.

Csibra G, Kushnerenko E, Grossmann T. Electrophysiological methods in studying infant cognitive development. In: Handbook of Developmental Cognitive Neuroscience. Cambridge, Mass.: MIT Press; 2008. p. 247-62.
Johnson MH, Mareschal D, Csibra G. The development and integration of dorsal and ventral visual pathways in object processing. In: Handbook of Developmental Cognitive Neuroscience. Cambridge, Mass.: MIT Press; 2008. p. 467-78.

Goal attribution to inanimate agents by 6.5-month-old infants

Human infants' tendency to attribute goals to observed actions may help us to understand where people's obsession with goals originates. While one-year-old infants liberally interpret the behaviour of many kinds of agents as goal-directed, a recent report [Kamewari, K., Kato, M., Kanda, T., Ishiguro, H., & Hiraki, K. (2005). Six-and-a-half-month-old children positively attribute goals to human action and to humanoid-robot motion. Cognitive Development, 20, 303-320] suggested that younger infants restrict goal attribution to humans and human-like creatures. The present experiment tested whether 6.5-month-old infants would be willing to attribute a goal to a moving inanimate box if it slightly varied its goal approach within the range of the available efficient actions. The results were positive, demonstrating that featural identification of agents is not a necessary precondition of goal attribution in young infants and that the single most important behavioural cue for identifying a goal-directed agent is variability of behaviour. This result supports the view that the bias to give teleological interpretation to actions is not entirely derived from infants' experience.

Infants can infer the presence of hidden objects from referential gaze information

Infants' apparent failure in gaze-following tasks is often interpreted as a sign of lack of understanding the referential nature of looking. In the present study, 8- and 12-month-old infants followed the gaze of a model to one of two locations hidden from their view by occluders. When the occluders were removed, an object was revealed either at the location where the model had looked or at the other side. Infants at both ages looked longer at the empty location when it had been indicated by the model's looking behaviour, and this effect held up even when their first look after gaze following was discounted. This result demonstrates that even young infants hold referential expectations when they follow others' gaze and infer the location of hidden objects accordingly.

Freeze-Frame: A new infant inhibition task and its relation to frontal cortex tasks during infancy and early childhood

The current study investigated a new, easily administered, visual inhibition task for infants termed the Freeze-Frame task. In the new task, 9-month-olds were encouraged to inhibit looks to peripheral distractors. This was done by briefly freezing a central animated stimulus when infants looked to the distractors. Half of the trials presented an engaging central stimulus, and the other half presented a repetitive central stimulus. Three measures of inhibitory function were derived from the task and compared with performance on a set of frontal cortex tasks administered at 9 and 24 months of age. As expected, infants' ability to learn to selectively inhibit looks to the distractors at 9 months predicted performance at 24 months. However, performance differences in the two Freeze-Frame trial types early in the experiment also turned out to be an important predictor. The results are discussed in terms of the validity of the Freeze-Frame task as an early measure of different components of inhibitory function.

Electrophysiological evidence of illusory audiovisual speech percept in human infants

How effortlessly and quickly infants acquire their native language remains one of the most intriguing questions of human development. Our study extends this question into the audiovisual domain, taking into consideration visual speech cues, which were recently shown to have more importance for young infants than previously anticipated [Weikum WM, Vouloumanos A, Navarra J, Soto-Faraco S, Sebastian-Galles N, Werker JF (2007) Science 316:1159]. A particularly interesting phenomenon of audiovisual speech perception is the McGurk effect [McGurk H, MacDonald J (1976) Nature 264:746-748], an illusory speech percept resulting from integration of incongruent auditory and visual speech cues. For some phonemes, the human brain does not detect the mismatch between conflicting auditory and visual cues but automatically assimilates them into the closest legal phoneme, sometimes different from both auditory and visual ones. Measuring event-related brain potentials in 5-month-old infants, we demonstrate differential brain responses when conflicting auditory and visual speech cues can be integrated and when they cannot be fused into a single percept. This finding reveals a surprisingly early ability to perceive speech cross-modally and highlights the role of visual speech experience during early postnatal development in learning of the phonemes and phonotactics of the native language.

Early cortical specialization for face-to-face communication in human infants

This study examined the brain bases of early human social cognitive abilities. Specifically, we investigated whether cortical regions implicated in adults' perception of facial communication signals are functionally active in early human development. Four-month-old infants watched two kinds of dynamic scenarios in which a face either established mutual gaze or averted its gaze, both of which were followed by an eyebrow raise with accompanying smile. Haemodynamic responses were measured by near-infrared spectroscopy, permitting spatial localization of brain activation (experiment 1), and gamma-band oscillatory brain activity was analysed from electroencephalography to provide temporal information about the underlying cortical processes (experiment 2). The results revealed that perceiving facial communication signals activates areas in the infant temporal and prefrontal cortex that correspond to the brain regions implicated in these processes in adults. In addition, mutual gaze itself, and the eyebrow raise with accompanying smile in the context of mutual gaze, produced similar cortical activations. This pattern of results suggests an early specialization of the cortical network involved in the perception of facial communication cues, which is essential for infants' interactions with, and learning from, others.

Gaze following in human infants depends on communicative signals

Humans are extremely sensitive to ostensive signals, like eye contact or having their name called, that indicate someone's communicative intention toward them [1-3]. Infants also pay attention to these signals [4-6], but it is unknown whether they appreciate their significance in the initiation of communicative acts. In two experiments, we employed video presentation of an actor turning toward one of two objects and recorded infants' gaze-following behavior [7-13] with eye tracking techniques [11, 12]. We found that 6-month-old infants followed the adult's gaze (a potential communicative-referential signal) toward an object only when such an act was preceded by ostensive cues such as direct gaze (experiment 1) or infant-directed speech (experiment 2). Such a link between the presence of ostensive signals and gaze following suggests that this behavior serves a functional role in assisting infants to effectively respond to referential communication directed to them. Whereas gaze following in many nonhuman species supports social information gathering [14-18], in humans it initially appears to reflect the expectation of a more active, communicative role from the information source.

Understanding the referential nature of looking: Infants' preference for object-directed gaze

In four experiments, we investigated whether 9-month-old infants are sensitive to the relationship between gaze direction and object location and whether this sensitivity depends on the presence of communicative cues like eye contact. Infants observed a face, which repeatedly shifted its eyes either toward, or away from, unpredictably appearing objects. We found that they looked longer at the face when the gaze shifts were congruent with the location of the object. A second experiment ruled out that this effect was simply due to spatial congruency, while a third and a fourth experiment revealed that a preceding period of eye contact is required to elicit the gaze-object congruency effect. These results indicate that infants at this age can encode eye direction in referential terms in the presence of communication cues and are biased to attend to scenes with object-directed gaze. (c) 2008 Elsevier B.V. All rights reserved.

Yoon JM, Johnson MH, Csibra G. Communication-induced memory biases in preverbal infants. Proceedings of the National Academy of Sciences of the United States of America. 2008;105(36):13690-5.

Human teaching, a highly specialized form of cooperative information transmission, depends not only on the presence of benevolent communicators in the environment, but also on the preparedness of the students to learn from communication when it is addressed to them. We tested whether 9-month-old human infants can distinguish between communicative and noncommunicative social contexts and whether they retain qualitatively different information about novel objects in these contexts. We found that in a communicative context, infants devoted their limited memory resources to encoding the identity of novel objects at the expense of encoding their location, which is preferentially retained in noncommunicative contexts. We propose that infants' sensitivity to, and interpretation of, the social cues distinguishing infant-directed communication events represent important mechanisms of social learning by which others can help determine what information even preverbal human observers retain in memory.

Distinct processing of objects and faces in the infant brain

Previous work has shown that gamma-band electroencephalogram oscillations recorded over the posterior cortex of infants play a role in maintaining object representations during occlusion. Although it is not yet known what kind of representations are reflected in these oscillations, behavioral data suggest that young infants maintain spatiotemporal (but not featural) information during the occlusion of graspable objects, and surface feature (but not spatiotemporal) information during the occlusion of faces. To further explore this question, we presented infants with an occlusion paradigm in which they would, on half of the trials, see surface feature violations of either a face or an object. Based on previous studies, we predicted higher gamma-band activation when infants were presented with a surface feature violation of a face, but not of an object. This prediction was confirmed. A further analysis revealed that whereas infants exhibited a significant increase in gamma during the occlusion of an object (as reported in previous studies), no such increase was evident during the occlusion of a face. These data suggest markedly different processing of objects and faces in the infant brain and, furthermore, indicate that the representation underpinned by the posterior gamma increase may contain only spatiotemporal information.

Infants attribute goals even to biomechanically impossible actions

Human infants readily interpret the actions of others in terms of goals, but the origins of this important cognitive skill are keenly debated. We tested whether infants recognize others' actions as goal-directed on the basis of their experience with carrying out and observing goal-directed actions, or whether their perception of a goal-directed action is based on the recognition of a specific event structure. Counterintuitively, but consistent with our prediction, we observed that infants appear to extend goal attribution even to biomechanically impossible actions so long as they are physically efficient, indicating that the notion of 'goal' is unlikely to be derived directly from infants' experience. (C) 2007 Elsevier B.V. All rights reserved.

Visual speech contributes to phonetic learning in 6-month-old infants

Previous research has shown that infants match vowel sounds to facial displays of vowel articulation [Kuhl, P. K., & Meltzoff, A. N. (1982). The bimodal perception of speech in infancy. Science, 218, 1138-1141; Patterson, M. L., & Werker, J. F. (1999). Matching phonetic information in lips and voice is robust in 4.5-month-old infants. Infant Behaviour & Development, 22, 237-247], and integrate seen and heard speech sounds [Rosenblum, L. D., Schmuckler, M. A., & Johnson, J. A. (1997). The McGurk effect in infants. Perception & Psychophysics, 59, 347-357; Burnham, D., & Dodd, B. (2004). Auditory-visual speech integration by prelinguistic infants: Perception of an emergent consonant in the McGurk effect. Developmental Psychobiology, 45, 204-220]. However, the role of visual speech in language development remains unknown. Our aim was to determine whether seen articulations enhance phoneme discrimination, thereby playing a role in phonetic category learning. We exposed 6-month-old infants to speech sounds from a restricted range of a continuum between /ba/ and /da/, following a unimodal frequency distribution. Synchronously with these speech sounds, one group of infants (the two-category group) saw a visual articulation of a canonical /ba/ or /da/, with the two alternative visual articulations, /ba/ and /da/, being presented according to whether the auditory token was on the /ba/ or /da/ side of the midpoint of the continuum. Infants in a second (one-category) group were presented with the same unimodal distribution of speech sounds, but every token for any particular infant was always paired with the same syllable, either a visual /ba/ or a visual /da/. A stimulus-alternation preference procedure following the exposure revealed that infants in the former, but not in the latter, group discriminated the /ba/-/da/ contrast. These results not only show that visual information about speech articulation enhances phoneme discrimination, but also that it may contribute to the learning of phoneme boundaries in infancy. (C) 2008 Elsevier B.V. All rights reserved.

Infants' perseverative errors are induced by pragmatic misinterpretation.

Having repeatedly retrieved an object from a location, human infants tend to search the same place even when they observe the object being hidden at another location. This perseverative error is usually explained by infants' inability to inhibit a previously rewarded search response or to recall the new location. We show that the tendency to commit this error is substantially reduced (from 81% to 41%) when the object is hidden in front of 10-month-old infants without the experimenter using the communicative cues that normally accompany object hiding in this task. We suggest that this improvement is due to an interpretive bias that normally helps infants learn from demonstrations but misleads them in the context of a hiding game. Our finding provides an alternative theoretical perspective on the nature of infants' perseverative search errors.

Csibra G, Gergely G. Ember és kultúra. A kulturális tudás eredete és átadásának mechanizmusai [Man and culture: The origin of cultural knowledge and the mechanisms of its transmission]. Vol 11. Budapest, Hungary: Akadémiai Kiadó; 2007. (Pszichológiai Szemle Könyvtár; vol 11).
Csibra G. Action mirroring and action interpretation: An alternative account. In: Haggard P, Rossetti Y, Kawato M, editors. Attention and Performance XXII: Sensorimotor Foundations of Higher Cognition. Oxford: Oxford University Press; 2007. p. 435-59.
Csibra G, Gergely G. Társas tanulás és társas megismerés. A pedagógia szerepe [Social learning and social cognition: The role of pedagogy]. In: Ember és kultúra: a kulturális tudás eredete és átadásának mechanizmusai. Vol 11. Budapest, Hungary: Akadémiai Kiadó; 2007. p. 5-30. (Pszichológiai Szemle Könyvtár; vol 11).
Csibra G, Johnson MH. Investigating event-related oscillations in infancy. In: De Haan M, editor. Infant EEG and Event-Related Potentials. Hove, England: Psychology Press; 2007. p. 289-304.

Teachers in the wild

Three recent studies challenge the apparent consensus about the absence of teaching in non-human animals by providing evidence that certain behaviours of ants, birds and mammals satisfy a strict definition of teaching. However, these behaviours, although capable of facilitating information or skill acquisition in youngsters, could not support the transmission of cultural knowledge across individuals, which human teaching arguably serves.

'Obsessed with goals': Functions and mechanisms of teleological interpretation of actions in humans

Humans show a strong and early inclination to interpret observed behaviours of others as goal-directed actions. We identify two main epistemic functions that this 'teleological obsession' serves: on-line prediction and social learning. We show how teleological action interpretations can serve these functions by drawing on two kinds of inference ('action-to-goal' or 'goal-to-action'), and argue that both types of teleological inference constitute inverse problems that can only be solved by further assumptions. We pinpoint the assumptions that the three currently proposed mechanisms of goal attribution (action-effect associations, simulation procedures, and teleological reasoning) imply, and contrast them with the functions they are supposed to fulfil. We argue that while action-effect associations and simulation procedures are generally well suited to serve on-line action monitoring and prediction, social learning of new means actions and artefact functions requires the inferential productivity of teleological reasoning. (c) 2006 Elsevier B.V. All rights reserved.

Electrophysiological correlates of common-onset visual masking

In common-onset visual masking (COVM) the target and the mask come into view simultaneously. Masking occurs when the mask remains on the screen for longer after deletion of the target. Enns and Di Lollo [Enns, J. T., & Di Lollo, V. (2000). What's new in visual masking? Trends in Cognitive Sciences, 4(9), 345-352] have argued that this type of masking can be explained by re-entrant visual processing. In the present studies we used high-density event-related brain potentials (HD-ERP) to obtain neural evidence for re-entrant processing in COVM. In two experiments the participants' task was to indicate the presence or absence of a vertical bar situated at the lower part of a ring highlighted by the mask. The only difference between the experiments was the duration of the target: 13 and 40 ms for the first and second experiment respectively. Behavioural results were consistent between experiments: COVM was stronger as a joint function of a large set size and longer trailing mask duration. Electrophysiological data from both studies revealed modulation of a posterior P2 component around 220 ms post-stimulus onset associated with masking. Further, in the critical experimental condition we found a significant relation between the amplitude of the P2 and behavioural response accuracy. We hypothesize that this re-activation of early visual areas reflects re-entrant feedback from higher to lower visual areas, providing converging evidence for re-entrance as an explanation for COVM. (c) 2007 Elsevier Ltd. All rights reserved.

Seeing the face through the eyes: A developmental perspective on face expertise

Most people are experts in face recognition. We propose that the special status of this particular body part in telling individuals apart is the result of a developmental process that heavily biases human infants and children to attend towards the eyes of others. We review the evidence supporting this proposal, including neuroimaging results and studies in developmental disorders, like autism. We propose that the most likely explanation of infants’ bias towards eyes is the fact that eye gaze serves important communicative functions in humans.

Social perception in the infant brain: gamma oscillatory activity in response to eye gaze

Gamma band oscillatory brain activity was measured to examine the neural basis of 4-month-old infants’ perception of eye gaze direction. Infants were presented with photographic images of upright and inverted female faces directing their gaze towards them or to the side. Direct gaze compared to averted gaze in upright faces elicited increased early evoked gamma activity at occipital channels indicating enhanced neural processing during the earliest steps of face encoding. Direct gaze also elicited a later induced gamma burst over right prefrontal channels, suggesting that eye contact detection might recruit very similar cortical regions as in adults. An induced gamma burst in response to averted gaze was observed over right posterior regions, which might reflect neural processes associated with shifting spatial attention. Inverted faces did not produce such effects, confirming that the gamma band oscillations observed in response to gaze direction are specific to upright faces. These data demonstrate the use of gamma band oscillations in examining the development of social perception and suggest an early specialization of brain regions known to process eye gaze.

Neural correlates of the perception of goal-directed action in infants

We investigated the neural correlates of the perception of human goal-directed action by 8-month-old infants. Infants viewed video loops of complete and incomplete actions, which they could discriminate according to our pilot study, while we recorded their electrophysiological brain activity. Analysis of bursts of gamma-band oscillations resulting from passive viewing of these stimuli indicated increased gamma-band activity over left frontal regions when viewing incomplete actions as compared with complete actions. These results suggest that, by 8 months, infants are sensitive to the disruption of perceived goal-directed actions. (c) 2006 Elsevier B.V. All rights reserved.

Infant pointing: Communication to cooperate or communication to learn?

Tomasello, Carpenter, and Liszkowski (2007) present compelling data to support the view that infant pointing, from the outset, is communicative and deployed in many of the same situations in which adults would ordinarily point for one another, either to share their interest in something, or to informatively help the other person. This commentary concurs with the view that infant pointing is a communicative gesture, but challenges their interpretation of the motives behind pointing in 12-month-olds. An alternative account is proposed, according to which infant pointing is neither declarative nor imperative, but interrogative, and rather than being driven by the motive to share or help, it may serve a powerful cultural learning mechanism by which infants can obtain information from knowledgeable adults.