Publications of Yin, J.

Structural asymmetries in the representation of giving and taking events

Across languages, GIVE and TAKE verbs have different syntactic requirements: GIVE requires that a patient argument be made explicit in the clause structure, whereas TAKE does not. Experimental evidence suggests that this asymmetry is rooted in prelinguistic assumptions about the minimal number of event participants that each action entails. The present study provides corroborating evidence for this proposal by investigating whether the observation of giving and taking actions modulates the inclusion of patients in the represented event. Participants were shown events featuring an agent (A) transferring an object to, or collecting it from, an animate target (B) or an inanimate target (a rock), and their sensitivity to changes in pair composition (AB vs. AC) and action role (AB vs. BA) was measured. Change sensitivity was affected by the type of target approached when the agent transferred the object (Experiment 1), but not when she collected it (Experiment 2), or when an outside force carried out the transfer (Experiment 3). Although these object-displacing actions could be equally interpreted as interactive (i.e., directed towards B), this construal was adopted only when B could be perceived as the putative patient of a giving action. This evidence buttresses the proposal that structural asymmetries in giving and taking, as reflected in their syntactic requirements, may originate from prelinguistic assumptions about the minimal event participants required for each action to be teleologically well-formed.

Giving, but not taking, actions are spontaneously represented as social interactions: Evidence from modulation of lower alpha oscillations

Unlike taking, which can be redescribed in non-social and object-directed terms, acts of giving are invariably expressed across languages in a three-argument structure relating agent, patient, and object. Developmental evidence suggests that this difference in the syntactic entailment of the patient role is rooted in a prelinguistic understanding of giving as a patient-directed, hence obligatorily social, action. We hypothesized that minimal cues of possession transfer, known to induce this interpretation in preverbal infants, should similarly encourage adults to perceive the patient of giving, but not taking, actions as an integral participant in the observed event, even without cues of overt involvement in the transfer. To test this hypothesis, we measured a known electrophysiological correlate of action understanding (the suppression of alpha-band oscillations) during the observation of giving and taking events, under the assumption that the functional grouping of agent and patient should induce greater suppression than the representation of individual object-directed actions. As predicted, the observation of giving produced stronger lower alpha suppression than superficially similar acts of object disposal, whereas no difference emerged between taking from an animate patient and taking from an inanimate target. These results suggest that the participants spontaneously represented giving, but not kinematically identical taking actions, as social interactions, and crucially restricted this interpretation to transfer events featuring animate patients. This evidence gives empirical traction to the idea that such asymmetry, rather than being an interpretive propensity circumscribed to the first year of life, is attributable to an ontogenetically stable system dedicated to the efficient identification of interactions based on active transfer.

Are you talking to me? Neural activations in 6-month-old infants in response to being addressed during natural interactions

Human interactions are guided by continuous communication among the parties involved, in which verbal communication plays a primary role. However, speech does not necessarily reveal to whom it is addressed, especially for young infants who are unable to decode its semantic content. To overcome this difficulty, adults often explicitly mark their communication as infant-directed. In the present study we investigated whether ostensive signals, which would disambiguate the infant as the addressee of a communicative act, would modulate the brain responses of 6-month-old infants to speech and gestures in an ecologically valid setting. In Experiment 1, we tested whether the gaze direction of the speaker modulates cortical responses to infant-directed speech. To provide a naturalistic environment, two infants and their parents participated at the same time. In Experiment 2, we tested whether a similar modulation of the cortical response would be obtained by varying the intonation (infant- versus adult-directed speech) of the speech during face-to-face, one-on-one communication. The results of both experiments indicated that only the combination of ostensive signals (infant-directed speech and direct gaze) led to enhanced brain activation. This effect was reflected in responses localized in regions known to be involved in processing auditory and visual aspects of social communication. This study also demonstrated the potential of fNIRS as a tool for studying neural responses in naturalistic scenarios, and for simultaneous measurement of brain function in multiple participants.

Concept-based word learning in human infants

It is debated whether infants initially learn object labels by mapping them onto similarity-defining perceptual features or onto concepts of object kinds. We addressed this question by attempting to teach infants words for behaviorally defined action roles. In a series of experiments, we found that 14-month-olds could rapidly learn a label for the role the chaser plays in a chasing scenario, even when the different instances of chasers did not share perceptual features. Furthermore, when infants could choose, they preferred to interpret a novel label as expressing the actor's role within the observed interaction rather than as being associated with the actor's appearance. These results demonstrate that infants can learn labels for concepts identified by abstract behavioral characteristics as easily as, or even more easily than, for concepts identified by perceptual features. Thus, already at early stages of word learning, infants expect that novel words express concepts.