Interfering with human communication using TMS

The right posterior superior temporal sulcus (pSTS), a brain region at the junction of the temporal and parietal lobes, has been strongly implicated in our ability to understand the intended meaning of another's actions. Using transcranial magnetic stimulation (TMS), we temporarily disturbed neural functioning in this region to characterize its contribution to (experimentally controlled) social interaction. The results of this study indicate that the pSTS may be necessary for learning the meaning of communicative actions by incorporating knowledge abstracted from the recent interaction history with a communicative partner.

Details of this study can be found here.

Neural mechanisms of human communication

Our everyday conversations appear to revolve around our linguistic abilities. But closer inspection reveals that an effective conversation involves more than formulating grammatically correct and semantically coherent sentences.

If a person on the street approaches you and asks for directions to a supermarket, a foreign accent, a car, or the time of day are just a few contextual elements that will influence your reply. You may decide to speak particularly clearly, explain how to drive there by car, or suggest a convenience store instead. Yet it remains a mystery how we manage to rapidly integrate these contextual elements into a conversation and produce an utterance that evokes the intended meaning in the mind of our interlocutor.

Building on the notion that we select a communicative action based on predictions of the intentions an interlocutor might attribute to that action, I investigated how this predictive mechanism is implemented in the brain, and how contextual knowledge can be rapidly integrated into it. We had pairs of participants play a challenging computer game requiring them to jointly solve a series of novel communicative problems (see figure below), and measured neural activity with magnetoencephalography (MEG).

We found that members of a pair engaged in communication had very similar patterns of neural activity, coming from the same brain regions. This finding suggests that we use our own intention-recognition system to predict how our communicative actions will be interpreted. We also noticed that those regions were already active before a new communicative problem was presented. This finding suggests that we can make sensible communicative predictions because we continuously infer and update the knowledge shared with our interlocutor.

The findings of this study have been reported here. Larry O'Dwyer wrote a piece about it in The Guardian.

The computer game required subjects to jointly recreate a spatial configuration of two tokens. In this example, the blue circle has to end up in the top left corner at the end of an interaction, whereas the orange triangle should end up in the bottom right corner. This configuration was visible only to the player in blue, who then needed to devise movements of his token on the game board, visible to the player in orange, to indicate to her where and how she should position her token. In this game there are no a priori correct actions for the blue player to select: the meanings of the movements depend on how the player in orange interprets them. See this video for a full sequence of events occurring during an interaction in this game.

Development of human communication

Children who spend more days in daycare may benefit from greater exposure to social and communicative situations.

We found that the style in which five-year-olds tried to communicate changed depending on who the children thought their co-player was. When they thought they were playing with a two-year-old, they spent a great deal of time patiently indicating the location of the acorn. When they were told they were playing with a child their own age, their communication style was less laboured. We also found that the more days children spent in daycare, the better they were able to adjust their communication style.

Details of this study can be found here. The study has also been covered by several Dutch newspapers, such as the Volkskrant and the Telegraaf.

For each game, the child is shown the location of an acorn on the grid of a touch-screen computer. The child then has to move a cartoon bird over the screen to indicate to the second player where the acorn is located. The second player cannot see the acorn, only the cartoon bird that the child moves. In this way, for each game, the child must hit on a way to communicate effectively with the other player without using words.


Online head movement compensation in MEG

MEG (magnetoencephalography) measurements are performed with superconducting magnetic field sensors mounted inside a helmet-shaped dewar. The subject's head is positioned under the dewar, as close as possible to the sensors, but in most MEG experiments the head is not fixed in place. As a consequence, the subject's head, and thereby the neuronal sources in the brain generating the neuromagnetic fields, can move relative to the MEG sensors. Robert Oostenveld and I have developed an online method, ft_realtime_headlocalizer.m, that allows repositioning a subject both within and between MEG recordings. This method is now part of the standard experimental setup at the Donders Institute (Centre for Cognitive Neuroimaging), as well as at other neuroimaging labs. This page documents how to set up the realtime head localizer for your MEG system. Benefits of this tool have been described here.
A subject can be repositioned by visualizing the anatomical fiducials from a reference dataset (nasion, left and right ear canals; black unfilled markers) alongside a graphical update of the real-time head position. To aid the subject with repositioning, the real-time fiducial positions are color coded to indicate the distance to the targets (green < 1.5 mm, orange < 3 mm, and red > 3 mm). Click this link for an animated version.
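The color coding is essentially a distance-to-threshold mapping. The following Python snippet is only an illustrative sketch of that idea (ft_realtime_headlocalizer.m itself is MATLAB code, and the function name and argument layout here are assumptions):

```python
import numpy as np

def fiducial_color(realtime_pos, target_pos):
    """Illustrative sketch: map the distance (in mm) between a real-time
    fiducial position and its reference target onto the feedback colour
    shown during repositioning, using the thresholds described above."""
    dist = np.linalg.norm(np.asarray(realtime_pos) - np.asarray(target_pos))
    if dist < 1.5:
        return "green"   # within 1.5 mm of the target position
    if dist < 3.0:
        return "orange"  # within 3 mm
    return "red"         # more than 3 mm away
```

For example, a nasion coil sitting 1 mm from its reference position would be shown in green, prompting no further adjustment.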

Offline head movement compensation in MEG

Magnetoencephalography (MEG) is measured above the head, which makes it sensitive to variations of the head position with respect to the sensors. Head movements blur the topography of the neuronal sources of the MEG signal, increase localization errors, and reduce statistical sensitivity. Robert Oostenveld and I have developed functionality that, in the offline analysis, accounts for variations in the MEG signal due to different head positions within a recording. ft_regressconfound.m fits and regresses out contributions from head movement to the data, and has been shown to yield improvements in group-level statistical sensitivity of up to 29%.
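The idea behind this confound regression can be sketched in a few lines. The snippet below is a minimal Python illustration of the general approach, not the actual MATLAB implementation of ft_regressconfound.m; the function name and data layout are assumptions:

```python
import numpy as np

def regress_out_confounds(data, confounds):
    """Remove the signal variance explained by head-movement regressors.

    data      : (n_trials, n_channels) array of single-trial estimates
    confounds : (n_trials, n_regressors) array, e.g. head translations
                and rotations over trials
    """
    # demean the confounds so that the mean signal is preserved
    X = confounds - confounds.mean(axis=0)
    # ordinary least-squares fit of the data on the confounds
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    # subtract the fitted head-movement contribution
    return data - X @ beta
```

Because the regressors are demeaned, only trial-to-trial variance correlated with head position is removed; the mean signal of interest is left intact.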

MEG / EEG qualitycheck

Recordings of magnetoencephalography (MEG) and electroencephalography (EEG) datasets tend to be rich in information. ft_qualitycheck.m, uploaded to the open source FieldTrip toolbox, allows a quick overview, providing summary statistics on acquisition and signal properties, head positions, trigger events, jump artifacts, etc. The output of this functionality is stored in a .mat file in a format that allows easy further inspection using existing FieldTrip functionalities. The summary statistics are also visualized in a figure (see example below). At the Donders Institute (Centre for Cognitive Neuroimaging), ft_qualitycheck.m is automatically executed every night on datasets that have not yet been analyzed by it. This gives individual researchers, as well as the technical department, easy access to a qualitative impression of recordings and their history.
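One of the checks mentioned above, the detection of jump artifacts (sudden flux jumps in SQUID channels), can be illustrated with a simple difference-threshold heuristic. This is a hedged Python sketch of the general idea, not the actual implementation in ft_qualitycheck.m:

```python
import numpy as np

def detect_jumps(signal, threshold):
    """Illustrative sketch: return the sample indices at which the signal
    changes by more than `threshold` between consecutive samples, a simple
    heuristic for flux-jump artifacts in MEG channels."""
    step = np.abs(np.diff(signal))
    return np.flatnonzero(step > threshold) + 1  # index of the post-jump sample
```

A real quality check would apply such a detector per channel and report jump counts per recording block alongside the other summary statistics.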

Conjunction analysis

Like many experimental sciences, neuroscience relies on the statistical comparison of neural activity evoked under different circumstances. These statistical assessments involve comparing distributions of the data collected under those conditions. The resulting contrast can be described in many ways, for instance with the commonly used T-value, an indication of how consistent the difference between the two data distributions is. In some studies it may be useful to investigate whether two contrasts overlap, e.g. whether the same brain region is involved in both vision and audition. A so-called 'conjunction analysis' allows doing this. I have uploaded this functionality to the open source FieldTrip toolbox as 'ft_conjunctionanalysis.m'. Besides finding spatially overlapping clusters of activity, it computes the minimum statistic (black line in figure) of two or more sets of observations.
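The minimum statistic underlying a conjunction analysis can be sketched as follows. This Python snippet illustrates one common formulation (per location, keep the T-value closest to zero, and require both maps to agree in sign); it is an illustration of the idea under those assumptions, not the FieldTrip implementation:

```python
import numpy as np

def minimum_statistic(tmap1, tmap2):
    """Illustrative conjunction of two T-maps: per voxel/sensor, keep the
    T-value with the smallest absolute magnitude, but only where the two
    maps agree in sign; elsewhere the conjunction is set to zero."""
    same_sign = np.sign(tmap1) == np.sign(tmap2)
    smaller = np.where(np.abs(tmap1) < np.abs(tmap2), tmap1, tmap2)
    return np.where(same_sign, smaller, 0.0)
```

Taking the smaller of the two T-values is conservative: a location survives the conjunction only if the effect is reliably present in both contrasts.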

MEG-compatible hand-held controller

In collaboration with the technical workshop at Radboud University, I have developed an MEG-compatible hand-held controller. This controller is comparable to regular joysticks, but with all electronics replaced by fiber optics (see images). Accordingly, it can be used in the magnetically shielded room without interfering with the brain signals recorded with magnetoencephalography (MEG). The controller has four face buttons on the right and two shoulder buttons, and can be optically connected to both the MEG acquisition system and the serial port of the stimulus presentation computer. We have used this tool to study experimentally controlled social interactions between two people. Here is a link to that study.