Study VR2F: EEG-decodability of facial expressions and their stereoscopic depth cues in immersive virtual reality
[Last update: August 20, 2025]
Status: preprint
Author: Felix Klotzsche
Contact: klotzsche@cbs.mpg.de
💽 Data: https://doi.org/10.17617/3.CQ2VXX
📑 Preprint: https://doi.org/10.1101/2025.08.18.670974
Face perception typically occurs in three-dimensional space, where stereoscopic depth cues enrich the perception of facial features. Yet, most neurophysiological research on face processing relies on two-dimensional displays, potentially overlooking the role of stereoscopic depth information. Here, we combine immersive virtual reality (VR), electroencephalography (EEG), and eye tracking to examine the neural representation of faces under controlled manipulations of stereoscopic depth. Thirty-four participants viewed computer-generated faces with neutral, happy, angry, and surprised expressions in frontal view under monoscopic and stereoscopic viewing conditions. Using time-resolved multivariate decoding, we show that EEG signals in immersive VR conditions can reliably differentiate facial expressions. Stereoscopic depth cues elicited a distinct and decodable neural signature, confirming the sensitivity of our approach to depth-related processing. Yet, expression decoding remained robust across depth conditions, indicating that under controlled frontal viewing, the neural encoding of facial expressions is invariant to binocular depth cues. Eye tracking showed that expression-related gaze patterns contained comparable information but did not account for neural representations, while depth information was absent in gaze patterns—consistent with dissociable representational processes. Our findings demonstrate the feasibility of EEG-based neural decoding in fully immersive VR as a tool for investigating face perception in naturalistic settings and provide new evidence for the stability of expression representations across depth variations in three-dimensional viewing conditions.
If there is no project-related virtual/conda environment yet, create one for the project:

```sh
conda create -n vr2f python=3.10
```

And activate it:

```sh
conda activate vr2f
```

Then install the code of the research project as a Python package:

```sh
# assuming your current working directory is the project root
pip install -e .
```

**Note:** The `-e` flag installs the package in editable mode, i.e., changes to the code will be directly reflected in the installed package.
Download the data set (or parts of it) from EDMOND:
https://doi.org/10.17617/3.CQ2VXX
There are data-README files on EDMOND which explain which files/folders you might want to work with (the entire data set is quite large).
Put it next to the `code/` folder, in a folder called `data/`. Your working tree should look like this:
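```
<project root>
├── code/
└── data/
```

(Only the top level is shown; subfolders such as `data/eeg/00_raw_fif` live inside `data/`.)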
If you want to start with the raw, continuous files, use the data sets in `data/eeg/00_raw_fif` and run my preprocessing on them.
For the decoding, use `decoding.py` from the command line (or use the HPC script).
Example:

```sh
python3.10 ./code/vr2f/decoding/decoding.py 0 emotion all
```

This will run the decoding of the emotional expressions on trials from both (all) viewing conditions (mono and stereo) for the first participant.
Arguments
| Pos | Name | Type | Choices / Values | Description |
|---|---|---|---|---|
| 0 | `participant_idx` | int | 0–33 | Participant index. |
| 1 | `contrast` | str | `emotion`, `emotion_pairwise`, `viewcond`, `avatar_id` | What to decode:<br>· `emotion` – multiclass facial expression<br>· `emotion_pairwise` – all binary pairs of expressions<br>· `viewcond` – depth condition (mono vs. stereo)<br>· `avatar_id` – stimulus identity |
| 2 | `viewcond` | str | `mono`, `stereo`, `all` | Trial subset to include. |
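To run the decoding for the whole sample, you can loop over the argument values from the table above; a minimal shell sketch (assuming the same call signature as in the example):

```sh
# decode the facial expressions for all 34 participants,
# separately for mono, stereo, and all trials
for idx in $(seq 0 33); do
  for vc in mono stereo all; do
    python3.10 ./code/vr2f/decoding/decoding.py "$idx" emotion "$vc"
  done
done
```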
For cross-decoding (only implemented for decoding the emotional expressions), run `crossdecoding.py` from the command line (or via HPC). It takes a single command-line argument: the participant index (0–33).
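For example (assuming `crossdecoding.py` lives next to `decoding.py`; adjust the path to your checkout):

```sh
# cross-decoding of the emotional expressions for the first participant
python3.10 ./code/vr2f/decoding/crossdecoding.py 0
```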
To calculate the source activation timecourses, run `decoding_calc_source_timecourses.py` (one command-line argument: the participant index).
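For example (again assuming the script sits next to `decoding.py`):

```sh
# source activation timecourses for the first participant
python3.10 ./code/vr2f/decoding/decoding_calc_source_timecourses.py 0
```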
To plot the decoding results, check out the notebook `decoding_plotters.ipynb`.
For nice and interactive 3D plots of the sources, use `plot_stc.py`.
For the behavioral analyses, use the notebook `behavior_analysis.ipynb`.
For the eye-tracking analyses, use and follow the instructions in the notebooks in `code/notebooks/eyetracking`.
Klotzsche et al. (2023). Decoding the neural representations of emotional faces in stereo- versus monoscopic viewing conditions. Journal of Vision, 23(9), 5618. https://doi.org/10.1167/jov.23.9.5618 [Poster @VSS 2023]
Klotzsche et al. (2025). EEG-decodability of facial expressions and their stereoscopic depth cues in immersive virtual reality. bioRxiv. https://doi.org/10.1101/2025.08.18.670974 [preprint]
Felix Klotzsche*,
Ammara Nasim,
Simon M. Hofmann,
Arno Villringer,
Vadim V. Nikulin,
Werner Sommer,
Michael Gaebler
* corresponding author
This study was conducted at the Max Planck Institute for Human Cognitive and Brain Sciences as part of the NEUROHUM project.
This repository was structured by the power of scilaunch 🚀.




