About QUB-PHEO
One of the core stages of efficient human-robot collaboration (HRC) is human-intention inference, which enables robots to anticipate and respond to human actions seamlessly. Existing approaches often rely on rule-based models or handcrafted heuristics, which lack adaptability to dynamic environments. In contrast, learning-based approaches leverage data-driven models to infer human intent, but their effectiveness depends on the availability of high-quality, multi-view datasets that capture rich spatiotemporal cues. To address this, we introduce QUB-PHEO, a novel visual-based dyadic multi-view dataset designed to enhance intention inference in HRC. The dataset consists of synchronized multi-view recordings of 70 participants performing 36 distinct assembly subtasks, with fine-grained labels for action recognition, gaze estimation, and object tracking. By enabling deep learning models to learn intent prediction from diverse viewpoints, QUB-PHEO is a step towards proactive and adaptive robotic collaboration in real-world settings.
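As a rough illustration of how synchronized multi-view recordings might be consumed, the sketch below groups per-view clips of the same participant and subtask so that all camera views of one assembly action can be processed together. The file-naming fields (`participant`, `subtask`, `view`) are hypothetical and do not reflect the actual QUB-PHEO directory layout; see the dataset documentation for the real structure.

```python
from collections import defaultdict

# Hypothetical clip records; the real QUB-PHEO layout may differ.
# Each entry stands in for one camera view of one recorded subtask.
clips = [
    {"participant": "P01", "subtask": "place_block", "view": "cam_front"},
    {"participant": "P01", "subtask": "place_block", "view": "cam_side"},
    {"participant": "P01", "subtask": "hand_over",   "view": "cam_front"},
]

def group_views(records):
    """Group clips by (participant, subtask) so that all synchronized
    camera views of one subtask can be consumed together."""
    grouped = defaultdict(list)
    for r in records:
        grouped[(r["participant"], r["subtask"])].append(r["view"])
    return dict(grouped)

views = group_views(clips)
# views[("P01", "place_block")] now lists every camera view of that subtask.
```

Grouping by a shared (participant, subtask) key is one common way to feed multi-view models, since each training sample then carries all viewpoints of the same moment in time.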
Thank you for your interest in our dataset.
If you are looking for our QUB-PHEO dataset preprocessing pipeline, please refer to this repository.
To request access to the dataset, please fill out this form and send it to s.mcloone@qub.ac.uk for approval. Please specify the intended use of the dataset and the name of your research group or institution.
We are currently working on the dataset documentation, which will be available soon.
If you find this dataset useful, please cite the following paper:
@ARTICLE{qub_pheo_2024,
  author={Adebayo, Samuel and McLoone, Seán and Dessing, Joost C.},
  journal={IEEE Access},
  title={QUB-PHEO: A Visual-Based Dyadic Multi-View Dataset for Intention Inference in Collaborative Assembly},
  year={2024},
  volume={12},
  pages={157050-157066},
  doi={10.1109/ACCESS.2024.3485162}
}
