This dataset contains spatially aligned and temporally synchronized recordings of random human movements, captured by a marker-based motion capture system and eight video cameras. Its main purpose is to benchmark the 3D accuracy of multi-camera human motion capture methods.
You can download the dataset here or via the mirror link. The password can be found in the paper.
WARNING: All the files total ~21 GB.
The runnable Python source code is available here. It simply plays back a video with an overlay of 2D marker projections, which should help you understand our file structure so you can build on it.
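The overlay in that playback script boils down to projecting 3D marker positions into each camera's image plane. As a minimal sketch (not the distributed code), here is a pinhole projection without lens distortion; the function name, matrix conventions, and example calibration values are illustrative assumptions, not taken from the dataset:

```python
import numpy as np

def project_markers(points_3d, K, R, t):
    """Project Nx3 world-frame marker positions to 2D pixel
    coordinates with a pinhole model (no lens distortion).
    K: 3x3 intrinsics; R, t: world-to-camera extrinsics."""
    # Transform world points into the camera frame.
    cam = points_3d @ R.T + t            # (N, 3)
    # Perspective division onto the normalized image plane.
    uv = cam[:, :2] / cam[:, 2:3]        # (N, 2)
    # Apply focal lengths and principal point.
    return uv @ K[:2, :2].T + K[:2, 2]   # (N, 2) pixels

# Hypothetical calibration: identity extrinsics, so a marker 2 m
# straight ahead lands on the principal point (640, 360).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
markers = np.array([[0.0, 0.0, 2.0]])
print(project_markers(markers, K, R, t))  # [[640. 360.]]
```

The actual scripts may use OpenCV's `cv2.projectPoints`, which additionally handles lens distortion coefficients; the linear model above is the core idea.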
If you benefit from our dataset, please cite our publication. The volume and issue number will be updated after the official release. Note that you can cite the early access article following this IEEE guideline.
@ARTICLE{10591328,
  author={Jatesiktat, Prayook and Lim, Guan Ming and Lim, Wee Sen and Ang, Wei Tech},
  journal={IEEE Journal of Biomedical and Health Informatics},
  title={Anatomical-Marker-Driven 3D Markerless Human Motion Capture},
  year={2024},
  volume={},
  number={},
  pages={1-14},
  keywords={Three-dimensional displays;Solid modeling;Motion capture;Feature extraction;Deep learning;Cameras;Accuracy;Anatomical landmarks;biomechanics;data collection;deep learning;markerless motion capture},
  doi={10.1109/JBHI.2024.3424869}
}
We are from Nanyang Technological University.