BodyTrak: Inferring Full-body Poses from Body Silhouettes Using a Miniature Camera on a Wristband
Published in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2022
Hyunchul Lim, Yaxuan Li, Matthew Dressa, Fang Hu, Jae Hoon Kim, Ruidong Zhang, and Cheng Zhang. 2022. BodyTrak: Inferring Full-body Poses from Body Silhouettes Using a Miniature Camera on a Wristband. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6, 3, Article 154 (September 2022), 21 pages. https://doi.org/10.1145/3552312
In this paper, we present BodyTrak, an intelligent sensing technology that can estimate full-body poses from a wristband. It requires only one miniature RGB camera to capture body silhouettes, which are learned by a customized deep learning model to estimate the 3D positions of 14 joints on the arms, legs, torso, and head. We conducted a user study with 9 participants in which each participant performed 12 daily activities such as walking, sitting, or exercising, in varying scenarios (wearing different clothes, outdoors/indoors) and with varying camera configurations on the wrist. The results show that our system can infer the full-body pose (3D positions of 14 joints) with an average error of 6.9 cm using only one miniature RGB camera (11.5 mm x 9.5 mm) on the wrist pointing towards the body. Based on the results, we discuss possible applications, challenges, and limitations of deploying our system in real-world scenarios.
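The 6.9 cm figure reported above is the kind of number produced by the standard mean per-joint position error (MPJPE) metric: the Euclidean distance between each predicted and ground-truth joint, averaged over joints and frames. A minimal sketch of that metric (the function name and array shapes are illustrative, not taken from the paper):

```python
import numpy as np

def mean_joint_error(pred, gt):
    """Mean per-joint position error in the same units as the inputs (e.g. cm).

    pred, gt: arrays of shape (n_frames, 14, 3) -- 14 body joints in 3D.
    """
    # Euclidean distance per joint per frame, then average everything.
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy usage: predictions offset from ground truth by 1 cm along the x axis.
gt = np.zeros((5, 14, 3))
pred = gt.copy()
pred[..., 0] += 1.0
print(mean_joint_error(pred, gt))  # 1.0
```

The averaging order (per-joint first, then per-frame) does not matter here because every joint is weighted equally; variants that report per-joint or per-activity breakdowns simply change the axes passed to `mean()`.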