Project 6: Comparative Analysis of Optical, Inertial, and VR Tracking Modalities for Human-Centered Simulation
This REU project focuses on benchmarking and validating diverse motion capture technologies to optimize human-in-the-loop simulations. Students will develop a unified testing framework to compare four distinct tracking modalities: a high-fidelity 12-camera OptiTrack optical system (serving as the ground truth), consumer-grade HTC Vive trackers, the inertial-based Noitom motion capture suit, and the markerless, depth-sensing Microsoft Kinect. The project integrates sensor fusion, rigid body mechanics, computer vision, Unity/Unreal engine development, and statistical error analysis to evaluate trade-offs in precision, drift, occlusion, and latency. Students will gain essential hardware and software skills in Extended Reality (XR), biomechanics, digital twinning, and physiological computing. The comparative data will be used to establish best practices for selecting the right tracking tool, whether the priority is sub-millimeter accuracy or markerless ease of use, for industrial training and virtual rehabilitation scenarios. Figure 1 shows the OptiTrack cameras on the truss, the participant wearing the Noitom suit and Vive trackers, and the Kinect sensor positioned for markerless depth sensing.
Figure 1: The multi-modal motion capture setup.
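As one concrete instance of the statistical error analysis described above, positional accuracy of a consumer tracker can be scored against the OptiTrack ground truth with a per-sample root-mean-square error (RMSE). The sketch below assumes the two streams have already been time-aligned and spatially registered; the function name and the sample coordinates are hypothetical illustrations, not project data.

```python
import math

def rmse(ground_truth, measured):
    """Root-mean-square 3D position error (mm) of a tracker versus
    the ground-truth system, given time-aligned sample pairs."""
    assert len(ground_truth) == len(measured), "streams must be aligned"
    # Squared Euclidean distance between each paired sample.
    sq_errors = [
        sum((g - m) ** 2 for g, m in zip(gt, ms))
        for gt, ms in zip(ground_truth, measured)
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Hypothetical time-aligned samples (mm) for one rigid-body marker.
optitrack = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (20.0, 0.0, 0.0)]
vive      = [(0.5, 0.2, 0.1), (10.4, -0.3, 0.2), (19.6, 0.1, -0.2)]

print(f"Vive RMSE vs. OptiTrack: {rmse(optitrack, vive):.3f} mm")
```

Repeating the same computation for the Noitom suit and Kinect streams, and over sliding time windows, also exposes drift (RMSE growing over time) rather than just average accuracy.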