
Dynamic Facial Projection Mapping (DFPM) overlays computer-generated images onto human faces, creating immersive experiences used in the makeup and entertainment industries. In this study, we propose two concepts to reduce misalignment artifacts between projected images and target faces, a persistent challenge in DFPM.


Our first concept is a high-speed face-tracking method that exploits temporal information. We first introduce a cropped-area-limited, inter/extrapolation-based face detection framework, which allows face detection to run in parallel with facial landmark detection. We then propose a novel hybrid facial landmark detection method that combines fast Ensemble of Regression Trees (ERT)-based detections with an auxiliary detection. The ERT-based detections produce results in only 0.107 ms by exploiting temporal information, while the auxiliary detection provides recovery from detection errors. To train the facial landmark detection method, we propose a method for simulating high-frame-rate video annotations, addressing the lack of publicly available high-frame-rate annotated datasets.

Our second concept is a lens-shift co-axial projector-camera setup that maintains high optical alignment, with only a 1.274-pixel error over depths from 1 m to 2 m. By applying the same lens-shift optical design to both the projector and the camera, this setup avoids the large misalignment of conventional co-axial configurations.
Based on these concepts, we developed a novel high-speed DFPM system whose alignment errors are nearly imperceptible to human vision.



Reference
- Hao-Lun Peng, Kengo Sato, Soran Nakagawa, Yoshihiro Watanabe: Perceptually-Aligned Dynamic Facial Projection Mapping by High-Speed Face-Tracking Method and Lens-Shift Co-Axial Setup, IEEE Transactions on Visualization and Computer Graphics, DOI: 10.1109/TVCG.2025.3527203 (2025)