Accepted at ACM SIGGRAPH 2025
Facial expressions are key to human communication, conveying emotions and intentions. Given the rising popularity of digital humans and avatars, the ability to accurately represent facial expressions in real time has become an important topic. However, quantifying perceived differences between pairs of expressions is difficult, and no comprehensive subjective datasets are available for testing. This work introduces a new dataset targeting this problem: FaceExpressions-70k. Obtained via crowdsourcing, our dataset contains 70,500 subjective expression comparisons rated by over 1,000 study participants. We demonstrate the applicability of the dataset for training perceptual expression difference models and guiding decisions on acceptable latency and sampling rates for facial expressions when driving a face avatar.
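The dataset consists of pairwise expression comparisons with subjective ratings, which can be aggregated into per-pair perceived-difference scores for training or evaluation. The sketch below is only an illustration of how such data might be organized and aggregated; the column names (`expr_a`, `expr_b`, `rating`) and toy values are hypothetical and do not reflect the actual FaceExpressions-70k file layout.

```python
# Minimal sketch: aggregating pairwise expression-comparison ratings.
# Column names and values are hypothetical placeholders, not the actual
# FaceExpressions-70k format.
import pandas as pd

# Each row: one participant's rating of the perceived difference
# between two expressions (toy data).
ratings = pd.DataFrame({
    "expr_a": ["smile", "smile", "frown", "frown"],
    "expr_b": ["neutral", "neutral", "neutral", "smile"],
    "rating": [2.0, 3.0, 4.0, 5.0],
})

# Average repeated judgments into a mean perceived difference per pair,
# which could serve as a regression target for a perceptual difference model.
mean_diff = ratings.groupby(["expr_a", "expr_b"])["rating"].mean()
print(mean_diff)
```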
If you use our dataset in your work, please cite:
@inproceedings{saha2025faceexpressions,
  author    = {Saha, Avinab and Chen, Yu-Chih and Bazin, Jean-Charles and H{\"a}ne, Christian and Katsavounidis, Ioannis and Chapiro, Alexandre and Bovik, Alan C},
  title     = {FaceExpressions-70k: A Dataset of Perceived Expression Differences},
  year      = {2025},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  booktitle = {ACM SIGGRAPH 2025 Conference Proceedings},
  series    = {SIGGRAPH '25}
}