Zhao H, Röddiger T, Feng Y, Beigl M (2024). Fit2Ear: Generating Personalized Earplugs from Smartphone Depth Camera Images.
Publication Language: English
Publication Type: Conference contribution
Publication year: 2024
Publisher: Association for Computing Machinery
City/Town: New York, NY
Pages Range: 679-684
Conference Proceedings Title: UbiComp '24: Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing
Event location: Melbourne, VIC
ISBN: 9798400710582
Earphones, due to their deep integration into daily life, have been developed for unobtrusive and ubiquitous health monitoring. However, the underlying algorithms rely heavily on high-quality sensing data, and data collected with universal, one-size-fits-all earplugs can be corrupted by undesirable noise sources such as vibration or the earplug falling off, limiting algorithm performance. In this regard, we build a dataset containing RGBD and IMU data captured by a smartphone. To provide a precise and solid ground truth, we employ additional control information from a robotic arm that holds the smartphone and scans ears along a predefined trajectory. With this dataset, we propose a tightly coupled information-fusion algorithm for ground-truth ear modeling. Finally, we fabricate the earplugs and conduct an end-to-end user study evaluating the wearability of the modeled earplugs.
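The registration step described in the abstract can be illustrated with a minimal toy sketch (our own construction, not the paper's algorithm): known camera poses, such as those reported by a robot arm following a predefined trajectory, are used to transform per-frame depth point clouds into a common world frame before merging. All function and variable names here are illustrative assumptions.

```python
# Illustrative sketch: registering depth-camera point clouds using
# externally known camera poses (e.g., from a robot-arm trajectory).
# This is a toy construction, not the authors' fusion algorithm.
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def fuse_point_clouds(frames, poses):
    """Transform each per-frame point cloud (N_i x 3, camera coordinates)
    into the world frame using its known camera pose, then merge."""
    merged = []
    for pts, T in zip(frames, poses):
        homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # N x 4 homogeneous points
        merged.append((T @ homog.T).T[:, :3])                 # back to N x 3
    return np.vstack(merged)

# Toy example: the same point seen from two camera positions,
# the second camera shifted 0.1 m along x.
frame_a = np.array([[0.0, 0.0, 0.5]])
frame_b = np.array([[0.0, 0.0, 0.5]])
T_a = pose_matrix(np.eye(3), np.zeros(3))
T_b = pose_matrix(np.eye(3), np.array([0.1, 0.0, 0.0]))
cloud = fuse_point_clouds([frame_a, frame_b], [T_a, T_b])
```

In practice such pose priors would be fused with IMU and visual measurements rather than trusted directly; this sketch only shows why precise external poses make the ground-truth reconstruction well posed.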
APA:
Zhao, H., Röddiger, T., Feng, Y., & Beigl, M. (2024). Fit2Ear: Generating personalized earplugs from smartphone depth camera images. In UbiComp '24: Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing (pp. 679-684). New York, NY: Association for Computing Machinery.
MLA:
Zhao, Haibin, et al. "Fit2Ear: Generating Personalized Earplugs from Smartphone Depth Camera Images." UbiComp '24: Companion of the 2024 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Association for Computing Machinery, 2024, pp. 679-684.