Learning Kinematic Machine Models from Videos for VR/AR training

Thies LTN, Stamminger M, Bauer F (2020)


Publication Language: English

Publication Type: Conference contribution, Original article

Publication year: 2020

Publisher: IEEE

Conference Proceedings Title: 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)

Event location: Utrecht, Netherlands

ISBN: 978-1-7281-7463-1

URI: https://www.lgdv.tf.fau.de/?p=2280

DOI: 10.1109/AIVR50618.2020.00028

Abstract

VR/AR applications, such as virtual training or coaching, often require a digital twin of a machine. Such a digital twin must also include a kinematic model that defines its motion behavior. This behavior is usually expressed by constraints in a physics engine. In this paper, we present a system that automatically derives the kinematic model of a machine from RGB video with an optional depth channel. Our system records a live session while a user performs all typical machine movements. It then searches for trajectories and converts them into linear, circular, and helical constraints. Our system can also detect kinematic chains and coupled constraints, for example, when a crank moves a toothed rod.
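The core step the abstract describes, classifying an observed trajectory as a particular constraint type, can be sketched in miniature. The function below is an illustrative assumption, not the paper's actual method: it only distinguishes 2D linear from circular motion (the paper also handles helical motion, depth data, kinematic chains, and coupled constraints), using a simple chord-deviation test and a three-point circumcenter fit.

```python
import math

def classify_trajectory(points, tol=1e-3):
    """Toy classifier: label a 2D point trajectory 'linear' or 'circular'.

    Illustrative stand-in for constraint fitting: if the points barely
    deviate from the chord between the endpoints, treat the motion as
    linear; otherwise fit a circle through three samples and check for
    a constant radius.
    """
    (x0, y0), (xn, yn) = points[0], points[-1]
    chord = math.hypot(xn - x0, yn - y0)
    # Maximum perpendicular distance of any point from the endpoint chord.
    dev = max(abs((xn - x0) * (y0 - py) - (x0 - px) * (yn - y0)) / chord
              for px, py in points)
    if dev < tol * chord:
        return "linear"
    # Circumcenter of the first, middle, and last samples.
    (ax, ay), (bx, by), (cx, cy) = points[0], points[len(points) // 2], points[-1]
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = math.hypot(ax - ux, ay - uy)
    # Accept the circular hypothesis if all points lie near that radius.
    radial_dev = max(abs(math.hypot(px - ux, py - uy) - r) for px, py in points)
    return "circular" if radial_dev < tol * r else "unknown"
```

A real system would fit constraints robustly over noisy tracked feature points (e.g. with RANSAC) rather than through three exact samples, and would express the result as a physics-engine joint.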

How to cite

APA:

Thies, L.T.-N., Stamminger, M., & Bauer, F. (2020). Learning Kinematic Machine Models from Videos for VR/AR training. In 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). Utrecht, Netherlands: IEEE.

MLA:

Thies, Lucas Till-Nicolas, Marc Stamminger, and Frank Bauer. "Learning Kinematic Machine Models from Videos for VR/AR training." Proceedings of the 3rd International Conference on Artificial Intelligence & Virtual Reality, Utrecht, Netherlands, IEEE, 2020.
