The change of the user's viewpoint in an immersive virtual environment, called locomotion, is one of the key components in a virtual reality interface. Effects of locomotion, such as simulator sickness or disorientation, depend on the specific design of the locomotion method and can influence task performance as well as the overall acceptance of the virtual reality system. Thus, it is important that a locomotion method achieves the intended effects. The complexity of this task has increased with the growing number of locomotion methods and design choices in recent years. Locomotion taxonomies are classification schemes that group multiple locomotion methods and can aid in the design and selection of locomotion methods. Like locomotion methods themselves, there exist multiple locomotion taxonomies, each with a different focus and, consequently, a different possible outcome. However, there is little research that focuses on locomotion taxonomies.

Session Chair: Chong Cao

Exploring the Social Influence of Virtual Humans Unintentionally Conveying Conflicting Emotions

In this paper, we explore how virtual replicas can enhance MR remote collaboration with a 3D reconstruction of the task space and study how they can work as a spatial cue to improve MR remote collaboration. Our approach segments the foreground manipulable objects in the local environment and creates virtual replicas of them. The remote user can manipulate them to explain the task and guide the partner, who can rapidly and accurately understand the remote expert's intentions and instructions. Our user study found that using virtual replica manipulation was more efficient than using 3D annotation drawing in MR remote collaboration. We report and discuss the findings and limitations of our system and study, and directions for future research.
Session Chair: Hai-Ning Liang

Comparing Visual Attention with Leading and Following Virtual Agents in a Collaborative Perception-Action Task in VR

We propose a multi-sensor fusion method for capturing challenging 3D human motions with accurate consecutive local poses and global trajectories in large-scale scenarios, using only a single LiDAR and 4 IMUs, which are set up conveniently and worn lightly. Moreover, we collect a LiDAR-IMU multi-modal mocap dataset, LIPD, with diverse human actions in long-range scenarios. Extensive quantitative and qualitative experiments on LIPD and other open datasets demonstrate the capability of our approach for compelling motion capture in large-scale scenarios, outperforming other methods by an obvious margin. We will release our code and captured dataset to stimulate future research.
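The abstract above does not describe how the LiDAR and IMU estimates are combined, so the following is only a generic illustration of one classic building block for multi-sensor fusion: inverse-variance weighting of two independent position estimates. The function name `fuse_position` and the scalar-variance noise model are assumptions for this sketch, not the paper's actual method.

```python
import numpy as np

def fuse_position(imu_pos, lidar_pos, imu_var, lidar_var):
    """Inverse-variance weighted fusion of two independent 3D position
    estimates (a hypothetical helper, not the paper's algorithm).

    imu_pos, lidar_pos: length-3 position estimates.
    imu_var, lidar_var: scalar variances of each estimate.
    Returns the fused position and its (reduced) variance.
    """
    w_imu = 1.0 / imu_var
    w_lidar = 1.0 / lidar_var
    fused = (w_imu * np.asarray(imu_pos, dtype=float)
             + w_lidar * np.asarray(lidar_pos, dtype=float)) / (w_imu + w_lidar)
    fused_var = 1.0 / (w_imu + w_lidar)  # fusion never increases uncertainty
    return fused, fused_var

# With equal variances the fusion is a plain average; with a more
# confident IMU estimate, the result is pulled toward the IMU.
fused, var = fuse_position([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], 1.0, 1.0)
```

Real LiDAR-IMU systems typically run a full state estimator (e.g. a Kalman filter over pose, velocity, and IMU biases), but the same inverse-variance principle underlies the update step.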