Current fully autonomous robots are unable to navigate effectively in visually complex environments due to limitations in sensing and cognition. Full teleoperation using current interfaces is difficult, and the operator often makes navigation mistakes due to a lack of operating-environment information and a limited field of view. We present a novel method for combining the sensing and cognition of a robot with those of a human. Our collaborative approach differs from most in that it addresses bi-directional considerations: it gives the human a mechanism to supplement the robot's capabilities in a new and unique way, and it provides novel forms of feedback from the robot to enhance the human's understanding of the system's current state and intentions.
Details
Title
Human-Robot Team Navigation in Visually Complex Environments
Publication Details
2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3043–3050
Resource Type
Conference proceeding
Conference
IEEE/RSJ International Conference on Intelligent Robots and Systems (St. Louis, MO, USA, 10/10/2009–10/15/2009)
Publisher
IEEE
Identifiers
WOS:000285372901185; 99380461296706600
Academic Unit
Office of Teaching, Learning, and Technology; College of Arts, Social Sciences, and Humanities; Center for Cybersecurity and AI