Humans issue control signals to robot systems in contexts ranging from teleoperation to instruction to shared autonomy, and in domains as varied as space exploration and assistive robotics. Issuing a control signal to a robot platform requires physical actuation of an interface—whether via overt body movement or electrical signals from the muscles or brain.

However, robot control systems are overwhelmingly agnostic to interface source and actuation mechanism: a velocity command is handled the same regardless of whether it derives from joystick deflection or sip-and-puff respiration. Yet deviations—in magnitude, direction, or timing—between the signal the human intended and the signal the autonomy receives can ripple throughout a control system.

Our premise is that when robot systems that depend on human input do not account for the physical source of the human control signal—the human's physical capabilities, the interface's actuation mechanism, and limitations in signal transmission—they impose a fundamental, and artificial, upper limit on team synergy and success.

In this project, we propose a framework to model, novel algorithmic work to engender, and extensive human-subjects studies to evaluate interface-awareness in robot teleoperation and autonomy. Our work will demonstrate both the need for and the utility of interface-aware robotic intelligence.

Building on our seminal work [46], we relax a number of assumptions and constraints in the initial formulation. Specifically: (1) We scale up the control space and complexity of the robot operation, from a 3-DoF virtual point robot to a 7-DoF real hardware robotic arm. This dramatically complicates teleoperation using a 1-D sip-and-puff control interface. (2) We no longer assume the human's policy to be known, and make only minimal assumptions on this policy (relating solely to safety). In case study evaluations with two participants with spinal cord injuries operating the robotic arm, safety increased, even though the assistance was undetectable to the participants [59].
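To give a sense of why a 1-D interface makes control of a 7-DoF arm so challenging, the sketch below shows one common mode-switching scheme: soft sips and puffs drive a single active degree of freedom, while hard sips and puffs cycle which degree of freedom is active. This is an illustrative assumption, not the mapping used in the cited work; all class names, thresholds, and the deadband value are hypothetical.

```python
# Illustrative sketch (NOT the authors' implementation): a minimal
# mode-switching scheme for driving a multi-DoF arm with a 1-D
# sip-and-puff interface. Thresholds and names are assumptions.
from dataclasses import dataclass


@dataclass
class SipPuffTeleop:
    """Maps a 1-D pressure signal to a velocity command on one DoF at a time."""
    num_dofs: int = 7            # e.g., a 7-DoF robotic arm
    hard_threshold: float = 0.6  # |pressure| above this switches control modes
    deadband: float = 0.05       # rejects small sensor noise around zero
    speed: float = 0.1           # commanded velocity magnitude
    mode: int = 0                # index of the currently controlled DoF

    def command(self, pressure: float) -> list[float]:
        """pressure in [-1, 1]: negative = sip, positive = puff."""
        velocity = [0.0] * self.num_dofs
        if abs(pressure) >= self.hard_threshold:
            # Hard sip/puff: cycle backward/forward through control modes.
            step = 1 if pressure > 0 else -1
            self.mode = (self.mode + step) % self.num_dofs
        elif abs(pressure) > self.deadband:
            # Soft sip/puff: drive only the active DoF, in the signed direction.
            velocity[self.mode] = self.speed * (1 if pressure > 0 else -1)
        return velocity
```

The key point the sketch makes concrete: at any instant only one of seven dimensions is directly controllable, so the operator must interleave motion commands with frequent mode switches—exactly the burden that interface-aware assistance aims to reduce.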

Funding Source: National Science Foundation (NSF/FRR-2208011 Interface-Aware Intelligence for Robot Teleoperation and Autonomy)

© argallab