I Know What You Meant: Learning Human Objectives by (Under)estimating Their Choice Set
attributed to: Ananth Jonnavittula, Dylan P. Losey

Assistive robots have the potential to help people perform everyday tasks.
However, these robots first need to learn what it is their user wants them to
do. Teaching assistive robots is hard for inexperienced users, elderly users,
and users living with physical disabilities, since these individuals are often
unable to show the robot their desired behavior. We know that inclusive
learners should give human teachers credit for what they cannot demonstrate.
But today's robots do the opposite: they assume every user is capable of
providing any demonstration. As a result, these robots learn to mimic the
demonstrated behavior, even when that behavior is not what the human really
meant! Here we propose a different approach to reward learning: robots that
reason about the user's demonstrations in the context of similar or simpler
alternatives. Unlike prior work, which errs towards overestimating the
human's capabilities, here we err towards underestimating what the human can
input (i.e., their choice set). Our theoretical analysis proves that
underestimating the human's choice set is risk-averse, with better worst-case
performance than overestimating. We formalize three properties to generate
similar and simpler alternatives. Across simulations and a user study, our
resulting algorithm better extrapolates the human's objective. See the user
study here: https://youtu.be/RgbH2YULVRo
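To make the proposed inference concrete, below is a minimal sketch of reward learning over an explicit choice set. It assumes a Boltzmann-rational choice model with linear rewards, a common formulation for this kind of learning; the feature values, candidate weights, and function names are illustrative stand-ins, not the paper's implementation.

# Minimal sketch of reward learning over an explicit choice set. Assumptions
# (not from the paper): Boltzmann-rational choices, linear rewards
# r_theta(xi) = theta . phi(xi), and illustrative feature values.
import numpy as np

def demo_likelihood(choice_set, theta, beta=5.0):
    """P(demonstration | theta, choice_set); the demonstration is row 0."""
    rewards = choice_set @ theta            # reward of every alternative
    logits = beta * rewards
    logits -= logits.max()                  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs[0]

def posterior(choice_set, thetas, beta=5.0):
    """Posterior over candidate reward weights (uniform prior, one demo)."""
    lik = np.array([demo_likelihood(choice_set, th, beta) for th in thetas])
    return lik / lik.sum()

# Features: [task completion, ease of motion]. The user values completion but
# can only provide a partial, easy demonstration.
demo     = np.array([0.4, 0.8])
expert   = np.array([1.0, 0.3])              # something this user cannot input
simpler1 = np.array([0.2, 0.9])
simpler2 = np.array([0.0, 1.0])

thetas = [np.array([1.0, 0.0]),              # "values completion" (true objective)
          np.array([0.0, 1.0])]              # "values ease"

over_set  = np.stack([demo, expert, simpler1])     # overestimated choice set
under_set = np.stack([demo, simpler1, simpler2])   # underestimated choice set

print("P(completion | overestimate)  =", posterior(over_set, thetas)[0])   # low
print("P(completion | underestimate) =", posterior(under_set, thetas)[0])  # high

In this toy example the overestimated choice set contains an expert trajectory the user could never have provided, so the low-completion demonstration reads as evidence against the true objective; the underestimated set contains only similar or simpler alternatives, and the same demonstration then correctly supports it. This is the intuition behind the risk-averse, worst-case argument summarized above.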