Collaborating with Humans without Human Data
attributed to: DJ Strouse, Kevin R. McKee, Matt Botvinick, Edward Hughes, Richard Everett

Collaborating with humans requires rapidly adapting to their individual
strengths, weaknesses, and preferences. Unfortunately, most standard
multi-agent reinforcement learning techniques, such as self-play (SP) or
population play (PP), produce agents that overfit to their training partners
and do not generalize well to humans. Alternatively, researchers can collect
human data, train a human model using behavioral cloning, and then use that
model to train "human-aware" agents ("behavioral cloning play", or BCP). While
such an approach can improve the generalization of agents to new human
co-players, it involves the onerous and expensive step of collecting large
amounts of human data first. Here, we study the problem of how to train agents
that collaborate well with human partners without using human data. We argue
that the crux of the problem is to produce a diverse set of training partners.
Drawing inspiration from successful multi-agent approaches in competitive
domains, we find that a surprisingly simple approach is highly effective. We
train our agent partner as the best response to a population of self-play
agents and their past checkpoints taken throughout training, a method we call
Fictitious Co-Play (FCP). Our experiments focus on a two-player collaborative
cooking simulator that has recently been proposed as a challenge problem for
coordination with humans. We find that FCP agents score significantly higher
than SP, PP, and BCP when paired with novel agent and human partners.
Furthermore, humans report a strong subjective preference for partnering with
FCP agents over all baselines.
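The two-stage procedure described in the abstract can be sketched in code. The following Python sketch is a minimal structural illustration only, not the paper's implementation (which uses deep RL); the Agent class, the environment argument, and the training stubs (train_self_play, fictitious_co_play) are hypothetical placeholders.

import copy
import random
from typing import List

class Agent:
    """Placeholder policy; a real implementation would wrap a neural network."""
    def act(self, observation):
        return random.choice([0, 1, 2, 3])  # dummy discrete action space

def train_self_play(agent: Agent, env, steps: int, checkpoint_every: int) -> List[Agent]:
    """Stage 1: train one agent via self-play, saving periodic checkpoints.

    Early checkpoints serve as low-skill partners, later ones as high-skill
    partners. Returns the list of frozen snapshots.
    """
    checkpoints: List[Agent] = []
    for step in range(steps):
        # ... one self-play RL update of `agent` on `env` would go here ...
        if step % checkpoint_every == 0:
            checkpoints.append(copy.deepcopy(agent))  # freeze a snapshot
    checkpoints.append(copy.deepcopy(agent))  # final, fully trained checkpoint
    return checkpoints

def fictitious_co_play(env, n_partners: int, steps: int, checkpoint_every: int) -> Agent:
    """Stage 2: train a best response to a frozen, diverse partner pool."""
    # Build the pool from independent self-play runs plus their past
    # checkpoints; different seeds yield partners with different conventions.
    partner_pool: List[Agent] = []
    for _ in range(n_partners):
        partner_pool.extend(train_self_play(Agent(), env, steps, checkpoint_every))

    # Train the FCP agent against a randomly sampled frozen partner each
    # episode; partners are never updated during this stage.
    fcp_agent = Agent()
    for _ in range(steps):
        partner = random.choice(partner_pool)
        # ... one co-play episode/update of `fcp_agent` with `partner` ...
    return fcp_agent

Note the key design choice this sketch reflects: diversity in the partner pool comes for free from random seeds (across runs) and skill level (across checkpoints), with no human data collected at any point.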