Constrained Policy Improvement for Safe and Efficient Reinforcement Learning
Elad Sarafian, Aviv Tamar, Sarit Kraus

We propose a policy improvement algorithm for Reinforcement Learning (RL)
called Rerouted Behavior Improvement (RBI). RBI is designed to take into
account the evaluation errors of the $Q$-function. Such errors are common in
RL when the $Q$-value is learned from finite past experience data. Greedy
policies, or even constrained policy optimization algorithms that ignore
these errors, may suffer from an improvement penalty (i.e., a negative policy
improvement). To minimize the improvement penalty, RBI attenuates rapid
policy changes for low-probability actions, which are sampled less
frequently. This approach is shown to avoid catastrophic performance
degradation and to reduce regret when learning from a batch of past
experience. Through a two-armed bandit example with Gaussian-distributed
rewards, we show that it also increases data efficiency when the optimal
action has high variance. We evaluate RBI on two tasks in the Atari Learning
Environment: (1) learning from observations of multiple behavior policies and
(2) iterative RL. Our results demonstrate the advantage of RBI over greedy
policies and other constrained policy optimization algorithms, both as a safe
learning approach and as a general data-efficient learning algorithm. An
anonymous GitHub repository with our RBI implementation is available at
https://github.com/eladsar/rbi.
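To make the rerouting idea concrete, here is a minimal sketch of a constrained
improvement step of this flavor for a single state, assuming a per-action
ratio constraint of the form c_min * beta(a) <= pi(a) <= c_max * beta(a)
around the behavior policy beta. The function name, the bound parameters
c_min and c_max, and the numbers in the example are illustrative and are not
taken from the paper.

```python
import numpy as np

def reroute_improvement(beta, q, c_min=0.5, c_max=1.5):
    """Constrained policy improvement for a single state (sketch).

    Maximizes sum_a pi[a] * q[a] subject to
    c_min * beta[a] <= pi[a] <= c_max * beta[a] and sum_a pi[a] == 1,
    so low-probability (rarely sampled) actions can only change slowly.
    Assumes c_min <= 1 <= c_max so that the constraint set is non-empty.
    """
    beta = np.asarray(beta, dtype=float)
    q = np.asarray(q, dtype=float)
    lo, hi = c_min * beta, c_max * beta

    # Start every action at its lower bound, then greedily reroute the
    # remaining probability mass toward the highest-valued actions.
    pi = lo.copy()
    budget = 1.0 - pi.sum()
    for a in np.argsort(q)[::-1]:   # actions in descending order of Q
        add = min(hi[a] - pi[a], budget)
        pi[a] += add
        budget -= add
        if budget <= 1e-12:
            break
    return pi


# A greedy policy would put all of its mass on the action with the highest
# (possibly overestimated) Q-value; the rerouted policy shifts probability
# toward it only within the allowed ratio bounds.
beta = np.array([0.7, 0.2, 0.1])      # behavior policy
q = np.array([0.0, 1.0, 5.0])         # noisy Q estimates; the rare action looks best
print(reroute_improvement(beta, q))   # [0.55, 0.30, 0.15]
```

The greedy fill is the standard solution of a linear objective over a box
intersected with the simplex; the ratio bounds are what keep rarely sampled
actions from changing too quickly in a single improvement step.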
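The two-armed bandit claim can be illustrated with a short, self-contained
calculation that is not from the paper: when each arm's mean reward is
estimated from a finite batch of Gaussian samples, the difference of the two
empirical means is itself Gaussian, so the probability that a purely greedy
update commits to the wrong arm grows with the variance of the optimal arm.

```python
import math

def p_greedy_picks_wrong_arm(mu_opt, sigma_opt, mu_sub, sigma_sub, n):
    """Probability that, after n Gaussian reward samples per arm, the
    empirical mean of the optimal arm falls below that of the suboptimal
    arm, so a greedy policy derived from the batch picks the wrong arm.
    The difference of empirical means is N(gap, (sigma_opt^2 + sigma_sub^2) / n).
    """
    gap = mu_opt - mu_sub
    std = math.sqrt((sigma_opt ** 2 + sigma_sub ** 2) / n)
    # Standard normal CDF evaluated at -gap / std.
    return 0.5 * (1.0 + math.erf((-gap / std) / math.sqrt(2.0)))

# Illustrative numbers: the optimal arm has a slightly higher mean but a
# much noisier reward, so a small batch often misidentifies it.
for n in (10, 100, 1000):
    print(n, p_greedy_picks_wrong_arm(mu_opt=1.0, sigma_opt=3.0,
                                      mu_sub=0.8, sigma_sub=0.2, n=n))
```

Under this reading, bounding the per-step policy change limits how far a
single misleading batch can push the policy away from the noisier optimal
arm, which is consistent with the reduced-regret claim above.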