Avoiding Side Effects in Complex Environments
attributed to: Alexander Matt Turner, Neale Ratzlaff, Prasad Tadepalli
Reward function specification can be difficult. Rewarding the agent for
making a widget may be easy, but penalizing the multitude of possible negative
side effects is hard. In toy environments, Attainable Utility Preservation
(AUP) avoided side effects by penalizing shifts in the ability to achieve
randomly generated goals. We scale this approach to large, randomly generated
environments based on Conway's Game of Life. By preserving optimal value for a
single randomly generated reward function, AUP incurs modest overhead while
leading the agent to complete the specified task and avoid many side effects.
Videos and code are available at https://avoiding-side-effects.github.io/.
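As a rough illustration of the mechanism the abstract describes (not the authors' implementation), the AUP-shaped reward can be sketched in Python. Here q_aux, noop_action, and lam are hypothetical names, and the no-op normalization is an assumption about the scaling:

    def aup_reward(task_reward, q_aux, state, action, noop_action, lam=0.01):
        # AUP-style shaped reward: penalize the shift in attainable utility
        # for a single randomly generated auxiliary reward function,
        # measured relative to doing nothing.
        # q_aux(state, action) is a learned auxiliary Q-function; dividing by
        # the no-op value keeps the penalty roughly scale-free (an assumption).
        q_act = q_aux(state, action)
        q_noop = q_aux(state, noop_action)
        penalty = abs(q_act - q_noop) / max(abs(q_noop), 1e-8)
        return task_reward - lam * penalty

Under this sketch, increasing lam trades task reward against stronger side-effect avoidance; the abstract's claim is that a suitable setting completes the specified task with only modest overhead.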
Vulnerabilities & Strengths