Conservative Agency via Attainable Utility Preservation
attributed to: Alexander Matt Turner, Dylan Hadfield-Menell, Prasad Tadepalli
Reward functions are easy to misspecify; although designers can make
corrections after observing mistakes, an agent pursuing a misspecified reward
function can irreversibly change the state of its environment. If that change
precludes optimization of the correctly specified reward function, then
correction is futile. For example, a robotic factory assistant could break
expensive equipment due to a reward misspecification; even if the designers
immediately correct the reward function, the damage is done. To mitigate this
risk, we introduce an approach that balances optimization of the primary reward
function with preservation of the ability to optimize auxiliary reward
functions. Surprisingly, even when the auxiliary reward functions are randomly
generated and therefore uninformative about the correctly specified reward
function, this approach induces conservative, effective behavior.
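The core idea can be sketched as a modified reward signal: the agent is penalized roughly in proportion to how much an action changes its attainable utility (its Q-value) for each auxiliary reward function, relative to doing nothing. Below is a minimal Python sketch of that idea, assuming Q-functions for the auxiliary rewards are already available; the names (aup_reward, q_aux, noop, lam) are illustrative, and the normalization is a simplification rather than the paper's exact scaling.

    # Illustrative sketch of an attainable-utility-preservation-style penalty.
    # Assumes each auxiliary Q-function maps (state, action) -> estimated value.
    def aup_reward(primary_reward, q_aux, state, action, noop, lam=0.1):
        """Combine the primary reward with a penalty for changing the
        agent's ability to optimize the auxiliary reward functions.

        primary_reward : float, R(s, a) from the (possibly misspecified) reward
        q_aux          : list of auxiliary Q-functions
        noop           : the no-op action used as the baseline
        lam            : trade-off between task reward and the penalty
        """
        # Penalty: total shift in attainable auxiliary utility vs. doing nothing.
        penalty = sum(abs(q(state, action) - q(state, noop)) for q in q_aux)
        # Simple normalization so lam behaves consistently across states
        # (the paper's exact scaling choice may differ).
        scale = sum(abs(q(state, noop)) for q in q_aux) or 1.0
        return primary_reward - lam * penalty / scale

Even with randomly generated auxiliary Q-functions plugged into q_aux, penalizing large shifts in their attainable values discourages drastic, hard-to-reverse changes to the environment, which is the conservative behavior the abstract describes.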
Vulnerabilities & Strengths