Avoiding Side Effects By Considering Future Tasks

Designing reward functions is difficult: the designer has to specify what to do (what it means to complete the task) as well as what not to do (side effects that should be avoided while completing the task). To alleviate the burden on the reward designer, we propose an algorithm to automatically generate an auxiliary reward function that penalizes side effects. This auxiliary objective rewards the ability to complete possible future tasks, which decreases if the agent causes side effects during the current task. However, the future task reward can also give the agent an unintended incentive to interfere with events in the environment that would make future tasks less achievable, such as irreversible actions by other agents. To avoid this interference incentive, we introduce a baseline policy that represents a default course of action (such as doing nothing), and use it to filter out future tasks that are not achievable by default. We formally define interference incentives and show that the future task approach with a baseline policy avoids these incentives in the deterministic case. Using gridworld environments that test for side effects and interference, we show that our method avoids interference and is more effective for avoiding side effects than the common approach of penalizing irreversible actions.
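A minimal sketch may help make the mechanism described above concrete. Everything in the snippet is hypothetical: the goal set, the per-goal value estimates, the zero achievability threshold, and the simple averaging are illustrative simplifications of the abstract's description, not the paper's exact formulation.

```python
"""Sketch of a future-task auxiliary reward with a baseline filter.

Illustrative only: goal set, value estimates, and weighting are
hypothetical simplifications, not the paper's formulation.
"""

from typing import Callable, Dict, Hashable, Iterable

State = Hashable
Goal = Hashable


def future_task_reward(
    agent_state: State,
    baseline_state: State,
    goals: Iterable[Goal],
    value: Callable[[State, Goal], float],
) -> float:
    """Reward the ability to complete possible future tasks (goals).

    `value(s, g)` estimates how well goal `g` can be achieved from state
    `s` (e.g. an optimal value function learned per goal). Goals that are
    not achievable from the baseline state -- the state reached by a
    default policy such as doing nothing -- are filtered out, so the agent
    is not rewarded for interfering with events it did not cause.
    """
    total, count = 0.0, 0
    for g in goals:
        if value(baseline_state, g) <= 0.0:
            # Not achievable by default: skipping it removes the incentive
            # to interfere in order to keep this goal reachable.
            continue
        total += value(agent_state, g)
        count += 1
    return total / count if count else 0.0


if __name__ == "__main__":
    # Toy example with hand-specified achievability values.
    # States: 'clean' (vase intact) vs. 'broken' (vase knocked over);
    # the baseline state is what results from the agent doing nothing.
    values: Dict[tuple, float] = {
        ("clean", "use_vase"): 1.0,
        ("broken", "use_vase"): 0.0,   # the side effect removes this option
        ("clean", "reach_door"): 1.0,
        ("broken", "reach_door"): 1.0,
    }
    v = lambda s, g: values[(s, g)]
    goals = ["use_vase", "reach_door"]

    # Causing the side effect lowers the auxiliary reward:
    print(future_task_reward("clean", "clean", goals, v))   # 1.0
    print(future_task_reward("broken", "clean", goals, v))  # 0.5
```

The design choice mirrored here is the baseline filter: a future task only contributes to the auxiliary reward if it would remain achievable under the default course of action, so the agent gains nothing by interfering with events (such as other agents' irreversible actions) that it did not cause.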
