Agents and Devices: A Relative Definition of Agency

According to Dennett, the same system may be described using a 'physical' (mechanical) explanatory stance or using an 'intentional' (belief- and goal-based) explanatory stance. Humans tend to find the physical stance more helpful for certain systems, such as planets orbiting a star, and the intentional stance for others, such as living animals. We define a formal counterpart of the physical and intentional stances within computational theory: a description of a system as either a device or an agent, with the key difference being that 'devices' are described directly in terms of an input-output mapping, while 'agents' are described in terms of the function they optimise. Bayes' rule can then be applied to calculate the subjective probability of a system being a device or an agent, based only on its behaviour. We illustrate this using the trajectories of an object in a toy grid-world domain.
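To make the Bayes'-rule step concrete, here is a minimal sketch (in Python) of the posterior computation over the two descriptions, assuming two illustrative behaviour models: a 'device' whose input-output mapping is modelled, purely for simplicity, as uniform over actions, and an 'agent' that noisily optimises distance to an assumed goal in a one-dimensional grid world. The goal location, the softmax choice rule, and the uniform device model are hypothetical stand-ins for illustration, not the paper's actual construction.

```python
import math

# Hypothetical 1-D grid world: states 0..4, actions -1 (left) or +1 (right).
GOAL = 4  # assumed goal location under the "agent" hypothesis

def device_likelihood(trajectory):
    """Device hypothesis: a fixed input-output mapping with no goal; modelled
    here, as a simplifying assumption, as uniform over the two actions."""
    return 0.5 ** len(trajectory)

def agent_likelihood(trajectory, beta=2.0):
    """Agent hypothesis: actions noisily optimise progress towards GOAL,
    via a softmax (noisily rational) choice rule -- an illustrative model."""
    p = 1.0
    for state, action in trajectory:
        # Utility of an action = negative distance to GOAL after taking it.
        utilities = {a: -abs(min(max(state + a, 0), 4) - GOAL) for a in (-1, +1)}
        normaliser = sum(math.exp(beta * u) for u in utilities.values())
        p *= math.exp(beta * utilities[action]) / normaliser
    return p

def posterior_agent(trajectory, prior_agent=0.5):
    """Bayes' rule: P(agent | behaviour) from the two likelihoods and a prior."""
    joint_agent = agent_likelihood(trajectory) * prior_agent
    joint_device = device_likelihood(trajectory) * (1.0 - prior_agent)
    return joint_agent / (joint_agent + joint_device)

# A trajectory that moves consistently towards the goal looks agent-like.
trajectory = [(0, +1), (1, +1), (2, +1), (3, +1)]
print(posterior_agent(trajectory))  # about 0.9: evidence favours the intentional stance
```

On this goal-directed trajectory the agent likelihood dominates, so the posterior shifts towards the intentional stance; a trajectory that wanders or moves away from the assumed goal shifts it back towards the device description.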

[1] Stuart J. Russell. Learning agents for uncertain environments (extended abstract), 1998, COLT '98.

[2] Demis Hassabis, et al. Mastering the game of Go with deep neural networks and tree search, 2016, Nature.

[3] Frans M. J. Willems, et al. Switching between two universal source coding algorithms, 1998, Proceedings DCC '98 Data Compression Conference.

[4] Jürgen Schmidhuber. The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions, 2002, COLT.

[5] Marcus Hutter, et al. Loss Bounds and Time Complexity for Speed Priors, 2016, AISTATS.

[6] Ming Li, et al. An Introduction to Kolmogorov Complexity and Its Applications, 1997, Texts in Computer Science.

[7] Eyal Amir, et al. Bayesian Inverse Reinforcement Learning, 2007, IJCAI.

[8] Joel Veness, et al. Context Tree Switching, 2011, 2012 Data Compression Conference.

[9] Richard S. Sutton, et al. Reinforcement Learning: An Introduction, 1998, MIT Press.

[10] S. Legg. Machine Super Intelligence, 2008.

[11] Chris L. Baker, et al. Action understanding as inverse planning, 2009, Cognition.

[12] Andrew Y. Ng, et al. Algorithms for Inverse Reinforcement Learning, 2000, ICML.

[13] E. Koechlin, et al. What Are They Up To? The Role of Sensory Evidence and Prior Knowledge in Action Understanding, 2011, PLoS ONE.

[14] Marcus Hutter. Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability (Texts in Theoretical Computer Science. An EATCS Series), 2006.

[15] Kee-Eung Kim, et al. Hierarchical Bayesian Inverse Reinforcement Learning, 2015, IEEE Transactions on Cybernetics.

[16] Daniel C. Dennett. Intentional Systems Theory, 2009.

[17] Ray J. Solomonoff. A Formal Theory of Inductive Inference. Part I, 1964, Inf. Control.