A Probabilistic Reinforcement-Based Approach to Conceptualization

Conceptualization strengthens intelligent systems in generalization, effective knowledge representation, real-time inference, and handling of uncertain and ill-defined situations, in addition to facilitating knowledge communication for learning agents situated in the real world. Concept learning introduces a form of abstraction in which the continuous state space is organized into entities called concepts that are linked to the action space, thereby providing a compact representation of a complex action space. Among computational concept learning approaches, action-based conceptualization is favored because of its simplicity and its mirror-neuron foundations in neuroscience. In this paper, a new biologically inspired concept learning approach based on a probabilistic framework is proposed. The approach exploits and extends the role of mirror neurons in conceptualization for a reinforcement learning agent acting in nondeterministic environments. In the proposed method, instead of building a large body of numerical knowledge, concepts are learned gradually from rewards through interaction with the environment. Moreover, the probabilistic formulation of concepts is employed to cope with the uncertain and dynamic nature of real problems, in addition to providing generalization ability. Taken together, these characteristics distinguish the proposed learning algorithm from both a pure classification algorithm and typical reinforcement learning. Simulation results show the advantages of the proposed framework in terms of convergence speed, generalization, and asymptotic behavior, owing to the use of both successful and failed attempts through the received rewards. Experimental results further demonstrate the applicability and effectiveness of the proposed method in continuous and noisy environments for a real robotic task such as maze navigation, as well as the benefits of an incremental learning scenario for artificial agents.
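To make the high-level description more concrete, the sketch below illustrates one way a reward-driven, probabilistic concept learner of this general kind can be organized. It is not taken from the paper; the class names, the novelty threshold, and the specific update rules are illustrative assumptions. Each concept is modeled as a diagonal-Gaussian region of the continuous state space paired with softmax action preferences; a new concept is spawned when no existing concept explains the current state well enough, and the preferences of the selected concept are reinforced (or weakened) by the scalar reward.

```python
import numpy as np

class Concept:
    """A concept: a Gaussian region of the state space paired with
    action preferences that are shaped by reward (illustrative sketch)."""
    def __init__(self, state, n_actions, rng):
        self.mean = np.asarray(state, dtype=float)   # centre of the concept in state space
        self.var = np.ones_like(self.mean)           # per-dimension variance
        self.pref = np.zeros(n_actions)              # reward-shaped action preferences
        self.count = 1.0
        self.rng = rng

    def likelihood(self, state):
        # Diagonal-Gaussian likelihood of the state under this concept.
        d = (np.asarray(state, dtype=float) - self.mean) ** 2 / (2.0 * self.var)
        return float(np.exp(-d.sum()) / np.sqrt(np.prod(2.0 * np.pi * self.var)))

    def act(self, temperature=1.0):
        # Softmax action selection over the reward-shaped preferences.
        p = np.exp(self.pref / temperature)
        p /= p.sum()
        return int(self.rng.choice(len(self.pref), p=p))

    def update(self, state, action, reward, lr=0.1):
        # Pull the concept towards the states where it is used and
        # reinforce (or weaken) the chosen action according to the reward.
        state = np.asarray(state, dtype=float)
        self.count += 1.0
        self.mean += (state - self.mean) / self.count
        self.var += ((state - self.mean) ** 2 - self.var) / self.count
        self.var = np.maximum(self.var, 1e-3)        # keep the Gaussian from collapsing
        self.pref[action] += lr * reward


class ConceptLearner:
    """Maintains a growing set of concepts; a new concept is created when
    no existing one explains the current state well enough (assumed rule)."""
    def __init__(self, n_actions, novelty_threshold=1e-3, seed=0):
        self.n_actions = n_actions
        self.novelty_threshold = novelty_threshold
        self.rng = np.random.default_rng(seed)
        self.concepts = []

    def select(self, state):
        if not self.concepts:
            self.concepts.append(Concept(state, self.n_actions, self.rng))
            return self.concepts[-1]
        best = max(self.concepts, key=lambda c: c.likelihood(state))
        if best.likelihood(state) < self.novelty_threshold:
            best = Concept(state, self.n_actions, self.rng)
            self.concepts.append(best)
        return best

    def step(self, state, reward_fn):
        # reward_fn is a hypothetical callback returning a scalar reward
        # for the chosen (state, action) pair, standing in for the environment.
        concept = self.select(state)
        action = concept.act()
        reward = reward_fn(state, action)
        concept.update(state, action, reward)
        return action, reward
```

In this sketch the concepts play the role of the abstraction layer described above: they summarize regions of the continuous state space and carry the action knowledge, so learning happens at the level of concepts rather than individual states.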
