Abstract State Spaces with History

In this article, we consider learning problems in which the learning agent has only imprecise information about the current state of the environment. To cope with this uncertainty, an abstract representation of the state space is built that can be used to define near-optimal policies. Starting with only a few abstract states, the state space is incrementally refined by employing statistical tests. In parallel with the refinement process, a model-free reinforcement learning algorithm is used to learn a policy.
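
The following is a minimal sketch, not the authors' implementation, of how such incremental refinement can be interleaved with model-free learning. All names (NoisyCorridorEnv, welch_t, MIN_SAMPLES, etc.) and hyper-parameters are illustrative assumptions: abstract states are keyed by a short window of recent observations, Q-learning runs over these abstract states, and a two-sample Welch t-test on observed TD targets decides when an abstract state is too coarse and should be refined by conditioning on one more step of history.

```python
# Sketch: incremental refinement of an abstract state space with history,
# interleaved with Q-learning.  Environment, test, and thresholds are
# illustrative assumptions, not taken from the article.
import random
from collections import defaultdict, deque
from statistics import mean, variance

GAMMA = 0.95        # discount factor
ALPHA = 0.1         # learning rate
EPSILON = 0.1       # exploration rate
T_THRESHOLD = 2.0   # |t| above which two sample groups are judged distinct
MIN_SAMPLES = 20    # samples per group before the test is applied
MAX_HISTORY = 3     # longest observation history an abstract state may use


class NoisyCorridorEnv:
    """Toy corridor with imprecise observations (hypothetical example).

    Hidden state: position 0..4; reward +1 for reaching position 4.
    Observation: 'start', 'middle' (positions 1-3 all look alike), 'goal'.
    """

    def __init__(self):
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.observe()

    def observe(self):
        if self.pos == 0:
            return "start"
        if self.pos == 4:
            return "goal"
        return "middle"

    def step(self, action):  # action: +1 (right) or -1 (left)
        self.pos = max(0, min(4, self.pos + action))
        done = self.pos == 4
        return self.observe(), (1.0 if done else 0.0), done


def welch_t(xs, ys):
    """Two-sample Welch t statistic (no SciPy dependency)."""
    denom = (variance(xs) / len(xs) + variance(ys) / len(ys)) ** 0.5
    return 0.0 if denom == 0 else (mean(xs) - mean(ys)) / denom


history_len = defaultdict(lambda: 1)  # per-observation history length
Q = defaultdict(float)                # Q[(abstract_state, action)]
samples = defaultdict(list)           # TD targets grouped by extended history


def abstract_state(history):
    """Abstract state = the last history_len[obs] observations."""
    k = history_len[history[-1]]
    return tuple(history)[-k:]


def choose_action(s):
    if random.random() < EPSILON:
        return random.choice([-1, +1])
    return max([-1, +1], key=lambda a: Q[(s, a)])


env = NoisyCorridorEnv()
for episode in range(2000):
    history = deque([env.reset()], maxlen=MAX_HISTORY)
    done = False
    while not done:
        s = abstract_state(history)
        obs_now = history[-1]
        # One extra observation of history, used only by the split test.
        extended = tuple(history)[-(history_len[obs_now] + 1):]

        a = choose_action(s)
        obs, r, done = env.step(a)
        history.append(obs)
        s_next = abstract_state(history)
        target = r if done else r + GAMMA * max(Q[(s_next, -1)], Q[(s_next, 1)])

        # Model-free (Q-learning) update over the current abstract states.
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])

        # Statistical test: if targets grouped by one extra step of history
        # differ significantly, the abstract state conflates situations that
        # should be distinguished, so its history window is extended.
        samples[(s, a, extended)].append(target)
        groups = [v for (s2, a2, _), v in samples.items()
                  if s2 == s and a2 == a and len(v) >= MIN_SAMPLES]
        if len(groups) >= 2 and abs(welch_t(groups[0], groups[1])) > T_THRESHOLD:
            history_len[obs_now] = min(MAX_HISTORY, history_len[obs_now] + 1)
            # Statistics for the coarser abstraction are now stale; discard them.
            for key in [k for k in samples if k[0] == s]:
                del samples[key]

print({obs: k for obs, k in history_len.items()})  # refined history lengths
```

In this sketch the refinement criterion is a difference in value estimates conditioned on additional history; other statistical tests (e.g., on transition or reward distributions) could be substituted without changing the overall structure of interleaving refinement with the learning loop.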