Search space reduction for strategy learning in sequential decision processes

Sequential decision making in large domains is computationally expensive. With the classical dynamic programming approach, growing problem size quickly leads to intractability due to time and memory constraints. This can be significantly remedied by more advanced reinforcement learning techniques combined with generalizing function approximators. However, learning may then become unstable, as the strict convergence guarantees no longer hold. This paper presents an approach that stabilizes learning by gradually reducing the search space for the optimal decision policy: the action set is iteratively adapted according to the progress of learning. Experiments are described within the FYNESSE control architecture, a framework for autonomously learning adaptive control strategies.
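To make the idea of iteratively reducing the action set concrete, the following is a minimal illustrative sketch: tabular Q-learning on a toy five-state chain MDP, where after fixed intervals actions that are clearly dominated (and sufficiently tried) are pruned from each state's candidate set. The environment, the pruning rule (value margin plus visit-count threshold), and all parameter names are assumptions made for illustration; they are not the paper's actual algorithm or the FYNESSE architecture.

```python
import random

N_STATES = 5            # states 0..4; state 4 is the goal
ACTIONS = [-1, 0, +1]   # move left, stay, move right

def step(s, a):
    """Deterministic chain dynamics with a small per-step cost."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else -0.1), done

def learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.3,
          prune_every=100, margin=0.3, min_visits=10, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    visits = {(s, a): 0 for s in range(N_STATES) for a in ACTIONS}
    # Per-state candidate action sets, gradually reduced during learning.
    A = {s: list(ACTIONS) for s in range(N_STATES)}
    for ep in range(1, episodes + 1):
        s = 0
        for _ in range(50):                      # episode step limit
            if rng.random() < eps:               # epsilon-greedy exploration
                a = rng.choice(A[s])
            else:
                a = max(A[s], key=lambda x: Q[(s, x)])
            s2, r, done = step(s, a)
            visits[(s, a)] += 1
            target = r if done else r + gamma * max(Q[(s2, x)] for x in A[s2])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
            if done:
                break
        if ep % prune_every == 0:                # reduce the search space
            for st in range(N_STATES):
                best = max(Q[(st, x)] for x in A[st])
                # Keep an action if it is within the margin of the best,
                # or has not yet been tried often enough to judge it.
                A[st] = [x for x in A[st]
                         if Q[(st, x)] >= best - margin
                         or visits[(st, x)] < min_visits]
    return Q, A
```

The best-valued action always satisfies the keep condition, so no action set can become empty; pruning only removes alternatives that learning has already judged clearly inferior, which shrinks the space the exploration policy has to search.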