Event-based optimization for dispatching policies in material handling systems of general assembly lines

A material handling (MH) system of a general assembly line, which dispatches parts from inventory to working buffers, can be complicated and costly to operate. Finding the optimal dispatching policy is generally extremely difficult because of the complicated system dynamics and the large problem size. In this paper, we formulate the dispatching problem as a Markov decision process (MDP) and use the event-based optimization framework to overcome the difficulties caused by problem dimensionality and size. By exploiting the problem structure, we focus on responding to certain events instead of all state transitions, so that the number of aggregated potential functions (i.e., value functions) scales with the square of the system size despite the exponential growth of the state space. This effectively reduces the computational requirements to a level that is acceptable in practice. We then develop a sample-path-based algorithm to estimate the potentials and implement a gradient-based policy optimization procedure. Numerical results demonstrate that the policies obtained by the event-based optimization approach significantly outperform the dispatching method currently used in production.
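To make the workflow concrete, the following is a minimal sketch, in Python, of sample-path-based potential estimation and a gradient-style policy update in the spirit of event-based optimization. The toy two-station dispatching model, the event definition, the estimation window, and all names (theta, run, gradient, and so on) are illustrative assumptions for this sketch, not the system or algorithm used in the paper.

```python
# Sketch: estimate aggregated potentials from a sample path and update a
# one-parameter randomized dispatching policy by gradient ascent.
# All model details below are assumptions made for illustration only.
import math
import random

ARRIVAL_P = 0.6          # probability a part arrives at inventory each slot
SERVICE_P = [0.5, 0.4]   # service-completion probabilities of two stations
CAPACITY = 3             # working-buffer capacity of each station


def step(buffers, theta):
    """Advance the toy system one slot; an 'event' is a part arrival that
    must be dispatched to one of the two working buffers."""
    reward = 0.0
    event = None
    for i in range(2):                       # service completions earn reward
        if buffers[i] > 0 and random.random() < SERVICE_P[i]:
            buffers[i] -= 1
            reward += 1.0
    if random.random() < ARRIVAL_P:          # arrival triggers the dispatching event
        observed = tuple(buffers)            # buffer levels seen at the event
        p0 = 1.0 / (1.0 + math.exp(-theta))  # logistic randomized policy
        action = 0 if random.random() < p0 else 1
        if buffers[action] < CAPACITY:
            buffers[action] += 1
        event = (observed, action)
    return reward, event


def run(theta, horizon=100_000, window=50):
    """Simulate one sample path; return the average reward, the aggregated
    potentials g(event, action), and the empirical event frequencies."""
    buffers = [0, 0]
    rewards, events = [], []
    for _ in range(horizon):
        r, ev = step(buffers, theta)
        rewards.append(r)
        events.append(ev)
    eta = sum(rewards) / horizon             # long-run average reward
    pot, cnt, freq = {}, {}, {}
    for t, ev in enumerate(events[:horizon - window]):
        if ev is None:
            continue
        buf, _ = ev
        # Potential estimate: centered reward accumulated over a finite
        # window after the event (a truncated sample-path estimator).
        g = sum(rewards[t:t + window]) - window * eta
        pot[ev] = pot.get(ev, 0.0) + g
        cnt[ev] = cnt.get(ev, 0) + 1
        freq[buf] = freq.get(buf, 0) + 1
    pot = {k: v / cnt[k] for k in pot}
    n_events = sum(freq.values())
    freq = {k: v / n_events for k, v in freq.items()}
    return eta, pot, freq


def gradient(theta, pot, freq):
    """Event-based gradient estimate: sum over events e and actions a of
    freq(e) * d p(a | e)/d theta * g(e, a)."""
    p0 = 1.0 / (1.0 + math.exp(-theta))
    dp0 = p0 * (1.0 - p0)                    # derivative of the logistic policy
    grad = 0.0
    for (buf, action), g in pot.items():
        dprob = dp0 if action == 0 else -dp0
        grad += freq.get(buf, 0.0) * dprob * g
    return grad


if __name__ == "__main__":
    theta, lr = 0.0, 0.5
    for it in range(10):                     # gradient-based policy iteration
        eta, pot, freq = run(theta)
        theta += lr * gradient(theta, pot, freq)
        print(f"iter {it}: avg reward = {eta:.4f}, theta = {theta:.3f}")
```

The key point the sketch illustrates is that potentials are indexed by (event, action) pairs rather than by full system states, so the number of quantities to estimate grows with the number of distinguishable events rather than with the exponentially large state space.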
