Total Memory Optimiser: Proof of Concept and Compromises

For most common optimisation problems, the "nearer is better" assumption holds (in probability). Classical iterative algorithms take this property into account, either explicitly or implicitly, by forgetting some of the information collected during the search, on the assumption that it is no longer useful. However, when the property does not hold globally, i.e., for deceptive problems, it may be necessary to keep all the sampled points and their values, and to exploit this growing amount of information. A basic total memory optimiser of this kind is presented here. We show on an example that this technique can outperform classical methods on deceptive problems. As it becomes very expensive in computing time when the dimension of the problem increases, a few compromises are suggested to speed it up.
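To make the idea concrete, the following Python sketch is a minimal (and hypothetical) total memory optimiser: every sampled point and its value is kept in an archive, and each new sample is chosen by scoring random candidates against the whole archive rather than against the current best point only. The candidate-scoring rule, the exploration weight, and the toy deceptive function are assumptions made for this illustration, not the exact algorithm studied in the paper.

```python
import random

def total_memory_minimise(f, bounds, n_iter=200, n_candidates=50, seed=None):
    """Minimise f over the box `bounds`, never discarding sampled information.

    Illustrative sketch only: the archive holds every (point, value) pair,
    and the next sample is the candidate whose score, computed from the
    whole archive, is lowest.
    """
    rng = random.Random(seed)
    archive = []  # the "total memory": all (point, value) pairs so far

    def rand_point():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    # First sample
    x0 = rand_point()
    archive.append((x0, f(x0)))

    for _ in range(n_iter):
        best_cand, best_score = None, float("inf")
        for _ in range(n_candidates):
            c = rand_point()
            # Use the entire archive: find the nearest already-sampled point.
            nearest = min(archive, key=lambda pv: dist2(pv[0], c))
            d = dist2(nearest[0], c)
            # Prefer candidates near good archived values (exploitation),
            # slightly rewarding distance from sampled points (exploration).
            # The 0.1 weight is an arbitrary choice for this sketch.
            score = nearest[1] - 0.1 * d
            if score < best_score:
                best_cand, best_score = c, score
        archive.append((best_cand, f(best_cand)))

    return min(archive, key=lambda pv: pv[1])

if __name__ == "__main__":
    # Toy deceptive 1-D function: a broad basin with value about 1 around
    # x = -5, and a narrow global minimum of 0 at x = 9.
    deceptive = lambda x: min(1 + 0.1 * (x[0] + 5) ** 2, 50 * (x[0] - 9) ** 2)
    best_x, best_f = total_memory_minimise(deceptive, [(-10.0, 10.0)], seed=1)
    print(best_x, best_f)
```

Because nothing is ever forgotten, every evaluation of the objective function contributes to all subsequent sampling decisions; the price, visible in the nested loop over the archive, is a cost per iteration that grows with the number of points already sampled, which is the motivation for the compromises discussed later.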