Optimisation on support vector machines

We deal with the optimisation problem involved in determining the maximal-margin separating hyperplane in support vector machines. We consider three formulations, based on the L₂ norm distance (the standard case), the L₁ norm, and the L∞ norm. Separation is performed in the original space of the data (i.e., no kernel transformations are applied). In each case, we focus on the following problem: given the optimal solution for a training data set, a new training example arrives, and the goal is to exploit the solution of the original problem to speed up the new optimisation. We also consider re-optimisation after removing an example from the data set. We report results obtained on some standard benchmark problems.
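The warm-start idea described above can be illustrated with a minimal sketch. This is not the paper's solver; it assumes scikit-learn's `SGDClassifier` with hinge loss (a linear SVM in the original feature space, matching the no-kernel setting), where `warm_start=True` makes a subsequent `fit` start from the previous solution rather than from scratch:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Toy linearly separable data; separation in the original space (no kernel).
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1, 1, -1, -1])

# Linear SVM via hinge loss; warm_start=True reuses the current coefficients
# as the starting point for later calls to fit().
clf = SGDClassifier(loss="hinge", warm_start=True, random_state=0,
                    max_iter=1000, tol=1e-4)
clf.fit(X, y)

# A new training example arrives; re-fitting starts from the previous
# solution instead of re-optimising from scratch.
X_new = np.vstack([X, [[2.5, 1.5]]])
y_new = np.append(y, 1)
clf.fit(X_new, y_new)
```

The same mechanism applies symmetrically when an example is removed: refitting on the reduced data set from the previous solution typically converges in far fewer iterations than a cold start.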