Solving Large Regression Problems using an Ensemble of GPU-accelerated ELMs

This paper presents an approach that allows regression on large data sets to be performed in reasonable time. Its main component consists of speeding up the slowest operation of the underlying algorithm by running it on the Graphics Processing Unit (GPU) of the video card instead of on the processor (CPU). Experiments show a speedup of an order of magnitude when using the GPU, together with competitive performance on the regression task. Furthermore, the presented approach lends itself to further parallelization, which remains to be investigated.
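
The abstract does not show code, but the idea it describes can be illustrated with a minimal sketch. In a standard Extreme Learning Machine (ELM), the input weights are random and fixed, and training reduces to solving for the output weights via a pseudoinverse of the hidden-layer output matrix; this dense linear-algebra step dominates the runtime and is the natural candidate for GPU offloading. The sketch below is a hypothetical JAX implementation of that idea, not the paper's code; all function and variable names (train_elm, predict_elm, n_hidden) are illustrative assumptions.

```python
# Hypothetical sketch: ELM training with the heavy linear algebra offloaded
# to the GPU via JAX. The standard ELM solves beta = pinv(H) @ T, where H is
# the hidden-layer output matrix; computing pinv(H) is the slow step that
# GPU acceleration targets.
import jax
import jax.numpy as jnp

def train_elm(X, T, n_hidden, key):
    """Fit an ELM regressor. X: (n_samples, n_features), T: (n_samples, n_outputs)."""
    k1, k2 = jax.random.split(key)
    W = jax.random.normal(k1, (X.shape[1], n_hidden))  # random input weights (fixed)
    b = jax.random.normal(k2, (n_hidden,))             # random biases (fixed)
    H = jnp.tanh(X @ W + b)                            # hidden-layer output matrix
    beta = jnp.linalg.pinv(H) @ T                      # the expensive step, run on GPU
    return W, b, beta

def predict_elm(params, X):
    W, b, beta = params
    return jnp.tanh(X @ W + b) @ beta

# Usage on a synthetic large regression problem (illustrative sizes).
key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (100_000, 20))
T = jnp.sum(X**2, axis=1, keepdims=True)
params = train_elm(X, T, n_hidden=512, key=key)
preds = predict_elm(params, X)
```

An ensemble, as in the title, would presumably train several such ELMs with different random keys and average their predictions; the sketch above covers only a single member.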