Memo No. 072
December 27, 2017

Theory of Deep Learning IIb: Optimization Properties of SGD

by

In Theory IIb we characterize, with a mix of theory and experiments, how Stochastic Gradient Descent (SGD) optimizes deep convolutional networks. The main new result in this paper is theoretical and experimental evidence for the following conjecture about SGD: like the classical Langevin equation, SGD concentrates in probability on large-volume, "flat" minima, selecting flat minimizers that are, with very high probability, also global minimizers. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. This memo reports the parts of CBMM Memo 067 that focus on optimization; the main reason is to make the titles of the theory trilogy consistent.
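For reference, the two objects compared in the conjecture can be written, in generic notation not taken from this memo, as the discrete SGD update with minibatch gradients and the classical Langevin diffusion:
\[
w_{t+1} = w_t - \gamma_t \, \nabla V_{i_t}(w_t),
\qquad
dw_t = -\nabla V(w_t)\, dt + \sqrt{2T}\, dB_t ,
\]
where $V$ is the training loss, $\nabla V_{i_t}$ a minibatch gradient, $\gamma_t$ the learning rate, $T$ a temperature parameter and $B_t$ Brownian motion. The stationary density of the Langevin dynamics is proportional to $e^{-V(w)/T}$, which is why it concentrates on large-volume, flat minima; the conjecture is that the minibatch noise in SGD plays an analogous role.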