Spencer Frei (Department of Statistics, UCLA): Generalization of SGD-trained neural networks of any width in the presence of adversarial label noise
Location: MPI für Mathematik in den Naturwissenschaften Leipzig, video broadcast
Video broadcast: Math Machine Learning seminar MPI MIS + UCLA

Abstract: Can overparameterized neural networks trained by SGD provably generalize when the labels are corrupted with substantial random noise? We answer this question in the affirmative by showing that for a broad class of distributions, one-hidden-layer networks trained by SGD generalize when the distribution is linearly separable but corrupted with adversarial label noise, despite the capacity to overfit. Equivalently, such networks have classification accuracy competitive with that of the best halfspace over the distribution. Our results hold for networks of arbitrary width and for arbitrary initializations of SGD. In particular, we do not rely upon the approximations to infinite-width networks that are typically used in theoretical analyses of SGD-trained neural networks.
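The setting described in the abstract can be illustrated with a small simulation (a sketch only, not the construction from the talk): data labeled by a ground-truth halfspace, a fraction of training labels flipped as a stand-in for adversarial label noise, and a one-hidden-layer ReLU network trained by online SGD, whose clean test accuracy is compared to that of the best halfspace. All dimensions, widths, and rates below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable distribution: labels given by sign(<w*, x>)
d, n_train, n_test = 10, 2000, 2000
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)

def sample(n, noise_rate):
    X = rng.normal(size=(n, d))
    y = np.sign(X @ w_star)
    flip = rng.random(n) < noise_rate  # random flips as a simple stand-in for adversarial noise
    y[flip] *= -1
    return X, y

X_tr, y_tr = sample(n_train, noise_rate=0.1)
X_te, y_te = sample(n_test, noise_rate=0.0)  # clean test set

# One-hidden-layer ReLU network f(x) = a . relu(W x), trained with online SGD
# on the logistic loss; the talk's results hold for any hidden width m.
m = 50
W = rng.normal(size=(m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)  # fixed second layer

def forward(x, W):
    return a @ np.maximum(W @ x, 0.0)

lr = 0.1
for i in range(n_train):  # one online SGD pass over the training stream
    x, y = X_tr[i], y_tr[i]
    z = forward(x, W)
    # derivative of log(1 + exp(-y z)) w.r.t. z, clipped for numerical stability
    g = -y / (1.0 + np.exp(np.clip(y * z, -30.0, 30.0)))
    active = (W @ x > 0).astype(float)  # ReLU gate per hidden unit
    W -= lr * g * (a * active)[:, None] @ x[None, :]

net_acc = np.mean(np.sign([forward(x, W) for x in X_te]) == y_te)
half_acc = np.mean(np.sign(X_te @ w_star) == y_te)  # best halfspace on clean data
print(f"network test accuracy: {net_acc:.3f} (best halfspace: {half_acc:.3f})")
```

Despite 10% of the training labels being flipped, the SGD-trained network's clean test accuracy should be close to that of the best halfspace, in the spirit of the competitive guarantee stated in the abstract.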
Start: 11 March 2021, 17:00
End: 11 March 2021, 18:30