Predicting Generalization in Deep Learning via Local Measures of Distortion

Published in the PGDL Competition at NeurIPS 2020

Recommended citation: A. Rajagopal, V. C. Madala, S. Chandrasekaran, P. Larson, "Predicting Generalization in Deep Learning via Local Measures of Distortion," arXiv preprint arXiv:2012.06969, 2020. https://arxiv.org/pdf/2012.06969

We study generalization in deep learning by appealing to complexity measures originally developed in approximation and information theory. While these concepts are challenged by the high-dimensional, data-defined nature of deep learning, we show that simple vector-quantization approaches such as PCA, GMMs, and SVMs capture their spirit when applied layer-wise to deep extracted features, giving rise to relatively inexpensive complexity measures that correlate well with generalization performance. We discuss our results in the context of the 2020 NeurIPS PGDL challenge.
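To make the layer-wise idea concrete, here is a minimal sketch of one plausible distortion-style measure: project a layer's extracted features onto their top-k principal components and report the fraction of variance lost. This is an illustrative stand-in, not the paper's exact measure; the function name `pca_distortion`, the choice of k, and the synthetic features are all assumptions for demonstration.

```python
import numpy as np

def pca_distortion(features: np.ndarray, k: int) -> float:
    """Fraction of variance lost when layer features are projected
    onto their top-k principal components (lower = more compressible).
    Illustrative measure only, not the paper's exact formulation."""
    centered = features - features.mean(axis=0)
    # Singular values of the centered feature matrix give the
    # per-component variances (up to a constant factor).
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    total = np.sum(s ** 2)
    kept = np.sum(s[:k] ** 2)
    return float(1.0 - kept / total)

# Toy stand-in for features extracted at one layer: 200 samples, 64 dims.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 64)) @ rng.normal(size=(64, 64))
print(pca_distortion(feats, k=8))
```

In a real pipeline one would compute such a score per layer on features extracted from the trained network, then aggregate the scores into a single complexity estimate to rank models by expected generalization.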

Download paper here: Link1 Link2