New paper "Variational Dropout Sparsifies Deep Neural Networks"
January 19, 2017
Dmitry Vetrov and Dmitry Molchanov have finalized a new publication, "Variational Dropout Sparsifies Deep Neural Networks". In this work, the Variational Dropout technique is applied to reduce the number of parameters in fully-connected and convolutional layers of neural networks.
Abstract. We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation of Gaussian Dropout. We extend Variational Dropout to the case when dropout rates are unbounded, propose a way to reduce the variance of the gradient estimator, and report the first experimental results with individual dropout rates per weight. Interestingly, this leads to extremely sparse solutions both in fully-connected and convolutional layers. The effect is similar to the automatic relevance determination effect in empirical Bayes but has a number of advantages. We reduce the number of parameters up to 280 times on LeNet architectures and up to 68 times on VGG-like networks with a negligible decrease in accuracy.
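To give a flavor of the idea, below is a minimal NumPy sketch of the core mechanism: each weight gets its own (unbounded) dropout rate alpha, the weight posterior is parameterized with additive rather than multiplicative noise to reduce gradient variance, and weights whose log-alpha exceeds a threshold are pruned as pure noise. All function and variable names here are illustrative assumptions, not taken from the paper's code; the threshold value of 3 is likewise a hypothetical choice for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(theta, log_alpha):
    """Additive-noise parameterization: w = theta + sigma * eps,
    with sigma^2 = alpha * theta^2. Sampling noise additively (rather
    than multiplying theta by a noisy factor) is one way to lower the
    variance of the gradient estimator."""
    sigma = np.sqrt(np.exp(log_alpha)) * np.abs(theta)
    return theta + sigma * rng.standard_normal(theta.shape)

def prune(theta, log_alpha, threshold=3.0):
    """Zero out weights whose individual dropout rate is large:
    log_alpha above the threshold means the weight is dominated by
    noise and contributes nothing useful."""
    mask = log_alpha < threshold
    return theta * mask, mask

# Toy example: pretend training drove most per-weight dropout rates
# above the threshold, marking those weights as irrelevant.
theta = rng.standard_normal((4, 4))
log_alpha = np.where(rng.random((4, 4)) < 0.75, 5.0, -3.0)

w_sample = sample_weights(theta, log_alpha)   # noisy forward-pass weights
w_pruned, mask = prune(theta, log_alpha)      # deterministic sparse weights
sparsity = 1.0 - mask.mean()
print(f"sparsity after pruning: {sparsity:.2f}")
```

In the actual method, the per-weight log-alpha values are trained jointly with the weights by maximizing the variational lower bound; the sketch only illustrates the sampling and pruning steps around a fixed set of parameters.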