Sankar, Adepu Ravi and Balasubramanian, Vineeth N
(2018)
Are saddles good enough for neural networks?
In: Proceedings of the ACM India Joint International Conference on Data Science and Management of Data, 11–13 January 2018, Goa, India.
Full text not available from this repository.
Abstract
Recent years have seen growing interest in understanding neural networks from an optimization perspective. It is now understood that converging to low-cost local minima is sufficient for such models to be effective in practice. In this work, however, based on recent theoretical findings and empirical studies, we propose a new hypothesis: neural network models actually converge to saddle points with high degeneracy. Our findings are new and can have a significant impact on the development of gradient-descent-based methods for training neural networks. We validated our hypothesis through an extensive experimental evaluation on standard datasets such as MNIST and CIFAR-10, and also showed that recent efforts to escape saddles ultimately converge to saddles with high degeneracy, which we define as 'good saddles'. We also verified Wigner's well-known semicircle law in our experimental results.
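As a rough illustration of the degeneracy notion used in the abstract (this is not the authors' code): a critical point is a saddle when the Hessian of the loss has at least one negative eigenvalue, and it is highly degenerate when a large fraction of the eigenvalues are (near-)zero. A minimal NumPy sketch on a hypothetical toy loss with one descent, one flat, and one ascent direction:

```python
import numpy as np

# Toy loss f(x, y, z) = -x^2 + 0*y^2 + z^2 with a critical point at the
# origin: Hessian diag(-2, 0, 2), i.e. a saddle with one flat direction.
def loss(w):
    x, y, z = w
    return -x**2 + z**2

def hessian_fd(f, w, eps=1e-4):
    """Central finite-difference Hessian of f at w."""
    n = len(w)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            wpp = w.copy(); wpp[i] += eps; wpp[j] += eps
            wpm = w.copy(); wpm[i] += eps; wpm[j] -= eps
            wmp = w.copy(); wmp[i] -= eps; wmp[j] += eps
            wmm = w.copy(); wmm[i] -= eps; wmm[j] -= eps
            H[i, j] = (f(wpp) - f(wpm) - f(wmp) + f(wmm)) / (4 * eps**2)
    return H

w0 = np.zeros(3)                       # critical point of the toy loss
eigs = np.linalg.eigvalsh(hessian_fd(loss, w0))
tol = 1e-3
n_neg = int(np.sum(eigs < -tol))       # >0 negative eigenvalues -> saddle
n_zero = int(np.sum(np.abs(eigs) <= tol))
degeneracy = n_zero / len(eigs)        # fraction of flat directions
print(n_neg, n_zero, degeneracy)
```

For this toy point, one eigenvalue is negative and one third of the spectrum is (near-)zero; the paper's hypothesis concerns the analogous spectrum of real network losses, where the near-zero fraction is far larger.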