Deep Neural Networks (DNN): Easing the Understanding of ML/NN Models

An additional contribution from Google Research to the study of neural networks was announced on March 13, 2020.

This is a conversation starter about how highly parameterized DNNs can be applied to large data sets and still produce useful generalizations.

[Google] invites everyone to explore the infinite-width versions of their models with Neural Tangents, and to help open the black box of deep learning. To get started, please check out the paper, the tutorial Colab notebook, and the GitHub repo — contributions, feature requests, and bug reports are very welcome. This work has been accepted as a spotlight at ICLR 2020.
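As a rough illustration of what "exploring the infinite-width versions of their models" looks like in practice, here is a minimal sketch using the library's stax API. The layer widths, depth, and toy data below are illustrative assumptions, not details taken from the announcement.

```python
# Minimal sketch of Neural Tangents usage (illustrative; widths, depth,
# and data are assumptions, not from the announcement).
from jax import random

import neural_tangents as nt
from neural_tangents import stax

# Define a fully connected network; stax returns, alongside the usual
# init/apply functions, the analytic kernel of its infinite-width limit.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

# Toy data: 20 training points and 5 test points in 10 dimensions.
key1, key2, key3 = random.split(random.PRNGKey(0), 3)
x_train = random.normal(key1, (20, 10))
y_train = random.normal(key2, (20, 1))
x_test = random.normal(key3, (5, 10))

# Closed-form NNGP and NTK kernels between train and test inputs.
kernels = kernel_fn(x_train, x_test, ('nngp', 'ntk'))

# Mean prediction of an ensemble of infinitely wide networks trained by
# gradient descent on MSE loss (the NTK regime).
predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, x_train, y_train)
y_test_mean = predict_fn(x_test=x_test, get='ntk')
```

The point of the sketch is that the same `kernel_fn` yields exact infinite-width predictions without training a finite network, which is what makes these limits useful as an analytical handle on the "black box."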


Fast and Easy Infinitely Wide Networks with Neural Tangents

The widespread success of deep learning across a range of domains, such as natural language processing, conversational agents, and connectomics, has transformed the landscape of research in machine learning and left researchers with a number of interesting and important open questions, such as: Why do deep neural networks (DNNs) generalize so well despite being overparameterized?