Why Does Deep and Cheap Learning Work So Well?

September 12, 2016

Figure 3: Causal hierarchy examples relevant to physics (left) and image classification (right).

In the last couple of years, deep learning techniques have transformed the world of artificial intelligence. One by one, the abilities and techniques that humans once imagined were uniquely our own have begun to fall to the onslaught of ever more powerful machines. Deep neural networks are now better than humans at tasks such as face recognition and object recognition. They’ve mastered the ancient game of Go and thrashed the best human players.

But there is a problem. There is no mathematical reason why networks arranged in layers should be so good at these challenges. Mathematicians are flummoxed. Despite the huge success of deep neural networks, nobody is quite sure how they achieve their success.

Today that changes thanks to the work of Henry Lin at Harvard University and Max Tegmark at MIT...

Read more at MIT Technology Review: "The Extraordinary Link Between Deep Neural Networks and the Nature of the Universe."

Also see the original article: Henry W. Lin and Max Tegmark, "Why does deep and cheap learning work so well?" arXiv:1608.08225v1.