Publication Date

6-2015

Comments

Technical Report: UTEP-CS-15-55

To appear in: Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making: Theory and Applications, Springer Verlag, Berlin, Heidelberg.

Abstract

In the past, the most widely used neural networks were 3-layer ones. These networks were preferred because one of the main advantages of biological neural networks -- the advantage that motivated the use of neural networks in computing -- is their parallelism, and 3-layer networks provide the largest degree of parallelism. Recently, however, it was shown empirically that, in spite of this argument, multi-layer ("deep") neural networks lead to much more efficient machine learning. In this paper, we provide a possible theoretical explanation for this somewhat surprising empirical success of deep networks.
