Abstract

In the past, the most widely used neural networks were 3-layer ones. These networks were preferred because one of the main advantages of biological neural networks (which motivated the use of neural networks in computing) is their parallelism, and 3-layer networks provide the largest degree of parallelism. Recently, however, it has been shown empirically that, in spite of this argument, multi-layer (“deep”) neural networks lead to much more efficient machine learning. In this paper, we provide a possible theoretical explanation for the somewhat surprising empirical success of deep networks.
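
To make the terminology concrete, the sketch below contrasts a “3-layer” network (input layer, one hidden layer, output layer) with a “deep” network that stacks several hidden layers. It is a minimal illustration, not code from the chapter; the layer widths and the plain-numpy forward pass are assumptions chosen only so the two architectures have roughly comparable numbers of parameters.

```python
# Minimal sketch (not from the chapter): a "3-layer" network vs. a "deep" network
# with the same input/output sizes.  Layer widths are arbitrary assumptions chosen
# only so the two models have roughly comparable numbers of parameters.
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    """One fully connected layer: random weights and a zero bias vector."""
    return rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in), np.zeros(n_out)

def forward(x, layers):
    """Apply each (W, b) pair, with a ReLU between layers but not after the last."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

def n_params(layers):
    """Total number of trainable values (weights plus biases)."""
    return sum(W.size + b.size for W, b in layers)

n_in, n_out = 64, 10

# "3-layer" network: input layer, one wide hidden layer, output layer.
shallow = [dense(n_in, 256), dense(256, n_out)]

# "Deep" network: several narrower hidden layers computed one after another,
# i.e. more sequential stages (less parallelism) for a similar overall size.
deep = [dense(n_in, 64), dense(64, 64), dense(64, 64), dense(64, 64), dense(64, n_out)]

x = rng.standard_normal((8, n_in))  # a batch of 8 example inputs
print("3-layer:", n_params(shallow), "parameters, output", forward(x, shallow).shape)
print("deep:   ", n_params(deep), "parameters, output", forward(x, deep).shape)
```

Both models map the same inputs to the same output dimension; the deep one simply uses more sequential stages, which is the trade-off between parallelism and empirically observed learning efficiency that the chapter addresses.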

Language: English (US)
Title of host publication: Studies in Systems, Decision and Control
Publisher: Springer International Publishing
Pages: 1-5
Number of pages: 5
Volume: 100
DOIs: https://doi.org/10.1007/978-3-319-61753-4_1
State: Published - 2018

Publication series

Name: Studies in Systems, Decision and Control
Volume: 100
ISSN (Print): 2198-4182
ISSN (Electronic): 2198-4190

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Decision Sciences (miscellaneous)
  • Economics, Econometrics and Finance (miscellaneous)
  • Automotive Engineering
  • Control and Systems Engineering
  • Control and Optimization
  • Social Sciences (miscellaneous)

Cite this

Baral, C., Fuentes, O., & Kreinovich, V. (2018). Why deep neural networks: A possible theoretical explanation. In Studies in Systems, Decision and Control (Vol. 100, pp. 1-5). (Studies in Systems, Decision and Control; Vol. 100). Springer International Publishing. https://doi.org/10.1007/978-3-319-61753-4_1

Research output: Chapter in Book/Report/Conference proceeding › Chapter
