A neural-network learning theory and a polynomial time RBF algorithm

Asim Roy, Sandeep Govil, Raymond Miranda

Research output: Contribution to journal › Article

62 Citations (Scopus)

Abstract

This paper presents a new learning theory (a set of principles for brain-like learning) and a corresponding algorithm for the neural-network field. The learning theory defines computational characteristics that are much more brain-like than those of classical connectionist learning. Robust and reliable learning algorithms would result if these learning principles were followed rigorously when developing neural-network algorithms. This paper also presents a new algorithm for generating radial basis function (RBF) nets for function approximation. The design of the algorithm is based on the proposed set of learning principles. The net generated by this algorithm is not a typical RBF net, but a combination of "truncated" RBF and other types of hidden units. The algorithm uses random clustering and linear programming (LP) to design and train this "mixed" RBF net. Polynomial time complexity of the algorithm is proven, and computational results are provided for the well-known Mackey-Glass chaotic time series problem, the logistic map prediction problem, various neuro-control problems, and several time series forecasting problems. The algorithm can also be implemented as an on-line adaptive algorithm.
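
To make the abstract's recipe concrete: once the hidden-unit centers are fixed by a clustering step, the hidden-layer outputs are fixed too, so fitting the output weights becomes a linear problem that an LP solver handles in polynomial time. The Python sketch below is a minimal illustration of that idea, not the authors' algorithm: it samples centers at random as a simple stand-in for the paper's random clustering step, uses truncated Gaussian hidden units as a stand-in for the paper's "truncated" RBF units, and trains the output weights by minimizing the sum of absolute training errors as an LP via scipy.optimize.linprog. All names and parameter values (num_centers, sigma, cutoff) are illustrative assumptions.

import numpy as np
from scipy.optimize import linprog

def truncated_gaussian(r, sigma, cutoff):
    # Gaussian response zeroed outside radius `cutoff` -- an assumed
    # stand-in for the paper's "truncated" RBF hidden units.
    return np.where(r <= cutoff, np.exp(-(r ** 2) / (2.0 * sigma ** 2)), 0.0)

def fit_rbf_lp(X, y, num_centers=20, sigma=0.5, cutoff=1.5, seed=0):
    # Step 1: choose centers. Random sampling of training points stands in
    # for the paper's random clustering step.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=num_centers, replace=False)]

    # Step 2: hidden-layer design matrix (fixed once centers are chosen),
    # plus a constant bias column.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    H = np.hstack([truncated_gaussian(dists, sigma, cutoff),
                   np.ones((len(X), 1))])

    # Step 3: train output weights w by LP, minimizing sum_i t_i subject to
    # t_i >= |H_i . w - y_i| (an L1 fit). Variables: [w (free), t (>= 0)].
    n, k = H.shape
    c = np.concatenate([np.zeros(k), np.ones(n)])
    A_ub = np.block([[ H, -np.eye(n)],     #  H w - y <= t
                     [-H, -np.eye(n)]])    # -(H w - y) <= t
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * k + [(0.0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return centers, res.x[:k]

def predict(X, centers, w, sigma=0.5, cutoff=1.5):
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    H = np.hstack([truncated_gaussian(dists, sigma, cutoff),
                   np.ones((len(X), 1))])
    return H @ w

# Toy usage on a smooth 1-D target (not the Mackey-Glass benchmark).
X = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()
centers, w = fit_rbf_lp(X, y)
print("max abs training error:", np.max(np.abs(predict(X, centers, w) - y)))

Posing the weight fit as an LP rather than, say, iterative gradient descent is what keeps the training step solvable in polynomial time, in the spirit of the paper's complexity claim; the network design (number and placement of hidden units) is decided before the LP is solved.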

Original language: English (US)
Pages (from-to): 1301-1313
Number of pages: 13
Journal: IEEE Transactions on Neural Networks
Volume: 8
Issue number: 6
DOIs: 10.1109/72.641453
State: Published - 1997

Keywords

  • Designing neural networks
  • Feedforward nets
  • Learning complexity
  • Learning theory
  • Linear programming
  • Polynomial time complexity
  • Radial basis function networks

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Theoretical Computer Science
  • Electrical and Electronic Engineering
  • Artificial Intelligence
  • Computational Theory and Mathematics
  • Hardware and Architecture

Cite this

Roy, A., Govil, S., & Miranda, R. (1997). A neural-network learning theory and a polynomial time RBF algorithm. IEEE Transactions on Neural Networks, 8(6), 1301-1313. https://doi.org/10.1109/72.641453

@article{55daedd9a77a4cd2a9e3c8c724705970,
title = "A neural-network learning theory and a polynomial time RBF algorithm",
abstract = "This paper presents a new learning theory (a set of principles for brain-like learning) and a corresponding algorithm for the neural-network field. The learning theory defines computational characteristics that are much more brain-like than those of classical connectionist learning. Robust and reliable learning algorithms would result if these learning principles were followed rigorously when developing neural-network algorithms. This paper also presents a new algorithm for generating radial basis function (RBF) nets for function approximation. The design of the algorithm is based on the proposed set of learning principles. The net generated by this algorithm is not a typical RBF net, but a combination of {"}truncated{"} RBF and other types of hidden units. The algorithm uses random clustering and linear programming (LP) to design and train this {"}mixed{"} RBF net. Polynomial time complexity of the algorithm is proven, and computational results are provided for the well-known Mackey-Glass chaotic time series problem, the logistic map prediction problem, various neuro-control problems, and several time series forecasting problems. The algorithm can also be implemented as an on-line adaptive algorithm.",
keywords = "Designing neural networks, Feedforward nets, Learning complexity, Learning theory, Linear programming, Polynomial time complexity, Radial basis function networks",
author = "Asim Roy and Sandeep Govil and Raymond Miranda",
year = "1997",
doi = "10.1109/72.641453",
language = "English (US)",
volume = "8",
pages = "1301--1313",
journal = "IEEE Transactions on Neural Networks",
issn = "1045-9227",
publisher = "IEEE",
number = "6",

}

TY - JOUR

T1 - A neural-network learning theory and a polynomial time RBF algorithm

AU - Roy, Asim

AU - Govil, Sandeep

AU - Miranda, Raymond

PY - 1997

Y1 - 1997

N2 - This paper presents a new learning theory (a set of principles for brain-like learning) and a corresponding algorithm for the neural-network field. The learning theory defines computational characteristics that are much more brain-like than those of classical connectionist learning. Robust and reliable learning algorithms would result if these learning principles were followed rigorously when developing neural-network algorithms. This paper also presents a new algorithm for generating radial basis function (RBF) nets for function approximation. The design of the algorithm is based on the proposed set of learning principles. The net generated by this algorithm is not a typical RBF net, but a combination of "truncated" RBF and other types of hidden units. The algorithm uses random clustering and linear programming (LP) to design and train this "mixed" RBF net. Polynomial time complexity of the algorithm is proven, and computational results are provided for the well-known Mackey-Glass chaotic time series problem, the logistic map prediction problem, various neuro-control problems, and several time series forecasting problems. The algorithm can also be implemented as an on-line adaptive algorithm.

AB - This paper presents a new learning theory (a set of principles for brain-like learning) and a corresponding algorithm for the neural-network field. The learning theory defines computational characteristics that are much more brain-like than those of classical connectionist learning. Robust and reliable learning algorithms would result if these learning principles were followed rigorously when developing neural-network algorithms. This paper also presents a new algorithm for generating radial basis function (RBF) nets for function approximation. The design of the algorithm is based on the proposed set of learning principles. The net generated by this algorithm is not a typical RBF net, but a combination of "truncated" RBF and other types of hidden units. The algorithm uses random clustering and linear programming (LP) to design and train this "mixed" RBF net. Polynomial time complexity of the algorithm is proven, and computational results are provided for the well-known Mackey-Glass chaotic time series problem, the logistic map prediction problem, various neuro-control problems, and several time series forecasting problems. The algorithm can also be implemented as an on-line adaptive algorithm.

KW - Designing neural networks

KW - Feedforward nets

KW - Learning complexity

KW - Learning theory

KW - Linear programming

KW - Polynomial time complexity

KW - Radial basis function networks

UR - http://www.scopus.com/inward/record.url?scp=0031276993&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0031276993&partnerID=8YFLogxK

U2 - 10.1109/72.641453

DO - 10.1109/72.641453

M3 - Article

C2 - 18255732

AN - SCOPUS:0031276993

VL - 8

SP - 1301

EP - 1313

JO - IEEE Transactions on Neural Networks

JF - IEEE Transactions on Neural Networks

SN - 1045-9227

IS - 6

ER -