Approximation errors of state and output trajectories using recurrent neural networks

Binfan Liu, Jennie Si

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper addresses the problem of estimating training error bounds of state and output trajectories for a class of recurrent neural networks used as models of nonlinear dynamic systems. We present training error bounds on the trajectories of the recurrent neural network models relative to those of the target systems. The bounds hold provided that the models have been trained on N trajectories whose N independent random initial values are uniformly distributed over [a, b]^m ⊂ R^m.
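
To make the setting concrete, the following is a minimal numerical sketch, not the paper's construction: it assumes a toy nonlinear target system and a small tanh recurrent model (target_step, target_output, the model structure, and every dimension and constant below are illustrative choices, not taken from the paper), fits the model on N trajectories whose initial states are drawn independently and uniformly from the box [a, b]^m, and then measures the resulting state and output trajectory training errors, i.e., the kind of quantities the bounds refer to. NumPy and SciPy are used only for convenience.

# A minimal sketch, assuming a toy target system and a small tanh recurrent model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

m, p, K, N = 2, 1, 20, 30   # state dim, output dim, trajectory length, number of trajectories
a, b = -1.0, 1.0            # the box [a, b]^m from which initial states are drawn

def target_step(x):
    # Toy nonlinear dynamic system x_{k+1} = f(x_k), chosen only for illustration.
    return np.array([0.8 * x[0] + 0.2 * np.sin(x[1]),
                     0.9 * x[1] - 0.1 * x[0] ** 2])

def target_output(x):
    # Toy output map y_k = g(x_k), chosen only for illustration.
    return np.array([x[0] + 0.5 * x[1]])

def rollout(step, out, x0):
    # Generate a length-K state and output trajectory from initial state x0.
    xs, ys, x = [], [], x0
    for _ in range(K):
        xs.append(x)
        ys.append(out(x))
        x = step(x)
    return np.array(xs), np.array(ys)

# N independent initial states, uniformly distributed over [a, b]^m.
x0s = rng.uniform(a, b, size=(N, m))
targets = [rollout(target_step, target_output, x0) for x0 in x0s]

def unpack(theta):
    # Recurrent model parameters: W (m x m), bias (m), C (p x m).
    W = theta[:m * m].reshape(m, m)
    bias = theta[m * m:m * m + m]
    C = theta[m * m + m:].reshape(p, m)
    return W, bias, C

def model_trajectory(theta, x0):
    # Recurrent model xhat_{k+1} = tanh(W xhat_k + bias), yhat_k = C xhat_k.
    W, bias, C = unpack(theta)
    return rollout(lambda x: np.tanh(W @ x + bias), lambda x: C @ x, x0)

def training_loss(theta):
    # Mean squared state and output trajectory error over the N training trajectories.
    loss = 0.0
    for x0, (xs, ys) in zip(x0s, targets):
        xh, yh = model_trajectory(theta, x0)
        loss += np.mean((xh - xs) ** 2) + np.mean((yh - ys) ** 2)
    return loss / N

theta0 = 0.1 * rng.standard_normal(m * m + m + p * m)
theta = minimize(training_loss, theta0, method="Nelder-Mead",
                 options={"maxiter": 5000}).x

# Empirical training errors of the state and output trajectories:
# the largest deviation over time and over the N training trajectories.
state_err = max(np.abs(model_trajectory(theta, x0)[0] - xs).max()
                for x0, (xs, ys) in zip(x0s, targets))
output_err = max(np.abs(model_trajectory(theta, x0)[1] - ys).max()
                 for x0, (xs, ys) in zip(x0s, targets))
print("max state trajectory training error: ", state_err)
print("max output trajectory training error:", output_err)

The fitting step uses a generic derivative-free optimizer simply to keep the sketch short; any method that fits the recurrent model to the N training trajectories would serve the same illustrative purpose.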

Original language: English (US)
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer Verlag
Pages: 803-808
Number of pages: 6
Volume: 1112 LNCS
ISBN (Print): 3540615105, 9783540615101
DOIs: 10.1007/3-540-61510-5_135
State: Published - 1996
Event: 1996 International Conference on Artificial Neural Networks, ICANN 1996 - Bochum, Germany
Duration: Jul 16 1996 - Jul 19 1996

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 1112 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 1996 International Conference on Artificial Neural Networks, ICANN 1996
Country: Germany
City: Bochum
Period: 7/16/96 - 7/19/96

ASJC Scopus subject areas

  • Computer Science (all)
  • Theoretical Computer Science

Cite this

Liu, B., & Si, J. (1996). Approximation errors of state and output trajectories using recurrent neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1112 LNCS, pp. 803-808). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 1112 LNCS). Springer Verlag. https://doi.org/10.1007/3-540-61510-5_135
