Improving backpropagation learning with feature selection

Rudy Setiono, Huan Liu

Research output: Contribution to journal › Article

18 Citations (Scopus)

Abstract

Real-world data often contain redundant, irrelevant, and noisy attributes. Training a network on properly selected data can speed up training, simplify the learned structure, and improve its performance. A two-phase training algorithm is proposed. In the first phase, the number of input units of the network is determined by an information-based method: only those attributes that meet certain criteria for inclusion are used as inputs to the network. In the second phase, the number of hidden units is selected automatically based on the performance of the network on the training data; one hidden unit is added at a time, and only when necessary. The experimental results show that this new algorithm achieves faster learning, a simpler network, and improved performance.
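The two phases described in the abstract can be sketched in code. This is a minimal illustration, not the paper's actual procedure: the function names and the specific information-gain criterion below are assumptions (the paper only says "information-based method" and "certain criteria for inclusion"), and phase 2 is reduced to a generic grow-until-good-enough loop.

```python
import numpy as np

def information_gain(x, y):
    """Information gain of a discrete attribute x with respect to class labels y.
    (One plausible 'information-based' inclusion criterion; the paper's exact
    measure may differ.)"""
    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))
    conditional = 0.0
    for v in np.unique(x):
        mask = (x == v)
        conditional += mask.mean() * entropy(y[mask])
    return entropy(y) - conditional

def select_features(X, y, threshold=0.01):
    """Phase 1: keep only attributes whose information gain exceeds a threshold;
    these become the network's input units."""
    gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
    return np.flatnonzero(gains > threshold)

def grow_hidden_units(train_and_score, max_units=10, target_error=0.05):
    """Phase 2: add one hidden unit at a time, only while the training error
    is still above the target. `train_and_score(h)` is assumed to train a
    feedforward network with h hidden units and return its training error."""
    for h in range(1, max_units + 1):
        if train_and_score(h) <= target_error:
            return h
    return max_units
```

On a toy dataset where one attribute perfectly determines the class and another is constant noise, `select_features` keeps only the informative column, matching the abstract's goal of discarding irrelevant inputs before training begins.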

Original language: English (US)
Pages (from-to): 129-139
Number of pages: 11
Journal: Applied Intelligence
Volume: 6
Issue number: 2
State: Published - 1996
Externally published: Yes

Keywords

  • Backpropagation
  • Feature selection
  • Feedforward neural network
  • Information theory

ASJC Scopus subject areas

  • Artificial Intelligence
  • Control and Systems Engineering

Cite this

Improving backpropagation learning with feature selection. / Setiono, Rudy; Liu, Huan.

In: Applied Intelligence, Vol. 6, No. 2, 1996, p. 129-139.

Research output: Contribution to journal › Article

@article{201aca746d2848f1a2f4fb0f3f3f584d,
  title = "Improving backpropagation learning with feature selection",
  abstract = "There exist redundant, irrelevant and noisy data. Using proper data to train a network can speed up training, simplify the learned structure, and improve its performance. A two-phase training algorithm is proposed. In the first phase, the number of input units of the network is determined by using an information base method. Only those attributes that meet certain criteria for inclusion will be considered as the input to the network. In the second phase, the number of hidden units of the network is selected automatically based on the performance of the network on the training data. One hidden unit is added at a time only if it is necessary. The experimental results show that this new algorithm can achieve a faster learning time, a simpler network and an improved performance.",
  keywords = "Backpropagation, Feature selection, Feedforward neural network, Information theory",
  author = "Rudy Setiono and Huan Liu",
  year = "1996",
  language = "English (US)",
  volume = "6",
  pages = "129--139",
  journal = "Applied Intelligence",
  issn = "0924-669X",
  publisher = "Springer Netherlands",
  number = "2",
}

TY - JOUR
T1 - Improving backpropagation learning with feature selection
AU - Setiono, Rudy
AU - Liu, Huan
PY - 1996
Y1 - 1996
N2 - There exist redundant, irrelevant and noisy data. Using proper data to train a network can speed up training, simplify the learned structure, and improve its performance. A two-phase training algorithm is proposed. In the first phase, the number of input units of the network is determined by using an information base method. Only those attributes that meet certain criteria for inclusion will be considered as the input to the network. In the second phase, the number of hidden units of the network is selected automatically based on the performance of the network on the training data. One hidden unit is added at a time only if it is necessary. The experimental results show that this new algorithm can achieve a faster learning time, a simpler network and an improved performance.
AB - There exist redundant, irrelevant and noisy data. Using proper data to train a network can speed up training, simplify the learned structure, and improve its performance. A two-phase training algorithm is proposed. In the first phase, the number of input units of the network is determined by using an information base method. Only those attributes that meet certain criteria for inclusion will be considered as the input to the network. In the second phase, the number of hidden units of the network is selected automatically based on the performance of the network on the training data. One hidden unit is added at a time only if it is necessary. The experimental results show that this new algorithm can achieve a faster learning time, a simpler network and an improved performance.
KW - Backpropagation
KW - Feature selection
KW - Feedforward neural network
KW - Information theory
UR - http://www.scopus.com/inward/record.url?scp=0030129019&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0030129019&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:0030129019
VL - 6
SP - 129
EP - 139
JO - Applied Intelligence
JF - Applied Intelligence
SN - 0924-669X
IS - 2
ER -