What Makes a Problem Hard for a Genetic Algorithm? Some Anomalous Results and Their Explanation

Stephanie Forrest, Melanie Mitchell

Research output: Contribution to journal › Article

138 Citations (Scopus)

Abstract

What makes a problem easy or hard for a genetic algorithm (GA)? This question has become increasingly important as people have tried to apply the GA to ever more diverse types of problems. Much previous work on this question has studied the relationship between GA performance and the structure of a given fitness function when it is expressed as a Walsh polynomial. The work of Bethke, Goldberg, and others has produced certain theoretical results about this relationship. In this article we review these theoretical results, and then discuss a number of seemingly anomalous experimental results reported by Tanese concerning the performance of the GA on a subclass of Walsh polynomials, some members of which were expected to be easy for the GA to optimize. Tanese found that the GA was poor at optimizing all functions in this subclass, that a partitioning of a single large population into a number of smaller independent populations seemed to improve performance, and that hillclimbing outperformed both the original and partitioned forms of the GA on these functions. These results seemed to contradict several commonly held expectations about GAs. We begin by reviewing schema processing in GAs. We then give an informal description of how Walsh analysis and Bethke's Walsh-schema transform relate to GA performance, and we discuss the relevance of this analysis for GA applications in optimization and machine learning. We then describe Tanese's surprising results, examine them experimentally and theoretically, and propose and evaluate some explanations. These explanations lead to a more fundamental question about GAs: what are the features of problems that determine the likelihood of successful GA performance?
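The abstract describes fitness functions expressed as Walsh polynomials. As a minimal sketch (not code from the paper), a Walsh polynomial over length-n bit strings is f(x) = Σ_j w_j ψ_j(x), where the basis function ψ_j(x) is +1 or −1 depending on the parity of the 1-bits shared by the partition index j and the string x. The coefficients below are hypothetical, chosen only for illustration; they are not Tanese's actual test functions.

```python
def walsh_basis(j: int, x: int) -> int:
    """Walsh basis function psi_j(x) = (-1)^(number of 1-bits shared by j and x)."""
    return -1 if bin(j & x).count("1") % 2 else 1

def walsh_polynomial(coeffs: dict, x: int) -> float:
    """Evaluate f(x) = sum over partition indices j of w_j * psi_j(x)."""
    return sum(w * walsh_basis(j, x) for j, w in coeffs.items())

# Hypothetical coefficients for a 3-bit function (illustration only).
coeffs = {0b000: 2.0, 0b011: 1.0, 0b101: -1.0}
fitness_table = {x: walsh_polynomial(coeffs, x) for x in range(8)}
```

In this representation, the order of a coefficient (the number of 1-bits in j) indicates how many bit positions interact, which is what connects Walsh analysis to schema processing in the paper's discussion.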

Original language: English (US)
Pages (from-to): 285-319
Number of pages: 35
Journal: Machine Learning
Volume: 13
Issue number: 2
DOI: 10.1023/A:1022626114466
State: Published - Jan 1 1993
Externally published: Yes

Fingerprint

  • Genetic algorithms
  • Polynomials
  • Walsh transforms
  • Learning systems
  • Processing

Keywords

  • deception
  • Genetic algorithms
  • Tanese functions
  • Walsh analysis

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

Cite this

What Makes a Problem Hard for a Genetic Algorithm? Some Anomalous Results and Their Explanation. / Forrest, Stephanie; Mitchell, Melanie.

In: Machine Learning, Vol. 13, No. 2, 01.01.1993, p. 285-319.

Research output: Contribution to journal › Article

@article{cd08a848567a4d1aa1e477a965c5eb9a,
title = "What Makes a Problem Hard for a Genetic Algorithm? Some Anomalous Results and Their Explanation",
abstract = "What makes a problem easy or hard for a genetic algorithm (GA)? This question has become increasingly important as people have tried to apply the GA to ever more diverse types of problems. Much previous work on this question has studied the relationship between GA performance and the structure of a given fitness function when it is expressed as a Walsh polynomial. The work of Bethke, Goldberg, and others has produced certain theoretical results about this relationship. In this article we review these theoretical results, and then discuss a number of seemingly anomalous experimental results reported by Tanese concerning the performance of the GA on a subclass of Walsh polynomials, some members of which were expected to be easy for the GA to optimize. Tanese found that the GA was poor at optimizing all functions in this subclass, that a partitioning of a single large population into a number of smaller independent populations seemed to improve performance, and that hillclimbing outperformed both the original and partitioned forms of the GA on these functions. These results seemed to contradict several commonly held expectations about GAs. We begin by reviewing schema processing in GAs. We then give an informal description of how Walsh analysis and Bethke's Walsh-schema transform relate to GA performance, and we discuss the relevance of this analysis for GA applications in optimization and machine learning. We then describe Tanese's surprising results, examine them experimentally and theoretically, and propose and evaluate some explanations. These explanations lead to a more fundamental question about GAs: what are the features of problems that determine the likelihood of successful GA performance?",
keywords = "deception, Genetic algorithms, Tanese functions, Walsh analysis",
author = "Stephanie Forrest and Melanie Mitchell",
year = "1993",
month = "1",
day = "1",
doi = "10.1023/A:1022626114466",
language = "English (US)",
volume = "13",
pages = "285--319",
journal = "Machine Learning",
issn = "0885-6125",
publisher = "Springer Netherlands",
number = "2",

}

TY - JOUR

T1 - What Makes a Problem Hard for a Genetic Algorithm? Some Anomalous Results and Their Explanation

AU - Forrest, Stephanie

AU - Mitchell, Melanie

PY - 1993/1/1

Y1 - 1993/1/1

N2 - What makes a problem easy or hard for a genetic algorithm (GA)? This question has become increasingly important as people have tried to apply the GA to ever more diverse types of problems. Much previous work on this question has studied the relationship between GA performance and the structure of a given fitness function when it is expressed as a Walsh polynomial. The work of Bethke, Goldberg, and others has produced certain theoretical results about this relationship. In this article we review these theoretical results, and then discuss a number of seemingly anomalous experimental results reported by Tanese concerning the performance of the GA on a subclass of Walsh polynomials, some members of which were expected to be easy for the GA to optimize. Tanese found that the GA was poor at optimizing all functions in this subclass, that a partitioning of a single large population into a number of smaller independent populations seemed to improve performance, and that hillclimbing outperformed both the original and partitioned forms of the GA on these functions. These results seemed to contradict several commonly held expectations about GAs. We begin by reviewing schema processing in GAs. We then give an informal description of how Walsh analysis and Bethke's Walsh-schema transform relate to GA performance, and we discuss the relevance of this analysis for GA applications in optimization and machine learning. We then describe Tanese's surprising results, examine them experimentally and theoretically, and propose and evaluate some explanations. These explanations lead to a more fundamental question about GAs: what are the features of problems that determine the likelihood of successful GA performance?

AB - What makes a problem easy or hard for a genetic algorithm (GA)? This question has become increasingly important as people have tried to apply the GA to ever more diverse types of problems. Much previous work on this question has studied the relationship between GA performance and the structure of a given fitness function when it is expressed as a Walsh polynomial. The work of Bethke, Goldberg, and others has produced certain theoretical results about this relationship. In this article we review these theoretical results, and then discuss a number of seemingly anomalous experimental results reported by Tanese concerning the performance of the GA on a subclass of Walsh polynomials, some members of which were expected to be easy for the GA to optimize. Tanese found that the GA was poor at optimizing all functions in this subclass, that a partitioning of a single large population into a number of smaller independent populations seemed to improve performance, and that hillclimbing outperformed both the original and partitioned forms of the GA on these functions. These results seemed to contradict several commonly held expectations about GAs. We begin by reviewing schema processing in GAs. We then give an informal description of how Walsh analysis and Bethke's Walsh-schema transform relate to GA performance, and we discuss the relevance of this analysis for GA applications in optimization and machine learning. We then describe Tanese's surprising results, examine them experimentally and theoretically, and propose and evaluate some explanations. These explanations lead to a more fundamental question about GAs: what are the features of problems that determine the likelihood of successful GA performance?

KW - deception

KW - Genetic algorithms

KW - Tanese functions

KW - Walsh analysis

UR - http://www.scopus.com/inward/record.url?scp=0027701744&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0027701744&partnerID=8YFLogxK

U2 - 10.1023/A:1022626114466

DO - 10.1023/A:1022626114466

M3 - Article

VL - 13

SP - 285

EP - 319

JO - Machine Learning

JF - Machine Learning

SN - 0885-6125

IS - 2

ER -