Response surface design evaluation and comparison

Christine M. Anderson-Cook, Connie M. Borror, Douglas Montgomery

Research output: Contribution to journal › Article

83 Citations (Scopus)

Abstract

Designing an experiment to fit a response surface model typically involves selecting among several candidate designs. There are often many competing criteria that could be considered in selecting the design, and practitioners are typically forced to make trade-offs between these objectives when choosing the final design. Traditional alphabetic optimality criteria are often used in evaluating and comparing competing designs. These optimality criteria are single-number summaries for quality properties of the design, such as the precision with which the model parameters are estimated or the uncertainty associated with prediction. Other important considerations include the robustness of the design to model misspecification and potential problems arising from spurious or missing data. Several qualitative and quantitative properties of good response surface designs are discussed, and some of their important trade-offs are considered. Graphical methods for evaluating design performance for several important response surface problems are discussed, and we show how these techniques can be used to compare competing designs. These graphical methods are generally superior to the simplistic summaries of alphabetic optimality criteria. Several special cases are considered, including robust parameter designs, split-plot designs, mixture experiment designs, and designs for generalized linear models.
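The abstract contrasts single-number alphabetic criteria with distribution-level graphical summaries. As a rough, hypothetical illustration (not taken from the paper), the Python sketch below computes a D-type criterion for a small face-centered central composite design and the empirical distribution of scaled prediction variance across the design region, whose quantiles are what a fraction of design space (FDS) plot displays. The design, model, and sampling choices here are assumptions made for this example only.

# Minimal sketch (not from the paper): a single-number "alphabetic" D
# criterion versus the distribution of scaled prediction variance (SPV)
# over the design space, which FDS plots summarize. The face-centered
# central composite design and full quadratic model in two factors are
# illustrative assumptions, not the authors' examples.
import numpy as np

def model_matrix(pts):
    """Full second-order model in two factors: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones(len(pts)), x1, x2, x1 * x2, x1**2, x2**2])

# Face-centered CCD: 2^2 factorial, four axial points, one center run.
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [-1, 0], [1, 0], [0, -1], [0, 1],
                   [0, 0]], dtype=float)

X = model_matrix(design)
n, p = X.shape
XtX_inv = np.linalg.inv(X.T @ X)

# D-type summary: one number for the whole design.
d_criterion = np.linalg.det(X.T @ X) ** (1 / p) / n

# SPV(x) = n * f(x)' (X'X)^{-1} f(x), evaluated at uniformly sampled points;
# the empirical quantiles of these values are what an FDS plot displays.
rng = np.random.default_rng(1)
grid = rng.uniform(-1, 1, size=(5000, 2))
F = model_matrix(grid)
spv = n * np.einsum("ij,jk,ik->i", F, XtX_inv, F)

print(f"D criterion: {d_criterion:.3f}")
for q in (0.1, 0.5, 0.9, 1.0):
    print(f"SPV at FDS fraction {q:.0%}: {np.quantile(spv, q):.2f}")

Comparing these quantiles across two candidate designs gives the kind of whole-region view of prediction quality that, as the abstract argues, a single-number criterion cannot convey.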

Original language: English (US)
Pages (from-to): 629-641
Number of pages: 13
Journal: Journal of Statistical Planning and Inference
Publisher: Elsevier
ISSN: 0378-3758
Volume: 139
Issue number: 2
DOI: 10.1016/j.jspi.2008.04.004
State: Published - Feb 1 2009

Keywords

  • Design optimality
  • Fraction of design space plots
  • Graphical methods
  • Variance dispersion graphs

ASJC Scopus subject areas

  • Statistics, Probability and Uncertainty
  • Applied Mathematics
  • Statistics and Probability

Cite this

Anderson-Cook, Christine M.; Borror, Connie M.; Montgomery, Douglas. Response surface design evaluation and comparison. In: Journal of Statistical Planning and Inference, Vol. 139, No. 2, 01.02.2009, p. 629-641. DOI: 10.1016/j.jspi.2008.04.004.
