## Abstract

This paper generalizes the widely used Nelder and Mead (Comput J 7:308-313, 1965) simplex algorithm to parallel processors. Unlike most previous parallelization methods, which parallelize the tasks required to compute a specific objective function for a given parameter vector, our parallel simplex algorithm parallelizes at the parameter level. It assigns each processor a separate vector of parameters corresponding to a point on a simplex. The processors then conduct the simplex search steps for an improved point, communicate the results, and a new simplex is formed. The advantage of this method is that the algorithm is generic: it can be applied, without rewriting computer code, to any optimization problem to which the non-parallel Nelder-Mead algorithm is applicable. The method also scales easily to any degree of parallelization up to the number of parameters. In a series of Monte Carlo experiments, we show that this parallel simplex method yields computational savings that, in some experiments, reach up to three times the number of processors.
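The abstract describes the scheme at a high level: each of P processors owns one simplex vertex, independently attempts a simplex search step (reflection, expansion, or contraction) against the retained vertices, and the results are combined into a new simplex. The sketch below illustrates that idea in Python; it is not the paper's code, the coefficient choices and the plain loop standing in for P processors are assumptions, and the function names (`parallel_simplex_step`, `minimize`) are hypothetical.

```python
import numpy as np

def parallel_simplex_step(f, simplex, fvals, P):
    """One iteration of a parameter-level parallel Nelder-Mead step.

    The P worst vertices are each updated independently (here in a plain
    loop standing in for P processors): each reflects through the centroid
    of the retained best vertices, with expansion/contraction fallbacks
    and a shrink toward the best vertex as a last resort.
    """
    n1 = len(simplex)                         # n + 1 vertices
    order = np.argsort(fvals)
    simplex, fvals = simplex[order], fvals[order]
    centroid = simplex[: n1 - P].mean(axis=0)  # centroid of retained vertices
    for j in range(n1 - P, n1):               # each j would run on its own processor
        xr = centroid + (centroid - simplex[j])          # reflection
        fr = f(xr)
        if fr < fvals[0]:                                # try expansion
            xe = centroid + 2.0 * (centroid - simplex[j])
            fe = f(xe)
            if fe < fr:
                xr, fr = xe, fe
        elif fr >= fvals[j]:                             # outside contraction
            xr = centroid + 0.5 * (centroid - simplex[j])
            fr = f(xr)
        if fr < fvals[j]:
            simplex[j], fvals[j] = xr, fr                # accept improved point
        else:                                            # shrink toward best vertex
            simplex[j] = simplex[0] + 0.5 * (simplex[j] - simplex[0])
            fvals[j] = f(simplex[j])
    return simplex, fvals

def minimize(f, x0, P=2, iters=300, step=1.0):
    """Run the parallel simplex iteration from a starting point x0."""
    n = len(x0)
    simplex = np.vstack([x0] + [x0 + step * np.eye(n)[i] for i in range(n)])
    fvals = np.array([f(x) for x in simplex])
    for _ in range(iters):
        simplex, fvals = parallel_simplex_step(f, simplex, fvals, P)
    return simplex[np.argmin(fvals)]
```

Because the P updates within one iteration use only the pre-iteration simplex, they can be dispatched to separate processors and merged afterward, which is what makes the method generic: only parameter vectors and function values cross processor boundaries, not problem-specific code.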

| Original language | English (US) |
|---|---|
| Pages (from-to) | 171-187 |
| Number of pages | 17 |
| Journal | Computational Economics |
| Volume | 30 |
| Issue number | 2 |
| DOIs | |
| State | Published - Sep 1 2007 |

## Keywords

- Optimization algorithms
- Parallel computing

## ASJC Scopus subject areas

- Economics, Econometrics and Finance (miscellaneous)
- Computer Science Applications