## Project Details

### Description

In this project we ask whether control theory can ever be used to study the large, highly complex nonlinear models that arise in fields such as fusion energy and biological networks. In recent years, Sum-of-Squares (SOS) programming has become an established tool for studying nonlinear dynamics. The problem with this tool is that it requires more computing power than a desktop computer can provide, and desktop processors are no longer getting faster. To solve this problem, we look at new specialized computing architectures that offer high speeds but require highly structured problems, a structure not present in SOS. Our conclusion is that solving large-scale analysis and control problems requires a reformulation of the SOS methodology.

The SOS approach to nonlinear stability is based on the observation that every positive matrix defines a positive polynomial which, in turn, can define a positive Lyapunov function. The more variables in the polynomial, the larger the matrix becomes; thus, for systems with many states, the matrices are intractably large. Moreover, because these matrices do not have any usable structure, and because optimization over positive matrices is an inherently sequential problem, efforts to parallelize the search for a Lyapunov function using SOS have failed. To solve this problem, we turn to early work on representation theory for SOS. Specifically, in the work of Polya, Handelman, and others, it was shown that in certain cases the cone of Sum-of-Squares polynomials can be represented as positive linear combinations of a finite set of known positive functions. Thus, instead of searching for a large positive matrix, we need only search over a set of positive coefficients, a problem with a natural parallel implementation.

The disadvantage of this reformulation of the SOS condition is that, unlike the classical formulation, the alternative SOS representations are incapable of defining Lyapunov functions due to the presence of "bad points". The specific goals of this project are: first, to reformulate the conditions of Polya and Handelman so that they can be used to construct Lyapunov functions for local stability analysis on an arbitrary polytope; second, to develop efficient parallel algorithms to formulate and solve large-scale nonlinear stability problems, both on traditional high-performance computing platforms and on GPU computing platforms.

| Status | Finished |
| --- | --- |
| Effective start/end date | 9/1/15 → 8/31/19 |

### Funding

- National Science Foundation (NSF): $280,000.00
