Distributed Non-convex Optimization of Multi-agent Systems Using Boosting Functions to Escape Local Optima

Shirantha Welikala, Christos G. Cassandras

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We address the problem of multiple local optima arising in cooperative multi-agent optimization problems with non-convex objective functions. We propose a systematic approach to escape these local optima using boosting functions. These functions temporarily transform a gradient at a local optimum into a "boosted" non-zero gradient. Extending a prior centralized optimization approach, we develop a distributed framework for the use of boosted gradients and show that convergence of this distributed process can be attained by employing an optimal variable step size scheme for gradient-based algorithms. Numerical examples are included to show how the performance of a class of multi-agent optimization systems can be improved.
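The abstract describes the core mechanism at a high level: when a gradient-based update stalls at a local optimum, the vanishing gradient is temporarily replaced by a non-zero "boosted" gradient so the optimization can keep moving. The sketch below is only an illustration of that idea under simplifying assumptions, not the authors' algorithm: the boosting function `boost`, the fixed-length boosting phase, and the constant step size are hypothetical stand-ins for the boosting functions and the optimal variable step size scheme developed in the paper.

```python
import numpy as np

def boosted_gradient_ascent(f, grad, boost, x0, step=0.05,
                            boost_steps=20, max_iters=2000, tol=1e-6):
    """Minimal sketch: gradient ascent that switches to a "boosting"
    phase whenever the true gradient vanishes at a local optimum.

    f, grad : objective and its gradient
    boost   : returns a non-zero "boosted" gradient used to escape
    """
    x = np.asarray(x0, dtype=float)
    x_best, f_best = x.copy(), f(x)
    boosting = 0  # remaining steps in the current boosting phase
    for _ in range(max_iters):
        g = grad(x)
        if boosting == 0 and np.linalg.norm(g) < tol:
            boosting = boost_steps  # local optimum: start a boost phase
        if boosting > 0:
            g = boost(x)            # follow the boosted, non-zero gradient
            boosting -= 1
        # Fixed step for brevity; the paper instead employs an optimal
        # variable step size scheme to guarantee convergence.
        x = x + step * g
        if f(x) > f_best:
            x_best, f_best = x.copy(), f(x)
    return x_best

# Illustrative 1-D objective with a poor local maximum near x = -0.43
# and the global maximum near x = 1.18.
f = lambda x: -x[0]**4 + x[0]**3 + x[0]**2
g = lambda x: np.array([-4*x[0]**3 + 3*x[0]**2 + 2*x[0]])
b = lambda x: np.array([1.0])   # hypothetical boosting direction
print(boosted_gradient_ascent(f, g, b, [-0.5]))  # ends near the global maximum
```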

Original language: English
Title of host publication: 2020 American Control Conference, ACC 2020
Pages: 2723-2728
Number of pages: 6
ISBN (Electronic): 9781538682661
DOIs
State: Published - Jul 2020
Event: 2020 American Control Conference, ACC 2020 - Denver, United States
Duration: 1 Jul 2020 - 3 Jul 2020

Publication series

Name: Proceedings of the American Control Conference
Volume: 2020-July
ISSN (Print): 0743-1619

Conference

Conference: 2020 American Control Conference, ACC 2020
Country/Territory: United States
City: Denver
Period: 1/07/20 - 3/07/20
