Risk-averse sensor planning using distributed policy gradient

Wann Jiun Ma, Darinka Dentcheva, Michael M. Zavlanos

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

This paper considers a risk-averse approach to planning the motion of mobile sensor networks in order to maximize the information they collect in uncertain environments. Recent models of risk shape the tails of the probability distributions of the decision variables, thereby controlling the occurrence of rare but important events. In this paper, we formulate the sensor planning problem as a Markov Decision Process (MDP) and propose a distributed risk-averse policy gradient method to obtain optimal policies for the team of sensors. These policies avoid events with extremely low reward and high risk. Simulation results validate the effectiveness of the proposed distributed risk-averse method.
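To give a flavor of the risk-averse policy gradient idea described in the abstract, the following is a minimal illustrative sketch, not the authors' algorithm: a REINFORCE-style gradient estimator that optimizes the CVaR (conditional value-at-risk) of the reward on a hypothetical two-action "sensing" bandit, where one action has a higher mean reward but a rare, very bad outcome. All names, reward values, and the CVaR formulation are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(action):
    # Hypothetical rewards: action 0 is safe; action 1 has a higher mean
    # but a 10% chance of a rare, very low-reward ("risky") outcome.
    if action == 0:
        return rng.normal(1.0, 0.1)
    return rng.normal(1.3, 0.1) if rng.random() > 0.1 else rng.normal(-3.0, 0.1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta = np.zeros(2)   # policy parameters: one logit per action
alpha = 0.1           # CVaR level: optimize the worst 10% of rewards
lr = 0.05

for _ in range(2000):
    probs = softmax(theta)
    actions = rng.choice(2, size=200, p=probs)
    rewards = np.array([reward(a) for a in actions])
    var = np.quantile(rewards, alpha)   # empirical value-at-risk
    tail = rewards <= var               # worst-case (tail) samples only
    # Score-function gradient of the CVaR objective: weight each tail
    # sample's log-policy gradient by how far it falls below the VaR.
    grad = np.zeros(2)
    for a, r in zip(actions[tail], rewards[tail]):
        score = -probs.copy()
        score[a] += 1.0                 # gradient of log pi(a | theta)
        grad += score * (r - var)
    grad /= max(tail.sum(), 1)
    theta += lr * grad                  # gradient ascent on CVaR

# A risk-averse policy should strongly prefer the safe action 0,
# even though action 1 has the higher expected reward.
print(softmax(theta))
```

A risk-neutral policy gradient on the same problem would favor action 1 (higher mean); shaping the tail of the reward distribution, as the abstract describes, is what flips the preference toward the safe action.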

Original language: English
Title of host publication: 2017 American Control Conference, ACC 2017
Pages: 4839-4844
Number of pages: 6
ISBN (Electronic): 9781509059928
DOIs
State: Published - 29 Jun 2017
Event: 2017 American Control Conference, ACC 2017 - Seattle, United States
Duration: 24 May 2017 - 26 May 2017

Publication series

Name: Proceedings of the American Control Conference
ISSN (Print): 0743-1619

Conference

Conference: 2017 American Control Conference, ACC 2017
Country/Territory: United States
City: Seattle
Period: 24/05/17 - 26/05/17
