TY - JOUR
T1 - Impact of Heterogeneity and Risk Aversion on Task Allocation in Multi-Agent Teams
AU - Wu, Haochen
AU - Ghadami, Amin
AU - Bayrak, Alparslan Emrah
AU - Smereka, Jonathon M.
AU - Epureanu, Bogdan I.
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/10
Y1 - 2021/10
N2 - Cooperative multi-agent decision-making is a ubiquitous problem with many real-world applications. In practice, it is often desirable to design a multi-agent team with a heterogeneous composition, where agents have different capabilities and levels of risk tolerance to address diverse requirements. While heterogeneity in multi-agent teams offers benefits, it also raises new challenges, including how to find optimal heterogeneous team compositions and how to dynamically distribute tasks among agents in complex operations. In this work, we develop an artificial intelligence framework for heterogeneous multi-agent teams that dynamically learns task distributions among agents through reinforcement learning. The framework extends Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) to model various types of heterogeneity. We demonstrate our approach on a benchmark problem based on a disaster relief scenario and analyze the effect of heterogeneity and risk aversion in agent capabilities and decision-making strategies on team performance in uncertain environments. Results show that a well-designed heterogeneous team outperforms its homogeneous counterpart and adapts better to uncertain environments.
AB - Cooperative multi-agent decision-making is a ubiquitous problem with many real-world applications. In practice, it is often desirable to design a multi-agent team with a heterogeneous composition, where agents have different capabilities and levels of risk tolerance to address diverse requirements. While heterogeneity in multi-agent teams offers benefits, it also raises new challenges, including how to find optimal heterogeneous team compositions and how to dynamically distribute tasks among agents in complex operations. In this work, we develop an artificial intelligence framework for heterogeneous multi-agent teams that dynamically learns task distributions among agents through reinforcement learning. The framework extends Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) to model various types of heterogeneity. We demonstrate our approach on a benchmark problem based on a disaster relief scenario and analyze the effect of heterogeneity and risk aversion in agent capabilities and decision-making strategies on team performance in uncertain environments. Results show that a well-designed heterogeneous team outperforms its homogeneous counterpart and adapts better to uncertain environments.
KW - AI-based methods
KW - cooperating robots
KW - multi-robot systems
KW - reinforcement learning
KW - task planning
UR - http://www.scopus.com/inward/record.url?scp=85110823251&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85110823251&partnerID=8YFLogxK
U2 - 10.1109/LRA.2021.3097259
DO - 10.1109/LRA.2021.3097259
M3 - Article
AN - SCOPUS:85110823251
VL - 6
SP - 7065
EP - 7072
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 4
M1 - 9484733
ER -