Combat robot strategy adaptation using multiple learning agents

Thomas Recchia, Jae Chung, Kishore Pochiraju

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

As robotic systems become more prevalent, it is highly desirable for them to operate in dynamic environments. A common approach is to use reinforcement learning, allowing an agent controlling the robot to learn and adapt its behavior based on a reward function. This paper presents a novel multi-agent system that cooperates to control a single robot battle tank in a melee battle scenario with no prior knowledge of its opponents' strategies. The agents learn through reinforcement learning and are loosely coupled by their reward functions, with each agent controlling a different aspect of the robot's behavior. In addition, the problem of delayed reward is addressed by applying a time-averaged reward to several sequential actions at once. The system was evaluated in a simulated melee combat scenario and was shown to improve its performance over time, with each agent learning to select specific battle strategies for each opponent it faced.
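
The delayed-reward handling summarized above lends itself to a brief illustration. The Python sketch below shows one general way to spread a single time-averaged reward across the last few buffered actions of a tabular Q-learning agent; the class name, parameters, and update rule are hypothetical assumptions for illustration and are not taken from the paper.

```python
# Hypothetical sketch: distributing a time-averaged delayed reward over the
# last few actions of a tabular Q-learning agent. Names and the update rule
# are illustrative assumptions, not the authors' exact formulation.
from collections import defaultdict, deque


class AveragedRewardAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, window=5):
        self.q = defaultdict(float)          # Q[(state, action)] -> value estimate
        self.actions = actions
        self.alpha = alpha                   # learning rate
        self.gamma = gamma                   # discount factor
        self.history = deque(maxlen=window)  # most recent (state, action) pairs

    def choose(self, state):
        # Greedy selection; a real agent would add exploration (e.g. epsilon-greedy).
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def record(self, state, action):
        # Buffer the action so a later reward can be shared with it.
        self.history.append((state, action))

    def apply_delayed_reward(self, total_reward, next_state):
        # Time-average the delayed reward over the buffered actions and update
        # each of them with a standard one-step Q-learning rule.
        if not self.history:
            return
        shared = total_reward / len(self.history)
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        for state, action in self.history:
            td_target = shared + self.gamma * best_next
            self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
        self.history.clear()
```

In this sketch, several such agents could run side by side, each with its own action set and reward function, which loosely mirrors the loosely coupled multi-agent arrangement the abstract describes.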

Original language: English
Title of host publication: Dynamics, Control and Uncertainty
Pages: 305-313
Number of pages: 9
Edition: PARTS A AND B
DOIs
State: Published - 2012
Event: ASME 2012 International Mechanical Engineering Congress and Exposition, IMECE 2012 - Houston, TX, United States
Duration: 9 Nov 2012 to 15 Nov 2012

Publication series

Name: ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)
Number: PARTS A AND B
Volume: 4

Conference

Conference: ASME 2012 International Mechanical Engineering Congress and Exposition, IMECE 2012
Country/Territory: United States
City: Houston, TX
Period: 9/11/12 to 15/11/12
