Cheaper and Faster: Distributed Deep Reinforcement Learning with Serverless Computing

Hanfei Yu, Jian Li, Yang Hua, Xu Yuan, Hao Wang

Research output: Contribution to journal › Conference article › peer-review


Abstract

Deep reinforcement learning (DRL) has achieved immense success in many applications, including gaming AI, robotics, and system scheduling. Distributed algorithms and architectures (e.g., the actor-learner architecture) have been widely proposed to accelerate DRL training with large-scale server-based clusters. However, training on-policy algorithms with the actor-learner architecture unavoidably induces resource waste due to synchronization between learners and actors, resulting in significant extra billing. As a promising alternative, serverless computing naturally fits on-policy synchronization and alleviates resource waste in distributed DRL training with pay-as-you-go pricing. Yet, no prior work has leveraged serverless computing to facilitate DRL training. This paper proposes MINIONSRL, the first serverless distributed DRL training framework, which aims to improve DRL training speed and cost-efficiency with dynamic actor scaling. We prototype MINIONSRL on top of Microsoft Azure Container Instances and evaluate it with popular DRL tasks from OpenAI Gym. Extensive experiments show that MINIONSRL reduces total training time by up to 52% and training cost by up to 86% compared to the latest solutions.
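The abstract does not give MINIONSRL's scheduler details, but the on-policy actor-learner pattern it describes can be illustrated with a minimal, self-contained sketch. All names below (`collect_rollout`, `train_iteration`, the per-iteration actor schedule) are hypothetical; the point is that the learner must wait for every actor's rollout each iteration, which is where server-based clusters waste billed idle time and where per-invocation serverless actors with a dynamically chosen count can save cost:

```python
import random

def collect_rollout(policy_version, steps=16):
    # Hypothetical serverless actor: collects `steps` transitions
    # under the current policy version, then terminates (pay-per-use).
    return [(policy_version, random.random()) for _ in range(steps)]

def train_iteration(num_actors, policy_version):
    # On-policy synchronization barrier: the learner waits for ALL
    # actors before updating, so any idle actor time is wasted on
    # always-on servers but not billed with serverless functions.
    rollouts = [collect_rollout(policy_version) for _ in range(num_actors)]
    batch = [t for rollout in rollouts for t in rollout]
    return policy_version + 1, len(batch)

def train(total_iters=5, actor_schedule=None):
    # Dynamic actor scaling (illustrative): a scheduler picks how many
    # actors to launch each iteration, e.g. tapering as training converges.
    actor_schedule = actor_schedule or [8, 8, 4, 4, 2]
    version, total_samples = 0, 0
    for it in range(total_iters):
        version, n = train_iteration(actor_schedule[it], version)
        total_samples += n
    return version, total_samples
```

Each call to `train_iteration` advances the policy version only after the full synchronous batch arrives, mirroring on-policy training; a fixed-size cluster would pay for all eight actors even in the later iterations that need only two.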

Original language: English
Pages (from-to): 16539-16547
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 38
Issue number: 15
DOIs
State: Published - 25 Mar 2024
Event: 38th AAAI Conference on Artificial Intelligence, AAAI 2024 - Vancouver, Canada
Duration: 20 Feb 2024 - 27 Feb 2024

