Comparing a class of dynamic model-based reinforcement learning schemes for handoff prioritization in mobile communication networks

El Sayed M. El-Alfy, Yu Dong Yao

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

This paper presents and compares three model-based reinforcement learning schemes for admission control with handoff prioritization in mobile communication networks. The goal is to reduce handoff failures while making efficient use of the wireless network resources. A performance measure is formed as a weighted linear function of the blocking probability of new connection requests and the handoff failure probability. The problem is then formulated as a semi-Markov decision process with an average cost criterion, and a simulation-based learning algorithm is developed to approximate the optimal control policy. The proposed schemes are driven by a dynamic model estimated simultaneously while learning the control policy, using samples generated from direct interactions with the network. Extensive simulations are provided to assess the effectiveness of the proposed schemes under a variety of traffic conditions and to compare them with some well-known policies.
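To make the admission-control setting concrete, the sketch below shows a simplified, model-free average-cost (R-learning style) controller for a single cell. It is not the model-based schemes proposed in the paper; all parameters (channel count, arrival and departure rates, cost weights, step sizes) are hypothetical assumptions chosen only for illustration. Decisions are taken at arrival instants; new calls may be rejected to protect handoffs, and the penalty weights mirror the weighted cost of new-call blocking versus handoff failure.

```python
import random

# Hypothetical single-cell setting: C channels, Poisson new-call and handoff
# arrivals, exponential call holding times. All numeric values are assumptions.
C = 10                                # channels in the cell
LAMBDA_NEW, LAMBDA_HO = 1.0, 0.5      # new-call / handoff arrival rates
MU = 0.2                              # departure rate per active call
W_NEW, W_HO = 1.0, 10.0               # penalty: blocked new call vs. failed handoff
ALPHA, BETA, EPS = 0.05, 0.01, 0.1    # step sizes and exploration probability

Q = {}        # Q[(occupancy, event_type)] -> [cost-to-go of reject, of accept]
rho = 0.0     # running estimate of the average cost per decision epoch

def q(state, action):
    return Q.setdefault(state, [0.0, 0.0])[action]

def next_event(occupancy):
    """Sample the next event via competing exponential clocks."""
    rates = [("new", LAMBDA_NEW), ("ho", LAMBDA_HO), ("dep", occupancy * MU)]
    r = random.random() * sum(rate for _, rate in rates)
    for event, rate in rates:
        if r < rate:
            return event
        r -= rate
    return "dep"

occupancy, pending = 0, None   # pending = (state, action, cost, was_greedy)
for _ in range(200_000):
    event = next_event(occupancy)
    if event == "dep":                 # a call finishes; no decision to make
        occupancy = max(occupancy - 1, 0)
        continue

    state = (occupancy, event)

    # Complete the update for the previous decision now that the next
    # decision state is known (decision epochs are arrival instants).
    if pending is not None:
        ps, pa, pc, was_greedy = pending
        best_next = min(q(state, 0), q(state, 1))
        Q[ps][pa] = q(ps, pa) + ALPHA * (pc - rho + best_next - q(ps, pa))
        if was_greedy:                 # update the average-cost estimate only on greedy steps
            rho += BETA * (pc + best_next - min(q(ps, 0), q(ps, 1)) - rho)

    # Admission decision: handoffs are admitted whenever a channel is free;
    # new calls follow an epsilon-greedy policy over the learned values.
    if event == "ho":
        action, greedy = (1 if occupancy < C else 0), True
    elif random.random() < EPS:
        action, greedy = random.randrange(2), False
    else:
        action, greedy = (1 if q(state, 1) < q(state, 0) else 0), True

    cost = 0.0
    if action == 1 and occupancy < C:
        occupancy += 1                 # admit the request
    else:
        cost = W_HO if event == "ho" else W_NEW   # failed handoff or blocked new call
    pending = (state, action, cost, greedy)

print("estimated average cost per decision:", round(rho, 3))
```

In the paper's formulation the transition model is estimated online and used to drive learning; the sketch above omits that model-estimation step and the continuous-time sojourn weighting of the SMDP for brevity.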

Original language: English
Pages (from-to): 8730-8737
Number of pages: 8
Journal: Expert Systems with Applications
Volume: 38
Issue number: 7
DOIs
State: Published - Jul 2011

Keywords

  • Cellular systems
  • Handoff prioritization
  • Mobile communication networks
  • Reinforcement learning
  • Resource management
  • Semi-Markov decision process
