Authors:
Yukinobu Fukushima
Yuki Koujitani
Kazutoshi Nakane
Yuta Tarutani
Celimuge Wu
Yusheng Ji
Tokumi Yokohira
Tutomu Murase
Keywords: Multi-stage information processing system, VM migration control, Deep reinforcement learning, Deep Deterministic Policy Gradient (DDPG)
Abstract:
This paper tackles a Virtual Machine (VM) migration control problem to maximize the progress (accuracy) of information processing tasks in multi-stage information processing systems. The conventional methods for this problem (e.g., the VM sweeping method and the VM number averaging method) are effective only in specific situations, such as when the system load is high. In this paper, in order to achieve high accuracy in various situations, we propose a VM migration method using a Deep Reinforcement Learning (DRL) algorithm. It is difficult to apply a DRL algorithm directly to the VM migration control problem because the size of the problem's solution space changes dynamically with the number of VMs staying in the system, whereas the size of the agent's action space is fixed in DRL algorithms. Therefore, the proposed method divides the VM migration control problem into two subproblems: determining only the VM distribution (i.e., the proportion of VMs deployed on each edge server) and determining the locations of all the VMs so that they follow the determined distribution. The former subproblem is solved by a DRL algorithm, and the latter by a heuristic method. Simulation results confirm that the proposed method can select quasi-optimal VM locations in various situations with different link delays.
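To make the decomposition described in the abstract concrete, the following minimal Python sketch (not the authors' implementation) shows the second step: turning a DRL-produced VM distribution (per-server proportions) into integer VM counts that sum to the number of VMs currently in the system. The function name and the largest-remainder rounding rule are illustrative assumptions; the paper's heuristic for placing individual VMs is not reproduced here.

```python
def assign_vms_to_distribution(num_vms, target_proportions):
    """Convert a target VM distribution into per-server VM counts.

    Uses largest-remainder rounding so the counts sum exactly to num_vms
    while staying close to the target proportions (hypothetical heuristic).
    """
    quotas = [p * num_vms for p in target_proportions]
    counts = [int(q) for q in quotas]          # floor of each quota
    remainder = num_vms - sum(counts)          # VMs still unassigned
    # Give the leftover VMs to the servers with the largest fractional parts.
    order = sorted(range(len(quotas)),
                   key=lambda i: quotas[i] - counts[i],
                   reverse=True)
    for i in order[:remainder]:
        counts[i] += 1
    return counts


if __name__ == "__main__":
    # Example: 10 VMs, three edge servers, a DRL-suggested distribution (0.5, 0.3, 0.2).
    print(assign_vms_to_distribution(10, [0.5, 0.3, 0.2]))  # -> [5, 3, 2]
```

In this reading, the DRL agent (e.g., DDPG) only has to output a fixed-size vector of proportions, one per edge server, so its action space stays fixed even as the number of VMs in the system varies.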
Pages: 13 to 18
Copyright: Copyright (c) IARIA, 2024
Publication date: May 26, 2024
Published in: ICN 2024, The Twenty-Third International Conference on Networks
ISSN: 2308-4413
ISBN: 978-1-68558-174-9
Location: Barcelona, Spain
Dates: from May 26, 2024 to May 30, 2024