ICAS 2011, The Seventh International Conference on Autonomic and Autonomous Systems
Authors:
Xavier Dutreilh
Sergey Kirgizov
Olga Melekhova
Jacques Malenfant
Nicolas Rivierre
Isis Truck
Keywords: Cloud computing; virtual machine allocation; reinforcement learning; autonomic computing
Abstract:
Dynamic, appropriate resource dimensioning is a crucial issue in cloud computing. As applications increasingly run 24/7, online policies are needed to balance performance against the cost of allocated virtual machines. Most industrial approaches to date rely on ad hoc manual policies, such as threshold-based ones, but choosing good thresholds has proved tricky and hard to automate for every application's requirements. Research is under way on automatic decision-making approaches such as reinforcement learning, yet these face several obstacles to field deployment: obtaining good policies in the early phases of learning, the time needed for learning to converge to an optimal policy, and coping with changes in application performance behavior over time. In this paper, we address these problems with appropriate initialization for the early stages and with convergence speedups applied throughout the learning phases, and we present our first experimental results. We also introduce a performance model change detection mechanism, on which we are currently working, to complete the learning process management. Although some of these proposals were already known in the reinforcement learning field, the key contribution of this paper is to integrate them into a real cloud controller and to program them as an automated workflow.
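The abstract describes a reinforcement-learning controller whose Q-table is seeded so that it behaves sensibly before learning has converged. The following is a minimal illustrative sketch of that kind of setup, not the paper's actual controller: the (VM count, load level) state space, the linear cost-plus-SLA-penalty reward, the random-walk workload, and the threshold-style initialization heuristic are all assumptions made for the example.

```python
import random
from collections import defaultdict

# Illustrative sketch only: all constants, the state/reward model, and the
# initialization heuristic are assumptions, not taken from the paper.
MAX_VMS = 10
ACTIONS = (-1, 0, 1)                  # remove one VM, keep, add one VM
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def reward(vms, load):
    """Negative cost: a unit price per allocated VM plus an SLA penalty
    proportional to how under-provisioned the application is."""
    return -(1.0 * vms + 10.0 * max(0, load - vms))

def step(vms, action):
    """Apply a scaling action, clamped to the allowed VM range."""
    return min(max(vms + action, 1), MAX_VMS)

def heuristic_init():
    """Seed the Q-table from a simple threshold-like rule (echoing the
    'appropriate initialization' idea): favor actions that move the VM
    count toward the observed load."""
    q = defaultdict(float)
    for vms in range(1, MAX_VMS + 1):
        for load in range(MAX_VMS + 1):
            for a in ACTIONS:
                q[((vms, load), a)] = -abs(step(vms, a) - max(load, 1))
    return q

def q_learn(episodes=200, horizon=50, seed=0):
    """Tabular Q-learning over (vms, load) states with epsilon-greedy
    exploration and a random-walk workload."""
    rng = random.Random(seed)
    q = heuristic_init()
    for _ in range(episodes):
        vms, load = 1, rng.randint(0, MAX_VMS)
        for _ in range(horizon):
            s = (vms, load)
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            vms = step(vms, a)
            load = min(max(load + rng.choice((-1, 0, 1)), 0), MAX_VMS)
            target = reward(vms, load) + GAMMA * max(
                q[((vms, load), x)] for x in ACTIONS)
            q[(s, a)] += ALPHA * (target - q[(s, a)])
    return q

q_table = q_learn()

def policy(state):
    """Greedy scaling decision extracted from the learned Q-table."""
    return max(ACTIONS, key=lambda a: q_table[(state, a)])
```

Seeding the Q-table from a threshold rule is what gives the controller acceptable decisions in the early learning phases, which is precisely the first obstacle the abstract raises; plain zero initialization would make the initial exploration behave arbitrarily.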
Pages: 67 to 74
Copyright: Copyright (c) IARIA, 2011
Publication date: May 22, 2011
Published in: conference
ISSN: 2308-3913
ISBN: 978-1-61208-134-2
Location: Venice/Mestre, Italy
Dates: from May 22, 2011 to May 27, 2011