A Study on Reinforcement Learning (RL) Solutions for Controlling the Horizontal and Vertical Elasticity of Container-Based Applications to Cope with Varying Workloads
What is this paper about? 👋
A study proposing Reinforcement Learning (RL) solutions for controlling the horizontal and vertical elasticity of container-based applications so they can cope with varying workloads.
Abstract (Summary) 🕵🏻♂️
Software containers are changing the way distributed applications are executed and managed on cloud computing resources. Interestingly, containers offer the possibility of handling workload fluctuations by exploiting both horizontal and vertical elasticity "on the fly". However, most of the existing control policies consider horizontal and vertical scaling as two disjointed control knobs. In this paper, we propose Reinforcement Learning (RL) solutions for controlling the horizontal and vertical elasticity of container-based applications with the goal to increase the flexibility to cope with varying workloads. Although RL represents an interesting approach, it may suffer from a possible long learning phase, especially when nothing about the system is known a-priori. To speed up the learning process and identify better adaptation policies, we propose RL solutions that exploit different degrees of knowledge about the system dynamics (i.e., Q-learning, Dyna-Q, and Model-based). We integrate the proposed policies in Elastic Docker Swarm, our extension that introduces self-adaptation capabilities in the container orchestration tool Docker Swarm. We demonstrate the effectiveness and flexibility of model-based RL policies through simulations and prototype-based experiments.
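Since the abstract's central idea is an RL agent (e.g., Q-learning) that makes joint horizontal (add/remove container replicas) and vertical (adjust CPU share) scaling decisions, here is a minimal sketch of how such a controller might look. This is not the authors' Elastic Docker Swarm implementation; the state discretization, action names, and cost-based reward below are assumptions made purely for illustration.

```python
import random
from collections import defaultdict

# Hypothetical discretization: state = (replicas, CPU share per container, utilization bin).
# The bounds and bins are illustrative assumptions, not values from the paper.
ACTIONS = ["scale_out", "scale_in", "scale_up", "scale_down", "no_op"]

class ElasticityQAgent:
    """Tabular Q-learning controller for joint horizontal/vertical scaling (sketch)."""

    def __init__(self, alpha=0.1, gamma=0.99, epsilon=0.1):
        self.q = defaultdict(float)  # Q[(state, action)] -> estimated return
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_action(self, state):
        # epsilon-greedy exploration over the scaling actions
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # standard one-step Q-learning update
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def reward(replicas, cpu_share, slo_violated):
    # Illustrative cost-based reward: penalize SLO violations and resource usage.
    resource_cost = replicas * cpu_share
    return -(10.0 if slo_violated else 0.0) - 0.1 * resource_cost

# Example interaction step (monitoring values are placeholders):
agent = ElasticityQAgent()
state = (3, 1.0, 0.6)                 # (replicas, cpu_share, utilization bin)
action = agent.choose_action(state)
r = reward(3, 1.0, slo_violated=False)
next_state = (4, 1.0, 0.4)            # observed after applying the action
agent.update(state, action, r, next_state)
```

The Dyna-Q and model-based variants mentioned in the abstract differ mainly in how the Q-values are updated (replaying simulated transitions or solving against an estimated transition model), which is why they can shorten the long learning phase of plain Q-learning.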
What can you learn from reading this paper? 🤔
What knowledge would you gain by reading this paper thoroughly?
Are there any related articles or issues worth reading alongside this paper?
If there are, feel free to add them!
Please share the URL of the reference! 🔗
Don't shorten it with markdown; just paste the original link as-is!