
Network slicing environment

Description

Source code of the paper "Model-Based Reinforcement Learning with Kernels for Resource Allocation in RAN Slices", published in IEEE Transactions on Wireless Communications. The code provides a network slicing environment in which time-frequency resources must be allocated among several network slices over consecutive decision stages. The environment implements the OpenAI Gym interface (https://github.com/openai/gym) and can interact with Stable-Baselines RL agents (https://github.com/hill-a/stable-baselines) and Keras-RL agents (https://github.com/keras-rl/keras-rl). Additionally, the code includes a novel model-based RL control algorithm (KBRL).

At each decision stage, the control agent observes a set of variables describing the state of the system. Based on these observations, the agent makes a resource allocation decision specifying the resource blocks (RBs) assigned to each slice for a number of upcoming radio frames (the observation period). At the end of each observation period, the agent receives a signal indicating the fulfillment or violation of the service level agreement (SLA) of each slice during that period. The end of an observation period marks the start of a new decision stage. The objective of the control agent is to allocate resources efficiently (i.e., use the minimum required number of RBs) while satisfying the SLAs of the network slices.
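As an illustration of this interaction (a minimal sketch, not taken from the repository's experiment scripts), the following runs a random agent for one episode. The module name gym_ran_slice and the environment id 'ran_slice-v0' are assumptions; check the registration code in the gym-ran_slice folder for the exact names.

    import gym
    import gym_ran_slice  # assumed module name; importing it registers the environment

    # 'ran_slice-v0' is an assumed id; see the register() call in gym-ran_slice
    env = gym.make('ran_slice-v0')

    obs = env.reset()
    done = False
    while not done:
        # sample a random allocation of RBs to the slices from the action space
        action = env.action_space.sample()
        # one decision stage: the allocation holds for one observation period,
        # after which the agent observes the new state and the SLA feedback
        obs, reward, done, info = env.step(action)

Since the environment exposes the standard Gym interface, a model-free Stable-Baselines agent such as PPO2 can be trained on it in the usual way, e.g. PPO2('MlpPolicy', env).learn(total_timesteps=100000).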

Acknowledgements

This work was supported by project grant PID2020-116329GB-C22, funded by MCIN/AEI/10.13039/501100011033.

How to use it

Requirements

The environment requires the OpenAI Gym, NumPy, and Pandas packages. The RL agents are provided by Stable-Baselines (version 2, which uses TensorFlow), and the scripts for plotting results use SciPy and Matplotlib. The following versions of these packages are known to work with the environment:

gym==0.15.3
numpy==1.19.1
pandas==0.25.2
stable-baselines==2.10.1
tensorflow==1.9.0
scipy==1.5.4
matplotlib==3.3.4

To run the NAF agent, Keras and Keras-RL are also required. The tested versions are:
Keras==2.2.1
keras-rl==0.4.2

It is recommended to install the above packages in a Python virtual environment.
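For example, a virtual environment with the pinned versions can be set up as follows (a sketch assuming a Unix-like shell; note that TensorFlow 1.9.0 requires an older Python interpreter, e.g. Python 3.6):

    python3 -m venv venv
    source venv/bin/activate
    pip install gym==0.15.3 numpy==1.19.1 pandas==0.25.2 stable-baselines==2.10.1 \
        tensorflow==1.9.0 scipy==1.5.4 matplotlib==3.3.4
    # optional, only needed for the NAF agent:
    pip install Keras==2.2.1 keras-rl==0.4.2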

Installation

  1. Clone or download the repository to your local machine

  2. Open a terminal window and (optionally) activate the virtual environment

  3. Go to the gym-ran_slice folder

  4. Once in the gym-ran_slice folder, run:

    pip install -e .
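To verify the installation, the following command should run without errors (assuming the package installs a module named gym_ran_slice; the actual module name may differ):

    python -c "import gym_ran_slice"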

Experiment scripts

There are four scripts for launching simulation experiments:

And four scripts for plotting results:

Figure 3 of the paper shows the performance curves for a scenario with 5 eMBB RAN slices, where KBRL attains the lowest rate of SLA violations while using fewer resources than model-free RL algorithms.

Project structure

The following files implement the environment:

The KBRL agent is implemented in:

The following files are required to build the experiments:

How to cite this work

The code of this repository:

@misc{net_slice,
  title = {Network slicing environment},
  author = {Juan J. Alcaraz},
  howpublished = {\url{https://github.com/jjalcaraz-upct/network-slicing/}},
  year = {2022}
}

The paper where KBRL was presented:

@article{alcaraz2022,
  author = {Alcaraz, Juan J. and Losilla, Fernando and Zanella, Andrea and Zorzi, Michele},
  title = {Model-Based Reinforcement Learning With Kernels for Resource Allocation in RAN Slices},
  journal = {IEEE Transactions on Wireless Communications},
  publisher = {IEEE},
  year = {2023},
  month = {1},
  volume = {22},
  pages = {486--501}
}

Licensing information

This code is released under the MIT license.