jpedro1992 / gym-fog

A custom OpenAI Gym environment for the simulation of a fog-cloud infrastructure.

Need help in implementation #1

Open mihir-agarwal0211 opened 2 years ago

mihir-agarwal0211 commented 2 years ago

Hey, I went through your paper "Resource Provisioning in Fog Computing through Deep Reinforcement Learning" and found it interesting. However, when I tried to implement it, I could not because of the lack of documentation. Kindly share any documentation or video you may have. Thank you!

jpedro1992 commented 2 years ago

Hi @mihir-agarwal0211,

Thank you for reaching out! What is your main question? Were you able to install the environment?

In this repo, you have a couple of gym-fog environments. To install them, you need to:

cd gym-fog
pip install -e .
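
As a quick check that the installation worked, you can create one of the environments (the same one used in the example further down) and print its spaces:

import gym

# Sanity check: build the large energy-efficiency environment
# (the 'name' label here is arbitrary, chosen just for this example)
env = gym.make('gym_fog:FogEnvEnergyEfficiencyLarge-v0', name="InstallCheck", number_users=1, dynamic=False)
print(env.action_space)
print(env.observation_space)
env.close()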

Regarding the RL agent itself, you can use the one you prefer :). In the paper we used DDQN, but in this repo we only open-sourced the environment, which is based on a fog computing architecture.

You can follow the tutorials from OpenAI Gym to deploy different RL agents. See this example.

Please let me know if you have further questions.

mihir-agarwal0211 commented 2 years ago

Hi @jpedro1992, Thank you for your reply.

I ran the example link you provided for OpenAI Gym, and it helped bring things into perspective. I was running the files on Google Colab and had trouble figuring out which file to run. Can you please give more information on the implementation instructions?

mihir-agarwal0211 commented 2 years ago

In the fog_env_energy_efficiency_small.py file, you are importing plotting from rl.util. However, I cannot find any standard library by that name, and it is giving an error. Do you have another folder named rl?

jpedro1992 commented 2 years ago

Hi @mihir-agarwal0211,

Thanks for letting me know! I added that dependency to the repo. It should work fine now.

As I mentioned, you need to run it with your own RL agent/algorithm. Consider the following random agent as an example:

import gym

# Create the large energy-efficiency gym-fog environment
env = gym.make('gym_fog:FogEnvEnergyEfficiencyLarge-v0', name="RandomAgent", number_users=1, dynamic=False)

state = env.reset()

num_steps = 99
for s in range(num_steps + 1):
    print(f"step: {s} out of {num_steps}")

    # Sample a random action from the action space
    action = env.action_space.sample()

    # Apply the action and observe the transition
    state, reward, done, info = env.step(action)

    env.render()

    # Start a new episode when the current one ends
    if done:
        state = env.reset()

env.close()

mihir-agarwal0211 commented 2 years ago

Thanks a lot for adding the dependency.

I understand that I have to use my own RL algorithm, but can you please tell me in which file I can add my own RL agent? Do I need to add it to the __init__.py in rl?

I am currently facing this error: CPLEX Error 1016: Community Edition. Problem size limits exceeded. Purchase at http://ibm.biz/error1016. Do I need to purchase IBM CPLEX, or can it be solved by decreasing the problem size?

jpedro1992 commented 2 years ago

You can even create a different folder with your own file. You just need to add the openai-gym dependency and create an environment based on gym-fog.
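
For example, a minimal skeleton for such a file might look like this (just a sketch: the MyAgent class is a placeholder you would swap for DDQN or any other algorithm, and it assumes the classic gym step API returning state, reward, done, info):

import gym

class MyAgent:
    # Placeholder agent: replace these methods with a real algorithm (e.g. DDQN)
    def __init__(self, action_space):
        self.action_space = action_space

    def act(self, state):
        # Replace with the learned policy
        return self.action_space.sample()

    def learn(self, state, action, reward, next_state, done):
        # Replace with the algorithm's update rule
        pass

env = gym.make('gym_fog:FogEnvEnergyEfficiencyLarge-v0', name="MyAgent", number_users=1, dynamic=False)
agent = MyAgent(env.action_space)

for episode in range(10):
    state = env.reset()
    done = False
    while not done:
        action = agent.act(state)
        next_state, reward, done, info = env.step(action)
        agent.learn(state, action, reward, next_state, done)
        state = next_state

env.close()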

Regarding CPLEX, indeed you need the full version to run the current environments, since they both create a lot of variables. Otherwise, you need to reduce the complexity of the model in terms of the total number of variables.
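
If you prefer to shrink the problem, you could also try the small environment from fog_env_energy_efficiency_small.py instead of the large one (the id below is a guess based on the large one's naming; check the registration in the repo for the exact string):

import gym

# Assumed id: mirrors the 'Large' registration; verify against the repo
env = gym.make('gym_fog:FogEnvEnergyEfficiencySmall-v0', name="RandomAgent", number_users=1, dynamic=False)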

Btw, you can get the full version of CPLEX for free if you are a student.