intelligent-environments-lab / CityLearn

Official reinforcement learning environment for demand response and load shaping

[BUG] custom_module not found when trying to update scheme after defining CustomReward class #58

Closed: lijiayi9712 closed this issue 1 year ago

lijiayi9712 commented 1 year ago

Issue Description

After defining my own reward function and updating the schema in the source code following this link: https://www.citylearn.net/overview/reward_function.html?highlight=custom_module

I keep getting a "ModuleNotFoundError: No module named 'custom_module'" error when defining env = CityLearnEnv(dataset_name, central_agent=True, simulation_end_time_step=WINDOW*14).
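For context, CityLearn resolves the schema's "type" string into a class at runtime, so the failure looks like a plain dynamic-import error. A minimal sketch of what goes wrong (assuming an importlib-style lookup, the usual pattern for resolving "module.Class" strings; the variable names here are illustrative):

import importlib

# "type" value from the modified schema.json
type_path = 'custom_module.CustomReward'
module_name, class_name = type_path.rsplit('.', 1)

try:
    module = importlib.import_module(module_name)
    reward_class = getattr(module, class_name)
except ModuleNotFoundError as e:
    # This is the reported error: Python finds no importable module
    # named 'custom_module' anywhere on sys.path.
    print(e)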

Steps to Reproduce

After following the above link, I run:

dataset_name = 'citylearn_challenge_2022_phase_1'
WINDOW = 24
env = CityLearnEnv(dataset_name, central_agent=True, simulation_end_time_step=WINDOW*14)

This raises the error above.


kingsleynweye commented 1 year ago

@lijiayi9712 please can I see your file tree structure, the custom reward file, and how you updated the schema, so I can track down what the issue might be?

lijiayi9712 commented 1 year ago

Hi Kingsley, thanks for getting back to me! I didn't create a custom reward file; I defined the custom reward class directly in an IPython notebook. For the schema modification, I edited the schema file "data/citylearn_challenge_2022_phase_1/schema.json" directly and changed the corresponding part to:

{
    ...,
    "reward_function": {
        "type": "custom_module.CustomReward",
        ...
    },
    ...
}

Should I create a custom reward file instead? Do you mind showing me how to create the file? Thanks!

kingsleynweye commented 1 year ago

@lijiayi9712 do you mind if I have a look at your notebook?

lijiayi9712 commented 1 year ago

Sure, should I take a screenshot?

This is how I tried to define the CustomReward class:

from typing import List
from citylearn.citylearn import CityLearnEnv
from citylearn.reward_function import RewardFunction

class CustomReward(RewardFunction):
    """Calculates custom user-defined multi-agent reward.

    Reward is the :py:attr:`net_electricity_consumption_emission`
    for entire district if central agent setup otherwise it is the
    :py:attr:`net_electricity_consumption_emission` each building.

    Parameters
    ----------
    env: citylearn.citylearn.CityLearnEnv
        CityLearn environment.
    """

    def __init__(self, env: CityLearnEnv):
        super().__init__(env)

    def calculate(self) -> List[float]:
        # Note: -1**3 parses as -(1**3) == -1, so the expression below
        # just negates consumption before clipping each reward at zero.
        reward_list = [min(b.net_electricity_consumption[b.time_step]*-1**3, 0) for b in self.env.buildings]

        if self.env.central_agent:
            reward = [sum(reward_list)]
        else:
            reward = reward_list

        return reward

dataset_name = 'citylearn_challenge_2022_phase_2'
WINDOW = 24
env = CityLearnEnv(dataset_name, central_agent=True, simulation_end_time_step=WINDOW*14)
model = RLAgent(env)  # RLAgent: the notebook's agent class, defined elsewhere
model.learn(episodes=2, deterministic_finish=True)

kingsleynweye commented 1 year ago

@lijiayi9712 the schema approach only works when the reward is defined in a separate module, so that the schema can point to that module's import path.
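
For example, a minimal sketch of that approach (assuming you save the class below as custom_module.py in your working directory, or anywhere else on sys.path, so that "custom_module" is importable by name):

# custom_module.py
from typing import List
from citylearn.citylearn import CityLearnEnv
from citylearn.reward_function import RewardFunction

class CustomReward(RewardFunction):
    """Same reward as in your notebook, just living in its own module."""

    def __init__(self, env: CityLearnEnv):
        super().__init__(env)

    def calculate(self) -> List[float]:
        # Negative of consumption, clipped at zero per building
        reward_list = [min(-b.net_electricity_consumption[b.time_step], 0) for b in self.env.buildings]
        return [sum(reward_list)] if self.env.central_agent else reward_list

With that file in place, the "type": "custom_module.CustomReward" entry you added to the schema would resolve correctly.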

Since you have it defined in the same file you are running the simulation from, the way to go about it is:

dataset_name = 'citylearn_challenge_2022_phase_2'
WINDOW = 24
env = CityLearnEnv(dataset_name, central_agent=True, simulation_end_time_step=WINDOW*14)
env.reward_function = CustomReward(env) # update reward function after initializing environment
model = RLAgent(env)
model.learn(episodes=2, deterministic_finish=True)

So, revert the schema to its original reward_function definition and replace your code with the snippet above. Let me know if you run into any other issues.
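
For reference, reverting means restoring the stock entry, which points back at the built-in class (the exact attributes in your copy of schema.json may differ, so check the original file):

"reward_function": {
    "type": "citylearn.reward_function.RewardFunction"
}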

lijiayi9712 commented 1 year ago

Thank you!!! This works perfectly!!!