RLE-Foundation / rllte

Long-Term Evolution Project of Reinforcement Learning
https://docs.rllte.dev/
MIT License

[Bug]: RE3 crashes when self.k > self.idx #36

Open roger-creus opened 1 year ago

roger-creus commented 1 year ago

🐛 Bug

In the example from the documentation, RE3 crashes in re3.py at line 174 when self.k > self.idx. This can happen once the storage is full and self.idx wraps back to 0, leaving fewer than k + 1 (possibly zero) stored observations for the k-th nearest-neighbor lookup. This is the line:

intrinsic_rewards[:, i] = th.log(th.kthvalue(dist, self.k + 1, dim=1).values + 1.0)
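A minimal sketch of the failure mode, using hypothetical stand-in tensors rather than RE3's actual internals: when the ring-buffer index wraps to 0, the distance matrix built from the filled slice of storage has zero columns, so kthvalue has nothing to reduce. The guard shown at the end (clamping k to the number of available columns) is only one possible workaround, not the library's fix:

```python
import torch as th

k = 3
obs = th.randn(8, 16)           # a batch of encoded observations (made-up shapes)
empty_storage = th.randn(0, 16) # storage[:idx] with idx == 0 after wrap-around

dist = th.cdist(obs, empty_storage)  # shape (8, 0): zero-size reduction dim
try:
    th.kthvalue(dist, k + 1, dim=1)
except (IndexError, RuntimeError) as e:
    # reproduces the reported "Expected reduction dim 1 to have non-zero size"
    print(type(e).__name__, e)

# A possible guard: use only the valid rows and clamp k so that
# kthvalue always has enough elements to select from.
storage = th.randn(100, 16)
dist = th.cdist(obs, storage)
kth = min(k + 1, dist.size(1))
values = th.kthvalue(dist, kth, dim=1).values
rewards = th.log(values + 1.0)
print(tuple(rewards.shape))
```

The clamp keeps training alive, but a faithful fix would likely also skip the intrinsic-reward computation entirely until the storage holds at least k + 1 observations.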

To Reproduce

from rllte.agent import PPO
from rllte.env import make_envpool_atari_env 
from rllte.xplore.reward import RE3

if __name__ == "__main__":
    # env setup
    device = "cuda:0"
    env = make_envpool_atari_env(device=device, num_envs=8)
    # create agent
    agent = PPO(env=env, 
                device=device,
                tag="ppo_atari")
    # create intrinsic reward
    re3 = RE3(observation_space=env.observation_space,
              action_space=env.action_space,
              device=device,
              num_envs=8,
              storage_size=100)
    # set the module
    agent.set(reward=re3)
    # start training
    agent.train(num_train_steps=5000)

Relevant log output / Error message

IndexError: kthvalue(): Expected reduction dim 1 to have non-zero size.

System Info

No response

yuanmingqi commented 1 year ago

Thanks for your report! I will check it asap.