werner-duvaud / muzero-general

MuZero
https://github.com/werner-duvaud/muzero-general/wiki/MuZero-Documentation
MIT License

Loss: nan #40

Closed. guskal01 closed this issue 4 years ago

guskal01 commented 4 years ago

I often get this error after training for a few hours. It has happened in all the games I've tried (though I've only tried two-player games). The error message below is from tictactoe. If this only happens to me, it might have something to do with my low ratio of self-played games per training step. Training continues, but self-play stops after the error.

2020-04-27 19:00:35,290 ERROR worker.py:1011 -- Possible unhandled error from worker: ray::ReplayBuffer.get_batch() (pid=10953, ip=192.168.0.113)
  File "python/ray/_raylet.pyx", line 452, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 407, in ray._raylet.execute_task.function_executor
  File "/home/gustav/Desktop/muzero-general/replay_buffer.py", line 74, in get_batch
    game_id, game_history, game_prob = self.sample_game(self.buffer)
  File "/home/gustav/Desktop/muzero-general/replay_buffer.py", line 133, in sample_game
    game_index = numpy.random.choice(len(self.buffer), p=game_probs)
  File "mtrand.pyx", line 920, in numpy.random.mtrand.RandomState.choice
ValueError: probabilities contain NaN
2020-04-27 19:00:35,290 ERROR worker.py:1011 -- Possible unhandled error from worker: ray::Trainer.continuous_update_weights() (pid=10952, ip=192.168.0.113)
  File "python/ray/_raylet.pyx", line 452, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 407, in ray._raylet.execute_task.function_executor
  File "/home/gustav/Desktop/muzero-general/trainer.py", line 56, in continuous_update_weights
    index_batch, batch = ray.get(replay_buffer.get_batch.remote(self.model.get_weights()))
ray.exceptions.RayTaskError(ValueError): ray::ReplayBuffer.get_batch() (pid=10953, ip=192.168.0.113)
  File "python/ray/_raylet.pyx", line 452, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 407, in ray._raylet.execute_task.function_executor
  File "/home/gustav/Desktop/muzero-general/replay_buffer.py", line 74, in get_batch
    game_id, game_history, game_prob = self.sample_game(self.buffer)
  File "/home/gustav/Desktop/muzero-general/replay_buffer.py", line 133, in sample_game
    game_index = numpy.random.choice(len(self.buffer), p=game_probs)
  File "mtrand.pyx", line 920, in numpy.random.mtrand.RandomState.choice
ValueError: probabilities contain NaN
Last test reward: 20.00. Training step: 87849/100000. Played games: 25959. Loss: nan
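
For context, the ValueError itself is numpy.random.choice refusing a probability vector that contains NaN, so a single NaN priority poisons the whole distribution once it is normalized. A minimal sketch with made-up priorities (not the actual buffer contents) reproduces the same message:

import numpy

# Hypothetical per-game priorities; one NaN (e.g. written back from a NaN loss)
# turns every normalized probability into NaN.
game_priorities = numpy.array([0.3, float("nan"), 0.5])
game_probs = game_priorities / numpy.sum(game_priorities)

try:
    numpy.random.choice(len(game_probs), p=game_probs)
except ValueError as error:
    print(error)  # "probabilities contain NaN"
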
werner-duvaud commented 4 years ago

Hi,

It doesn't seem to be happening to me, but I'll try to let it run for a while.

It's hard to debug; do you have any clue what could be causing this?

It's possible that too much overfitting is making the loss numerically unstable, but it could also be something else.

Have you changed the configuration? A learning rate that is too large or too small (because of the decay?) can also cause this.
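
One way to narrow this down is to fail fast on the first non-finite loss instead of letting NaN values reach the replay buffer priorities. A minimal sketch, assuming a PyTorch training loop like the one in trainer.py (the helper below is an assumption, not code from the repository):

import torch

def assert_finite_loss(loss: torch.Tensor, training_step: int) -> None:
    # Stop at the first NaN/inf loss so the instability is caught
    # before it spreads to the sampling probabilities.
    if not torch.isfinite(loss).all():
        raise RuntimeError(
            f"Non-finite loss {loss.item()} at training step {training_step}"
        )

Called right after the loss is computed, this turns a silent NaN into an immediate, traceable failure.
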

guskal01 commented 4 years ago

If I load the same network and continue training, I get the same error almost immediately. By doing this I traced the NaN values to https://github.com/werner-duvaud/muzero-general/blob/af199b8e94f9fb19c8c328b468911f924e176c21/models.py#L444. observation doesn't contain any NaNs, but encoded_state is filled entirely with NaN. It looks to me like the network itself is broken somehow?

I used the default config while training. I can send you the model if that would help.
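
A generic way to locate the first layer whose output turns to NaN (a PyTorch debugging sketch, not part of muzero-general) is to register a forward hook on every submodule:

import torch

def report_nan_outputs(model: torch.nn.Module) -> None:
    # Attach a forward hook to each submodule and report any module whose
    # output contains NaN, which points at where the corruption starts.
    def make_hook(name: str):
        def hook(module, inputs, output):
            if isinstance(output, torch.Tensor) and torch.isnan(output).any():
                print(f"NaN in output of {name} ({module.__class__.__name__})")
        return hook

    for name, module in model.named_modules():
        module.register_forward_hook(make_hook(name))

Running a single forward pass on a saved observation then shows whether the NaN appears in the representation network itself or only later, e.g. in the state normalization.
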

werner-duvaud commented 4 years ago

@guskal01 That would be great: duvaudwerner@yahoo.fr. Thanks!

guskal01 commented 4 years ago

Sent. To reproduce, start muzero.py, choose tictactoe, load pretrained model, start training.

I also included the tfevents files in case you want to see whether something strange happened during training.

werner-duvaud commented 4 years ago

Hi,

Thanks

I started from this commit. I loaded the weights and had no errors; training continued normally for at least 1000 self-played games. I tested on Ubuntu and Mac, and it seems to work fine. I also tried to set approximately the same self-play / training ratio, and that worked well too.

Are you using the latest versions of Ray and PyTorch?

I updated the normalization in the latest commit; it doesn't change anything for me, but maybe it will fix the error for you.
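
For context, the normalization in question is the min-max scaling of the hidden state: dividing by (max - min) yields NaN or inf as soon as a sample's encoded state is constant. A sketch of an epsilon-guarded version (the exact handling in the commit may differ):

import torch

def normalize_encoded_state(encoded_state: torch.Tensor) -> torch.Tensor:
    # Min-max scale each sample of the hidden state to [0, 1]; the epsilon
    # guard avoids a 0/0 division when a sample's features are all equal,
    # which would otherwise fill the encoded state with NaN.
    flat = encoded_state.view(encoded_state.shape[0], -1)
    min_val = flat.min(dim=1, keepdim=True)[0]
    max_val = flat.max(dim=1, keepdim=True)[0]
    scale = max_val - min_val
    scale[scale < 1e-5] += 1e-5
    return ((flat - min_val) / scale).view_as(encoded_state)
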

guskal01 commented 4 years ago

Thanks for the help, and sorry for the very late reply. I haven't had time to let it run for very long yet, but so far I haven't had any problems with the latest commit. I'll post another comment if the problem returns.

guskal01 commented 4 years ago

I've now left it running for a long time without any problems, so it seems to be fixed.