opendilab / LightZero

[NeurIPS 2023 Spotlight] LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios (awesome MCTS)
https://huggingface.co/spaces/OpenDILabCommunity/ZeroPal
Apache License 2.0

How to use EfficientZero for board games #202

Closed drblallo closed 7 months ago

drblallo commented 8 months ago

Hi,

I am trying to understand how to use the LightZero framework. I have been able to use both AlphaZero and MuZero to run TicTacToe, and EfficientZero to run the memory environment, but I don't understand how one is supposed to use EfficientZero for board games.

I tried to edit the memory configuration file, porting the settings over from the TicTacToe one, but the program fails in various ways.

Is there anything fundamental that prevents using the TicTacToe env with the EfficientZero algorithm, or is it just a matter of understanding the exact impact of every parameter within a configuration file?

puyuan1996 commented 8 months ago

Greetings, EfficientZero is primarily designed to enhance sample efficiency in environments with image-based inputs. As board games typically do not rely on image inputs, the performance gains from employing EfficientZero in such contexts might not be particularly significant, which is why we have not previously provided configurations for board games. However, if you are interested in exploring the performance of EfficientZero in board game settings, we have now provided a configuration example for TicTacToe in https://github.com/opendilab/LightZero/pull/204. Should you have any questions or wish to engage in further discussion, please feel free to reach out to us at any time.
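For reference, below is a minimal sketch of what such a configuration might look like, modelled on the existing TicTacToe MuZero bot-mode config. The exact keys and values are assumptions and may differ from the config added in the PR above.

```python
# Rough sketch only: key names follow the TicTacToe MuZero config and the
# standard LightZero layout; they may differ from the config in PR #204.
from easydict import EasyDict

main_config = EasyDict(dict(
    exp_name='data_ez/tictactoe_efficientzero_bot_mode_seed0',
    env=dict(
        battle_mode='play_with_bot_mode',  # train against the built-in bot
        collector_env_num=8,
        evaluator_env_num=5,
        n_evaluator_episode=5,
        manager=dict(shared_memory=False),
    ),
    policy=dict(
        model=dict(
            observation_shape=(3, 3, 3),   # three 3x3 planes of the board
            action_space_size=9,
        ),
        cuda=True,
        num_simulations=25,
        batch_size=256,
    ),
))

create_config = EasyDict(dict(
    env=dict(
        type='tictactoe',
        import_names=['zoo.board_games.tictactoe.envs.tictactoe_env'],
    ),
    env_manager=dict(type='subprocess'),
    policy=dict(
        type='efficientzero',
        import_names=['lzero.policy.efficientzero'],
    ),
))

if __name__ == "__main__":
    # EfficientZero configs in the zoo are launched through the same entry
    # point as MuZero (an assumption here; check the PR for the exact invocation).
    from lzero.entry import train_muzero
    train_muzero([main_config, create_config], seed=0)
```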

drblallo commented 8 months ago

I see. If I can manage to collect some data, I will share it.

In the meantime, I have been trying to understand the various configuration options. I am not sure this is a bug, but it does look like one to me.

https://github.com/opendilab/LightZero/blob/29c9afd4dc631ff568984ddeacca3d4ce4e29065/lzero/policy/muzero.py#L344

The phi_transform is applied regardless of whether MuZero is initialized with categorical rewards or not. This means that when MuZero is run with categorical rewards turned off, it fails when processing the reward. I turned them off by passing categorical_distribution=False to the model dict in the bot-mode TicTacToe config file.

Screenshot_2024-03-27_18-11-04

Maybe I am missing something about how to use them. It is also unclear to me why one should prefer categorical rewards when the reward is a single float.
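For concreteness, this is roughly how the flag was toggled (an illustrative fragment; the variable name and the remaining keys of the bot-mode TicTacToe config are placeholders here):

```python
# Illustrative fragment only; all other keys keep the values of the
# bot-mode TicTacToe config.
model_config = dict(
    observation_shape=(3, 3, 3),
    action_space_size=9,
    # With categorical_distribution=False the value/reward heads are expected
    # to regress a scalar, so the categorical phi_transform step should not apply.
    categorical_distribution=False,
)
```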

drblallo commented 8 months ago

Furthermore, I tried changing this line of code in the TicTacToe env https://github.com/opendilab/LightZero/blob/29c9afd4dc631ff568984ddeacca3d4ce4e29065/zoo/board_games/tictactoe/envs/tictactoe_env.py#L329 to

reward = np.array(float(winner == -1)).astype(np.float32)

with the intention of seeing how long it would take MuZero to learn to always aim for a draw, in the vs-bot version of the setup.

When I did so, it learned something, but after 153,000 steps and 90 minutes of training it had not managed to learn this perfectly. Is this intended? I understand that MuZero is a complex model, but this should not be particularly harder to learn than the always-winning objective.

Screenshot_2024-03-27_21-10-35

Screenshot_2024-03-27_21-16-18

puyuan1996 commented 8 months ago

Maybe I am missing something about how to use them. It is also unclear to me why one should prefer categorical rewards when the reward is a single float.

You can find a detailed analysis in the following papers: "Improving Regression Performance with Distributional Losses" (ICML 2018), "Observe and Look Further: Achieving Consistent Performance on Atari" (2018), and "Stop Regressing: Training Value Functions via Classification for Scalable Deep RL" (2024). These studies indicate that the primary advantage of adopting a categorical distribution is the ability to maintain more stable gradients in the face of noisy target variables and non-constant characteristics. Such stability is a key factor for performance and scalability, which is why LightZero has this option enabled by default.
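To make the idea concrete, here is a minimal, self-contained sketch (not LightZero's implementation) of the scalar-to-categorical "two-hot" projection and the cross-entropy loss that replaces plain MSE regression; in the MuZero-style pipeline an invertible scaling transform is typically applied to the scalar before this projection.

```python
# Minimal sketch of distributional value/reward targets (illustrative only).
import torch

def scalar_to_two_hot(x: torch.Tensor, support_min: int = -300, support_max: int = 300) -> torch.Tensor:
    """Project scalar targets onto an integer support as a two-hot distribution."""
    support_size = support_max - support_min + 1
    x = x.clamp(support_min, support_max)
    low = x.floor()
    upper_weight = x - low                               # mass on the upper neighbouring bin
    low_idx = (low - support_min).long()
    high_idx = (low_idx + 1).clamp(max=support_size - 1)
    dist = torch.zeros(*x.shape, support_size)
    dist.scatter_(-1, low_idx.unsqueeze(-1), (1.0 - upper_weight).unsqueeze(-1))
    dist.scatter_add_(-1, high_idx.unsqueeze(-1), upper_weight.unsqueeze(-1))
    return dist

# Cross-entropy between the head's logits and the two-hot target: the gradient
# is bounded like a classification loss, even when the scalar targets are noisy.
target = scalar_to_two_hot(torch.tensor([1.7, -0.3]))    # shape (2, 601)
logits = torch.randn(2, 601)                             # e.g. reward-head output
loss = -(target * torch.log_softmax(logits, dim=-1)).sum(-1).mean()
```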

puyuan1996 commented 8 months ago

but this should not be particularly harder to learn than the always-winning objective.

Hello, could you please provide the configuration file for your agent as well as the complete TensorBoard log files? This would be beneficial for our in-depth analysis. Additionally, it is advisable to save some replay data from the training process, so that we can observe the learning behaviors and evolution of the agent.