lunathanael opened 1 month ago
Sorry for the late response.
Regarding the action space: I noticed that in your full configuration, `action_space_size` is set to 2560. For an MCTS+RL algorithm, this value is quite large: even in well-studied Atari games, the largest discrete action space has only 18 actions. Although MuZero-series algorithms can manage higher-dimensional discrete action spaces, 2560 is still excessive, and the current poor performance is likely due to this. Simplifying the action space should improve efficiency.
Regarding the occurrence of illegal actions, I'm puzzled, as Tetris should have a fixed action space. This might be related to issues in your environment implementation, so I suggest checking the relevant code.
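One common way to rule out mask-handling bugs is to force the logits of illegal actions to negative infinity before the softmax, so those actions receive exactly zero probability. A minimal NumPy sketch of this generic technique (not LightZero's internal code; function names are mine):

```python
import numpy as np

def masked_policy(logits: np.ndarray, action_mask: np.ndarray) -> np.ndarray:
    """Softmax over legal actions only.

    `action_mask` holds 1 for legal actions, 0 for illegal ones;
    illegal logits are set to -inf so exp() maps them to 0.
    """
    masked = np.where(action_mask.astype(bool), logits, -np.inf)
    # Numerically stable softmax: subtract the max finite logit.
    z = masked - masked[np.isfinite(masked)].max()
    exp = np.exp(z)  # exp(-inf) == 0 for the masked-out actions
    return exp / exp.sum()

probs = masked_policy(np.array([2.0, 1.0, 0.5, -1.0]),
                      np.array([1, 0, 1, 1]))
```

If sampled actions are ever illegal despite such masking, the mask itself is likely stale or misaligned with the environment state at the moment of sampling.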
Regarding the observation space: Are you currently using a vector form? Perhaps directly using raw images would be more appropriate, and employing our convolutional-ResNet-based representation network for processing might improve model performance.
Regarding the environment's reward: What level of reward is considered a good convergence state? During training, consider applying some form of normalization to the reward to help stabilize the training process.
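As one concrete option, an online running-statistics normalizer (Welford's algorithm) is a simple way to normalize rewards during collection; this is a generic sketch, not a LightZero utility:

```python
import math

class RunningRewardNorm:
    """Track a running mean/variance of rewards (Welford's algorithm)
    and return each reward normalized by the statistics seen so far."""

    def __init__(self, eps: float = 1e-8):
        self.count, self.mean, self.m2, self.eps = 0, 0.0, 0.0, eps

    def update(self, r: float) -> float:
        self.count += 1
        delta = r - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (r - self.mean)
        std = math.sqrt(self.m2 / max(self.count - 1, 1)) + self.eps
        return (r - self.mean) / std
```

Simple alternatives, such as clipping rewards to a fixed range or dividing by a known maximum, can work just as well if the reward scale is known in advance.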
Regarding poor performance: Besides the aforementioned MDP design, there might also be issues with the balance between exploration and exploitation. First, confirm whether it's due to insufficient exploration, meaning effective trajectories are not being collected, or insufficient exploitation, meaning good trajectories are collected but the policy/value network fails to effectively learn. You can analyze this by monitoring and printing the policy and some key frames.
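One cheap diagnostic along these lines is the entropy of the root visit-count distribution from MCTS: near `log(num_actions)` means the search is still close to uniform (exploring), near zero means it has collapsed onto a few actions. A small sketch (the function name is mine, not from the library):

```python
import numpy as np

def policy_entropy(visit_counts: np.ndarray) -> float:
    """Entropy of the normalized visit-count distribution, a rough
    signal for the exploration/exploitation balance."""
    p = visit_counts / visit_counts.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-(p * np.log(p)).sum())
```

Logging this value alongside episode return makes it easier to tell "never found good trajectories" apart from "found them but failed to learn them".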
If possible, you can submit a PR so we can compare and review your specific code for better discussion. You're also welcome to raise more related discussion questions. Thanks for your attention.
Thank you for the response!
I appreciate the notes and the helpful advice, and I'm grateful you took the time to share your insight. I understand that little can be diagnosed through a GitHub issue without any code, so I will submit a PR soon and ask for your advice again. I do wonder, however, whether MuZero-series algorithms are the right direction for such an application. I am always open to more suggestions for improving my current setup. Thank you again!
Regarding the action space, I recommend employing the most fundamental five discrete actions: rotate, move left, move right, move down, and drop to the bottom. When Tetris is modeled as a Markov Decision Process (MDP) environment, everything is deterministic except for the randomness introduced when a new piece appears at the top of the screen. These five discrete actions completely and accurately encapsulate the game mechanics, serving as the simplest and most comprehensive choice for reinforcement learning algorithms.
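Concretely, the five actions could be enumerated like this (illustrative names only; map them to your environment's own action ids):

```python
from enum import IntEnum

class TetrisAction(IntEnum):
    """The five primitive moves suggested above; the integer values
    double as discrete action ids (hypothetical naming, not an API)."""
    ROTATE = 0
    MOVE_LEFT = 1
    MOVE_RIGHT = 2
    MOVE_DOWN = 3
    HARD_DROP = 4

ACTION_SPACE_SIZE = len(TetrisAction)  # 5, versus the 2560 placement encoding
```

With this formulation the action mask is trivial (almost every move is always legal), which sidesteps the illegal-action warnings entirely.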
If you can obtain the environment state in vector form and use it as observations (obs), this approach is highly reasonable. You can first debug under this setup and then expand to image-based observations (image obs).
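If it helps, a vector observation can be as simple as flattening the binary board and concatenating one-hot piece ids; a minimal sketch (the field layout and sizes here are illustrative, not your env's actual format):

```python
import numpy as np

def encode_obs(board: np.ndarray, current_piece: int, held_piece: int,
               num_piece_types: int = 7) -> np.ndarray:
    """Flatten a binary board plus one-hot piece ids into one float
    vector, suitable as input to an MLP representation network."""
    cur = np.eye(num_piece_types)[current_piece]
    held = np.eye(num_piece_types)[held_piece]
    return np.concatenate([board.ravel().astype(np.float32),
                           cur.astype(np.float32),
                           held.astype(np.float32)])

# An 8x10 empty board with piece ids 2 (current) and 5 (held):
obs = encode_obs(np.zeros((8, 10)), current_piece=2, held_piece=5)
```

The same board tensor, kept 2-D instead of flattened, is what you would later feed to an image-style convolutional representation network.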
Opting to use the MuZero-style algorithm to address the Tetris problem is a prudent decision, as it effectively manages complex and dynamic environments and facilitates long-term planning.
You can deselect the two options in the top left corner of TensorBoard to fully display the original curves for easier analysis. Additionally, capturing the TensorBoard curves related to the collector can provide further insights.
As for why the current performance is akin to a random policy, I suspect this may relate to the handling of observations (obs) and action masks (action_mask). The current average episode length during evaluation is approximately 18, suggesting that the algorithm has not effectively learned to clear lines. This might be because the collector has not gathered high-quality episodes, leaving the policy/value network nothing effective to learn from. I would also like to confirm whether the implementation related to *botris_5move* has adopted the five-dimensional discrete action space described above, and whether it is currently a single-player environment.
Best wishes!
Hello! I'm trying to apply the models and algorithms in this library to Tetris: specifically multiplayer Tetris, where players compete to clear lines efficiently and send as many lines as possible to their opponents. Currently, I am developing a simple bot that goes beyond placing tetrominoes randomly.
Here's what I have:
- An environment, modeled after `atari` and `game_2048`, that allows models to interact and train successfully.
- A modified reward system that incentivizes placing more blocks, with extra emphasis on any lines cleared.
- A config file for EfficientZero, with over 48 single-GPU hours of training.
Here's some context on the environment: the observation is a 10-column × 8-row one-hot encoded board, stacked with additional one-hot encoded information such as the current piece, the pieces in the queue, and the held piece. Each move is encoded as the coordinate of the piece placement, the type of the piece placed, and its rotation, for a one-hot encoded action space of size 2560. The input size is 144. Currently the model uses an MLP. It is worth noting that even after many training iterations, the console still warns that many illegal moves are being attempted, despite the action mask being provided for the varying action space. It seems the model may not be able to correctly learn the legal actions.
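For reference, a flat placement-action id can be packed and unpacked with simple mixed-radix arithmetic. The dimensions below (10 × 8 × 8 × 4 = 2560) are only one guess at how 2560 factors in my encoding and are purely illustrative:

```python
# Hypothetical factorization of the 2560-way placement action space:
# 10 columns x 8 rows x 8 piece types x 4 rotations (a guess, for illustration).
COLS, ROWS, PIECES, ROTS = 10, 8, 8, 4

def encode_action(col: int, row: int, piece: int, rot: int) -> int:
    """Pack (col, row, piece, rot) into a single flat action id."""
    return ((col * ROWS + row) * PIECES + piece) * ROTS + rot

def decode_action(a: int) -> tuple:
    """Unpack a flat action id back into (col, row, piece, rot)."""
    a, rot = divmod(a, ROTS)
    a, piece = divmod(a, PIECES)
    col, row = divmod(a, ROWS)
    return col, row, piece, rot
```

With an encoding like this, most of the 2560 ids are illegal in any given state, which is consistent with the illegal-move warnings above.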
I've also done a small amount of testing using an action space of 10 instead, and some more with reanalyze set to 0.25, etc. I am always open to trying anything, as I just want to get something working :).
Let me know if there is any more information or resources or context I can provide to facilitate my learning process.
Here are some graphs from the training.
Here is the `total_config.py` file for the run: