Open codetiger opened 7 years ago
Hi, you are welcome. Actually, the agent didn't score much, and no 2048 tile was achieved. You can look at the results: https://github.com/gorgitko/MI-MVI_2016/tree/master/results It seems random play is very close to the agent's play :/
Anyway, I think the 2048 game is not very suitable for RL because of its large state space.
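To give a rough sense of why the state space is a problem, here is a back-of-the-envelope count (my own estimate, not from either repo), assuming each cell is empty or holds a power of two from 2 up to 2^17 = 131072, the largest tile reachable on a 4x4 board:

```python
# Crude upper bound on the number of distinct board states.
# Each cell has ~18 possible values: empty, or 2, 4, ..., 131072.
values_per_cell = 18

full_states = values_per_cell ** 16   # 4x4 grid
small_states = values_per_cell ** 4   # 2x2 grid, for comparison

print(f"4x4 upper bound: {full_states:.2e}")   # on the order of 1e20
print(f"2x2 upper bound: {small_states}")      # 104976
```

The bound is loose (many of those boards are unreachable), but it shows why a tabular approach is hopeless on the 4x4 grid while the 2x2 grid is trivially small.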
Hi, I added some optimization techniques to my agent and got good results.
The agent was trained for 100K episodes on a 2x2 grid and achieved the optimal move 100% of the time. However, I did not have enough patience to train the agent on a 4x4 grid. https://github.com/codetiger/MachineLearning-2048
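For anyone curious what training on a 2x2 board looks like, here is a minimal self-contained sketch of tabular Q-learning on a 2x2 2048 variant. This is not the code from either linked repo; the environment rules (left/right/up/down moves, merge reward equal to the new tile, 90%/10% spawn of 2/4) and all hyperparameters are my own assumptions:

```python
import random
from collections import defaultdict

def slide_pair(a, b):
    """Slide two cells toward the first position, merging equal tiles.
    Returns ((new_a, new_b), reward)."""
    vals = [v for v in (a, b) if v]
    if len(vals) == 2 and vals[0] == vals[1]:
        return (vals[0] * 2, 0), vals[0] * 2
    vals += [0] * (2 - len(vals))
    return (vals[0], vals[1]), 0

def move(board, action):
    """Apply action (0=left, 1=right, 2=up, 3=down) to a 2x2 board
    stored row-major as a 4-tuple. Returns (new_board, reward)."""
    a, b, c, d = board
    if action == 0:      # left: slide each row toward column 0
        (a, b), r1 = slide_pair(a, b); (c, d), r2 = slide_pair(c, d)
    elif action == 1:    # right
        (b, a), r1 = slide_pair(b, a); (d, c), r2 = slide_pair(d, c)
    elif action == 2:    # up: slide each column toward row 0
        (a, c), r1 = slide_pair(a, c); (b, d), r2 = slide_pair(b, d)
    else:                # down
        (c, a), r1 = slide_pair(c, a); (d, b), r2 = slide_pair(d, b)
    return (a, b, c, d), r1 + r2

def spawn(board, rng):
    """Place a 2 (90%) or 4 (10%) in a random empty cell."""
    empties = [i for i, v in enumerate(board) if v == 0]
    cell = rng.choice(empties)
    b = list(board)
    b[cell] = 4 if rng.random() < 0.1 else 2
    return tuple(b)

def legal_actions(board):
    # A move is legal only if it actually changes the board.
    return [a for a in range(4) if move(board, a)[0] != board]

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)                 # Q[(state, action)] -> value
    for _ in range(episodes):
        s = spawn(spawn((0, 0, 0, 0), rng), rng)
        while True:
            acts = legal_actions(s)
            if not acts:                   # no legal move: game over
                break
            if rng.random() < eps:         # epsilon-greedy exploration
                a = rng.choice(acts)
            else:
                a = max(acts, key=lambda x: Q[(s, x)])
            s2, r = move(s, a)
            s2 = spawn(s2, rng)
            nxt = legal_actions(s2)
            target = r + gamma * max((Q[(s2, x)] for x in nxt), default=0.0)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

Q = train()
print(f"states/actions visited: {len(Q)}")
```

With only ~100K possible 2x2 boards, a table like this converges quickly, which is consistent with reaching 100% optimal moves there while the 4x4 grid needs function approximation instead.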
Nice! I will check it.
Hi, thanks for sharing your work. Just out of curiosity, how much did your agent score?