Closed. light3317 closed this issue 5 years ago.
It will keep playing the same way according to the model.
If you want it to improve, you should generate more training data and retrain the network.
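A minimal sketch of what "generate more training data and retrain" could look like, assuming you bolt it on yourself (the names `ExperienceLogger` and `retrain` are hypothetical, not part of this project, and the update rule is a toy placeholder for a real gradient step on the network):

```python
# Hypothetical sketch: log new hands, then periodically retrain.
# Nothing here is provided by the project itself.

from collections import deque

class ExperienceLogger:
    """Collects (state, target) pairs from new games for later retraining."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # drop oldest samples when full

    def log(self, state, target):
        self.buffer.append((state, target))

    def ready(self, batch_size=32):
        return len(self.buffer) >= batch_size

def retrain(weight, batch, lr=0.01):
    """Toy placeholder update: nudge a single scalar weight toward the
    mean target in the batch. A real setup would instead run gradient
    descent on the network's parameters."""
    mean_target = sum(t for _, t in batch) / len(batch)
    return weight + lr * (mean_target - weight)

logger = ExperienceLogger()
for i in range(40):
    logger.log(state=i, target=1.0)  # pretend each new hand yields a target

weight = 0.0
if logger.ready():
    weight = retrain(weight, list(logger.buffer))
print(round(weight, 3))  # weight has moved toward 1.0
```

The point is only the shape of the loop: collect data during play, then trigger a training pass once enough has accumulated. Until that retraining pass runs, the bot keeps playing the frozen model.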
I'd add, FWIW, that this is one of the differences between DeepStack and Libratus. There was a good website explaining this in plain English that I can't find now, but the Libratus paper (PDF) certainly discusses it. Libratus used a lot of computing power (a supercomputer, described in the paper) to patch its strategy in real time, and it continued patching overnight as well.
Yeah, it would have been interesting to see them play a match; unfortunately they never got the chance.
Just wondering: after the model was chosen, when playing against the bot, will it automatically log new game data and adjust its play, or will it keep playing the same way according to the model (unless we compile new data and manually retrain it)?