hlsafin closed this issue 5 months ago
You can refer to results.csv on the main page; the results in the README are from an older version. I've executed the bbf.ipynb file.
I have not replicated the results for all the games yet. Unfortunately, a single 4090 still takes a lot of time, since you have to train the model from scratch for every game and every seed, which takes around 6 hours on this GPU. And currently I'm not in the same city as the 4090.
I'm doing my bachelor's thesis on this network, though it is not going to be in English.
Is it possible to train environments with multiple GPUs in order to make the training process faster? I assume the bottleneck is the large network?
It's a pretty small network of 35M parameters. There is a certain bottleneck in the evaluation environments, which I did not know how to run in parallel.
Also, I do not have access to multi-GPU servers.
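For what it's worth, one common way to reduce the evaluation bottleneck on a single GPU is to step several evaluation environments in lockstep, so the network does one batched forward pass per step instead of one per environment. Below is a minimal, hypothetical sketch of that pattern; `DummyEnv` and `batched_policy` are illustrative stand-ins, not BBF's actual code.

```python
# Hypothetical sketch of vectorised evaluation: step N eval environments
# in lockstep and select all actions with a single batched policy call.
# DummyEnv and batched_policy are stand-ins for illustration only.

class DummyEnv:
    """Minimal stand-in for an Atari evaluation environment."""
    def __init__(self, seed, horizon=10):
        self.seed, self.horizon, self.t = seed, horizon, 0

    def reset(self):
        self.t = 0
        return float(self.seed)          # fake observation

    def step(self, action):
        self.t += 1
        reward = 1.0                     # fake per-step reward
        done = self.t >= self.horizon
        return float(self.seed), reward, done

def batched_policy(observations):
    """Pretend network: one forward pass for the whole batch of obs."""
    return [0 for _ in observations]     # always pick action 0

def evaluate(num_envs=4):
    envs = [DummyEnv(seed=i) for i in range(num_envs)]
    obs = [env.reset() for env in envs]
    returns = [0.0] * num_envs
    done = [False] * num_envs
    while not all(done):
        actions = batched_policy(obs)    # single batched call per step
        for i, env in enumerate(envs):
            if done[i]:
                continue                 # finished episodes idle
            obs[i], reward, done[i] = env.step(actions[i])
            returns[i] += reward
    return returns

print(evaluate())
```

With the dummy setup above each episode lasts 10 steps of reward 1.0, so `evaluate()` returns `[10.0, 10.0, 10.0, 10.0]`. In a real setting the payoff comes from `batched_policy` being a GPU forward pass, which amortises much better over a batch than over one observation at a time.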
okay thank you
Were you able to reproduce all the BBF results from the paper? I see a few of them posted on the GitHub page, but not all of them.