Closed: @DLPerf closed this issue 6 months ago.
However, `inference_device` is called here; is there any good way to handle that? @kimbring2
@DLPerf Hello, sorry for the late response, and thank you for sharing the good idea about tf.function. Actually, I have to train the model on the CPU because of a constant memory leak.
I could not solve that issue before, but it seems that moving the tf.function inside the function could fix it.
Actually, I just borrowed the training code from the Google SEED RL project, so I am not sure I can change the code as you advise.
Luckily, I can train the agent without a GPU because the Dota2 observation does not contain any image frames.
@DLPerf Anyway, were you able to run the Dota2 environment successfully? I have only tested it on my own workspace, so I am not sure the environment works well in other workspaces.
I also found that a recent Dota2 update raises an issue in the gRPC part of Dotaservice. That is why I uploaded the previous Dota2 client version to Google Drive.
Hello! Our static bug checker has found a performance issue in dota2/omninight/learner_dota.py: `create_host` is repeatedly called in a for loop, but the tf.function-decorated functions `inference` and `agent_inference` are defined and called inside `create_host`. In that case, when `create_host` is called in a loop, `inference` and `agent_inference` will create a new graph every time, which can trigger a tf.function retracing warning.

Similar issues in: dota2/shadowfiend/learner_dota.py, dr-derks-mutant-battlegrounds/learner_2.py and dr-derks-mutant-battlegrounds/learner_1.py.
Here is the TensorFlow documentation to support this. Briefly, for better efficiency, it is better to define the tf.function-decorated functions once, outside the repeatedly called function, than to redefine them on every call.
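The two patterns can be sketched as follows. This is a minimal illustration assuming TensorFlow 2.x; the names `create_host_bad`, `create_host_good`, and the toy `inference` body are hypothetical stand-ins for the functions in learner_dota.py:

```python
import tensorflow as tf

# Counts how many times each variant is traced. Python code inside a
# tf.function body runs only while TensorFlow is tracing a new graph,
# so these counters count graph builds, not calls.
trace_counts = {"bad": 0, "good": 0}

# Worse: the tf.function is redefined on every call to create_host_bad,
# so TensorFlow traces a brand-new graph each time (hypothetical sketch).
def create_host_bad(x):
    @tf.function
    def inference(v):
        trace_counts["bad"] += 1  # runs once per trace
        return v * 2.0
    return inference(x)

# Better: decorate once at module level; the traced graph is reused for
# every later call with the same input signature.
@tf.function
def inference(v):
    trace_counts["good"] += 1  # runs once per trace
    return v * 2.0

def create_host_good(x):
    return inference(x)

for _ in range(3):
    create_host_bad(tf.constant(1.0))
    create_host_good(tf.constant(1.0))

print(trace_counts)  # {'bad': 3, 'good': 1}
```

The "good" variant is traced once and then reused, while the "bad" variant rebuilds its graph on every loop iteration, which is exactly what triggers the retracing warning.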
Looking forward to your reply.