Hello, after a certain number of episodes (around 400 in my case), the training stops because the GPU runs out of memory; the problem seems to be in the TensorFlow training step.
I'm trying to find a way to replicate the problem, but at the moment it seems to happen randomly.
EDIT:
It seems this is not a memory leak. As training progresses, the agent is able to cover more of the track, so the data collected per episode grows until the GPU can no longer handle it.
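If the cause really is the growing per-episode data, one possible workaround is to split each episode into fixed-size mini-batches instead of pushing the whole episode to the GPU in a single call. This is only a minimal sketch under that assumption; `model`, `states`, `targets`, and `train_on_episode` are hypothetical names, with `model` assumed to be a Keras-style model:

```python
import numpy as np

def train_on_episode(model, states, targets, batch_size=64):
    """Train on one episode's data in fixed-size chunks so GPU memory
    use stays bounded no matter how long the episode gets."""
    n = len(states)
    for start in range(0, n, batch_size):
        end = start + batch_size
        model.train_on_batch(np.asarray(states[start:end]),
                             np.asarray(targets[start:end]))
```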