Hey @ClementPerroud, excellent work on Gym-Trading-Env and Rainbow-Agent. Regarding Rainbow: per the TensorFlow docs, "By default, TensorFlow maps nearly all of the GPU memory of all GPUs visible to the process." The challenge comes when I test multiple instances of Rainbow. It locks up all my GPU memory (and most system memory) for relatively low-intensity processes. The best option I've found in the past is to enable memory growth across GPUs, so each process only allocates what it actually needs. This requires adding the following code to agent.py after importing TensorFlow:
```python
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(e)
```
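For what it's worth, if memory growth alone isn't enough and you'd rather enforce a hard per-process cap, TensorFlow also supports pinning a GPU to a fixed memory budget via a logical device configuration. A rough sketch (the 1024 MB limit is just an illustrative value, not a recommendation):

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    try:
        # Cap the first GPU at 1024 MB (illustrative value) instead of
        # letting the process map nearly all available device memory.
        tf.config.set_logical_device_configuration(
            gpus[0],
            [tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
    except RuntimeError as e:
        # Logical devices must be configured before GPUs are initialized
        print(e)
```

Like `set_memory_growth`, this has to run before any op initializes the GPUs, so it belongs right after the TensorFlow import as well.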
Thanks again for the great contribution.