ClementPerroud / Rainbow-Agent

Replication of the Rainbow reinforcement learning agent in TensorFlow 2, from the paper "Rainbow: Combining Improvements in Deep Reinforcement Learning"
MIT License

Memory management code #2

Open MickyDowns opened 6 months ago

MickyDowns commented 6 months ago

Hey @ClementPerroud, excellent work on Gym-Trading-Env and Rainbow-Agent. Regarding Rainbow: per the TensorFlow docs, "By default, TensorFlow maps nearly all of the GPU memory of all GPUs visible to the process." The challenge comes when I run multiple instances of Rainbow: each one locks up all of my GPU memory (and most of my system memory) for a relatively low-intensity process. The best option I've found in the past is to enable memory growth so each process only allocates the GPU memory it actually needs. This requires adding the following code to agent.py after importing TensorFlow:

```python
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(e)
```
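
For what it's worth, if memory growth alone isn't enough when several agents share a GPU, a hard per-process cap is another option. This is only a sketch of that alternative (not part of the suggestion above); the 2048 MB limit is an arbitrary placeholder you'd tune to your setup:

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
if gpus:
    try:
        # Cap each process to a fixed slice of GPU memory (2048 MB here,
        # purely as an example) instead of letting it grow unbounded.
        for gpu in gpus:
            tf.config.set_logical_device_configuration(
                gpu,
                [tf.config.LogicalDeviceConfiguration(memory_limit=2048)])
    except RuntimeError as e:
        # Virtual devices must be configured before GPUs are initialized
        print(e)
```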

Thanks again for the great contribution.