rPortelas opened 2 years ago
In lines 240-241 of `core/reanalyze_worker.py`, try changing them to:

    trained_steps = ray.get(self.storage.get_counter.remote())
    target_weights = None
and change lines 252-253 to:

    if new_model_index > self.last_model_index:
        self.last_model_index = new_model_index
        target_weights = ray.get(self.storage.get_target_weights.remote())
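The intent of this change can be illustrated with a Ray-free sketch (the `Storage` stub and all names below are illustrative stand-ins, not EfficientZero's actual classes): a fresh copy of the target weights is fetched only when the model index advances, so most loop iterations avoid materialising a large weights object.

```python
# Illustrative sketch of the lazy-fetch pattern (hypothetical names, not the
# repo's code): only pull target weights when the model index advances.

class Storage:
    """Stub standing in for the remote storage actor."""
    def __init__(self):
        self.model_index = 0
        self.fetches = 0  # counts how often weights are actually copied

    def get_target_model_index(self):
        return self.model_index

    def get_target_weights(self):
        self.fetches += 1
        return {"w": self.model_index}  # placeholder for a real state dict


def reanalyze_loop(storage, steps):
    last_model_index = -1
    target_weights = None
    for step in range(steps):
        if step % 100 == 0:  # pretend a new checkpoint lands every 100 steps
            storage.model_index += 1
        new_model_index = storage.get_target_model_index()
        if new_model_index > last_model_index:
            last_model_index = new_model_index
            target_weights = storage.get_target_weights()
        # ... reanalyze batches with target_weights here ...
    return target_weights


storage = Storage()
reanalyze_loop(storage, 1000)
print(storage.fetches)  # 10 fetches instead of 1000
```

With the eager version, every iteration would pull (and briefly hold) a full copy of the weights, which adds both RAM churn and serialization overhead.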
Also, try explicitly calling `gc.collect()` periodically.
Btw, in train/mean_score of your posted plot, 100K on the x-axis is not for Atari 100K but for Atari 10M (i.e., 10M interactions with the env)? Is that understanding right?
The x-axis corresponds to training steps (not environment steps). My experiments were scheduled to run 900k training steps while performing 30M environment steps (I stopped them at around 600k). This means that for each 100k training steps on the x-axis, around 30/9 ≈ 3.33M environment steps are processed.
Is it clearer?
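The conversion can be double-checked with a quick calculation (the numbers are the ones stated above):

```python
# 900k scheduled training steps alongside 30M environment steps means each
# 100k training steps correspond to roughly 3.33M environment steps.
env_steps = 30_000_000
train_steps = 900_000
env_per_100k_train = env_steps / train_steps * 100_000
print(round(env_per_100k_train / 1e6, 2))  # → 3.33
```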
Thanks for your suggestions :).
I already tried adding a periodic `gc.collect()`, which did not solve the issue. For your other suggested modifications, could you tell me a bit more about them? I see that they make the code slightly more efficient, since target weights are loaded only when needed.
Did you solve this RAM issue on your side by modifying these lines?
I did not try the experiment at the large scale you discussed, but the change to the code relevant to `target_weights` makes `train.sh` runnable. Also, decreasing the number of `gpu_actor`s really helps with RAM usage.
Lastly, in line 17 of `storage.py`, try changing it to:

    self.queue = Queue(maxsize=size, actor_options={"num_cpus": 3})

(or a value larger than 3). The bottleneck seems to be that the Ray `Queue` is not fast enough to get and send the data, not whether `gpu_actor` is 20 or some number less than the default 20.
> but the change on codes relevant to `target_weights` makes `train.sh` runnable
Hmm, interesting. Could it just be because you never get to load the target weights in your experiments, since they are shorter than the target model checkpoint interval (meaning you never enter the if statement at line 252)?
No, it is just because this saves RAM, so `train.sh` runs until the end without breaking.
I am currently experimenting on scaling EfficientZero to learning setups with high-data regimes.
As a first step, I am running experiments on Atari, with a replay buffer of 1M environment steps. While doing this I observed that RAM consumption keeps increasing long after the replay buffer reached its maximum size.
Here are tensorboard plots on Breakout, for a 600k training steps run (20M environment steps / 80M environment frames):
I perform experiments on cluster computers featuring 4 Tesla V100 GPUs, 40 CPUs, and 187GB of RAM.
As you can see, although the maximum replay buffer size ("total_node_num") is reached after 30k training steps, RAM (in %) keeps increasing until around 250k steps, from 80% to 85%.
Ideally, I would also like to increase the batch size. But it seems like the problem gets worse in that setting:
The orange curves are from the same Breakout experiment, but with a batch size of 512 (instead of 256) and a smaller replay buffer (0.1M). Here the maximum replay buffer size is reached at 4k training steps, but memory keeps increasing until 100k+ steps. I understand that a bigger batch means more RAM, because more data is processed when updating/doing MCTS, but that does not explain why memory keeps increasing after the replay buffer fills up.
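As a rough sanity check on where the RAM goes, here is a back-of-the-envelope estimate of the raw observation storage in the buffer alone. The frame size, grayscale/uint8 storage, and frame stack of 4 are assumptions for illustration, not values read from the actual config:

```python
# Hypothetical estimate of raw observation memory in the replay buffer,
# assuming 96x96 single-channel uint8 frames with a frame stack of 4 per step.
def buffer_obs_gb(num_steps, h=96, w=96, stack=4, bytes_per_px=1):
    return num_steps * h * w * stack * bytes_per_px / 1e9

print(round(buffer_obs_gb(1_000_000), 1))  # ~36.9 GB for the 1M buffer
print(round(buffer_obs_gb(100_000), 1))    # ~3.7 GB for the 0.1M buffer
```

Under these assumptions the buffer itself is large but bounded, and should plateau once full; continued growth past that point would have to come from elsewhere (e.g. per-batch copies, queues, or Ray's object store).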
Any ideas on what causes this high RAM consumption, and how we could mitigate it?
Run details
Here are the parameters used for the first experiment I described (pink curves):