Hello, I'd like to run my training script, which is customised from rl2_ppo_metaworld_ml45.py, with LocalSampler replaced by RaySampler. The parameter n_workers is set to 18, which is smaller than the number of CPU cores on my machine. However, when I use htop to visualise CPU usage, utilisation is very low: often only one or two cores are at 100% while the rest sit idle, which makes sampling slow.
10 training epochs take nearly 5000 seconds, and 7e6 environment steps take about 22 hours. How can I speed up sampling? Thanks a lot.
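For context, here is a minimal sketch of the change I made (the `RaySampler` constructor arguments are paraphrased from my script and may not match the exact garage API), plus the sanity check I ran on core count:

```python
import os

# In my script (adapted from rl2_ppo_metaworld_ml45.py) I swapped
# LocalSampler for RaySampler, roughly like this (argument names
# are from memory and may be inexact):
#
#   sampler = RaySampler(agents=policy,
#                        envs=envs,
#                        max_episode_length=env_spec.max_episode_length,
#                        n_workers=18)

n_workers = 18                # value I pass to the sampler
n_cores = os.cpu_count()      # logical cores visible to the OS
print(f"n_workers={n_workers}, available cores={n_cores}")
# On my machine n_cores > 18, yet htop shows only 1-2 cores at 100%.
```

So in principle all 18 workers could run on separate cores, but that is not what I observe.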