Closed: planetbalileua closed this issue 2 years ago
Hi planetbalileua and thanks for reaching out!
We realize that some of the code is inconsistent due to fast iteration, and we are refactoring it. For Isaac Gym users, I have published a single-process version with a demo on Ant and Humanoid. Could you please try that and see if the error remains?
Hi supersglzc! The single-process version works fine after some small modifications! The changes I made: added args.if_use_per and commented out line 60 in elegantrl/train/evaluator.py (which uses wandb). Thank you so much for your help!
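A slightly more defensive variant of that first change is to read the flag with a default instead of hand-adding the attribute, so the code works whether or not the caller sets it. This is an illustrative sketch only; the `Arguments` stub below stands in for ElegantRL's config object:

```python
class Arguments:
    """Stub standing in for elegantrl.train.config.Arguments."""
    pass

args = Arguments()

# Instead of requiring `args.if_use_per` to exist, fall back to a
# default (False = plain replay, no prioritized experience replay).
if_use_per = getattr(args, "if_use_per", False)
print(if_use_per)  # False unless the caller opted in
```

The same pattern applies to any other attribute the trainer expects but the tutorial script does not set.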
Would you like to test the updated file at: https://github.com/AI4Finance-Foundation/ElegantRL/blob/master/examples/tutorial_Isaac_Gym.py
Hi! I have tested the updated file, and the latest release fails to find train_and_evaluate_mp in run.py. Some other errors from my side:
ImportError: cannot import name 'ReplayBufferList' from 'elegantrl.train.replay_buffer' (/home/meow/ElegantRL/elegantrl/train/replay_buffer.py)
So I added a ReplayBufferList class in replay_buffer.py.
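For reference, a minimal stand-in for that class can look like the sketch below. This is an assumption about its shape, not the library's actual code: the real ElegantRL buffer stores torch tensors on the GPU, while plain Python lists are used here so the example runs anywhere:

```python
class ReplayBufferList(list):
    """Minimal stand-in for the missing class: a list whose slots hold
    the concatenated fields (states, rewards, ...) of all trajectories."""

    def update_buffer(self, traj_list):
        # traj_list: list of trajectories, each a tuple of per-field
        # sequences, e.g. (states, rewards). Concatenate each field
        # across trajectories and store one merged list per field.
        self[:] = [sum((list(field) for field in fields), [])
                   for fields in zip(*traj_list)]
        steps = len(self[1])                  # number of reward entries
        r_exp = sum(self[1]) / max(steps, 1)  # mean reward over the batch
        return steps, r_exp


buf = ReplayBufferList()
traj_a = ([[0, 0]], [1.0])                  # (states, rewards)
traj_b = ([[1, 1], [2, 2]], [2.0, 3.0])
steps, r_exp = buf.update_buffer([traj_a, traj_b])
print(steps, r_exp)  # 3 2.0
```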
File "/home/meow/ElegantRL/elegantrl/agents/AgentPPO.py", line 657, in AgentPPOHterm
    def __init__(self, net_dim: int, state_dim: int, action_dim: int, gpu_id: int = 0, args: Arguments = None):
NameError: name 'Arguments' is not defined
Added from elegantrl.train.config import Arguments
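Adding the import works. If it ever creates a circular import between AgentPPO.py and config.py, an alternative is to quote the annotation and import Arguments only for type checkers, so nothing is evaluated at runtime. A sketch under that assumption, not the library's actual code:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen by type checkers only; never executed at runtime, so it
    # can neither raise NameError nor trigger a circular import.
    from elegantrl.train.config import Arguments


class AgentPPOHterm:
    def __init__(self, net_dim: int, state_dim: int, action_dim: int,
                 gpu_id: int = 0, args: "Arguments" = None):
        # The quoted "Arguments" annotation is a string at runtime,
        # so defining this method no longer needs the name in scope.
        self.net_dim = net_dim
        self.gpu_id = gpu_id
        self.args = args


agent = AgentPPOHterm(net_dim=64, state_dim=8, action_dim=2)
```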
Thank you again for updating!
Fixed the errors. The issue is closed.
Hello! Thank you for creating this brilliant library! It has been very helpful for a personal project I am working on. I ran into an error when trying to run tutorial_Isaac_Gym.py in the examples folder:
I'm running this on NVIDIA RTX3070TI with 8GB VRAM, and my CUDA version is:
The same Ant example (with 2048 envs) worked when I tested it with the original Isaac Gym train.py. I'm fairly sure I have free VRAM (~7.2 GB) when running this, but it still raises the CUDA out-of-memory error. My torch version is 1.11.0.
I have also tried reducing the number of envs, the batch size, the network size, and other parameters, but the error remains.
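When the tensors genuinely should fit, allocator fragmentation is a common culprit on 8 GB cards; with torch 1.11 you can try setting the environment variable PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 before launching. A quick back-of-envelope check of the rollout buffer can also tell you whether the raw tensors should fit at all. This is illustrative only: the field layout, horizon, and dimensions below are assumptions, not ElegantRL's exact buffer:

```python
def rollout_buffer_gib(num_envs: int, horizon: int,
                       state_dim: int, action_dim: int,
                       dtype_bytes: int = 4) -> float:
    """Estimate VRAM (GiB) for one on-policy rollout buffer.

    Assumes one float32 value per state/action dimension plus a
    reward and a done flag per env step -- a simplification of the
    real buffer layout.
    """
    floats_per_step = state_dim + action_dim + 2  # + reward, done
    return num_envs * horizon * floats_per_step * dtype_bytes / 2**30


# Example numbers for Isaac Gym Ant (60-dim obs, 8-dim action are
# assumptions here): 2048 envs, 32-step horizon.
print(f"{rollout_buffer_gib(2048, 32, 60, 8):.3f} GiB")
```

If the estimate is far below the 8 GB on the card, the OOM likely comes from fragmentation or from other allocations (networks, optimizer state, the simulator itself) rather than the buffer.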
Once again, thank you so much for any possible help on this issue!