nv-tlabs / ASE


config formula for arbitrary gpu memory sizes #11

Open mshoe opened 2 years ago

mshoe commented 2 years ago

I ran into CUDA out-of-memory errors using a GTX 1050 (4 GB) when running the command:

python ase/run.py --task HumanoidAMP --cfg_env ase/data/cfg/humanoid_sword_shield.yaml --cfg_train ase/data/cfg/train/rlg/amp_humanoid.yaml --motion_file ase/data/motions/reallusion_sword_shield/RL_Avatar_Atk_2xCombo01_Motion.npy --headless

To avoid the out-of-memory errors, I lowered:

Is there a formula to calculate the total memory used based on these variables and possibly some others?
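There doesn't appear to be a published formula, but a rough linear model over the dominant buffers can give a ballpark figure. The sketch below is a guess, not the actual ASE memory accounting: the parameter names loosely mirror rl-games/Isaac Gym config fields, and the dimensions and the per-environment simulator overhead constant are made-up placeholders.

```python
# Hypothetical back-of-envelope GPU memory estimator for an Isaac Gym /
# rl-games style training run. Assumed dominant costs:
#   - rollout buffers: num_envs * horizon_length * (obs + act + a few scalars)
#   - an AMP observation replay buffer
#   - a fixed per-environment cost for simulator state (pure guess)
# None of these constants come from the ASE codebase; treat the result as an
# order-of-magnitude estimate only.

FLOAT_BYTES = 4  # float32

def estimate_gpu_bytes(num_envs, horizon_length, obs_dim, act_dim,
                       amp_obs_dim=0, amp_buffer_size=0,
                       per_env_sim_bytes=2_000_000):  # placeholder overhead
    # values/rewards/dones etc. folded into a small constant per step
    rollout = num_envs * horizon_length * (obs_dim + act_dim + 4) * FLOAT_BYTES
    amp = amp_buffer_size * amp_obs_dim * FLOAT_BYTES
    sim = num_envs * per_env_sim_bytes
    return rollout + amp + sim

# Illustrative numbers only (not the real config defaults):
est = estimate_gpu_bytes(num_envs=4096, horizon_length=32,
                         obs_dim=253, act_dim=31,
                         amp_obs_dim=140, amp_buffer_size=200_000)
print(f"~{est / 2**30:.1f} GiB")
```

Under a model like this, halving `num_envs` roughly halves the total, which matches the usual advice of lowering the environment count first when a card runs out of memory.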

xbpeng commented 2 years ago

Thanks for the tips, that should be helpful for some folks.

I think the default settings need about 16 GB of GPU memory. I don't have a formula handy for calculating the total memory cost, but if you dig into the various tensors, it should be possible to get a ballpark estimate.

Robokan commented 1 year ago

Using nvidia-smi, I see it using only 8111 MiB on my 24 GB RTX 4090, which is only about 33% of the card.
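A quick arithmetic check on that figure, assuming the card's 24 GiB is reported as 24576 MiB:

```python
def gpu_usage_pct(used_mib: float, total_mib: float) -> float:
    """Fraction of GPU memory in use, as a percentage."""
    return 100.0 * used_mib / total_mib

# 8111 MiB used on a 24 GiB (24576 MiB) card:
print(f"{gpu_usage_pct(8111, 24576):.0f}%")  # prints "33%"
```

So the defaults fitting in roughly 8 GiB on a 4090 is consistent with xbpeng's 16 GB estimate being conservative, or with driver/architecture differences in how allocations are reported.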