Open yfpeng1234 opened 2 months ago
It's also the case when processing libero_10 and libero_90. I guess the demonstrations in these subsets have longer trajectories than libero_spatial and libero_object, so my GPU with 24G memory could not handle them.
Hi, peng. I am encountering the same error while preprocessing the data and trying to figure it out:
Exception: CUDA out of memory. Tried to allocate 2.91 GiB. GPU 0 has a total capacity of 15.71 GiB of which 2.57 GiB is free. Including non-PyTorch memory, this process has 13.12 GiB memory in use. Of the allocated memory, 12.96 GiB is allocated by PyTorch, and 25.24 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

This happens when processing ./data/atm_libero/libero_object/pick_up_the_ketchup_and_place_it_in_the_basket_demo/demo_0.hdf5
If there are any updates, we can discuss them further here.
That's great! I'm also trying to figure it out
I'm trying to figure it out as well
Hi, thanks for your interest in our work. The data preprocessing with CoTracker indeed requires a large amount of computation, and it is highly dependent on the video length. I preprocessed the dataset with A100 GPUs. If you cannot get GPUs with larger memory, I suggest trying half-precision inference or chunking the videos into multiple shorter clips.
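The chunking idea above can be sketched roughly as follows. This is a minimal, hypothetical example (not the repo's actual preprocessing code): `track_fn` stands in for whatever per-clip tracker call the script makes (e.g. a CoTracker forward pass), and the overlap is there so the tracker has some temporal context at clip boundaries.

```python
import torch

def track_in_chunks(video, track_fn, chunk_len=64, overlap=8):
    """Run a per-clip tracking function over a long video to bound peak GPU memory.

    video:    (T, C, H, W) tensor of frames
    track_fn: callable mapping a clip of K frames to K per-frame outputs
              (placeholder for the real tracker forward pass)
    """
    T = video.shape[0]
    outputs = []
    start = 0
    while start < T:
        end = min(start + chunk_len, T)
        # include a few extra leading frames as context for the tracker
        clip = video[max(0, start - overlap):end]
        out = track_fn(clip)
        # keep only the frames this chunk is responsible for
        outputs.append(out[-(end - start):])
        start = end
    return torch.cat(outputs, dim=0)
```

For half-precision, the per-clip call can additionally be wrapped in `with torch.autocast("cuda", dtype=torch.float16):`, which roughly halves activation memory at some cost in numerical precision; whether the resulting tracks are accurate enough would need to be checked against full-precision output.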
Dear author, I encountered CUDA out of memory when processing libero_goal. I can successfully process libero_spatial and libero_object, but with libero_goal I hit "CUDA out of memory" on the third demonstration. Btw, my GPU has 24G of memory. Thanks!