lukasHoel / text2room

Text2Room generates textured 3D meshes from a given text prompt using 2D text-to-image models (ICCV2023).
https://lukashoel.github.io/text-to-room/
MIT License

How can I run this on Google Colab? #2

Closed: softmurata closed this 1 year ago

softmurata commented 1 year ago

Thanks for the great work! I would like to try your code on Google Colab, but I ran into an OutOfMemory error in generate_scene.py. Could you tell me the specs of your environment?

lukasHoel commented 1 year ago

Hi,

We run the code on an RTX 3090, and it requires ~18GB of VRAM; this is mainly caused by the large diffusion model. You can try adding some existing techniques to reduce the VRAM requirements, e.g. see here: https://huggingface.co/docs/diffusers/optimization/fp16#offloading-to-cpu-with-accelerate-for-memory-savings
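For example, a minimal sketch with the diffusers API (the pipeline class and checkpoint here are illustrative; they are not necessarily exactly what this repo loads):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline

# Load the weights in half precision: fp16 roughly halves the model's
# VRAM footprint compared to fp32.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
)

# Keep submodules on the CPU and move each one to the GPU only while it
# executes (requires `accelerate`). Slower, but a large VRAM saving.
# Note: do not call pipe.to("cuda") when using this.
pipe.enable_sequential_cpu_offload()

# Compute attention in slices instead of in one large batch.
pipe.enable_attention_slicing()
```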

voruin commented 1 year ago

> Hi,
>
> We run the code on an RTX 3090, and it requires ~18GB of VRAM; this is mainly caused by the large diffusion model. You can try adding some existing techniques to reduce the VRAM requirements, e.g. see here: https://huggingface.co/docs/diffusers/optimization/fp16#offloading-to-cpu-with-accelerate-for-memory-savings

Hi,

I attempted to run the program on my computers, one with an RTX 3080 (~10GB VRAM) and the other with a 1080 Ti (~11GB VRAM), and ran into the same OutOfMemory error on both. You suggested implementing some existing techniques to reduce the VRAM requirements, such as adding pipeline.enable_sequential_cpu_offload(). I added some code following the link you shared, but it did not solve the issue; nothing changed (the reported used and free capacities are exactly the same).

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.16 GiB (GPU 0; 10.00 GiB total capacity; 8.88 GiB already allocated; 0 MiB free; 8.93 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
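For completeness, the max_split_size_mb hint from the error message can be tried by setting the allocator config before the first CUDA allocation; I have not verified that it helps here:

```python
# Must run before torch allocates any CUDA memory, e.g. at the very top
# of generate_scene.py. 128 (MiB) is just an example value.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```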

I am new to this and may have done something incorrectly. I would greatly appreciate it if you could provide more detailed instructions on how to integrate this into the original generate_scene.py.

Best,

ForeverAT commented 1 year ago

@voruin The model seems to run on a single GPU. I used a 3080 Ti (~12GB), and it works. So specifying the 1080 Ti, together with the technique he suggested, should solve the problem.
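For example, assuming the 1080 Ti shows up as device 1 in nvidia-smi (the index differs per machine), the process can be pinned to that GPU before torch initializes CUDA:

```python
# Put this at the very top of generate_scene.py, before torch touches
# CUDA, so PyTorch only ever sees the selected GPU.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # index of the target GPU
```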