baaivision / EVA

EVA Series: Visual Representation Fantasies from BAAI
MIT License

EVA CLIP 8B GPU out of memory in COLAB PRO Hugging face version #147

Closed chandrabhuma closed 7 months ago

chandrabhuma commented 8 months ago

```python
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    model_name_or_path,
    torch_dtype=torch.float16,
    trust_remote_code=True,
).to('cuda').eval()
```

When running the EVA CLIP 8B model from Hugging Face, the following error occurs:

```
OutOfMemoryError: CUDA out of memory. Tried to allocate 32.00 MiB. GPU 0 has a total capacity of 14.75 GiB of which 5.06 MiB is free. Process 11929 has 14.74 GiB memory in use. Of the allocated memory 14.52 GiB is allocated by PyTorch, and 127.44 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables).
```

This is on Colab Pro with 15 GB of GPU RAM...

Any solution, please?
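For context, a quick back-of-the-envelope estimate shows why a 15 GB card is not enough here: the fp16 weights of an 8B-parameter model alone nearly fill it, before activations and the CUDA context are counted.

```python
# Rough fp16 memory estimate for an 8B-parameter model.
# This counts weights only; activations, KV buffers, and the CUDA
# context consume additional memory on top of this.
params = 8_000_000_000
bytes_per_param = 2  # fp16/bf16 use 2 bytes per parameter
weights_gib = params * bytes_per_param / 2**30
print(f"{weights_gib:.1f} GiB")  # ~14.9 GiB vs. the 14.75 GiB card
```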

Quan-Sun commented 8 months ago

Hi @chandrabhuma. Generally, for the 8B model, a minimum of 16 GB of GPU memory is recommended when using fp16/bf16 precision. You can consider offloading part of the parameters to CPU memory.
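The offloading idea above can be sketched in plain PyTorch: keep the weights in CPU memory and move each block to the GPU only while it runs. This is a minimal illustration with small stand-in layers, not the EVA-CLIP architecture; it falls back to CPU when no GPU is present.

```python
import torch

# Stand-in for the large model's blocks; in practice these would be the
# transformer layers of the checkpoint loaded with from_pretrained(...).
layers = [torch.nn.Linear(64, 64) for _ in range(4)]

device = "cuda" if torch.cuda.is_available() else "cpu"

def offloaded_forward(x):
    for layer in layers:
        layer.to(device)      # stream this block's weights onto the GPU
        x = layer(x.to(device))
        layer.to("cpu")       # free GPU memory before the next block
    return x.cpu()

out = offloaded_forward(torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 64])
```

With Hugging Face transformers, passing `device_map="auto"` (backed by Accelerate) to `from_pretrained` can handle this GPU/CPU placement automatically, at the cost of slower inference from the host-to-device transfers.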