Open · fangming-he opened this issue 8 months ago
Did you end up finding out the answer to this? I ran into the same issue with a 16 GB GPU trying to run on a GCP VM instance.
I ran StreamingLLM on an A100 (40 GB) with Llama-2-13b and Aquila2-7B, but both ran out of memory :( I don't know what I did wrong.
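For context, here is my back-of-envelope for where the memory goes, assuming plain fp16 inference and Llama-2-13b's stock config (40 layers, hidden size 5120); these are estimates, not measurements:

```python
# Rough fp16 memory estimate for Llama-2-13b (stock config: 40 layers, hidden 5120).
n_params = 13e9
n_layers, hidden = 40, 5120

weights_gib = n_params * 2 / 2**30          # 2 bytes per fp16 parameter
kv_per_token = 2 * n_layers * hidden * 2    # K and V per layer, fp16
kv_gib_20k = kv_per_token * 20_000 / 2**30  # cache after a 20k-token stream

print(f"weights:             {weights_gib:.1f} GiB")  # ~24.2 GiB
print(f"KV cache @ 20k toks: {kv_gib_20k:.1f} GiB")   # ~15.3 GiB
```

So the weights alone take ~24 GiB, and an unevicted KV cache grows by ~0.78 MiB per token, which on long streaming inputs can eat the rest of a 40 GB card.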
Did you pass --enable_streaming? With --enable_streaming and 32 GB of memory on the GPU, it should be OK to run it.
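If your GPU has less than that (e.g. the 16 GB card mentioned above), one workaround is to shrink the resident weight footprint. This is not the repo's loading code, just a sketch assuming the model is loaded through Hugging Face transformers (device_map="auto" needs `accelerate` installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical checkpoint choice

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # halves the weight footprint vs. fp32
    device_map="auto",          # spills layers to CPU when the GPU fills up
)
```

Offloaded layers run much slower, but this avoids the hard OOM.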
How much CUDA memory is required to run the example?
While running the example with `CUDA_VISIBLE_DEVICES=0 python examples/run_streaming_llama.py --enable_streaming`, the following error pops up:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 136.00 MiB. GPU 0 has a total capacty of 7.92 GiB of which 131.69 MiB is free. Including non-PyTorch memory, this process has 7.79 GiB memory in use. Of the allocated memory 7.03 GiB is allocated by PyTorch, and 131.61 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
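I see the last line suggests max_split_size_mb; if I understand correctly, it is passed through PYTORCH_CUDA_ALLOC_CONF like this (128 here is just an illustrative value):

```
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 CUDA_VISIBLE_DEVICES=0 \
    python examples/run_streaming_llama.py --enable_streaming
```

Though since the fp16 weights of a 7B model are already about 7e9 × 2 bytes ≈ 13 GiB, I suspect my 7.92 GiB card is simply too small for this model regardless of allocator settings.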