Closed — swearos closed this issue 8 months ago
I also want to know how many resources are needed. Where does the out-of-memory error occur for you — while loading the transformer part?
Hi, if you don't have enough GPU memory, please consider using quantization. See this example: https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/accessory/demos/multi_turn_mm_box.py#L77
If you don't want to quantize the model: we have not tested on 24 GB GPUs, but as a rough estimate, the GPU memory cost for hosting SPHINX across two GPUs should be close to 24 GB per GPU without quantization. So you may be able to run it unquantized after some optimization, but it would be extremely tight.
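To see why 24 GB is so tight, a back-of-the-envelope calculation of weight memory alone is instructive. The sketch below assumes a model on the order of 13B parameters (SPHINX builds on LLaMA-2-13B; the exact parameter count, and the extra memory for activations and the KV cache, are not accounted for here):

```python
def weight_memory_gib(n_params: float, bits_per_param: int) -> float:
    """Approximate memory for model weights alone, in GiB.

    Ignores activations, KV cache, and framework overhead, so real
    usage will be higher than this estimate.
    """
    return n_params * bits_per_param / 8 / 1024**3

# Hypothetical 13B-parameter model (illustrative, not an official figure).
n = 13e9
print(f"fp16: {weight_memory_gib(n, 16):.1f} GiB")  # ~24 GiB: already fills a 24 GB card
print(f"int4: {weight_memory_gib(n, 4):.1f} GiB")   # ~6 GiB: leaves headroom for activations
```

This is why fp16 weights alone roughly saturate a 24 GB GPU, while 4-bit quantization leaves comfortable headroom.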
@yaomingzhang @swearos Please refer to issue 114: https://github.com/Alpha-VLLM/LLaMA2-Accessory/issues/114
A 24 GB GPU runs out of memory. Would you consider releasing a smaller model, one that can run under 24 GB?