Alpha-VLLM / LLaMA2-Accessory

An Open-source Toolkit for LLM Development
https://llama2-accessory.readthedocs.io/

How much GPU memory is needed to run SPHINX? #112

Closed swearos closed 8 months ago

swearos commented 8 months ago

A 24GB GPU runs out of memory. Would you consider releasing a smaller model, one that can run under 24GB?

yaomingzhang commented 8 months ago

I also want to know how much memory is needed. At what point do you get the out-of-memory error? While loading the xformers part?

ChrisLiu6 commented 8 months ago

Hi, if you don't have enough GPU memory, please consider using quantization. See the following for an example: https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/accessory/demos/multi_turn_mm_box.py#L77
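For context, the usual way to cut weight memory roughly in half is to swap the model's `nn.Linear` layers for 8-bit equivalents. Below is a minimal sketch of that pattern using `bitsandbytes` (LLM.int8()); it illustrates the general technique, not the linked demo's own helper, whose names and options may differ:

```python
import torch.nn as nn
import bitsandbytes as bnb

def quantize_linear_layers(model: nn.Module) -> nn.Module:
    """Recursively replace every nn.Linear with an int8 bitsandbytes layer."""
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            qlinear = bnb.nn.Linear8bitLt(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                has_fp16_weights=False,  # keep weights in int8 after conversion
                threshold=6.0,           # outlier threshold from the LLM.int8() paper
            )
            # Wrap the existing fp16 weights; the actual int8 conversion
            # happens when the module is moved to CUDA.
            qlinear.weight = bnb.nn.Int8Params(
                child.weight.data, requires_grad=False
            )
            if child.bias is not None:
                qlinear.bias = child.bias
            setattr(model, name, qlinear)
        else:
            quantize_linear_layers(child)  # recurse into submodules
    return model

# model = quantize_linear_layers(model).cuda()  # conversion triggers on .cuda()
```

Relative to fp16, int8 weights halve the dominant memory term, which is typically what brings a model of this size within reach of a single 24GB card.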

If you don't want to quantize the model: we have not tried 24GB GPUs, but as a rough estimate, the GPU memory needed to host SPHINX on two GPUs (without quantization) is close to 24GB. So you may be able to run it without quantization after some optimization, but overall it is really tight.
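To see why the estimate lands near that figure, here is a hedged back-of-envelope calculation. It assumes the roughly 13B-parameter LLaMA-2 backbone SPHINX builds on plus an assumed ~2B parameters of visual encoders, counts fp16 weights only, and ignores activations, the KV cache, and framework overhead:

```python
GIB = 1024 ** 3

def weight_memory_gib(n_params: float, bytes_per_param: int) -> float:
    """Memory for model weights alone, in GiB."""
    return n_params * bytes_per_param / GIB

llm_params = 13e9     # LLaMA-2-13B backbone (assumption)
visual_params = 2e9   # visual encoders, rough guess (assumption)

total_fp16 = weight_memory_gib(llm_params + visual_params, 2)  # 2 bytes/param in fp16
per_gpu = total_fp16 / 2  # weights sharded evenly across two GPUs

print(f"fp16 weights: {total_fp16:.1f} GiB total, {per_gpu:.1f} GiB per GPU")
# -> ~27.9 GiB total, ~14.0 GiB per GPU. Activations, image features, and
#    the KV cache come on top, which is why a 24GB card ends up borderline.
```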

gaopengpjlab commented 8 months ago

@yaomingzhang @swearos Please refer to issue #114: https://github.com/Alpha-VLLM/LLaMA2-Accessory/issues/114