Alpha-VLLM / LLaMA2-Accessory

An Open-source Toolkit for LLM Development
https://llama2-accessory.readthedocs.io/

[WIP] Further memory optimization of SPHINX series models #118

Open linziyi96 opened 12 months ago

linziyi96 commented 12 months ago

This PR currently introduces three changes to fit SPHINX-13B in FP16 on 4×16GB GPUs:

  1. Support resharding the checkpoints to a higher degree of tensor parallelism so the model can run on 4 GPUs (our checkpoints are released with a tensor parallel size of 2); see the resharding sketch after this list.
  2. Move the visual backbone creation to CPU. Because the visual backbones must be created in FP32 (and include some unused language parameters), creating them directly on GPUs, as currently implemented, causes a memory spike and a consequent OOM on 16GB GPUs; see the CPU-first sketch below.
  3. In the multi_turn_mm_box demo, add an option to disable SAM. This is a workaround to save a few GBs of memory on GPU 0, as SAM cannot easily be sharded at the moment; see the flag sketch below.
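
A minimal sketch of the resharding idea from item 1, assuming Megatron-style tensor parallelism where column-parallel weights are split along dim 0 and row-parallel weights along dim 1 (the `split_dim_of` helper is hypothetical; the actual PR may determine the split dimension differently):

```python
import torch

def reshard_tp2_to_tp4(shards, split_dim_of):
    """Split a tensor-parallel checkpoint from 2 shards into 4.

    shards: list of 2 state dicts, one per original TP rank.
    split_dim_of: maps a parameter name to its TP split dim
        (0 for column-parallel, 1 for row-parallel, None for replicated).
    """
    new_shards = [dict() for _ in range(4)]
    for name in shards[0]:
        dim = split_dim_of(name)
        for old_rank, sd in enumerate(shards):
            if dim is None:
                # Replicated parameters (e.g. norm weights) are copied
                # to every new rank.
                halves = [sd[name], sd[name]]
            else:
                # Each old shard holds a contiguous slice of the full tensor;
                # splitting it in two along the TP dim yields the new shards.
                halves = torch.chunk(sd[name], 2, dim=dim)
            for i, half in enumerate(halves):
                new_shards[2 * old_rank + i][name] = half.clone()
    return new_shards
```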
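
For item 2, a sketch of the CPU-first construction pattern under the same motivation (the `make_backbone` factory is a stand-in for the repo's actual visual backbone builders):

```python
import torch

def build_visual_backbone(make_backbone):
    # Construct in FP32 on CPU so the transient full-precision weights
    # (including any language-tower parameters the model never uses)
    # never occupy GPU memory.
    with torch.device("cpu"):
        backbone = make_backbone()
    backbone.half()          # cast once on CPU...
    return backbone.cuda()   # ...then move only the FP16 weights to GPU
```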
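
And for item 3, a sketch of an opt-out flag in the demo (`--disable_sam` is a hypothetical flag name and the checkpoint path is a placeholder):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--disable_sam", action="store_true",
                    help="Skip loading SAM to save a few GB of memory on GPU 0.")
args = parser.parse_args()

sam_predictor = None
if not args.disable_sam:
    # SAM cannot easily be sharded across GPUs, so it sits on GPU 0 alone;
    # making it optional frees that memory when segmentation is not needed.
    from segment_anything import sam_model_registry, SamPredictor
    sam = sam_model_registry["vit_h"](checkpoint="path/to/sam_vit_h.pth").cuda(0)
    sam_predictor = SamPredictor(sam)
```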

nvidia-smi with the model running on 4×V100-16GB after this PR: [screenshot of nvidia-smi output]