Godofnothing closed this issue 1 year ago.
This is not going to be a full solution, but I have gotten Codegen-16B-multi to work on an A6000/48GB. The script we used to pull it off is here:
https://github.com/nuprl/MultiPL-E/blob/main/inference/codegen.py
Note the crazy code for the stopping criteria. IIRC it was necessary to get things to work.
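For context, the core of that stopping logic is easy to sketch in plain Python. This is a hedged illustration only, not the code from the linked `codegen.py`: generation for HumanEval-style completions is typically cut at the first occurrence of any stop sequence that signals a new top-level block.

```python
# Illustrative sketch of stop-sequence truncation for code generation
# (hypothetical helper; NOT the exact logic in the linked codegen.py).
# A completion ends when a new top-level definition or statement begins.
STOP_SEQUENCES = ["\ndef ", "\nclass ", "\nif __name__", "\nprint("]

def truncate_at_stop(completion: str, stops=STOP_SEQUENCES) -> str:
    """Cut the generated text at the earliest stop sequence, if any."""
    cut = len(completion)
    for s in stops:
        idx = completion.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return completion[:cut]
```

For example, `truncate_at_stop("    return x + y\ndef next_fn():")` keeps only `"    return x + y"`. Implementing this as a `StoppingCriteria` inside the generation loop is what makes the real code fiddly, since it has to decode and check partial batches.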
Can you make sure that FP16 is set, and follow memory consumption up until `accelerator.prepare`?
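One concrete way to follow GPU memory consumption from Python is polling `nvidia-smi` between steps. This is a sketch assuming `nvidia-smi` is on the PATH; the helper names are hypothetical, not part of the harness:

```python
# Hypothetical helpers for watching GPU memory during setup
# (e.g. call gpu_memory_used_mib() before and after accelerator.prepare).
import shutil
import subprocess

def parse_used_mib(nvidia_smi_output: str) -> list:
    """Parse 'memory.used' values in MiB, one line per GPU."""
    return [int(line.strip())
            for line in nvidia_smi_output.splitlines() if line.strip()]

def gpu_memory_used_mib() -> list:
    """Query used GPU memory via nvidia-smi, if available."""
    if shutil.which("nvidia-smi") is None:
        return []  # no NVIDIA driver/tooling on this machine
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_used_mib(out)
```

Comparing the readings before and after `accelerator.prepare` shows whether the jump to OOM happens at weight placement or later.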
@loubnabnl I set fp16 via `accelerate launch --mixed_precision fp16`, but it doesn't help. There is no GPU memory consumption up to `accelerator.prepare`.
@Godofnothing we found a bug which made the memory consumption higher than necessary. Can you try running evaluation with the code from this PR: https://github.com/bigcode-project/bigcode-evaluation-harness/pull/61? Note that you now need to specify `--precision fp16`.
Closing this issue, as I tried loading CodeGen-16B in half precision and it fits under 40GB of GPU memory.
Sorry for the long delay. I've pulled the latest version of the code and the model successfully fits into 40GB. Thanks for your help and response.
Thanks for the great work and convenient benchmarking tool!
I would like to evaluate the CodeGen-16B model on the humaneval benchmark. At my disposal are A6000 GPUs with 48GB of memory. The evaluation script crashes due to CUDA out of memory here (i.e. at `accelerator.prepare`), even with the smallest batch size, 1. Since this is model evaluation, I would expect most of the memory to be occupied by the model parameters (no optimizer states). Naively, the model should fit on a single GPU if loaded in half precision, since 2 × 16 = 32 GB < 48 GB. However, when setting mixed precision with fp16 in `accelerate launch`, I still face the OOM error.

What measures would you suggest to fit the model onto a single GPU?
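The back-of-the-envelope estimate above can be made explicit. One caveat worth noting (my reading, not stated in the thread): `accelerate`'s `--mixed_precision fp16` generally enables autocast for compute while the weights stay in fp32, so it does not by itself halve the weight footprint; the weights have to actually be loaded in fp16. A quick sketch of the arithmetic:

```python
# Back-of-the-envelope weight memory for CodeGen-16B (~16e9 parameters).
# Counts weights only; activations and the KV cache come on top.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2}

def weight_memory_gb(n_params: float, dtype: str) -> float:
    """Memory occupied by model weights alone, in GB (1 GB = 1e9 bytes)."""
    return n_params * BYTES_PER_PARAM[dtype] / 1e9

N_PARAMS = 16e9
fp32_gb = weight_memory_gb(N_PARAMS, "fp32")  # 64.0 GB -> exceeds 48 GB
fp16_gb = weight_memory_gb(N_PARAMS, "fp16")  # 32.0 GB -> fits in 48 GB
```

With fp32 weights the model alone already exceeds a 48GB card, which matches the observed OOM; with true fp16 weights, 32 GB plus activations and KV cache is consistent with the "fits under 40GB" result reported above.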