bigcode-project / bigcode-evaluation-harness

A framework for the evaluation of autoregressive code generation language models.
Apache License 2.0

How to evaluate the model memory-efficiently? #52

Closed: Godofnothing closed this issue 1 year ago

Godofnothing commented 1 year ago

Thanks for the great work and convenient benchmarking tool!

I would like to evaluate the CodeGen-16B model on the HumanEval benchmark. At my disposal are A6000 GPUs with 48 GB of memory. The evaluation script crashes with CUDA out of memory (i.e., at `accelerator.prepare`), even with the smallest batch size of 1.

Since this is evaluation only, I would expect most of the memory to be occupied by the model parameters (there are no optimizer states).
Naively, the model should fit on a single GPU if loaded in half precision, since 2 bytes × 16B parameters = 32 GB < 48 GB. However, when setting mixed precision to fp16 in `accelerate launch`, I still hit the OOM error.
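For reference, here is the arithmetic and the direct half-precision load I would naively expect to fit (a sketch only; the model id and loading flags are my assumptions, not what the harness currently does):

```python
import torch
from transformers import AutoModelForCausalLM

# Mixed precision (autocast) keeps the fp32 weights resident, so a 16B-param
# model still needs ~64 GB. Loading the weights directly in half precision
# halves that to ~32 GB, which should fit on a 48 GB A6000.
model = AutoModelForCausalLM.from_pretrained(
    "Salesforce/codegen-16B-multi",
    torch_dtype=torch.float16,   # load weights in fp16 instead of fp32
    low_cpu_mem_usage=True,      # avoid a second full copy while loading
).to("cuda")
```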

What measures would you suggest to fit the model onto a single GPU?

arjunguha commented 1 year ago

This is not going to be a full solution, but I have gotten CodeGen-16B-multi to work on an A6000/48GB. The script we used to pull it off is here:

https://github.com/nuprl/MultiPL-E/blob/main/inference/codegen.py

Note the crazy code for the stopping criteria. IIRC it was necessary to get things to work.
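Roughly, the idea is a decode-and-check stopping criterion. A simplified sketch of the shape (not the exact MultiPL-E code; the class and argument names here are made up):

```python
import torch
from transformers import StoppingCriteria

class StopAtSequences(StoppingCriteria):
    """Stop generation once any stop string appears in the decoded output."""

    def __init__(self, stop_strings, prompt_len, tokenizer):
        self.stop_strings = stop_strings
        self.prompt_len = prompt_len  # number of prompt tokens to skip
        self.tokenizer = tokenizer

    def __call__(self, input_ids: torch.LongTensor, scores, **kwargs) -> bool:
        # Decode only the newly generated tokens and check for stop strings.
        text = self.tokenizer.decode(input_ids[0, self.prompt_len:])
        return any(s in text for s in self.stop_strings)
```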

loubnabnl commented 1 year ago

Can you make sure that fp16 is actually set, and track GPU memory consumption up until `accelerator.prepare`?
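Something along these lines around the load and prepare calls would help (a sketch; `model` and `accelerator` refer to the objects already defined in the evaluation script):

```python
import torch

def log_gpu_mem(tag: str) -> None:
    # Print allocated/reserved CUDA memory in GB at a checkpoint.
    alloc = torch.cuda.memory_allocated() / 1e9
    reserved = torch.cuda.memory_reserved() / 1e9
    print(f"[{tag}] allocated={alloc:.2f} GB, reserved={reserved:.2f} GB")

log_gpu_mem("after from_pretrained")
model = accelerator.prepare(model)
log_gpu_mem("after accelerator.prepare")
```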

Godofnothing commented 1 year ago

@loubnabnl I set fp16 via `accelerate launch --mixed_precision fp16`, but it doesn't help. There is no GPU memory consumption up to `accelerator.prepare`.

loubnabnl commented 1 year ago

@Godofnothing we found a bug that made the memory consumption higher than necessary. Can you try running the evaluation with the code from this PR: https://github.com/bigcode-project/bigcode-evaluation-harness/pull/61? You now need to specify `--precision fp16`.
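Conceptually, the flag maps to loading the checkpoint weights directly in the requested dtype rather than keeping fp32 copies around. A simplified sketch of that idea (not the exact harness code; the helper name is hypothetical):

```python
import torch
from transformers import AutoModelForCausalLM

# Map the --precision flag to the dtype the weights are loaded in.
DTYPES = {"fp32": torch.float32, "fp16": torch.float16, "bf16": torch.bfloat16}

def load_model(name: str, precision: str = "fp16"):
    # Loading directly in fp16 halves the weight memory versus fp32.
    return AutoModelForCausalLM.from_pretrained(
        name, torch_dtype=DTYPES[precision]
    )
```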

loubnabnl commented 1 year ago

Closing this issue, as I tried loading CodeGen-16B in mixed precision and it fits under 40 GB of GPU memory.

Godofnothing commented 1 year ago

Sorry for the long delay. I've pulled the latest version of the code, and the model now successfully fits into 40 GB. Thanks for your help and your responses.