clinicalml / co-llm

Co-LLM: Learning to Decode Collaboratively with Multiple Language Models
https://arxiv.org/abs/2403.03870
102 stars · 7 forks

experimental equipment #4

Closed · hechengbo-H closed this 6 months ago

hechengbo-H commented 6 months ago

Hi, this work is great. I want to ask: how many GPUs, and of what type, are needed to run this work?

lolipopshock commented 6 months ago

Thanks! We reported this in our paper -- we used 4 A100 80GB GPUs to fine-tune the 7B model, and the same configuration for QLoRA tuning of the 70B models.
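As a rough sanity check on why that configuration works for both settings, here is a back-of-envelope memory estimate. This is a sketch under common assumptions (about 16 bytes per parameter for full fine-tuning with AdamW and fp32 optimizer state, about 0.5 bytes per parameter for a 4-bit QLoRA base model), not an exact accounting -- real usage also includes activations, adapter weights, and framework overhead.

```python
# Back-of-envelope GPU memory estimates, in GB (1e9 bytes ~ 1 GB).
# Assumed byte counts per parameter are rules of thumb, not measured values.

def full_finetune_gb(params_billions):
    # bf16 weights (2 B) + bf16 grads (2 B) + Adam fp32 moments m, v (8 B)
    # + fp32 master weights (4 B) ~= 16 bytes per parameter
    return params_billions * 16

def qlora_gb(params_billions):
    # 4-bit base weights ~= 0.5 bytes per parameter; the LoRA adapters and
    # their optimizer state are comparatively tiny and are ignored here
    return params_billions * 0.5

total_gb = 4 * 80  # 4x A100 80GB, as in the reply above

print(full_finetune_gb(7))   # ~112 GB for a 7B full fine-tune
print(qlora_gb(70))          # ~35 GB for a 70B base in 4-bit
print(total_gb)              # 320 GB available across the 4 GPUs
```

Both workloads fit comfortably within the 320 GB of aggregate memory, which is consistent with the same 4-GPU setup covering both the 7B fine-tune and the 70B QLoRA run.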