zhuyiche / llava-phi


Number of V100 needed in pre-training stage and fine-tuning stage #10

Open Yang-bug-star opened 8 months ago

Yang-bug-star commented 8 months ago

Is it possible to train on 2 V100 GPUs, given the small language model used?
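(No one in the thread confirms a 2-GPU run. A common way to attempt one is to shrink the per-device batch and compensate with gradient accumulation so the global batch matches the released recipe. A minimal sketch follows, assuming llava-phi keeps LLaVA's DeepSpeed entry point and HuggingFace Trainer arguments; the script path, ZeRO config, and batch numbers are illustrative placeholders, not taken from the repo.)

```bash
# Hypothetical 2-GPU launch; script path, ZeRO config, and batch numbers
# are placeholders to adapt to the repo's actual training scripts.
# V100s lack bfloat16, so fp16 is used, and ZeRO-3 with CPU offload
# helps fit the weights and optimizer states in 16 GB per card.
# Global batch stays at 4 (per device) x 2 (GPUs) x 16 (accum) = 128.
deepspeed --include localhost:0,1 llava_phi/train/train.py \
    --deepspeed ./scripts/zero3_offload.json \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 16 \
    --gradient_checkpointing True \
    --fp16 True
```

With the global batch held constant by accumulation, wall-clock time would be expected to grow roughly in proportion to the reduced GPU count.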

monjurulkarim commented 7 months ago

@zhuyiche @JLM-Z @Yang-bug-star Were you able to conduct training on your GPUs? I'm also interested in knowing about it.

Yang-bug-star commented 7 months ago

Sorry, I haven't tried it.

feiyu12138 commented 7 months ago

I found the training time is very close to the original LLaVA (LLaMA with CLIP 336, batch size 14), except pre-training is about one hour shorter. Is that normal?

hunarbatra commented 6 months ago

@feiyu12138 How many GPUs, and which GPU model, did you use?