Closed: kynthesis closed this issue 7 months ago
Dear CogVLM's authors,
Thank you for your outstanding work on MLLM. Can you share a bit about estimating the time required to fine-tune or train the model?
Hardware requirement

Model Inference:
- INT4 quantization: 1 * RTX 3090 (24G) (CogAgent takes ~12.6GB, CogVLM takes ~11GB)
- FP16: 1 * A100 (80G) or 2 * RTX 3090 (24G)

Finetuning:
- FP16: 4 * A100 (80G) [Recommended] or 8 * RTX 3090 (24G)
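These memory figures can be roughly sanity-checked from the model size alone: weight memory is parameter count times bits per parameter. The sketch below assumes CogVLM has about 17e9 parameters (an assumption based on the "CogVLM-17B" naming, not stated in this thread) and counts weights only, ignoring activations, the KV cache, and framework overhead, which is why the reported ~11GB INT4 footprint is a bit higher than the raw estimate.

```python
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate memory needed for model weights alone, in GB.

    Ignores activations, KV cache, and runtime overhead, so real
    usage will be somewhat higher than this estimate.
    """
    return n_params * bits_per_param / 8 / 1e9

# Assuming ~17e9 parameters for CogVLM (illustrative, not from the thread)
print(weight_memory_gb(17e9, 4))   # INT4 weights: 8.5 GB, fits on a 24G RTX 3090
print(weight_memory_gb(17e9, 16))  # FP16 weights: 34.0 GB, needs an A100 80G or 2 * 3090
```

This also suggests why FP16 fine-tuning needs 4 * A100 (80G): training adds gradients and optimizer state on top of the weights, typically several times the weight memory itself.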
What do you mean exactly? If I use 4 * A100 (80G), roughly how long would fine-tuning take?