Vision-CAIR / MiniGPT-4

Open-sourced codes for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
BSD 3-Clause "New" or "Revised" License

Could you provide the training loss #167

guozhiyao opened this issue 1 year ago

guozhiyao commented 1 year ago

Could you provide the training logs of stage 1 and stage 2?

guozhiyao commented 1 year ago

I changed the LLM to a 13B model and pretrained the projection layer by loading your stage-2 checkpoint. The initial loss is around 5 and drops slowly; it is around 2.5 after 1.5M samples. Is that normal? I have tried fine-tuning your model with the same config and model as your stage 2, and the loss is around 1.4. I do not know whether my pretraining loss matches your training log.
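(Side note: raw per-step losses are noisy, so when comparing a run against someone else's log it helps to compare smoothed curves instead of spot values. A minimal sketch of bias-corrected exponential moving-average smoothing, in plain Python with a hypothetical loss curve:)

```python
def ema_smooth(losses, beta=0.98):
    """Exponentially smooth a loss curve, with bias correction
    so the first few steps are not dragged toward zero."""
    avg, out = 0.0, []
    for t, loss in enumerate(losses, start=1):
        avg = beta * avg + (1.0 - beta) * loss
        out.append(avg / (1.0 - beta ** t))  # bias-corrected value
    return out

# Hypothetical example: a loss decaying from ~5 toward ~2.5
raw = [5.0 * (0.5 ** (i / 100)) + 2.5 * (1 - 0.5 ** (i / 100)) for i in range(200)]
smoothed = ema_smooth(raw)
```

With bias correction the first smoothed value equals the first raw loss, so the curve is comparable from step 1 onward.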

yPanStupidog commented 1 year ago

Can you do fine-tune using 7B?

guozhiyao commented 1 year ago

> Can you do fine-tune using 7B?

@yPanStupidog I tried to change the LLM, but I do not know if my pretraining loss is normal, because I found that the fine-tuning loss of your 13B model is around 1.4. I have not tried the 7B yet. Could you provide the training log of the 7B or 13B, please?

yPanStupidog commented 1 year ago

Could you please help me check whether fine-tuning can be applied to the 7B? I always get errors!

yuanlisky commented 1 year ago

13B stage-2 fine-tuning in my case: the initial loss is 3.8; after 4 epochs of 200 iterations, the best loss is 1.9.

ImKeTT commented 1 year ago

> Can you do fine-tune using 7B?
>
> @yPanStupidog I tried to change the LLM, but I do not know if my pretraining loss is normal, because I found that the fine-tuning loss of your 13B model is around 1.4. I have not tried the 7B yet. Could you provide the training log of the 7B or 13B, please?

@guozhiyao A quick question: what is your config for pre-training? I tried to pre-train this model using the default training config (stage 1) with a 7B LLM, and it gave me a NaN loss at the beginning. Did you meet a similar error? Thanks.
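(For the NaN-loss reports in this thread: a NaN at the very first steps usually points to fp16 overflow or a too-high learning rate rather than the data. A minimal, framework-agnostic sketch of a guard that skips updates on non-finite losses and aborts if they persist; the function name, `state` dict, and threshold are all hypothetical, not part of the MiniGPT-4 code:)

```python
import math

def guarded_step(loss_value, state, max_bad_steps=50):
    """Return True if the optimizer step should proceed (loss is finite),
    False if it should be skipped; raise if non-finite losses persist.

    `state` is a dict tracking consecutive bad steps across calls."""
    if math.isfinite(loss_value):
        state["bad_steps"] = 0
        return True
    state["bad_steps"] = state.get("bad_steps", 0) + 1
    if state["bad_steps"] >= max_bad_steps:
        raise RuntimeError(
            "Loss has been NaN/inf for %d consecutive steps; "
            "check fp16 loss scaling and the learning rate."
            % state["bad_steps"]
        )
    return False

state = {}
assert guarded_step(2.7, state) is True            # finite loss: proceed
assert guarded_step(float("nan"), state) is False  # skip this update
```

If the guard fires on the very first steps, lowering the learning rate, warming up longer, or enabling dynamic loss scaling (e.g. a gradient scaler in mixed-precision training) are the usual things to try.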

LIO-H-ZEN commented 1 year ago

> Can you do fine-tune using 7B?
>
> @yPanStupidog I tried to change the LLM, but I do not know if my pretraining loss is normal, because I found that the fine-tuning loss of your 13B model is around 1.4. I have not tried the 7B yet. Could you provide the training log of the 7B or 13B, please?
>
> @guozhiyao A quick question: what is your config for pre-training? I tried to pre-train this model using the default training config (stage 1) with a 7B LLM, and it gave me a NaN loss at the beginning. Did you meet a similar error? Thanks.

Same error.

hhllxx1121 commented 1 year ago

> Can you do fine-tune using 7B?
>
> @yPanStupidog I tried to change the LLM, but I do not know if my pretraining loss is normal, because I found that the fine-tuning loss of your 13B model is around 1.4. I have not tried the 7B yet. Could you provide the training log of the 7B or 13B, please?
>
> @guozhiyao A quick question: what is your config for pre-training? I tried to pre-train this model using the default training config (stage 1) with a 7B LLM, and it gave me a NaN loss at the beginning. Did you meet a similar error? Thanks.
>
> Same error.

I have encountered the same problem. Have you resolved it?

Zhanghahah commented 1 year ago

Same issue. Could you provide some insights?

GDUTT1 commented 11 months ago

I use Vicuna-7B and EVA01-CLIP-g-14; the initial loss is 6.8479. After 1.0M samples, the loss is 2.6. Is this normal?