Hello, I encountered an issue during fine-tuning: while training on llava_v1_5_mix665k_with_video_chatgpt, the loss suddenly dropped to 0 at a certain step and has stayed at 0 ever since. Could you help me identify the possible cause? Here are the relevant log lines:
{'loss': 1.9156, 'learning_rate': 8.695652173913044e-07, 'epoch': 0.0}
{'loss': 1.8055, 'learning_rate': 1.7391304347826088e-06, 'epoch': 0.0}
{'loss': 0.0, 'learning_rate': 2.6086956521739132e-06, 'epoch': 0.0}
{'loss': 0.0, 'learning_rate': 3.4782608695652175e-06, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 4.347826086956522e-06, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 5.2173913043478265e-06, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 6.086956521739132e-06, 'epoch': 0.01}
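For context, one common way a loss can come out as exactly 0 (rather than NaN) is when every label token in a batch is masked out with the ignore index, e.g. because long video prompts get truncated past the max sequence length and the answer tokens are all set to -100. The sketch below is a minimal pure-Python illustration of that failure mode, not the actual training code; `masked_ce_loss` and the `count` guard are hypothetical stand-ins for how many HF-style loss implementations behave:

```python
import math

IGNORE_INDEX = -100  # common sentinel for masked labels in HF-style training code

def masked_ce_loss(logits, labels):
    """Mean cross-entropy over non-ignored positions.

    If every label equals IGNORE_INDEX, the divide-by-count guard returns
    exactly 0.0 -- which would show up in the logs as a flat-zero loss.
    """
    total, count = 0.0, 0
    for row, label in zip(logits, labels):
        if label == IGNORE_INDEX:
            continue  # masked token: contributes nothing to the loss
        m = max(row)  # stabilize log-sum-exp
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        total += log_z - row[label]
        count += 1
    return total / count if count else 0.0

# Normal batch: some supervised tokens -> positive loss
print(masked_ce_loss([[2.0, 0.5, 0.1], [0.1, 1.5, 0.3]], [0, 1]))

# Degenerate batch: every token masked (e.g. answer truncated away) -> 0.0
print(masked_ce_loss([[2.0, 0.5, 0.1], [0.1, 1.5, 0.3]], [-100, -100]))
```

If that were the cause here, checking the tokenized lengths of the video samples (and whether all their labels are -100 after truncation) would confirm it; fp16 overflow producing NaN logits is another candidate worth ruling out.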