facebookresearch / crystal-text-llm

Large language models to generate stable crystals.

Loss is always zero while training #3

Closed dqgdqg closed 2 months ago

dqgdqg commented 7 months ago

OS: Ubuntu 20.04 GPU: 3090 24GB Python: 3.12

The loss is always zero while training with the following command:

python llama_finetune.py --run-name 7b-test-run --model 7b

{'loss': 0.0, 'learning_rate': 1e-05, 'epoch': 0.0}
{'loss': 0.0, 'learning_rate': 2e-05, 'epoch': 0.0}
{'loss': 0.0, 'learning_rate': 3e-05, 'epoch': 0.0}
{'loss': 0.0, 'learning_rate': 4e-05, 'epoch': 0.0}
{'loss': 0.0, 'learning_rate': 5e-05, 'epoch': 0.0}
{'loss': 0.0, 'learning_rate': 6e-05, 'epoch': 0.0}
{'loss': 0.0, 'learning_rate': 7e-05, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 8e-05, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 9e-05, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 0.0001, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 9.99999986577034e-05, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 9.999999463081364e-05, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 9.999998791933097e-05, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 9.999997852325571e-05, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 9.99999664425884e-05, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 9.999995167732965e-05, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 9.99999342274803e-05, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 9.999991409304125e-05, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 9.99998912740136e-05, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 9.99998657703986e-05, 'epoch': 0.01}
{'loss': 0.0, 'learning_rate': 9.999983758219756e-05, 'epoch': 0.02}
{'loss': 0.0, 'learning_rate': 9.999980670941202e-05, 'epoch': 0.02}
{'loss': 0.0, 'learning_rate': 9.999977315204364e-05, 'epoch': 0.02}
{'loss': 0.0, 'learning_rate': 9.999973691009423e-05, 'epoch': 0.02}

Is this normal?

ngruver commented 7 months ago

That's odd. I just did a fresh install of the dependencies in the following setup

OS: Ubuntu 18.04.5 GPU: 2 A100 80GB Python: 3.8.13

and I'm seeing

{'loss': 6.5001, 'learning_rate': 1e-05, 'epoch': 0.0}
{'loss': 7.1352, 'learning_rate': 2e-05, 'epoch': 0.0}
{'loss': 7.1497, 'learning_rate': 3e-05, 'epoch': 0.0}
{'loss': 7.1056, 'learning_rate': 4e-05, 'epoch': 0.0}
{'loss': 7.1625, 'learning_rate': 5e-05, 'epoch': 0.0}
{'loss': 7.225, 'learning_rate': 6e-05, 'epoch': 0.0}
{'loss': 7.2047, 'learning_rate': 7e-05, 'epoch': 0.01}
{'loss': 7.1193, 'learning_rate': 8e-05, 'epoch': 0.01}
{'loss': 7.1611, 'learning_rate': 9e-05, 'epoch': 0.01}
{'loss': 7.1416, 'learning_rate': 0.0001, 'epoch': 0.01}
{'loss': 7.172, 'learning_rate': 9.99999986577034e-05, 'epoch': 0.01}
{'loss': 7.1688, 'learning_rate': 9.999999463081364e-05, 'epoch': 0.01}

Maybe you are running into an issue caused by your torch/CUDA versions. Are you seeing any errors or warnings?
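A quick sanity check on the torch/CUDA pairing in each environment (plain torch APIs, nothing specific to this repo) would be along these lines:

import torch
print(torch.__version__)              # e.g. 2.2.0+cu118
print(torch.version.cuda)             # CUDA version torch was built against
print(torch.cuda.is_available())      # should be True on a working install
print(torch.cuda.get_device_name(0))  # e.g. NVIDIA GeForce RTX 3090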

dqgdqg commented 7 months ago

I'm running with only a single GPU. Could that be the cause? I can't test with 2 GPUs.

BTW, my CUDA and torch versions are as follows:

CUDA: 11.8 torch: 2.2.0+cu118

A-zhudong commented 5 months ago

The key is "--fp8". Setting it to False solves this.
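Assuming "--fp8" toggles 8-bit (bitsandbytes) loading of the base model, which is an assumption since the script internals aren't quoted in this thread, the non-8-bit load would look roughly like the sketch below (the model path is a placeholder):

from transformers import LlamaForCausalLM

# Hypothetical sketch: skip 8-bit quantized loading and keep standard precision.
model = LlamaForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model path
    load_in_8bit=False,          # what "--fp8 False" is assumed to map to
    device_map="auto",           # requires accelerate
)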

A-zhudong commented 3 months ago

How did you solve it? fp32 requires too much memory. @dqgdqg

dqgdqg commented 3 months ago

How did you solve it? fp32 requires too much memory. @dqgdqg

I just re-installed all requirements and the issue was gone.
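On the memory point above: if full fp32 doesn't fit, bf16 mixed precision plus gradient checkpointing is a common middle ground on Ampere GPUs (3090/A100). Assuming the script fine-tunes through the Hugging Face Trainer, the relevant settings would look roughly like this sketch (values are illustrative):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    bf16=True,                      # half the memory of fp32, same dynamic range
    gradient_checkpointing=True,    # recompute activations to save memory
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16, # keep the effective batch size up
)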