rumeshmadhusanka closed this issue 2 years ago
It's not an issue of the GPU not supporting fp16. It's an issue of many models being trained in bf16 and then used with fp16, overflowing due to the incompatible numerical range: bf16-pretrained models contain much larger weight values than fp16 can represent, so they overflow.
Please see: https://github.com/huggingface/transformers/pull/10956 for various workarounds.
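For illustration (not from the original thread), a minimal PyTorch sketch of the range mismatch:

```python
import torch

# bf16 keeps float32's 8-bit exponent, so it can hold very large magnitudes;
# fp16 has only a 5-bit exponent and overflows to inf above roughly 65504.
big_value = torch.tensor(1e6)  # the kind of magnitude a bf16-pretrained model can contain
print(torch.isfinite(big_value.to(torch.bfloat16)))  # tensor(True)  -- representable in bf16
print(torch.isfinite(big_value.to(torch.float16)))   # tensor(False) -- overflows to inf in fp16
```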
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Environment info
- transformers version: 4.15.0.dev0

Models: both have the same behavior
Information
The problem arises when using:
The task I am working on is:
To reproduce
Steps to reproduce the behavior:
model="google/mt5-base" !python mt5-simplification/finetune.py \ --model_name_or_path $model \ --do_train \ --fp16 \ --do_eval \ --adafactor \ --source_lang com \ --target_lang sim \ --source_prefix "com-sim: " \ --train_file train.json \ --validation_file valid.json \ --output_dir mt5-simplification \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --save_total_limit=1 \ --adam_epsilon=1e-6 \ --learning_rate=3e-5 \ --save_strategy=epoch \ --report_to="wandb" \ --max_steps=1200 \ --warmup_steps=250 \ --overwrite_output_dir \ --log_level debug \ --output_dir saved \ --predict_with_generate
Some of the output logs:
When I run a translation task on Kaggle's GPU (Tesla P100-PCIE) or AWS's T4 GPU, the training loss is always zero. This has been tried multiple times with different training parameters.
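One way to confirm the diagnosis outside the Trainer is a single fp16 forward pass. The snippet below is a hypothetical sketch (the model name comes from the command above, the input text is invented) that checks whether the loss and logits stay finite:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/mt5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, torch_dtype=torch.float16).to("cuda")

# Any short input works; the prefix mirrors the --source_prefix used in the command above.
inputs = tokenizer("com-sim: a short test sentence", return_tensors="pt").to("cuda")
labels = tokenizer("a short test sentence", return_tensors="pt").input_ids.to("cuda")

with torch.no_grad():
    out = model(**inputs, labels=labels)

# If the bf16-scale weights overflow in fp16, the loss/logits come back as inf or nan.
print("loss finite:", torch.isfinite(out.loss).item())
print("logits finite:", torch.isfinite(out.logits).all().item())
```

A False here points to the fp16 overflow described above rather than to a GPU limitation.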
Expected behavior
The loss should not be zero during training.
An error message should be thrown if the GPU does not support fp16 (see the sketch below).
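For what it's worth, a guard of the kind this request describes could look like the sketch below. This is not what transformers does; `check_precision_support` is an invented helper shown only to illustrate the idea:

```python
import torch

def check_precision_support(want_fp16: bool, want_bf16: bool) -> None:
    """Fail fast when the requested mixed precision is a poor fit for the current GPU."""
    if not torch.cuda.is_available():
        raise RuntimeError("Mixed-precision training requested but no CUDA device is available.")
    major, _ = torch.cuda.get_device_capability()
    if want_fp16 and major < 6:
        # GPUs older than compute capability 6.x have little practical fp16 support.
        raise RuntimeError("--fp16 requested but this GPU has no practical fp16 support.")
    if want_bf16 and not torch.cuda.is_bf16_supported():
        # e.g. P100 and T4 handle fp16 but not bf16.
        raise RuntimeError("--bf16 requested but this GPU does not support bfloat16.")

check_precision_support(want_fp16=True, want_bf16=False)
```

Note that both the P100 (compute capability 6.0) and the T4 (7.5) would pass such a check, which matches the comment above: the zero loss comes from the model's bf16-scale weights, not from missing fp16 support on the GPU.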