Open ChaiBapchya opened 3 years ago
@ChaiBapchya could you share the launcher and the script you used to execute distributed training using horovod? The mentioned launcher/scripts use SageMaker Data Parallelism; having the Horovod version would make it easier to reproduce the error.
There is only a very minor difference between SMDDP and Horovod [as you know, the SMDDP APIs are made similar to Hvd's for ease of use]. Here are the 2 scripts you'd need to run the training: https://gist.github.com/ChaiBapchya/03c88c70bd8e003585e7edde436b403d

Run `python launcher.py` on a machine that is able to create SageMaker training jobs.
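For context, on SageMaker the Horovod and SMDDP variants of such a launcher differ mainly in the `distribution` argument passed to the estimator. A minimal sketch of the two shapes (values such as `processes_per_host` are illustrative assumptions, not the gist's actual settings):

```python
# Sketch of SageMaker estimator `distribution` settings; values are assumptions.

# Horovod runs over MPI on SageMaker:
horovod_distribution = {
    "mpi": {
        "enabled": True,
        "processes_per_host": 8,  # e.g. one process per GPU on a p3.16xlarge (assumed)
    }
}

# SMDDP (SageMaker Distributed Data Parallelism) uses its own key:
smddp_distribution = {
    "smdistributed": {
        "dataparallel": {
            "enabled": True,
        }
    }
}
```

Either dict would be passed as `distribution=...` when constructing the estimator in `launcher.py`.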
hey @ChaiBapchya,
are you sure these scripts are right? You imported `import horovod.tensorflow as hvd`, but `hvd` is never used. Additionally, I'm not sure this will work, since the script is using keras and tf.
> not sure if this will work since the script is using keras and tf
Well, the keras APIs are essentially only used for the loss/optimizer, while the hvd/smddp TF APIs are used for instrumenting distributed training. That's how the original script is used. I was able to get bert_base_uncased and distilbert_base_uncased working for both horovod & smddp with this same script.
Hey @ChaiBapchya,
I have created another script https://github.com/philschmid/huggingface_sagemaker_tensorflow_distributed with a version for single-gpu and multi-gpu. You can see the results of the tests below.

| model | type | batch_size | worked |
|---|---|---|---|
| bert-base-uncased | horovod | 16 | 🛑 |
| bert-base-uncased | horovod | 8 | ✅ |
| bert-large-uncased-whole-word-masking | horovod | 8 | 🛑 |
| bert-large-uncased-whole-word-masking | horovod | 6 | ✅ |
| bert-base-uncased | single | 16 | ✅ |
| bert-base-uncased | single | 8 | ✅ |
| bert-large-uncased-whole-word-masking | single | 16 | 🛑 |
| bert-large-uncased-whole-word-masking | single | 8 | ✅ |
For me, this doesn't seem to be a `transformers` issue; it looks more related to `tensorflow` and `horovod`. Can you ask your internal team for more insights about why `horovod` is taking so much extra space?
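For intuition, here is a rough back-of-envelope sketch of the extra per-GPU allocations a Horovod/NCCL run holds on top of single-GPU training. The 64 MiB figure is Horovod's default tensor-fusion threshold; the double-buffering factor and the NCCL workspace size are assumptions for illustration, not measurements:

```python
# Back-of-envelope: extra per-GPU memory during Horovod allreduce.
# Figures are illustrative assumptions, not measurements.
MIB = 2**20

FUSION_BUFFER = 64 * MIB    # Horovod's default tensor-fusion threshold
NCCL_WORKSPACE = 100 * MIB  # NCCL ring/channel scratch space (assumed)

def horovod_overhead_mib(staging_copies=2):
    """Assume the fusion buffer is double-buffered while gradients stream through."""
    return (FUSION_BUFFER * staging_copies + NCCL_WORKSPACE) / MIB

print(f"~{horovod_overhead_mib():.0f} MiB extra per GPU")
```

A few hundred MiB is modest in absolute terms, but it matches the pattern in the table: configurations that already run near the GPU memory limit at a given batch size only tip into OOM once Horovod is enabled.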
Experiment: test with vanilla TF 2.4.1
Result: OOM
Explanation: replacing the tensorflow in the DLC [aws-tf DLC] with the vanilla/stock tensorflow-gpu package [same version, 2.4.1] from PyPI results in similar OOM errors.
Summary: this OOM is not caused by the aws-tf binary alone. The issue is probably intrinsic to vanilla TF 2.4.1.
As far as your scripts [https://github.com/philschmid/huggingface_sagemaker_tensorflow_distributed] are concerned - they don't use smddp/horovod APIs, right? What else is the diff between your single-node & multi-node scripts? What's the purpose of removing those APIs and testing on single-node?
It uses `horovod.keras` (https://github.com/philschmid/huggingface_sagemaker_tensorflow_distributed/blob/752e9d545dfb0dbe2920f03c8d75ce6b6571894c/scripts/train.py#L21) instead of horovod.tensorflow or SMDP, since I don't yet have access to Keras SMDP.
I tested multi-node against single-node to verify that it is working on a single GPU and to have comparison values for performance and efficiency.
Configuration
Parameters, Hyperparameters
Versions
TensorFlow - 2.4.1
Transformers - 4.5.0
DLC - 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-tensorflow-training:2.4.1-transformers4.5.0-gpu-py37-cu110-ubuntu18.04
Experiments
Summary
Independent of distributed training strategy, instance type, and instance count, TF 2.4.1 + HF bert-large suffers from OOM.
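As a sanity check on that summary, a rough estimate of bert-large's fixed training footprint under fp32 Adam (the ~336M parameter count is approximate, and activations, which grow with batch size and sequence length, are excluded):

```python
MIB = 2**20

def fixed_training_mib(n_params, bytes_per_value=4, adam_states=2):
    """fp32 weights + gradients + Adam's two moment buffers (m and v)."""
    per_param = bytes_per_value * (1 + 1 + adam_states)
    return n_params * per_param / MIB

# bert-large-uncased-whole-word-masking has roughly 336M parameters (approx.)
print(f"~{fixed_training_mib(336e6) / 1024:.1f} GiB before any activations")
```

On a 16 GiB V100 that leaves roughly 11 GiB for activations, which is consistent with bert-large fitting at small batch sizes but hitting OOM as the batch grows.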
Entire Stack trace