huggingface / notebooks

Notebooks using the Hugging Face libraries 🤗

BERT large OOM with TF 2.4.1 + transformers 4.5.0 #38

Open ChaiBapchya opened 3 years ago

ChaiBapchya commented 3 years ago

Configuration

Parameters, Hyperparameters

| Key | Value 1 | Value 2 |
| --- | --- | --- |
| Instance count | 1 | 2 |
| Instance types | p3.2xlarge | p3dn.24xlarge |
| Models | bert-base-uncased | bert-large-uncased-whole-word-masking |
| batch_size | 2 | 8 |
| distributions | horovod | smddp |

Versions

- TensorFlow: 2.4.1
- Transformers: 4.5.0
- DLC: 763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-tensorflow-training:2.4.1-transformers4.5.0-gpu-py37-cu110-ubuntu18.04

Experiments

| Nodes | Instance Type | bert-base | bert-large |
| --- | --- | --- | --- |
| 1 | p3.2xlarge | success | OOM |
| 2 | p3.2xlarge | success | OOM |
| 1 | p3dn.24xlarge | success | OOM |
| 2 | p3dn.24xlarge | success | OOM |

Summary

Independent of the distributed training strategy, instance type, and instance count, bert-large on TF 2.4.1 + HF Transformers 4.5.0 suffers from OOM.

Entire Stack trace


[1,13]<stderr>:2021-05-05 22:35:24.070037: W tensorflow/core/common_runtime/bfc_allocator.cc:433] Allocator (GPU_0_bfc) ran out of memory trying to allocate 32.00MiB (rounded to 33554432) requested by op tf_bert_for_sequence_classification/bert/encoder/layer_._15/attention/self/transpose_3
[1,13]<stderr>:Current allocation summary follows.
[1,13]<stderr>:2021-05-05 22:35:24.071150: W tensorflow/core/common_runtime/bfc_allocator.cc:441] ****************************************************************************************************
[1,13]<stderr>:2021-05-05 22:35:24.071187: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at transpose_op.cc:184 : Resource exhausted: OOM when allocating tensor with shape[16,512,16,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[1,13]<stderr>:Traceback (most recent call last):
[1,13]<stderr>:  File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
[1,13]<stderr>:    "__main__", mod_spec)
[1,13]<stderr>:  File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
[1,13]<stderr>:    exec(code, run_globals)
[1,13]<stderr>:  File "/usr/local/lib/python3.7/site-packages/mpi4py/__main__.py", line 7, in <module>
[1,13]<stderr>:    main()
[1,13]<stderr>:  File "/usr/local/lib/python3.7/site-packages/mpi4py/run.py", line 196, in main
[1,13]<stderr>:    run_command_line(args)
[1,13]<stderr>:  File "/usr/local/lib/python3.7/site-packages/mpi4py/run.py", line 47, in run_command_line
[1,13]<stderr>:    run_path(sys.argv[0], run_name='__main__')
[1,13]<stderr>:  File "/usr/local/lib/python3.7/runpy.py", line 263, in run_path
[1,13]<stderr>:    pkg_name=pkg_name, script_name=fname)
[1,13]<stderr>:  File "/usr/local/lib/python3.7/runpy.py", line 96, in _run_module_code
[1,13]<stderr>:    mod_name, mod_spec, pkg_name, script_name)
[1,13]<stderr>:  File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
[1,13]<stderr>:    exec(code, run_globals)
[1,13]<stderr>:  File "train_bert.py", line 242, in <module>
[1,13]<stderr>:    main()
[1,13]<stderr>:  File "train_bert.py", line 205, in main
[1,13]<stderr>:    verbose=1 if hvd.rank() == 0 else 0,
[1,13]<stderr>:  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1129, in fit
[1,13]<stderr>:    tmp_logs = self.train_function(iterator)
[1,13]<stderr>:  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
[1,13]<stderr>:    result = self._call(*args, **kwds)
[1,13]<stderr>:  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 888, in _call
[1,13]<stderr>:    return self._stateless_fn(*args, **kwds)
[1,13]<stderr>:  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2943, in __call__
[1,13]<stderr>:    filtered_flat_args, captured_inputs=graph_function.captured_inputs)  # pylint: disable=protected-access
[1,13]<stderr>:  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 1919, in _call_flat
[1,13]<stderr>:    ctx, args, cancellation_manager=cancellation_manager))
[1,13]<stderr>:  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 560, in call
[1,13]<stderr>:    ctx=ctx)
[1,13]<stderr>:  File "/usr/local/lib/python3.7/site-packages/tensorflow/python/eager/execute.py", line 60, in quick_execute
[1,13]<stderr>:    inputs, attrs, num_outputs)
[1,13]<stderr>:tensorflow.python.framework.errors_impl.ResourceExhaustedError:  OOM when allocating tensor with shape[16,512,16,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[1,13]<stderr>:#011 [[node tf_bert_for_sequence_classification/bert/encoder/layer_._15/attention/self/transpose_3 (defined at /usr/local/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:279) ]]
[1,13]<stderr>:Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[1,13]<stderr>: [Op:__inference_train_function_54347]
[1,13]<stderr>:
[1,13]<stderr>:Errors may have originated from an input operation.
[1,13]<stderr>:Input Source operations connected to node tf_bert_for_sequence_classification/bert/encoder/layer_._15/attention/self/transpose_3:
[1,13]<stderr>: tf_bert_for_sequence_classification/bert/encoder/layer_._15/attention/self/MatMul_1 (defined at /usr/local/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:278)
[1,13]<stderr>:
[1,13]<stderr>:Function call stack:
[1,13]<stderr>:train_function
[1,13]<stderr>:
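As a sanity check on the failing allocation above: the shape [16,512,16,64] matches bert-large's attention geometry (16 heads × 64 head dim at sequence length 512, with an apparent per-device batch of 16), and its float32 size works out to exactly the 32 MiB the allocator reports:

```python
# Size of the tensor the allocator failed on: shape [16, 512, 16, 64], float32.
# 16 (batch) * 512 (seq len) * 16 (attention heads) * 64 (head dim) * 4 bytes.
nbytes = 16 * 512 * 16 * 64 * 4
print(nbytes, nbytes / 2**20)  # 33554432 bytes -> 32.0 MiB, as in the log
```

So the failing request itself is small; the OOM means the GPU was already essentially full when this last 32 MiB activation was requested.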
philschmid commented 3 years ago

@ChaiBapchya could you share the launcher and the script you used to run distributed training with Horovod? The script/launcher mentioned so far uses SageMaker Data Parallelism; having the Horovod version would make it easier to reproduce the error.

ChaiBapchya commented 3 years ago

There's only a very minor difference between SMDDP and Horovod [as you know, the SMDDP APIs are made similar to Horovod's for ease of use]. Here are the two scripts you'd need to run the training: https://gist.github.com/ChaiBapchya/03c88c70bd8e003585e7edde436b403d. Run

python launcher.py

on a machine that is able to create SageMaker training jobs.
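For readers without access to the gist, the launcher boils down to a SageMaker estimator along these lines (a minimal sketch; the entry point, role, and hyperparameters here are illustrative assumptions, not the gist's actual values):

```python
from sagemaker.huggingface import HuggingFace

# Hypothetical launcher sketch; entry point, role, and hyperparameters are
# illustrative and not copied from the gist.
estimator = HuggingFace(
    entry_point="train_bert.py",
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=2,
    instance_type="p3dn.24xlarge",
    transformers_version="4.5.0",
    tensorflow_version="2.4.1",
    py_version="py37",
    hyperparameters={
        "model_name": "bert-large-uncased-whole-word-masking",
        "batch_size": 8,
    },
    # SageMaker Distributed Data Parallel; for Horovod, swap this for an
    # MPI distribution config ({"mpi": {"enabled": True, ...}}).
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)
estimator.fit()
```

Switching between smddp and Horovod is then mostly a matter of swapping the distribution config and the corresponding imports in the training script.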

philschmid commented 3 years ago

hey @ChaiBapchya,

are you sure these scripts are right? You imported

    import horovod.tensorflow as hvd

but hvd is never used.

Additionally, I'm not sure this will work, since the script mixes Keras and TF APIs.

ChaiBapchya commented 3 years ago

> not sure if this will work since the script is using keras and tf

Well, the Keras APIs are essentially only used for the loss/optimizer, while the hvd/smddp TF APIs are used to instrument the distributed training. That's how the original script is structured.

I was able to get bert_base_uncased and distilbert_base_uncased working for both horovod & smddp with this same script.
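For reference, the split described above typically looks like the sketch below: Keras supplies the loss/optimizer, while Horovod pins GPUs, wraps the optimizer, and broadcasts the initial weights (a toy model and random data stand in for the gist's BERT code):

```python
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each worker process to a single GPU.
gpus = tf.config.experimental.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Toy stand-in for the BERT model and dataset used in the gist.
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
x = np.random.rand(64, 8).astype("float32")
y = np.random.randint(0, 2, size=64)

# Keras loss/optimizer; Horovod wraps the optimizer for gradient all-reduce.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(2e-5 * hvd.size()))
model.compile(
    optimizer=opt,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

model.fit(
    x, y, batch_size=8,
    # Sync initial weights from rank 0 so all workers start identically.
    callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
    verbose=1 if hvd.rank() == 0 else 0,
)
```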

philschmid commented 3 years ago

Hey @ChaiBapchya,

I have created another script, https://github.com/philschmid/huggingface_sagemaker_tensorflow_distributed, with a version for single-GPU and multi-GPU. You can see the results of the tests below.

| model | type | batch_size | worked |
| --- | --- | --- | --- |
| bert-base-uncased | horovod | 16 | 🛑 |
| bert-base-uncased | horovod | 8 | ✅ |
| bert-large-uncased-whole-word-masking | horovod | 8 | 🛑 |
| bert-large-uncased-whole-word-masking | horovod | 6 | ✅ |
| bert-base-uncased | single | 16 | ✅ |
| bert-base-uncased | single | 8 | ✅ |
| bert-large-uncased-whole-word-masking | single | 16 | 🛑 |
| bert-large-uncased-whole-word-masking | single | 8 | ✅ |

For me, this doesn't seem to be a transformers issue; it looks more related to TensorFlow and Horovod. Can you ask your internal team for more insights into why Horovod takes so much extra memory?
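One Horovod-specific source of per-GPU overhead is its Tensor Fusion staging buffer (HOROVOD_FUSION_THRESHOLD, 64 MB by default per the Horovod docs). A hedged sketch of a cheap experiment, not taken from the thread's scripts: shrink the buffer and log per-rank usage with TF 2.4's experimental memory counter:

```python
import os

# Horovod fuses small allreduces into a staging buffer; its size
# (HOROVOD_FUSION_THRESHOLD, 64 MB by default) adds to each GPU's footprint.
# Shrinking it trades some throughput for lower memory overhead.
os.environ["HOROVOD_FUSION_THRESHOLD"] = str(16 * 1024 * 1024)  # 16 MB

import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()
# ... build and train the model as before, then compare per-rank footprints:
print(hvd.rank(), tf.config.experimental.get_memory_usage("GPU:0"))
```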

ChaiBapchya commented 3 years ago

Experiment: test with vanilla TF 2.4.1
Result: OOM
Explanation: replacing the TensorFlow build in the DLC [aws-tf DLC] with the vanilla/stock tensorflow-gpu package [same version, 2.4.1] from PyPI results in similar OOM errors.

Summary: this OOM is not caused by the aws-tf binary alone; the issue is probably intrinsic to vanilla TF 2.4.1.

ChaiBapchya commented 3 years ago

As far as your scripts [https://github.com/philschmid/huggingface_sagemaker_tensorflow_distributed] are concerned: they don't use the SMDP/Horovod APIs, right? What else differs between your single-node and multi-node scripts? What's the purpose of removing those APIs and testing on a single node?

philschmid commented 3 years ago

It uses horovod.keras (https://github.com/philschmid/huggingface_sagemaker_tensorflow_distributed/blob/752e9d545dfb0dbe2920f03c8d75ce6b6571894c/scripts/train.py#L21) instead of horovod.tensorflow or SMDP, since I don't yet have access to Keras SMDP.

I tested multi-node against single-node/single-GPU to verify that it works and to have comparison values for performance and efficiency.
