Hello @nmallinar, thanks for the question. While we haven't tried multi-GPU training ourselves, following the docs the second approach is the correct one, passing the strategy through the arguments of RunConfig, since the scope approach is used in Keras models, not Estimator-based ones. I do wonder whether this is supported in the TPUEstimator/TPURunConfig we are using, but if not, it should be easy to change.
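For reference, a minimal sketch of that second approach in plain-Estimator terms (illustrative only; model_fn stands in for your existing model function):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
run_config = tf.estimator.RunConfig(train_distribute=strategy)
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)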
I found this guide for multi-GPU training on TF1 that might be useful; make sure to check the Estimator section. Apparently there is a JSON environment variable that has to be set up properly.
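If memory serves, the JSON variable in question is TF_CONFIG, which only matters for multi-worker setups (single-machine MirroredStrategy should not need it). An illustrative layout with placeholder hosts:

import json, os

os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:12345", "host2:23456"]},
    "task": {"type": "worker", "index": 0},  # this process's role in the cluster
})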
The error you mention seems strange, since the third argument for self._call_input_fn is declared here. Can you run pip show to get the versions of tensorflow and tensorflow-estimator in your runtime? They should both be 1.14.
@eisenjulian Yes, I was thinking about adding switches from TPUEstimator -> Estimator in the absence of use_tpu; I have seen similar designs in multi-GPU BERT training code in other repos. However, the error seems to indicate that the problem is not in passing the distribution strategy object but rather in how input_fn is constructed/called, so such switches may not be needed (or, even with the switches, input_fn may still throw this error with an Estimator; I'll update when I get around to checking this).
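The switch I had in mind looks roughly like this (a sketch only; flag and config names are illustrative):

import tensorflow as tf

if FLAGS.use_tpu:
  estimator = tf.estimator.tpu.TPUEstimator(
      use_tpu=True,
      model_fn=model_fn,
      config=tpu_run_config,                  # a tf.estimator.tpu.RunConfig
      train_batch_size=FLAGS.train_batch_size)
else:
  estimator = tf.estimator.Estimator(
      model_fn=model_fn,
      config=run_config,                      # a tf.estimator.RunConfig
      params={"batch_size": FLAGS.train_batch_size})  # plain Estimator does not inject batch_size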
Results of pip show:
Name: tensorflow-gpu
Version: 1.14.0
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: packages@tensorflow.org
License: Apache 2.0
Location: /mydata/repos/tapas2/venv/lib/python3.7/site-packages
Requires: protobuf, tensorboard, keras-applications, numpy, grpcio, wheel, tensorflow-estimator, wrapt, google-pasta, termcolor, astor, keras-preprocessing, six, absl-py, gast
Required-by: tapas
Name: tensorflow-estimator
Version: 1.14.0
Summary: TensorFlow Estimator.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: UNKNOWN
License: Apache 2.0
Location: /mydata/repos/tapas2/venv/lib/python3.7/site-packages
Requires:
Required-by: tensorflow-gpu
I will look further into this guide you posted as well, thanks for the reference.
Switching from TPUEstimator -> Estimator solves the input_fn issue; it seems the two classes have different self._call_input_fn signatures. So I just switched all estimators / dependent objects over to the non-TPU objects and their relevant params.
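For anyone curious about the root cause, the signature mismatch looks roughly like this in the TF 1.14 sources (paraphrased from memory, worth verifying against your installed version):

# tensorflow_estimator/python/estimator/estimator.py (Estimator)
def _call_input_fn(self, input_fn, mode, input_context=None): ...

# tensorflow_estimator/python/estimator/tpu/tpu_estimator.py (TPUEstimator override)
def _call_input_fn(self, input_fn, mode): ...

# With train_distribute set, Estimator passes an InputContext as a third
# positional argument, which the TPUEstimator override cannot accept --
# hence "takes 3 positional arguments but 4 were given".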
This still left one issue: the BERT optimizer in its current form is not multi-GPU friendly, so I adapted this implementation for multi-GPU in models/bert/optimization.py: https://github.com/HaoyuHu/bert-multi-gpu/blob/master/custom_optimization.py, and now I am able to train properly.
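For illustration, here is a condensed sketch of the kind of change involved. This is not the linked file verbatim; it substitutes tf.contrib.opt.AdamWOptimizer for BERT's hand-rolled update loop (BERT's apply_gradients calls tf.assign directly, which does not play well with MirroredStrategy, whereas the contrib optimizer goes through the standard slot machinery). Learning-rate warmup/decay is omitted for brevity:

import tensorflow as tf

def create_optimizer(loss, learning_rate, weight_decay_rate=0.01):
  """Distribution-friendly stand-in for BERT's create_optimizer (sketch)."""
  global_step = tf.train.get_or_create_global_step()
  optimizer = tf.contrib.opt.AdamWOptimizer(
      weight_decay=weight_decay_rate,
      learning_rate=learning_rate,
      beta1=0.9, beta2=0.999, epsilon=1e-6)
  grads_and_vars = optimizer.compute_gradients(loss)
  grads, tvars = zip(*grads_and_vars)
  (grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0)
  # Note: unlike BERT's optimizer, this applies weight decay to all variables;
  # pass decay_var_list to apply_gradients to exclude LayerNorm/bias params.
  return optimizer.apply_gradients(
      zip(grads, tvars), global_step=global_step)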
In case anybody else is following this solution path: train_batch_size should now be specified as the per-GPU batch size, and internally, when computing num_train_steps, you should multiply it by the number of GPUs so the step count reflects your effective batch size (see the sketch below). I was unable to get the gradient accumulation wrapper in its current form to work in the multi-GPU setting, but I can update with a solution if I end up trying to make it work. Anyway, I will close this issue for now.
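To make the batch-size bookkeeping concrete (numbers and variable names are illustrative):

num_gpus = 8
per_gpu_batch_size = 8                                  # what train_batch_size now means
effective_batch_size = per_gpu_batch_size * num_gpus    # 64 examples per step
num_train_steps = int(
    num_train_examples * num_train_epochs / effective_batch_size)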
Thanks!
Also running into the same problem on multi-GPU; single GPU works fine but is much, much slower than TPU. I have changed all TPUEstimator objects to Estimator objects, tried to adapt the BERT optimiser per the link shared, and am using MirroredStrategy, with and without specifying the devices. But either the process doesn't run on the GPUs, or, if it shows as running, volatile utilization shows 0% on both, so I believe it doesn't work. I would appreciate it if you could share more about your workaround.
Hi @nmallinar, can you share your work on where we need to change from TPUEstimator to Estimator? I have the same issue as you. Thanks.
Hello @dhuy237, my implementation is based on a slightly outdated version of the Tapas codebase, and unfortunately at the time I was a little too busy to re-submit the code. I plan to resolve those diffs and host a fork or submit a PR accordingly. If there is anything specific you are having trouble with on your end, I may be able to help you debug, as I ran into many errors along the way and might be able to help you avoid some of the same mistakes.
Hi all,
I got it to work with some changes; it took me a week or so. I can consolidate the changes and share them by tonight or tomorrow, and you can try them.
Thanks, Sarah
@nmallinar I still get this error when trying to train the model with a single GPU:

TypeError: _call_input_fn() takes 3 positional arguments but 4 were given

And I don't know where to start to fix this bug. @sarahpanda I hope your solution can help me.
@dhuy237 so these are the non-TPU versions of the objects in run_task_main.py that I use:

run_config = tf.estimator.RunConfig(...)
estimator = tf.estimator.Estimator(
    ...,
    model_fn=model_fn,
    config=run_config)
and in tapas_classifier_model.py:
output_spec = tf.estimator.EstimatorSpec(...)
I could not get it to work with the tf.*.tpu classes; I think I ran into this error when I still had the TPU version of one of these in place.
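A related gotcha to watch for when switching (based on the TF 1.x API): TPUEstimatorSpec takes scaffold_fn, a zero-argument callable that returns a tf.train.Scaffold, while EstimatorSpec takes scaffold, the Scaffold instance itself. A sketch of the conversion, assuming a BERT-style scaffold_fn:

scaffold = scaffold_fn() if scaffold_fn is not None else None
output_spec = tf.estimator.EstimatorSpec(
    mode=mode,
    loss=loss,
    train_op=train_op,
    scaffold=scaffold)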
@nmallinar I didn't notice that this repo is tapas. I am trying to run this repo: https://github.com/zihangdai/xlnet. I will check your solution with my project. Thanks for your help.
@sarahpand It would be great if you could share your solution here!
Sure, let me do that.
Sarah
After changing my code as @nmallinar suggested:

output_spec = tf.estimator.EstimatorSpec(
    mode=mode,
    loss=loss,
    train_op=train_op,
    scaffold=scaffold_fn)

run_config = tf.estimator.RunConfig(FLAGS)
estimator = tf.estimator.Estimator(
    model_fn=model_fn,
    config=run_config,
    params={'batch_size': 8})
I don't get this error anymore:

TypeError: _call_input_fn() takes 3 positional arguments but 4 were given

But now I get this error:
Traceback (most recent call last):
  File "run_coqa.py", line 1775, in <module>
    tf.app.run()
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "run_coqa.py", line 1714, in main
    estimator.train(input_fn=train_input_fn, max_steps=2000)  # max_steps=FLAGS.train_steps)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 367, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1158, in _train_model
    return self._train_model_default(input_fn, hooks, saving_listeners)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1192, in _train_model_default
    saving_listeners)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1420, in _train_with_estimator_spec
    scaffold=estimator_spec.scaffold)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/site-packages/tensorflow/python/training/basic_session_run_hooks.py", line 546, in __init__
    self._save_path = os.path.join(checkpoint_dir, checkpoint_basename)
  File "/home/huytran/miniconda3/envs/TF/lib/python3.7/posixpath.py", line 80, in join
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not FlagValues
Do you guys know why I got this?
Hello @dhuy237, I am a bit confused by your stack trace since I don't recognize the file paths, especially the run_coqa.py one. Can you confirm you are running the correct binary?
@eisenjulian I am trying to run this repo: https://github.com/stevezheng23/xlnet_extension_tf. So it is different from this repo, but I got the same error as @nmallinar had before. After I changed my code as @nmallinar suggested, I got the TypeError above. I posted the error here just hoping someone can help me solve it. If it is not appropriate for this post, I can delete my comment. Thanks.
I recommend you ask in their repo. It does seem that changing TPUEstimator -> Estimator and TPUEstimatorSpec -> EstimatorSpec fixed the signature issue, so consider double-checking that you didn't miss any instance of TPUEstimator.
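One more guess based on the traceback: tf.estimator.RunConfig's first positional parameter is model_dir, so RunConfig(FLAGS) stores the entire FlagValues object as the checkpoint directory, which is exactly what the os.path.join frame is complaining about. Something along these lines (flag names are guesses) is probably what was intended:

run_config = tf.estimator.RunConfig(
    model_dir=FLAGS.output_dir,   # must be a string path, not the FLAGS object
    save_checkpoints_steps=1000)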
Hello,
I am trying to run this codebase on a single machine with eight GPUs. I have installed everything through requirements.txt and prepped the data. When I run, I can only use train_batch_size=8, and I notice that only one of the eight GPUs is utilized (the other seven show ~300MB of data on device while the first GPU shows ~15GB). Additionally, while I can see this GPU usage from the run script, I get an output message in the train log of:
I0514 19:25:27.752172 139628342118208 tpu_estimator.py:2965] Running train on CPU

though I have been ignoring this for now. So I am trying to get the other seven GPUs in the loop so that I can train with train_batch_size=64. I initially tried wrapping the optimization code in a MirroredStrategy scope, and I noticed that the model is properly replicated across the eight GPUs; however, I cannot expand my train_batch_size to any multiple larger than 8. I then tried wrapping the dataset object, at the end of input_fn before returning, in strategy.experimental_distribute_dataset(ds), to see if it was a matter of batches not being sent to each device. However, I ran into deeper errors that I am unfamiliar with when pursuing this route (if this is the preferable way to enable multi-GPU, I can update this issue with the stack traces I got after making those changes). Before debugging further in this direction, I stepped back to the outer run_task_main.py after reading that you can instead pass MirroredStrategy or CentralStorageStrategy objects directly into the RunConfig that goes into an Estimator. So I undid the changes I had manually made at the lower levels (i.e. reset the repo back to master) and added the strategy to the RunConfig.
However, I now run into the error TypeError: _call_input_fn() takes 3 positional arguments but 4 were given, which I suspect may have to do with the functools.partial wrap around input_fn, but I am having trouble understanding it or determining next steps (I am generally unfamiliar with TensorFlow as a library).
If anybody can help me with this it would be greatly appreciated. Thanks so much for the work and time!