microsoft / DeepSpeedExamples

Example models using DeepSpeed

Bing BERT #8

Open tomekrut opened 4 years ago

tomekrut commented 4 years ago

Hi guys, I have been trying to run the Bing BERT experiment, but so far I haven't been able to.

"datasets": {
--
  | "wiki_pretrain_dataset": "/data/bert/bnorick_format/128/wiki_pretrain",
  | "bc_pretrain_dataset": "/data/bert/bnorick_format/128/bookcorpus_pretrain"
  | },

It looks like these datasets are missing, so I can't fully validate the code.

tjruwase commented 4 years ago

Thanks for trying out DeepSpeed. Unfortunately, these datasets are not yet publicly available. We are working on resolving this. Apologies for the inconvenience.

oliverhu commented 4 years ago

any update on the dataset?

sriramsrao commented 4 years ago

Can we run the DeepSpeed BERT trainer on the NVIDIA-generated HDF5 data?

jeffra commented 4 years ago

We'll be open sourcing the pre-processing scripts we used to get the data in this format very soon. However, if you're both at LinkedIn we can probably figure out a way for you to just download our datasets directly. Send me an email internally.

In theory you should be able to run with the NVIDIA HDF5 format, but it would take some code changes to support it, which we have not done.
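For anyone who wants to attempt that, here is a minimal sketch of reading one NVIDIA-style HDF5 pretraining shard. The field names below are the ones NVIDIA's BERT data scripts typically emit, and the shard file name is hypothetical, so verify both against your own files:

```python
import h5py  # assumes h5py is installed

# Hypothetical shard name; NVIDIA's scripts produce many such files.
with h5py.File("training_shard_0.hdf5", "r") as f:
    # Field names as typically written by NVIDIA's BERT data pipeline;
    # confirm with list(f.keys()) on your own shards.
    input_ids = f["input_ids"][:]                        # (num_samples, max_seq_len)
    input_mask = f["input_mask"][:]
    segment_ids = f["segment_ids"][:]
    masked_lm_positions = f["masked_lm_positions"][:]    # (num_samples, max_predictions)
    masked_lm_ids = f["masked_lm_ids"][:]
    next_sentence_labels = f["next_sentence_labels"][:]  # (num_samples,)
    print(input_ids.shape)
```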

oliverhu commented 4 years ago

@jeffra that's awesome, thanks :) Sending the email now.

Rachnas commented 4 years ago

I am also looking for these datasets for pre-training Bert model. Any update about data availability?

jeffra commented 4 years ago

Hi @Rachnas and @tomekrut, we have uploaded our pre-processing script for the raw bookcorpus and wikipedia datasets to get them into our numpy compatible format. We haven't written up a tutorial yet on how to use them but feel free to check out the script here: https://github.com/microsoft/DeepSpeedExamples/blob/jeffra/bert_preprocessing/bing_bert/turing/bert_pretrain_data.py

Rachnas commented 4 years ago

@jeffra Thank you!

piyushghai commented 4 years ago

@jeffra I was trying to run Bing BERT and hit the same issue, where the dataset is missing.

  1. Do you have an idea if I can leverage a dataset created from https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling/BERT/data to run Bing BERT?

  2. Or, do you have an idea of when a tutorial will be available for the pre-processing script for wiki and book corpus?

tjruwase commented 4 years ago

@piyushghai We are pleased to announce that support for training Bing BERT with the Nvidia dataset has been added in #27. Please give it a try.

tjruwase commented 4 years ago

@sriramsrao, @oliverhu, @tomekrut We have added support for training with the Nvidia dataset. Thanks for your patience. We would really appreciate feedback on your experience trying it out. Thanks!

oliverhu commented 4 years ago

thank you @tjruwase

liuyq47 commented 4 years ago

Hi, thanks for adding the NVIDIA dataset support. After trying it out, I sometimes see spikes in step time during training, like the one shown below. The spikes happen in the allreduce methods.

[Screenshot: training step-time log, 2020-07-27]

I don't have the original dataset, so I don't know whether it shows similar behavior.

tjruwase commented 4 years ago

@liuyq47 Thanks for trying out the new dataset.

Can you be more specific about the timer names and values showing the spikes? The highlighted section of the screenshot seems fine to me, except that you seem to be running with gradient accumulation steps of 1 and an effective batch size of 4K (instead of 64K).
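For context, the effective batch size is the per-GPU micro batch times the gradient accumulation steps times the number of GPUs. A quick check of the numbers above, assuming a per-GPU micro batch of 64 and the 64-GPU setup reported later in the thread:

```python
micro_batch_per_gpu = 64  # assumed from the config
num_gpus = 64             # 8 x DGX-1, as reported below
grad_accum_steps = 1      # what the screenshot suggests

print(micro_batch_per_gpu * grad_accum_steps * num_gpus)  # 4096  -> the ~4K observed
print(micro_batch_per_gpu * 16 * num_gpus)                # 65536 -> the 64K target
```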

liuyq47 commented 4 years ago

I was comparing the time to the steps above and below the highlighted section. Normally the backward pass takes around ~400ms and the backward_allreduce step takes around 229ms, but the highlighted section has a much higher backward pass time, which is due to a higher backward_allreduce time.

tjruwase commented 4 years ago

Thanks for the clarification. So to confirm, you are observing occasional spikes in allreduce time from ~229ms to ~415ms. Yes, that does look odd. To help reproduce it with a quick sanity check, can you please share your JSON config and hardware details (GPU type and count)?

liuyq47 commented 4 years ago

I'm using 8 DGX-1 nodes (64 V100-SXM2 GPUs), PyTorch 1.5.0, and CUDA 10.1.

deepspeed_bsz64k_lamb_config_seq128.json.txt

bert_large_lamb_nvidia_data.json.txt

tjruwase commented 4 years ago

Awesome. Thanks!

tjruwase commented 4 years ago

@liuyq47 I can confirm that I also see occasional spikes in all-reduce latency with a similar setup. In my case, I used a single DGX-2 node (16 GPUs) and saw a min/max of 20msec/37msec. I don't know what could cause such spikes, and don't want to speculate at this point. While these spikes should not affect convergence, I am curious whether they have a noticeable impact on your training speed, especially as you increase the number of nodes. Is this the case? Can you try increasing the gradient accumulation steps (and reducing the number of nodes)?
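For reference, the relevant DeepSpeed config knobs are tied together as sketched below; the values here are illustrative, not a recommendation:

```python
# DeepSpeed requires: train_batch_size ==
#   train_micro_batch_size_per_gpu * gradient_accumulation_steps * world_size
ds_config = {
    "train_batch_size": 65536,             # the 64K LAMB target
    "train_micro_batch_size_per_gpu": 64,  # illustrative per-GPU micro batch
    "gradient_accumulation_steps": 16,     # raised from 1, per the suggestion above
}
# With these settings, 64 GPUs satisfy the identity:
assert 64 * 16 * 64 == 65536
```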

liuyq47 commented 4 years ago

I've seen the spikes with gradient accumulation too (8 nodes with a batch size of 64 and gradient accumulation of 16) and with a higher number of nodes (64 DGX-1). Normal all-reduce time is 200ms, but sometimes I saw >300ms, or even 500ms. Is this spike much longer than what you see (20msec/37msec)? This does not affect the training accuracy, but it does affect training time. I saw these spikes happening around 20% of the time.

tjruwase commented 4 years ago

@liuyq47 Thanks for confirming that this issue shows up with gradient accumulation. Now I suspect it has to do with the Nvidia dataset, as I don't believe we have previously seen this with the Bing dataset. One difference I notice is that the Nvidia dataset uses a random data sampler, whereas the Bing dataset uses a distributed sampler.
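For readers following along, here is a minimal sketch of the two sampler setups being contrasted, using a dummy dataset (the real trainers build their loaders differently):

```python
import torch
from torch.utils.data import DataLoader, RandomSampler, TensorDataset
from torch.utils.data.distributed import DistributedSampler

dataset = TensorDataset(torch.arange(1024).unsqueeze(1))  # dummy stand-in

# Nvidia-style: each worker draws from its shard independently at random.
random_loader = DataLoader(dataset, batch_size=64,
                           sampler=RandomSampler(dataset))

# Bing-style: DistributedSampler deterministically partitions the dataset
# across ranks (num_replicas/rank given explicitly here so the sketch runs
# without initializing torch.distributed).
dist_loader = DataLoader(dataset, batch_size=64,
                         sampler=DistributedSampler(dataset, num_replicas=2, rank=0))

print(len(random_loader), len(dist_loader))  # 16 vs 8 batches per epoch
```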

Regarding the spikes, 200ms/500ms in your case versus 20ms/37ms in mine, I am more concerned about the relative size. In other words, you are seeing a 2.5X spike, which is very significant, while mine is a lower 1.8X. More concerning is that allreduce was already the slowest portion of your computation (compared to forward/backward/optimizer), so a 2.5X spike 20% of the time is quite significant. We will take a closer look into this. Thanks so much for helping diagnose this far.

vgaraujov commented 3 years ago

> Hi @Rachnas and @tomekrut, we have uploaded our pre-processing script for the raw bookcorpus and wikipedia datasets to get them into our numpy compatible format. We haven't written up a tutorial yet on how to use them but feel free to check out the script here: https://github.com/microsoft/DeepSpeedExamples/blob/jeffra/bert_preprocessing/bing_bert/turing/bert_pretrain_data.py

Hi @jeffra,
Are you still willing to share your original dataset? I am really interested in replicating your results

Thanks

huahuaai commented 3 years ago

How can I download the dataset from NVIDIA?

dancingpipi commented 3 years ago

@tjruwase "The scripts assume that the datasets are available in the path /workspace/bert" could you show me the directory tree of /workspace/bert ? I have download nvidia wiki data, and formatted them to hdf5. but don't know how to put them to data dir.

tjruwase commented 3 years ago

@dancingpipi, sorry, I have not run this in a long time and don't have the datasets set up on my box. But can you try /workspace/bert/data/128 and /workspace/bert/data/512?

The related configuration setting is here.

Let me know if that works.
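A quick, hypothetical sanity check for that layout (not part of the repo):

```python
import os

# Expect HDF5 shards under per-sequence-length folders.
for seq_len in ("128", "512"):
    path = f"/workspace/bert/data/{seq_len}"
    shards = [f for f in os.listdir(path) if f.endswith(".hdf5")]
    print(f"{path}: {len(shards)} hdf5 shards")
```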

dancingpipi commented 3 years ago

> @dancingpipi, sorry, I have not run this in a long time and don't have the datasets set up on my box. But can you try /workspace/bert/data/128 and /workspace/bert/data/512?
>
> The related configuration setting is here.
>
> Let me know if that works.

Thanks for your reply, I'll give it a try.

dancingpipi commented 3 years ago

> @dancingpipi, sorry, I have not run this in a long time and don't have the datasets set up on my box. But can you try /workspace/bert/data/128 and /workspace/bert/data/512?
>
> The related configuration setting is here.
>
> Let me know if that works.

It works!~

zyz0000 commented 3 years ago

@jeffra Could you send me an email to share your datasets for bert pretraining? Thank you so much!