NVIDIA / DeepLearningExamples

State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.

[tacotron2/Pytorch] 'Tacotron2' object has no attribute 'text_to_sequence' #955

Closed. Luosuu closed this issue 3 years ago.

Luosuu commented 3 years ago

Describe the bug: I ran the tutorial at https://pytorch.org/hub/nvidia_deeplearningexamples_waveglow/ and got the following error:

AttributeError: 'Tacotron2' object has no attribute 'text_to_sequence'

The error comes from line 20 of the tutorial:

sequence = np.array(tacotron2.text_to_sequence(text, ['english_cleaners']))[None, :]

To Reproduce: the error occurs whether the tutorial is executed on Google Colab or on my local machine; see the sketch below.
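For completeness, here is a minimal sketch of the failing steps, copied from the tutorial page (the repo ref, entry-point name, and cleaner argument are as shown there; a CUDA device is assumed, as in the tutorial):

```python
# Reproduction sketch, following the pytorch.org hub tutorial for
# nvidia_deeplearningexamples_waveglow (entry-point names as in the tutorial).
import numpy as np
import torch

tacotron2 = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_tacotron2')
tacotron2 = tacotron2.to('cuda').eval()

text = "hello world, I missed you"  # any input text
# Line 20 of the tutorial; raises AttributeError because the hub model
# no longer exposes text_to_sequence.
sequence = np.array(tacotron2.text_to_sequence(text, ['english_cleaners']))[None, :]
```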

Expected behavior: a sequence should be generated by the tacotron2 model from the input text.

Environment Please provide at least:

seemethere commented 3 years ago

I'm also observing this.

I'm assuming this issue arose from one of these two commits:

CC @GrzegorzKarchNV, @jan-golda

seemethere commented 3 years ago

Did some digging, and it looks like this might have been broken for a while on master but was only recently pushed to the torchhub branch.

I've tried the notebook again, this time with a commit that is known to work: a8328ce1690fe473f3c5ac55811dab6b0f36700d. A pinned-commit workaround is sketched below.
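As a stopgap, the hub call can be pinned to that known-good commit instead of the tip of the torchhub branch. This is only a sketch; it assumes torch.hub accepts a commit SHA as the ref (GitHub serves archives for SHAs as well as branch names):

```python
# Workaround sketch (not the official fix): pin torch.hub to the known-good
# commit mentioned above instead of the tip of the torchhub branch.
import torch

good_ref = 'a8328ce1690fe473f3c5ac55811dab6b0f36700d'
tacotron2 = torch.hub.load(f'nvidia/DeepLearningExamples:{good_ref}',
                           'nvidia_tacotron2',
                           force_reload=True)  # avoid reusing a cached checkout of the broken ref
```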

Also did a quick git bisect of this issue and found the commit where this started to be broken in the torchhub branch:

`git bisect` logs

`test_tacotron2`:

```python
import torch
import os

tacotron2 = torch.hub.load(os.getcwd(), 'nvidia_tacotron2', source='local')
tacotron2.text_to_sequence
```

`git bisect` logs:

```
❯ git bisect start torchhub a8328ce1690fe473f3c5ac55811dab6b0f36700d
Bisecting: 405 revisions left to test after this (roughly 9 steps)
[06e39168fb90682d171f7db5204aca05a440d99e] Merge pull request #513 from GrzegorzKarchNV/trtis_cpp-update
❯ git bisect run python test_tacotron.py
running python test_tacotron.py
Bisecting: 198 revisions left to test after this (roughly 8 steps)
[c4061ebb09d93d28051bb79bfda02b04f91f871b] Merge branch 'master' into master
running python test_tacotron.py
Bisecting: 102 revisions left to test after this (roughly 7 steps)
[22a6f9d99ecddd64aa3ec0c1036d02cb35805b80] Merge pull request #200 from NVIDIA/nvpstr/release19.08_b
running python test_tacotron.py
Bisecting: 51 revisions left to test after this (roughly 6 steps)
[2c66d3525dbfb29c99b1954f28d26ec5c197f993] Merge pull request #132 from GrzegorzKarchNV/fixed-audio-sample
running python test_tacotron.py
Bisecting: 24 revisions left to test after this (roughly 5 steps)
[39c6c63f91f3c86c8a094712937357247fda72f9] Merge pull request #116 from IrishCoffee/master
running python test_tacotron.py
Bisecting: 11 revisions left to test after this (roughly 4 steps)
[2cfd880a6b2561493c4bb608622fe2c554ca75ee] Merge pull request #103 from lxp121/master
running python test_tacotron.py
Bisecting: 5 revisions left to test after this (roughly 3 steps)
[f6c367e96720e73d47f7f739f6772fa04c89e7d6] Merge pull request #107 from NVIDIA/nvpstr/master3
running python test_tacotron.py
Bisecting: 2 revisions left to test after this (roughly 2 steps)
[a644350589f9abc91b203f73e686a50f5d6f3e96] Updating models and adding BERT/PyT
running python test_tacotron.py
Bisecting: 0 revisions left to test after this (roughly 1 step)
[3b3d0f6a552c985b8fe788f93a903779941ee941] Merge branch 'gh/master' into nvpstr/master3
running python test_tacotron.py
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[0663b67c1afe9979c9767ed63c7e719b1d0629b9] Updating models
running python test_tacotron.py
0663b67c1afe9979c9767ed63c7e719b1d0629b9 is the first bad commit
commit 0663b67c1afe9979c9767ed63c7e719b1d0629b9
Author: Przemek Strzelczyk <41076710+nvpstr@users.noreply.github.com>
Date:   Mon Jul 8 22:51:28 2019 +0200

    Updating models

 [per-file diffstat omitted]
 283 files changed, 112898 insertions(+), 133464 deletions(-)

bisect run success
```

CC @nv-kkudrynski

Luosuu commented 3 years ago

Is it possible to fix this in the near future, or to update the tutorial on PyTorch Hub?

jan-golda commented 3 years ago

From what I know, this should be fixed now that https://github.com/pytorch/hub/pull/204 has been merged. I am not able to check it at the moment; I will take a look tomorrow.

nv-kkudrynski commented 3 years ago

Indeed, our PR to pytorch/hub with updated example notebooks took a few days to be processed. @seemethere @Luosuu, I just checked and it works now. Sorry for the inconvenience.
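For anyone hitting this later: the updated tutorial no longer calls text_to_sequence on the model. Below is a rough sketch of the new flow; the nvidia_tts_utils entry point and prepare_input_sequence helper are assumed from the current pytorch.org tutorial page rather than verified against the PR:

```python
# Sketch of the updated tutorial flow: text preprocessing comes from a separate
# hub entry point instead of a method on the Tacotron2 module.
import torch

tacotron2 = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_tacotron2')
utils = torch.hub.load('nvidia/DeepLearningExamples:torchhub', 'nvidia_tts_utils')

text = "hello world, I missed you"
sequences, lengths = utils.prepare_input_sequence([text])  # replaces tacotron2.text_to_sequence
```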

seemethere commented 3 years ago

No worries, thanks for the fix @nv-kkudrynski!

Luosuu commented 3 years ago

Thanks for the fix @nv-kkudrynski! Also thanks for your help and analysis, @seemethere!