Closed genbei closed 3 years ago
Can you please be more descriptive when you are opening an issue? Did the training start, what type of installation did you use? So that I can try to reproduce the error.
The code works fine on my end.
```
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
MonoTransQuest runs fine for me, but running SiameseTransQuest reports an error.
Did it start the training?
No, the problem is still there.
Can you check whether this colab notebook works for you? https://colab.research.google.com/drive/1QXiVoyTT7XgOVgJVQL9XbliozTwfxcR4?usp=sharing
I have solved the above problems.
In addition, I think you should modify your code. I hit the out-of-memory error below and worked around it by reducing batch_size to 4. Maybe you should call torch.cuda.empty_cache(), or run evaluation inside torch.no_grad():
```
Traceback (most recent call last):
  File "/home/miniconda3/envs/transquest/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/miniconda3/envs/transquest/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/TransQuest-master/examples/sentence_level/wmt_2020_task2/en_zh/siamesetransquest.py", line 67, in <module>
    model.train_model(train_df, eval_df)
  File "/home/TransQuest-master/transquest/algo/sentence_level/siamesetransquest/run_model.py", line 98, in train_model
    output_path=self.args.best_model_dir)
  File "/home/TransQuest-master/transquest/algo/sentence_level/siamesetransquest/models.py", line 693, in fit
    loss_value.backward()
  File "/home/miniconda3/envs/transquest/lib/python3.7/site-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/miniconda3/envs/transquest/lib/python3.7/site-packages/torch/autograd/__init__.py", line 147, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: CUDA out of memory. Tried to allocate 978.00 MiB (GPU 0; 14.73 GiB total capacity; 10.25 GiB already allocated; 625.88 MiB free; 13.21 GiB reserved in total by PyTorch)
```
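The two suggestions above could be sketched roughly like this. This is a generic pattern, not TransQuest's actual API; `model` and `batches` are placeholders, and reducing the batch size remains the workaround that actually resolved the OOM here:

```python
import torch

def evaluate(model, batches):
    """Run forward passes without autograd bookkeeping, so activations
    are not retained for a backward pass (lower peak GPU memory)."""
    model.eval()
    outputs = []
    with torch.no_grad():  # no computation graph is built inside this block
        for x in batches:
            outputs.append(model(x))
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # release cached allocator blocks between phases
    return outputs
```

Note that `empty_cache()` does not free tensors that are still referenced; it only returns unused cached blocks, so dropping references (or the batch size) is still the primary lever.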
Can you provide more details on how you solved the issue, for future reference?
Since this was not a bug, I am removing the bug label.
```
$ python -m examples.sentence_level.wmt_2020_task2.en_zh.siamesetransquest
Traceback (most recent call last):
  File "/home/miniconda3/envs/transquest/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/miniconda3/envs/transquest/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/TransQuest/examples/sentence_level/wmt_2020_task2/en_zh/siamesetransquest.py", line 67, in <module>
    model = SiameseTransQuestModel(MODEL_NAME)
  File "/home/TransQuest/transquest/algo/sentence_level/siamesetransquest/run_model.py", line 30, in __init__
    self.model = SiameseTransformer(model_name, args=args)
  File "/home/TransQuest/transquest/algo/sentence_level/siamesetransquest/models.py", line 236, in __init__
    transformer_model = Transformer(model_name, max_seq_length=self.args.max_seq_length)
  File "/home/TransQuest/transquest/algo/sentence_level/siamesetransquest/models.py", line 50, in __init__
    self.tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, cache_dir=cache_dir, **tokenizer_args)
  File "/home/miniconda3/envs/transquest/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 423, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/miniconda3/envs/transquest/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1710, in from_pretrained
    resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
  File "/home/miniconda3/envs/transquest/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1781, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/miniconda3/envs/transquest/lib/python3.7/site-packages/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py", line 144, in __init__
    **kwargs,
  File "/home/miniconda3/envs/transquest/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 96, in __init__
    fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: expected value at line 1 column 1
```