chenxichen95 opened this issue 4 years ago
You should set "eval_file" to "test" (like what "main.json5" does), which evaluates the test set directly in each evaluation during training. This is not what we should do in production, but aligns with previous works and makes the results comparable.
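For reference, the setting lives in the json5 config. A minimal sketch (the exact nesting of `eval_file` may differ from this; check `main.json5` in the repo for the actual layout):

```json5
{
  // ... other options as in main.json5 ...
  eval_file: 'test',  // evaluate on the test split during training, as main.json5 does
}
```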
I use "main.json5", but the accuracy on the Quora test set can't reach 89.2%, only about 84%.
This is the log of training the model on Quora.
Did you download the Quora dataset from the link in ReadMe and prepare the data using "prepare_quora.py"?
yes~
Download and unzip Quora dataset (pre-processed by Wang et al.) to data/orig.
cd data && python prepare_quora.py
I did.
Then that's strange... What's your pytorch version? I'll rerun the experiment later
My pytorch version is 1.1.0. Every other dataset achieves the score in your paper except the Quora dataset.
Hi, I was testing RE2 (Pytorch 1.3.1) on the Quora dataset, and I got results aligned with the paper.
I'm trying to reproduce the result on Pytorch 1.5.1 and can confirm that the masking issue is gone with the above fix. I, however, ran into another issue, as below:
07/31/2020 01:00:55 train (384348) | dev (10000)
07/31/2020 01:01:00 setup complete: 0:01:36s.
07/31/2020 01:01:00 Epoch: 1
07/31/2020 01:05:56 > epoch 1 updates 3000 loss: 0.2415 lr: 0.0011 gnorm: 0.3525
07/31/2020 01:05:58
Traceback (most recent call last):
File "train.py", line 48, in <module>
main()
File "train.py", line 31, in main
states = trainer.train()
File "code\simple-effective-text-matching-pytorch\src\trainer.py", line 63, in train
self.log.log_eval(dev_stats)
File "code\simple-effective-text-matching-pytorch\src\utils\logger.py", line 90, in log_eval
train_stats_str = ' '.join(f'{key}: ' + self._format_number(val) for key, val in self.train_meters.items())
File "code\simple-effective-text-matching-pytorch\src\utils\logger.py", line 90, in <genexpr>
train_stats_str = ' '.join(f'{key}: ' + self._format_number(val) for key, val in self.train_meters.items())
File "code\simple-effective-text-matching-pytorch\src\utils\logger.py", line 59, in _format_number
return f'{x:.4f}' if float(x) > 1e-3 else f'{x:.4e}'
TypeError: AverageMeter.__float__ returned non-float (type Tensor)
I'm fairly new to Pytorch and still learning the basics but it does seem like 'model.eval()' was changed somehow, in the newer version. Has anyone encountered this issue before and is there a workaround? Thanks in advance!
@TheMnBN
Hi, sorry for the late reply.
In pytorch 1.5, the return type of torch.nn.utils.clip_grad_norm_
has changed from "float" to "torch.Tensor". Changing line 82 of model.py to
'gnorm': grad_norm.item(),
should solve the problem.
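For readers on either side of the PyTorch 1.5 boundary, a version-agnostic way to record the gradient norm is to accept both return types. This is a standalone sketch, not code from the repo; `FakeTensor` is a hypothetical stand-in for a 0-dim `torch.Tensor` so the example runs without PyTorch installed:

```python
class FakeTensor:
    """Hypothetical stand-in for a 0-dim torch.Tensor (avoids a torch dependency)."""
    def __init__(self, value):
        self.value = value

    def item(self):
        return self.value


def to_python_float(x):
    # clip_grad_norm_ returned a Python float before PyTorch 1.5 and returns
    # a 0-dim Tensor from 1.5 onward. Tensors expose .item(); plain floats
    # pass straight through float().
    return x.item() if hasattr(x, "item") else float(x)


print(to_python_float(FakeTensor(0.3525)))  # works on PyTorch >= 1.5 return values
print(to_python_float(0.3525))              # works on older float return values
```

With this helper, `'gnorm': to_python_float(grad_norm)` would work on both old and new PyTorch versions instead of pinning the code to one behavior.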
Hey, very nice work.
I started training this model and it failed after a few hours with an out-of-memory error.
result = self.forward(*input, **kwargs)
File "/content/simple-effective-text-matching-pytorch/src/modules/fusion.py", line 50, in forward
x2 = self.fusion2(torch.cat([x, x - align], dim=-1))
RuntimeError: CUDA out of memory. Tried to allocate 98.00 MiB (GPU 0; 14.76 GiB total capacity; 12.74 GiB already allocated; 85.75 MiB free; 13.75 GiB reserved in total by PyTorch)
If possible, can anyone please provide me the pre-trained model so I can run inference directly?
Thanks in advance!
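A common first workaround for a CUDA out-of-memory error during training, independent of this repo's specifics, is to reduce the batch size in the json5 config. A sketch, with the caveat that the key name `batch_size` is an assumption here; check `quora.json5` for the actual option name:

```json5
{
  // ... other options unchanged ...
  batch_size: 16,  // assumed key name; halve it until training fits in GPU memory
}
```

A smaller batch size reduces peak activation memory at the cost of longer epochs and possibly slightly different results, since the effective learning-rate-to-batch ratio changes.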
I used quora.json5 to train a new RE2 model and used it to evaluate, but the accuracy on the Quora test set can't reach 89.2%.