Closed: gitathrun closed this issue 6 years ago
I faced the same problem in a similar environment and worked around it by editing classifier.PyTorchClassifier.score(): remove the upfront whole-dataset conversions
devX = torch.FloatTensor(devX).cuda()
devy = torch.LongTensor(devy).cuda()
and instead convert each batch inside the loop, replacing devX[i:i + self.batch_size]
with torch.FloatTensor(devX[i:i + self.batch_size]).cuda()
and devy[i:i + self.batch_size]
with torch.LongTensor(devy[i:i + self.batch_size]).cuda()
This worked for me on a large dataset like SNLI.
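The per-batch conversion described above can be sketched as a standalone function. Note this is a hypothetical illustration of the workaround, not the actual SentEval code; `score_batched` and its signature are assumptions, and it falls back to CPU when CUDA is unavailable so the idea is testable anywhere.

```python
import torch

def score_batched(model, devX, devy, batch_size=64):
    # Keep the full dev set on the host and move only one batch at a
    # time to the GPU, instead of uploading the whole tensor up front.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    correct = 0
    with torch.no_grad():
        for i in range(0, len(devX), batch_size):
            # Per-batch conversion: this is the edit described above.
            xb = torch.FloatTensor(devX[i:i + batch_size]).to(device)
            yb = torch.LongTensor(devy[i:i + batch_size]).to(device)
            pred = model(xb).argmax(dim=1)
            correct += (pred == yb).sum().item()
    return correct / len(devX)
```

Peak GPU memory is then bounded by one batch of features rather than the whole dev set, which is why it helps on SNLI-sized data.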
Could you please try replacing this line: https://github.com/facebookresearch/SentEval/blob/master/senteval/tools/classifier.py#L119
with
if not isinstance(devX, torch.cuda.FloatTensor) and not self.cudaEfficient:
Thanks
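For context, the suggested guard can be sketched as a small helper. This is a hypothetical standalone version of the check, not the file linked above: `prepare_dev_tensors` and the `cuda_efficient` argument (standing in for `self.cudaEfficient`) are assumptions for illustration.

```python
import torch

def prepare_dev_tensors(devX, devy, cuda_efficient):
    # Only upload the whole dev set to the GPU up front when the
    # memory-saving cudaEfficient mode is off; with it on, the data
    # stays as-is so each batch can be moved individually later.
    if not isinstance(devX, torch.cuda.FloatTensor) and not cuda_efficient:
        devX = torch.FloatTensor(devX).cuda()
        devy = torch.LongTensor(devy).cuda()
    return devX, devy
```

With `cuda_efficient=True` the inputs pass through untouched, which is the behavior the workaround in the previous comment relies on.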
Hi, were you able to fix the problem? Thanks, Alexis
Please re-open the task if not.
Hi, I am running my model with the SentEval framework, using SNLI as the target dataset, but I get a runtime memory error after the embedding process completes, at the start of classifier training. Here is the error message: For the record, the runtime environment is 2 K80 GPUs with CUDA and 112 GiB of memory, but I am not sure whether this process uses both GPUs or just one, so I am not sure whether the available GPU memory is 11 GiB or 22 GiB.
I also switched the classifier option to sklearn logistic regression (UsePytorch = False) with 10 GiB of RAM; an error still appears (not a memory error this time), but with a different description.
I am just wondering: since SNLI is a 110K-example dataset, how much memory does the classifier need to process the embedded sentences?
Many thanks
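As a rough back-of-envelope answer to the memory question, here is a sketch under stated assumptions: the 4096-dim embedding size and the concatenated [u; v; |u-v|; u*v] pair features are typical of InferSent-style encoders, not something confirmed in this thread, and 110K is the approximate dataset size from the question.

```python
# Estimate the size of the feature tensor that would be uploaded to
# the GPU in one shot when the whole dev/train split is converted.
n_examples = 110_000       # approximate SNLI size from the question
feature_dim = 4 * 4096     # assumed [u; v; |u-v|; u*v] pair features
bytes_per_float = 4        # float32

total_gib = n_examples * feature_dim * bytes_per_float / 2**30
print(f"{total_gib:.1f} GiB")  # ~6.7 GiB for the features alone
```

That is before activations, gradients, and the labels, so a single full-dataset upload can plausibly exhaust a K80's per-GPU memory, which is consistent with the per-batch workaround discussed above.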