NVIDIA / sentiment-discovery

Unsupervised Language Modeling at scale for robust sentiment classification

Accuracy issues when run on CPU and GPU? #20

Open yashkumaratri opened 6 years ago

yashkumaratri commented 6 years ago

When I run the Binary SST (FP16) model for sentiment prediction, the accuracy is fine on GPU, but it drops significantly when I run it on CPU.

yashkumaratri commented 6 years ago

This is what it gives out (see the attached `generated` output): a negative score for every sentence when it is run on CPU.

raulpuric commented 6 years ago

That's rather strange. We're seeing some weirdness as well.

We saved the models on GPU, but we probably should've saved them on CPU for safety.

torch.load(<load_path>, map_location=lambda storage, loc: storage)

Can you try this for loading the model on CPU?
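
For reference, a minimal sketch of that CPU loading path, assuming the checkpoint is one of the `.pt` files released with this repo (the file name `sst_clf_16.pt` below is just an example, not necessarily the exact file in question):

```python
import torch

# Load the checkpoint onto CPU regardless of the device it was saved from:
# map_location remaps every CUDA tensor in the file to CPU storage.
state = torch.load('sst_clf_16.pt', map_location=lambda storage, loc: storage)

# If the file holds a state dict, checking dtypes and devices can show whether
# the weights came back as FP16, which older CPU builds of PyTorch handle poorly.
if isinstance(state, dict):
    for name, tensor in state.items():
        if torch.is_tensor(tensor):
            print(name, tensor.dtype, tensor.device)
```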

yashkumaratri commented 6 years ago

Yeah, I tried it, but it yields the same result.

raulpuric commented 6 years ago

Did you ever solve this? We think it might be related to a PyTorch versioning error.

yashkumaratri commented 6 years ago

I tried a few variants, but it didn't work out.

raulpuric commented 6 years ago

Couldn't figure out what was wrong with it.

Spun up new models on PyTorch 0.4.

The models that I think are up and working right now are sst_clf.pt, sst_clf_16.pt, and imdb_clf_16.pt.

I'll upload the others over the next week.

Thanks for your patience.
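
As a closing sketch: if the FP16 checkpoints listed above are still the ones being run on CPU, casting the loaded weights to float32 before inference is a common workaround for the limited half-precision CPU support in older PyTorch. This is an assumption for illustration, not something confirmed in this thread, and `sst_clf_16.pt` is again just an example file name:

```python
import torch

# Load the FP16 checkpoint onto CPU (see the snippet earlier in the thread).
state = torch.load('sst_clf_16.pt', map_location=lambda storage, loc: storage)

# Assumption: if the checkpoint is a state dict of FP16 tensors, cast them to
# float32 so CPU inference uses well-supported kernels.
if isinstance(state, dict):
    state = {
        k: (v.float() if torch.is_tensor(v) and v.dtype == torch.float16 else v)
        for k, v in state.items()
    }

# The repo's own scripts would then construct the model and call
# model.load_state_dict(state) before running model.eval() on CPU.
```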