I am not sure about the error you encountered. It may have something to do with the CUDA version that Google Colab is using. You may want to search for similar errors related to Google Colab and PyTorch.
To save the model in PyTorch, follow the official docs: https://pytorch.org/docs/stable/notes/serialization.html
This issue was caused by unrecognised symbols (characters that are not in VOCAB) in the input text, which produce -1 values in text_tensor. Properly checking and cleaning the input text before encoding resolves the issue.
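For reference, here is a minimal sketch of the kind of input cleaning that avoids this, assuming VOCAB is the string of allowed characters used during training and that text is encoded character by character. clean_text and encode_text are just illustrative names, not helpers from the repo:

import torch

def clean_text(text, vocab=VOCAB):
    # Drop every character that is not in the vocabulary, so that
    # vocab.find(ch) can never return -1 during encoding.
    return "".join(ch for ch in text if ch in vocab)

def encode_text(text, vocab=VOCAB):
    cleaned = clean_text(text, vocab)
    return torch.tensor([vocab.find(ch) for ch in cleaned], dtype=torch.long)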
I found this command in your repo's train.py file to save the model:
torch.save(model.state_dict(), "model.pth")
To reload the model from a previously saved checkpoint:
device = torch.device("cpu")
hidden_size = 256
model = MyModel0(len(VOCAB), 16, hidden_size).to(device)
# map_location is needed when the checkpoint was saved on a GPU but is loaded on CPU
model.load_state_dict(torch.load("../model.pth", map_location=device))
model.eval()  # switch to evaluation mode before inference
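After loading, inference on a new string might look roughly like the sketch below. It assumes the encode_text helper sketched above, with new_text being the raw input string; the exact tensor shape depends on what MyModel0.forward expects (batch-first here), so adjust accordingly.

with torch.no_grad():
    text_tensor = encode_text(new_text).unsqueeze(0).to(device)  # add a batch dimension
    logits = model(text_tensor)
    predictions = logits.argmax(dim=-1)  # predicted class index per character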
Hello @zzzDavid @Michael-Xiu, I've trained the model for task3 on Google Colab. Training and validation work perfectly, but I'm stuck on the inference part.
Whenever I submit new text for inference, I receive the following error:
Could you please explain how to run inference on new data, or share the inference code?
Also, I couldn't find a way to save my model. Is there a way to save the model for future inference?
Thanks for sharing this awesome project :)