Closed Spaskich closed 3 years ago
Looking at the error, it seems to be related to DyNet itself. The framework pushed tensors onto two different GPUs. We no longer support DyNet and have moved NLP-Cube to PyTorch. We are currently working on releasing a new version, and you will no longer have this type of issue. Until then, I suggest you use a CPU-only Docker image.
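A minimal sketch of the CPU-only workaround inside the container: hide the CUDA devices from the process before DyNet is imported, so it cannot allocate tensors on any GPU. `CUDA_VISIBLE_DEVICES` is a standard CUDA environment variable; where exactly to set it in the web server's startup code is an assumption here.

```python
import os

# Hide all CUDA devices from this process so DyNet falls back to CPU.
# This must run before dynet (or any CUDA-using library) is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

# Alternatively, expose exactly one GPU so tensors cannot
# end up on two different devices:
# os.environ["CUDA_VISIBLE_DEVICES"] = "0"
```

The same effect can be achieved from outside the process, e.g. by passing `-e CUDA_VISIBLE_DEVICES=""` to `docker run`.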
When deploying the project to a container, I get the following warning:
2/2/2021 12:39:24 PM WARNING: This is a development server. Do not use it in a production deployment.
2/2/2021 12:39:24 PM Use a production WSGI server instead.
Could this cause any of the aforementioned problems?
No, it has nothing to do with this. The actual error is ValueError: Attempt to do tensor forward in different devices (nodes 20330 and 39), which has to do with DyNet allocating tensors on two different GPUs.
You probably cannot reproduce the error locally because you have no GPU or only one.
This issue should be fixed in 3.0, which has been officially released.
Describe the bug
I'm getting the following errors on some of the requests when running in a Docker container as a web server:
I can't reproduce the issue when running the image locally.
To Reproduce
Steps to reproduce the behavior:
10.42.109.126 - - [04/Feb/2021 14:38:55] "GET /nlp?lang=id&text=Grab-Gojek+Dapat+Saingan+Baru+di+Negeri+Singa HTTP/1.1" 500 -
Expected behavior
When running locally, everything works fine and the Cube returns the expected output.