Closed predestination closed 5 years ago
I had to use a larger instance to run this on a server in AWS. For demo purposes, I think I used an m5.xlarge. Predictions run slower on these instances, though; a c5 instance would obviously be better, but they are a bit more costly.
I recently moved, and I am slowly getting more time back. I'll get something up with the Dockerfile to run a simple service.
Oh! And one last thing: make sure it isn't related to spaCy versions. Right now, the neuralcoref library doesn't support spaCy > 2.1.3.
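If you want to guard against this at startup, here's a minimal sketch of a version check. The helper names are my own (not part of neuralcoref), and it assumes plain dotted version strings like `"2.1.3"`:

```python
# Hypothetical helper to check an installed spaCy version against the
# 2.1.3 ceiling mentioned above, using simple tuple comparison.
def parse_version(version_str):
    """Turn a string like "2.1.3" into a tuple (2, 1, 3)."""
    return tuple(int(part) for part in version_str.split("."))

def spacy_version_supported(version_str, max_supported="2.1.3"):
    """Return True if this spaCy version is <= the last supported release."""
    return parse_version(version_str) <= parse_version(max_supported)

print(spacy_version_supported("2.1.3"))  # True
print(spacy_version_supported("2.2.4"))  # False
```

In practice you'd feed it `spacy.__version__` and fail fast with a clear error instead of hitting a cryptic one deep inside neuralcoref.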
I provided an example Docker Flask service. The README explains how to run the service.
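For anyone who wants the general shape before digging into the repo, here's a rough sketch of what a Flask service wrapping the model might look like. The endpoint name and JSON schema are assumptions on my part, and the model loading is left as comments since it depends on your spaCy/neuralcoref versions:

```python
# Sketch of a coreference-resolution Flask service. Endpoint name (/coref)
# and payload shape are illustrative, not the actual example service.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the model once at startup in the real service, e.g.:
# import spacy, neuralcoref
# nlp = spacy.load("en_core_web_sm")
# neuralcoref.add_to_pipe(nlp)

@app.route("/coref", methods=["POST"])
def coref():
    text = request.get_json(force=True).get("text", "")
    # With the model loaded you would do:
    # resolved = nlp(text)._.coref_resolved
    resolved = text  # placeholder so the sketch runs without the model
    return jsonify({"resolved": resolved})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Loading the model once at startup matters here: loading it per request is what tends to blow up memory on small instances.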
What should the server configuration be for running the model on AWS? It is using local storage, and the process is getting killed, returning a 500 Internal Server Error. We have 12 GB of RAM and a 12 GB SSD on AWS.