NLPbox / stanford-corenlp-docker

build/run the most current Stanford CoreNLP server in a docker container

How to store fetched models and lemmers #1

Closed theronic closed 3 years ago

theronic commented 6 years ago

Thanks for making this! When I run this and hit the web server on http://localhost:8080 (mapped to port 9000), it fetches a bunch of models. Adding extra annotation libraries tends to crash the container (probably a Docker issue), but the main problem is that it re-fetches the models on every run.

Where should I map a volume to store fetched models?

arne-cl commented 3 years ago

Dear @theronic, I'm not sure I understand your question completely.

CoreNLP loads its models into memory when it starts, so this has to happen on every start.

You can avoid crashes by increasing the heap space given to the JVM. By default, the container uses 4 GB, but you can increase it to, e.g., 8 GB like this:

docker run -e JAVA_XMX=8g -it corenlp
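
A fuller invocation might combine the heap setting with the port mapping mentioned above. This is only a sketch: it assumes the image is tagged `corenlp` and that the server listens on port 9000 inside the container, as described in the original question.

```shell
# Sketch: run the CoreNLP server with an 8 GB JVM heap and map
# host port 8080 to the container's port 9000.
# Assumptions: the image tag is "corenlp" and the server listens
# on port 9000 inside the container.
docker run -e JAVA_XMX=8g -p 8080:9000 -it corenlp
```

With this mapping, the server would again be reachable at http://localhost:8080 on the host.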