NVIDIA / gpu-rest-engine

A REST API for Caffe using Docker and Go
BSD 3-Clause "New" or "Revised" License

Regarding instructions for loading a DIGITS model into GRE #24

rperdon closed this issue 6 years ago

rperdon commented 6 years ago

https://github.com/NVIDIA/gpu-rest-engine/issues/4

I was looking at these instructions and tried following the links about modifying the deploy file, but those links now return a 404 error. Can you provide the modifications to the prototxt files that are needed for GRE to use an exported DIGITS model?

flx42 commented 6 years ago

Look at those lines: https://github.com/NVIDIA/gpu-rest-engine/blob/722f3597f8a47881c2d7bd9fe7fecd86f2fba0a6/Dockerfile.caffe_server#L96-L103
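Those lines place the model's deploy prototxt, trained weights, mean file and label list in a directory, then set the working directory and start caffe-server with them. For a DIGITS export you would swap in your own files, roughly along these lines (a sketch only, file names illustrative, not the exact contents of the Dockerfile):

```dockerfile
# Point the server at the files exported from DIGITS instead of the CaffeNet example.
WORKDIR /models/mymodel
CMD ["caffe-server", "deploy.prototxt", "snapshot_iter_NNNN.caffemodel", "mean.binaryproto", "labels.txt"]
```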

rperdon commented 6 years ago

Is there a way to load more than one model into the GRE?

CMD ["caffe-server", "deploy.prototxt", "bvlc_reference_caffenet.caffemodel", "imagenet_mean.binaryproto", "synset_words.txt"]

Could I add multiple WORKDIR and CMD lines for different models?

I like the idea of being able to call the API like this:

curl -XPOST --data-binary @Hei1.png http://127.0.0.1:8000/api/classify

where the api/classify portion could instead be api/classify/catsvsdogs, api/classify/malevsfemale, etc.

I edited the last lines of Dockerfile.caffe_server to point to my mapped volume and commented out the COPY lines for the other model:

WORKDIR /models/mymodel
CMD ["caffe-server", "deploy.prototxt", "snapshot_iter_10480.caffemodel", "mean.binaryproto", "corresp.txt"]

docker run -v /media/shared/models:/models --runtime=nvidia --name=server --net=host --rm inference_server

When the server loaded up, it was still running the original model.

flx42 commented 6 years ago

It will require modifying the Go code, but it's totally possible. Adding new endpoint handlers is quite easy in Go.
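For example, a minimal sketch of the idea (this is not the actual main.go; the classifier type here is a placeholder for the cgo-backed Caffe classifier in this repo):

```go
package main

import (
	"io/ioutil"
	"log"
	"net/http"
)

// classifier stands in for the cgo-backed Caffe classifier used in this repo;
// in the real code it would hold the network loaded from one set of model files.
type classifier struct {
	name string
}

// classify is a placeholder: the real implementation would run the image
// through Caffe and return the top predictions as JSON.
func (c *classifier) classify(image []byte) string {
	return `[{"label": "` + c.name + `-placeholder", "confidence": 1.0}]`
}

// makeHandler binds one loaded model to one HTTP endpoint.
func makeHandler(c *classifier) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		image, err := ioutil.ReadAll(r.Body)
		if err != nil || len(image) == 0 {
			http.Error(w, "could not read image from request body", http.StatusBadRequest)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(c.classify(image)))
	}
}

func main() {
	// One endpoint per model, each backed by its own classifier instance.
	http.HandleFunc("/api/classify/catsvsdogs", makeHandler(&classifier{name: "catsvsdogs"}))
	http.HandleFunc("/api/classify/malevsfemale", makeHandler(&classifier{name: "malevsfemale"}))
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```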

Another option is to run N different containers (one for each model) listening on different ports (e.g. 8000, 8001, 8002...). Then you put an nginx reverse proxy on top that dispatches to the right server based on the URI. There are examples on the nginx website: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass
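Something along these lines inside the server {} block of the nginx config (ports and URI prefixes illustrative):

```nginx
# Each container serves one model on its own port; nginx routes by URI prefix.
location /api/classify/catsvsdogs {
    proxy_pass http://127.0.0.1:8000/api/classify;
}
location /api/classify/malevsfemale {
    proxy_pass http://127.0.0.1:8001/api/classify;
}
```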

rperdon commented 6 years ago

Would I be modifying the main.go file in the gpu-rest-engine/caffe folder? Do I have to rebuild the Docker image whenever I change the Dockerfile or the Go file?
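That is, would I have to run something like the following after each change (tag and volume taken from my run command above, build invocation illustrative):

```sh
docker build -f Dockerfile.caffe_server -t inference_server .
docker run -v /media/shared/models:/models --runtime=nvidia --name=server --net=host --rm inference_server
```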

Edit: I rebuilt it and the changes appear to have taken effect, but it looks like I have some debugging of files to do. How does the synset_words.txt file relate to something like a corresp.txt? I.e. I have a corresp.txt containing "0 anime" and "1 non-anime", which is what another API like DeepDetect uses to map my model's outputs to class labels.
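For reference, the two label files I'm comparing look like this (my assumption being that the last CMD argument just needs one class entry per line, in the same order as the network's outputs):

```text
# corresp.txt (DIGITS/DeepDetect style: numeric class index followed by the label)
0 anime
1 non-anime

# synset_words.txt (ImageNet example: synset ID followed by the label)
n01440764 tench, Tinca tinca
n01443537 goldfish, Carassius auratus
```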