carlomazzaferro / racket

Serve your models with confidence

Racket init command does not produce regression.py #4

Open · BlackToothGrin96 opened this issue 5 years ago

BlackToothGrin96 commented 5 years ago

Description

Hi, I'm loving Racket; it's great work and you are definitely a hero.

I have created two separate projects using the racket init --name command. Everything seems to work fine, but the file regression.py was not generated for either project (which is disappointing, as regression is my goal :( ).

What I Did

First Project: $ racket init --name racket_project_0

Folder Tree:

.
├── classification.py
├── docker-compose.yaml
├── Dockerfile.racket
├── Dockerfile.tfserving
├── extras
│   └── TEST.jpg
├── racket.yaml
├── serialized
│   ├── base
│   │   └── 1
│   │       ├── saved_model.pb
│   │       └── variables
│   │           ├── variables.data-00000-of-00001
│   │           └── variables.index
│   ├── base_1.h5
│   ├── base_1.json
│   └── base_history_1.json
├── web
│   ├── asset-manifest.json
│   ├── index.html
│   ├── manifest.json
│   ├── precache-manifest.3be394996438461aed1dfe1534b1c647.js
│   ├── service-worker.js
│   └── static
│       ├── css
│       │   ├── 2.280c789f.chunk.css
│       │   ├── 2.280c789f.chunk.css.map
│       │   ├── main.4cc9d3bc.chunk.css
│       │   └── main.4cc9d3bc.chunk.css.map
│       └── js
│           ├── 2.2580a9e0.chunk.js
│           ├── 2.2580a9e0.chunk.js.map
│           ├── main.816293ed.chunk.js
│           ├── main.816293ed.chunk.js.map
│           ├── runtime~main.a8a9905a.js
│           └── runtime~main.a8a9905a.js.map
└── sqlite.db

Second Project: $ racket init --name racket_house_prices_regression

Folder Structure:

.
├── classification.py
├── docker-compose.yaml
├── Dockerfile.racket
├── Dockerfile.tfserving
├── extras
│   └── TEST.jpg
├── racket.yaml
├── serialized
│   ├── base
│   │   └── 1
│   │       ├── saved_model.pb
│   │       └── variables
│   │           ├── variables.data-00000-of-00001
│   │           └── variables.index
│   ├── base_1.h5
│   ├── base_1.json
│   └── base_history_1.json
├── sqlite.db
├── web
│   ├── asset-manifest.json
│   ├── index.html
│   ├── manifest.json
│   ├── precache-manifest.3be394996438461aed1dfe1534b1c647.js
│   ├── service-worker.js
│   └── static
│       ├── css
│       │   ├── 2.280c789f.chunk.css
│       │   ├── 2.280c789f.chunk.css.map
│       │   ├── main.4cc9d3bc.chunk.css
│       │   └── main.4cc9d3bc.chunk.css.map
│       └── js
│           ├── 2.2580a9e0.chunk.js
│           ├── 2.2580a9e0.chunk.js.map
│           ├── main.816293ed.chunk.js
│           ├── main.816293ed.chunk.js.map
│           ├── runtime~main.a8a9905a.js
│           └── runtime~main.a8a9905a.js.map
└── housing.csv

Is there anything important I'd be missing by working from classification.py, or can I just alter the model definition to suit a regression scenario?

Thanks for the great work btw; this package is awesome :)

BlackToothGrin96 commented 5 years ago

OK, this hasn't been a problem for me after all; I have successfully converted the classification.py script into a regression script.
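For reference, this is roughly the kind of change I mean. It's a minimal sketch, assuming the generated classification.py wraps a plain Keras model the way the serialized/ output suggests; the data loading and any racket-specific helpers from the generated file aren't shown, and the layer sizes are purely illustrative.

```python
# Hypothetical regression counterpart to the generated classification.py.
# Assumes the template trains a Keras model and the only required change is
# swapping the classification head/loss for a linear output with MSE.
import numpy as np
from tensorflow import keras


def build_regression_model(n_features: int) -> keras.Model:
    """Single-output network with a linear head and MSE loss."""
    model = keras.Sequential([
        keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1),  # linear output instead of softmax
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model


if __name__ == "__main__":
    # Placeholder data; in practice load features/targets from housing.csv.
    X = np.random.rand(512, 13).astype("float32")
    y = np.random.rand(512, 1).astype("float32")

    model = build_regression_model(n_features=X.shape[1])
    model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```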

One final question: how do I access inference for different models hosted on the same serving instance?

Currently I can access inference via the API endpoints, but only for the latest version of the model specified by the ENV MODEL_NAME variable in Dockerfile.tfserving. I can see through the racket dashboard and racket ls that I have other versions of this model, as well as some completely different models, on the server, and I'm wondering whether I can specify in my request WHICH network should complete it.

For example, I would like to send inference data to the server with a header specifying which model and version should complete the request.
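To make the question concrete: this is the kind of request I have in mind, assuming the racket stack exposes TensorFlow Serving's REST API directly (I may be wrong about how racket proxies it). TF Serving itself selects the model and version via the URL path rather than a header; the host/port and model name below are placeholders.

```python
# Hypothetical request straight to TF Serving's REST API (not racket's own
# endpoints): the model name and version are encoded in the URL path.
import requests

SERVING_HOST = "http://localhost:8501"  # assumed TF Serving REST port
MODEL_NAME = "base"                     # e.g. the model under serialized/base
VERSION = 1                             # the version directory to target

url = f"{SERVING_HOST}/v1/models/{MODEL_NAME}/versions/{VERSION}:predict"
payload = {"instances": [[0.1, 0.2, 0.3]]}  # shape must match the model's input

response = requests.post(url, json=payload, timeout=10)
response.raise_for_status()
print(response.json()["predictions"])
```

I'm not sure whether the single ENV MODEL_NAME in Dockerfile.tfserving can serve several distinct models at once; TF Serving normally needs a model config file for that, but I don't know how racket wires that up, so pointers would be appreciated.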