ai4os / DEEPaaS

A REST API to serve machine learning and deep learning models
https://deepaas.readthedocs.io
Apache License 2.0

Dev diana #72

Closed dianamariand92 closed 4 years ago

dianamariand92 commented 4 years ago

Description

Integrate execution support from the command line for DEEPaaS, through the deepaas-execute command.

Type of change

How Has This Been Tested?

The command has been tested with several of the examples found in the DEEP OPEN CATALOG. The models were tested with the DEEPaaS API V2; the command is not compatible with V1.

```
$ deepaas-execute -h
usage: deepaas-execute [-h] [-ct CONTENT_TYPE] -o OUTPUT [--url URL] input_file

Obtain the prediction of the model through the command line.

positional arguments:
  input_file            Set input file to predict

optional arguments:
  -h, --help            show this help message and exit
  -ct CONTENT_TYPE, --content_type CONTENT_TYPE
                        Specify the content type of the output file
                        ('image/png', 'application/json', 'application/zip';
                        by default 'application/json')
  -o OUTPUT, --output OUTPUT
                        Save the result to a local file.
  --url URL             Activate url as input file type.
```

Example with a file:

```
$ deepaas-execute frisbee.jpg -o output/
```

Example with a URL:

```
$ deepaas-execute https://storage.googleapis.com/tfjs-models/assets/posenet/frisbee.jpg --url TRUE -o output/
```

In the previous examples, the output content type is `application/json` (the default): a .json file with the prediction result. Other content types can be specified with the `-ct` or `--content_type` option. For example, `image/png` can be used if the model produces an output image as part of the prediction, as is the case for pose estimation, and `application/zip` produces a .zip containing both the image and a .json file with the prediction.

Checklist:

alvarolopez commented 4 years ago

Hi Diana.

Thanks for contributing!

I will start a review requesting some changes. As a general note, we require that all tests (unit, style, etc.) pass for every commit. Please ensure that the `tox` command finishes successfully.

Also, could you add some documentation as well? I think that a page for the command (like this) and a brief explanation under the user docs is enough.

vykozlov commented 4 years ago

Hi Diana, I looked a bit into the code and the help. Do I understand correctly that the script provides a CLI only for the 'prediction' function? And that only two input parameters can be given, i.e. either 'file' or 'url'? Ideally, we also want to train and to pass more (application-defined) parameters...

dianamariand92 commented 4 years ago

Hi @vykozlov

That is correct, the script only gives access to the 'prediction' function. In addition to the 'input_file' and 'url' parameters, you can specify the content type of the output file with the `-ct` option (by default it is JSON, but you can specify others depending on what output your model generates). The `--model-name` option lets you specify the model from which you want to obtain the prediction if you have an environment with several models installed. If only one model is installed, that model is used to make the prediction; otherwise, you need to specify the model name. The `-o` option specifies the output directory (and is required).
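The model-selection rule described above can be sketched as follows; `select_model` is a hypothetical helper written for illustration, not a function from the DEEPaaS codebase:

```python
def select_model(installed_models, model_name=None):
    # Hypothetical sketch of the rule described above: with a single
    # installed model it is used directly; with several, the model
    # name (--model-name) must be given explicitly.
    if model_name is not None:
        if model_name not in installed_models:
            raise ValueError("model '%s' is not installed" % model_name)
        return model_name
    if len(installed_models) == 1:
        return installed_models[0]
    raise ValueError("several models installed; please pass --model-name")
```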

To train the model from the CLI, a separate script would be needed, to which the parameters required for training can be passed.

Cheers.

alvarolopez commented 4 years ago

Hi @dianamariand92, we are ready to go.

I will do a squash and merge, rather than merging the 24 commits.

Thanks for the implementation!

alvarolopez commented 4 years ago

@dianamariand92 @gmolto this has been released in 1.2.0.