Closed by scottdangelo 6 years ago
For the Deploy to IBM Cloud button, the custom classifier is built via `custom-model.sh`; the script is triggered in the Deploy to IBM Cloud pipeline. I will add a `create_classifier()` to `run.py` to create the model in case something in the script is not working.
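As a rough sketch of what that fallback helper might look like (the function name, field construction, and zip paths are my assumptions; the legacy v3 API accepts training zips as `<class>_positive_examples` multipart form fields):

```python
import os
import requests

API_URL = "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers"

def positive_examples_fields(zip_paths):
    # Map each .zip to the '<class>_positive_examples' form field the
    # v3 create-classifier endpoint expects (class name = file stem).
    return {os.path.splitext(os.path.basename(p))[0] + "_positive_examples": p
            for p in zip_paths}

def create_classifier(api_key, zip_paths, name="waste", version="2016-05-20"):
    """Sketch: create the custom model directly from run.py instead of
    relying only on custom-model.sh (helper name is an assumption)."""
    files = {field: open(path, "rb")
             for field, path in positive_examples_fields(zip_paths).items()}
    resp = requests.post(API_URL,
                         params={"api_key": api_key, "version": version},
                         data={"name": name}, files=files)
    resp.raise_for_status()
    return resp.json()["classifier_id"]
```

Note the v3 endpoint requires at least two positive-example classes (or one positive plus a negative set), so this assumes multiple training zips are passed in.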
For the `manifest.yaml`, Cloud Foundry should be able to read the `Procfile` and start the server.
I will also add a reference command for those who want to check their model status via the command line:

```
curl -X GET "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers/{classifier_id}?api_key={$API_KEY}&version=2016-05-20"
```
Hmm... it didn't seem to create the custom classifier for me. I'll delete and re-test...
Ah, I found the problem in the script. Apparently the `--guid` flag no longer returns the correct API key for Visual Recognition. I will update that command.
Looks like the latest deploy with PR#18 still has an issue:

```
Scotts-MBP-2:watson-waste-sorter scottda$ curl -X GET "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers?api_key={$API_KEY}&version=2016-05-20"
{
  "classifiers": []
}
```
I also saw this in the Deploy Stage History logs:
The documentation in the README indicates that you can press the Deploy to IBM Cloud button and then skip to step 3. But if one does this, a custom classifier will not be created. It would be pretty optimal (and slick) to add this to `server/run.py`. You could:
You've already done this in the `sort()` method, but the problem is that you only return something from `set_classifier()` if `classifier['name'] == 'waste' and classifier['status'] == 'ready'`. If that statement is not true, you never set the `classifier_id` in `sort()`.

I would recommend that `set_classifier()` return `None` as a default, and that a conditional in the `sort()` method check the return value from `set_classifier()`. If you get an ID, carry on. If you get `None`, call a new routine `create_classifier()` and upload the training .zip files in that routine, etc. (or shell out and call your `custom_model.sh`). Also, you need a run command in `manifest.yaml` to start the server.
I'd document this in the README and warn the user that it takes some time (usually 5-10 minutes) to train the classifier. Maybe document how to check whether it is ready via the UI or CLI.
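A polling helper along these lines could back that documentation (a sketch only; the endpoint and parameters mirror the curl command above, and the function name and timeouts are my assumptions):

```python
import time
import requests

def wait_until_ready(classifier_id, api_key, timeout=900, poll=30, fetch=None):
    """Poll the v3 classifier endpoint until the status leaves 'training'."""
    url = ("https://gateway-a.watsonplatform.net/visual-recognition/api/v3"
           "/classifiers/{}".format(classifier_id))
    params = {"api_key": api_key, "version": "2016-05-20"}
    # `fetch` is injectable for testing; by default it hits the real API.
    fetch = fetch or (lambda: requests.get(url, params=params).json())
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = fetch().get("status")
        if status != "training":
            return status  # e.g. 'ready' or 'failed'
        time.sleep(poll)
    raise RuntimeError("classifier still training after {}s".format(timeout))
```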