Closed: abe732 closed this issue 8 years ago
I will definitely take a look. I am not familiar with this, so if you have any documentation URLs you can pass along, please do. Or even better... pull requests are always welcome.
Cool, yeah. I'll look into it more and do a pull request this weekend.
For a little bit more context, the image keyword extraction that's built into the general API is pretty basic and not quite what our team was looking for (we're trying to ID food items, and it comes back with "vegetable" rather than something more specific like "asparagus"). To solve that, we're creating our own custom classifiers by feeding in .zip files that we can then run images against to get the more specific keywords.
Looks like those custom classifiers are private to a user/API key and require calls against a different endpoint (v3/classifiers) than the general one (v3/classify). The full path for the general endpoint appears to be what you build up for the .imageKeywords method (https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify).
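To make the distinction concrete, here's a minimal sketch of how the two endpoint URLs relate. The base URL comes from the path above; the `buildEndpoint` helper and the `version` default are my assumptions (the v3 API takes the key and a version date as query parameters), not part of this module:

```javascript
// Sketch only: buildEndpoint is a hypothetical helper, not part of this npm module.
const BASE_URL = 'https://gateway-a.watsonplatform.net/visual-recognition/api';

function buildEndpoint(path, apiKey, version = '2016-05-20') {
  // Both the general and custom-classifier endpoints hang off /v3 and
  // take api_key and version as query parameters (assumed version date).
  return `${BASE_URL}/v3/${path}?api_key=${apiKey}&version=${version}`;
}

console.log(buildEndpoint('classify', 'MY_API_KEY'));    // general classification
console.log(buildEndpoint('classifiers', 'MY_API_KEY')); // custom classifiers (training, listing)
```

The point is just that the two calls differ only in the final path segment, so supporting `classifiers` alongside `classify` shouldn't require a different URL-building strategy.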
This is the section of the documentation I was referencing: https://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/visual-recognition/api/v3/#create_a_classifier
Turns out this was the source of the issue: https://www.ibm.com/blogs/watson/2016/05/visual-recognition-update/
Alchemy Vision has been migrated over to Watson Visual Recognition. The old Alchemy keys work with this Alchemy API, but any new Watson Visual Recognition keys won't work with this npm module.
It would be great to be able to access the /v3/classifiers endpoint to do training via server-side calls. I could be missing something in the existing functionality, but thought I'd suggest it.
Adam