jolibrain / deepdetect

Deep Learning API and Server in C++14, with support for Caffe, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost and TSNE
https://www.deepdetect.com/

Set of DNN pre-trained models for a range of image classification domains (fashion, furniture, ...) #33

Open beniz opened 8 years ago

beniz commented 8 years ago

This is the discussion / report ticket for the set of models available here: http://www.deepdetect.com/applications/model/ They include classification of clothes, furniture, gender, planes, cars, etc. See the full list and instructions on the page above.

kmatzen commented 8 years ago

Do you have documentation as to which photos from each synset appeared in your training and test sets?

beniz commented 8 years ago

@kmatzen I don't have documentation about that.

But it should be pretty easy to get the list of synsets from the model class list. I might be able to provide a list of the exact synsets in most cases if that proves important to some of the model users.
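As a sketch of what "getting the synsets from the class list" could look like: assuming the model ships a corresp.txt-style file where each line pairs a class index with a label, and that ImageNet-derived labels embed a WordNet synset id of the form `n` followed by eight digits (both assumptions about the file layout, not confirmed in this thread):

```python
# Hypothetical sketch: extract synset ids from a model's class-list lines.
# Assumes each line looks like "<index> <synset-id> <description>".
import re

def synsets_from_class_list(lines):
    """Return the sorted set of WordNet synset ids found in the lines."""
    synsets = set()
    for line in lines:
        match = re.search(r"\bn\d{8}\b", line)
        if match:
            synsets.add(match.group(0))
    return sorted(synsets)

sample = [
    "0 n03001627 chair",
    "1 n03376595 folding chair",
    "2 n04344873 studio couch",
]
print(synsets_from_class_list(sample))  # ['n03001627', 'n03376595', 'n04344873']
```

The resulting synset ids could then be matched back against the public ImageNet synset listing.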

It's a bit trickier for the files, I believe. The list of files is likely stored in the training db, and from there there may be a way to get back to each photo without providing the file itself.

It may not be of great direct help at this stage, but I've put together a script to grab what is publicly available online from the full ImageNet (~85%): https://github.com/beniz/imagenet_downloader

I've actually trained from that data, which may make matching back to the full ImageNet tarball even trickier. FTR, I had access to the full ImageNet dump some time in the past, but it seems the credentials have recently been reset.

roblkw226 commented 8 years ago

In the information on the website, you've stated that you split the dataset into training and testing sets. I understand you cannot share the images due to licensing. But presuming we're looking at the same ImageNet data, how did you decide the split? If, for example, you simply chose the first 4500 images for training and the remaining 500 for testing, then we could do the same.

Otherwise we run the risk of using the training data in our tests, which would obviously be problematic. Is there some log somewhere of what files you used?

beniz commented 8 years ago

@roblkw226 good point, unfortunately the splits were drawn at random from a shuffled dataset (with no seed). I guess you would like to get an appreciation of the accuracy of one or more models, is that correct? We could think of a few ways to help you do that.
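For reference, a reproducible split is easy to set up for future runs. The sketch below is illustrative only (it is not how the released models were split, since that split was unseeded): sorting the file list first gives a canonical order, and a fixed seed makes the shuffle, and hence the split, repeatable by anyone with the same file list.

```python
# Illustrative sketch: a seeded, reproducible 90/10 train/test split.
import random

def split_dataset(filenames, test_fraction=0.1, seed=1234):
    files = sorted(filenames)        # canonical order before shuffling
    rng = random.Random(seed)        # fixed seed -> identical split every run
    rng.shuffle(files)
    n_test = int(len(files) * test_fraction)
    return files[n_test:], files[:n_test]   # (train, test)

train, test = split_dataset(f"img_{i:04d}.jpg" for i in range(5000))
print(len(train), len(test))  # 4500 500
```

Publishing the seed (or simply the two file lists) alongside the model would let others reproduce the held-out set exactly.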

roblkw226 commented 8 years ago

Yes, that's right. We'd like to understand the accuracy, before and after some fine-tuning adjustments we have in mind. We'd appreciate any help in that area.

beniz commented 8 years ago

OK, there are several ways to get or approximate the accuracy:
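One generic way to approximate accuracy, regardless of how the original split was made, is to label an independent sample of images yourself, run the model over it (e.g. via the server's prediction API), and tally top-1 agreement. The sketch below uses hypothetical filenames and labels; any inference backend could supply the `predictions` dict:

```python
# Generic sketch of estimating top-1 accuracy from model predictions on an
# independently labelled sample (hypothetical data).
def top1_accuracy(predictions, ground_truth):
    """predictions / ground_truth: dicts mapping filename -> label."""
    common = predictions.keys() & ground_truth.keys()
    if not common:
        return 0.0
    correct = sum(1 for f in common if predictions[f] == ground_truth[f])
    return correct / len(common)

preds = {"a.jpg": "chair", "b.jpg": "sofa", "c.jpg": "table"}
truth = {"a.jpg": "chair", "b.jpg": "sofa", "c.jpg": "lamp"}
print(top1_accuracy(preds, truth))  # 0.6666666666666666
```

Because the labelled sample is independent of the (unknown) training set, this avoids the train/test contamination concern raised above, at the cost of labelling effort.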

If you need more lively interaction, ping me on gitter, https://gitter.im/beniz/deepdetect