jolibrain / deepdetect

Deep Learning API and Server in C++14, with support for PyTorch, TensorRT, Dlib, NCNN, Tensorflow, XGBoost and TSNE
https://www.deepdetect.com/

Tensorrt get whole output distributions #718

Closed YaYaB closed 4 years ago

YaYaB commented 4 years ago

Configuration

Your question / the problem you're facing:

I have an issue getting the whole distribution of predictions using TensorRT. I took the age_real model available on DeepDetect's website: https://deepdetect.com/models/init/desktop/images/classification/age_real.tar.gz

Error message (if any) / steps to reproduce the problem:

- Launch Dede

Api call

./dede --port 8080

Server log output

DeepDetect [ commit 6d6c79aaf43171a93dba38ba79ac5f0207f21c71 ]
[2020-03-31 17:39:39.638] [api] [info] Running DeepDetect HTTP server on localhost:8080
Server log output

{"status":{"code":201,"msg":"Created"}}

- Create Prediction

Api call

curl -X POST "http://localhost:8080/predict" -d '{ "service":"age", "parameters":{ "input":{ "width":224, "height":224 }, "output":{ "best": -1 }, "mllib":{ "gpu": true, "gpuid":0 } }, "data":["https://images.unsplash.com/photo-1580128660010-fd027e1e587a?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=500&q=60"] }'


Server log output:

{"status":{"code":200,"msg":"OK"},"head":{"method":"/predict","service":"age","time":78.0},"body":{"predictions":[{"classes":[{"prob":0.039911042898893359,"cat":"64"},{"prob":0.039756447076797488,"cat":"59"},{"prob":0.03768615797162056,"cat":"58"},{"prob":0.03682645782828331,"cat":"63"},{"prob":0.0337473526597023,"cat":"60"},{"prob":0.03269067779183388,"cat":"65"},{"prob":0.031601760536432269,"cat":"61"},{"prob":0.030762679874897004,"cat":"56"},{"prob":0.02948242612183094,"cat":"69"},{"prob":0.029322879388928415,"cat":"62"},{"prob":0.028792699798941613,"cat":"57"},{"prob":0.028574569150805475,"cat":"52"},{"prob":0.025717398151755334,"cat":"53"},{"prob":0.024055950343608857,"cat":"67"},{"prob":0.023989643901586534,"cat":"70"},{"prob":0.02291012741625309,"cat":"66"},{"prob":0.022343944758176805,"cat":"51"},{"prob":0.02228982001543045,"cat":"68"},{"prob":0.021931204944849016,"cat":"46"},{"prob":0.021392779424786569,"cat":"54"},{"prob":0.02011769637465477,"cat":"50"},{"prob":0.01886207051575184,"cat":"71"},{"prob":0.01854248158633709,"cat":"49"},{"prob":0.018445363268256189,"cat":"55"},{"prob":0.016146285459399225,"cat":"47"},{"prob":0.01608026772737503,"cat":"43"},{"prob":0.015782978385686876,"cat":"72"},{"prob":0.015771018341183664,"cat":"48"},{"prob":0.01291127223521471,"cat":"45"},{"prob":0.012669348157942295,"cat":"39"},{"prob":0.012159079313278199,"cat":"44"},{"prob":0.012118972837924958,"cat":"73"},{"prob":0.011211513541638852,"cat":"42"},{"prob":0.010879951529204846,"cat":"41"},{"prob":0.010865830816328526,"cat":"40"},{"prob":0.010718132369220257,"cat":"38"},{"prob":0.009786386974155903,"cat":"76"},{"prob":0.009574225172400475,"cat":"75"},{"prob":0.009325992316007615,"cat":"36"},{"prob":0.009199768304824829,"cat":"74"},{"prob":0.008859331719577313,"cat":"35"},{"prob":0.008029934018850327,"cat":"37"},{"prob":0.007884347811341286,"cat":"32"},{"prob":0.007757887244224548,"cat":"34"},{"prob":0.007446122355759144,"cat":"28"},{"prob":0.007158203981816769,"cat":"33"},{
"prob":0.006972167640924454,"cat":"79"},{"prob":0.006729381158947945,"cat":"77"},{"prob":0.006354460027068853,"cat":"29"},{"prob":0.006136180832982063,"cat":"78"},{"prob":0.005879126489162445,"cat":"31"},{"prob":0.005578963551670313,"cat":"30"},{"prob":0.004760335199534893,"cat":"27"},{"prob":0.003724484471604228,"cat":"25"},{"prob":0.003679256420582533,"cat":"26"},{"prob":0.0030432655476033689,"cat":"80"},{"prob":0.0025948514230549337,"cat":"21"},{"prob":0.002387574641034007,"cat":"82"},{"prob":0.0022729074116796257,"cat":"84"},{"prob":0.0022209142334759237,"cat":"24"},{"prob":0.002197189023718238,"cat":"19"},{"prob":0.002125473925843835,"cat":"18"},{"prob":0.0020604748278856279,"cat":"81"},{"prob":0.001969977281987667,"cat":"23"},{"prob":0.001851910026744008,"cat":"13"},{"prob":0.0018393435748293996,"cat":"20"},{"prob":0.0018350948812440038,"cat":"22"},{"prob":0.0016824934864416719,"cat":"16"},{"prob":0.0016230609035119415,"cat":"14"},{"prob":0.001596794929355383,"cat":"10"},{"prob":0.0015939727891236544,"cat":"11"},{"prob":0.0015616295859217644,"cat":"83"},{"prob":0.0014493847265839577,"cat":"12"},{"prob":0.0014132109936326743,"cat":"15"},{"prob":0.001358087407425046,"cat":"17"},{"prob":0.0010639301035553218,"cat":"85"},{"prob":0.0008789854473434389,"cat":"86"},{"prob":0.0008403996471315622,"cat":"9"},{"prob":0.0006944536580704153,"cat":"87"},{"prob":0.0006185959791764617,"cat":"8"},{"prob":0.000484773947391659,"cat":"90"},{"prob":0.000430611107731238,"cat":"88"},{"prob":0.00042910597403533757,"cat":"1"},{"prob":0.000427636899985373,"cat":"7"},{"prob":0.00034334370866417885,"cat":"89"},{"prob":0.0003426025214139372,"cat":"91"},{"prob":0.0003400304412934929,"cat":"6"},{"prob":0.00031456825672648847,"cat":"5"},{"prob":0.00026430681464262307,"cat":"92"},{"prob":0.0002416658098809421,"cat":"93"},{"prob":0.00022789830109104514,"cat":"96"},{"prob":0.00019892588898073882,"cat":"94"},{"prob":0.0001975559862330556,"cat":"4"},{"prob":0.0001753802935127169,"cat":"100"},{"pr
ob":0.00016417646838817745,"cat":"2"},{"prob":0.00015920295845717192,"cat":"95"},{"prob":0.00014193494280334562,"cat":"3"},{"prob":0.00012453968520276248,"cat":"99"},{"prob":0.00012437920668162405,"cat":"0"},{"prob":0.00010991709132213145,"cat":"98"},{"last":true,"cat":"97","prob":0.00007859385368647054}],"uri":"https://images.unsplash.com/photo-1580128660010-fd027e1e587a?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=500&q=60"}]}}


Here I get the whole distribution of predictions by setting "mllib.best" to -1.
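With "best" set to -1 the classes array covers all 101 age categories. A quick sanity check on the parsed response is that the probabilities form a distribution (a sketch; the sample below is a trimmed, hypothetical version of the response above):

```python
import json

# A trimmed stand-in for the response above; the real "classes" array has 101 entries.
resp = json.loads(
    '{"body":{"predictions":[{"classes":['
    '{"prob":0.6,"cat":"64"},{"prob":0.3,"cat":"59"},{"prob":0.1,"cat":"58"}'
    ']}]}}'
)

classes = resp["body"]["predictions"][0]["classes"]
probs = [c["prob"] for c in classes]

# Softmax outputs: probabilities sum to ~1 and arrive sorted in decreasing order.
assert abs(sum(probs) - 1.0) < 1e-9
assert probs == sorted(probs, reverse=True)
```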
Now let's try the same with TensorRT v5.1.

- Launch Dede

Api call

./dede --port 8080

Server log output

DeepDetect [ commit 6d6c79aaf43171a93dba38ba79ac5f0207f21c71 ]
[2020-03-31 17:39:39.638] [api] [info] Running DeepDetect HTTP server on localhost:8080


- Create service
Api call

curl -X PUT "http://localhost:8080/services/age" -d '{ "mllib":"tensorrt", "description":"object detection service", "type":"supervised", "parameters":{ "input":{ "connector":"image", "height": 224, "width": 224 }, "mllib":{ "datatype": "fp32", "maxBatchSize": 1, "maxWorkspaceSize": 6096, "tensorRTEngineFile": "TRTengine_bs", "gpuid":0 } }, "model":{ "repository":"/mnt/terabox/research/age-classification/models/yaya/age" } }'

Server log output

{"status":{"code":201,"msg":"Created"}}

- Create Prediction

Api call

curl -X POST "http://localhost:8080/predict" -d '{ "service":"age", "parameters":{ "input":{ "width":224, "height":224 }, "output":{ "best": -1 }, "mllib":{ "gpu": true, "gpuid":0 } }, "data":["https://images.unsplash.com/photo-1580128660010-fd027e1e587a?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=500&q=60"] }'


Server log output:

{"status":{"code":200,"msg":"OK"},"head":{"method":"/predict","service":"age","time":642.0},"body":{"predictions":[{"classes":[],"uri":"https://images.unsplash.com/photo-1580128660010-fd027e1e587a?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=500&q=60"}]}}


As you can see, I get an empty prediction. However, if I remove "mllib.best" I get the best match:

{"status":{"code":200,"msg":"OK"},"head":{"method":"/predict","service":"age","time":612.0},"body":{"predictions":[{"classes":[{"last":true,"cat":"64","prob":0.039911042898893359}],"uri":"https://images.unsplash.com/photo-1580128660010-fd027e1e587a?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=500&q=60"}]}}


Now if I set "mllib.best" to 1 or another value, I get a very different result: category 0 with a low probability:

{"status":{"code":200,"msg":"OK"},"head":{"method":"/predict","service":"age","time":2921.0},"body":{"predictions":[{"classes":[{"last":true,"cat":"0","prob":0.000175380046130158}],"uri":"https://images.unsplash.com/photo-1580128660010-fd027e1e587a?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=500&q=60"}]}}
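This regression is easy to spot in a script by comparing the two backends. The following sketch (hypothetical helper name, responses abbreviated from the logs above) checks that TensorRT's top-1 class disagrees with the top-1 of the full Caffe distribution:

```python
def top1(response):
    """Return the highest-probability class dict from a /predict response."""
    classes = response["body"]["predictions"][0]["classes"]
    return max(classes, key=lambda c: c["prob"]) if classes else None

# Abbreviated responses from the logs above (full class lists truncated).
caffe_resp = {"body": {"predictions": [{"classes": [
    {"prob": 0.039911042898893359, "cat": "64"},
    {"prob": 0.039756447076797488, "cat": "59"},
]}]}}
tensorrt_resp = {"body": {"predictions": [{"classes": [
    {"last": True, "cat": "0", "prob": 0.000175380046130158},
]}]}}

assert top1(caffe_resp)["cat"] == "64"
assert top1(tensorrt_resp)["cat"] != top1(caffe_resp)["cat"]  # backends disagree
```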



I would like to get the whole distribution, but it seems that the "mllib.best" parameter does not work as it should.
beniz commented 4 years ago

Can you try "best":0 ? I believe we have a wrong test against 0 instead of < 0.
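The suspected bug can be illustrated in a few lines. This is a Python sketch of the intended semantics versus one plausible buggy variant, not the actual tensorrtlib code: "best" < 0 should mean "return everything", whereas a loop bounded directly by "best" never iterates for -1 and yields the empty classes array observed above:

```python
def select_best(classes, best):
    """Intended semantics: best < 0 returns the whole sorted distribution."""
    ordered = sorted(classes, key=lambda c: c["prob"], reverse=True)
    if best < 0:                      # correct test
        return ordered
    return ordered[:best]

def select_best_buggy(classes, best):
    """Plausible buggy variant: a counted loop bounded by `best`,
    so best == -1 never iterates and yields an empty list."""
    ordered = sorted(classes, key=lambda c: c["prob"], reverse=True)
    out = []
    i = 0
    while i < best and i < len(ordered):   # best == -1 -> zero iterations
        out.append(ordered[i])
        i += 1
    return out

classes = [{"cat": "64", "prob": 0.04}, {"cat": "59", "prob": 0.039}]
assert len(select_best(classes, -1)) == 2        # full distribution
assert select_best_buggy(classes, -1) == []      # matches the empty "classes"
```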

beniz commented 4 years ago

Actually the pathway is wrong in tensorrtlib.cc. @fantes maybe I can take this; it should go through the supervised connector instead.

fantes commented 4 years ago

what do you mean by "the pathway is wrong"?

fantes commented 4 years ago

you mean it should not be filtered in tensorrtlib and instead the supervised output connector should do it?

fantes commented 4 years ago

in this case it seems the only thing to do is to remove the code from tensorrtlib. I can handle it, I am tired of fighting against torch/c++ :)

YaYaB commented 4 years ago

Can you try "best":0 ? I believe we have a wrong test against 0 instead of < 0.

Yep, it gives an empty prediction:

{"status":{"code":200,"msg":"OK"},"head":{"method":"/predict","service":"age","time":918.0},"body":{"predictions":[{"classes":[],"uri":"https://images.unsplash.com/photo-1580128660010-fd027e1e587a?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=500&q=60"}]}}
fantes commented 4 years ago

Hi, this should be fixed by https://github.com/jolibrain/deepdetect/pull/720. @YaYaB thanks a lot for the very precise bug report, it helps a lot for testing :)

YaYaB commented 4 years ago

Great, anytime :) I'll test it tonight and close the issue if it resolves everything on my side!

YaYaB commented 4 years ago

It fixes the issue on my side (tried with best equal to -1, 1 and several larger values).