Closed Suraj520 closed 4 years ago
Hi, Please, could you be more specific?
Hi @FilippoAleotti, thanks for getting back to me.
To be more precise: I generated the frozen graph and the .tflite model from https://github.com/FilippoAleotti/mobilePydnet/tree/v2/single_inference and visualized them with Netron. The graph has a single output, a `truediv` node of type float32[1,384,640,1].
However, it differs from https://github.com/FilippoAleotti/mobilePydnet/blob/tflite/Android_tflite/app/src/main/assets/pydnet%2B%2B.tflite, which exposes outputs at three scales, as described in the PyDNet paper:
name: PSD/resize/ResizeBilinear : float32[1,448,640,1]
name: PSD/resize_1/ResizeBilinear : float32[1,448,640,1]
name: PSD/resize_2/ResizeBilinear : float32[1,448,640,1]
The architectural difference between the master branch and the v2 (default) branch prompted me to open this issue and ask for more detailed documentation on training the model, following https://github.com/FilippoAleotti/mobilePydnet/tree/v2/single_inference
I feel the README at https://github.com/FilippoAleotti/mobilePydnet/tree/v2/single_inference could be a bit more descriptive. Please correct me if I am missing something :)
This is because I kept only the last prediction in this frozen graph. If you need the lower-scale predictions, you can build your own frozen graph that also includes the H/2 and H/4 outputs.
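To illustrate the point above, here is a minimal, self-contained sketch (not the repo's actual export script; the tiny graph and node names `pred_full` / `pred_half` are made up for the demo) of how freezing a TF1-style graph keeps only the output nodes you explicitly list. Passing every scale to `convert_variables_to_constants` is what makes the extra predictions survive into the frozen graph, and from there into the .tflite conversion.

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Toy stand-in for the network: one conv "full resolution" output
# plus a bilinearly resized "half resolution" output.
graph = tf.compat.v1.Graph()
with graph.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [1, 8, 8, 3], name="input")
    w = tf.compat.v1.get_variable("w", [3, 3, 3, 1])
    full = tf.nn.conv2d(x, w, strides=1, padding="SAME", name="pred_full")
    half = tf.compat.v1.image.resize_bilinear(full, [4, 4], name="pred_half")

with tf.compat.v1.Session(graph=graph) as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    # List EVERY output node you want to keep; anything not reachable
    # from these names is pruned out of the frozen graph.
    frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, graph.as_graph_def(), ["pred_full", "pred_half"])

names = {n.name for n in frozen.node}
print("pred_full" in names, "pred_half" in names)
```

If you list only `["pred_full"]` instead, the resize node is pruned and the resulting .tflite has a single output, which matches what Netron shows for the single_inference export.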
Oh Okay, Thanks @FilippoAleotti !
@FilippoAleotti