Open NoxTheXelor opened 4 years ago
You specify the image size in https://github.com/experiencor/keras-yolo3/blob/master/config.json,
via `min_input_size` and `max_input_size`.
On Tuesday, June 16, 2020, Ash notifications@github.com wrote:
Dear @experiencor https://github.com/experiencor , I'm using your (awesome) repository for two months now and I just discovered there is no input size like those ones. [image] https://user-images.githubusercontent.com/26748640/84695649-6deedc80-af4b-11ea-8d38-d5d69bab89af.png It was not a problem for me until I had to deploy my model on a graphics card (Nvidia Jetson TX2) through this repo https://github.com/jkjung-avt/tensorrt_demos#yolov3. My question is: what is the input size of your NN?
Thanks for your answer !
Hi @AntoineHoe, just to elaborate a little on what experiencor has said:
During training, the input image size is varied randomly between the values of `min_input_size` and `max_input_size` in the config file. See `_get_net_size()` in `generator.py` for the code that does this. This means that the trained model should be able to successfully detect objects for images in this size range.
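The idea behind that size selection can be sketched roughly as follows. This is not the repo's exact code, just a minimal illustration of the mechanism, assuming (as YOLOv3 requires) that input sizes must be multiples of the network's 32-pixel downsample ratio:

```python
import random

# YOLOv3's backbone downsamples by a factor of 32, so the
# input side length must be a multiple of 32.
DOWNSAMPLE_RATIO = 32

def get_random_net_size(min_input_size, max_input_size):
    """Pick a random training input size between the config's
    min_input_size and max_input_size, rounded to a multiple
    of the downsample ratio."""
    lo = min_input_size // DOWNSAMPLE_RATIO
    hi = max_input_size // DOWNSAMPLE_RATIO
    return DOWNSAMPLE_RATIO * random.randint(lo, hi)

# With the repo's default config values of 288 and 448, this
# yields one of 288, 320, 352, 384, 416, 448.
size = get_random_net_size(288, 448)
print(size)
```

Because the model sees many sizes in that range during training, it tends to generalize across them at inference time.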
The input image size used during inference is hardcoded in `predict.py` on line 26. If you want to change this, you could alter the hardcoded value or add a new field to the config file and modify the code to use this value.
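One way the config-file approach could look (a sketch only; the `net_size` field name is my own invention, not part of the repo's config schema):

```python
import json

# A config.json fragment extended with a hypothetical "net_size"
# field alongside the existing training-size fields:
config = json.loads("""
{
    "model": {
        "min_input_size": 288,
        "max_input_size": 448,
        "net_size": 416
    }
}
""")

# In predict.py, the hardcoded value could then be replaced by the
# config entry, falling back to 416 (the current hardcoded size)
# when the field is absent:
net_h = net_w = config["model"].get("net_size", 416)
print(net_h, net_w)
```

Using `.get()` with a default keeps old config files working unchanged.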
Hope that helps.
Dear @experiencor , I'm using your (awesome) repository for two months now and I just discovered there is no input size like those ones. It was not a problem for me until I had to deploy my model on a graphics card (Nvidia Jetson TX2) through this repo https://github.com/jkjung-avt/tensorrt_demos#yolov3. My question is: what is the input size of your NN?
UPDATE: I use these parameters to build my model. Does that mean the output model is a 416 one, or should I set them both to 416 to do so?
Thanks for your answer !