Open hadign20 opened 7 years ago
Did you use the custom class labels when generating the dataset (the custom classes field)?
@user7077 No I didn't. Is it necessary? My classes are different from the ones in the link you mentioned.
@hadi-ghnd Did you have any success with this problem?
@lemhell I avoided adjusting class labels on the dataset creation page and waited until around 40 epochs; it then started to show some mAP and precision results.
I set the number of epochs to 30 and got mAP = 0 throughout. The graph is also similar to the one shown above. I used the custom DetectNet attached below: customNetworkused.txt
Please help me modify the custom net so that it can produce the bounding box and coverage.
@hadi-ghnd How big are your bounding boxes? Did you consider changing the stride?
The bounding box used is 60x60 and the stride is 16. My objects are really small, down to about 20x20 pixels. Please suggest what stride value I should change to and try.
May I know which other parameters should be changed along with the stride, if taking stride = 8?
@sulthanashafi You can specify the spacing of the grid squares in the training labels by setting the stride in pixels in the detectnet_groundtruth_param layer. For example:
```
detectnet_groundtruth_param {
  stride: 16
  scale_cvg: 0.4
  gridbox_type: GRIDBOX_MIN
  min_cvg_len: 20
  coverage_type: RECTANGULAR
  image_size_x: 1024
  image_size_y: 512
  obj_norm: true
  crop_bboxes: false
}
```

In this layer you can also specify an image training patch size (image_size_x, image_size_y). When these parameters are set, every time an image is fed into DetectNet during training it takes a random crop of this size as input. This can be useful if you have very large images in which the objects you wish to detect are very small. https://devblogs.nvidia.com/parallelforall/detectnet-deep-neural-network-object-detection-digits/
That link might help. I'm not sure what else to change along with stride and crop_bboxes; I'm looking for a solution too. Let me know about your progress on this.
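As a concrete illustration, a variant of the detectnet_groundtruth_param example above tuned for small (~20x20 px) objects might look like the following. The specific values here (stride 8, min_cvg_len 10) are guesses based on the discussion, not tested settings:

```
detectnet_groundtruth_param {
  stride: 8            # finer grid, so a ~20x20 object spans more than one cell
  scale_cvg: 0.4
  gridbox_type: GRIDBOX_MIN
  min_cvg_len: 10      # lowered below the smallest object dimension (assumption)
  coverage_type: RECTANGULAR
  image_size_x: 1024
  image_size_y: 512
  obj_norm: true
  crop_bboxes: false
}
```

Note that halving the stride usually also means adjusting the network so its final feature map matches the finer grid (e.g. changing a pooling or deconvolution stride downstream); the thread doesn't confirm which layers, so treat this as a starting point only.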
Thank you. Did you get the coverage and bounding boxes? If so, may I know whether you used a pretrained model, and could you share your log file and the prototxt you used?
I am trying to perform object detection on a custom dataset. I converted the label format to KITTI and created the dataset. I also resized all images to 1024x1024 and changed every occurrence of the input dimensions (1248, 384, ...) in the detectnet_network.prototxt file to 1024. But I have a problem: when I start training, the mAP stays at 0, which means the network isn't learning.
I don't know what the problem is, but I think it might be the image size. Do I have to change the size of the bounding boxes as well, or does DIGITS deal with that by itself? What should I do to make this work the way it does on the KITTI dataset?
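On the bounding-box question: if the images were resized outside DIGITS, the KITTI label coordinates presumably need the same scale factors, since nothing in the label files knows about the resize. A minimal sketch of rescaling (assumptions: standard space-separated KITTI label lines with the 2D bbox in fields 4-7, and an original size of 1248x384; adjust to your data):

```python
# Rescale the 2D bbox (left, top, right, bottom) in a KITTI-format
# label line after resizing the image. Fields 4..7 (0-indexed) hold
# the bbox in the standard 15-field KITTI layout.

def rescale_kitti_line(line, sx, sy):
    """Scale one KITTI label line by sx (width factor) and sy (height factor)."""
    fields = line.split()
    for i, s in zip((4, 5, 6, 7), (sx, sy, sx, sy)):
        fields[i] = f"{float(fields[i]) * s:.2f}"
    return " ".join(fields)

# Example: image resized from 1248x384 to 1024x1024 (made-up numbers)
sx, sy = 1024 / 1248, 1024 / 384
label = "Car 0.00 0 -1.57 614.24 181.78 727.31 284.77 1.57 1.73 4.15 1.00 1.75 13.22 -1.62"
print(rescale_kitti_line(label, sx, sy))
```

Since the images here were stretched non-uniformly (1248x384 to 1024x1024), sx and sy differ; a uniform resize would use a single factor for both.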