Closed — dhruvp closed this issue 6 years ago
One problem you may be facing is that the target boxes are quite small relative to the image. It may be that the predefined anchors don't overlap enough with some of the ground truth boxes to qualify them as positives for training. You might try breaking your image into several smaller images and training on those.
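If you go the tiling route, the splitting step might look something like this (a minimal NumPy sketch — the function name, overlap handling, and offsets-for-later-merging are just illustrative, not code from this repo):

```python
import numpy as np

def tile_image(image, tile, overlap=0):
    """Split an HxWxC image into tiles of (at most) tile x tile pixels.

    Returns a list of (patch, (x_offset, y_offset)) pairs so predictions
    can later be shifted back into the original image's coordinates.
    Edge tiles may be smaller than `tile` when the image size is not a
    multiple of the step.
    """
    h, w = image.shape[:2]
    step = tile - overlap
    tiles = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            patch = image[y:y + tile, x:x + tile]
            tiles.append((patch, (x, y)))
    return tiles

# e.g. a 1000x1200 image into 500px tiles -> 2x3 grid of patches
img = np.zeros((1000, 1200, 3), dtype=np.uint8)
tiles = tile_image(img, 500)
print(len(tiles))  # 6
```

Passing a non-zero `overlap` gives you the overlapping variant suggested further down in this thread, at the cost of duplicate detections near tile borders (which NMS can clean up after merging).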
I agree with @mhwilder. You could go through the trouble of visualizing the foreground anchors to see what it is really training on, but it isn't straightforward (we should add a debug tool for this sometime).
I definitely agree with @hgaiser on this. More inspection tools are always the answer in Deep Learning. If you do happen to make a particularly nice solution to this visualisation @dhruvp, feel free to post a PR.
@dhruvp can you please tell me how you used this with your own dataset? I am new to Keras.
Hey all, thanks for the suggestions. I tried splitting up the image and that helped (makes sense, as my dimensions are now closer to those of COCO etc.). However, I don't see this as a long-term solution, as it adds the issue of accounting for cells that lie on the edges of the newly divided images (i.e. which tile is responsible for counting them? What if a cell now appears as two halves in two tiles?).
I tried changing the anchors and recompiling but ran into issues there. @hgaiser if you could help us make the anchors more easily editable that would greatly help, as I suspect the current default anchors are large and designed for COCO etc., and smaller ones would help here. I will also look into visualizing the intermediate outputs (perhaps the proposed anchors at each pyramid level).
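One quick way to check whether small boxes are the problem, without touching the network at all: compute the best IoU a ground-truth box could possibly achieve against the anchor set, assuming the anchor is perfectly centred on it. The sizes/ratios/scales below are the RetinaNet defaults from the paper; the function itself is just an illustrative sketch, not part of this repo:

```python
import numpy as np

def best_centered_iou(gt_wh, anchor_sizes=(32, 64, 128, 256, 512),
                      ratios=(0.5, 1.0, 2.0),
                      scales=(2 ** 0, 2 ** (1 / 3), 2 ** (2 / 3))):
    """Best-case IoU between a (width, height) ground-truth box and the
    anchor set, with the anchor perfectly centred on the box.

    If even this best case is below the positive threshold (0.5 in the
    RetinaNet paper), the box can never become a positive training sample.
    """
    gw, gh = gt_wh
    best = 0.0
    for size in anchor_sizes:
        for ratio in ratios:
            for scale in scales:
                area = (size * scale) ** 2
                aw = np.sqrt(area / ratio)   # anchor width for this ratio
                ah = aw * ratio              # anchor height
                inter = min(gw, aw) * min(gh, ah)
                union = gw * gh + aw * ah - inter
                best = max(best, inter / union)
    return best

print(best_centered_iou((20, 20)))  # a 20px cell never reaches IoU 0.5
print(best_centered_iou((32, 32)))  # matches the smallest anchor exactly
```

If your cells come out below 0.5 here, that supports the theory that the default anchors are simply too big for them, and either tiling (which effectively enlarges the objects) or smaller anchor sizes should help.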
@dexter1608 I followed the instructions provided in the README for using a CSVGenerator to train on my own data. I also found I could only train on a GPU.
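For anyone else landing here: as described in the README, the CSVGenerator takes two files — an annotations CSV with one bounding box per line (`path,x1,y1,x2,y2,class_name`) and a classes CSV mapping class names to zero-based ids. The file names, image paths and class names below are made up for illustration:

```
# annotations.csv — one object per line
images/img_001.jpg,35,45,152,161,rbc
images/img_001.jpg,210,88,330,200,wbc

# classes.csv — class name to zero-based id
rbc,0
wbc,1
```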
Thanks everyone for all the help and thanks again to the maintainers for contributing this really useful work.
Dhruv
To avoid problems with objects at image borders (when you cut up the image), you could cut up the image in overlapping parts and perform non-maximum suppression after merging all the resulting bounding boxes back into one set (adjusted to the coordinate system of the original input).
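The suppression step mentioned above can be sketched in plain NumPy — this is just a standard greedy NMS over the merged boxes, not code from this repo, and it assumes the per-tile boxes have already been shifted back into the original image's coordinates:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression over (x1, y1, x2, y2) boxes.

    Keeps the highest-scoring box, drops every box overlapping it by more
    than iou_threshold, then repeats. Returns the kept indices.
    """
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # intersection of box i with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_threshold]
    return keep

# two near-duplicate detections (e.g. from overlapping tiles) plus one distinct
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # the two overlapping boxes collapse to one
```

With overlapping tiles, a cell cut in half at one tile's border is seen whole by the neighbouring tile, and the half-detections get suppressed in favour of the full one here.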
However, I agree that the proper solution is to make the anchors tweakable.
@dhruvp yeah, making those anchors tweakable is on my to-do list. No guarantees on when it will be implemented though...
I will consider this issue resolved. If you find it necessary to re-open, be my guest.
@dhruvp you probably solved this by now, but as your white blood cells are sparsely distributed throughout your image and quite different from your red blood cells, can I refer you to my comment on a different issue? Might help you detect a few more :) https://github.com/fizyr/keras-retinanet/issues/202#issuecomment-372034304
Thanks so much Tristan! I'll check it out now.
Dhruv
Hi!
Thanks so much for your work first of all. This is a really nice repository with really clear instructions and fantastic readability so it's been a pleasure to work with so far!
I'm trying to use retinanet on a dataset of blood cells like below:
I've trained retinanet on just this single image in the hope of overfitting it and seeing if at least that works. Unfortunately it seems to be stuck at detecting 5-6 cells and no more (after 25 epochs, 100 steps per epoch). The training loss also flatlines at this point.
Any thoughts on why this might occur? Are there any simple parameters I can tweak to improve performance when there are many tiny objects to detect?
Thanks!
Dhruv