I have a similar problem when computing anchors using kmean_anchors: even if I force-replace the default anchors with the newly computed ones, I don't know how to save them to the .yml. Simply copying the printed values does not give the same BPR when the anchors are loaded back from the .yml.
In particular, there is one line executed after computing anchors with kmean_anchors that seems to be re-run each time the model is loaded.
I am currently using the modification from #462.
@maxjcohen @JiaLim98 anchors may be supplied in your model.yaml file as you have shown above. AutoAnchor runs before training to ensure your anchors are a good fit for your data. If they are not, new anchors are computed, evolved and attached to your model automatically; no action is needed on your part. You can disable autoanchor with python train.py --noautoanchor.
I strongly advise you to git clone the latest repo (often), as there have been many recent updates and bug fixes.
@maxjcohen Thank you for the information. @glenn-jocher The anchors are there because I calculated them using kmean_anchors and typed them in myself. Anyway, are you saying the code will automatically compute the anchors and write them into the .yaml file? I ask because I want to know which anchors are actually used.
@JiaLim98 your yaml is never modified. AutoAnchor will attach anchors automatically to your model.pt file (i.e. last.pt or best.pt). If AutoAnchor computes anchors it will be obvious, it displays them to you and even advises you to update your yaml if you'd like to reuse them in the future.
@glenn-jocher I see. How do you advise opening the .pt files?
model = torch.load('model.pt')
You can visualize with Netron also.
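For reference, here is a minimal sketch of pulling anchors out of a saved checkpoint. It assumes the usual YOLOv5 checkpoint layout (a dict with a 'model' key) and should be run from inside the yolov5 repo so the pickled model classes can be resolved; exact anchor storage (pixel units vs. stride-normalized) varies between repo versions:

```python
import torch

# Load a trained checkpoint (last.pt / best.pt); assumes a dict holding the model under 'model'
ckpt = torch.load('best.pt', map_location='cpu')
model = ckpt['model'] if isinstance(ckpt, dict) and 'model' in ckpt else ckpt

detect = model.model[-1]   # in YOLOv5 the Detect() head is the last module
print(detect.stride)       # strides of the output levels, e.g. tensor([ 8., 16., 32.])
print(detect.anchors)      # anchors per detection layer (may be stored divided by stride)

# If anchors are stored stride-normalized, multiply back to pixel units:
print(detect.anchors * detect.stride.view(-1, 1, 1))
```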
@glenn-jocher Thank you very much. Sorry for any inconvenience caused; I'm new to object detection.
@glenn-jocher Another question: I can't seem to find the best weights saved in the weights folder or the runs folder. I have tried running both with and without --nosave. Can you point me to them?
@JiaLim98 don't use nosave. Just start from the colab notebook and follow the examples there. The code displays to you where your models are saved after training completes.
@glenn-jocher Yes, I know about the weights for the last epoch. Previously there were both best and last weights, right? Or is the last.pt shown here the best weights?
@JiaLim98 last.pt is the last weights. best.pt is the best weights.
Just use the notebook and run the tutorial.
> @JiaLim98 your yaml is never modified. AutoAnchor will attach anchors automatically to your model.pt file (i.e. last.pt or best.pt). If AutoAnchor computes anchors it will be obvious, it displays them to you and even advises you to update your yaml if you'd like to reuse them in the future.
Hi, I want to extract the anchors and use them in the future. From what you said, AutoAnchor will save anchors automatically to the best.pt file. I used Netron to open the best.pt file, but I don't know where the anchors are saved; I only found that the Detect layer has some parameters. Could you tell me where the anchors are saved, or how to extract them? Thanks!
@china56321 AutoAnchor displays the computed anchors on screen and advises you to update your yaml if you'd like to use them in the future.
> @china56321 AutoAnchor displays the computed anchors on screen and advises you to update your yaml if you'd like to use them in the future.

Does it display the computed anchors on screen during training, or after training finishes? And is the displayed format the same as in the .yaml file?
No. After that, it will say analyzing anchors...
@china56321 it's very obvious, it prints everything out for you.
OK, thank you!
> No. After that, it will say analyzing anchors...
I see "analyzing anchors...", but it didn't show the anchors. My goal is to save or extract the anchors. Could you tell me how I should do that, or post a screenshot? Thank you.
@china56321 right. Your anchors were analyzed against your dataset and a best possible recall was computed. This BPR is high enough not to warrant anchor recomputation. No new anchors were calculated for you.
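If you simply want to compute and print anchors yourself regardless of BPR, you can call kmean_anchors directly. A rough sketch is below; the module path and argument names shown here are from a recent repo snapshot and may differ in your version, so check your local utils/autoanchor.py:

```python
# Run from the yolov5 repo root; module path and argument names may differ between versions
from utils.autoanchor import kmean_anchors

anchors = kmean_anchors(dataset='data/coco128.yaml',  # your dataset .yaml
                        n=9,                          # number of anchors to generate
                        img_size=640,                 # training image size
                        thr=4.0,                      # anchor/label wh ratio threshold
                        gen=1000,                     # generations of evolution
                        verbose=True)
print(anchors)  # copy these into the anchors: section of your model .yaml to reuse them
```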
Dear @glenn-jocher, if a model is trained at one image size, say 640, but a 320-size image is fed to the model at test time, do I need to change the anchors?
@edwardnguyen1705 anchors are never changed on a trained model. Also, you should train at the same size you intend to run inference at for best results.
@glenn-jocher thank you for your information.
@glenn-jocher Can you explain the evolution of anchors after the kmeans-based anchor computation?
@abhiagwl4262 a genetic evolution optimizer runs on the anchors following a kmeans scan. The fitness function is the same as the regression loss used in loss.py; the code is in utils/autoanchor.py.
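The code referred to above is not reproduced in this thread; the following is a simplified, self-contained sketch of the idea (kmeans seed, then random mutation keeping only improvements), not the actual utils/autoanchor.py implementation:

```python
import numpy as np

def anchor_fitness(anchors, wh, thr=4.0):
    """Mean best-ratio metric: how well the anchors cover the label width/heights wh (Nx2)."""
    r = wh[:, None] / anchors[None]          # ratio of every label to every anchor
    x = np.minimum(r, 1 / r).min(2)          # worst of the w/h ratios per (label, anchor) pair
    best = x.max(1)                          # best anchor match for each label
    return (best * (best > 1 / thr)).mean()  # only reward labels matched within the threshold

def evolve_anchors(anchors, wh, gens=1000, mut_prob=0.9, sigma=0.1, seed=0):
    """Genetic evolution: randomly mutate the anchors, keep a mutation only if fitness improves."""
    rng = np.random.default_rng(seed)
    best, best_fit = anchors.copy(), anchor_fitness(anchors, wh)
    for _ in range(gens):
        mut = np.ones_like(best)
        while (mut == 1).all():              # make sure at least one value actually mutates
            mask = rng.random(best.shape) < mut_prob
            mut = np.where(mask, rng.normal(1.0, sigma, best.shape), 1.0).clip(0.3, 3.0)
        cand = (best * mut).clip(min=2.0)    # keep anchors at least 2 px wide/high
        fit = anchor_fitness(cand, wh)
        if fit > best_fit:                   # greedy: keep only improvements
            best, best_fit = cand, fit
    return best[np.argsort(best.prod(1))]    # return anchors sorted by area, small to large
```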
Can somebody explain to me, or point me to a document that explains, what these lines mean?
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
@Yashar78 depth and width multiple are YOLOv5 compound scaling constants inspired by EfficientDet: https://arxiv.org/abs/1911.09070
Anchors are model anchors; all YOLO models use these to normalize outputs.
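A rough illustration of what those two constants do when the model is built (the real parsing lives in models/yolo.py; this is just the gist, not the actual code):

```python
import math

def scaled_depth(n, depth_multiple):
    """How many times a repeated block (e.g. a C3 bottleneck) is actually stacked."""
    return max(round(n * depth_multiple), 1) if n > 1 else n

def scaled_width(channels, width_multiple, divisor=8):
    """Output channels of a layer, scaled and rounded up to a multiple of 8."""
    return math.ceil(channels * width_multiple / divisor) * divisor

# yolov5s uses depth_multiple=0.33 and width_multiple=0.50
print(scaled_depth(9, 0.33))     # a block listed with 9 repeats is built with 3
print(scaled_width(1024, 0.50))  # a layer listed with 1024 channels is built with 512
```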
@glenn-jocher
anchors:
  - [10,13, 16,30, 33,23]        # P3/8
  - [30,61, 62,45, 59,119]       # P4/16
  - [116,90, 156,198, 373,326]   # P5/32
How can I reproduce these anchors?
@abhiagwl4262 I'm sorry, I don't understand the question. You can manually specify anchors in your model.yaml file:
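For example, the anchors: block of a model .yaml follows the format below (these are the default values already shown above: width,height pairs in pixels, one row per output level); replace them with your own values to use custom anchors:

```yaml
anchors:
  - [10,13, 16,30, 33,23]        # P3/8
  - [30,61, 62,45, 59,119]       # P4/16
  - [116,90, 156,198, 373,326]   # P5/32
```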
@glenn-jocher What I am trying to say is: we have an algorithm that can generate custom anchors for any dataset. The anchors defined in yolov5/models/yolov5s.yaml are the ones used for training on COCO data. Were those anchors calculated using the same algorithm?
@abhiagwl4262 hey buddy. I think these anchors are simply adopted from the original YOLOv3 anchors by Joseph Redmon at https://github.com/pjreddie/darknet, or if not, then they were generated by YOLOv5 autoanchor.
@glenn-jocher Yeah, they were taken from YOLOv3 directly. The anchors calculated by the autoanchor algorithm do not match the ones taken from YOLOv3. Didn't you consider using custom ones?
@abhiagwl4262 if the anchors are unmodified then auto-anchor has determined that they are suitable for training as-is.
Our YOLOv5-P6 models use auto-anchor evolved anchors for example (for training at --img 1280): https://github.com/ultralytics/yolov5/blob/238583b7d5c19029920d56c417c406c829569c75/models/hub/yolov5s6.yaml#L6-L12
@glenn-jocher By the way, I ran training on multiple datasets and custom anchors always improved the results, irrespective of the good BPR value reported for the COCO anchors.
@abhiagwl4262 oh interesting, thanks for the feedback! It looks like autoanchor is working well for your use case then :)
One trick to force autoanchor to create new anchors in all cases is to raise the BPR threshold from 0.98 to 1.0 here on L43 of autoanchor.py: https://github.com/ultralytics/yolov5/blob/1df8c6c963d31ce84895101a70e45e0afdcb0bc2/utils/autoanchor.py#L40-L44
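For context, here is a simplified illustration of the decision being described, not the verbatim autoanchor.py source (best_possible_recall and attach_to_model below are placeholder names, not functions in the repo): autoanchor measures BPR against your labels and only recomputes anchors when BPR falls below the threshold, so raising the threshold to 1.0 forces recomputation on essentially any dataset:

```python
# Simplified illustration of the autoanchor decision; helper names are placeholders
BPR_THRESHOLD = 0.98  # raise to 1.0 to force new anchors in (almost) all cases

bpr = best_possible_recall(current_anchors, dataset_labels)   # placeholder helper
if bpr < BPR_THRESHOLD:
    print('Anchors are a poor fit to the dataset, attempting to improve...')
    new_anchors = kmean_anchors(dataset, n=num_anchors, img_size=img_size)
    attach_to_model(model, new_anchors)                        # placeholder helper
else:
    print(f'Current anchors are a good fit to dataset (BPR = {bpr:.4f})')
```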
Hi Ultralytics,
First of all, thank you so much for the interesting work and sharing.
❔Question
I mainly have three questions:
1. Size of anchors: May I know why the anchors are much bigger than the scale of the Feature Pyramid Network?
2. BPR of computed anchors: I wanted to know what my anchors are, so I computed them using kmean_anchors in utils.py. May I know why both sets of anchors can reach a BPR of 1.0000? There is quite a big difference between them, and I want to know whether the default anchors or the kmeans anchors work better for my data.
3. Source of anchors: Where does the model take its anchors from? Is it from the configuration files shown above, or are they calculated every time? I ask because, from Q2 above, both sets of anchors achieve a BPR of 1.0, which makes me question which anchors the model actually uses.
Best regards, JiaLim98