CAOXINGWEN opened 8 months ago
Hi, thanks for your interest.
Could you share more details on the image you are using? It sounds like a custom image, not from our dataset, and maybe not even a microscopy image? That would certainly require retraining. Create annotations in the same format as in our dataset and then run the command in the README.
If you can provide more info, I might be able to assist you.
Here's my image; I've run it through your model so far. Do I need to retrain it, or make my data match your dataset and image it under a microscope (which seems a bit difficult for me)?
Getting the sample under a microscope of course doesn't make sense in this case. Retraining is definitely required to get better results, but only the annotations need to be in the same format as in our dataset, not the input images. However, I am still pessimistic that it will perform sufficiently well on your images afterwards. This algorithm has some limitations when dealing with that many growth rings.
May I ask how you label the images? Is it with "labelme"? And in particular, how do you handle the lines where adjacent annual rings meet, i.e. the shared boundaries: do they need to be labeled twice?
I used Gimp. Specifically, I used the path tool to draw the boundaries between the rings and then bucket fill to colorize. The ring boundaries should be white, the background black, the first ring red, and each remaining ring an arbitrary but unique color.
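For anyone preparing annotations by hand, a minimal sketch of a sanity checker for this color convention might look like the following (the function name `check_annotation` is my own for illustration, not part of INBD):

```python
import numpy as np

# Color convention described above: ring boundaries white, background
# black, first ring red, remaining rings arbitrary but unique colors.
WHITE = (255, 255, 255)
BLACK = (0, 0, 0)
RED   = (255, 0, 0)

def check_annotation(mask: np.ndarray) -> list:
    """Return a list of problems found in an HxWx3 uint8 annotation mask."""
    problems = []
    colors = {tuple(int(v) for v in c) for c in mask.reshape(-1, 3)}
    if WHITE not in colors:
        problems.append("no white ring boundaries found")
    if RED not in colors:
        problems.append("no red first ring found")
    if BLACK not in colors:
        problems.append("no black background found")
    # every other color is treated as one ring; storing them in a set
    # already guarantees uniqueness, so nothing further to check here
    return problems

# tiny synthetic example: black background, white boundary, red first ring
mask = np.zeros((9, 9, 3), dtype=np.uint8)
mask[3:6, 3:6] = WHITE
mask[4, 4] = RED
print(check_annotation(mask))  # → []
```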
Hello author, thank you for your answer. Could you please give an example PNG input with white ring boundaries, black background, and red first ring? I ran into some trouble preparing my data, like CAOXINGWEN mentioned above. Thank you!
@Master7Sword I've just updated the colors of the .png annotation files in our dataset so that they can be used for training. Previously only the .tiff files were loadable. Here is an example file: EH_0022. I also fixed a small bug which prevented .png files from loading.
Hi, I wanted to share that I recently trained the proposed INBD model using RGB images of the Pinus taeda species, and I obtained excellent results. To prepare the data, I labeled the images using LabelMe and then developed a script to convert the annotations into the format required by the INBD model. If you're interested, you can find the code here
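For others attempting the same conversion, a rough sketch of the idea might look like this (this is not the actual script linked above; the function name, the assumption that shapes are ordered outermost-first, and the green palette are mine):

```python
from PIL import Image, ImageDraw

def labelme_to_inbd(labelme_data: dict, size: tuple) -> Image.Image:
    """Rasterize LabelMe polygons into an annotation image following the
    color convention described earlier in this thread.

    Assumes each entry in labelme_data["shapes"] is one closed ring
    polygon and that shapes are ordered outermost-first, so inner rings
    overdraw outer ones. The innermost (last) ring is colored red, the
    boundaries white, the background black, and other rings unique greens.
    """
    img = Image.new("RGB", size, (0, 0, 0))          # black background
    draw = ImageDraw.Draw(img)
    shapes = labelme_data["shapes"]
    for i, shape in enumerate(shapes):
        innermost = (i == len(shapes) - 1)
        fill = (255, 0, 0) if innermost else (0, 40 + 10 * i, 0)
        points = [tuple(p) for p in shape["points"]]
        draw.polygon(points, fill=fill, outline=(255, 255, 255))
    return img

# toy example: one square "ring" on a 20x20 canvas
data = {"shapes": [{"points": [[5, 5], [15, 5], [15, 15], [5, 15]]}]}
img = labelme_to_inbd(data, (20, 20))
print(img.getpixel((10, 10)))  # → (255, 0, 0)
```

A real converter would also need to handle LabelMe's image size fields and any open polylines, which this sketch ignores.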
Hello author, thank you for your very good work. I'm using an ordinary RGB tree-ring image for semantic segmentation and it failed. Could you tell me which code to modify here, or do I need to retrain the model?