weecology / DeepForest

Python Package for Airborne RGB machine learning
https://deepforest.readthedocs.io/
MIT License
490 stars 172 forks

Not very accurate results with my images #461

Closed Caio-Giulio-Cesare closed 1 year ago

Caio-Giulio-Cesare commented 1 year ago

Hello everyone and, first of all, many congratulations, it is truly a wonderful project!

I've been using DeepForest for some time, and I have a problem that I unfortunately can't solve. DeepForest is very accurate with the images that come with it, but I can't get the same excellent results with my own images; the quality of the predictions is much lower.

My images have the same size (400x400) and the same resolution (0.2 m/px) as the example images, so they are very similar.

I can't figure out whether the problem is that the model was trained on images of non-European trees, whether the quality of my images is poor, or whether there is some other kind of problem.

As an example, I attach one of these images with the corresponding results.

Can anyone help me and point me toward how to proceed to get better recognition accuracy?

Thanks in advance to anyone kind enough to help me, and congratulations again.

(Attached images: MY1_20cm, MY1_RESULTS)

ethanwhite commented 1 year ago

Hi @Caio-Giulio-Cesare! Start by checking out the docs page on "How do I make the predictions better?". One of the things it mentions is fine tuning the model using a small number of labeled images from your own dataset. Information on how to do that is available in the "Training" section of the "Getting Started" page. If neither of those solves your problems then describe what you've tried from those pages and what isn't working and we'll see what we can do to help.
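One concrete prerequisite for that fine-tuning step is a CSV of hand-labeled boxes. Below is a minimal sketch of building one with the column layout the DeepForest docs describe; the image name and box coordinates are made-up placeholders:

```python
# DeepForest fine-tuning expects a CSV of hand-labeled bounding boxes.
# The column names follow the annotation format described in the docs:
# image_path, xmin, ymin, xmax, ymax, label.
import csv

rows = [
    # image_path is interpreted relative to the root_dir used for training
    {"image_path": "MY1_20cm.png", "xmin": 12, "ymin": 30,
     "xmax": 88, "ymax": 95, "label": "Tree"},
]

with open("annotations.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["image_path", "xmin", "ymin", "xmax", "ymax", "label"])
    writer.writeheader()
    writer.writerows(rows)
```

In practice you would export these boxes from an annotation tool rather than write them by hand; the point is only the expected column layout.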

Caio-Giulio-Cesare commented 1 year ago

Thank you very much, I will try this way. Bye...CIAO

bw4sz commented 1 year ago

Any update here, can we close?

Caio-Giulio-Cesare commented 1 year ago

Yes, thanks, it can be closed. Unfortunately, however, I cannot get the training to work.

On Colab it works, but on my PC (Windows, CPU, in a venv) it unfortunately does not work, and I don't understand why.

If I set workers > 0 (for example workers = 1), at a certain point the code restarts from the first instruction of the function.
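That restart with workers > 0 matches a well-known Windows behavior: DataLoader worker processes are started with the "spawn" method, which re-imports the main script, so any training code at module top level runs again in each worker. The standard fix is to put the training calls behind a main guard; a minimal sketch, where train() is a hypothetical stand-in for the actual DeepForest training code:

```python
import multiprocessing

def train():
    # stand-in for the real training calls (model setup, trainer.fit, ...)
    return "training finished"

if __name__ == "__main__":
    # Required on Windows: worker processes re-import this file, and the
    # guard keeps that re-import from re-running the training code.
    multiprocessing.freeze_support()
    result = train()
    print(result)
```

With this structure, a worker's re-import only defines train() and returns, instead of starting training again from the first instruction.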

If I set workers = 0 and model.config["epochs"] = 5, training starts but does not complete all the expected epochs, only the first one; at the end of the first epoch it gives me the message

Trainer.fit stopped: max_epochs=1 reached.

even though I set 5 epochs.

In any case, it can be closed; most likely it's my fault, since unfortunately I'm not very good at programming and there is something I don't understand.

Thanks again so much.

PS: I'm also making a Tkinter GUI that uses DeepForest, both to view the results and to edit them after the prediction and add manual annotations for a new round of training. If I can fix the issues I'm having with training, I'll let you know. Thanks again... Bye...CIAO.



Error log detail:

Model from DeepForest release https://github.com/weecology/DeepForest/releases/tag/1.0.0 was already downloaded. Loading model from file.
Loading pre-built model: https://github.com/weecology/DeepForest/releases/tag/1.0.0
No validation file provided. Turning off validation loop
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs

  | Name  | Type      | Params
------------------------------
0 | model | RetinaNet | 32.1 M
------------------------------
31.9 M    Trainable params
222 K     Non-trainable params
32.1 M    Total params
128.592   Total estimated model params size (MB)

C:#######\test_addestramento_modello\venv\lib\site-packages\pytorch_lightning\trainer\connectors\data_connector.py:432: PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
C:\##########\test_addestramento_modello\venv\lib\site-packages\pytorch_lightning\loops\fit_loop.py:280: PossibleUserWarning: The number of training batches (1) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
  rank_zero_warn(
Epoch 0: 100%|██████████| 1/1 [00:04<00:00, 4.37s/it, v_num=27]
`Trainer.fit` stopped: `max_epochs=1` reached.



Code (the same as in the manual): code_and_config_fit_problem.zip

Caio-Giulio-Cesare commented 1 year ago

I think I have solved the training problem. In addition to setting, in the code,

model.config["epochs"] = 5

I also set the parameter in the .yml configuration file (in the train section):

epochs: 5

Now training runs for all 5 epochs (with 1 worker).
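For anyone who hits the same thing, the relevant fragment of deepforest_config.yml would then look roughly like this; the csv_file and root_dir values are placeholders, and the field names are worth checking against your own config file:

```yaml
# train section of deepforest_config.yml (paths are placeholders)
train:
    csv_file: annotations.csv    # hand-labeled boxes for fine-tuning
    root_dir: path/to/images     # folder the image_path column is relative to
    epochs: 5                    # number of training epochs
```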

Thanks and bye

ethanwhite commented 1 year ago

Glad you got it sorted out. Whether setting model.config["epochs"] = 5 in your code overrides the value in the config file depends on when it is set, but generally setting it in the config file is the right way to go. Since you've got it changed in the config, you can drop the model.config["epochs"] = 5 line entirely.

I'll go ahead and close this issue, but definitely feel encouraged to open new issues as you proceed if you have questions or run into problems.

Caio-Giulio-Cesare commented 1 year ago

OK, thanks again, and again many compliments on DeepForest... truly great work. Bye.