recyclerjh opened this issue 1 year ago (status: Open)
I am excited to try training my network, but I am still having trouble. Any suggestions on what to do so it won't crash?
Dear recyclerjh,
This error is caused by the fact that total_pixels is zero, which sounds strange. Are you sure the dataset is loaded in a correct way? Please give me more details about how you are proceeding to train the network, so I can better understand what happens.
Best, Massimiliano
Hey Massimiliano,
Thanks tons for getting back to me. I think I loaded my dataset correctly. I started in one area of the image and tried to draw polygons and classify everything in that area. I am trying to keep it simple: just a couple of types of vegetation and mud, essentially. Once I had those set, I went to export the training set. I select a folder and then set the area to export, using the box icon to draw a box around the polygons I have made. I can see that in the folder I designated there are subfolders 'test', 'training' and 'validation'. These each have 'images' and 'labels' folders in them, but the ones inside 'training' are empty (a quick way to check this is sketched after this message).
Let me know what you think and if you would like some more info.
Here are some images. Taglab_example is the photo of the area I am exporting. The other two are the images and labels in the 'test' folder from the training files folder.
Hope that makes sense
Thanks tons for your help.
John
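A quick way to double-check an export like the one described above is to count what actually landed in each split. This is only a diagnostic sketch, not part of TagLab; dataset_root is a placeholder for the folder chosen at export time, and the training/validation/test layout with images/labels subfolders is the one described in the message above.

```python
# Diagnostic sketch (not TagLab code): count files produced by the dataset export.
# "dataset_root" is a placeholder for the folder you selected when exporting.
from pathlib import Path

dataset_root = Path(r"C:\path\to\exported_dataset")  # adjust to your export folder

for split in ("training", "validation", "test"):
    for kind in ("images", "labels"):
        folder = dataset_root / split / kind
        count = sum(1 for p in folder.glob("*") if p.is_file()) if folder.exists() else 0
        print(f"{split}/{kind}: {count} files")
```

If training/images and training/labels come back with zero files while the other splits do not, the division-by-zero crash reported above is expected: there is nothing to compute class frequencies from.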
Hey there!
I happen to be encountering the exact same issue. When I try to train my network, a prompt appears saying it may take several minutes, and then TagLab crashes with the exact same error.
It would be extremely helpful if someone had a fix to this problem.
Cheers! Nikhil
Dear recyclerjh,
the reason for the crash is that the training folder is empty. What is the resolution of your orthoimage, and how large is the export area?
Dear Maxcorsini,
I am running into the same issue where my training data folder is empty. I uploaded both the orthomosaic and DEM files. My resolution is a 1 mm pixel size and the dimensions are 566 x 530. The area to export in this case was 8, 15, 558, 515.
At the same time, both the test and validation folders have images and labels.
Cheers! Nikhil
Dear Nicktmss,
that is exactly the point! TagLab is designed to work on orthoimages. Typically, such orthoimages are very large, e.g. 20000 x 20000 pixels. Hence, TagLab cuts the orthoimage into tiles to create the dataset. Your orthoimage is too small, and TagLab cannot create any tiles from it. We will add a warning for this in a future release. Keep in mind that, with so little data, you cannot create a classifier that works reliably.
Hope it helps.
Best, Massimiliano
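To make the tiling argument concrete: the number of training tiles depends on how many tile-sized windows fit inside the exported area. The sketch below assumes a nominal tile size of 1024 px purely for illustration (the actual tile size and overlap used by TagLab may differ), so treat the numbers as order-of-magnitude only.

```python
# Rough back-of-envelope for why a small orthoimage yields an empty training set.
# TILE is an assumed nominal tile size; TagLab's real tile size and overlap may differ.
TILE = 1024

def tiles_that_fit(width_px, height_px, tile=TILE):
    # Non-overlapping tile-sized windows that fit in each dimension.
    return (width_px // tile) * (height_px // tile)

print(tiles_that_fit(20000, 20000))  # a typical orthoimage: hundreds of tiles
print(tiles_that_fit(566, 530))      # a 566 x 530 image: 0 tiles -> empty training folder
```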
Ours is approximately 30,000 x 21,000 pixels at approximately 1.5 cm/pixel. I think I had to scale it down to 5 cm/pixel to keep it within the maximum resolution and size.
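For reference, the downscaling mentioned above works out as follows; this is plain arithmetic on the figures quoted in the thread, nothing TagLab-specific.

```python
# Resampling a ~30,000 x 21,000 px map from 1.5 cm/px to 5 cm/px.
orig_w, orig_h = 30000, 21000
orig_res_cm, new_res_cm = 1.5, 5.0     # cm per pixel

scale = orig_res_cm / new_res_cm       # 0.3
print(orig_w * scale, orig_h * scale)  # 9000.0 x 6300.0 px, far below the 32767 x 32767 limit
```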
Hi recyclerjh,
I do not understand your last comment. TagLab supports images of up to 32767 x 32767 pixels, so you do not need to scale down an orthoimage of 30000 x 21000 pixels.
Regarding the problem with the empty training set, in your case the problem is not the image resolution. Please check the size of the export area used (it may not coincide with the working area).
Best
Yes, you are correct. I was looking at the numbers in TagLab; the size is 31k x 20k.
The size of the export area is, I believe, correct. I am using the box to outline it before the export.
Hi recyclerjh,
the size of your map is OK. If the export area is also correct, let's check whether the problem is the target resolution versus the current resolution. You say that the scale of your map is 1.5 cm/pixel. What is the target resolution you entered?
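For context on that question: if the exported crop is resampled from the map's pixel size to the target pixel size (which is what the question above suggests), a much coarser target resolution shrinks the crop and can leave too few pixels to cut any training tiles from. A minimal illustration, again assuming a nominal 1024 px tile (not necessarily TagLab's actual value):

```python
# How a coarse target resolution shrinks the exported crop.
def rescaled_size(width_px, height_px, current_res_cm, target_res_cm):
    factor = current_res_cm / target_res_cm
    return int(width_px * factor), int(height_px * factor)

# An export area of 5000 x 5000 px on a 1.5 cm/px map:
print(rescaled_size(5000, 5000, 1.5, 1.5))   # (5000, 5000): several 1024 px tiles
print(rescaled_size(5000, 5000, 1.5, 10.0))  # (750, 750): smaller than one tile, training split stays empty
```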
I have created and exported a training set. I was then able to export the dataset after making the necessary change to line 213: qimage_cropped = qimage_map.copy(int(left), int(top), int(w), int(h))
I am now getting an error when I try to train the network. After the message saying the dataset will be analyzed, I get this:

Traceback (most recent call last):
  File "C:\Program Files\TagLab\source\QtTYNWidget.py", line 200, in chooseDatasetFolder
    self.analyzeDataset()
  File "C:\Program Files\TagLab\source\QtTYNWidget.py", line 373, in analyzeDataset
    target_classes, freq_classes = CoralsDataset.importClassesFromDataset(labels_folder, self.project_labels)
  File "C:\Program Files\TagLab\models\coral_dataset.py", line 315, in importClassesFromDataset
    dict_freq[key] = float(dict_freq[key]) / float(total_pixels)
ZeroDivisionError: float division by zero
Suggestions?
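Not an official fix, but one quick way to see whether the training labels are really empty before launching the training is to count what is inside training/labels. The sketch below is only a diagnostic; labels_folder is a placeholder for your exported training/labels directory, and it assumes the labels were written as colour image tiles.

```python
# Diagnostic only: inspect the exported training labels before training.
# "labels_folder" is a placeholder path; labels are assumed to be colour image tiles.
from pathlib import Path

import numpy as np
from PIL import Image

labels_folder = Path(r"C:\path\to\dataset\training\labels")  # adjust to your export

files = [p for p in sorted(labels_folder.glob("*")) if p.is_file()]
print(f"{len(files)} label files found")

non_background = 0
for f in files:
    arr = np.array(Image.open(f).convert("RGB"))
    # count pixels that are not pure black, i.e. carry some label colour
    non_background += int((arr.sum(axis=2) > 0).sum())

print(f"non-background label pixels: {non_background}")
# If no label files are found at all, total_pixels in importClassesFromDataset
# stays at zero and the float division in the traceback above fails.
```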