Open tiffmm3 opened 5 months ago
Hi Tiffany,
first of all, when you import your label maps created in ArcGIS, check that they are aligned with the orthoimages (this may or may not happen depending on the georeferencing; TagLab manages georeference information but does not use it to align the orthos). To make this easier you can use the transparency slider to switch between the imported labels and the orthoimages.
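If you want to sanity-check the alignment outside of TagLab, a minimal sketch like the one below (assuming the rasterio package and placeholder filenames, not part of TagLab) compares the georeferenced extent and pixel dimensions of the label map against its orthoimage:

```python
# Minimal sketch (assumes the rasterio package and placeholder filenames; not part
# of TagLab) that compares the extent and size of a label map and its orthoimage.
import rasterio

with rasterio.open("ortho.tif") as ortho, rasterio.open("labels.tif") as labels:
    print("ortho  bounds:", ortho.bounds, "size:", ortho.width, "x", ortho.height)
    print("labels bounds:", labels.bounds, "size:", labels.width, "x", labels.height)
    aligned = (ortho.bounds == labels.bounds) and \
              (ortho.width, ortho.height) == (labels.width, labels.height)
    print("aligned:", aligned)
```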
The second issue that you may have is the resolution of your map. If the resolution is not high, you can experience problems during tile generation.
Finally, when you export the dataset, you need to re-set the area of export. I mean, the working area influences many options of TagLab, but not the export of the dataset; for that purpose you need to indicate the region of interest on the map again.
I hope that these suggestions help you.
All the best
I forgot one important thing. Pay attention that the colors you used in the TIFF for the labels and the colors of your dictionary in TagLab (you need to set a label color-label name dictionary for your project) are exactly the same.
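A quick way to catch mismatches is to list the colors that actually occur in the label TIFF and compare them against the dictionary. Here is a minimal sketch (assuming Pillow and NumPy, with a hypothetical dictionary, not your real project file):

```python
# Minimal sketch (assumes Pillow and NumPy; the dictionary below is a placeholder,
# not a real TagLab project dictionary) that checks whether every color found in a
# label TIFF exactly matches one of the class colors in the dictionary.
import numpy as np
from PIL import Image

class_colors = {            # hypothetical label name -> RGB color
    "Porites": (255, 0, 0),
    "Background": (0, 0, 0),
}

img = np.array(Image.open("labels.tif").convert("RGB"))
found = {tuple(c) for c in np.unique(img.reshape(-1, 3), axis=0)}
unexpected = found - set(class_colors.values())
print("colors in the TIFF:", found)
print("colors not in the dictionary:", unexpected or "none")
```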
Best, Massimiliano
Awesome, I will try all of this out, and may return with follow-up questions. Thank you Massimiliano!
Hi again @maxcorsini, unfortunately I am still running into issues. Here's a run-down of what I did so far:
Could not get the rasters to export to TIFF files while keeping the color values. Also could not get them to have a black background.
Solution: Export Raster > NoData value = 0 > Use Colormap, Use Renderer, Compression Type LZW. This makes a TIFF file BUT it gets rid of the color values (this is still in ArcGIS Pro).
Re-do the color values by copying the raster symbology (one by one)
Add black background: Share > Export map > Geotiff
For each tiff file, cut out the ocean and the Esri tags: Photoshop > crop
(Tested out one tiff file) Upload “New Map” to TagLab (no orthoimage, just the one tiff file)
Export > Training data
Check the folder > all the labels are black still :((
File > Train Your Network > the only classes recognized were black/background
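One quick check is to scan the exported label tiles and count how many non-black pixels each one contains; if every tile reports zero, the class colors were lost before the export. A minimal sketch (assuming Pillow and NumPy, with a placeholder path to the exported dataset folder):

```python
# Minimal sketch (assumes Pillow and NumPy; the folder path is a placeholder) that
# reports how many non-background (non-black) pixels each exported label tile has.
import glob
import numpy as np
from PIL import Image

for path in sorted(glob.glob("training_dataset/training/labels/*.png")):
    arr = np.array(Image.open(path).convert("RGB"))
    nonblack = int(np.count_nonzero(arr.any(axis=-1)))   # pixels with any non-zero channel
    print(f"{path}: {nonblack} non-background pixels")
```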
Any ideas on how to fix the labeling problem?
Unrelated to the custom network question previously posted^
I got the automatic classifier (Porites) working, but I am experiencing varying levels of accuracy. See below for examples.
1) After running the classifier and saving the project, I try to open the project again and TagLab crashes (Not Responding). This occurs every other time I use the classifier; how do I prevent this?
2) I am hoping to use the automatic classifier to build a training set to make a custom network. I was wondering if there are any tips/advice for achieving greater accuracy? Screenshots link: https://docs.google.com/document/d/1aawuAxCXUUehq_4mRMLlSF3YkmPnf9qyDJpgNMOQlok/edit?usp=sharing
3) Does the Porites classifier improve over time? As you run more orthomosaics, does the model 'learn' from them?
Hi, I see the labels load in the TagLab interface, but this seems to be the image of the labels and not the labels imported as regions (I see black stuff instead of the orthoimage even though the transparency is set to 50%). To import the labels correctly you need to add an orthoimage, and then import the Label Map corresponding to that orthoimage. Please, let me know if this helps.
Thank you @maxcorsini for the response! I attempted what you said above, but when I upload a label image it is very small in comparison to the orthomosaic. And the labels (which I checked correspond to the color dictionary) are all labeled as one type, despite there being multiple.
Here are screenshots of the labeling issue: https://docs.google.com/document/d/1aawuAxCXUUehq_4mRMLlSF3YkmPnf9qyDJpgNMOQlok/edit
*First Edit: I added more screenshots that show what the training network looks like for this orthomosaic/label image. After exporting the training dataset and doing Train Your Network, TagLab only recognized 99% background and 0.27% Cythera.
I looked at the exported training dataset and while there were some label images, most of them were black. I'm guessing this is because of the size difference between the label image and the ortho? Does the label image have to be a raster?
Here is an exact breakdown of what I did so far: In ArcGIS Pro open the Polygon > Colormap Symbology with Layer file > Share > Export PDF > Export JPEG > Photoshop to clip the Esri tags > Resize image in pixels to the exact orthomosaic height and width > Export highest-quality JPEG > Upload as label image
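For anyone scripting the resize step instead of doing it in Photoshop, here is a minimal sketch (assuming Pillow and placeholder filenames) that uses nearest-neighbor resampling so the flat class colors are not blended by interpolation:

```python
# Minimal sketch (assumes Pillow; filenames are placeholders) that resizes a label
# image to the exact pixel size of the orthomosaic using nearest-neighbor
# resampling, so class colors stay flat instead of being blended.
from PIL import Image

Image.MAX_IMAGE_PIXELS = None              # orthomosaics often exceed PIL's default size limit
ortho = Image.open("ortho.jpg")            # placeholder orthomosaic
labels = Image.open("labels.jpg").convert("RGB")             # placeholder label image
labels = labels.resize(ortho.size, resample=Image.NEAREST)   # nearest-neighbor keeps flat colors
labels.save("labels_resized.png")          # PNG avoids JPEG artifacts on flat label colors
```

One caveat: JPEG compression can slightly shift flat label colors, which may break the exact match with the TagLab color dictionary, so saving the label image as PNG or TIFF is safer.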
Second edit: It ended up working!! Here is what I did for anyone who is struggling with this:
^This only works if you have the digitized version of your coral orthos! If anyone wants the raster_to_jpeg.py feel free to comment or DM :)
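For reference, a minimal sketch of what such a conversion could look like (this is not the author's raster_to_jpeg.py; it assumes the rasterio and Pillow packages, an RGB raster export, and placeholder filenames):

```python
# Minimal sketch (NOT the author's raster_to_jpeg.py): read a classified raster
# and write it out as a plain image. Assumes rasterio and Pillow and a raster
# that already stores RGB bands; filenames are placeholders.
import numpy as np
import rasterio
from PIL import Image

with rasterio.open("classified_raster.tif") as src:   # placeholder classified raster
    bands = src.read()                                 # shape: (bands, rows, cols)

rgb = np.transpose(bands[:3], (1, 2, 0)).astype(np.uint8)   # assumes at least 3 RGB bands
Image.fromarray(rgb).save("labels.png")                      # PNG keeps flat label colors exact
```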
So excited that the custom network worked!! I am now looking into ways to improve the model. Looking at the results, do you @maxcorsini have any suggestions?
Number of epochs: 10
Learning rate: 0.00005
L2 regularization: 0.0005
Batch size: 4
Accuracy: 0.598
Mean Intersection over Union: 0.534
Hello! I am trying to use the automatic classifier, but I am running into this error:
I am also running into an issue when attempting to create a custom network. Here are the steps I have taken so far:
Do the manual segmentation and classification of 10 maps. These are originally shapefiles which hold a field called Benthic_features that labels each shape as a coral type (ArcGIS Pro)
Export 10 TIFF files (with a black background; each color in a file represents a new class/coral type. The coral type's color is NOT consistent across the TIFF files, and I am not sure if that's okay?)
Create a New Project > import 10 new maps (tiff files) > so now I can see all my maps in the "Layers" section
Then export my working areas as a training dataset: File > Export > Export New Training Dataset (I'm not sure how to adjust the pixel size according to the map scale). When I look into the three folders (training, validation, test) I see that there are only images in the validation and test folders, and the 'label' folders within the validation and test folders are filled with black PNGs. Does this mean they did not identify any labels? Another question: is the training folder empty because I am supposed to upload the corresponding orthomosaics of the TIFF files?
I haven't gotten to this step yet, but when I try to run "Train Your Network" with my existing training data, there's an error that says:
Traceback (most recent call last):
  File "C:\Users\OliviaPC\Documents\TagLab-main\source\QtTYNWidget.py", line 200, in chooseDatasetFolder
    self.analyzeDataset()
  File "C:\Users\OliviaPC\Documents\TagLab-main\source\QtTYNWidget.py", line 373, in analyzeDataset
    target_classes, freq_classes = CoralsDataset.importClassesFromDataset(labels_folder, self.project_labels)
  File "C:\Users\OliviaPC\Documents\TagLab-main\models\coral_dataset.py", line 315, in importClassesFromDataset
    dict_freq[key] = float(dict_freq[key]) / float(total_pixels)
ZeroDivisionError: float division by zero
I'm guessing this is because I didn't get any labels identified during the last step.
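That guess is consistent with the traceback: the per-class pixel counts are normalized by the total number of labeled pixels, so label images with no recognized class colors lead to a division by zero. A simplified illustration of the failing pattern (not TagLab's actual code):

```python
# Simplified illustration of the failing pattern from the traceback (not TagLab's
# actual implementation): per-class pixel counts are divided by the total number
# of recognized labeled pixels, which is zero when no class color was matched.
dict_freq = {"Porites": 0, "Cythera": 0}      # hypothetical classes, zero matched pixels
total_pixels = sum(dict_freq.values())        # 0 -> nothing was recognized as a label

for key in dict_freq:
    dict_freq[key] = float(dict_freq[key]) / float(total_pixels)   # ZeroDivisionError
```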
Thank you!