Closed Ekinkit closed 5 years ago
Have a look at the documentation of the ImageDataProvider.
I have found it easiest to do a quick conversion from whatever format your images are in to .tif; in the process you can also preprocess your images. However, as noted in the documentation, you can specify any format as long as it is readable by Pillow.
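A quick conversion like that can be done with Pillow in a few lines. This is a minimal sketch (the directory layout, file pattern, and grayscale preprocessing step are my own assumptions, not part of the repo):

```python
import glob
import os

from PIL import Image


def convert_to_tif(src_dir, dst_dir, pattern="*.jpg"):
    """Convert every image matching `pattern` in src_dir to .tif in dst_dir.

    The grayscale conversion is just an example of preprocessing done
    "in the process"; adapt or drop it for your own data.
    """
    os.makedirs(dst_dir, exist_ok=True)
    converted = []
    for path in sorted(glob.glob(os.path.join(src_dir, pattern))):
        img = Image.open(path).convert("L")  # example preprocessing: to grayscale
        name = os.path.splitext(os.path.basename(path))[0] + ".tif"
        out_path = os.path.join(dst_dir, name)
        img.save(out_path)
        converted.append(out_path)
    return converted
```

Pillow infers the output format from the `.tif` extension, so no explicit format argument is needed.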
Thank you so much~
@Ekinkit I don't know if you were successful with jpg, but here is my experience: my training images were jpg and my labels were VGG JSON files. I converted the JSON labels to jpg, but when I zoomed into the jpg labels as far as possible, I found the masks were not precisely binary: the lossy JPEG compression had introduced grey pixels instead of pure black and white. Once I converted everything to .png, all looked good. I think I will avoid JPG from now on!
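The grey pixels come from JPEG's lossy compression around sharp edges; PNG is lossless, so a binary mask survives a save/load round trip exactly. A small standalone check (not tied to any particular repo) illustrates the difference:

```python
import io

import numpy as np
from PIL import Image


def roundtrip_values(mask, fmt):
    """Save a mask in the given format in memory and return the set of
    pixel values present after reloading it."""
    buf = io.BytesIO()
    Image.fromarray(mask).save(buf, format=fmt)
    buf.seek(0)
    return set(int(v) for v in np.unique(np.array(Image.open(buf))))


# A strictly binary mask with one sharp vertical edge.
mask = np.zeros((32, 32), dtype=np.uint8)
mask[:, 16:] = 255

print("PNG values:", roundtrip_values(mask, "PNG"))    # lossless: stays {0, 255}
print("JPEG values:", roundtrip_values(mask, "JPEG"))  # typically picks up grey values near the edge
```

If a mask must stay binary after going through JPEG anyway, you would have to re-threshold it on load, which is one more reason to prefer PNG for labels.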
@soroushr Thank you for your reply. I successfully trained the model with .jpg input files using Keras. This repo gave me some inspiration: https://github.com/ShawDa/unet-rgb
I am new to machine learning, so sorry for such an easy question. I have read the files in this GitHub repo and found that many people use .tif files as their input. Here is my problem: how can I use image_gen.py to create my own train/val dataset from .jpg/.png images? Do I need to convert them to another format first? Can someone give me some ideas on how to start training?
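Since the provider mentioned above can reportedly read anything Pillow handles, one option is to keep the .jpg/.png files and load matched image/mask pairs yourself. Below is a minimal standalone sketch of such a loader; the directory layout, the `_mask.png` label suffix, and the 127 threshold are assumptions for illustration, not the repo's actual conventions:

```python
import glob
import os

import numpy as np
from PIL import Image


def load_pairs(search_path, data_suffix=".jpg", mask_suffix="_mask.png"):
    """Load (image, mask) pairs from a directory.

    Assumed layout: each image `name.jpg` sits next to a label
    `name_mask.png`. Images are scaled to [0, 1]; masks are
    thresholded back to {0, 1} so stray grey pixels cannot leak in.
    """
    pairs = []
    for img_path in sorted(glob.glob(os.path.join(search_path, "*" + data_suffix))):
        mask_path = img_path[: -len(data_suffix)] + mask_suffix
        img = np.array(Image.open(img_path), dtype=np.float32) / 255.0
        mask = np.array(Image.open(mask_path).convert("L")) > 127
        pairs.append((img, mask.astype(np.float32)))
    return pairs
```

From a list of pairs like this you can split off a validation set and feed batches to whichever training loop you use.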