OPHoperHPO / image-background-remove-tool

✂️ Automated high-quality background removal framework for an image using neural networks. ✂️
https://carve.photos
Apache License 2.0

Parameter tuning not working for these grayscale images #115

Closed · amal-r1 closed 2 years ago

amal-r1 commented 2 years ago

I am working on a project in which I want to extract the white path in these images, but I am not getting a result in which the edges are clean. At least the first 2 images attached have similar backgrounds, so it should be easy to detect the white path. I have tried all 4 models along with extensive parameter tuning, but the issue persists. Any inputs or suggestions are highly appreciated.

(attached images: 1: preprocessed, 2: grey_image, 3: 57)

OPHoperHPO commented 2 years ago

Hello, @amal-r1

These types of images are not typical, so the project's models and neural networks have not been optimized for them. The main goal of this project is to remove the background from ordinary images such as portraits and photos of animals, objects, etc. Therefore, it is difficult to guarantee quality results on your type of images.

In your case, I would recommend trying to remove the background by calculating the difference in brightness between individual zones of the image using cv2; this should work for such images. However, this method does not always cope well when the brightness of the required zones varies from image to image.
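
For illustration, here is a minimal sketch of that cv2 idea, assuming the white path is consistently brighter than its surroundings; the filenames are placeholders:

```python
import cv2

# "grey_image.png" is a placeholder for one of your grayscale inputs.
img = cv2.imread("grey_image.png", cv2.IMREAD_GRAYSCALE)

# Smooth out noise so brightness differences between zones dominate.
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# Otsu's method picks a global threshold that separates the bright path
# from the darker background; if lighting varies across the frame,
# cv2.adaptiveThreshold is the local alternative.
_, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Keep only the largest connected component, assuming it is the path.
num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
if num_labels > 1:
    largest = 1 + stats[1:, cv2.CC_STAT_AREA].argmax()
    mask = ((labels == largest) * 255).astype("uint8")

cv2.imwrite("path_mask.png", mask)
```

This is exactly the weakness I mentioned above: a global threshold will break if the path brightness changes between images, which is why a trained segmentation model is the more robust option.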

For better background removal in a domain where zone brightness varies, I recommend using CarveKit in conjunction with your own segmentation model.

You need to train your own TRACER B7 model (the w/o edges version) on your images, then load it in code into a TRACER class object from the framework and pass it to the interface object. This will give you better background removal directly on your images.

You can find all the necessary materials for training your own TRACER neural network model in its original repository.

amal-r1 commented 2 years ago

Thank you for your valuable suggestions @OPHoperHPO. I got most of your points. I actually tried creating a segmentation model to directly extract the white path here, but at the edges some pixels were always getting cut off due to minor lighting changes. So could you elaborate more on using CarveKit in conjunction with my segmentation model? Also, it is hard to annotate the edges, so how much data would be ideal to retrain the TRACER model?

OPHoperHPO commented 2 years ago

> I got most of your points. I actually tried creating a segmentation model to directly extract the white path here, but at the edges some pixels were always getting cut off due to minor lighting changes.

Due to the nature of neural networks, every model has a limited accuracy with which it can keep the edges of the cut-out area precise, so what you are seeing is most likely related to that.

Model training is a long process, but in short it should look like this:

  1. Prepare a dataset of pairs (original image, mask of the area to be cut out: white is the desired area, black is the background). The number of images depends on the complexity of the domain and the augmentation used. Start with about 1000 different images and increase if the model adapts poorly to unseen data. From experience, for simple domains without a variety of backgrounds this will be more than enough. Example datasets can be found by name in the README of the TRACER repository.
  2. Split your dataset into training and validation subsets.
  3. Initialize the TRACER B7 model from scratch (it is called arch=7 in that repository), then load the pre-trained EfficientNet-B7 weights into the model's encoder. (Before that, replace the files in the root of the repository with the files from the w.o_edges folder.) Then train as usual until F1 and S-measure are maximized and MAE is minimized on the validation set. The training code is also in that repository.
  4. Load the resulting best checkpoint into CarveKit's TRACER wrapper, then pass the wrapper to the interface object, as shown in one of the examples in the CarveKit README and in the sketch after this list.
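
For step 4, a minimal sketch based on the usage example in the CarveKit README. The checkpoint and image filenames are placeholders; since the TRACER wrapper is a regular torch.nn.Module, loading the custom weights via load_state_dict is assumed here (state-dict keys from the TRACER repository checkpoint may need renaming to match the wrapper):

```python
import PIL.Image
import torch

from carvekit.api.interface import Interface
from carvekit.ml.wrap.tracer_b7 import TracerUniversalB7
from carvekit.ml.wrap.fba_matting import FBAMatting
from carvekit.pipelines.preprocessing import PreprocessingStub
from carvekit.pipelines.postprocessing import MattingMethod
from carvekit.trimap.generator import TrimapGenerator

device = "cuda" if torch.cuda.is_available() else "cpu"

# TRACER wrapper from CarveKit; load your own trained checkpoint
# ("tracer_b7_custom.pth" is a placeholder) instead of the stock weights.
seg_net = TracerUniversalB7(device=device, batch_size=1)
seg_net.load_state_dict(torch.load("tracer_b7_custom.pth", map_location=device))

# FBA matting post-processing refines the mask edges.
fba = FBAMatting(device=device, input_tensor_size=2048, batch_size=1)
trimap = TrimapGenerator()

interface = Interface(
    pre_pipe=PreprocessingStub(),
    post_pipe=MattingMethod(matting_module=fba, trimap_generator=trimap, device=device),
    seg_pipe=seg_net,
)

image = PIL.Image.open("57.png")  # placeholder input image
image_wo_bg = interface([image])[0]
image_wo_bg.save("57_no_bg.png")
```

The matting step in the post-processing pipeline is what should help with your edge problem: it refines the boundary of the segmentation mask rather than taking the raw model output as-is.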
amal-r1 commented 2 years ago

Thank you, this was very helpful.