hamidriasat / BASNet

Code for Boundary-Aware Segmentation Network for Mobile and Web Applications
MIT License

help #3

Open Aziz-Prithibee13 opened 7 months ago

Aziz-Prithibee13 commented 7 months ago

In the example, it gives the predicted mask. How can I get the segmented image from it?

Thanks in advance.

hamidriasat commented 7 months ago

You can update the display method with the code below to show an overlay image. I have tested it in my BASNet example: right before showing the predicted segmentation map, replace the display method, and it will also show the overlay of the predicted mask on the input image.

import numpy as np
from matplotlib import pyplot as plt
from tensorflow import keras


def display(display_list):
    title = ["Input Image", "True Mask", "Predicted Mask", "Overlay"]

    # Create a new figure with subplots
    plt.figure(figsize=(15, 5))

    # Display the original image, true mask, and predicted mask
    for i in range(3):
        plt.subplot(1, 4, i + 1)
        plt.title(title[i])
        plt.imshow(keras.utils.array_to_img(display_list[i]), cmap="gray")
        plt.axis("off")

    # Display the overlay of the predicted mask on the original image
    plt.subplot(1, 4, 4)
    plt.title(title[3])

    # display_list[0] is the original image, display_list[2] the predicted mask
    segmented_part = np.multiply(display_list[0], display_list[2])
    # segmented_part = np.multiply(display_list[0], 1 - display_list[2])  # inverted: keep background instead
    plt.imshow(keras.utils.array_to_img(segmented_part), cmap="gray")
    plt.axis("off")

    plt.show()
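
The overlay step itself is just an element-wise multiply of the image by the (0..1) mask. A minimal, self-contained sketch of that one step with synthetic NumPy arrays (no model or plotting needed; the array values are made up for illustration):

```python
import numpy as np

# Hypothetical 4x4 grayscale "image" and a binary predicted mask
image = np.full((4, 4, 1), 0.8, dtype=np.float32)
mask = np.zeros((4, 4, 1), dtype=np.float32)
mask[1:3, 1:3] = 1.0  # mask covers the centre 2x2 region

# Element-wise multiply keeps pixels where mask == 1 and zeroes the rest
segmented_part = np.multiply(image, mask)

print(segmented_part[..., 0])
```

With a soft (probability) mask instead of a hard binary one, the same multiply gives a smoothly faded overlay.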
Aziz-Prithibee13 commented 7 months ago

Ok,

Let me give you the details.

I am working on a thesis on skin disease segmentation and decided to use a model published in 2021 or later. I tried your model and it gives a good result on the example. I figured out how to display the segmented image with the help of ChatGPT.

I have a dataset of 9000+ images with masks. When I load them into your example, I noticed several things:

  1. It only takes about 128 images for training and only 4 for validation, not the whole ~10000 images.
  2. After many failed attempts, the model runs.
  3. But at the last stage, when I try to display the predictions, it gives a wrong prediction mask: sometimes fully black, sometimes just wrong.

Here is the code

https://colab.research.google.com/drive/1cFLRS9CwZSL4PLFs13DN38Tw5RHrPwhD

Here is dataset link, https://www.kaggle.com/datasets/surajghuwalewala/ham1000-segmentation-and-classification

Can you help me run BASNet successfully on my dataset?

hamidriasat commented 7 months ago

@Aziz-Prithibee13 I noticed three mistakes in the code you shared.

  1. In the load_paths method, it is loading the paths of only the first 140 images and masks, which is why you are not seeing your full dataset in the dataloader. Update load_paths with the code below.
import os
from glob import glob

def load_paths(path, split_ratio):
    # Collect ALL image/mask paths, then split by ratio instead of a fixed count
    images = sorted(glob(os.path.join(path, "images/*")))
    masks = sorted(glob(os.path.join(path, "masks/*")))
    len_ = int(len(images) * split_ratio)
    return (images[:len_], masks[:len_]), (images[len_:], masks[len_:])
  2. Second, it is loading the default pretrained weights after training, which overwrites your trained weights; remove the lines below.
    !!gdown 1OWKouuAQ7XpXZbWA3mmxDPrFGW71Axrg
    basnet_model.load_weights("./basnet_weights.h5")
  3. Epochs are still set to 1; at least 50 to 100 epochs will yield reasonable results, and for better performance you will have to train even longer. See the paper for more details.
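
A quick way to sanity-check the corrected load_paths from point 1 is to run it against a temporary directory (a sketch with a hypothetical images/ and masks/ layout and made-up file names):

```python
import os
import tempfile
from glob import glob

def load_paths(path, split_ratio):
    images = sorted(glob(os.path.join(path, "images/*")))
    masks = sorted(glob(os.path.join(path, "masks/*")))
    len_ = int(len(images) * split_ratio)
    return (images[:len_], masks[:len_]), (images[len_:], masks[len_:])

# Build a fake dataset of 10 empty image/mask files
root = tempfile.mkdtemp()
for sub in ("images", "masks"):
    os.makedirs(os.path.join(root, sub))
    for i in range(10):
        open(os.path.join(root, sub, f"{i:04d}.png"), "w").close()

(train_imgs, train_masks), (val_imgs, val_masks) = load_paths(root, 0.8)
print(len(train_imgs), len(val_imgs))  # 8 train, 2 validation
```

With your ~10000-image dataset and the same 0.8 ratio, this would give roughly 8000 training and 2000 validation pairs instead of the fixed 140.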

Hopefully, after these changes, you will be able to train your model. Remember that BASNet is a deep model and needs resources and time to train; you will not be able to train it on the free Colab tier. See Training Settings for more details.
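
For point 3, the epoch count is just the epochs argument to fit(); with the real model it would be something like basnet_model.fit(train_dataset, validation_data=val_dataset, epochs=100). A runnable toy stand-in (a tiny hypothetical model on random data, not BASNet itself) just to show what the argument controls:

```python
import numpy as np
from tensorflow import keras

# Toy stand-in model: the only point is that `epochs` sets the number of
# full passes over the training data (the repo example leaves it at 1).
model = keras.Sequential([keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")

history = model.fit(x, y, epochs=5, verbose=0)
print(len(history.history["loss"]))  # one loss value per epoch -> 5
```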

Prithibee13 commented 4 months ago

Hello again

I have run this code multiple times now. Because I am using free Colab, the epoch count is still only 1 or 2 (up to 4 for a smaller dataset); no problem there.

I am now trying to use probabilistic models like a Bayesian neural network to estimate the model's uncertainty, but I ran into some problems:

  1. The batch size cannot exceed 3, otherwise the model gives a graph error.
  2. I tried to save the model with => basnet_model.save('/content/basnet_ph2.h5'), but I get an error when loading the weights => print(tf.keras.models.load_model('/content/basnet_ph2.h5').get_weights())

    TypeError                                 Traceback (most recent call last)
    in ()
    ----> 1 print(tf.keras.models.load_model('/content/basnet_ph2.h5').get_weights())

    2 frames

    in __init__(self, **kwargs)
          3
          4     def __init__(self, **kwargs):
    ----> 5         super().__init__(name="basnet_loss", **kwargs)
          6         self.smooth = 1.0e-9
          7

    TypeError: keras.src.losses.Loss.__init__() got multiple values for keyword argument 'name'

The same problem occurs when trying to load the model or when using Bayesian methods on it. How can I fix this? Thanks in advance.

hamidriasat commented 4 months ago

Hi @Prithibee13

  1. The batch size is not increasing because of low GPU memory. That's not a programming problem: either decrease your input image size or arrange a better GPU (you could even use the paid version of Colab for this).

  2. Regarding not being able to load the weights, try my approach of loading only the weights, like below.

    basnet_model.load_weights("./basnet_weights.h5")
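
The TypeError itself is a plain Python issue rather than anything Keras-specific: when load_model rebuilds a custom loss, it passes the saved config (which already contains name='basnet_loss') into __init__, and the super() call then hard-codes name a second time. A minimal reproduction with plain stand-in classes (hypothetical names mirroring the custom loss), plus the usual fix of dropping the incoming name before forwarding:

```python
# Stand-in for keras.losses.Loss: accepts a `name` keyword.
class BaseLoss:
    def __init__(self, name=None, **kwargs):
        self.name = name

# Broken pattern: hard-codes name AND forwards **kwargs. When a loader
# passes the saved config (which includes name=...), `name` arrives twice.
class BrokenBasnetLoss(BaseLoss):
    def __init__(self, **kwargs):
        super().__init__(name="basnet_loss", **kwargs)

# Fixed pattern: discard any incoming `name` before forwarding.
class FixedBasnetLoss(BaseLoss):
    def __init__(self, **kwargs):
        kwargs.pop("name", None)
        super().__init__(name="basnet_loss", **kwargs)

try:
    BrokenBasnetLoss(name="basnet_loss")  # what load_model effectively does
    failed = False
except TypeError:
    failed = True

print(failed)                                    # True: same TypeError as above
print(FixedBasnetLoss(name="basnet_loss").name)  # basnet_loss
```

Loading only the weights with load_weights, as above, sidesteps the problem entirely because the custom loss is never deserialized.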