Kaushal-11 closed this issue 4 months ago
Modify it for different activation functions and optimizers
Other activation functions and optimizers did not yield results as good in terms of segmentation accuracy and training efficiency.
There is a clear reason for using this combination in such a complex model: in the input layers, ReLU handles the vanishing gradient problem, and in the output layer, Sigmoid is suitable for binary segmentation tasks since it maps each pixel to a probability.
The Adam optimizer is also a good fit because it dynamically adjusts the learning rate during training.
So this combination gives me good accuracy for this segmentation task, as the sketch below illustrates.
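A minimal sketch of that combination, assuming a TensorFlow/Keras setup; the tiny network below is an illustrative stand-in, not the actual U-Net from this PR:

```python
# Toy stand-in for the real U-Net: shows the ReLU-hidden / Sigmoid-output /
# Adam combination discussed above, nothing more.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_toy_segmenter(input_shape=(256, 256, 3)):
    inputs = layers.Input(shape=input_shape)
    # ReLU in the intermediate layers keeps gradients from vanishing
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    # Sigmoid output yields a per-pixel probability for binary segmentation
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return models.Model(inputs, outputs)

model = build_toy_segmenter()
# Adam adapts the step size per parameter during training
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy", tf.keras.metrics.Recall(), tf.keras.metrics.Precision()],
)
```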
Can you try the K-Fold method, tune the early stopping epochs, and test the SELU and Mish activations and the Lion optimizer?
Run these experiments with several models, then come to a conclusion.
I implemented my model with SELU activation and the Lion optimizer, using K-Fold validation and EarlyStopping callbacks, but it does not give good results.
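Roughly, the setup looked like the sketch below (assumptions: the toy model stands in for the real U-Net, the placeholder arrays stand in for the dataset, and the installed TensorFlow is recent enough to ship `tf.keras.optimizers.Lion`):

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

# Placeholder data standing in for the real images/masks.
images = np.random.rand(40, 256, 256, 3).astype("float32")
masks = (np.random.rand(40, 256, 256, 1) > 0.5).astype("float32")

def build_selu_model(input_shape=(256, 256, 3)):
    # Stand-in for the real U-Net; only the activation choice matters here.
    inputs = tf.keras.layers.Input(shape=input_shape)
    # SELU is conventionally paired with the lecun_normal initializer
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="selu",
                               kernel_initializer="lecun_normal")(inputs)
    outputs = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)

kfold = KFold(n_splits=5, shuffle=True, random_state=42)
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

for fold, (train_idx, val_idx) in enumerate(kfold.split(images)):
    print(f"--- fold {fold} ---")
    model = build_selu_model()
    # Lion is available as tf.keras.optimizers.Lion in recent TF releases
    model.compile(optimizer=tf.keras.optimizers.Lion(learning_rate=1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(images[train_idx], masks[train_idx],
              validation_data=(images[val_idx], masks[val_idx]),
              epochs=50, batch_size=8, callbacks=[early_stop])
```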
Here is a comparison of the results:

| Metric | SELU + Lion + K-Fold + EarlyStopping | ReLU + Adam |
|---|---|---|
| Accuracy | 0.92316 | 0.94052 |
| F1 Score | 0.34482 | 0.55769 |
| Jaccard | 0.25670 | 0.44812 |
| Recall | 0.34664 | 0.60501 |
| Precision | 0.52644 | 0.61600 |
I tried all the possible combinations and tuned the hyperparameters. I can conclude that ReLU often provides more stable and faster convergence during training compared to SELU, Sigmoid, Mish, and tanh. The Adam optimizer, which works well with ReLU, provided better optimization and generalization in this scenario.
I don't agree; I have seen Mish and SELU completely outperform ReLU sometimes. Anyway, have you tried other augmentation strategies or a different model?
I included the metric results and the final prediction image derived using SELU. Unfortunately, SELU and Mish are not working well on this problem.
Yes, I have tried data augmentation locally, and I have also implemented other models, such as DeepLabV3+ and ResUNet.
DeepLabV3+'s results are not as good as U-Net's; ResUNet's results are not great either, but they are slightly better than the U-Net model's. A sketch of the kind of augmentation I tried is below.
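For reference, here is a sketch of a paired augmentation step (illustrative only; the exact augmentations I tried locally are not captured here). The key constraint in segmentation is that geometric transforms must hit the image and its mask identically, which TensorFlow's stateless ops handle via a shared seed:

```python
import tensorflow as tf

def augment_pair(image, mask, seed):
    # Geometric transforms are applied with the same seed so the
    # image and its mask stay aligned.
    image = tf.image.stateless_random_flip_left_right(image, seed)
    mask = tf.image.stateless_random_flip_left_right(mask, seed)
    image = tf.image.stateless_random_flip_up_down(image, seed)
    mask = tf.image.stateless_random_flip_up_down(mask, seed)
    # Photometric jitter goes on the image only, never the mask.
    image = tf.image.stateless_random_brightness(image, max_delta=0.1, seed=seed)
    return image, mask
```

This would typically be mapped over a `tf.data.Dataset`, drawing a fresh seed pair per example.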
Hey @SrijanShovit, can you answer my message?
@SrijanShovit I understand the above PR doesn't meet your expectations; please let the contributor know what to do with it. It is common in open source for a PR to get rejected, so let the contributor know whether you are proceeding with it or not.
This issue has been automatically closed because it has been inactive for more than 7 days. If you believe this is still relevant, feel free to reopen it or create a new one. Thank you!
@SrijanShovit why is there no reply? We can help the contributor if you are not planning to merge.
- Resizing images to a fixed size (256x256)
- Normalizing pixel values (0-255 to 0-1 range)
- Using metrics like accuracy, recall, and precision
- Logging training progress and metrics to a CSV file
- Applying a threshold (0.5) to convert probabilities to binary masks
- Calculating evaluation metrics: Accuracy, F1 Score, Jaccard Index (IoU), Recall, Precision
- Visualizing results: [Original image | Ground truth mask | Predicted mask]
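A compact sketch of these evaluation steps (assuming OpenCV for resizing and scikit-learn for the metrics; names like `pred_probs` are placeholders, not code from the PR):

```python
import numpy as np
import cv2
from sklearn.metrics import (accuracy_score, f1_score, jaccard_score,
                             precision_score, recall_score)

def preprocess(image):
    image = cv2.resize(image, (256, 256))       # fixed input size
    return image.astype(np.float32) / 255.0     # scale 0-255 -> 0-1

def evaluate(pred_probs, gt_mask, threshold=0.5):
    # Threshold probabilities into a binary mask, then score it.
    pred = (pred_probs > threshold).astype(np.uint8).ravel()
    gt = gt_mask.astype(np.uint8).ravel()
    return {
        "Accuracy":  accuracy_score(gt, pred),
        "F1 Score":  f1_score(gt, pred),
        "Jaccard":   jaccard_score(gt, pred),   # IoU
        "Recall":    recall_score(gt, pred),
        "Precision": precision_score(gt, pred),
    }
```

Training-progress logging to a CSV file maps onto Keras's built-in `CSVLogger` callback.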