The MTL classification task does not perform well, as reported by @ConnorWatts, and drags the overall model down. This observation was based on the MTL model with only the classification head.
My experiments
NOTE: accuracy values are missing at the moment; only loss was measured. Also note that all of these experiments use new_dataset rather than the old dataset (see #6 for more details). This is something we need to standardise across the board.
1) Using ClassificationHead
Trained for only 3 epochs due to compute constraints. Got an average BinaryCrossEntropy loss of ~0.67.
2) Using ClassificationHeadUnet
Instead of applying the classification directly after the encoding stage, the features are first decoded through the UNet structure, with skip connections from the encoding stage included. The classification is then applied at the final stage (a rough sketch of this head is included after the list below).
Trained for 3 epochs. The decoding stage is identical to that of the image segmentation head (but of course this is a separate head, so the decoding weights aren't shared!). After the decoding, a global average pooling layer is added, followed by a dense layer.
BinaryCrossEntropy loss is 0.64, so the improvement is marginal. For context, a model that always predicts 0.5 scores ln(2) ≈ 0.693, so (assuming roughly balanced classes) both heads are only slightly better than chance.
3) Benchmark classification accuracy using an SVM model
With 500 data points, the classification accuracy is 0.678 (note this is accuracy on the TEST set). A rough sketch of this benchmark is included below.
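
For reference, here is a minimal sketch of the ClassificationHeadUnet idea from 2), assuming a Keras/TensorFlow functional-API setup. The `decoder_block` helper, filter sizes, and skip ordering are illustrative placeholders, not the repo's actual implementation:

```python
# Hypothetical sketch of the ClassificationHeadUnet idea (Keras/TensorFlow assumed).
# decoder_block, filter sizes, and shapes are placeholders, not the repo's code.
from tensorflow.keras import layers

def decoder_block(x, skip, filters):
    # Upsample, concatenate the matching encoder skip, then convolve.
    x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
    x = layers.Concatenate()([x, skip])
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def classification_head_unet(bottleneck, skips):
    """bottleneck: encoder output; skips: encoder feature maps, deepest first."""
    x = bottleneck
    for skip, filters in zip(skips, [256, 128, 64, 32]):
        x = decoder_block(x, skip, filters)  # separate weights from the segmentation decoder
    x = layers.GlobalAveragePooling2D()(x)   # global average pool after decoding
    return layers.Dense(1, activation="sigmoid")(x)  # binary classification output
```

The key design point is that this head re-uses the UNet decoding *structure* but not its weights, so the segmentation and classification decoders train independently.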
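
And a rough sketch of the SVM benchmark in 3), assuming scikit-learn; the features and labels below are synthetic placeholders, whereas the real run used 500 points from new_dataset:

```python
# Hypothetical SVM benchmark sketch (scikit-learn assumed).
# Features/labels are synthetic placeholders; the real benchmark used new_dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))    # placeholder feature vectors
y = rng.integers(0, 2, size=500)  # placeholder binary labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))  # on the real data this came out to ~0.678
```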
Training accuracies reach up to 90%; we just need to make sure that test accuracies are decent as well. Closing this issue as the model no longer uses ClassificationHeadUNet.