LJOVO / TranSalNet

TranSalNet: Towards perceptually relevant visual saliency prediction. Neurocomputing (2022)
https://doi.org/10.1016/j.neucom.2022.04.080
MIT License

Loss value #3

Closed · mr17m closed 1 year ago

mr17m commented 1 year ago

Hello, I have some questions. What was your model's best loss value after training on the training set? In other words, apart from the loss value displayed during training, is there another metric in your scripts we can check to verify that the model's performance is convincing rather than disappointing? Also, for evaluation on SALICON, should we submit the test results to their website?

LJOVO commented 1 year ago

Hi,

Firstly, I don't recommend using the training loss to judge whether the model has converged, as that can lead to overfitting. I suggest evaluating the model based on the loss on the validation set. In my experience, if the model reaches a loss of around -2.5 or lower on the validation set, it can be considered usable. For a preliminary assessment of the model's performance, one direct approach is to compute the various metrics on the SALICON validation set. If you need results on the SALICON test set, to the best of my knowledge the only way is to upload your predictions to their website.
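
For reference, here is a minimal sketch of those metrics (CC, KL divergence, and NSS) in PyTorch, written from their standard definitions rather than taken from the repository's own evaluation code; the tensor shapes and the binary fixation map are assumptions:

    import torch

    def cc(pred, gt, eps=1e-8):
        # Pearson correlation between predicted and ground-truth saliency maps.
        pred = (pred - pred.mean()) / (pred.std() + eps)
        gt = (gt - gt.mean()) / (gt.std() + eps)
        return (pred * gt).mean()

    def kld(pred, gt, eps=1e-8):
        # KL divergence, with both maps normalized to sum to 1.
        pred = pred / (pred.sum() + eps)
        gt = gt / (gt.sum() + eps)
        return (gt * torch.log(gt / (pred + eps) + eps)).sum()

    def nss(pred, fixations, eps=1e-8):
        # Mean of the standardized prediction at fixated pixels
        # (fixations is assumed to be a binary fixation map).
        pred = (pred - pred.mean()) / (pred.std() + eps)
        return pred[fixations > 0.5].mean()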

mr17m commented 1 year ago

Thank you so much for the helpful and quick answer.

mr17m commented 1 year ago

Hi, regarding your last response, a few questions came up for me.

  1. Your training script reads the validation set from the CSV file and uses it for validation:

    val_set = MyDataset(ids=val_ids,
                        stimuli_dir=r'datasets\val\val_stimuli/',
                        saliency_dir=r'datasets\val\val_saliency/',
                        fixation_dir=r'datasets\val\val_fixation/',
                        transform=transforms.Compose([
                            transforms.ToTensor(),
                            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
                        ]))

    After training finishes, for a preliminary assessment of the model, do you mean it should be evaluated again on this validation set, using the samples listed in the CSV file? (A rough sketch of what I mean is at the end of this comment.)

  2. When I train my model with your training script, it stops at the 10th epoch because of the following early-stopping condition:

    if phase == 'val' and epoch_loss < best_loss:
        # new best validation loss: save the weights and reset the patience counter
        best_loss = epoch_loss
        best_model_wts = copy.deepcopy(model.state_dict())
        counter = 0
    elif phase == 'val' and epoch_loss >= best_loss:
        # no improvement: stop after 5 consecutive epochs without a new best
        counter += 1
        if counter == 5:
            print('early stop!')
            break

I think the 10th epoch is quite early for training to be terminated, and it makes me suspect my model's performance is not good (train loss of -1.0728 and best val loss of -0.593928). Do you recall how many epochs your model trained for before early stopping kicked in? Please let me know what you think.
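
For concreteness, here is a rough sketch of the kind of re-evaluation pass over the val_set above that I have in mind; the checkpoint path, the loss_fn name, and the batch unpacking order are placeholders of mine, not your actual code:

    import torch
    from torch.utils.data import DataLoader

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    val_loader = DataLoader(val_set, batch_size=4, shuffle=False)

    # 'best_model.pth' is a hypothetical checkpoint path
    model.load_state_dict(torch.load('best_model.pth', map_location=device))
    model.to(device)
    model.eval()

    total_loss, n_batches = 0.0, 0
    with torch.no_grad():
        for stimuli, saliency, fixation in val_loader:  # assumed batch layout
            pred = model(stimuli.to(device))
            # loss_fn stands in for whatever loss the training script used
            total_loss += loss_fn(pred, saliency.to(device), fixation.to(device)).item()
            n_batches += 1
    print(f'average validation loss: {total_loss / n_batches:.4f}')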

LJOVO commented 12 months ago

Hi,

  1. Yes, utilizing the SALICON validation set for a preliminary assessment of the model is a viable approach.
  2. The loss values you've mentioned do not align with my experience; unfortunately, I can't offer further constructive advice on this matter.