I made a custom dataset and annotated it with scribbles. To fine-tune on my own data I did the following:
In net.py line 199, I replaced the ResNet backbone weights, `self.load_state_dict(torch.load('./assets/resnet50-19c8e357.pth'), strict=False)`, with the released model weights, like this: `self.load_state_dict(torch.load('./assets/model-best.pth'), strict=False)`, and then trained on my new data. But the predictions are not acceptable. I paid attention to the scribble annotations and they contain the values [0, 1, 2], yet the final result is still unsatisfactory. Can you guess why? One more thing I should mention: this only happens when fine-tuning; when I combine my data with the paper's training data, it does not happen.
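For reference, here is a minimal sketch of the checkpoint swap I described (the `Net` class name is a placeholder for whatever model class net.py actually defines; the paths are the ones above). Since `strict=False` silently skips mismatched keys, printing them is a cheap way to confirm the checkpoint really loaded:

```python
import torch
from net import Net  # hypothetical import; use the model class net.py defines

model = Net()
# Fine-tuning start point: the released checkpoint instead of the ImageNet backbone.
state = torch.load('./assets/model-best.pth', map_location='cpu')
incompatible = model.load_state_dict(state, strict=False)
# With strict=False, key mismatches are ignored silently, so list them
# to verify that the weights actually went into the model.
print('missing keys:', incompatible.missing_keys)
print('unexpected keys:', incompatible.unexpected_keys)
```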
I am not sure what you are asking, but I can offer a hint: our scribbles use the values [0, 1, 2], so you might want to change your scribble values to match, or change the script that reads the scribbles.
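A quick sanity check along those lines, as a sketch that assumes your scribbles are stored as single-channel PNGs under a hypothetical `./data/scribbles/` directory (adjust the path to your layout):

```python
import glob
import numpy as np
from PIL import Image

# Flag any scribble mask whose pixel values are not a subset of {0, 1, 2}.
for path in sorted(glob.glob('./data/scribbles/*.png')):
    values = np.unique(np.array(Image.open(path)))
    if not set(values.tolist()) <= {0, 1, 2}:
        print(path, 'contains unexpected values:', values)
```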