nhshin-mcl / MWR


Can't use / understand pretrained models to perform inference #5

Open ghost opened 2 years ago

ghost commented 2 years ago

Hi, I read your paper. Great work!

I have some issues running the models on my images. Given that you provide the pretrained models, I expected to use them more or less in this manner:

import torch
from PIL import Image
from torchvision import transforms

import Network  # the repository's model definitions

# Load the pretrained global regressor
global_model = Network.Global_Regressor()
global_model.load_state_dict(torch.load('global.pth'))
global_model.eval()

# Load the pretrained local regressors
local_models = []
for x in range(num_locals):
    local_m = Network.Local_Regressor()
    local_m.load_state_dict(torch.load(f'local{x}.pth'))
    local_models.append(local_m)

# Load and preprocess the image
my_image = Image.open('face.jpg')
my_image = transforms.ToTensor()(my_image).unsqueeze(0)  # add a batch dimension

# Coarse prediction with the global model, then refine with a local model
global_output = global_model(my_image)
if global_output == something:
    age = local_models[3](my_image)
elif ....  # other cases

But your code is quite complicated and I don't fully understand it. In particular, why are the datasets required if the models are pretrained? And why is that not always the case, depending (I guess) on the "sampling" variable?
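For what it's worth, my rough understanding from the paper is that the regressors are comparison-based: they predict a relative rank rho in [-1, 1] of the test face with respect to a pair of labeled reference faces, and the age estimate is refined by repeatedly moving that reference window. If so, that would explain why reference images (and their labels) from a dataset are still needed at inference time. Here is a minimal sketch of what I mean, where the function name, the model call signature, and the window update are just my guesses and not your actual API:

import torch

@torch.no_grad()
def sketch_mwr_inference(model, test_feat, ref_feats, ref_ages, y_lo, y_hi, n_iters=10):
    # ref_feats / ref_ages: features and labels of reference faces from a labeled set
    for _ in range(n_iters):
        # pick the reference faces whose labels are closest to the current window ends
        lo = torch.argmin((ref_ages - y_lo).abs())
        hi = torch.argmin((ref_ages - y_hi).abs())
        # assumed model call: relative rank rho in [-1, 1] of the test face within the window
        rho = model(test_feat, ref_feats[lo], ref_feats[hi])
        # MWR-style estimate: midpoint of the window plus rho times its half-width
        y_hat = (ref_ages[lo] + ref_ages[hi]) / 2 + rho * (ref_ages[hi] - ref_ages[lo]) / 2
        # recentre the window on the new estimate for the next iteration
        half = (y_hi - y_lo) / 2
        y_lo, y_hi = y_hat - half, y_hat + half
    return y_hat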

So, basically, I just want to use your approach with the model pretrained on MORPH (or another dataset) to run inference on my own data without ever having to download MORPH (or any other dataset). Is that possible? If not, what do you mean by pretrained models?

Thanks

Hab2Verer commented 5 months ago

Were you able to get it to work? If yes, could you please share the code?