idiap / attention-sampling

This Python package enables training and inference of deep learning models on very large data, such as megapixel images, using attention sampling.

Update to enable saving/loading model as .json #7

Closed andersbhc-mmmi closed 4 years ago

andersbhc-mmmi commented 5 years ago

I had an issue getting reproducible results when I only saved the model's weights and loaded them into a redefined model. Instead, I save the model architecture itself as a .json file (and the weights via Keras' built-in ModelCheckpoint callback):

model_json = model.to_json()
with open(os.path.join(args["datapath"], "model.json"), "w") as json_file:
    json_file.write(model_json)
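The ModelCheckpoint callback mentioned above is not shown in the snippet, so here is a minimal self-contained sketch of how it can be wired in. The toy model, data, and `model.weights.h5` path are placeholders, not the actual attention-sampling setup:

```python
import numpy as np
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Sequential

# Toy stand-in model; the real code would build the attention-sampling model.
model = Sequential([Input(shape=(8,)), Dense(4, activation="relu"), Dense(1)])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(64, 8)
y = np.random.rand(64, 1)

# Save only the weights; the architecture lives separately in model.json.
checkpoint = ModelCheckpoint(
    "model.weights.h5",      # placeholder path
    monitor="val_loss",
    save_best_only=True,
    save_weights_only=True,
)
model.fit(x, y, validation_split=0.25, epochs=2,
          callbacks=[checkpoint], verbose=0)
```

With `save_best_only=True`, the file is overwritten only when `val_loss` improves, so the checkpoint on disk always holds the best weights seen so far.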

Afterwards, I load the model from the .json file and load the weights into it:

with open(os.path.join(args["datapath"], "model.json"), "r") as json_file:
    loaded_model_json = json_file.read()

custom_objects = {
    "ResizeImages": ResizeImages,
    "SampleSoftmax": SampleSoftmax,
    "L2Normalize": L2Normalize,
    "ActivityRegularizer": ActivityRegularizer,
    "SamplePatches": SamplePatches,
    "TotalReshape": TotalReshape,
    "Expectation": Expectation,
}
model = model_from_json(loaded_model_json, custom_objects)

model.load_weights(args["weights_path"])

This allowed me to get reproducible results after saving/loading the model and the weights. The changes made to ats_layer.py and layers.py allow this way of saving and loading.

angeloskath commented 5 years ago

Looks awesome, I will merge it later today and do a new release.

I will add some tests that check for the saving and loading as well to make sure this doesn't break in the future.

Thanks!

angeloskath commented 4 years ago

Hey, I know it has been months and you are probably not interested anymore.

If you are, however, I have noticed some things that prevent me from merging this PR. Both ActivityRegularizer and ResizeImages change the public API: the former used to accept multiple regularizers but now accepts only one, and the latter used to fail when the resize mode was neither bilinear nor bicubic but now falls back to bilinear and prints a message to stdout.

Serializing the regularizer should be handled the same way as in the rest of the Keras framework; see e.g. the Dense layer implementation.

Let me know if you feel like updating the PR or if I should close it.

Thanks for your work! Angelos

andersbhc-mmmi commented 4 years ago

Feel free to close it. Thank you.

Kind regards

Anders Bossel Holst Christensen


