MolecularAI / Reinvent

Apache License 2.0

Some problems of using fp16 to reduce memory consumption #23

Closed ErikZhang-9762 closed 3 years ago

ErikZhang-9762 commented 3 years ago

Hi,

I used my own QSAR model, but it ran out of memory, so I want to use Apex for fp16 mixed-precision computation.

When using Apex, there is a statement: `model, optimizer = amp.initialize(model, optimizer, opt_level="O1")`. I'm a little confused about why I should use these two returned objects instead of my original model and optimizer.

Sincerely looking forward to your reply.
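For context on the pattern asked about above: `amp.initialize` returns patched versions of the model and optimizer, which are then used in place of the originals for the rest of training. Apex itself requires a CUDA build, but the same idea is available in PyTorch's built-in `torch.autocast`, which this hedged CPU-only sketch illustrates; the toy model and data here are placeholders, not part of Reinvent or the user's QSAR model.

```python
import torch

# Toy stand-ins for a real model and optimizer (not Reinvent code).
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 4)
target = torch.randn(8, 1)

optimizer.zero_grad()
# Inside the autocast region, eligible ops run in lower precision
# (bfloat16 on CPU here; float16 on CUDA, usually with a GradScaler),
# which is what reduces memory consumption.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    pred = model(x)

# Cast back to float32 for the loss and the backward pass.
loss = (pred.float() - target).pow(2).mean()
loss.backward()
optimizer.step()
```

With Apex the analogous step is the one-time `amp.initialize` call, after which the returned (patched) model and optimizer replace the originals everywhere.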

patronov commented 3 years ago

Hi, I'm not really sure where in the code this takes place. Could you please point us to the line related to the problem? I should point out that the provided code runs only with scikit-learn models using certain types of fingerprint calculations. Which fingerprint type and parameters does your model use? The code for the supported FP types is here.