microsoft / molecule-generation

Implementation of MoLeR: a generative model of molecular graphs which supports scaffold-constrained generation
MIT License

Tensorflow warnings when using encode #56

Open peetceenatoo opened 1 year ago

peetceenatoo commented 1 year ago

Running

with load_model_from_directory(model_dir) as model:
    embeddings = model.encode(smiles)

gives me the following warnings:

"WARNING:tensorflow:From ****\tensorflow\python\util\deprecation.py:576: calling function (from tensorflow.python.eager.polymorphic_function.polymorphic_function) with experimental_relax_shapes is deprecated and will be removed in a future version. Instructions for updating: experimental_relax_shapes is deprecated, use reduce_retracing instead"

and

"WARNING:tensorflow:Please fix your imports. Module tensorflow.python.training.tracking.data_structures has been moved to tensorflow.python.trackable.data_structures. The old module will be deleted in version 2.11."

I would like my scripts to not just stop working one day.

kmaziarz commented 1 year ago

I'll take a look. I think some of these warnings originate in tf2_gnn, so it will take a while to fix them there and then propagate the new version here. That being said, pinning tensorflow to a version earlier than 2.11 for now sounds reasonable and not overly restrictive, as 2.11 is relatively recent.
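In the meantime, if the log noise itself is the main concern, one possible stopgap (just a sketch, not an official fix, and it hides all TensorFlow warnings rather than only these two) is to lower TensorFlow's logger level before importing the package:

# Stopgap sketch: both messages above are emitted through the `tensorflow` logger,
# so lowering its level to ERROR hides them.
import logging

import tensorflow as tf

tf.get_logger().setLevel(logging.ERROR)

# Import molecule_generation only after lowering the log level, since the
# "Please fix your imports" warning is emitted at import time.
from molecule_generation import load_model_from_directory

model_dir = "path/to/model_dir"  # placeholder: directory containing the MoLeR checkpoint
smiles = ["C", "CCO"]            # placeholder: input SMILES strings

with load_model_from_directory(model_dir) as model:
    embeddings = model.encode(smiles)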

kmaziarz commented 1 year ago

Interestingly, I checked locally under 2.12.1, and while the deprecation warning regarding python.training.tracking.data_structures is there, the import still works. So the message is perhaps not entirely accurate.
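In other words, something like the following still runs under 2.12.1 despite the deprecation message (module paths taken verbatim from the warning above):

# Both the old and the new module path import under TF 2.12.1; only the old
# one triggers the deprecation warning.
from tensorflow.python.training.tracking import data_structures as old_data_structures
from tensorflow.python.trackable import data_structures as new_data_structures

print(old_data_structures.__file__)
print(new_data_structures.__file__)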

kmaziarz commented 1 year ago

The deprecation warning related to python.training.tracking.data_structures should now be avoided thanks to microsoft/tf2-gnn#61 (soon to be released on PyPI as 2.14.0).

shriyamr commented 5 months ago

While using the encode and decode methods of the wrapper class, I often get the error 'VaeWrapper' object has no attribute '_inference_server'. Is there some way to fix this?

kmaziarz commented 5 months ago

> While using the encode and decode methods of the wrapper class, I often get the error 'VaeWrapper' object has no attribute '_inference_server'. Is there some way to fix this?

Are you loading the model as a context manager (as explained in the README) and only using the model inside of the context?

with load_model_from_directory(model_dir) as model:
   pass # Use the model inside

# Do not use the model outside

The with block signals to the model that it should spawn parallel worker processes for encoding/decoding; exiting the block signals that those processes can be cleaned up. This is why you should not load the model without with (e.g. model = load_model_from_directory(model_dir)): that would give you a model that has not been fully initialized, which causes the error you mentioned. The same error arises if one uses the model after exiting the with block, at which point the worker processes have been killed and the inference server has been deleted.
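If you need the model to outlive a single with block (say, inside a long-running service object), one possible pattern (a sketch, not something covered in the README) is to enter and exit the context manager explicitly, for example via contextlib.ExitStack:

# Sketch: keep the model's inference processes alive beyond a single `with`
# block by entering the context manager manually and closing it explicitly.
from contextlib import ExitStack

from molecule_generation import load_model_from_directory


class MoLeRService:
    def __init__(self, model_dir: str):
        self._stack = ExitStack()
        # Equivalent to entering `with load_model_from_directory(model_dir) as model:`
        self._model = self._stack.enter_context(load_model_from_directory(model_dir))

    def encode(self, smiles_list):
        return self._model.encode(smiles_list)

    def close(self):
        # Equivalent to leaving the `with` block: shuts down the inference server,
        # after which further encode/decode calls would hit the same error.
        self._stack.close()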