globus-labs / mof-generation-at-scale

Create new MOFs by combining generative AI and simulation on HPC
MIT License

Cache loading difflinker into memory for inference #79

Closed · WardLT closed this 8 months ago

WardLT commented 8 months ago

It takes about 2 minutes to get DiffLinker ready for inference in the worst case. Not sure how much of that is model loading, but the loading portion can be skipped on repeat calls by wrapping it in an lru_cache.
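A minimal sketch of the idea, assuming a hypothetical `load_difflinker` helper standing in for whatever checkpoint-loading call the pipeline actually uses. `functools.lru_cache` memoizes on the hashable arguments, so the expensive disk read and model construction run at most once per (path, device) pair:

```python
from functools import lru_cache

import torch


@lru_cache(maxsize=1)
def load_difflinker(checkpoint_path: str, device: str = "cpu"):
    """Load DiffLinker once; later calls with the same arguments hit the cache.

    Because both arguments are hashable, lru_cache returns the already-loaded
    model object on repeat calls instead of re-reading the checkpoint.
    """
    # Hypothetical loading step: the real code may construct the model class
    # and load a state dict rather than unpickling a whole module.
    model = torch.load(checkpoint_path, map_location=device)
    if hasattr(model, "eval"):
        model.eval()  # inference mode: disables dropout, etc.
    return model
```

With this in place, only the first call pays the roughly 2-minute startup cost; subsequent inference requests against the same checkpoint return the cached model immediately. `maxsize=1` keeps a single model resident, which is enough when one checkpoint is used per worker.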

WardLT commented 8 months ago

Fixed