Closed WardLT closed 8 months ago
It takes about 2 minutes to get DiffLinker ready for inference in the worst case. Not sure how much of that is model loading, but that can be reduced with an lru_cache.
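A minimal sketch of the caching idea, assuming model loading is wrapped in a single function (`load_model` and the checkpoint name here are placeholders, not DiffLinker's actual API):

```python
from functools import lru_cache

# Track how many times the (expensive) load actually runs,
# just to demonstrate the caching behavior.
call_count = 0

@lru_cache(maxsize=1)
def load_model(checkpoint: str) -> dict:
    """Load the model once per checkpoint; later calls return the cached object."""
    global call_count
    call_count += 1
    return {"checkpoint": checkpoint}  # placeholder for the real model object

m1 = load_model("difflinker.ckpt")
m2 = load_model("difflinker.ckpt")
assert m1 is m2          # same cached object returned
assert call_count == 1   # the expensive load ran only once
```

Because `lru_cache` keys on the arguments, calling with a different checkpoint path would trigger a fresh load while keeping the most recent one cached.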
Fixed