Closed — danielhollas closed this pull request 9 months ago
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (05b2d98) 97.44% compared to head (da39a40) 97.44%.
:umbrella: View full report in Codecov by Sentry.
Hi @danielhollas – thanks for the PR
I'm not sure about this change, given the (admittedly small) increase in complexity for a saving of O(ms) in a package that tends to execute (with external QM calculations) in O(h).
Hi Tom, thanks for your comment. I agree that the change in timing is small compared to the cost of QM computations; however, we sometimes import autode while debugging in interactive mode, and there the import time is noticeable. That is what motivated this change in the first place. We also noticed that importing autode slows down the import of mlptrain. We are planning to update the version of autode in mlptrain soon, and I think it would be good if this update also improved the import speed.
I'm not sure you'll notice a 0.1 s change. Nevertheless, I don't think the overhead of remembering to import matplotlib lazily is that high, so I'm happy to merge with a couple of edits. @danielhollas, would you mind targeting the v1.4.2 branch instead of master?

Done.
> Nevertheless, I don't think the overhead of remembering to import matplotlib lazily is that high

Happy to contribute a test that will check that matplotlib is not loaded after autode import.
> Happy to contribute a test that will check that matplotlib is not loaded after autode import

Yes please 👍🏼
@t-young31 I've added a test and verified that it fails on the main branch and passes here.
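The PR itself contains the actual test; as an illustration of the idea, a check like this can be written as a small helper that runs the import in a fresh interpreter (the function name and structure here are hypothetical, not taken from the autode test suite):

```python
import subprocess
import sys

def assert_no_transitive_import(module: str, unwanted: str) -> None:
    """Fail if importing `module` also pulls `unwanted` into sys.modules.

    The check runs in a fresh interpreter via subprocess, so modules already
    imported by the test process itself cannot mask the result.
    """
    code = f"import sys, {module}; assert '{unwanted}' not in sys.modules"
    subprocess.run([sys.executable, "-c", code], check=True)

# In the autode test suite this would be called as (not run here):
#     assert_no_transitive_import("autode", "matplotlib")
```

Running the check in a subprocess is the important design choice: within a single pytest session another test may already have imported matplotlib, which would make an in-process `sys.modules` check unreliable.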
We've been looking at the import time of the mlptrain package, which takes over a second on the cluster and around 620 ms on my dev machine with an NVMe drive. Importing autode by itself takes 465 ms on the main branch.

One of the easy wins is to import matplotlib only when needed, which saves around 160 ms.

Other potential improvements would come from delayed imports of scipy and/or RDKit, but those would require more changes; happy to open a separate PR if that is desired.
Corresponding PR on the mlptrain repo: https://github.com/duartegroup/mlp-train/pull/84