So if I understand this correctly, you want to cache previously loaded models.
The thing is, Python already has a caching tool in the standard library: the `@lru_cache` decorator from the `functools` module. We could just slap that decorator on the `load()` method and it would do almost exactly the same thing with a single line of code. Only then caching would not be optional, which, as you point out, it should be, so as not to confuse users who may have modified the saved model file.
However, we can still invalidate the cache even with the built-in tool. Consider this example:
```python
import mph
from timeit import default_timer as now
from functools import lru_cache

client = mph.Client()
client.load = lru_cache(client.load)   # Adds the cache.

for n in range(1, 6):
    if n == 4:
        client.load.cache_clear()
    t0 = now()
    model = client.load('tests/capacitor.mph')
    print(f'Load #{n} took {now()-t0:.1f} seconds.')
```
This produces the following output:
```
Load #1 took 15.4 seconds.
Load #2 took 0.0 seconds.
Load #3 took 0.0 seconds.
Load #4 took 1.9 seconds.
Load #5 took 0.0 seconds.
```
Note how Comsol itself does quite a bit of caching already. The Python cache then gets the load time down to practically zero.
So I think this is not really needed. Users can opt in if they want this and all they need is the standard tool.
There is another difference: with `@lru_cache`, the cache is indexed by the `file` argument that is passed to `load()`, not by the fully resolved file path. But I think this is a minor issue. On that note, however, it would make sense to expose the `.getFilePath()` method from the Comsol API as `model.path()` and `client.paths()`.
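As a sketch of how the cache key could be normalized, here is a small wrapper (the `make_cached_loader` helper is hypothetical, not part of MPh) that resolves the path before it reaches the cache, so relative and absolute spellings of the same file share one entry. A stand-in loader replaces the actual Comsol client for demonstration:

```python
from functools import lru_cache
from pathlib import Path

def make_cached_loader(load):
    """Wrap a load function so its cache is keyed on the resolved path."""
    cached = lru_cache(maxsize=None)(load)
    def loader(file):
        # Resolve to an absolute path so 'model.mph' and './model.mph'
        # map to the same cache entry.
        return cached(str(Path(file).resolve()))
    loader.cache_clear = cached.cache_clear   # keep invalidation available
    return loader

# Demonstration with a stand-in loader instead of an actual Comsol client:
calls = []
def fake_load(path):
    calls.append(path)
    return object()

load = make_cached_loader(fake_load)
a = load('model.mph')
b = load('./model.mph')     # different spelling, same resolved path
print(a is b, len(calls))   # → True 1
```

Note that this still would not notice a changed working directory between calls if the argument is relative, but at least equivalent spellings from the same directory collapse into one cache entry.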
Some remarks on this one:
0) I changed some naming and added the methods to `model` and `client`.
1) Buffering deteriorates drastically if there are results in the model (at least on macOS, possibly on other Unix systems).
2) Using relative paths carries some risk when working interactively, especially since `ipython` supports shell commands and a `cd` is quickly issued, which would break the cache.
3) If we have the two methods as described above, the overhead is 3 lines, and no other tools are needed. See my reasoning below.
> Users can opt in if they want this and all they need is the standard tool.
The thing is (my opinion): the default user won't. I think code should help any user get the most out of their work, and caching can speed up tasks without interfering with the toolset for pro users.
I will make caching strictly opt-in (to follow the path of least surprise) and add an access method `caching()` to the `Client` class.
As discussed in #16, this checks model memory on load for file names and loads the model from memory if possible. The single caveat is that an external change to the model file won't be reflected. This is why I added the `reload` flag, which loads the file as a new model even if it already exists in memory.
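A minimal sketch of the behavior described above (the method names mirror the proposal, but the internals are assumptions, not the actual MPh implementation):

```python
class Client:
    """Toy stand-in for mph.Client illustrating opt-in caching."""

    def __init__(self):
        self._models = {}      # file name → model held in memory
        self._caching = False

    def caching(self, state=None):
        # Query the current setting when called without an argument,
        # otherwise switch caching on or off.
        if state is None:
            return self._caching
        self._caching = bool(state)
        if not self._caching:
            self._models.clear()

    def load(self, file, reload=False):
        # Return the cached model unless the user forces a reload.
        if self._caching and not reload and file in self._models:
            return self._models[file]
        model = ['model', file]   # stand-in for the actual Comsol load
        if self._caching:
            self._models[file] = model
        return model

client = Client()
client.caching(True)
first  = client.load('capacitor.mph')
second = client.load('capacitor.mph')               # served from memory
fresh  = client.load('capacitor.mph', reload=True)  # forced reload
print(first is second, first is fresh)  # → True False
```

With caching off (the default), every `load()` call goes to disk, so existing workflows are unaffected; `reload=True` also refreshes the cached copy when caching is on.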