Closed: naik-aakash closed this issue 11 months ago
Just tagging you here @ml-evs. Thanks for the help in advance.

**Error**

TypeError: fit() got an unexpected keyword argument 'learning_rate'

**Code Used**

**Traceback Screenshots**

[screenshots]
Hi @naik-aakash, could you run this and just confirm the last two lines of output please? I've seen this kind of error before if tensorflow changes its API between minor versions...
```python
import modnet
import tensorflow

print(f"{modnet.__version__=}")
print(f"{tensorflow.__version__=}")
```
I get the following output:

```
modnet.__version__='0.4.1'
tensorflow.__version__='2.14.0'
```
Hmm, thanks. Would you mind installing the pinned versions of our deps? TensorFlow really likes breaking things between 2.13, 2.14, etc. I'll paste them below to make it easier:

```
pip install tensorflow==2.11 tensorflow-probability==0.19.0
```

The rest of the reqs should be fine.
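If it helps, here's a quick way to check that the pins took effect after reinstalling (note that tensorflow-probability imports as tensorflow_probability):

```python
# Confirm the pinned versions are the ones actually being imported.
import tensorflow
import tensorflow_probability

assert tensorflow.__version__.startswith("2.11"), tensorflow.__version__
assert tensorflow_probability.__version__ == "0.19.0", tensorflow_probability.__version__
```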
I see, sure, I will try this out.
If that doesn't help I can try to figure it out this evening (at a conference today [un]fortunately)
That did not help. I also tried installing from the git repo with the pinned dependencies; still the same issue.
Interesting! I kind of had my attention on TensorFlow too, but it has nothing to do with it (the optimizer would be unhappy, but here it is fit that fails).
The bug sits in fit_preset, which uses learning_rate instead of lr; this gets passed down as a kwarg to fit, which doesn't expect it.
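To illustrate, here is a toy sketch of that failure mode (the function names are illustrative stand-ins, not MODNet's actual internals):

```python
def fit(X, y, lr=0.01):
    # The real fit() only knows 'lr', so any other keyword raises a TypeError.
    print(f"training with lr={lr}")

def fit_preset(X, y, **kwargs):
    # fit_preset forwards its kwargs verbatim; spelling the key
    # 'learning_rate' instead of 'lr' means fit() rejects it.
    fit(X, y, **kwargs)

fit_preset([[0.0]], [1.0], learning_rate=0.005)
# TypeError: fit() got an unexpected keyword argument 'learning_rate'
```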
fit_preset is kind of deprecated (fit_genetic is superior) and is therefore no longer maintained.
Can I suggest using fit_genetic in the meanwhile (and even long term)?
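For reference, the genetic interface is invoked roughly like this (a hedged sketch written from memory; FitGenetic lives in modnet.hyper_opt in recent versions, but please check the docs for the exact run() signature and defaults):

```python
from modnet.hyper_opt import FitGenetic

# train_data: a featurized MODData with targets set, as for fit/fit_preset.
ga = FitGenetic(train_data)

# Evolves architectures/hyperparameters over a small population and
# returns the best model found.
model = ga.run(size_pop=20, num_generations=10, n_jobs=4)
predictions = model.predict(test_data)
```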
Hi @ppdebreuck, I see. Sure, thanks for the help. I will then switch to fit_genetic as the hyperparameter optimization strategy 😃
As a first test, I wanted to compare with the matbench phonon results from MODNet v0.1.12, including my new features. And I see that for this benchmark, the GA was not used (https://matbench.materialsproject.org/Full%20Benchmark%20Data/matbench_v0.1_modnet_v0.1.12/).
I see some improvement in the results when using the GA with my new set of features (bonding analysis data) along with the Matminer features. But maybe the results will not be very comparable, I guess.
Any ideas on how to tackle this, @ml-evs or @ppdebreuck?
Glad it's working for you now @naik-aakash (and thanks @ppdebreuck for spotting the issue -- I will try to find time to push a fix).
> I see some improvement in the results when using the GA with my new set of features (bonding analysis data) along with the Matminer features. But maybe the results will not be very comparable, I guess.
Sounds good! The GA is purely for hyperparameter optimisation, which in itself is going to be a function of your features, so you could argue that it's the combination of the two that leads to the improvement (i.e., they are still comparable). If you wanted to do more of an apples-to-apples comparison, I guess you could fix the architecture and just swap out the features, but I'm not sure it's worth doing. You could also try to investigate feature importance with something like SHAP to see how much your new features contribute to the predictions on average.
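For the SHAP route, the model-agnostic KernelExplainer is the usual starting point. A minimal sketch, assuming you write a small predict_fn wrapper yourself (MODNet's predict expects a MODData rather than a raw array, so the wrapper and the feature matrix X below are hypothetical placeholders):

```python
import shap

def predict_fn(arr):
    # Hypothetical placeholder: adapt so that a plain feature array is mapped
    # to predictions, e.g. by rebuilding a MODData from arr or by calling the
    # underlying keras model directly.
    raise NotImplementedError

# X: pandas DataFrame of featurized inputs (columns = feature names).
background = shap.sample(X, 100)                  # subsample for tractability
explainer = shap.KernelExplainer(predict_fn, background)

shap_values = explainer.shap_values(X.iloc[:50])  # explain a handful of rows
shap.summary_plot(shap_values, X.iloc[:50])       # mean |SHAP| ranks feature importance
```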