nuclear-multimessenger-astronomy / nmma

A pythonic library for probing nuclear physics and cosmology with multimessenger analysis
https://nuclear-multimessenger-astronomy.github.io/nmma/
GNU General Public License v3.0

Create dependabot.yml #211

Closed sahiljhawar closed 7 months ago

sahiljhawar commented 1 year ago

Looking at all the *_requirements.txt files shows that only Sphinx is pinned (to 4.4.0); all other packages only have lower bounds, so the latest versions get installed anyway. I'm not sure how useful dependabot would be here.
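For reference, a minimal dependabot.yml sketch for the pip ecosystem could look like the following (the directory and weekly schedule are assumptions, not a decided configuration):

```yaml
# .github/dependabot.yml — minimal sketch
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/"          # where the *_requirements.txt files live (assumed)
    schedule:
      interval: "weekly"
```

With only lower-bound requirements, dependabot would mostly open PRs bumping those bounds rather than catching breaking upstream releases, which is the concern raised above.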

mcoughlin commented 1 year ago

@sahiljhawar I would like us to figure out how to test what versions of sklearn and tensorflow break for the downloaded files, which I guess is a bit different than what dependabot does for us at the moment. I guess this should be maybe coupled with a test that commits our (smallest) sklearn / tensorflow example files and simply tries to load them in. What do you think?
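A load-only smoke test along those lines could be sketched as below. The real files would be sklearn/tensorflow models loaded with their own loaders (e.g. joblib or Keras); this sketch uses stdlib pickle as a stand-in so the pattern is self-contained, and the file name is hypothetical:

```python
import os
import pickle
import tempfile

def can_load(path):
    """Return True if the file deserializes without error (load-only smoke test).

    For the real test, pickle.load would be replaced by the sklearn/tensorflow
    loader appropriate to the committed example file.
    """
    try:
        with open(path, "rb") as f:
            pickle.load(f)
        return True
    except Exception:
        return False

# Stand-in "model": a tiny object saved the way a small example file
# would be committed to the repo.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "example_model.pkl")
    with open(path, "wb") as f:
        pickle.dump({"coef": [0.1, 0.2]}, f)
    assert can_load(path)
```

Committing the smallest example file and running a check like this in CI would flag sklearn/tensorflow versions that break deserialization without downloading the full grid.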

sahiljhawar commented 1 year ago

For tensorflow I couldn't find any installation command except in the docs (is it on a need-to-know basis?). For sklearn, the requirement is >=1.0.2, so the latest version gets installed. Has anyone faced issues with either of these libraries in the past? If so, an investigation would be worthwhile. Also, as mentioned in #101, tests already exist for both of the above libraries; however, I couldn't see any output/warnings/errors in the Actions logs, even though coveralls shows the coverage.

mcoughlin commented 1 year ago

@sahiljhawar it would still be good to find a way to specifically test the utility of the sklearn and tensorflow files we have on Zenodo.

sahiljhawar commented 1 year ago

In a previous conversation you said that downloading the model grid times out the CI, but the maximum execution time for each job is 6 hours; isn't that enough for the download?

mcoughlin commented 1 year ago

@sahiljhawar I think we want to just keep a small file locally. Downloading the grid is a huge burden on their servers, especially since it happens for each version of Python.

sahiljhawar commented 1 year ago

If this is the bottleneck, we could have a separate job that downloads the model grid before the actual tests begin and reuse it throughout all the tests. See here
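A sketch of that setup using the actions/cache action, so only a cache miss actually hits the server (the cache path, key, and download script are assumptions, not nmma's actual layout):

```yaml
# Sketch: download the grid once, restore it from cache in the test matrix.
jobs:
  download-models:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - id: cache
        uses: actions/cache@v4
        with:
          path: ~/.cache/nmma-models      # assumed cache location
          key: model-grid-v1
      - if: steps.cache.outputs.cache-hit != 'true'
        run: python tools/download_models.py   # hypothetical download script
  tests:
    needs: download-models
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: ~/.cache/nmma-models
          key: model-grid-v1
      # ... run the test matrix against the cached files
```

This keeps the download out of the per-Python-version matrix, addressing the server-load concern above.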

sahiljhawar commented 1 year ago

@mcoughlin We can use Git LFS to track the model files and use them in the workflow. Found this
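Tracking the model files with Git LFS would amount to a .gitattributes entry like the following (the file extensions are assumptions about how the models are serialized):

```
# .gitattributes — route model files through Git LFS
*.pkl filter=lfs diff=lfs merge=lfs -text
*.h5  filter=lfs diff=lfs merge=lfs -text
```

In the workflow, actions/checkout would then need `lfs: true` to pull the actual file contents instead of the LFS pointer files. One caveat worth checking: GitHub applies bandwidth and storage quotas to LFS, so large grids pulled on every CI run could hit the same cost problem in a different place.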

mcoughlin commented 1 year ago

@sahiljhawar Can you maybe see for how long you can keep one of the downloaded files around?