Theodlz closed this pull request 11 months ago.
LGTM!
Thanks for the review, Brian! Just to be sure, did that new code work fine for you (i.e., have you had a chance to try the code)?
@Theodlz Yes, I tried it out by forcing the source to be `zenodo` and `gitlab`, and it worked in both cases. It did help me identify an unrelated issue with `analysis.py` that I'll be posting about soon.
Great! Does that issue need to be addressed before this is merged, or can I go ahead and merge it?
I run into this issue (#280) even on main, so I think you're good to merge this even before we resolve it.
Edit - wrong issue number
Noted! Thanks a lot for the review, and let me know if you'd like me to look at the PR you might open for that issue.
This PR makes it possible to point to either GitLab or Zenodo to fetch the models.
We make GitLab the default, as it is the one we are able to actively update. We also make sure that a failure to fetch the models from one source triggers a retry with another source, until we run out of options. Here we have just two of them, but we can imagine having more in the future; a rough sketch of the fallback logic follows.
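Purely as an illustration, here is a minimal sketch of that fallback, assuming hypothetical downloader helpers (`download_from_gitlab`, `download_from_zenodo`) rather than the actual NMMA functions:

```python
# Minimal sketch of the source-fallback logic; the downloader functions
# below are placeholders, not the real NMMA implementation.
import logging

logger = logging.getLogger(__name__)


def download_from_gitlab(model_name: str) -> str:
    """Placeholder: fetch a model archive from the GitLab repo."""
    raise NotImplementedError


def download_from_zenodo(model_name: str) -> str:
    """Placeholder: fetch a model archive from Zenodo."""
    raise NotImplementedError


# GitLab first (the default, actively updated), then Zenodo as fallback.
MODEL_SOURCES = {
    "gitlab": download_from_gitlab,
    "zenodo": download_from_zenodo,
}


def fetch_model(model_name: str, preferred: str = "gitlab") -> str:
    """Try each source in order, starting with the preferred one."""
    order = [preferred] + [s for s in MODEL_SOURCES if s != preferred]
    errors = {}
    for source in order:
        try:
            return MODEL_SOURCES[source](model_name)
        except Exception as e:  # a failure triggers a retry with the next source
            logger.warning("Fetching %s from %s failed: %s", model_name, source, e)
            errors[source] = e
    raise RuntimeError(f"All model sources failed for {model_name}: {errors}")
```

The retry loop stays agnostic about the number of sources, so adding a third one later would just mean registering another entry in `MODEL_SOURCES`.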
Maybe, later on, we'll want to have the models.yaml file live in a separate repo or in NMMA itself, and have a pointer to each source for each model, to make the system more robust (illustrated below). Something to think about.
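Just to make the idea concrete, a per-model source pointer could look something like this; the field names are made up for illustration and are not the current models.yaml schema:

```yaml
# Hypothetical layout: each model carries its own ordered list of
# source pointers to try in turn.
some_model_name:
  sources:
    - type: gitlab
      url: https://gitlab.com/Theodlz/nmma-models
    - type: zenodo
      record: "<zenodo-record-id>"  # placeholder
```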
For context, we opted for GitLab to bypass the very strict upload and bandwidth limits imposed by GitHub when using Git LFS (an extension that allows versioning of large binary files). Here's the GitLab repo: https://gitlab.com/Theodlz/nmma-models. There, we make use of compression to reduce the individual file sizes. This should help with download speeds, as well as keep the repo's total size small enough.
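To illustrate the download-plus-decompress step, here's a small sketch; the raw URL pattern and the gzip compression format are assumptions, not necessarily the actual layout of the repo:

```python
# Sketch of fetching and decompressing one model file, assuming a
# gzip-compressed file at a hypothetical raw URL.
import gzip
import shutil

import requests

BASE_URL = "https://gitlab.com/Theodlz/nmma-models/-/raw/main"  # assumed path


def download_model_file(relative_path: str, dest: str) -> str:
    """Download `<relative_path>.gz` and write the decompressed file to `dest`."""
    url = f"{BASE_URL}/{relative_path}.gz"
    response = requests.get(url, stream=True, timeout=60)
    response.raise_for_status()

    compressed_path = dest + ".gz"
    with open(compressed_path, "wb") as f:
        shutil.copyfileobj(response.raw, f)

    # Decompress in a streaming fashion to keep memory usage low.
    with gzip.open(compressed_path, "rb") as src, open(dest, "wb") as out:
        shutil.copyfileobj(src, out)
    return dest
```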
Also added some tests to improve the coverage a little bit :)