Closed SimonKamuk closed 3 weeks ago
Looks great to me! It will be really nice to have this automated testing on a real dataset. What do you think about creating a file called
DEVELOPING.md
in the repo root where you detail the contents of the test data a bit more? @sadamov could you give this a read through too please?
I added this as a notebook instead, since I could then include the code directly. Let me know if you would still prefer just a md file :)
Could we move this and all future notebooks to a neural-lam/notebooks folder? Then I would also give this notebook a more descriptive name like create_reduced_meps_dataset.ipynb
I only have two minor comments about comments you could add to the code. Otherwise this is good to go.
Remember to update the changelog to say that you have added testing of dataset loading, graph creation and training based on reduced MEPS dataset stored on AWS S3! This is a big step!!
Also I think it would be nice to add CI/CD badges for the tests to the top of the README, if @sadamov and @joeloskarsson are ok with that?
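For reference, a badge for the test workflow might look roughly like this in the README. This is a sketch: the workflow file name `run_tests.yml` and the `mllam/neural-lam` path are taken from the workflow URL linked later in this thread, and the `?branch=main` query parameter restricts the badge to runs on the main branch:

```markdown
[![Unit Tests](https://github.com/mllam/neural-lam/actions/workflows/run_tests.yml/badge.svg?branch=main)](https://github.com/mllam/neural-lam/actions/workflows/run_tests.yml)
```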
I have no opinion on cicd badges, so please go ahead if you'd like
@sadamov Do you have any further requests or are we ready to merge? :)
Ready to merge! :rocket:
@SimonKamuk at the moment I am seeing "Unit Tests: Failing" on the badge, even though this is not the case for the main branch. I was looking at https://github.com/mllam/neural-lam/actions/workflows/run_tests.yml and it seems they have failed on some other branch. I really don't know much about how these badges work, so not sure how it is supposed to function :sweat_smile: Do you know if there is a way to tweak it so that it only applies to the main branch? Or would that require large changes to the way the github workflows are set up?
Yeah, I noticed that as well. I could specify that the badge should only refer to the main branch, but the issue was that the workflows were not running on pushes to main, so there were no runs for the badge to refer to. In the recent commit I made these changes, but I am not sure what the implications of no longer ignoring pushes to main are (why were they ignored in the first place, @leifdenby?)
Ok, that's great! I guess running on push to main is generally unnecessary since we run everything before merging. However I don't think it matters if we run it an extra time just for the badge.
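The change discussed above would amount to something like the following trigger section in the workflow file. This is a sketch, assuming the file is `.github/workflows/run_tests.yml` (per the link above); the actual job contents are omitted. Running on pushes to main, in addition to pull requests, is what gives the main-branch badge a run to report on:

```yaml
name: Unit Tests

on:
  push:
    branches: [main]   # run on pushes to main so the badge has main-branch runs
  pull_request:
    branches: [main]   # still run on every PR targeting main
```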
Implemented tests for loading a reduced-size MEPS example dataset, creating graphs, and training the model.
Work TODO:
meps_example_reduced
neural_lam.weather_dataset.WeatherDataset
from downloaded data

closes #30
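The three test stages above (dataset loading, graph creation, training) naturally chain together, since each reuses the previous stage's output. A hypothetical sketch of that structure using module-scoped pytest fixtures is below; all function names and data shapes here are illustrative stand-ins, not the real neural-lam API:

```python
import pytest


def load_dataset():
    # stand-in for loading the reduced MEPS example dataset from S3
    return {"samples": list(range(4))}


def create_graph(dataset):
    # stand-in for building the message-passing graph over the grid
    return {"nodes": len(dataset["samples"])}


def train_model(dataset, graph, epochs=1):
    # stand-in for a short training run on the reduced dataset
    return {"trained_epochs": epochs, "nodes": graph["nodes"]}


@pytest.fixture(scope="module")
def dataset():
    # loaded once, shared by the graph and training tests
    return load_dataset()


@pytest.fixture(scope="module")
def graph(dataset):
    return create_graph(dataset)


def test_load_dataset(dataset):
    assert len(dataset["samples"]) > 0


def test_create_graph(dataset, graph):
    assert graph["nodes"] == len(dataset["samples"])


def test_train_model(dataset, graph):
    result = train_model(dataset, graph, epochs=1)
    assert result["trained_epochs"] == 1
```

The module scope means the (slow) dataset download and graph construction happen once per test session rather than once per test.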