alexpattyn opened 2 years ago
A basic test we could do for now is to run run_all.py on various platforms, e.g. RHEL, Windows, and macOS. I'll have to see how other projects handle that, though.
Ideally, we would test each algorithm against a known output. E.g., if the FFT-based Hauptmann reconstruction is known to be in a good state currently, we can save that reconstruction and make sure future changes don't modify it.
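A regression test along those lines could look something like the pytest sketch below. The function name `reconstruct_hauptmann_fft` and the data paths are placeholders for illustration, not the repository's actual API:

```python
# Hypothetical sketch: `reconstruct_hauptmann_fft` and the file paths
# are placeholders, not the repository's actual API.
import numpy as np

from reconstruction import reconstruct_hauptmann_fft


def test_hauptmann_fft_matches_reference():
    # Load input data and the saved "known good" reconstruction.
    time_series = np.load("tests/data/example_time_series.npy")
    reference = np.load("tests/data/hauptmann_fft_reference.npy")

    # Run the algorithm under test.
    result = reconstruct_hauptmann_fft(time_series)

    # Allow a small numerical tolerance rather than bit-exact equality,
    # since FFT results can differ slightly across platforms and BLAS builds.
    np.testing.assert_allclose(result, reference, rtol=1e-5, atol=1e-8)
```

Storing the reference array in the repo (or fetching it from a fixed URL) would let this run on every push.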
If other members have ideas that they want to code up, we could include those as well.
Yes, good idea. We could also run a couple of sanity checks that verify each algorithm produces some output and does not crash. We could further test that the passed arguments are handled correctly.
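Those sanity checks could be as simple as the following sketch (again with placeholder names for the algorithm entry point and test data):

```python
# Placeholder names; substitute the repository's real entry points.
import numpy as np
import pytest

from reconstruction import reconstruct_hauptmann_fft


def test_reconstruction_produces_output():
    time_series = np.load("tests/data/example_time_series.npy")
    result = reconstruct_hauptmann_fft(time_series)

    # The call should return something non-empty and finite, i.e. it must
    # not crash and must not silently produce NaNs.
    assert result is not None
    assert result.size > 0
    assert np.all(np.isfinite(result))


def test_invalid_arguments_are_rejected():
    # Malformed input should raise a clear error rather than
    # producing a bogus reconstruction.
    with pytest.raises((ValueError, TypeError)):
        reconstruct_hauptmann_fft(np.zeros(0))
```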
@jgroehl I can't really experiment with GitHub Actions in the official repo, so I'll try them out in my fork. I'll just need someone with commit access to add them once I figure out which actions satisfy the tasks above.
Update: I have a build matrix working, but the macOS and Windows jobs never finish; they always time out after 6 hours. I'm not sure whether this is a GitHub Actions limit or related to how much data we download into the runner for testing (since we are verifying run_all.py).
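For what it's worth, GitHub-hosted runners do cap each job at 6 hours by default, which would explain the timeouts. A minimal matrix workflow with an explicit shorter timeout might look like the sketch below; the file contents and the install/test commands are assumptions, not the actual workflow from the fork (and `ubuntu-latest` stands in for Linux, since GitHub does not host RHEL runners):

```yaml
# Hypothetical sketch of .github/workflows/tests.yml
name: tests
on: [push, pull_request]

jobs:
  run-all:
    strategy:
      # Keep the other OS jobs running even if one fails.
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    # Fail fast instead of hitting the 6-hour default job limit.
    timeout-minutes: 60
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - run: pip install -r requirements.txt
      - run: python run_all.py
```

If the downloads are the bottleneck, caching the test data (e.g. with actions/cache) or using a smaller dataset on CI would keep the jobs under the limit.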
As we discussed during the last meeting (2022.06.01), we could make better use of GitHub Actions to: