Closed — jdjfisher closed this 3 years ago
@JDJFisher thanks for this. We will check this out. I should note that @TobyFlynn is looking at your code gen work to merge into our main repo. If you are interested we can certainly use the help.
Thank you for setting this up! I assume it does not measure performance. So what I would suggest doing is to generate a smaller mesh using naca0012.m and validate with that. Do you need help setting this up?
> I should note that @TobyFlynn is looking at your code gen work to merge into our main repo. If you are interested we can certainly use the help.
Nice, I currently don't have a lot of free time but I can help out a bit. Just let me know where I can help out :+1:
> I assume it does not measure performance.
Nope. The workflows run in containers on the default GH Actions runner, so I'm not sure how consistent the performance would be.
We could add a step to log the performance figures that the app prints to stdout in debug mode, and then analyse them across multiple executions of the job. If they're fairly consistent, we could introduce another step into the workflow that detects a decrease in performance by comparing against the runtimes of previous executions of the action on the master branch.
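To sketch the idea, a regression check like this could compare the current run's timing against a history of timings from previous master runs. This is only a hypothetical illustration: the history file format, script name, and the 10% threshold are all assumptions, not anything that exists in this PR.

```python
# Hypothetical sketch of a CI performance-regression check.
# Assumes a JSON file holding a list of runtimes (in seconds) collected
# from previous master runs, e.g. [102.3, 99.8, 101.1].
import json
import statistics
import sys

THRESHOLD = 1.10  # assumed: flag runs more than 10% slower than the median


def check_regression(history_path, current_runtime):
    """Return True if current_runtime looks like a regression
    relative to the median of the recorded history."""
    with open(history_path) as f:
        history = json.load(f)
    if len(history) < 3:
        # Too few data points to judge whether timings are consistent.
        return False
    baseline = statistics.median(history)
    return current_runtime > baseline * THRESHOLD


if __name__ == "__main__" and len(sys.argv) > 2:
    # Usage (hypothetical): python check_perf.py history.json 104.2
    runtime = float(sys.argv[2])
    if check_regression(sys.argv[1], runtime):
        print(f"Possible regression: {runtime:.1f}s")
        sys.exit(1)
    print("Runtime within expected range")
```

Using the median rather than the mean makes the baseline less sensitive to the occasional slow runner, which matters if the jobs stay on shared GitHub-hosted machines.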
If you're interested in doing something like this, a self-hosted runner for the execution of jobs could be set up. This would probably help with consistent performance. https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners
> So what I would suggest doing is to generate a smaller mesh using naca0012.m and validate with that. Do you need help setting this up?
Oh yeah, a smaller mesh is probably a good idea. Yeah, I'm not familiar with that so I'd probably need a hand. Although for the time being, I think using the standard mesh is probably fine :+1:
@gihanmudalige @reguly This is ready for review. There is a lot more that can be done here, but I think this is a good start. In its current state this workflow provides end-to-end testing for the plain C++ airfoil with `seq`, `genseq`, `vec` and `openmp`. I also looked into `cuda`, which built just fine, but I ran into problems installing drivers for the runtime.
If you're interested in self-hosting the workflow runs on a dedicated platform let me know. I can look into that for you.
https://github.com/JDJFisher/OP2-Common/actions/runs/1071823409
@JDJFisher apologies for taking this long... Once merged, where do I enable this CI test? Or does it just automatically start running once merged on each push?
No problem. Yes, the CI should start running automatically once this is merged unless "Actions" has been explicitly disabled in the repository settings.
This workflow will run on all pushes to master and on pull requests into master (from contributors). This is pretty standard practice.
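For reference, the trigger section of a workflow set up this way typically looks something like the following (a sketch only; the branch name and event filters are assumed to match this repo's setup rather than copied from the actual workflow file):

```yaml
# Run CI on pushes to master and on pull requests targeting master
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]
```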
Also, it might be a good idea to squash merge this PR :+1:
Here's a basic GitHub Actions workflow to validate airfoil and provide some end-to-end regression testing.
This particular job takes ~11 minutes to run on the unoptimised airfoil, so it might be a good idea to reduce the iteration count and increase the error margin before the build step in the workflow.
The next step is to look into running the translator and building `genseq`, `cuda`, etc.

Edit: The check hasn't run because of the current repository settings, so here's the log from the fork: https://github.com/JDJFisher/OP2-Common/pull/1/checks