ewels opened 1 year ago
What if we used: https://cirun.io/
Free for open-source, and might save us the headache of managing it.
The other option: have we talked to GitHub about getting some credits for runners? I know we've asked for a bump during hackathons.
Regarding cirun, our main opposition points in the past were:
Awesome, those are good reasons!
Hey @mashehu, I want to jump in on this one:
> need for yaml file in every repo we want to use this in.
This isn't a requirement: you can create a single config file for the whole organization (in a `.cirun` repo) and that is sufficient. Example: https://github.com/conda-forge/.cirun/blob/master/.cirun.global.yml
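For a rough idea of the shape of such a file (the field names follow cirun's config format, but the cloud, instance type and label values here are made up, not taken from the linked conda-forge file):

```yaml
# .cirun.global.yml — organization-wide runner definitions (illustrative sketch)
runners:
  - name: gpu-large
    cloud: openstack              # cloud provider to provision on
    instance_type: g1.large       # hypothetical flavour name
    labels:
      - cirun-openstack-gpu-large # label referenced by runs-on in workflows
```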
After this you can just do the following in all/any of your repositories:
```yaml
# This label is coming from the global cirun config.
runs-on: cirun-openstack-gpu-large
```
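In context, a repo workflow using that label would look something like this (the workflow name and steps are placeholders, not a real nf-core workflow):

```yaml
# .github/workflows/gpu-tests.yml (illustrative)
name: GPU tests
on: [push]
jobs:
  test:
    # This label is coming from the global cirun config.
    runs-on: cirun-openstack-gpu-large
    steps:
      - uses: actions/checkout@v4
      - run: echo "run the test suite here"  # placeholder step
```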
> can't split it up nicely between github runners and self-hosted runners

I didn't understand this point though. Can you elaborate on it?
PS: I am the creator of cirun, happy to help with anything here.
Currently, we have a single EC2 instance running all Actions jobs. It is persistent and runs multiple jobs at once, so we have to be careful about things like cleaning up data after each run to stop the storage from filling up.
Long term, I think it would be better to use auto-scaling runners as described in the GitHub docs. These are set up with either Kubernetes or Terraform and spin up a new instance for each Actions job. The instances are isolated from one another and discarded after each job, so a clean environment is guaranteed and storage shouldn't fill up over time. It also means the pool of runners will scale with the number of CI jobs being submitted, hopefully giving us zero queue time during hackathons etc.
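For the Kubernetes route, the auto-scaling setup above could be sketched with actions-runner-controller roughly like this (using the summerwind ARC CRDs; the org name, label and replica counts are placeholders, not a tested configuration):

```yaml
# Ephemeral, auto-scaling self-hosted runners via actions-runner-controller (sketch)
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: org-runners
spec:
  template:
    spec:
      organization: my-org      # placeholder GitHub organization
      ephemeral: true           # fresh, single-use runner per job
      labels:
        - self-hosted-autoscaled
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: org-runners-autoscaler
spec:
  scaleTargetRef:
    name: org-runners
  minReplicas: 0                # scale to zero when idle
  maxReplicas: 10               # cap concurrent runners
```

The `ephemeral: true` flag is what gives the clean-instance-per-job guarantee described above.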