This adds some support for running the tasks on Kubernetes.
The end goal I see here is:
1. Someone adds a new image or updates an image in `k8/images` to run another project, and this gets pushed to master.
2. Magic things happen, and we end up with more data in `data/typing` and `data/api.json`.
We are still a little ways away from that, but this sets up the mechanisms to run tasks remotely.
I am currently using Argo to actually run the tasks and docker buildx to build the images.
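For context, a task run today is shaped roughly like the Argo Workflow below; the workflow name, image, and command are hypothetical placeholders, not the actual manifests in this PR.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: typing-task-            # hypothetical workflow name
spec:
  entrypoint: run-task
  arguments:
    parameters:
      - name: image
        value: registry.example.com/typing-task:v1   # hypothetical image tag
  templates:
    - name: run-task
      container:
        image: "{{workflow.parameters.image}}"
        command: ["python", "-m", "generate_typing_data"]   # hypothetical entrypoint
```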
I am now thinking of switching the whole thing to just use `buildx`, so the end result is containers that are built with the data in them, using multi-stage builds...
That would make the caching/versioning a bit easier, because right now the process to run a new build looks something like this (sketched below):
1. Edit the `Dockerfile` of an image.
2. Rebuild that image.
3. Once it's done, run a new Argo task with the new image tag.
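Concretely, steps 2 and 3 are along the lines of the commands below; the registry, tag, image directory, and workflow file are hypothetical placeholders rather than the exact invocations in this repo.

```sh
# Step 2: rebuild and push the image after editing its Dockerfile
docker buildx build \
  --platform linux/amd64 \
  -t registry.example.com/typing-task:v2 \
  --push \
  k8/images/some-project

# Step 3: run a new Argo task pointing at the new image tag
argo submit workflows/run-task.yaml -p image=registry.example.com/typing-task:v2
```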
Instead, if we did it all in `docker buildx`, we might be able to do step 3 implicitly, using Docker layer caching...
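As a rough sketch of that idea (base images, paths, and the task command here are all hypothetical): a multi-stage build could run the task in one stage and copy only the generated data into the final image, so when a stage's inputs haven't changed the layer cache is hit and the expensive task run is skipped automatically.

```dockerfile
# Stage 1: run the task and generate the data (hypothetical command and paths)
FROM python:3.11-slim AS task
WORKDIR /work
COPY . .
RUN pip install . && python -m generate_typing_data --out /work/out

# Stage 2: keep only the generated data in the final image
FROM alpine:3.19
COPY --from=task /work/out/typing /data/typing
COPY --from=task /work/out/api.json /data/api.json
```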
But I think we should merge this in as is, since I have some bugs to fix in the python package and we can iterate on this.