sbrunk closed this pull request 11 months ago.
I think this looks good, thanks!
I guess this means that in the future, if we for instance want to add an extension to Postgres, we need to do that in a separate PR and then make use of it in a follow-up PR? Sounds about optimal to me, if the main CI flow doesn't involve creating new Docker images. Am I getting it more or less right?
I'll go ahead and merge this right away, I guess; as I'm sure you can guess, my knowledge of all this is rather superficial. If there are any problems, we'll fix them as we see them.
In other news #65 is good to go, my todo list is just to add the comment about PG16 being needed for `insertStreamingUnsaved`, or whatever I called it. I'll try to squeeze it through CI with these changes tomorrow.
> I guess this means that in the future, if we for instance want to add an extension to Postgres, we need to do that in a separate PR and then make use of it in a follow-up PR? Sounds about optimal to me, if the main CI flow doesn't involve creating new Docker images. Am I getting it more or less right?
Yes, exactly. The original idea was to run the Docker build in the main CI workflows as well, but only when the Docker setup actually changes, to avoid that. The Docker images are relatively static though, so if you're fine with the current solution, let's keep it that way.
> In other news #65 is good to go, my todo list is just to add the comment about PG16 being needed for `insertStreamingUnsaved`, or whatever I called it. I'll try to squeeze it through CI with these changes tomorrow.
That's great. I've used the `insertStreaming` with COPY from #65 to speed up a pipeline, and it's just amazing how much faster it became. Mix in some batching and fs2 `parEvalMap` magic and now the DB is never the bottleneck in that pipeline. :)
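For reference, that pipeline is shaped roughly like the sketch below. It's just a minimal, self-contained illustration with placeholder names (`Row`, `insertBatch`, the batch size and the concurrency level are all made up), where `insertBatch` stands in for the COPY-based insert from #65:

```scala
import cats.effect.{IO, IOApp}
import fs2.{Chunk, Stream}

object StreamingLoadSketch extends IOApp.Simple {
  // Placeholder row type; the real pipeline uses generated row case classes.
  final case class Row(id: Int, payload: String)

  // Placeholder for the COPY-based insert. Blocking JDBC work belongs on the
  // blocking pool, hence IO.blocking.
  def insertBatch(rows: Chunk[Row]): IO[Unit] =
    IO.blocking(println(s"would COPY ${rows.size} rows here"))

  // Stand-in source; the real stream comes from the upstream system.
  val source: Stream[IO, Row] =
    Stream.range(0, 100000).map(i => Row(i, s"payload-$i"))

  val run: IO[Unit] =
    source
      .chunkN(10000)              // batch rows so each COPY call gets a decent chunk
      .parEvalMap(4)(insertBatch) // a few batches in flight at once, output order preserved
      .compile
      .drain
}
```

`parEvalMap` keeps the output order; `parEvalMapUnordered` can squeeze out a bit more throughput when the order doesn't matter.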
I'll try again today with your latest commits.
We'd like to run generation and tests against multiple Postgres versions, so we need multiple images.
This is the first step: building and pushing multiple images. Using the new images will come in a subsequent step, but it is already prepared in this PR as well.
We also use an additional matrix for Scala versions to avoid repeated job definitions.
@oyvindberg I played a bit with using dependent jobs to build the Docker image and then use the just-built image to run the tests, without having to push it to a registry. But it was slow: we have to tar the image, upload it as an artifact, and then download it in the next job. It's probably possible to improve, but that needs more effort, so I settled on keeping the two decoupled workflows for now. That's easier, but it also means we need two different PRs (see TODOs).