Closed: fredemmott closed this issue 5 years ago
@jjergus what do you think we should do here?
Could:
> keep things as-is and do this work to restore .git (slow)

We could also do it in the background while the user is already working with the sources.

> change the step functions to look for a stamp file uploaded after everything else, instead of instance state, and create success images after the upload instead, then do this work (fast, a bit more complicated)

We might want to do this anyway, to speed up the build-and-publish step function (updating repos doesn't have to wait for docker images), and the same mechanism we implement here could also be used in other places where we currently have unnecessary dependencies.
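A minimal sketch of the stamp-file mechanism; the function name, poll interval, and local-path check are all illustrative (the real check would presumably look at an uploaded object key rather than the local filesystem):

```shell
# Stamp-file idea: the publisher uploads every artifact first, then
# writes a single stamp file last; downstream steps wait for the stamp
# instead of inspecting instance state.
wait_for_stamp() {
  local stamp="$1" timeout="${2:-300}" waited=0
  until [ -e "$stamp" ]; do
    if [ "$waited" -ge "$timeout" ]; then
      echo "timed out waiting for $stamp" >&2
      return 1
    fi
    sleep 5
    waited=$((waited + 5))
  done
}
```

Because the stamp is uploaded strictly after everything else, seeing it guarantees the other artifacts are already in place, with no dependency on the state of the instance that produced them.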
> remove them, and create a new kind of image doing builds from git checkouts (fast, simple, duplicate workers/jobs)

Random idea: Should we do all builds from git checkouts? One advantage would be that builds wouldn't have to wait for the source tarball, but could start immediately (but they'd do most of the work we do for the source tarballs, so we probably wouldn't save that much time).

> drop the HHVM-on-demand idea?

I don't have a strong opinion -- I can't estimate how much it would be used, but based on the current OnDemand usage, probably not much :(
> Random idea: Should we do all builds from git checkouts? One advantage would be that builds wouldn't have to wait for the source tarball, but could start immediately (but they'd do most of the work we do for the source tarballs so we probably wouldn't save that much time).

In the past we've discovered that the release builds don't match the source tarballs; I want to make sure that doesn't regress.

> I don't have a strong opinion -- I can't estimate how much it would be used, but based on the current OnDemand usage probably not much :(

Feels like this is probably the best option short-term.
Using the source tarballs also makes bit-for-bit reproducible builds a possibility.

> In the past we've discovered that the release builds don't match the source tarballs

We could make sure that the same script is used in both cases (up to the point where we delete .git, tests, etc.).
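One way to enforce that: a single shared prepare function where the tarball-only trimming is the very last step, so everything before it is identical for both paths. All names here are hypothetical:

```shell
# Hypothetical shared prepare step: release builds and source-tarball
# creation run the exact same function; only the tarball path strips
# .git, tests, etc., and only as the final step.
prepare_sources() {
  local src="$1" mode="$2"   # mode: "build" or "tarball"
  # ... shared steps (submodule checkout, version stamping, ...) ...
  if [ "$mode" = "tarball" ]; then
    rm -rf "$src/.git"   # tarball-only trimming happens last
  fi
}
```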
Debian packages are traditionally built from the source tarballs; there is support for a git-oriented workflow, but it looks like that would involve several forks (as we have multiple different debian/ directories depending on the age of the distribution) and a lot more work.

It's possible to build outside of the debian scripts and create a package from the built artifacts; we used to do that and it was hard to maintain (e.g. runtime dependencies needed to be maintained manually, separately from the build dependencies), and it also made creating customized packages harder (as there would be no debian source package for them).
One complication is that the docker images are built from source tarballs, which do not have git metadata; this can be restored with:
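Roughly the following (a sketch: the helper name is made up, and the remote URL and ref naming below are assumptions about the real pipeline):

```shell
# Hypothetical helper: turn an unpacked source tarball back into a git
# checkout pointing at the matching release ref, without touching the
# working tree (a shallow fetch could be used to save bandwidth).
restore_git_metadata() {
  local src_dir="$1" remote="$2" ref="$3"
  git -C "$src_dir" init -q
  git -C "$src_dir" remote add origin "$remote"
  git -C "$src_dir" fetch -q origin "$ref"
  git -C "$src_dir" reset -q FETCH_HEAD   # sync HEAD/index; files stay as-is
}
```

For example, something like `restore_git_metadata hhvm-src https://github.com/facebook/hhvm.git HHVM-4.x.y` (tag name illustrative), run in the background while the user is already working with the sources.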
The third-party directory (submodules) needs some more work.