radiasoft / containers

Builds Fedora on Docker or Vagrant
Apache License 2.0

proxy image builds #51

Closed · robnagler closed this issue 7 years ago

robnagler commented 8 years ago

To mitigate network failures when building images, we could put a persistent proxy cache in front of the build container. We need to figure this out, because this week has seen a lot of network failures.
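A minimal sketch of the idea, assuming a Squid container on the host (the image name, port, build image, and script are assumptions, not anything we have set up):

```sh
# Run a persistent caching proxy on the host; any Squid image would do.
docker run -d --name build-proxy -p 3128:3128 sameersbn/squid

# Point the build container at the proxy via the standard env vars.
# 172.17.0.1 is the default docker0 bridge gateway, reachable from containers.
docker run --rm \
    -e http_proxy=http://172.17.0.1:3128 \
    -e https_proxy=http://172.17.0.1:3128 \
    radiasoft/fedora bash build.sh
```

As-is this only caches plain HTTP; caching HTTPS is exactly the certificate problem discussed below.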

@elventear have you seen this done before?

robnagler commented 8 years ago

One problem is that we are going to have to spoof HTTPS, because most sites are HTTPS. I'm not sure we can inject a no-check-certificate option into all the handlers.

I don't think it is feasible to dig into each build process to configure a local file cache, an alternate URL, or even increased timeouts/retries. Perhaps we can do this for certain downloads (Warp just failed, for example, and we download that ourselves).
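For the no-check-certificate injection, one partial approach (a sketch, and it only covers tools that actually read these files) is to drop per-user config files that curl and wget consult on every invocation:

```sh
# curl reads ~/.curlrc on every run; a bare option name without dashes
# is the same as passing --insecure on the command line.
echo 'insecure' >> ~/.curlrc

# wget reads ~/.wgetrc; this is equivalent to --no-check-certificate.
echo 'check_certificate = off' >> ~/.wgetrc
```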

elventear commented 8 years ago

I don't think we can do transparent caching of HTTPS traffic. Maybe the different tools we use have different features that we can leverage?

robnagler commented 8 years ago

There are a couple of ways to fake out curl, wget, etc. One is to install our own CA certificate on the affected system. The other is to set flags like curl --insecure and wget --no-check-certificate. However, some of the tools (like the one used by synergia's contractor) use neither curl nor wget, so replacing the CA is probably the best solution. We can remove the cert after the install is complete.
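On Fedora, the CA replacement would look something like this (the cert path is an assumption; the trust store commands are the standard ones):

```sh
# Trust the proxy's CA only for the duration of the build; the cert
# path is a placeholder.
cp /tmp/proxy-ca.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract

# ... run the downloads/build here ...

# Remove it afterwards so the shipped image does not trust our CA.
rm /etc/pki/ca-trust/source/anchors/proxy-ca.crt
update-ca-trust extract
```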

There is a maintenance headache with both the certs and the cache.

Another alternative is to restart builds when we hit a known network error. The big win would be fixing code.sh to restart the build automatically. I know you (@elventear) were having problems with pip, so perhaps we could wrap that, too. What do you think?
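A sketch of what the restart wrapper could look like in code.sh (the function name, retry policy, and example commands are made up here):

```sh
# Hypothetical helper: rerun a command a few times to ride out transient
# network failures, backing off longer on each attempt.
codes_retry() {
    local try
    for try in 1 2 3; do
        "$@" && return 0
        echo "attempt $try failed: $*" 1>&2
        sleep $((try * 10))
    done
    return 1
}

# Example uses; the URL and package are placeholders.
codes_retry curl -s -S -L -O "$warp_url"
codes_retry pip install "$flaky_package"
```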

elventear commented 8 years ago

That approach tries to ensure the network always works somehow. What about a different approach, where we cache things locally? When building containers, we can mount the cache from the host into the docker container.
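For pip specifically, a minimal sketch of the host-mounted cache (the image name and package are placeholders):

```sh
# Keep pip's download cache on the host so it survives failed builds.
mkdir -p "$HOME/.cache/pip"
docker run --rm \
    -v "$HOME/.cache/pip:/root/.cache/pip" \
    radiasoft/fedora pip install numpy
```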

robnagler commented 8 years ago

The problem with a file-based approach is that we have to look into each build system's model. In particular, synergia is very complicated. You can do it, but you need to modify a lot of stuff and keep track of versions.

In some ways, I yearn for the days of RPMs, when this wasn't a problem. You rarely need to rebuild all parts of a system at the same time. Docker doesn't solve the composition problem, because it makes builds easy by imposing an order. That's a double-edged sword. RPMs are composable, because they are designed to be idempotent in the face of uncertain initial conditions. Docker images are easy to build, because the initial conditions are constant, and there's no need for uninstall (the most difficult part of creating RPMs).

With many of these codes, the downloads happen at the start, so I think restartable builds are going to give us the most bang for the buck.

We don't have time for this now (there's a cycle here), but we should incorporate it soon, especially with regard to upgrading to whatever base OS we choose going forward.

robnagler commented 7 years ago

Now that we are building on Travis (and they'll extend our FOSS build times), we don't need this.