Open gabemontero opened 4 years ago
today openshift build v1 can obtain certs for the global proxy
Could this be put in a secret and linked to the service account which runs the builds?
No, that only works with auth, @sbose78, not for certs.
So what you'll see for the default certs in build v1 is that configmaps are created to house the certs.
I've captured the mechanics of what build v2 will need in obu .... see https://github.com/gabemontero/obu/blob/master/pkg/cmd/cli/cmd/global_proxy_config.go
So ultimately, before calling buildah, you can use the `--cert-dir` option for buildah.
Presumably the other build tools also consume the well-known proxy env vars and have options analogous to `--cert-dir`.
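To make that concrete, here is a minimal sketch of what a buildah-based strategy step could look like once the proxy CA configmap is mounted into the step. The strategy name, image, mount path, and tag are all illustrative assumptions, not settled API:

```yaml
# Hypothetical fragment of a buildah-based build strategy.
# Assumes the proxy CA configmap has already been mounted into the
# step's filesystem at /var/run/proxy-ca (path is illustrative).
apiVersion: shipwright.io/v1alpha1
kind: ClusterBuildStrategy
metadata:
  name: buildah-with-proxy-ca
spec:
  buildSteps:
    - name: build
      image: quay.io/buildah/stable
      command:
        - buildah
      args:
        - bud
        - --cert-dir=/var/run/proxy-ca   # point buildah at the proxy CA certs
        - --tag=registry.example.com/app:latest
        - .
```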
What would be the format to specify the proxy settings? Should we expose it as a dedicated attribute in `Build`, or do we want to allow users to control the environment variables?
In build v1 we did not surface proxy-related items explicitly in the API, @otaviof. The build controller just has to always set the env vars on the build pod and mount the CA from the configmap in the build pod in order for builds to work correctly when OpenShift's global proxy is in play.
The obu link in my previous comment shows how to retrieve that information.
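For reference, the cluster-scoped object that code like obu reads is OpenShift's global proxy configuration, which looks roughly like this (values are illustrative):

```yaml
# OpenShift's cluster-wide proxy configuration object.
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy.example.com:3128
  httpsProxy: http://proxy.example.com:3128
  noProxy: .cluster.local,.svc,localhost
  trustedCA:
    name: user-ca-bundle   # configmap in openshift-config holding additional CAs
```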
@gabemontero The snippet is really helpful, thank you!
A related topic: Where are the trusted CA certs stored in OpenShift / Kubernetes?
Specific to the OpenShift global proxy, the CAs are in a well-known configmap, and there is a separate controller that can inject those CAs into any configmap in any namespace if that configmap has a certain label.
In general, CAs are stored in either configmaps or secrets which are mounted in a pod.
By default, the CA for talking to the apiserver is included with the serviceaccount mount for any pod (i.e. every pod has an SA associated with it, and the auth token from that SA is included in the mount, along with the namespace the SA is from (and the pod is running in) and the CA for the apiserver).
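As a concrete illustration of the label-based injection described above: you create an empty configmap carrying the well-known label, and OpenShift's injection controller populates the `ca-bundle.crt` key (the name and namespace here are illustrative):

```yaml
# Empty configmap requesting trusted CA injection; the injection
# controller fills in data["ca-bundle.crt"] with the trusted bundle.
apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-ca-bundle
  namespace: my-builds
  labels:
    config.openshift.io/inject-trusted-cabundle: "true"
data: {}
```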
Tekton may support this feature upstream - it's possible with the right configuration we get this for free.
@gabemontero this issue seems to cover two pieces:
1. env vars
2. the `--cert-dir` option
Both points are supposed to apply to certificates for a container registry and to something very OpenShift-specific (a.k.a. the global proxy).
We already have issues to address 1) (see https://github.com/shipwright-io/build/issues/48) and 2) (see https://github.com/shipwright-io/build/issues/224). This issue is marked for the v1.0.0 release, but it is not very clear what exactly needs to be done here, assuming most of the missing pieces are already covered by other issues.
Opinions?
Your point 1 is not quite right, @qu1queee.
The env vars are first and foremost the well-known http/https proxy and no-proxy settings, so that tools like git perform the necessary changes in their remote interactions.
I agree with your point that #48 and #224 are generic alternatives.
Though it can also be said that proxies are a common enough fact of life for our users that some additional, directed application of #48 and #224 on a global scale, based on the presence of a global proxy for a customer, could aid cluster administrators. The TEPs in upstream Tekton could in fact more or less be categorized along these lines (as is the proxy-specific support we provide in OpenShift).
To that end, this could be another "keep an item open to monitor Tekton in this space" tracker.
If we went down that path, what to do with the fact that this has been assigned to the 1.0.0 GA milestone? I would agree that upstream trackers like this should not necessarily be included in that milestone.
Unless @sbose78 @siamaksade @adambkaplan disagree, I'm good with moving it out of that milestone.
@gabemontero ok, +1 on removing this from 1.0.0
We do need to verify that standard ways of passing proxy information are respected by the image build operation in a way that is agnostic to the distribution of Kubernetes.
Example:
* Globally, the build operation should respect the http_proxy env var (and other similar ones).
* Globally, the build operation should respect the existence of a proxy certificate in a standard location (a configmap).
* Globally, our build tools/strategies should be made aware of the above information.
Additionally, there could be `Build`-specific proxy information that could be passed using env vars and volumes in the spec, e.g. the sketch below.
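A rough, non-authoritative sketch of what that could look like on a `Build` (the `env` field assumes the env var support discussed later in this thread; all names and values are illustrative):

```yaml
# Hypothetical Build carrying proxy settings as env vars.
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
  name: proxy-aware-build
spec:
  source:
    url: https://github.com/example/app
  strategy:
    name: buildah
    kind: ClusterBuildStrategy
  env:
    - name: HTTP_PROXY
      value: http://proxy.example.com:3128
    - name: HTTPS_PROXY
      value: http://proxy.example.com:3128
    - name: NO_PROXY
      value: .cluster.local,.svc,localhost
  output:
    image: registry.example.com/example/app:latest
```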
I would like to keep this issue around to ensure we at least validate & document how to use global and local proxy information for Builds in Shipwright - even if we don't necessarily have to do anything explicitly :)
Let me know, @qu1queee .
A bit of an update on this one since @sbose78 last checked in at the beginning of this year (now 9 months ago).
with https://github.com/shipwright-io/build/pull/817 and https://github.com/shipwright-io/cli/pull/27 from @coreydaley we should have sufficient env var support such that the proxy-related env vars can be specified and consumed in a shipwright buildstrategy/build/buildrun tuple.
the SHIP https://github.com/shipwright-io/community/pull/23 from @adambkaplan for mounting volumes lays out a generic solution which could include additional configmaps/secrets needed for mounting the certs/creds for interacting with proxies (which of course are different from the secrets etc. shipwright currently has for authenticating / communicating with SCMs, i.e. git repos).
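If that SHIP lands roughly as proposed, mounting the proxy CA could look something like this (a hypothetical shape based on the proposal, not a settled API; the strategy and configmap names reference the earlier sketches in this thread):

```yaml
# Hypothetical volume usage per the volumes SHIP: the strategy declares
# and mounts a volume; the Build supplies the configmap holding the CAs.
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
  name: proxy-aware-build
spec:
  strategy:
    name: buildah-with-proxy-ca
    kind: ClusterBuildStrategy
  volumes:
    - name: proxy-ca          # overrides the strategy-declared volume
      configMap:
        name: trusted-ca-bundle
```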
Once both are in place, we can see about, say, a blog post to introduce the connect-with-a-proxy topic, along with probably some additions to our existing API object markdowns.
today openshift build v1 can obtain certs for the global proxy and ensure they are available to the build process
ultimately build v2 should have some form of this
it very well might make sense for DEVEX to help abstract out the logic used for build v1 for use by build v2 .... https://github.com/gabemontero/obu is a prototype of such an endeavor