Open everettraven opened 3 months ago
Can the ServiceAccount be patched to attach the cluster pull secret?
@bentito Could you elaborate on what you mean by this?
The user/admin would need to do this:
```shell
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "my-pull-secret"}]}'
```
and then the code would need to reference it, for example on Pod creation:
```go
// Define a new Pod object
pod := &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "demo-pod",
		Namespace: req.Namespace,
	},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{
			{
				Name:  "demo-container",
				Image: "my-registry.example.com/my-image:latest",
			},
		},
		ServiceAccountName: "default", // Ensure this ServiceAccount has the pull secret
	},
}
```
The benefits of blessing the SA with the pull secret like this are: 1) it's centralized for any usage (Pod, Deployment, etc.); 2) there's just one place to go for credential rotation.
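For completeness, the same ServiceAccount patch could also be applied programmatically from a controller. This is only a sketch: it builds the merge-patch payload with the standard library (the client-go call that would actually send it is shown as a comment), and `my-pull-secret` is just the example name from the kubectl command above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// imagePullSecretsPatch builds the JSON merge-patch body that attaches the
// named pull secrets to a ServiceAccount, equivalent to the kubectl command above.
func imagePullSecretsPatch(secretNames ...string) ([]byte, error) {
	type secretRef struct {
		Name string `json:"name"`
	}
	refs := make([]secretRef, 0, len(secretNames))
	for _, n := range secretNames {
		refs = append(refs, secretRef{Name: n})
	}
	return json.Marshal(map[string]any{"imagePullSecrets": refs})
}

func main() {
	patch, err := imagePullSecretsPatch("my-pull-secret")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patch))
	// With client-go, this payload would then be applied roughly like:
	//   clientset.CoreV1().ServiceAccounts(ns).Patch(ctx, "default",
	//       types.MergePatchType, patch, metav1.PatchOptions{})
}
```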
@bentito thanks for the extra info on that. I forgot that in catalogd's unpacking implementation for image sources, we have ServiceAccount-configured secrets disabled for pull secrets: https://github.com/operator-framework/catalogd/blob/e6da1538304ec2af0648c918bff127632c4d0d0e/internal/source/image_registry_client.go#L58-L61
I think we should enable them for both catalogd and operator-controller (o-c should follow a pattern similar to catalogd's).
/assign
With https://github.com/operator-framework/operator-controller/commit/097d897611ec72e85e5c0d92723466846aa2d06c and related PRs merged, the context of this RFC has changed. @skattoju is working to update the RFC to reflect the current state.
@cgwalters We stumbled upon https://github.com/containers/image/pull/1746 during a discussion about how OLM can directly pull images using OCP's global pull secret and the containers/image library. I see in the PR that `/etc/containers/auth.json` is not a location that containers/image reads by default (I had originally, mistakenly, assumed it was).
Since OLM is a controller running in an OCP cluster with a randomly assigned UID, none of the default locations "just work". It seems like we need to do some combination of:
Given that PR, I'm wondering if you have any recommendation for setting up global auth in a system-level OCP component?
> how OLM can directly pull images using OCP's global pull secret

Today the OCP one is the kubelet one, which is not `/etc/containers/auth.json`. But, if we introduced `/etc/containers/auth.json`, it'd make sense to start writing it by default and have `/var/lib/kubelet/config.json` be a symlink to it, right?
> Given that PR, I'm wondering if you have any recommendation for setting up global auth in a system-level OCP component?

The PR isn't done, so it doesn't help anything today. In the meantime, yes, you'll need to get access to the kubelet pull secret and explicitly use it. There's an API object for it, I believe, which you could just fetch from your operator and project the secret into your env.
> But, if we introduced `/etc/containers/auth.json`, it'd make sense to start writing it by default and have `/var/lib/kubelet/config.json` be a symlink to it, right?

That might be a better question for the MCO and cri-o folks. Certainly, if it were the other way around and `/etc/containers/auth.json` were a symlink, it would be broken for folks mounting the entire `/etc/containers` directory and expecting it to work.
> The PR isn't done so doesn't help anything today.
>
> In the meantime, yes you'll need to get access to the kubelet pull secret and explicitly use it.
Yeah, that PR being unmerged is why I wanted to get your opinion. I'm trying to understand where on the continuum that PR is. Is it "we've agreed on the path location and high-level goal, but are just working out details"? Or is it "this is a proposal and we may reject it outright"?

If the former, that might be enough to convince us to go ahead and use that location within our controller filesystem, and we'd explicitly configure `/etc/containers/auth.json` in the `SystemContext`. If this is the ultimate path, we would eventually be able to drop our `/var/lib/kubelet/config.json` mount, since we are already mounting `/etc/containers` to pick up its other configuration.

On the other hand, if the `/etc/containers/auth.json` location itself is still up in the air, or if there's a chance its semantics would differ from "system-wide credentials", then maybe our project should choose our own filesystem location and configure our `types.SystemContext` explicitly with that path.
I think it'd probably be a good idea to wait until the location is codified upstream before just starting to write a file there. In the short term, especially for OCP, I think `/var/lib/kubelet/config.json` (on the node) and the global pull secret (in the apiserver) are going to be canonical/preferred for quite some time.
@anik120 @skattoju Given Colin's answer above, I propose we mount `/var/lib/kubelet/config.json` from the node into our pod at `/etc/operator-controller/auth.json` and then hardcode our controller to set that path in the `SystemContext` (I can't remember if we need to check whether the file exists first).

If the file location on the node ever changes, we can just change our `hostPath` volume definition to use the new location without affecting the mount path inside the container filesystem (it would remain `/etc/operator-controller/auth.json`).
Currently, there is no support for creating a `ClusterExtension` that can install a bundle that is in a private registry and requires authentication. This epic is scoped to enable that support. There are a couple of different approaches that should be supported:

- `Secret`
- `Secret`
Brief: https://docs.google.com/document/d/14abNwdBEe6bGaLczGIlgYR9ghz6gbp_qene_7znIa8o/edit
RFC: https://docs.google.com/document/d/1BXD6kj5zXHcGiqvJOikU2xs8kV26TPnzEKp6n7TKD4M/edit