Closed: markbaumgarten closed this issue 2 years ago
Yes, it's working with containerd. The same principles (https_proxy, MITM certs) used for dockerd can be used for containerd, but you can also use containerd's TOML configs to fine-tune things that were not possible in the docker case. I have not had time to write up documentation; indeed, all the docs we have here were written by contributors, so please send PRs.
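Roughly, for containerd that means a systemd drop-in for the proxy environment plus trusting the proxy CA on each node. A minimal, untested sketch (<proxy-host> is a placeholder; Ubuntu CA paths assumed):

    # /etc/systemd/system/containerd.service.d/http-proxy.conf
    [Service]
    Environment="HTTP_PROXY=http://<proxy-host>:3128/"
    Environment="HTTPS_PROXY=http://<proxy-host>:3128/"

    # Trust the proxy's CA and restart containerd:
    curl http://<proxy-host>:3128/ca.crt > /usr/share/ca-certificates/docker_registry_proxy.crt
    echo "docker_registry_proxy.crt" >> /etc/ca-certificates.conf
    update-ca-certificates --fresh
    systemctl daemon-reload && systemctl restart containerd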
My initial attempt is failing due to an incorrect media type, so it's not a drop-in replacement ATM:
level=error msg="Failed to handle backOff event &ImageCreate{Name:k8s.gcr.io/pause:3.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],} for k8s.gcr.io/pause:3.5" error="update image store for \"k8s.gcr.io/pause:3.5\": get image info from containerd: get image diffIDs: unexpected media type application/octet-stream
Should I create a new issue then? I don't understand why we are closing this.
I have spent some time on this issue, and I really don't know how to make containerd understand how to use this as a proxy.
I guess I can at least contribute by explaining how NOT to do it:
Here's my demo k8s node's containerd /etc/containerd/config.toml (the last two lines were added by me):
version = 2
root = "/var/lib/containerd"
state = "/run/containerd"
oom_score = 0

[grpc]
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

[debug]
  level = "debug"

[metrics]
  address = ""
  grpc_histogram = false

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "k8s.gcr.io/pause:3.3"
    max_container_log_line_size = -1
    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      snapshotter = "overlayfs"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          runtime_engine = ""
          runtime_root = ""
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
          endpoint = ["http://10.100.40.20:3128/"]
This is me running the proxy:
docker run --rm --name docker_registry_proxy -it -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true -e REGISTRIES="k8s.gcr.io gcr.io quay.io" -v $(pwd)/docker_mirror_cache:/docker_mirror_cache -v $(pwd)/docker_mirror_certs:/ca rpardini/docker-registry-proxy:0.6.2
Adding certificate for registry: docker.caching.proxy.internal
Adding certificate for registry: registry-1.docker.io
Adding certificate for registry: auth.docker.io
Adding certificate for registry: k8s.gcr.io
Adding certificate for registry: gcr.io
Adding certificate for registry: quay.io
INFO: Will create certificate with names DNS:docker.caching.proxy.internal,DNS:registry-1.docker.io,DNS:auth.docker.io,DNS:k8s.gcr.io,DNS:gcr.io,DNS:quay.io
INFO: CA already exists. Good. We'll reuse it.
INFO: Generate IA key
INFO: Create a signing request for the IA: 06eb570a9ad8 2022.03.28 14:42
INFO: Sign the IA request with the CA cert and key, producing the IA cert
INFO: Initialize the serial number for signed certificates
INFO: Create the key (w/o passphrase..)
INFO: Create the signing request, using extensions
INFO: Sign the request, using the intermediate cert and key
INFO: Concatenating fullchain.pem...
INFO: Concatenating fullchain_with_key.pem
Adding Auth for registry 'some.authenticated.registry' with user 'oneuser'.
Adding Auth for registry 'another.registry' with user 'user'.
Manifest caching config: ---
# First tier caching of manifests; configure via MANIFEST_CACHE_PRIMARY_REGEX and MANIFEST_CACHE_PRIMARY_TIME
location ~ ^/v2/(.*)/manifests/(stable|nightly|production|test) {
    set $docker_proxy_request_type "manifest-primary";
    proxy_cache_valid 10m;
    include "/etc/nginx/nginx.manifest.stale.conf";
}

# Secondary tier caching of manifests; configure via MANIFEST_CACHE_SECONDARY_REGEX and MANIFEST_CACHE_SECONDARY_TIME
location ~ ^/v2/(.*)/manifests/(.*)(\d|\.)+(.*)(\d|\.)+(.*)(\d|\.)+ {
    set $docker_proxy_request_type "manifest-secondary";
    proxy_cache_valid 60d;
    include "/etc/nginx/nginx.manifest.stale.conf";
}

# Default tier caching for manifests. Caches for 1h (from MANIFEST_CACHE_DEFAULT_TIME)
location ~ ^/v2/(.*)/manifests/ {
    set $docker_proxy_request_type "manifest-default";
    proxy_cache_valid 1h;
    include "/etc/nginx/nginx.manifest.stale.conf";
}
---
Timeout configs: ---
# Timeouts
# ngx_http_core_module
keepalive_timeout 300s;
send_timeout 60s;
client_body_timeout 60s;
client_header_timeout 60s;
# ngx_http_proxy_module
proxy_read_timeout 60s;
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
# ngx_http_proxy_connect_module - external module
proxy_connect_read_timeout 60s;
proxy_connect_connect_timeout 60s;
proxy_connect_send_timeout 60s;
---
Upstream SSL certificate verification enabled.
Testing nginx config...
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Starting nginx! Have a nice day.
And when I pull from the client machine, it pulls directly from quay.io (the cache folder is empty).
sudo ctr image pull quay.io/aptible/busybox:latest
quay.io/aptible/busybox:latest: resolved |++++++++++++++++++++++++++++++++++++++|
layer-sha256:3dc2d28a236c994b81dde05c22d91d7d53069683f31d92caa76633cf0776312b: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:8293be41dd1a2dba838f097528c8f958fb3e603d12ba4af11648c1cf02d4ddcb: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:d956721ca37e16629cd76e1e35ac9a7ca1542f9fd8e83332db57a242ed5d946f: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:1bb94794574296f00de0ce392a0f90141a5cc740d6aa8e2c69d36302fbca6c98: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:5a085a30ef4bd4333306c51cb744cca70de0bb7fb864442172d2aa33481c2a8e: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:ca3be78f492658072971b37f895f08a70b708c10a92a71b53a17c7b59899dc28: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:74d7f2034d355fa4b2c22562eb5511d90424021dff8626eb4cd9ff352f3761e3: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:f40947f0effd5741cb69a67d38ea20a4fb324bea354631272e9461b1001b1cd8: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:34bb7110f39f1f4167b81b7679b932fb72998ad178ab68476844d232526ab521: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:08d1d294a17fe49b00be4aa590bceabc397b581f6e6a26850d931be515770b7f: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:0b4c2760b9bb79147d2c6a26d9fbd917d57d4ccd478d70e5150c045b28f5f036: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:379a828a121464cfc488a01da3d508312de12962011694b6d2311738122ecdf1: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 3.0 s total: 5.0 Mi (1.7 MiB/s)
unpacking linux/amd64 sha256:e6503aa94695769f2e7e4e59d4b18737248a6059a42c122925aec8df09467fb2...
done: 442.996464ms
BUT: The docker_mirror_cache folder is empty :-(
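One likely reason: ctr talks to containerd's native API and bypasses the CRI plugin entirely, so the registry.mirrors entries above never apply to ctr image pull, and the fetch went straight to quay.io. A hedged re-test through the proxy (assuming ctr fetches in the client process and the proxy CA has already been trusted on the node):

    sudo https_proxy=http://10.100.40.20:3128 ctr image pull quay.io/aptible/busybox:latest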
Is there any further update on supporting containerd as the container runtime in a cloud-provider Kubernetes cluster?
I tried it in an IBM Cloud IKS cluster with containerd.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: docker-registry-proxy
spec:
  selector:
    matchLabels:
      app: docker-registry-proxy
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 100%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: docker-registry-proxy
    spec:
      terminationGracePeriodSeconds: 360
      containers:
      - env:
        - name: ENABLE_MANIFEST_CACHE
          value: "true"
        - name: REGISTRIES
          value: "quay.io na.artifactory.taas.kyndryl.net"
        image: rpardini/docker-registry-proxy:0.6.2
        imagePullPolicy: Always
        name: docker-registry-proxy
        resources: {}
        volumeMounts:
        - mountPath: /docker_mirror_cache
          name: docker-mirror-cache-volume
        - mountPath: /ca
          name: proxy-ca-volume
      restartPolicy: Always
      volumes:
      - name: docker-mirror-cache-volume
        hostPath:
          # directory location on host
          path: /docker_mirror_cache
          # this field is optional
          type: ""
      - name: proxy-ca-volume
        hostPath:
          # directory location on host
          path: /docker_mirror_certs
          # this field is optional
          type: ""
status:
  currentNumberScheduled: 1
  desiredNumberScheduled: 1
  numberAvailable: 1
  numberMisscheduled: 0
  numberReady: 1
  observedGeneration: 1
  updatedNumberScheduled: 1
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-proxy-svc
spec:
  selector:
    app: docker-registry-proxy
  ports:
    - protocol: TCP
      port: 3128
      targetPort: 3128
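The Service's ClusterIP is what the proxy environment below points at; a quick way to look it up (plain kubectl, nothing project-specific):

    kubectl get svc docker-registry-proxy-svc -o jsonpath='{.spec.clusterIP}'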
# cat /host/etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://172.21.188.243:3128/"
Environment="HTTPS_PROXY=http://172.21.188.243:3128/"
# ls /host/etc/containerd/certs.d/
docker.io quay.io
# cat /host/etc/containerd/certs.d/quay.io/hosts.toml
server = "https://quay.io"
[host."https://quay.io"]
capabilities = ["pull", "resolve"]
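A sanity check worth doing on the node itself: the http-proxy.conf drop-in only takes effect after a daemon-reload and a containerd restart, and the resulting environment can be inspected with standard systemd commands (not specific to this project):

    systemctl daemon-reload && systemctl restart containerd
    systemctl show containerd --property=Environment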
The registry.mirrors section of the containerd config file is already deprecated (see https://github.com/containerd/containerd/blob/main/docs/cri/registry.md#configure-registry-endpoint), so that approach does not work, and there are also open issues where a registry mirror cannot fall back to the upstream repo: https://github.com/containerd/containerd/issues/7321 and https://github.com/containerd/containerd/issues/4531
### UBUNTU
# Get the CA certificate from the proxy and make it a trusted root.
curl http://<service-cluster-ip>:3128/ca.crt > /usr/share/ca-certificates/docker_registry_proxy.crt
echo "docker_registry_proxy.crt" >> /etc/ca-certificates.conf
update-ca-certificates --fresh
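With the CA trusted, a hedged end-to-end smoke test of the proxy path (same <service-cluster-ip> placeholder as above; an auth error from quay.io here would still prove the MITM cert chain and the proxying work):

    curl --proxy http://<service-cluster-ip>:3128 https://quay.io/v2/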
After all of the above configuration, we could not find a way to restart the containerd service from inside the debug pod (it probably shouldn't be supported), and when verifying by using a pod to pull the calico-etcd image from quay.io, it did not work and nothing was cached in the proxy.
Kubernetes seems to be moving away from Docker and towards containerd.
Does anyone have an example showing how to set up containerd for this registry-proxy to work, preferably using a k8s DaemonSet?