Closed danny-does-stuff closed 6 months ago
@danny-does-stuff Could you try adding `mode: "0444"` to the file asset?
https://kops.sigs.k8s.io/cluster_spec/#mode
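For reference, a minimal sketch of what that suggestion looks like in the cluster spec (the name and path here are placeholders; the quoting matters so YAML keeps the leading zero as a string):

```yaml
fileAssets:
- name: audit-policy-config            # placeholder name
  path: /etc/kubernetes/audit/policy-config.yaml
  mode: "0444"                         # quoted octal string
  roles:
  - ControlPlane
  content: |
    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
    - level: Metadata
```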
Hi, I can confirm the behaviour as well, and I tried setting mode as advised. The file is created, but it is not added to the volumeMounts in the kube-apiserver manifest. Cluster spec:
```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2023-07-20T13:16:44Z"
  generation: 1
  name: dev.1690-audit.k8s.local
spec:
  addons:
  - ...
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  certManager:
    enabled: false
  channel: stable
  cloudConfig:
    openstack:
      blockStorage:
        bs-version: v2
        createStorageClass: false
        ignore-volume-az: true
        override-volume-az: nova
      loadbalancer:
        floatingNetwork: public
        floatingNetworkID: 91371e55-9cc1-4ed0-bbdc-a7476669b4bd
        manageSecurityGroups: true
        method: ROUND_ROBIN
        provider: haproxy
        useOctavia: false
      monitor:
        delay: 1m
        maxRetries: 3
        timeout: 30s
      router:
        externalNetwork: public
  cloudControllerManager:
    clusterName: dev.1690-audit.k8s.local
    image: k8scloudprovider/openstack-cloud-controller-manager:v1.25.6
  cloudProvider: openstack
  configBase: ...
  containerRuntime: containerd
  containerd:
    registryMirrors:
      docker.io:
      - ...
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-zone-01
      name: etcd-zone-01
      volumeSize: 2
      volumeType: fast-1000
    - instanceGroup: master-zone-02
      name: etcd-zone-02
      volumeSize: 2
      volumeType: fast-1000
    - instanceGroup: master-zone-03
      name: etcd-zone-03
      volumeSize: 2
      volumeType: fast-1000
    manager:
      env:
      - name: ETCD_MANAGER_HOURLY_BACKUPS_RETENTION
        value: 7d
      - name: ETCD_MANAGER_DAILY_BACKUPS_RETENTION
        value: 14d
    memoryRequest: 100Mi
    name: main
    provider: Manager
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-zone-01
      name: etcd-zone-01
      volumeSize: 2
      volumeType: fast-1000
    - instanceGroup: master-zone-02
      name: etcd-zone-02
      volumeSize: 2
      volumeType: fast-1000
    - instanceGroup: master-zone-03
      name: etcd-zone-03
      volumeSize: 2
      volumeType: fast-1000
    manager:
      env:
      - name: ETCD_MANAGER_HOURLY_BACKUPS_RETENTION
        value: 7d
      - name: ETCD_MANAGER_DAILY_BACKUPS_RETENTION
        value: 14d
    memoryRequest: 100Mi
    name: events
    provider: Manager
  fileAssets:
  - content: |
      apiVersion: audit.k8s.io/v1
      kind: Policy
      rules:
      - level: Metadata
    mode: "0444"
    name: audit-policy-config
    path: /etc/kubernetes/audit/policy-config.yaml
    roles:
    - ControlPlane
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    allowPrivileged: true
    auditLogMaxAge: 10
    auditLogMaxBackups: 1
    auditLogMaxSize: 100
    auditLogPath: /var/log/kube-apiserver-audit.log
    auditPolicyFile: /etc/kubernetes/audit/policy-config.yaml
    oidcClientID: kubernetes
    oidcGroupsClaim: groups
    oidcIssuerURL: ...
    oidcUsernameClaim: email
  kubeProxy:
    metricsBindAddress: 0.0.0.0
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.25.10
  masterPublicName: api.dev.1690-audit.k8s.local
  metricsServer:
    enabled: true
    insecure: true
  networkCIDR: 10.0.0.0/20
  networking:
    cilium: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  sshKeyName: dev.1690-audit.k8s.local
  subnets:
  - cidr: ...
    name: zone01
    type: Private
    zone: local_zone_01
  - cidr: ...
    name: zone02
    type: Private
    zone: local_zone_02
  - cidr: ...
    name: zone03
    type: Private
    zone: local_zone_03
  - cidr: ...
    name: utility-zone01
    type: Utility
    zone: local_zone_01
  topology:
    bastion:
      bastionPublicName: bastion.dev.1690-audit.k8s.local
    dns:
      type: Private
    masters: private
    nodes: private
```
Here is the kube-apiserver manifest that gets created in /etc/kubernetes/manifests/:
```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    dns.alpha.kubernetes.io/internal: api.internal.dev.1690-audit.k8s.local
    kubectl.kubernetes.io/default-container: kube-apiserver
  creationTimestamp: null
  labels:
    k8s-app: kube-apiserver
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - args:
    - --log-file=/var/log/kube-apiserver.log
    - --also-stdout
    - /usr/local/bin/kube-apiserver
    - --allow-privileged=true
    - --anonymous-auth=false
    - --api-audiences=kubernetes.svc.default
    - --apiserver-count=3
    - --audit-log-maxage=10
    - --audit-log-maxbackup=1
    - --audit-log-maxsize=100
    - --audit-log-path=/var/log/kube-apiserver-audit.log
    - --audit-policy-file=/etc/kubernetes/audit/policy-config.yaml
    - --authorization-mode=Node,RBAC
    - --bind-address=0.0.0.0
    - --client-ca-file=/srv/kubernetes/ca.crt
    - --cloud-config=/etc/kubernetes/in-tree-cloud.config
    - --cloud-provider=external
    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota
    - --enable-aggregator-routing=true
    - --etcd-cafile=/srv/kubernetes/kube-apiserver/etcd-ca.crt
    - --etcd-certfile=/srv/kubernetes/kube-apiserver/etcd-client.crt
    - --etcd-keyfile=/srv/kubernetes/kube-apiserver/etcd-client.key
    - --etcd-servers-overrides=/events#https://127.0.0.1:4002
    - --etcd-servers=https://127.0.0.1:4001
    - --kubelet-client-certificate=/srv/kubernetes/kube-apiserver/kubelet-api.crt
    - --kubelet-client-key=/srv/kubernetes/kube-apiserver/kubelet-api.key
    - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
    - --oidc-client-id=kubernetes
    - --oidc-groups-claim=groups
    - --oidc-issuer-url=....
    - --oidc-username-claim=email
    - --proxy-client-cert-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator.crt
    - --proxy-client-key-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator.key
    - --requestheader-allowed-names=aggregator
    - --requestheader-client-ca-file=/srv/kubernetes/kube-apiserver/apiserver-aggregator-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=443
    - --service-account-issuer=https://api.internal.dev.1690-audit.k8s.local
    - --service-account-jwks-uri=https://api.internal.dev.1690-audit.k8s.local/openid/v1/jwks
    - --service-account-key-file=/srv/kubernetes/kube-apiserver/service-account.pub
    - --service-account-signing-key-file=/srv/kubernetes/kube-apiserver/service-account.key
    - --service-cluster-ip-range=...
    - --storage-backend=etcd3
    - --tls-cert-file=/srv/kubernetes/kube-apiserver/server.crt
    - --tls-private-key-file=/srv/kubernetes/kube-apiserver/server.key
    - --v=2
    command:
    - /go-runner
    image: registry.k8s.io/kube-apiserver:v1.25.10@sha256:ccce3b0e4b288635f642c73a9a847ed67858e86c5afe37fc775887821aa3cd9e
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 3990
      initialDelaySeconds: 45
      timeoutSeconds: 15
    name: kube-apiserver
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    resources:
      requests:
        cpu: 150m
    volumeMounts:
    - mountPath: /var/log/kube-apiserver.log
      name: logfile
    - mountPath: /etc/ssl
      name: etcssl
      readOnly: true
    - mountPath: /etc/pki/tls
      name: etcpkitls
      readOnly: true
    - mountPath: /etc/pki/ca-trust
      name: etcpkica-trust
      readOnly: true
    - mountPath: /usr/share/ssl
      name: usrsharessl
      readOnly: true
    - mountPath: /usr/ssl
      name: usrssl
      readOnly: true
    - mountPath: /usr/lib/ssl
      name: usrlibssl
      readOnly: true
    - mountPath: /usr/local/openssl
      name: usrlocalopenssl
      readOnly: true
    - mountPath: /var/ssl
      name: varssl
      readOnly: true
    - mountPath: /etc/openssl
      name: etcopenssl
      readOnly: true
    - mountPath: /etc/kubernetes/in-tree-cloud.config
      name: cloudconfig
      readOnly: true
    - mountPath: /srv/kubernetes/ca.crt
      name: kubernetesca
      readOnly: true
    - mountPath: /srv/kubernetes/kube-apiserver
      name: srvkapi
      readOnly: true
    - mountPath: /srv/sshproxy
      name: srvsshproxy
      readOnly: true
    - mountPath: /var/log
      name: auditlogpathdir
  - args:
    - --ca-cert=/secrets/ca.crt
    - --client-cert=/secrets/client.crt
    - --client-key=/secrets/client.key
    image: registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.4@sha256:db9f17c1c8b2dfc081e62138f8dcba0a882264f4a95da13e0226af53a45e50dc
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /.kube-apiserver-healthcheck/healthz
        port: 3990
      initialDelaySeconds: 5
      timeoutSeconds: 5
    name: healthcheck
    resources: {}
    securityContext:
      runAsNonRoot: true
      runAsUser: 10012
    volumeMounts:
    - mountPath: /secrets
      name: healthcheck-secrets
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  tolerations:
  - key: CriticalAddonsOnly
    operator: Exists
  volumes:
  - hostPath:
      path: /var/log/kube-apiserver.log
    name: logfile
  - hostPath:
      path: /etc/ssl
    name: etcssl
  - hostPath:
      path: /etc/pki/tls
    name: etcpkitls
  - hostPath:
      path: /etc/pki/ca-trust
    name: etcpkica-trust
  - hostPath:
      path: /usr/share/ssl
    name: usrsharessl
  - hostPath:
      path: /usr/ssl
    name: usrssl
  - hostPath:
      path: /usr/lib/ssl
    name: usrlibssl
  - hostPath:
      path: /usr/local/openssl
    name: usrlocalopenssl
  - hostPath:
      path: /var/ssl
    name: varssl
  - hostPath:
      path: /etc/openssl
    name: etcopenssl
  - hostPath:
      path: /etc/kubernetes/in-tree-cloud.config
    name: cloudconfig
  - hostPath:
      path: /srv/kubernetes/ca.crt
    name: kubernetesca
  - hostPath:
      path: /srv/kubernetes/kube-apiserver
    name: srvkapi
  - hostPath:
      path: /srv/sshproxy
    name: srvsshproxy
  - hostPath:
      path: /var/log
    name: auditlogpathdir
  - hostPath:
      path: /etc/kubernetes/kube-apiserver-healthcheck/secrets
      type: Directory
    name: healthcheck-secrets
status: {}
```
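Note that no hostPath volume in that manifest covers /etc/kubernetes/audit, so the policy file exists on the host but is invisible inside the kube-apiserver container. A minimal Python sketch of that check (mount paths copied from the manifest; this is just an illustration, not how kOps itself decides anything):

```python
from pathlib import PurePosixPath

# A few of the mountPaths from the kube-apiserver manifest above.
mounts = [
    "/var/log/kube-apiserver.log",
    "/etc/ssl",
    "/etc/kubernetes/in-tree-cloud.config",
    "/srv/kubernetes/ca.crt",
    "/srv/kubernetes/kube-apiserver",
    "/srv/sshproxy",
    "/var/log",
]

def visible_in_container(path: str) -> bool:
    """True if `path` equals, or sits below, one of the volumeMounts."""
    p = PurePosixPath(path)
    return any(PurePosixPath(m) == p or PurePosixPath(m) in p.parents
               for m in mounts)

print(visible_in_container("/etc/kubernetes/audit/policy-config.yaml"))
# False - the file asset path is not under any mount
print(visible_in_container("/srv/kubernetes/kube-apiserver/policy-config.yaml"))
# True - this directory is already mounted read-only
```

This is why placing the file asset under an already-mounted directory works around the problem.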
A possible workaround is to move the policy-config.yaml file to /srv/kubernetes/kube-apiserver, since that directory is already mounted in the manifest. The following change in the cluster spec helped:
```yaml
...
fileAssets:
- content: |
    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
    - level: Metadata
  mode: "0444"
  name: audit-policy-config
  path: /srv/kubernetes/kube-apiserver/policy-config.yaml
  roles:
  - ControlPlane
kubeAPIServer:
  allowPrivileged: true
  auditLogMaxAge: 10
  auditLogMaxBackups: 1
  auditLogMaxSize: 100
  auditLogPath: /var/log/kube-apiserver-audit.log
  auditPolicyFile: /srv/kubernetes/kube-apiserver/policy-config.yaml
...
```
Is there any update on this please?
I've got the same issue after upgrading from kops 1.25.2 to 1.26.5 (k8s 1.26.7): the control plane does not join the cluster. I checked, and the kube-apiserver container exited because it cannot find the audit policy specified via fileAssets (the file is not created by kops).
To fix it I changed the permission to mode: "0544"; I tried 0644 but it was not working:
```yaml
fileAssets:
- name: audit-policy
  path: /srv/kubernetes/kube-apiserver/audit.yaml
  mode: "0544"
  roles: [ControlPlane]
  content: |
    apiVersion: audit.k8s.io/v1
    kind: Policy
```
Not sure why it only works when the execute bit is assigned to audit.yaml:
```
-r-xr--r--. 1 root root  317 Aug 16 06:55 audit.yaml
-rw-------. 1 root root  228 Aug 16 06:55 encryptionconfig.yaml
-rw-r--r--. 1 root root 1054 Aug 16 06:55 etcd-ca.crt
-rw-r--r--. 1 root root 1082 Aug 16 06:55 etcd-client.crt
-rw-------. 1 root root 1675 Aug 16 06:55 etcd-client.key
```
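For reference, the octal mode strings discussed in this thread map to ls-style permission strings like this (stdlib Python sketch; `stat.S_IFREG` is only added so `filemode` prints a regular-file `-` prefix):

```python
import stat

def ls_mode(mode_str: str) -> str:
    """Render a kOps-style octal mode string the way `ls -l` shows it."""
    return stat.filemode(stat.S_IFREG | int(mode_str, 8))

print(ls_mode("0444"))  # -r--r--r--
print(ls_mode("0544"))  # -r-xr--r--
print(ls_mode("0644"))  # -rw-r--r--
```

So "0544" differs from "0644" only in swapping the owner's write bit for an execute bit, which makes it surprising that only the former worked here.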
I ran into the same issue, but luckily also found #15488, which solved the problem for me :smile:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/kind bug

1. What kops version are you running? The command kops version will display this information.
1.26.3
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
1.26.5
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
Added kubeAPIServer.fileAssets, and the appropriate audit options to kubeAPIServer
5. What happened after the commands executed?
The cluster fails to update because the kube api server errors with
6. What did you expect to happen?
The cluster works fine with the given audit policy enabled
7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.
Not Applicable
9. Anything else do we need to know?
I can confirm that the files were added to the node, but I am not sure why the api server is not picking it up