
Kubelet support for Split Image Filesystem #4191

Open · kannon92 opened 1 year ago

kannon92 commented 1 year ago

Enhancement Description

Nice to haves:

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.

kannon92 commented 1 year ago

/sig node

SergeyKanzhelev commented 1 year ago

/stage alpha
/milestone v1.29
/label lead-opted-in

kannon92 commented 1 year ago

/assign

sreeram-venkitesh commented 1 year ago

Hello @kannon92 👋, Enhancements team here.

Just checking in as we approach enhancements freeze on 01:00 UTC, Friday, 6th October, 2023.

This enhancement is targeting stage alpha for v1.29 (correct me if otherwise).

Here's where this enhancement currently stands:

For this KEP, https://github.com/kubernetes/enhancements/pull/4198 seems to take care of everything. Please make sure that the PR is merged in time. We will move the KEP to tracked for enhancement freeze once everything is merged into k/enhancements.

The status of this enhancement is marked as at risk for enhancement freeze. Please keep the issue description up-to-date with appropriate stages as well. Thank you!

sreeram-venkitesh commented 1 year ago

Hi @kannon92, checking in once more as we approach the 1.29 enhancement freeze deadline on 01:00 UTC Friday, 6th October 2023. The status of this enhancement is marked as at risk. It looks like https://github.com/kubernetes/enhancements/pull/4198 will address most of the requirements. Please make sure that the changes are merged in time. Let me know if I missed anything. Thanks!

kannon92 commented 1 year ago

Changes were merged, so we should hopefully be good.

npolshakova commented 1 year ago

With KEP PR https://github.com/kubernetes/enhancements/pull/4198 approved, the enhancement is ready for the enhancements freeze. The status is now marked as tracked for enhancement freeze for 1.29. 🚀 Thank you!

katcosgrove commented 1 year ago

Hey there @kannon92! :wave:, v1.29 Docs Lead here. Does the enhancement work planned for v1.29 require any new docs or modifications to existing docs? If so, please follow the steps here to open a PR against the dev-1.29 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday, 19 October 2023. Also, take a look at Documenting for a release to familiarize yourself with the docs requirements for the release. Thank you!

katcosgrove commented 1 year ago

Hi again @kannon92! The deadline to open a placeholder PR against k/website for required documentation is this Thursday, 19 October. Could you please update me on the status of docs for this enhancement? Thank you!

kannon92 commented 1 year ago

This first alpha release won’t have any corresponding doc changes yet.

James-Quigley commented 12 months ago

Hi @kannon92 :wave: from the v1.29 Communications Release Team! We would like to check if you have any plans to publish blogs for this KEP regarding new features, removals, or deprecations in this release. If so, you need to open a placeholder PR in the website repository. The deadline is Tuesday 14th November 2023 (after the Docs deadline; PR ready for review). Here's the 1.29 Calendar.

sreeram-venkitesh commented 11 months ago

Hey again @kannon92 👋 v1.29 Enhancements team here,

Just checking in as we approach code freeze at 01:00 UTC Wednesday 1st November 2023.

Here's where this enhancement currently stands:

The status of this KEP is currently at risk for Code Freeze. From what I understand, https://github.com/kubernetes/kubernetes/pull/120914 and https://github.com/kubernetes/kubernetes/pull/120616 are the code PRs planned for the v1.29 release. Please make sure that https://github.com/kubernetes/kubernetes/pull/120616 is merged in time for the code freeze.

As always, we are here to help if any questions come up. Thanks!

npolshakova commented 11 months ago

With https://github.com/kubernetes/kubernetes/pull/120616 this is now marked as tracked for code freeze for 1.29! 🚀

salehsedghpour commented 9 months ago

/remove-label lead-opted-in

kannon92 commented 9 months ago

Update for 1.30:

We are making good progress on the ecosystem (the cri-tools, CRI-O, and cAdvisor PRs were merged). We need a cAdvisor release to close out this implementation for alpha.

I will update https://github.com/kubernetes/kubernetes/pull/122438 to include the cAdvisor release when it's available.

I have been working on e2e tests for the image filesystem and have started writing e2e configs for split disk.
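For readers following along: "split disk" here means the container runtime keeps (read-only) image layers on a different filesystem than writable container layers. A minimal sketch of the CRI-O side, based on the `imagestore` storage option; the `/mnt/image-disk` mountpoint is hypothetical (the verification later in this thread uses `/tmp/test`):

```console
$ cat /etc/crio/crio.conf   # excerpt; default config path, adjust for your install
[crio]
# Writable container layers stay on the node filesystem.
root = "/var/lib/containers/storage"
# Image layers go to a dedicated filesystem (hypothetical mountpoint).
imagestore = "/mnt/image-disk"
storage_driver = "overlay"
```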

salehsedghpour commented 9 months ago

Hello 👋 1.30 Enhancements Lead here,

I'm closing milestone 1.29 now. If you wish to progress this enhancement in v1.30, please follow the instructions here to opt the enhancement in, make sure the lead-opted-in label is set so it can get added to the tracking board, and finally add /milestone v1.30. Thanks!

/milestone clear

kannon92 commented 8 months ago

@mrunalp @SergeyKanzhelev could you add a milestone and opt-in label for this feature?

I'll be working on the cAdvisor bump and e2e tests in 1.30.

mrunalp commented 8 months ago

/milestone v1.30
/stage alpha
/label lead-opted-in

tjons commented 8 months ago

Hello @kannon92 👋, Enhancements team here.

Just checking in as we approach enhancements freeze on Friday, February 9th, 2024 at 02:00 UTC.

This enhancement is targeting stage alpha for 1.30 (correct me if otherwise).

Here's where this enhancement currently stands:

For this KEP, we would just need to complete the following:

The status of this enhancement is marked as at risk for enhancement freeze. Please keep the issue description up-to-date with appropriate stages as well. Thank you!

kannon92 commented 8 months ago

We don't need a PRR for this one. It is still staying in alpha and we had a brief one in the last release.

kannon92 commented 8 months ago

The main goal for this next stage is to get some dependency changes in and add e2e tests, so nothing has changed from the first PRR review.

tjons commented 8 months ago

Ah, ok! Sounds good - I'll mark this as tracked for enhancements freeze! Thanks for your quick response.

tjons commented 8 months ago

With all the requirements fulfilled, this enhancement is now marked as tracked for the upcoming enhancements freeze 🚀

drewhagen commented 8 months ago

Hello @kannon92 👋, 1.30 Docs Lead here.

Does the enhancement work planned for 1.30 require any new docs or modifications to existing docs? If so, please follow the steps here to open a PR against the dev-1.30 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday February 22nd 2024 18:00 PDT.

Also, take a look at Documenting for a release to familiarize yourself with the docs requirements for the release. Thank you!

fkautz commented 8 months ago

Hi @kannon92

👋 from the v1.30 Communications Team! We'd love for you to opt in to write a feature blog about your enhancement!

We encourage blogs for features including, but not limited to: breaking changes, features and changes important to our users, and features that have been in progress for a long time and are graduating.

To opt in, you need to open a Feature Blog placeholder PR against the website repository. The placeholder PR deadline is 27th February, 2024. Here's the 1.30 Release Calendar

tjons commented 7 months ago

Hey again @kannon92 👋 Enhancements team here,

Just checking in as we approach code freeze at 02:00 UTC Wednesday 6th March 2024.

Here's where this enhancement currently stands:

For this enhancement, it looks like the following PRs are open and need to be merged before code freeze:

Also, please let me know if there are other PRs in k/k we should be tracking for this KEP. As always, we are here to help if any questions come up. Thanks!

tjons commented 7 months ago

Hey @kannon92 - looks like the two above PRs merged! I'm seeing that https://github.com/kubernetes/kubernetes/pull/123518 has been added to the issue... code freeze is ~6 hours away. Do you think it will merge in time?

kannon92 commented 7 months ago

This is for test freeze, so I don't think it should be tracked for code freeze.

salehsedghpour commented 7 months ago

Hello @kannon92 👋, Enhancements team here.

With all the implementation (code-related) PRs merged as per the issue description:

This enhancement is now marked as tracked for code freeze for the v1.30 Code Freeze!

kannon92 commented 7 months ago

We hit a blocker with adding tests. It turns out that there is a fix needed in containers/storage, which CRI-O uses, for this work.

We are waiting for Podman 5.0 to be released, and then we can update CRI-O to use the latest storage changes. We want to cut a 1.30 branch that does not include the Podman 5.0 changes, so we are going to wait until 1.31 to get the e2e tests working.

I will not be documenting this feature in a blog post, because there is still some risk with the feature and I do not want to advertise it yet.

The kubelet changes are good as-is. This is mainly a problem in the container runtime, so we do not need to revert anything.
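For anyone who wants to experiment with the alpha feature despite the caveat above, a minimal sketch of the kubelet side, based on the `KubeletSeparateDiskGC` feature gate that appears in the verification logs later in this thread (the config file path is the common default and may differ in your deployment):

```console
$ cat /var/lib/kubelet/config.yaml   # excerpt; common default path
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletSeparateDiskGC: true
```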

sreeram-venkitesh commented 5 months ago

Hi @kannon92 👋, 1.31 Enhancements Lead here.

If you wish to progress this enhancement in v1.31, please have the SIG lead opt-in your enhancement by adding the lead-opted-in label and set the milestone to v1.31 before the Production Readiness Review Freeze.

/remove-label lead-opted-in

kwilczynski commented 4 months ago

Podman v5.0.0 was released a while ago:

kwilczynski commented 4 months ago

Hi @mrunalp and @SergeyKanzhelev, would you be able to provide us here with the necessary tags?

Thank you in advance for your help!

kannon92 commented 4 months ago

@kwilczynski even though Podman v5.0 is out, I think we need to bump the CRI-O version that Kubernetes uses to the one that corresponds to Podman v5.0.

I tried rerunning the failed e2e tests and they still fail with the CRI-O commit, but I think this is expected. Podman v5.0 corresponds to CRI-O 1.31.

For this feature, what we are mostly waiting for is a bump of CRI-O to 1.31 (hopefully we can do that soon). Then the e2e tests will hopefully work.

kwilczynski commented 4 months ago

@kannon92, sounds good!

Aside from the Podman 5 release, you were waiting for a feature in containers/storage. Was that for the CRI-O bump?

Currently, the main and release-1.31 branches of CRI-O (the latter being a mirror of main for now) include containers/storage v1.54.0. This is the latest release (cut about two weeks ago at the time of writing).
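For reference, one way to confirm which containers/storage version a given CRI-O branch vendors, assuming the standard Go module layout of the cri-o/cri-o repository (the version shown matches the comment above):

```console
$ git clone --branch release-1.31 https://github.com/cri-o/cri-o.git && cd cri-o
$ grep 'github.com/containers/storage ' go.mod
	github.com/containers/storage v1.54.0
```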

kwilczynski commented 4 months ago

@kannon92, was it this change we were waiting for in CRI-O?

This was released as part of v1.50.0 of containers/storage, and CRI-O 1.30.x already uses v1.51.0.

kwilczynski commented 4 months ago

I think I found the right change in containers/storage:

There is also the following change:

@kannon92, we can backport this to an earlier version of CRI-O via our usual backport process if needed.

kwilczynski commented 4 months ago

/retitle Kubelet support for Split Image Filesystem

k8s-ci-robot commented 4 months ago

@kwilczynski: Re-titling can only be requested by trusted users, like repository collaborators.

In response to [this](https://github.com/kubernetes/enhancements/issues/4191#issuecomment-2145243124):

> /retitle Kubelet support for Split Image Filesystem

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

kwilczynski commented 4 months ago

I was able to confirm that the Split Image Filesystem feature is working as expected, per the following:

crio --version:

```console
crio version 1.31.0
Version:        1.31.0
GitCommit:      9b9452f662381706cd909a36336514025fc5fb5c
GitCommitDate:  2024-06-03T09:53:47Z
GitTreeState:   clean
BuildDate:      2024-06-03T11:55:08Z
GoVersion:      go1.22.3
Compiler:       gc
Platform:       linux/amd64
Linkmode:       dynamic
BuildTags:      containers_image_ostree_stub apparmor containers_image_openpgp seccomp selinux exclude_graphdriver_devicemapper
LDFlags:        unknown
SeccompEnabled: true
AppArmorEnabled: true
```
kubelet start-up log:

```console
I0603 15:28:57.661822   25066 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
```
CRI-O start-up log ```toml INFO[2024-06-03 12:41:36.415473898Z] Current CRI-O configuration: [crio] root = "/var/lib/containers/storage" runroot = "/run/containers/storage" imagestore = "/tmp/test" storage_driver = "overlay" log_dir = "/var/log/crio/pods" version_file = "/var/run/crio/version" version_file_persist = "" clean_shutdown_file = "/var/lib/crio/clean.shutdown" internal_wipe = true internal_repair = false [crio.api] grpc_max_send_msg_size = 83886080 grpc_max_recv_msg_size = 83886080 listen = "/var/run/crio/crio.sock" stream_address = "127.0.0.1" stream_port = "0" stream_enable_tls = false stream_tls_cert = "" stream_tls_key = "" stream_tls_ca = "" stream_idle_timeout = "" [crio.runtime] no_pivot = false selinux = false log_to_journald = false drop_infra_ctr = true read_only = false hooks_dir = ["/usr/share/containers/oci/hooks.d"] default_capabilities = ["CHOWN", "DAC_OVERRIDE", "FSETID", "FOWNER", "SETGID", "SETUID", "SETPCAP", "NET_BIND_SERVICE", "KILL"] add_inheritable_capabilities = false allowed_devices = ["/dev/fuse"] cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"] device_ownership_from_security_context = false default_runtime = "crun" decryption_keys_path = "/etc/crio/keys/" conmon = "" conmon_cgroup = "" seccomp_profile = "" apparmor_profile = "crio-default" blockio_config_file = "" blockio_reload = false irqbalance_config_file = "/etc/sysconfig/irqbalance" rdt_config_file = "" cgroup_manager = "systemd" default_mounts_file = "" container_exits_dir = "/var/run/crio/exits" container_attach_socket_dir = "/var/run/crio" bind_mount_prefix = "" uid_mappings = "" minimum_mappable_uid = -1 gid_mappings = "" minimum_mappable_gid = -1 log_level = "info" log_filter = "" namespaces_dir = "/var/run" pinns_path = "/usr/bin/pinns" enable_criu_support = false pids_limit = -1 log_size_max = -1 ctr_stop_timeout = 30 separate_pull_cgroup = "" infra_ctr_cpuset = "" shared_cpuset = "" enable_pod_events = false irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus" hostnetwork_disable_selinux = true disable_hostport_mapping = false timezone = "" [crio.runtime.runtimes] [crio.runtime.runtimes.crun] runtime_config_path = "" runtime_path = "/usr/bin/crio-crun" runtime_type = "" runtime_root = "" allowed_annotations = ["io.containers.trace-syscall"] monitor_path = "/usr/bin/crio-conmon" monitor_cgroup = "system.slice" container_min_memory = "12MiB" [crio.runtime.runtimes.runc] runtime_config_path = "" runtime_path = "/usr/bin/crio-runc" runtime_type = "" runtime_root = "" monitor_path = "/usr/bin/crio-conmon" monitor_cgroup = "system.slice" container_min_memory = "12MiB" [crio.image] default_transport = "docker://" global_auth_file = "" pause_image = "registry.k8s.io/pause:3.9" pause_image_auth_file = "" pause_command = "/pause" signature_policy = "/etc/crio/policy.json" signature_policy_dir = "/etc/crio/policies" image_volumes = "mkdir" big_files_temporary_dir = "" auto_reload_registries = false [crio.network] cni_default_network = "" network_dir = "/etc/cni/net.d/" plugin_dirs = ["/opt/cni/bin/"] [crio.metrics] enable_metrics = false metrics_collectors = ["image_pulls_layer_size", "containers_events_dropped_total", "containers_oom_total", "processes_defunct", "operations_total", "operations_latency_seconds", "operations_latency_seconds_total", "operations_errors_total", "image_pulls_bytes_total", "image_pulls_skipped_bytes_total", "image_pulls_failure_total", "image_pulls_success_total", "image_layer_reuse_total", "containers_oom_count_total", 
"containers_seccomp_notifier_count_total", "resources_stalled_at_stage"] metrics_host = "127.0.0.1" metrics_port = 9090 metrics_socket = "" metrics_cert = "" metrics_key = "" [crio.tracing] enable_tracing = false tracing_endpoint = "0.0.0.0:4317" tracing_sampling_rate_per_million = 0 [crio.stats] stats_collection_period = 0 collection_period = 0 [crio.nri] enable_nri = true nri_listen = "/var/run/nri/nri.sock" nri_plugin_dir = "/opt/nri/plugins" nri_plugin_config_dir = "/etc/nri/conf.d" nri_plugin_registration_timeout = "5s" nri_plugin_request_timeout = "2s" nri_disable_connections = false ```
kubelet start-up log ```json Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Flag --feature-gates has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. I0603 15:31:18.097993 25608 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" I0603 15:31:18.098860 25608 flags.go:64] FLAG: --address="0.0.0.0" I0603 15:31:18.098868 25608 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" I0603 15:31:18.098877 25608 flags.go:64] FLAG: --anonymous-auth="true" I0603 15:31:18.098886 25608 flags.go:64] FLAG: --application-metrics-count-limit="100" I0603 15:31:18.098889 25608 flags.go:64] FLAG: --authentication-token-webhook="false" I0603 15:31:18.098891 25608 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" I0603 15:31:18.098902 25608 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" I0603 15:31:18.098904 25608 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" I0603 15:31:18.098908 25608 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" I0603 15:31:18.098910 25608 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" I0603 15:31:18.098914 25608 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf" I0603 15:31:18.098916 25608 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" I0603 15:31:18.098921 25608 flags.go:64] FLAG: --cgroup-driver="cgroupfs" I0603 15:31:18.098925 25608 flags.go:64] FLAG: --cgroup-root="" I0603 15:31:18.098927 25608 flags.go:64] FLAG: --cgroups-per-qos="true" I0603 15:31:18.098930 25608 flags.go:64] FLAG: --client-ca-file="" I0603 15:31:18.098934 25608 flags.go:64] FLAG: --cloud-config="" I0603 15:31:18.098936 25608 flags.go:64] FLAG: --cloud-provider="" I0603 15:31:18.098955 25608 flags.go:64] FLAG: --cluster-dns="[]" I0603 15:31:18.098959 25608 flags.go:64] FLAG: --cluster-domain="" I0603 15:31:18.098961 25608 flags.go:64] FLAG: --config="/var/lib/kubelet/config.yaml" I0603 15:31:18.098966 25608 flags.go:64] FLAG: --config-dir="" I0603 15:31:18.098968 25608 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" I0603 15:31:18.098972 25608 flags.go:64] FLAG: --container-log-max-files="5" I0603 15:31:18.098974 25608 flags.go:64] FLAG: --container-log-max-size="10Mi" I0603 15:31:18.098978 25608 flags.go:64] FLAG: --container-runtime-endpoint="unix:///var/run/crio/crio.sock" I0603 15:31:18.098980 25608 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" I0603 15:31:18.098984 25608 flags.go:64] FLAG: --containerd-namespace="k8s.io" I0603 15:31:18.098986 25608 flags.go:64] FLAG: --contention-profiling="false" I0603 15:31:18.098989 25608 flags.go:64] FLAG: --cpu-cfs-quota="true" I0603 15:31:18.098991 25608 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" I0603 15:31:18.098996 25608 flags.go:64] FLAG: --cpu-manager-policy="none" I0603 15:31:18.098998 25608 flags.go:64] FLAG: --cpu-manager-policy-options="" I0603 15:31:18.099002 25608 flags.go:64] 
FLAG: --cpu-manager-reconcile-period="10s" I0603 15:31:18.099004 25608 flags.go:64] FLAG: --enable-controller-attach-detach="true" I0603 15:31:18.099010 25608 flags.go:64] FLAG: --enable-debugging-handlers="true" I0603 15:31:18.099012 25608 flags.go:64] FLAG: --enable-load-reader="false" I0603 15:31:18.099014 25608 flags.go:64] FLAG: --enable-server="true" I0603 15:31:18.099016 25608 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" I0603 15:31:18.099022 25608 flags.go:64] FLAG: --event-burst="100" I0603 15:31:18.099024 25608 flags.go:64] FLAG: --event-qps="50" I0603 15:31:18.099028 25608 flags.go:64] FLAG: --event-storage-age-limit="default=0" I0603 15:31:18.099030 25608 flags.go:64] FLAG: --event-storage-event-limit="default=0" I0603 15:31:18.099033 25608 flags.go:64] FLAG: --eviction-hard="" I0603 15:31:18.099036 25608 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" I0603 15:31:18.099040 25608 flags.go:64] FLAG: --eviction-minimum-reclaim="" I0603 15:31:18.099042 25608 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" I0603 15:31:18.099045 25608 flags.go:64] FLAG: --eviction-soft="" I0603 15:31:18.099047 25608 flags.go:64] FLAG: --eviction-soft-grace-period="" I0603 15:31:18.099049 25608 flags.go:64] FLAG: --exit-on-lock-contention="false" I0603 15:31:18.099054 25608 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" I0603 15:31:18.099056 25608 flags.go:64] FLAG: --experimental-mounter-path="" I0603 15:31:18.099059 25608 flags.go:64] FLAG: --fail-swap-on="true" I0603 15:31:18.099061 25608 flags.go:64] FLAG: --feature-gates="KubeletSeparateDiskGC=true" I0603 15:31:18.099073 25608 flags.go:64] FLAG: --file-check-frequency="20s" I0603 15:31:18.099079 25608 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" I0603 15:31:18.099082 25608 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" I0603 15:31:18.099085 25608 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" I0603 15:31:18.099088 25608 flags.go:64] FLAG: --healthz-port="10248" I0603 15:31:18.099090 25608 flags.go:64] FLAG: --help="false" I0603 15:31:18.099095 25608 flags.go:64] FLAG: --hostname-override="" I0603 15:31:18.099096 25608 flags.go:64] FLAG: --housekeeping-interval="10s" I0603 15:31:18.099100 25608 flags.go:64] FLAG: --http-check-frequency="20s" I0603 15:31:18.099102 25608 flags.go:64] FLAG: --image-credential-provider-bin-dir="" I0603 15:31:18.099106 25608 flags.go:64] FLAG: --image-credential-provider-config="" I0603 15:31:18.099108 25608 flags.go:64] FLAG: --image-gc-high-threshold="85" I0603 15:31:18.099112 25608 flags.go:64] FLAG: --image-gc-low-threshold="80" I0603 15:31:18.099114 25608 flags.go:64] FLAG: --image-service-endpoint="" I0603 15:31:18.099118 25608 flags.go:64] FLAG: --iptables-drop-bit="15" I0603 15:31:18.099120 25608 flags.go:64] FLAG: --iptables-masquerade-bit="14" I0603 15:31:18.099122 25608 flags.go:64] FLAG: --keep-terminated-pod-volumes="false" I0603 15:31:18.099126 25608 flags.go:64] FLAG: --kernel-memcg-notification="false" I0603 15:31:18.099128 25608 flags.go:64] FLAG: --kube-api-burst="100" I0603 15:31:18.099134 25608 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" I0603 15:31:18.099139 25608 flags.go:64] FLAG: --kube-api-qps="50" I0603 15:31:18.099141 25608 flags.go:64] FLAG: --kube-reserved="" I0603 15:31:18.099143 25608 flags.go:64] FLAG: --kube-reserved-cgroup="" I0603 15:31:18.099147 25608 flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/kubelet.conf" I0603 15:31:18.099149 25608 flags.go:64] FLAG: 
--kubelet-cgroups="" I0603 15:31:18.099153 25608 flags.go:64] FLAG: --local-storage-capacity-isolation="true" I0603 15:31:18.099155 25608 flags.go:64] FLAG: --lock-file="" I0603 15:31:18.099158 25608 flags.go:64] FLAG: --log-cadvisor-usage="false" I0603 15:31:18.099160 25608 flags.go:64] FLAG: --log-flush-frequency="5s" I0603 15:31:18.099164 25608 flags.go:64] FLAG: --log-json-info-buffer-size="0" I0603 15:31:18.099168 25608 flags.go:64] FLAG: --log-json-split-stream="false" I0603 15:31:18.099171 25608 flags.go:64] FLAG: --log-text-info-buffer-size="0" I0603 15:31:18.099174 25608 flags.go:64] FLAG: --log-text-split-stream="false" I0603 15:31:18.099176 25608 flags.go:64] FLAG: --logging-format="text" I0603 15:31:18.099178 25608 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" I0603 15:31:18.099182 25608 flags.go:64] FLAG: --make-iptables-util-chains="true" I0603 15:31:18.099187 25608 flags.go:64] FLAG: --manifest-url="" I0603 15:31:18.099189 25608 flags.go:64] FLAG: --manifest-url-header="" I0603 15:31:18.099194 25608 flags.go:64] FLAG: --max-open-files="1000000" I0603 15:31:18.099196 25608 flags.go:64] FLAG: --max-pods="110" I0603 15:31:18.099202 25608 flags.go:64] FLAG: --maximum-dead-containers="-1" I0603 15:31:18.099204 25608 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" I0603 15:31:18.099210 25608 flags.go:64] FLAG: --memory-manager-policy="None" I0603 15:31:18.099212 25608 flags.go:64] FLAG: --minimum-container-ttl-duration="0s" I0603 15:31:18.099216 25608 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" I0603 15:31:18.099218 25608 flags.go:64] FLAG: --node-ip="10.0.0.11" I0603 15:31:18.099222 25608 flags.go:64] FLAG: --node-labels="" I0603 15:31:18.099224 25608 flags.go:64] FLAG: --node-status-max-images="50" I0603 15:31:18.099229 25608 flags.go:64] FLAG: --node-status-update-frequency="10s" I0603 15:31:18.099233 25608 flags.go:64] FLAG: --oom-score-adj="-999" I0603 15:31:18.099235 25608 flags.go:64] FLAG: --pod-cidr="" I0603 15:31:18.099239 25608 flags.go:64] FLAG: --pod-infra-container-image="registry.k8s.io/pause:3.9" I0603 15:31:18.099241 25608 flags.go:64] FLAG: --pod-manifest-path="" I0603 15:31:18.099245 25608 flags.go:64] FLAG: --pod-max-pids="-1" I0603 15:31:18.099247 25608 flags.go:64] FLAG: --pods-per-core="0" I0603 15:31:18.099251 25608 flags.go:64] FLAG: --port="10250" I0603 15:31:18.099253 25608 flags.go:64] FLAG: --protect-kernel-defaults="false" I0603 15:31:18.099257 25608 flags.go:64] FLAG: --provider-id="" I0603 15:31:18.099259 25608 flags.go:64] FLAG: --qos-reserved="" I0603 15:31:18.099263 25608 flags.go:64] FLAG: --read-only-port="10255" I0603 15:31:18.099267 25608 flags.go:64] FLAG: --register-node="true" I0603 15:31:18.099269 25608 flags.go:64] FLAG: --register-schedulable="true" I0603 15:31:18.099273 25608 flags.go:64] FLAG: --register-with-taints="" I0603 15:31:18.099277 25608 flags.go:64] FLAG: --registry-burst="10" I0603 15:31:18.099279 25608 flags.go:64] FLAG: --registry-qps="5" I0603 15:31:18.099283 25608 flags.go:64] FLAG: --reserved-cpus="" I0603 15:31:18.099285 25608 flags.go:64] FLAG: --reserved-memory="" I0603 15:31:18.099289 25608 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" I0603 15:31:18.099291 25608 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" I0603 15:31:18.099295 25608 flags.go:64] FLAG: --rotate-certificates="false" I0603 15:31:18.099297 25608 flags.go:64] FLAG: --rotate-server-certificates="false" I0603 15:31:18.099299 25608 flags.go:64] FLAG: --runonce="false" I0603 
15:31:18.099306 25608 flags.go:64] FLAG: --runtime-cgroups="" I0603 15:31:18.099309 25608 flags.go:64] FLAG: --runtime-request-timeout="2m0s" I0603 15:31:18.099312 25608 flags.go:64] FLAG: --seccomp-default="false" I0603 15:31:18.099315 25608 flags.go:64] FLAG: --serialize-image-pulls="true" I0603 15:31:18.099318 25608 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" I0603 15:31:18.099321 25608 flags.go:64] FLAG: --storage-driver-db="cadvisor" I0603 15:31:18.099324 25608 flags.go:64] FLAG: --storage-driver-host="localhost:8086" I0603 15:31:18.099328 25608 flags.go:64] FLAG: --storage-driver-password="root" I0603 15:31:18.099332 25608 flags.go:64] FLAG: --storage-driver-secure="false" I0603 15:31:18.099335 25608 flags.go:64] FLAG: --storage-driver-table="stats" I0603 15:31:18.099337 25608 flags.go:64] FLAG: --storage-driver-user="root" I0603 15:31:18.099341 25608 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" I0603 15:31:18.099343 25608 flags.go:64] FLAG: --sync-frequency="1m0s" I0603 15:31:18.099346 25608 flags.go:64] FLAG: --system-cgroups="" I0603 15:31:18.099348 25608 flags.go:64] FLAG: --system-reserved="" I0603 15:31:18.099353 25608 flags.go:64] FLAG: --system-reserved-cgroup="" I0603 15:31:18.099355 25608 flags.go:64] FLAG: --tls-cert-file="" I0603 15:31:18.099359 25608 flags.go:64] FLAG: --tls-cipher-suites="[]" I0603 15:31:18.099364 25608 flags.go:64] FLAG: --tls-min-version="" I0603 15:31:18.099368 25608 flags.go:64] FLAG: --tls-private-key-file="" I0603 15:31:18.099370 25608 flags.go:64] FLAG: --topology-manager-policy="none" I0603 15:31:18.099374 25608 flags.go:64] FLAG: --topology-manager-policy-options="" I0603 15:31:18.099376 25608 flags.go:64] FLAG: --topology-manager-scope="container" I0603 15:31:18.099378 25608 flags.go:64] FLAG: --v="6" I0603 15:31:18.099382 25608 flags.go:64] FLAG: --version="false" I0603 15:31:18.099385 25608 flags.go:64] FLAG: --vmodule="" I0603 15:31:18.099391 25608 flags.go:64] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/" I0603 15:31:18.099395 25608 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" I0603 15:31:18.099458 25608 feature_gate.go:254] feature gates: {map[KubeletSeparateDiskGC:true]} I0603 15:31:18.100781 25608 mount_linux.go:288] Detected umount with safe 'not mounted' behavior I0603 15:31:18.100837 25608 server.go:275] "KubeletConfiguration" configuration=< { "EnableServer": true, "StaticPodPath": "/etc/kubernetes/manifests", "PodLogsDir": "/var/log/pods", "SyncFrequency": "1m0s", "FileCheckFrequency": "20s", "HTTPCheckFrequency": "20s", "StaticPodURL": "", "StaticPodURLHeader": null, "Address": "0.0.0.0", "Port": 10250, "ReadOnlyPort": 0, "VolumePluginDir": "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/", "ProviderID": "", "TLSCertFile": "/var/lib/kubelet/pki/kubelet.crt", "TLSPrivateKeyFile": "/var/lib/kubelet/pki/kubelet.key", "TLSCipherSuites": null, "TLSMinVersion": "", "RotateCertificates": true, "ServerTLSBootstrap": false, "Authentication": { "X509": { "ClientCAFile": "/etc/kubernetes/pki/ca.crt" }, "Webhook": { "Enabled": true, "CacheTTL": "2m0s" }, "Anonymous": { "Enabled": false } }, "Authorization": { "Mode": "Webhook", "Webhook": { "CacheAuthorizedTTL": "5m0s", "CacheUnauthorizedTTL": "30s" } }, "RegistryPullQPS": 5, "RegistryBurst": 10, "EventRecordQPS": 50, "EventBurst": 100, "EnableDebuggingHandlers": true, "EnableContentionProfiling": false, "HealthzPort": 10248, "HealthzBindAddress": "127.0.0.1", "OOMScoreAdj": -999, "ClusterDomain": 
"cluster.local", "ClusterDNS": [ "172.17.0.10" ], "StreamingConnectionIdleTimeout": "4h0m0s", "NodeStatusUpdateFrequency": "10s", "NodeStatusReportFrequency": "5m0s", "NodeLeaseDurationSeconds": 40, "ImageMinimumGCAge": "2m0s", "ImageMaximumGCAge": "0s", "ImageGCHighThresholdPercent": 85, "ImageGCLowThresholdPercent": 80, "VolumeStatsAggPeriod": "1m0s", "KubeletCgroups": "", "SystemCgroups": "", "CgroupRoot": "", "CgroupsPerQOS": true, "CgroupDriver": "systemd", "CPUManagerPolicy": "none", "CPUManagerPolicyOptions": null, "CPUManagerReconcilePeriod": "10s", "MemoryManagerPolicy": "None", "TopologyManagerPolicy": "none", "TopologyManagerScope": "container", "TopologyManagerPolicyOptions": null, "QOSReserved": null, "RuntimeRequestTimeout": "2m0s", "HairpinMode": "promiscuous-bridge", "MaxPods": 110, "PodCIDR": "", "PodPidsLimit": -1, "ResolverConfig": "/run/systemd/resolve/resolv.conf", "RunOnce": false, "CPUCFSQuota": true, "CPUCFSQuotaPeriod": "100ms", "MaxOpenFiles": 1000000, "NodeStatusMaxImages": 50, "ContentType": "application/vnd.kubernetes.protobuf", "KubeAPIQPS": 50, "KubeAPIBurst": 100, "SerializeImagePulls": true, "MaxParallelImagePulls": null, "EvictionHard": { "imagefs.available": "15%", "imagefs.inodesFree": "5%", "memory.available": "100Mi", "nodefs.available": "10%", "nodefs.inodesFree": "5%" }, "EvictionSoft": null, "EvictionSoftGracePeriod": null, "EvictionPressureTransitionPeriod": "5m0s", "EvictionMaxPodGracePeriod": 0, "EvictionMinimumReclaim": null, "PodsPerCore": 0, "EnableControllerAttachDetach": true, "ProtectKernelDefaults": false, "MakeIPTablesUtilChains": true, "IPTablesMasqueradeBit": 14, "IPTablesDropBit": 15, "FeatureGates": { "KubeletSeparateDiskGC": true }, "FailSwapOn": true, "MemorySwap": { "SwapBehavior": "" }, "ContainerLogMaxSize": "10Mi", "ContainerLogMaxFiles": 5, "ContainerLogMaxWorkers": 1, "ContainerLogMonitorInterval": "10s", "ConfigMapAndSecretChangeDetectionStrategy": "Watch", "AllowedUnsafeSysctls": null, "KernelMemcgNotification": false, "SystemReserved": null, "KubeReserved": null, "SystemReservedCgroup": "", "KubeReservedCgroup": "", "EnforceNodeAllocatable": [ "pods" ], "ReservedSystemCPUs": "", "ShowHiddenMetricsForVersion": "", "Logging": { "format": "text", "flushFrequency": "5s", "verbosity": 6, "options": { "text": { "infoBufferSize": "0" }, "json": { "infoBufferSize": "0" } } }, "EnableSystemLogHandler": true, "EnableSystemLogQuery": false, "ShutdownGracePeriod": "0s", "ShutdownGracePeriodCriticalPods": "0s", "ShutdownGracePeriodByPodPriority": null, "ReservedMemory": null, "EnableProfilingHandler": true, "EnableDebugFlagsHandler": true, "SeccompDefault": false, "MemoryThrottlingFactor": 0.9, "RegisterWithTaints": null, "RegisterNode": true, "Tracing": null, "LocalStorageCapacityIsolation": true, "ContainerRuntimeEndpoint": "unix:///var/run/crio/crio.sock", "ImageServiceEndpoint": "" } > ```
crictl imagefsinfo:

```json
{
  "status": {
    "imageFilesystems": [
      {
        "timestamp": "1717418053352086454",
        "fsId": {
          "mountpoint": "/tmp/test/overlay-images"
        },
        "usedBytes": {
          "value": "4096"
        },
        "inodesUsed": {
          "value": "2"
        }
      }
    ],
    "containerFilesystems": [
      {
        "timestamp": "1717418053352074191",
        "fsId": {
          "mountpoint": "/var/lib/containers/storage/overlay-containers"
        },
        "usedBytes": {
          "value": "233247"
        },
        "inodesUsed": {
          "value": "30"
        }
      }
    ]
  }
}
```
Excerpt from CRI-O logs:

```console
INFO[2024-06-03 12:52:51.977577406Z] Got pod network &{Name:test Namespace:default ID:69ff04729c04f457c16053b45456ef60accd179fa1c64c78ecfb71eaa1aec642 UID:d326b287-d1cd-4389-a050-dc877d01aa31 NetNS:/var/run/netns/72a1930b-a01b-4542-9ae7-2ebe359ab937 Networks:[] RuntimeConfig:map[crio:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[] CgroupPath:kubepods-besteffort-podd326b287_d1cd_4389_a050_dc877d01aa31.slice PodAnnotations:0xc0000b3060}] Aliases:map[]}
INFO[2024-06-03 12:52:51.977592496Z] Adding pod default_test to CNI network "crio" (type=bridge)
INFO[2024-06-03 12:52:52.001423733Z] Got pod network &{Name:test Namespace:default ID:69ff04729c04f457c16053b45456ef60accd179fa1c64c78ecfb71eaa1aec642 UID:d326b287-d1cd-4389-a050-dc877d01aa31 NetNS:/var/run/netns/72a1930b-a01b-4542-9ae7-2ebe359ab937 Networks:[] RuntimeConfig:map[crio:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[] CgroupPath:kubepods-besteffort-podd326b287_d1cd_4389_a050_dc877d01aa31.slice PodAnnotations:0xc0000b3060}] Aliases:map[]}
INFO[2024-06-03 12:52:52.001520664Z] Checking pod default_test for CNI network crio (type=bridge)
INFO[2024-06-03 12:52:52.004765585Z] Checking CNI network crio (config version=1.0.0)
WARN[2024-06-03 12:52:52.006460188Z] Failed to get pid for pod infra container: container PID not initialized
INFO[2024-06-03 12:52:52.006479017Z] Ran pod sandbox 69ff04729c04f457c16053b45456ef60accd179fa1c64c78ecfb71eaa1aec642 with infra container: default/test/POD id=8e236c35-15d8-439d-8aa7-2a49dd4b4b0c name=/runtime.v1.RuntimeService/RunPodSandbox
INFO[2024-06-03 12:52:52.007175844Z] Checking image status: docker.io/library/ubuntu:22.04 id=bc3274a1-ce68-47d4-ba7b-081b2d5e7b33 name=/runtime.v1.ImageService/ImageStatus
INFO[2024-06-03 12:52:52.007291079Z] Image docker.io/library/ubuntu:22.04 not found id=bc3274a1-ce68-47d4-ba7b-081b2d5e7b33 name=/runtime.v1.ImageService/ImageStatus
INFO[2024-06-03 12:52:52.007334047Z] Image docker.io/library/ubuntu:22.04 not found id=bc3274a1-ce68-47d4-ba7b-081b2d5e7b33 name=/runtime.v1.ImageService/ImageStatus
INFO[2024-06-03 12:52:52.007829484Z] Pulling image: docker.io/library/ubuntu:22.04 id=e8b196a4-6417-41fd-9d83-f7f2c1780f82 name=/runtime.v1.ImageService/PullImage
INFO[2024-06-03 12:52:52.015170230Z] Trying to access "docker.io/library/ubuntu:22.04"
INFO[2024-06-03 12:52:53.959583259Z] Trying to access "docker.io/library/ubuntu:22.04"
INFO[2024-06-03 12:52:57.679862108Z] Pulled image: docker.io/library/ubuntu@sha256:2af372c1e2645779643284c7dc38775e3dbbc417b2d784a27c5a9eb784014fb8 id=e8b196a4-6417-41fd-9d83-f7f2c1780f82 name=/runtime.v1.ImageService/PullImage
INFO[2024-06-03 12:52:57.680297016Z] Checking image status: docker.io/library/ubuntu:22.04 id=ad6af3f6-750c-44b8-9289-f515a3b6a916 name=/runtime.v1.ImageService/ImageStatus
INFO[2024-06-03 12:52:57.680687313Z] Image status: &ImageStatusResponse{Image:&Image{Id:52882761a72a60649edff9a2478835325d084fb640ea32a975e29e12a012025f,RepoTags:[docker.io/library/ubuntu:22.04],RepoDigests:[docker.io/library/ubuntu@sha256:2af372c1e2645779643284c7dc38775e3dbbc417b2d784a27c5a9eb784014fb8 docker.io/library/ubuntu@sha256:a6d2b38300ce017add71440577d5b0a90460d0e57fd7aec21dd0d1b0761bbfb2],Size_:80416949,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Pinned:false,},Info:map[string]string{},} id=ad6af3f6-750c-44b8-9289-f515a3b6a916 name=/runtime.v1.ImageService/ImageStatus
INFO[2024-06-03 12:52:57.681163660Z] Checking image status: docker.io/library/ubuntu:22.04 id=68c840c8-0a27-4954-ba01-96c1a8f463b9 name=/runtime.v1.ImageService/ImageStatus
INFO[2024-06-03 12:52:57.681743943Z] Image status: &ImageStatusResponse{Image:&Image{Id:52882761a72a60649edff9a2478835325d084fb640ea32a975e29e12a012025f,RepoTags:[docker.io/library/ubuntu:22.04],RepoDigests:[docker.io/library/ubuntu@sha256:2af372c1e2645779643284c7dc38775e3dbbc417b2d784a27c5a9eb784014fb8 docker.io/library/ubuntu@sha256:a6d2b38300ce017add71440577d5b0a90460d0e57fd7aec21dd0d1b0761bbfb2],Size_:80416949,Uid:nil,Username:,Spec:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Pinned:false,},Info:map[string]string{},} id=68c840c8-0a27-4954-ba01-96c1a8f463b9 name=/runtime.v1.ImageService/ImageStatus
INFO[2024-06-03 12:52:57.682185616Z] Creating container: default/test/test id=6553e544-6e17-43ac-9719-7ec7a381e4d2 name=/runtime.v1.RuntimeService/CreateContainer
INFO[2024-06-03 12:52:57.682324267Z] Allowed annotations are specified for workload [io.containers.trace-syscall]
INFO[2024-06-03 12:52:57.686550376Z] Allowed annotations are specified for workload [io.containers.trace-syscall]
WARN[2024-06-03 12:52:57.686791277Z] Failed to get pid for pod infra container: container PID not initialized
INFO[2024-06-03 12:52:57.686819311Z] Allowed annotations are specified for workload [io.containers.trace-syscall]
WARN[2024-06-03 12:52:57.738732610Z] Failed to get pid for pod infra container: container PID not initialized
INFO[2024-06-03 12:52:57.738799401Z] Created container 3e4220219a7eb53e8f9a919c632e7472a44f34ae4baede827b32f8acccacd77a: default/test/test id=6553e544-6e17-43ac-9719-7ec7a381e4d2 name=/runtime.v1.RuntimeService/CreateContainer
INFO[2024-06-03 12:52:57.739643979Z] Starting container: 3e4220219a7eb53e8f9a919c632e7472a44f34ae4baede827b32f8acccacd77a id=f3f85018-d4f1-4e8d-bf9f-20f4c0d94cdb name=/runtime.v1.RuntimeService/StartContainer
WARN[2024-06-03 12:52:57.739715557Z] Failed to get pid for pod infra container: container PID not initialized
WARN[2024-06-03 12:52:57.741569755Z] Failed to get pid for pod infra container: container PID not initialized
INFO[2024-06-03 12:52:57.741599559Z] Started container PID=4301 containerID=3e4220219a7eb53e8f9a919c632e7472a44f34ae4baede827b32f8acccacd77a description=default/test/test id=f3f85018-d4f1-4e8d-bf9f-20f4c0d94cdb name=/runtime.v1.RuntimeService/StartContainer sandboxID=69ff04729c04f457c16053b45456ef60accd179fa1c64c78ecfb71eaa1aec642
```
Excerpt from kubelet logs:

```console
I0603 12:41:40.413609    3337 eviction_manager.go:277] "FileSystem detection" DedicatedImageFs=false SplitImageFs=true
```

```console
I0603 12:41:40.415462    3337 cadvisor_stats_provider.go:280] "Detect Split Filesystem" ImageFilesystems="&FilesystemUsage{Timestamp:1717418500415388965,FsId:&FilesystemIdentifier{Mountpoint:/tmp/test/overlay-images,},UsedBytes:&UInt64Value{Value:4096,},InodesUsed:&UInt64Value{Value:2,},}" ContainerFilesystems="&FilesystemUsage{Timestamp:1717418500415379760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-containers,},UsedBytes:&UInt64Value{Value:107396,},InodesUsed:&UInt64Value{Value:15,},}"
I0603 12:41:40.416841    3337 helpers.go:975] "Eviction manager:" log="observations" signal="containerfs.available" resourceName="ephemeral-storage" available="22636688Ki" capacity="31811408Ki" time="2024-06-03 12:41:40.413813995 +0000 UTC m=+0.099151579"
I0603 12:41:40.416860    3337 helpers.go:975] "Eviction manager:" log="observations" signal="containerfs.inodesFree" resourceName="inodes" available="1920760" capacity="2031616" time="2024-06-03 12:41:40.413813995 +0000 UTC m=+0.099151579"
I0603 12:41:40.416879    3337 helpers.go:975] "Eviction manager:" log="observations" signal="nodefs.available" resourceName="ephemeral-storage" available="22636688Ki" capacity="31811408Ki" time="2024-06-03 12:41:40.413813995 +0000 UTC m=+0.099151579"
I0603 12:41:40.416894    3337 helpers.go:975] "Eviction manager:" log="observations" signal="allocatableMemory.available" resourceName="memory" available="8040188Ki" capacity="8122004Ki" time="2024-06-03 12:41:40.416794913 +0000 UTC m=+0.102132497"
I0603 12:41:40.416928    3337 helpers.go:975] "Eviction manager:" log="observations" signal="nodefs.inodesFree" resourceName="inodes" available="1920760" capacity="2031616" time="2024-06-03 12:41:40.413813995 +0000 UTC m=+0.099151579"
I0603 12:41:40.416952    3337 helpers.go:975] "Eviction manager:" log="observations" signal="imagefs.available" resourceName="ephemeral-storage" available="22636688Ki" capacity="31811408Ki" time="2024-06-03 12:41:40.413813995 +0000 UTC m=+0.099151579"
I0603 12:41:40.416965    3337 helpers.go:975] "Eviction manager:" log="observations" signal="imagefs.inodesFree" resourceName="inodes" available="1920760" capacity="2031616" time="2024-06-03 12:41:40.413813995 +0000 UTC m=+0.099151579"
I0603 12:41:40.416974    3337 helpers.go:975] "Eviction manager:" log="observations" signal="pid.available" resourceName="pids" available="62940" capacity="63129" time="2024-06-03 12:41:40.416621442 +0000 UTC m=+0.101959036"
I0603 12:41:40.416983    3337 helpers.go:975] "Eviction manager:" log="observations" signal="memory.available" resourceName="memory" available="7665092Ki" capacity="8122004Ki" time="2024-06-03 12:41:40.413813995 +0000 UTC m=+0.099151579"
```

```console
E0603 12:44:59.543806    3790 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
```
Details of a test pod:

```console
Name:             test
Namespace:        default
Priority:         0
Service Account:  default
Node:             worker-node01/10.0.0.11
Start Time:       Mon, 03 Jun 2024 21:52:51 +0900
Labels:
Annotations:
Status:           Running
IP:               10.85.0.62
IPs:
  IP:  10.85.0.62
Containers:
  test:
    Container ID:  cri-o://3e4220219a7eb53e8f9a919c632e7472a44f34ae4baede827b32f8acccacd77a
    Image:         docker.io/library/ubuntu:22.04
    Image ID:      docker.io/library/ubuntu@sha256:2af372c1e2645779643284c7dc38775e3dbbc417b2d784a27c5a9eb784014fb8
    Port:
    Host Port:
    Command:
      /bin/sh
      -c
      ulimit -a
      ulimit -Sn
      ulimit -Hn
      ( while true ; do date ; sleep 1 ; done ) &
      pid=$!
      echo "Starting!"
      _term () {
        kill $pid
        echo "Caught SIGTERM!"
      }
      trap _term TERM
      wait $pid
      trap - TERM
      wait $pid
      exit 0
    State:          Running
      Started:      Mon, 03 Jun 2024 21:52:57 +0900
    Ready:          True
    Restart Count:  0
    Environment:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-srvjn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-srvjn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  22s   default-scheduler  Successfully assigned default/test to worker-node01
  Normal  Pulling    21s   kubelet            Pulling image "docker.io/library/ubuntu:22.04"
  Normal  Pulled     16s   kubelet            Successfully pulled image "docker.io/library/ubuntu:22.04" in 5.673s (5.673s including waiting). Image size: 80416949 bytes.
  Normal  Created    16s   kubelet            Created container test
  Normal  Started    16s   kubelet            Started container test
```
du -sh output:

```console
root@worker-node01:~/test# du -sh /tmp/test
440M    /tmp/test
```

No issues while running with split container and image filesystem locations. :+1: :tada:
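To re-run the same checks on another node, a quick sketch, assuming the kubelet runs under systemd and crictl is pointed at the CRI-O socket (mountpoints match the configuration shown above):

```console
# Confirm the kubelet detected the split filesystem layout.
$ journalctl -u kubelet | grep "FileSystem detection"
... eviction_manager.go:277] "FileSystem detection" DedicatedImageFs=false SplitImageFs=true

# Confirm the runtime reports separate image and container filesystems.
$ crictl imagefsinfo | grep mountpoint
          "mountpoint": "/tmp/test/overlay-images"
          "mountpoint": "/var/lib/containers/storage/overlay-containers"
```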

mrunalp commented 4 months ago

/label lead-opted-in
/milestone v1.31

kwilczynski commented 4 months ago

Hello everyone! I have opened a Pull Request to track this KEP's graduation to Beta:

/cc @kannon92 @mrunalp @SergeyKanzhelev @johnbelamaric

sreeram-venkitesh commented 4 months ago

/stage beta

prianna commented 4 months ago

Hello @kannon92 👋, Enhancements team here.

Just checking in as we approach enhancements freeze on 02:00 UTC Friday 14th June 2024 / 19:00 PDT Thursday 13th June 2024.

This enhancement is targeting stage beta for v1.31 (correct me if otherwise).

Here's where this enhancement currently stands:

For this KEP, it looks like we still need to do the following:

The status of this enhancement is marked as at risk for enhancement freeze. Once the above PR is merged, we'll update the status.

If you anticipate missing enhancements freeze, you can file an exception request in advance. Thank you!

bitoku commented 4 months ago

/assign

sreeram-venkitesh commented 4 months ago

With https://github.com/kubernetes/enhancements/pull/4684 merged, we can mark this KEP as tracked for enhancements freeze! 🎉

kwilczynski commented 4 months ago

/assign kwilczynski

hacktivist123 commented 4 months ago

Hello @kannon92 👋, 1.31 Docs Shadow here.

Does the enhancement work planned for 1.31 require any new docs or modifications to existing docs?

If so, please follow the steps here to open a PR against the dev-1.31 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday June 27, 2024 18:00 PDT.

Also, take a look at Documenting for a release to familiarise yourself with the docs requirements for the release.

Thank you!

rashansmith commented 4 months ago

Hi @bitoku @kannon92,

:wave: from the v1.31 Communications Team! We'd love for you to opt in to write a feature blog about your enhancement! Some reasons why you might want to write a blog for this feature include (but are not limited to) if this introduces breaking changes, is important to our users, or has been in progress for a long time and is graduating.

To opt in, let us know and open a Feature Blog placeholder PR against the website repository by 3rd July, 2024. For more information about writing a blog see the blog contribution guidelines.

Note: In your placeholder PR, use XX characters for the blog date in the front matter and file name. We will work with you on updating the PR with the publication date once we have a final number of feature blogs for this release.

kwilczynski commented 4 months ago

@rashansmith, a blog post about this feature has already been written and published in the past. See: