kubernetes / kubernetes

Production-Grade Container Scheduling and Management
https://kubernetes.io
Apache License 2.0

See extra disks in container other than the one mounted inside the pod using lsblk command #126262

Closed adarsh-dell closed 1 month ago

adarsh-dell commented 1 month ago

What happened?

Here's the scenario I tested on a k8s cluster:

Created two pods, each consuming its own PVC/PV (one volume mounted per pod), and both got scheduled on the same worker node. Both pods are in a running state and their volumes are mounted. Observations:

Running the lsblk command on the worker node shows both block devices and their mount paths. Please see the screenshot below.

[screenshot]

Running the same lsblk command from inside one of the app pods (using kubectl exec -it ... -- bash) lists both block devices but shows only one volume as mounted, with its mount path. The manifest of this pod mounts only one volume, so why are we seeing the other devices?


I also tried to create a very simple, basic pod using the YAML file below.

Note: This pod uses no PVC or dynamic provisioning, and it was also scheduled to the same worker node where the other two pods are already running and consuming CSI volumes. Even inside this container, running lsblk shows sda, sdb, sdc, scinia, scinib, etc., although I haven't mounted anything inside the pod.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
```

[screenshot]

The question is why we are seeing all these devices even without mounting them explicitly inside the pod.

There is no functional impact: reading and writing data works fine.

What did you expect to happen?

I expected not to see any device other than the one explicitly mounted inside the pod. If this is expected behavior, why?

How can we reproduce it (as minimally and precisely as possible)?

Steps are mentioned in the description.
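For completeness, a minimal manifest for the PVC-backed pods from the scenario might look like the following sketch. The names and the storageClassName (`powerflex-sc`) are hypothetical placeholders, not taken from the actual cluster:

```yaml
# Hypothetical PVC + pod sketch; all names and the storage class are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: powerflex-sc   # placeholder CSI storage class
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: nginx:latest
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-pvc
```

Creating two such pods that land on the same worker node reproduces the observation.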

Anything else we need to know?

No response

Kubernetes version

```console
$ kubectl version
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.3
```

Cloud provider

NA

OS version

```console
$ cat /etc/os-release
NAME="openSUSE Leap"
VERSION="15.5"
ID="opensuse-leap"
ID_LIKE="suse opensuse"
VERSION_ID="15.5"
PRETTY_NAME="openSUSE Leap 15.5"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:leap:15.5"
BUG_REPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/"
DOCUMENTATION_URL="https://en.opensuse.org/Portal:Leap"
LOGO="distributor-logo-Leap"

$ uname -a
Linux master-1-XouLFUVP8VwOg 5.14.21-150500.55.68-default #1 SMP PREEMPT_DYNAMIC Wed Jun 5 21:39:05 UTC 2024 (40e256a) x86_64 x86_64 x86_64 GNU/Linux
```

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, ...) and versions (if applicable)

k8s-ci-robot commented 1 month ago

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
kundan2707 commented 1 month ago

/sig storage

xing-yang commented 1 month ago

/cc @jsafrane

jsafrane commented 1 month ago

You run the first lsblk on the host. Kubernetes (kubelet) mounts all volumes of all Pods on the host first and then asks the container runtime to bind-mount only the Pod's volumes into that Pod's containers. So it is expected that you see all volumes mounted on the host.

You run the second lsblk in a container. You should see only the Pod's volumes mounted, and lsblk does not show any extra mounts, so that part is OK. I am not sure why it also shows the sdb and sdc block devices from the host, though. What container runtime do you use? And do you have sdb/sdc mounted on the host?
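The host-side half of this (kubelet staging every pod volume on the node before the runtime bind-mounts it into containers) can be inspected under kubelet's state directory on the worker node. A sketch, assuming the default kubelet root of `/var/lib/kubelet`; on a machine without kubelet it simply prints nothing:

```shell
# Kubelet stages each pod volume on the host under
#   <kubelet-root>/pods/<pod-uid>/volumes/<plugin>/<volume-name>/
# before the container runtime bind-mounts it into the pod's containers.
KUBELET_DIR="${KUBELET_DIR:-/var/lib/kubelet}"   # default kubelet root

# List the per-pod volume directories staged on the host
# (prints nothing if kubelet is not running on this machine).
find "$KUBELET_DIR/pods" -maxdepth 4 -path '*/volumes/*' 2>/dev/null || true
```

This is why the host's lsblk shows every pod's volume mounted, while each container is supposed to receive only its own bind mounts.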

/triage needs-information

adarsh-dell commented 1 month ago

Hi @jsafrane Thanks for looking into this issue:

[screenshot]

Thanks

fanf4 commented 1 month ago

Hi @jsafrane ,

Please see below for more detail.

  1. We created a pod with one PVC mounted (provisioned by the Dell PowerFlex CSI driver).

  2. Logged in to the container "task pv container" and ran lsblk: it listed all disks of the HOST, although this container only needs disk scinie. The other disks (like sdc...sdq, scnia...scinii) are mounted by other pods on HOST worker2. [screenshot]

  3. Output of lsblk on the HOST: lsblk output on host.txt

  4. We created a pod without a PVC, logged in to the container, ran lsblk, and saw all disks of the HOST as well. [screenshot]

The tests above are in an OpenShift env.

We also tested in a K8s env: we created a pod without a PVC, and the container could still list all disks of host worker2. Disks scina and scinab are mounted by other pods on worker2.

[screenshot]

jsafrane commented 1 month ago

It looks like an issue (or feature?) in containerd; please open an issue there. Kubernetes does not tell the container runtime to show all host devices, and some other runtimes explicitly block them, e.g. CRI-O since https://github.com/cri-o/cri-o/pull/4072.

Note that while lsblk can list the host devices, because it uses information from /sys, the devices are not actually visible in the container - `ls /dev` should not show `sda` or `scini*`.
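The point about /sys can be checked directly: lsblk builds its device list from sysfs, so inside a container whose /sys exposes the host's block layer it enumerates host disks even when no corresponding node exists in /dev. A minimal sketch that runs in any Linux environment and compares the two views:

```shell
# lsblk reads device topology from sysfs (/sys), not from device nodes
# in /dev. List block devices known to sysfs and report whether a
# matching /dev node exists for each.
with_node=0
sysfs_only=0
for dev in /sys/class/block/*; do
  [ -e "$dev" ] || continue          # skip the literal glob if the dir is empty
  name=$(basename "$dev")
  if [ -e "/dev/$name" ]; then
    with_node=$((with_node + 1))
    echo "$name: /dev node present"
  else
    sysfs_only=$((sysfs_only + 1))
    echo "$name: listed in sysfs only"
  fi
done
echo "devices with /dev node: $with_node, sysfs-only: $sysfs_only"
```

In a container exhibiting the reported behavior, the host disks would show up as "listed in sysfs only", matching the observation that lsblk sees them while `ls /dev` does not.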

/close

k8s-ci-robot commented 1 month ago

@jsafrane: Closing this issue.

In response to [this](https://github.com/kubernetes/kubernetes/issues/126262#issuecomment-2273949748).