Here's one solution (workaround), which we have used to handle https://github.com/kubernetes/kubernetes/issues/49964: return whatever the container needs in AllocateResponse, so those devices will be attached to the container, while the kubelet is not (and does not need to be) aware of that.
rpc Allocate(AllocateRequest) returns (AllocateResponse) {}
We can do this in a separate dummy DP.
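For illustration, a minimal sketch of what such a dummy DP's Allocate could look like, assuming the v1beta1 device plugin API; the plugin type, the import path, and the /dev/fuse path are illustrative assumptions, not something prescribed in this thread:

```go
package dummydp

import (
	"context"

	pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
)

// dummyPlugin hands out no real, countable resource; it only injects a host
// device node into every container that requests the dummy resource.
type dummyPlugin struct{}

// Allocate returns a DeviceSpec per container request. The kubelet forwards
// the spec to the runtime via the CRI, which attaches the device node; the
// kubelet itself never interprets the path.
func (p *dummyPlugin) Allocate(ctx context.Context, req *pluginapi.AllocateRequest) (*pluginapi.AllocateResponse, error) {
	resp := &pluginapi.AllocateResponse{}
	for range req.ContainerRequests {
		resp.ContainerResponses = append(resp.ContainerResponses, &pluginapi.ContainerAllocateResponse{
			Devices: []*pluginapi.DeviceSpec{{
				HostPath:      "/dev/fuse", // illustrative; any host device node works
				ContainerPath: "/dev/fuse",
				Permissions:   "rwm",
			}},
		})
	}
	return resp, nil
}
```

A pod would then simply request the dummy extended resource in its resource limits, and the device node shows up in the container without privileged: true.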
Seems related: https://github.com/kubernetes/kubernetes/issues/59380#issuecomment-366171312
As we discussed offline, it seems like another example of an unlimited resource, like /dev/kvm.
> while the kubelet is not (and does not need to be) aware of that.
In the normal case too, the kubelet is not aware of what is being passed to the CRI. :)
Yes, I believe the /dev/infiniband/rdma_cm case can be folded into #59380. We will move the discussion there.
I will still keep this issue open to track other use cases as well.
@resouer Great!! It's wonderful, I like this simple way of supporting passing host devices to containers :)
If the device plugin supported resource sharing among pods, then a DP could cover this problem (see the sketch below). https://docs.google.com/document/d/1ZgKH_K4SEfdiE_OfxQ836s4yQWxZfSjS288Tq9YIWCA/edit?disco=AAAAB1hQAk8
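One way to approximate sharing with the DP API as it stands, sketched under the same v1beta1 assumption, is to advertise N virtual copies of a single underlying device from ListAndWatch, so up to N pods can hold the resource at once; nShared and the ID scheme here are made up for illustration:

```go
package dummydp

import (
	"fmt"

	pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
)

const nShared = 64 // illustrative: how many pods may hold the device at once

type sharedPlugin struct{}

// ListAndWatch advertises nShared "virtual" copies of one underlying host
// device. Each copy is an independently schedulable unit, so up to nShared
// pods can request the resource concurrently, approximating sharing.
func (p *sharedPlugin) ListAndWatch(_ *pluginapi.Empty, stream pluginapi.DevicePlugin_ListAndWatchServer) error {
	devs := make([]*pluginapi.Device, 0, nShared)
	for i := 0; i < nShared; i++ {
		devs = append(devs, &pluginapi.Device{
			ID:     fmt.Sprintf("shared-%d", i),
			Health: pluginapi.Healthy,
		})
	}
	if err := stream.Send(&pluginapi.ListAndWatchResponse{Devices: devs}); err != nil {
		return err
	}
	select {} // block; a real plugin would re-send on health changes
}
```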
Another use case is accessing /dev/video0 for processing webcam input and other IoT things without adding a privileged security context.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
/remove-lifecycle stale
Another use case is /dev/pci_xxx for attaching PCI devices from the host.
When will this issue be solved?
I'm working with device plugins and also wish for this feature. I have a varying number of devices per node, so the ability to map all devices of a type (/dev/abc*) would be ideal. Currently the device plugin container (and the other init containers it depends on) has to map /dev and requires privileged: true, which is not a good fit for secure systems.
Another one is /dev/fuse, which with Docker requires --device=/dev/fuse --cap-add=SYS_ADMIN, but in k8s today needs hostPath: { path: /dev/fuse } plus securityContext: { privileged: true }.
See https://gitlab.com/arm-research/smarter/smarter-device-manager from Arm, which probably solves most of the use cases mentioned in this issue.
Thank you A LOT @tanskann!! This was the missing clue I was really looking for. I wrote down how to mount /dev/fuse without privileged: true here: https://github.com/kubernetes/kubernetes/issues/7890#issuecomment-766088805
My use case: a weather station that needs to talk to /dev/ttyUSB0. /remove-lifecycle rotten
/kind feature
/remove-lifecycle rotten
This is still very applicable, especially in IoT use cases.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
This still has a use case and is not resolved.
Another compelling use case for this feature is Falco. The official documentation has an example of running it with the principle of least privilege in Docker: https://falco.org/docs/getting-started/running/#docker-least-privileged. But that is not possible in K8s because of this missing feature.
I also need access to USB devices. /remove-lifecycle stale
I believe this is still fresh. I think it is still only possible to access something like /dev/ttyUSB0 using privileged: true. /remove-lifecycle stale
The device-manager API allows mounting special devices without privileged: true. At least this approach worked for /dev/fuse: https://github.com/kubernetes/kubernetes/issues/60748#issuecomment-766089063
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Is this a BUG REPORT or FEATURE REQUEST?: /kind feature
What happened: The previous discussion happened in #5607, which is pretty old and predates the CRI, not to mention the Device Plugin. Also, I believe the original requirement in #5607 should have already been fixed by the Device Plugin.
The new requirement showing up is: how can a user, or a DP per its own requirements, specify devices for a container? And how can we make this work with the current DP design?
This is actually needed when implementing many Device Plugins. One example is RDMA: https://github.com/hustcat/k8s-rdma-device-plugin, in which case /dev/infiniband/rdma_cm should be passed into every container that uses an RDMA device, in order to run RDMA applications in containers. @hustcat Please correct me if I misunderstood something.
Other devices include /dev/dri/renderD128, /dev/infiniband, etc.
What you expected to happen: We may need to collect user requirements first, and revisit the DP design to see how to support this.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
cc @RenaudWasTaken @vikaschoudhary16 @derekwaynecarr @vishh @jiayingz
Environment:
- Kubernetes version (use kubectl version):
- Kernel (e.g. uname -a):