aburdenthehand opened this issue 1 year ago
To be even more specific: it seems one must specify the label on both the VM object and the VMI template (the spec.template.metadata.labels section of the VM resource). When specifying the VM as follows, I was able to expose VM ports via NodePort Service objects:
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/os: linux
    vmName: "__VM_NAME__"
  name: "__VM_NAME__"
spec:
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/domain: "__VM_NAME__"
        vmName: "__VM_NAME__"
    spec:
      domain:
        cpu:
          cores: __VM_CORES__
          model: host-passthrough
        devices:
          disks:
            - disk:
                bus: virtio
              name: disk0
            - cdrom:
                bus: sata
                readonly: true
              name: cloudinitdisk
          # see https://github.com/kubevirt/user-guide/pull/262/files
          rng: {}
        machine:
          type: q35
        resources:
          requests:
            memory: "__VM_MEMORY__"
      volumes:
        - name: disk0
          persistentVolumeClaim:
            claimName: "__VM_NAME__"
        - cloudInitNoCloud:
            userData: |
              #cloud-config
              hostname: __VM_NAME__
              ssh_pwauth: True
              disable_root: false
              ssh_authorized_keys:
                - __VM_SSH_PUBLIC_KEY__
          name: cloudinitdisk
```
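For reference, a NodePort Service selecting on the vmName label from the template above might look like the following sketch. The Service name and port numbers are illustrative (SSH is assumed), not taken from the thread; the key point is that the selector must match a label present in spec.template.metadata.labels so it lands on the VMI's pod:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: __VM_NAME__-ssh   # illustrative name
spec:
  type: NodePort
  selector:
    vmName: "__VM_NAME__"  # must match a label under spec.template.metadata.labels
  ports:
    - name: ssh
      protocol: TCP
      port: 22
      targetPort: 22
```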
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.

/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle rotten
Hi! I was wondering if this method of exposing the VM ports has been confirmed. If so, should the VM specification that @drssdinblck commented with be added to the doc?
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close
@kubevirt-bot: Closing this issue.
/remove-lifecycle rotten
/assign
@aburdenthehand please take a look.
The service objects doc currently specifies that users edit the VMI object; however, it seems this should instead be done on the VM (with a restart if the VM is running).
This was raised on this thread: https://kubernetes.slack.com/archives/C8ED7RKFE/p1688475040764709
If this is the case, we should update this doc: https://github.com/kubevirt/user-guide/blame/main/docs/virtual_machines/service_objects.md