stefan-sesser opened this issue 2 months ago
/cc @jschintag
Hi @stefan-sesser,
It is nice to know people want to use KubeVirt on s390x. Unfortunately, while we included the code to run on s390x in KubeVirt v1.3.0, we did not include jobs to build s390x images for the v1.3 release. There is an alpha version of v1.4, which will include s390x builds, releasing at the end of this month; see the release schedule. As you already pointed out, nightly builds are available in the meantime.
Are you deploying KubeVirt in a multi-arch cluster? So far our s390x effort has focused on single-architecture clusters, and the nightly builds are scoped to a single arch; they are not multi-arch manifests. Please also note we are still implementing the e2e tests for s390x, so not every KubeVirt feature is guaranteed to work on s390x at the moment.
If you need a specific feature (like multi-arch support) or have any feedback, feel free to reach out to me. I can't make any promises, but it is always better to have feedback.
Hi @jschintag
Thank you very much for your quick answer!
We are running our setup with 3 master nodes (x86) and one worker node (s390x). So I guess we have to replace the x86 master nodes with s390x nodes as well, right? What do you mean by not all KubeVirt features working on s390x? Do you have any concrete issues yet?
In my opinion multi-arch support would be great for s390x.
Thanks
Hi @stefan-sesser,
> What do you mean by not all kubevirt features work on s390x? Do you have any concrete issues yet?
Simply that we did not yet test/use all the features available, so it is possible some might not work (yet).
> We are running our setup on 3 master nodes (x86) and one worker node (s390x). So I guess we have to replace the x86 master nodes by s390x arch as well, or?
Yes, or simply try a single-node cluster. My question is: did you use the x86 nightly or the s390x nightly? You can use the s390x one by following the arm64 example and just replacing arm64 with s390x. Theoretically, I think KubeVirt does not deploy to master nodes by default, so it might work. (Again, entirely experimental and just a thought I just had.)
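For reference, swapping the arch in the documented arm64 nightly instructions would look roughly like this. This is a sketch assuming the s390x nightly artifacts follow the same bucket layout and file naming as the arm64 ones; the URLs are an assumption based on that pattern, not verified:

```shell
# Assumed to mirror the arm64 nightly install with the arch swapped;
# bucket layout and manifest names follow the documented arm64 pattern.
ARCH=s390x
BASE=https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt
# Resolve the latest nightly build tag for this arch.
RELEASE=$(curl -sL "${BASE}/latest-${ARCH}")
# Deploy the operator, then the KubeVirt custom resource.
kubectl apply -f "${BASE}/${RELEASE}/kubevirt-operator-${ARCH}.yaml"
kubectl apply -f "${BASE}/${RELEASE}/kubevirt-cr-${ARCH}.yaml"
```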
Yes, I am using the s390x nightly build by following the arm64 example. The virt-operator pods are up and running now, but afterwards it tries to create this strategy job: https://github.com/kubevirt/kubevirt/blob/main/pkg/virt-operator/strategy_job.go#L53 with the following affinity rules:
```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
            - key: node-role.kubernetes.io/worker
              operator: DoesNotExist
        weight: 100
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/control-plane
              operator: Exists
        - matchExpressions:
            - key: node-role.kubernetes.io/master
              operator: Exists
```
At the moment I am checking whether I can somehow override or work around this, but I have little hope.
Plan B would be to set up an s390x-only cluster.
Thanks again for your support!
No problem, thank you for trying it out.
A workmate just sent me this: https://kubevirt.io/user-guide/cluster_admin/installation/#restricting-kubevirt-components-node-placement
After patching the kubevirt resource like this:
```shell
kubectl patch -n kubevirt kubevirt kubevirt --type merge --patch '{"spec": {"infra": {"nodePlacement": {"nodeSelector": {"kubernetes.io/arch": "s390x"}}}}}'
kubectl patch -n kubevirt kubevirt kubevirt --type merge --patch '{"spec": {"workloads": {"nodePlacement": {"nodeSelector": {"kubernetes.io/arch": "s390x"}}}}}'
```
and deleting the existing strategy job, it finally deployed everything correctly on the cluster and the kubevirt resource shows phase Deployed :)
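For anyone following along, the two merge patches amount to this spec on the KubeVirt CR (a config sketch per the node-placement docs linked above; `infra` pins the control-plane components, `workloads` pins virt-handler and virt-launcher):

```yaml
# Sketch of the resulting KubeVirt CR spec after both patches.
spec:
  infra:
    nodePlacement:
      nodeSelector:
        kubernetes.io/arch: s390x
  workloads:
    nodePlacement:
      nodeSelector:
        kubernetes.io/arch: s390x
```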
Ok, I am making progress with the whole s390x architecture. Unfortunately, it is now failing due to missing s390x docker images for CDI. @jschintag what are you using for storage? Or has creating a virtual machine on s390x not been tested yet? Thx
Currently for testing we only used ephemeral VMs (specifically an Alpine containerdisk). CDI enablement is currently in progress; we already have PRs open for that. That being said, I think you can use a normal PVC as long as you prepare your VM disk inside it beforehand. That should work without CDI.
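The PVC route above can be sketched as a minimal VirtualMachineInstance that mounts an existing claim as its root disk. This is illustrative only: `vm-disk` is a hypothetical PVC name, and it is assumed to already contain a bootable disk image prepared out of band (since CDI is not available to import one):

```yaml
# Minimal VMI booting from a pre-populated PVC (no CDI involved).
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: testvm-s390x
spec:
  domain:
    devices:
      disks:
        - name: rootdisk
          disk:
            bus: virtio
    resources:
      requests:
        memory: 1Gi
  volumes:
    - name: rootdisk
      persistentVolumeClaim:
        claimName: vm-disk   # hypothetical PVC, prepared with a bootable image beforehand
```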
What happened: We have installed KubeVirt on one of our on-prem clusters to test the s390x support implemented with v1.3.0. On x86_64 worker nodes the virt-handler pods start up fine and everything works as expected; we are also able to spawn VMs. On the s390x worker node the virt-handler is in Init:CrashLoopBackOff. A quick look at quay.io revealed that there is no s390x image for version 1.3.0. Afterwards, we tried the nightly developer build of the operator, as I could see some s390x images on quay.io there. Unfortunately, the operator still tries to start the wrong container.
What you expected to happen: The virt-launcher pod starts with the correct image on an s390x k8s node.
How to reproduce it (as minimally and precisely as possible): Install KubeVirt in a cluster with an s390x worker node.
Additional context: Is there any timeline for full support of s390x?
Environment:
- KubeVirt version (use `virtctl version`): v1.3.0 and operator version v1.3.0-beta.0.899+ab1749640d7ea1
- Kubernetes version (use `kubectl version`): v1.30.3
- Kernel (use `uname -a`): 5.14.0-427.16.1.el9_4.s390x