cmoulliard closed this issue 7 months ago.
I'm trying this, but I'm stuck on the following error:
[ 1.3] Downloading: http://builder.libguestfs.org/fedora-39.xz
sha512sum '/home/janedoe/.cache/virt-builder/fedora-39.x86_64.1'
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
[ 2.2] Planning how to build this image
virt-builder: error: statvfs: No such file or directory:
_virt_builder/output
Unix.Unix_error(Unix.ENOENT, "unlink", "/tmp/virt-builder.0LA2zC/vbcache7e91db.txt.s9tjcobs")
Unix.Unix_error(Unix.ENOENT, "unlink", "/tmp/virt-builder.0LA2zC/vbcache71345d.txt.sz6mceue")
Unix.Unix_error(Unix.ENOENT, "unlink", "/tmp/virt-builder.0LA2zC/vbcache015627.txt.l4go5xcn")
rm -rf -- '/tmp/virt-builder.0LA2zC'
libguestfs: trace: close
libguestfs: closing guestfs handle 0x55a4f7d6e0e0 (state 0)
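The statvfs ENOENT above points at the output location: during the "Planning how to build this image" step, virt-builder calls statvfs on the output directory to check free space, and `_virt_builder/output` does not exist yet. A minimal sketch of the workaround, assuming the image was meant to be written under that directory (the exact virt-builder invocation is not shown above):

```shell
# Assumption: the failing run used an output path under _virt_builder/output.
# statvfs fails with ENOENT because the directory is missing; create it first.
mkdir -p _virt_builder/output
# then re-run virt-builder, e.g.:
# virt-builder fedora-39 -o _virt_builder/output/disk.img
```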
This problem is most likely also the reason why the podman client pod cannot access the socat service, as it gets a "no route to host" error. To be verified!
I SSHed into the VM, executed sudo modprobe iptable-nat, and rebooted the VM, but we are still getting:
k get vm/podman-remote -ojson | jq -r '.status.interfaces[]'
jq: error (at <stdin>:148): Cannot iterate over null (null)
kubectl exec podman-client -it -- /bin/sh
sh-5.2# podman -r --url=tcp://10.131.1.176:2376 ps
Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM
Error: unable to connect to Podman socket: Get "http://d/v4.7.2/libpod/_ping": dial tcp 10.131.1.176:2376: connect: no route to host
sh-5.2#
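To separate "socat is not listening in the VM" from a genuine routing problem, the libpod `_ping` endpoint shown in the error can be hit directly with curl. A hedged sketch, reusing the sample address from the session above (a working tunnel should answer `OK`):

```shell
# Sample VMI address from the session above; substitute the current one.
VM_IP=10.131.1.176
# This is the endpoint the remote podman client was failing to reach.
PING_URL="http://${VM_IP}:2376/v4.7.2/libpod/_ping"
echo "$PING_URL"
# curl --max-time 5 "$PING_URL"   # "no route to host" here confirms a network-level issue
```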
NOTE: I suspect that the VMs installed on OCP under the namespace openshift-virtualization-os-images have been tailored for use with KubeVirt, which may explain the behavior we see when the Fedora OS image comes from there.
I got from Alice the Dockerfile they use to build/customize the image: https://github.com/kubevirt/test-benchmarks/blob/main/containerdisk/Dockerfile#L1
FYI, I will run a test using this Dockerfile to verify whether the network is working.
FROM fedora:38 as builder
ENV LIBGUESTFS_BACKEND direct
ENV IMAGE https://mirror.karneval.cz/pub/linux/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2
RUN dnf install -y libguestfs guestfs-tools curl
# Cache image download for the next steps
RUN curl -L -o /disk.img ${IMAGE}
RUN virt-customize -a /disk.img --install cloud-init,podman,openssh-server,socat,qemu-guest-agent
RUN virt-customize -a /disk.img \
--root-password password:test \
--run-command 'useradd -m fedora -s /bin/bash' \
--run-command 'echo "fedora ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers.d/fedora' \
--password 'fedora:password:fedora' \
--run-command "sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config" \
--run-command "dnf clean all -y"
RUN virt-sparsify --in-place /disk.img
FROM scratch
COPY --from=builder /disk.img /disk/disk.img
I built a new image using this Dockerfile, and we can now get the external IP address using:
oc project test3
kubectl get vmi/quarkus-dev -ojson | jq -r '.status.interfaces[] | .ipAddress'
10.131.1.192
and
kubectl exec podman-client -it -- /bin/sh
sh-5.2# podman -r --url=tcp://10.131.1.192:2376 ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
sh-5.2#
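The jq filter used here fails with "Cannot iterate over null" while the VMI has no reported interfaces yet (the error seen earlier). A hedged null-safe variant, demonstrated on a sample JSON document standing in for the `kubectl get vmi ... -ojson` output:

```shell
# Sample VMI status JSON (stand-in for `kubectl get vmi/quarkus-dev -ojson`).
json='{"status":{"interfaces":[{"ipAddress":"10.131.1.192"}]}}'
# `// empty` prints nothing instead of erroring while .status.interfaces
# is still null, e.g. before qemu-guest-agent has reported the address.
echo "$json" | jq -r '.status.interfaces // empty | .[].ipAddress'
# → 10.131.1.192
```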
Here is the scenario that I'm testing now on a Linux VM:
IMAGE=https://mirror.karneval.cz/pub/linux/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2
curl -L -o disk.img ${IMAGE}
cat <<END > customize-vm
sudo systemctl start podman.socket
sudo systemctl enable podman.socket
#sudo modprobe iptable-nat
sudo cat > /etc/systemd/system/podman-remote.service <<EOF
[Unit]
Description=Podman Remote
Requires=podman.socket
After=network.target podman.socket
[Service]
Restart=always
ExecStart=socat TCP-LISTEN:2376,reuseaddr,fork,bind=0.0.0.0 unix:/run/podman/podman.sock
[Install]
WantedBy=default.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable podman-remote.service
sudo systemctl start podman-remote.service
END
cat <<EOF > Dockerfile
FROM scratch
COPY ./disk.img /disk/
EOF
virt-customize -a disk.img --install cloud-init,podman,openssh-server,socat,qemu-guest-agent
virt-customize -a disk.img \
--root-password password:test \
--run-command 'useradd -m fedora -s /bin/bash' \
--run-command 'echo "fedora ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers.d/fedora' \
--password 'fedora:password:fedora' \
--run-command "sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config" \
--run ./customize-vm \
--run-command "dnf clean all -y" -v
virt-sparsify --in-place disk.img
docker build -f Dockerfile -t quay.io/ch007m/quarkus-dev-vm .
docker push quay.io/ch007m/quarkus-dev-vm
# Create a secret with your public key
kubectl create secret generic fedora-ssh-key --from-file=key=/Users/cmoullia/.ssh/id_rsa.pub
cat <<EOF | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: quarkus-dev
labels:
app: quarkus-dev
vm.kubevirt.io/template: fedora-server-small
vm.kubevirt.io/template.namespace: openshift
vm.kubevirt.io/template.revision: '1'
vm.kubevirt.io/template.version: v0.25.0
spec:
dataVolumeTemplates:
- apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
name: quarkus-dev
spec:
source:
registry:
url: 'docker://quay.io/ch007m/quarkus-dev-vm' #'docker://quay.io/containerdisks/fedora:38'
storage:
resources:
requests:
storage: 30Gi
running: true
template:
metadata:
annotations:
vm.kubevirt.io/flavor: small
vm.kubevirt.io/os: fedora
vm.kubevirt.io/workload: server
labels:
kubevirt.io/domain: quarkus-dev
kubevirt.io/size: small
spec:
accessCredentials:
- sshPublicKey:
propagationMethod:
configDrive: {}
source:
secret:
secretName: fedora-ssh-key
domain:
cpu:
cores: 1
sockets: 1
threads: 1
devices:
disks:
- disk:
bus: virtio
name: rootdisk
- disk:
bus: virtio
name: cloudinitdisk
interfaces:
- masquerade: {}
model: virtio
name: default
networkInterfaceMultiqueue: true
rng: {}
resources:
requests:
memory: 2Gi
networks:
- name: default
pod: {}
terminationGracePeriodSeconds: 180
volumes:
- cloudInitConfigDrive:
userData: |-
#cloud-config
hostname: quarkus-dev
name: cloudinitdisk
- dataVolume:
name: quarkus-dev
name: rootdisk
EOF
# SSH to the VM and test if podman replies
virtctl -n test3 ssh fedora@quarkus-dev --local-ssh=true -c "podman -r --url=tcp://localhost:2376 version"
Client: Podman Engine
Version: 4.7.2
API Version: 4.7.2
Go Version: go1.20.10
Built: Tue Oct 31 14:30:11 2023
OS/Arch: linux/amd64
Server: Podman Engine
Version: 4.7.2
API Version: 4.7.2
Go Version: go1.20.10
Built: Tue Oct 31 14:30:11 2023
OS/Arch: linux/amd64
# Do the same using a podman client pod
VM_IP=$(kubectl get vmi/quarkus-dev -ojson | jq -r '.status.interfaces[] | .ipAddress')
echo $VM_IP
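Right after the VM starts, this same lookup can race with qemu-guest-agent and return nothing (the interface list is null until the agent reports). A hedged sketch of a small polling helper; `retry_until` is a hypothetical name, and the commented kubectl line shows how it could wrap the lookup above:

```shell
# retry_until <tries> <cmd...>: re-run cmd until it prints non-empty output.
retry_until() {
  local tries=$1; shift
  local out
  for _ in $(seq "$tries"); do
    out=$("$@") && [ -n "$out" ] && { echo "$out"; return 0; }
    sleep 1
  done
  return 1
}
# VM_IP=$(retry_until 60 sh -c \
#   "kubectl get vmi/quarkus-dev -ojson | jq -r '.status.interfaces[]?.ipAddress'")
```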
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: podman-client
spec:
containers:
- name: podman-client
image: quay.io/podman/stable
args:
- sleep
- "1000000"
securityContext:
capabilities:
add:
- "SYS_ADMIN"
- "MKNOD"
- "SYS_CHROOT"
- "SETFCAP"
- "NET_RAW"
EOF
kubectl exec podman-client -it -- /bin/sh
sh-5.2# podman -r --url=tcp://10.131.1.217:2376 ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
:-)
To avoid downloading the Fedora image (> 1.x GB) from a registry for every VM created, we can use the following approach: download the image once, then clone the PVC into the target namespace.
cat <<EOF | kubectl apply -n openshift-virtualization-os-images -f -
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
name: fedora
annotations:
cdi.kubevirt.io/storage.bind.immediate.requested: 'true'
spec:
source:
registry:
url: 'docker://quay.io/ch007m/quarkus-dev-vm'
storage:
resources:
requests:
storage: 10Gi
volumeMode: Filesystem
EOF
cat <<EOF | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
name: quarkus-dev
labels:
app: quarkus-dev
spec:
dataVolumeTemplates:
- apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
name: quarkus-dev
spec:
pvc:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 11Gi
source:
pvc:
namespace: openshift-virtualization-os-images
name: fedora
#source:
# registry:
# url: 'docker://quay.io/ch007m/quarkus-dev-vm' #'docker://quay.io/containerdisks/fedora:38'
#storage:
# resources:
# requests:
# storage: 30Gi
running: true
template:
metadata:
labels:
kubevirt.io/domain: quarkus-dev
kubevirt.io/size: small
spec:
accessCredentials:
- sshPublicKey:
propagationMethod:
configDrive: {}
source:
secret:
secretName: fedora-ssh-key
domain:
cpu:
cores: 1
sockets: 1
threads: 1
devices:
disks:
- disk:
bus: virtio
name: rootdisk
- disk:
bus: virtio
name: cloudinitdisk
interfaces:
- masquerade: {}
model: virtio
name: default
networkInterfaceMultiqueue: true
rng: {}
resources:
requests:
memory: 2Gi
networks:
- name: default
pod: {}
terminationGracePeriodSeconds: 180
volumes:
- cloudInitConfigDrive:
userData: |-
#cloud-config
hostname: quarkus-dev
name: cloudinitdisk
- dataVolume:
name: quarkus-dev
name: rootdisk
EOF
As the FATAL message reported by virt-customize is not blocking, I will close this ticket and integrate what I commented here into the ticket/PR about having our own VM image.
@iocanel
To avoid downloading the Fedora image (> 1.x GB) from a registry for every VM created, we can use the following approach: download the image once, then clone the PVC into the target namespace.
I think this is what CDI (Containerized Data Importer) is all about and I agree with the approach.
Fixed with PR #20