Icedroid opened 5 days ago
So I am a little confused about what the DV request looks like. I see that the IMPORTER_PULL_METHOD is
node
but in that case the endpoint should not be
name: IMPORTER_ENDPOINT
value: >-
http://image-cache.test/public/windows_server_2019_x64_cn.qcow2
Could you please attach the DataVolume?
(node pull does an import from a second container in the same pod, so the endpoint would be http://localhost:8100/disk.img)
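For comparison, a plain HTTP-source DataVolume (the kind that would produce an external endpoint like the one above, without the node pull method) looks roughly like this; the name and storage size are placeholders, since the actual DV was not attached:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: windows-server-dv        # placeholder name
spec:
  source:
    http:
      url: "http://image-cache.test/public/windows_server_2019_x64_cn.qcow2"
  storage:
    volumeMode: Block            # PVC in block mode, as in this issue
    resources:
      requests:
        storage: 20Gi            # placeholder size
```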
I created the importer pod manually. I want to use nbdkit+curl for qemu-img info and qemu-img convert. I found that in the new version IMPORTER_PULL_METHOD must be node, so I added the IMPORTER_PULL_METHOD env. How should I configure the nginx server to test nbdkit+curl?
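To exercise nbdkit+curl against the nginx server without going through CDI at all, one option is nbdkit's captive mode, which starts the NBD server, runs a command against it, and tears it down. This is only a sketch under the assumption that nbdkit (with the curl plugin) and qemu-img are installed where the image-cache.test service is reachable:

```shell
# Captive nbdkit: expose the remote qcow2 over NBD, then inspect it.
# nbdkit substitutes "$uri" with the NBD URI of the temporary socket.
nbdkit -U - curl url=http://image-cache.test/public/windows_server_2019_x64_cn.qcow2 \
  --run 'qemu-img info "$uri"'
```

If this fails in the same way as the importer pod, the problem is between nbdkit's curl plugin and nginx rather than in CDI itself.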
We were hitting quite a few issues with nbdkit for external mirrors, so the only path that still uses it is the node pull registry import (delegating the import to the container runtime), where we control the HTTP server that serves the image.
We don't really support creating the pod manually; there are just too many moving parts. Why do you have to use nbdkit?
What happened: CDI used nbdkit + curl to download a large image file (6.6 GB) and failed with an error.
I exec'd into the importer-dv pod and opened the nbdkit debug error log:
What you expected to happen: CDI should be able to download the image file and convert it to raw without nbdkit errors.
curl -I http://image-cache.test/public/windows_server_2019_x64_cn.qcow2 returns 200 and the response headers include Accept-Ranges. curl -O http://image-cache.test/public/windows_server_2019_x64_cn.qcow2 downloads successfully. curl -H "Range: bytes=0-1024" -I http://image-cache.test/public/windows_server_2019_x64_cn.qcow2 also responds successfully.
The k8s service image-cache.test is nginx; it caches the qcow2 file. The nginx.conf is as follows:
How can I make the nginx server support nbdkit+curl, so the qcow2 file can be streamed and converted to raw?
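nbdkit's curl plugin relies on HTTP range requests, so a caching nginx must answer 206 Partial Content even when serving from its cache, not just full 200 responses. A hedged sketch of the relevant directives; the upstream name and cache zone are placeholders, since the actual nginx.conf was not included above:

```nginx
location /public/ {
    proxy_pass http://image_upstream;   # placeholder upstream
    proxy_cache image_cache;            # placeholder cache zone
    # Answer byte-range requests even for cached/proxied responses
    # where the upstream did not advertise Accept-Ranges:
    proxy_force_ranges on;
}
```

With this in place, the `curl -H "Range: bytes=0-1024" -I` check should return 206 with a Content-Range header rather than 200.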
How to reproduce it (as minimally and precisely as possible):
- Ceph block storage (PVC in block mode)
- CDI v1.60.4
Additional context: It appears using nbdkit to curl the file could be causing issues.
Suggestion: If I don't use nbdkit with curl, I can't limit the write bandwidth to the Ceph block cluster during the qcow2-to-raw conversion. The high write bandwidth has affected other Ceph block workloads.
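On the bandwidth point: nbdkit ships a rate filter that throttles NBD traffic, which in turn caps how fast qemu-img can write the converted image into the Ceph block device. A sketch, assuming nbdkit with the curl plugin and rate filter is available; the rate value and target device path are placeholders (the rate filter's limit is expressed in bits per second):

```shell
# Throttle the NBD source so the raw conversion cannot saturate
# the Ceph cluster's write bandwidth (rate is in bits per second).
nbdkit -U - --filter=rate rate=800M \
  curl url=http://image-cache.test/public/windows_server_2019_x64_cn.qcow2 \
  --run 'qemu-img convert -p -O raw "$uri" /dev/example-block-device'
```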
Environment:
- CDI version (use kubectl get deployments cdi-deployment -o yaml): v1.60.4
- Kubernetes version (use kubectl version): v1.24.0 or v1.18.0
- DV specification: N/A
- Cloud provider or hardware configuration: N/A
- OS (e.g. from /etc/os-release): CentOS 7
- Kernel (e.g. uname -a): 4.18
- Install tools: N/A
- Others: ceph v16.2.7, rook/ceph v1.9.2