neith00 opened this issue 3 years ago:
It seems like ignite can pull the same image concurrently and assign it different UUIDs.
Running
sudo ignite kernels -q | xargs sudo ignite kernel rm && sudo ignite images -q | xargs sudo ignite image rm
and then pulling the images before starting the VMs concurrently works around the issue.
> It seems like ignite can pull the same image concurrently and assign it different UUIDs.
Sorry, yes, this is currently a known limitation. We added locks to reduce contention on devicemapper and some other runtime resources for the VM, but the design for concurrent image pull still has a few open questions.
We've discussed how to support concurrent image pull in detail on the development call. If you're interested and able to join, we're happy to discuss this again.
If we hashed the image and took a hash-specific lockfile before pulling, we could prevent this problem: other writers would wait on the lock and re-check the image store afterward, as in the sketch below.
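A minimal sketch of that idea in Go, using a flock-based lockfile. Note that imageInStore and pullImage are hypothetical stand-ins for ignite's store lookup and pull logic, not its real API:

```go
package pull

import (
	"crypto/sha256"
	"encoding/hex"
	"os"
	"path/filepath"
	"syscall"
)

// lockPathFor derives a lockfile path from the image reference, so every
// concurrent puller of the same image contends on the same lock.
// (Hypothetical helper; ignite's real store layout differs.)
func lockPathFor(imageRef string) string {
	sum := sha256.Sum256([]byte(imageRef))
	return filepath.Join(os.TempDir(), "ignite-pull-"+hex.EncodeToString(sum[:8])+".lock")
}

// pullOnce takes an exclusive flock keyed on the image reference, re-checks
// the image store while holding it, and pulls only if the image is still
// missing. imageInStore and pullImage are stand-ins for ignite internals.
func pullOnce(imageRef string, imageInStore func(string) bool, pullImage func(string) error) error {
	f, err := os.OpenFile(lockPathFor(imageRef), os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()

	// Blocks until any concurrent puller of the same image releases the lock.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

	// Re-check after acquiring the lock: another process may have completed
	// the pull while we were waiting, in which case there is nothing to do.
	if imageInStore(imageRef) {
		return nil
	}
	return pullImage(imageRef)
}
```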
There was also discussion about using containerd's devicemapper-related features directly, which would completely remove the need for ignite to synchronize these actions itself.
Lastly, there's a much larger topic around the kinds of problems that an ignite daemon could fix.
The workaround for this is as you mentioned: please pre-pull any shared images (OS and kernel) before you start your VMs concurrently.
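For example, something like the following (the image names are illustrative; use whichever OS image and kernel your VMs reference):
sudo ignite image import weaveworks/ignite-ubuntu
sudo ignite kernel import weaveworks/ignite-kernel:4.19.125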
Also, we may be able to improve this failure mode. It's unfortunate that concurrent pulls create an ambiguous store that then breaks all future non-concurrent runs as well.
Maybe we could just pick the newest matching image? A rough sketch of that resolution follows.
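In Go terms, it might look something like this, where Image and its Created field are hypothetical stand-ins for ignite's stored image metadata:

```go
package pull

import (
	"errors"
	"time"
)

// Image is a hypothetical stand-in for ignite's stored image metadata.
type Image struct {
	UID     string
	Name    string
	Created time.Time
}

// pickNewest resolves an otherwise-ambiguous query by returning the most
// recently created match instead of failing outright.
func pickNewest(matches []Image) (Image, error) {
	if len(matches) == 0 {
		return Image{}, errors.New("no matching images")
	}
	newest := matches[0]
	for _, m := range matches[1:] {
		if m.Created.After(newest.Created) {
			newest = m
		}
	}
	return newest, nil
}
```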
When running:
I always get:
FATA[0000] ambiguous kernel query: "weaveworks/ignite-kernel:4.19.125" matched the following IDs/names: weaveworks/ignite-kernel:4.19.125, weaveworks/ignite-kernel:4.19.125
I ran
sudo ignite images -q | xargs sudo ignite image rm
to clean up the local images, but the error still happens.