I am hacking on firecracker-containerd to support firecracker snapshots, and I am facing the following problem (see also firecracker-microvm/firecracker#4036).
I create a snapshot of a VM running an nginx container, and then I try to load this snapshot on a different machine (to preserve the disk state, I simply commit a container snapshot using code adapted from nerdctl commit).
When restoring the disk state, I patch (see also firecracker-microvm/firecracker#4014) the snapshot's disk device path for the container snapshot device to point to a fresh containerd snapshot mount point (the previously committed image is mounted).
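For reference, the drive patch amounts to a `PATCH /drives/{drive_id}` request against the Firecracker API socket. This is a sketch only: the drive ID, socket path, and mount point below are placeholders, not the actual values from my setup.

```shell
#!/bin/sh
# Sketch: repoint the container-snapshot drive at a fresh containerd
# snapshot mount point before loading the VM snapshot.
# DRIVE_ID, NEW_PATH, and the socket path are hypothetical placeholders.
DRIVE_ID=container_root
NEW_PATH=/mnt/fresh-container-snapshot

BODY=$(printf '{"drive_id":"%s","path_on_host":"%s"}' "$DRIVE_ID" "$NEW_PATH")
echo "$BODY"

# Sent as:
# curl --unix-socket /run/firecracker.sock -X PATCH \
#      "http://localhost/drives/$DRIVE_ID" \
#      -H 'Content-Type: application/json' -d "$BODY"
```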
Snapshot loading succeeds and the container responds to HTTP requests (I am omitting the network setup details since they are not a problem now), but nginx returns internal server errors, and the following error appears in the VM's kernel logs:
It seems like the restored disk state (via container snapshot commit) is inconsistent with the VM's disk state.
For other containers, such as simple Python or Golang HTTP servers, the symptom after sending a request to the container loaded from a snapshot is a crash (the Python interpreter traps on an invalid opcode; the Golang runtime panics).
I have tried doing the same thing manually, i.e.:

1. Pull the image using firecracker-ctr.
2. Prepare a snapshot using firecracker-ctr.
3. Set up firecracker following the getting started guide, adding a stub drive the way firecracker-containerd does.
4. Send a patch drive request replacing the stub drive with the container snapshot mount.
5. Manually mount the container snapshot inside the VM and launch the nginx server.
6. Pause the VM and create a snapshot.
7. Transfer the VM state files to a different machine using rsync.
8. Repeat steps 1-2 on the second machine.
9. Resume the VM on the second machine.
Everything works, the nginx server responds with no errors.
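The pause/snapshot/resume steps above map onto the Firecracker API roughly as follows. This sketch only echoes the request bodies; the socket path and file names are placeholders for the ones used in my setup.

```shell
#!/bin/sh
# Sketch of the API calls behind the pause, snapshot-create, and
# load-and-resume steps. Values are placeholders.
PAUSE='{"state":"Paused"}'
CREATE='{"snapshot_type":"Full","snapshot_path":"snapshot_file","mem_file_path":"mem_file"}'
LOAD='{"snapshot_path":"snapshot_file","mem_backend":{"backend_type":"File","backend_path":"mem_file"},"resume_vm":true}'

# PAUSE goes to PATCH /vm, CREATE to PUT /snapshot/create, and LOAD to
# PUT /snapshot/load on the second machine after rsync-ing the state files:
printf '%s\n' "$PAUSE" "$CREATE" "$LOAD"

# e.g.:
# curl --unix-socket /run/firecracker.sock -X PUT \
#      http://localhost/snapshot/create \
#      -H 'Content-Type: application/json' -d "$CREATE"
```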
Even though the disk state is technically not the same (creating a container commit and pushing it to a registry would require patching firecracker-ctr), the container responds with a greeting (as opposed to an internal server error when doing the same thing via firecracker-containerd) and seems healthy. I do see the same error in the kernel log, but I believe that is due to the disk state difference.
Manually loading snapshots of simpler setups (e.g., manually creating the container snapshot for a simple Golang HTTP server) also works fine.
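For concreteness, the manual snapshot preparation (steps 1-2 above) looks roughly like the following. The commands are echoed rather than executed here, since they need a running firecracker-containerd daemon; the socket path, image ref, snapshot key, parent, and mount target are all placeholders.

```shell
#!/bin/sh
# Hypothetical sketch of manually pulling an image and preparing a writable
# container snapshot with firecracker-ctr (containerd's ctr built for
# firecracker-containerd). All values below are placeholders.
SOCK=/run/firecracker-containerd/containerd.sock
IMAGE=docker.io/library/nginx:latest

cat <<EOF
firecracker-ctr --address $SOCK images pull $IMAGE
firecracker-ctr --address $SOCK snapshots prepare my-snapshot <parent-chain-id>
firecracker-ctr --address $SOCK snapshots mounts /mnt/my-snapshot my-snapshot
EOF
```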
After discussing this issue with the firecracker folks in firecracker-microvm/firecracker#4036, we concluded that the problem is more likely in firecracker-containerd than in firecracker.
AFAICT, I have studied all firecracker-containerd interactions with container snapshots and with firecracker (both the VM and the agent running inside it), and I found no problems and no special filesystem actions beyond those I performed manually.
This leads me to the conclusion that the problem may be with the shim and container filesystem setup.
My patch to firecracker-containerd can be found in #760.
I can provide more context and more detailed steps for reproducing this issue, if needed. I would really appreciate any help or suggestions on this effort to support firecracker snapshots in firecracker-containerd.