When using Pypanda/QEMU/Panda with Avatar and the POSIX message queues, if for some reason a queue isn't unlinked before the process exits (for example, due to a crash), then /dev/mqueue can fill up completely, and the queue limit (ULIMIT) causes this error message:
qemu: qemu_avatar_mq_open_read: No space left on device
Technically this will only happen if the TX/RX queue names change between runs, such as when the name field of the target being spawned includes a unique identifier like the PID:
$ ls /dev/mqueue
EMU34129_tx_queue EMU41915_tx_queue EMU44523_tx_queue
EMU36565_rx_queue EMU42910_rx_queue EMU45457_rx_queue
...
It seems error-prone to rely on the process itself to clean up via an atexit handler, since it may never get there. A quick fix is to unlink the queue immediately after Avatar has connected to it and after QEMU has created it. This assumes the queue serves a single connection pair; once unlinked, it cannot be reopened by name. It's unclear whether other clients of remote memory would have issues with this, but since Avatar only supports opening an existing queue rather than creating one, I believe this works out.