I'm getting crashes when data is streaming from multiple backends. We've done this successfully in the past, but in this case the backends have different numbers of channels and samples per packet. Is dastard intended to handle this case, or would this be considered a bug?
If you did NOT intend to handle this case, that's OK; I'd just like to understand what the intended behavior is.
Background: the PNNL enclosure we are building up has a different maximum number of channels in each fset. When running multiple fsets, autotune sets the number of samples per packet based on the maximum possible number of channels, so we end up with a different number of samples per packet for each fset. This can be fixed in software (except that we would need to rebuild the squashfs).
Demonstration of the "shapes" of the packets from the different backends: 08a80060 is 15 samples x 4 channels per packet, 09a80060 is 31 samples x 7 channels per packet.
$ datatest.py None udp://10.0.15.20 $((16*8192))
0 08a80060 12280931 120 (15, 4) x [('angle', '<i2')]
8192 09a80060 5942350 434 (31, 7) x [('angle', '<i2')]
16384 08a80060 12280932 120 (15, 4) x [('angle', '<i2')]
24576 08a80060 12280933 120 (15, 4) x [('angle', '<i2')]
32768 09a80060 5942351 434 (31, 7) x [('angle', '<i2')]
40960 08a80060 12280934 120 (15, 4) x [('angle', '<i2')]
114688 08a80060 12280940 120 (15, 4) x [('angle', '<i2')]
122880 08a80060 12280941 120 (15, 4) x [('angle', '<i2')]
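As a sanity check on those shapes, the fourth column of the dump appears to be the payload size in bytes, and it matches 15 x 4 x 2 = 120 and 31 x 7 x 2 = 434 for int16 ('<i2') samples. A minimal Go sketch of that arithmetic (the helper name is just for illustration, not anything in dastard):

package main

import "fmt"

// payloadBytes is a hypothetical helper: expected payload size for a packet
// carrying samples x channels values of the given width in bytes.
func payloadBytes(samples, channels, bytesPerSample int) int {
	return samples * channels * bytesPerSample
}

func main() {
	fmt.Println(payloadBytes(15, 4, 2)) // 120 bytes, as reported for 08a80060
	fmt.Println(payloadBytes(31, 7, 2)) // 434 bytes, as reported for 09a80060
}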
dastard crash:
$ dastard
This is DASTARD version 0.3.4pre1 (git commit 94a2836)
Logging problems to /home/pcuser/.dastard/logs/problems.log
Logging client updates to /home/pcuser/.dastard/logs/updates.log
2024/09/20 17:40:13 Dastard config file: /home/pcuser/.dastard/config.yaml
2024/09/20 17:40:16 New client connection established
2024/09/20 17:40:18 Starting data source named ABACOSOURCE
Sample rate for chan [ 0- 3] 122070 /sec determined from 205 packets: Δt=0.025068 sec, Δserial=204, and 15.000 samp/packet
Sample rate for chan [4096-4102] 122070 /sec determined from 100 packets: Δt=0.025141 sec, Δserial=99, and 31.000 samp/packet
panic: Consumed 200 of available 220 packets, but there are still 7 frames to fill and 15 frames in packet
goroutine 21 [running]:
github.com/usnistgov/dastard.(*AbacoGroup).demuxData(0xc00029ecf0, {0xc0002a1560, 0xc000073d00?, 0x0?}, 0x0?)
/home/pcuser/qsg_git_clones/dastard/abaco.go:363 +0x525
github.com/usnistgov/dastard.(*AbacoSource).readerMainLoop(0xc0000f1188)
/home/pcuser/qsg_git_clones/dastard/abaco.go:1056 +0xbfa
created by github.com/usnistgov/dastard.(*AbacoSource).StartRun in goroutine 18
/home/pcuser/qsg_git_clones/dastard/abaco.go:951 +0xd6
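For what it's worth, the numbers in the panic are consistent with the frame target for that read having come from the other group: 200 x 15 + 7 = 3007 frames, and 3007 = 97 x 31, i.e. a whole number of 31-sample packets but not of 15-sample packets. Below is a minimal Go sketch (not dastard's actual demuxData logic, just an illustration of the failure mode) of a fill loop that consumes whole packets and cannot finish when the remaining frame count is smaller than the next packet:

package main

import "fmt"

// fillFrames is a hypothetical sketch of a demux-style fill loop: it consumes
// whole packets until nframes frames are filled, and fails when the remainder
// is smaller than the next packet. It only reproduces the arithmetic in the
// panic message above; it is not dastard's code.
func fillFrames(nframes int, framesPerPacket []int) error {
	consumed := 0
	for nframes > 0 {
		if consumed >= len(framesPerPacket) {
			return fmt.Errorf("ran out of packets with %d frames still to fill", nframes)
		}
		fp := framesPerPacket[consumed]
		if fp > nframes {
			// The situation in the panic: 7 frames left to fill, 15 frames in the packet.
			return fmt.Errorf("Consumed %d of available %d packets, but there are still %d frames to fill and %d frames in packet",
				consumed, len(framesPerPacket), nframes, fp)
		}
		nframes -= fp
		consumed++
	}
	return nil
}

func main() {
	// 3007 = 97 x 31 frames fills the 31-sample group exactly, but leaves a
	// 7-frame remainder for the 15-sample group (3007 mod 15 == 7).
	packets := make([]int, 220)
	for i := range packets {
		packets[i] = 15
	}
	fmt.Println(fillFrames(3007, packets))
}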