Closed: jeff-o closed this issue 9 years ago
Hi Jeff, thanks for the report. Since we don't run into that issue ourselves, could you provide a patch to fix it? It should be rather straightforward to block until the device is open.
I'll look into it. Unfortunately, the unit I was testing with has already shipped out (with respawn=true), so I can't test any changes at the moment. At the very least, it looks like the deadline value could be increased or parameterized for corner cases like mine. A better solution might be to retry every second until a connection is made.
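For illustration, here is a rough sketch of the retry-until-connected idea. It is not the driver's actual connection code; the plain POSIX socket, the IP address, and the port number are placeholders:

```cpp
// Sketch only: retry opening the sensor's TCP connection once per second
// instead of exiting on the first failure. Host/port are placeholders.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

// Try to open a TCP connection; return the socket fd, or -1 on failure.
static int try_connect(const char *host, int port)
{
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0)
    return -1;

  sockaddr_in addr;
  std::memset(&addr, 0, sizeof(addr));
  addr.sin_family = AF_INET;
  addr.sin_port = htons(port);
  inet_pton(AF_INET, host, &addr.sin_addr);

  if (connect(fd, (sockaddr *)&addr, sizeof(addr)) != 0)
  {
    close(fd);
    return -1;
  }
  return fd;  // caller owns the open socket
}

int main()
{
  const char *host = "192.168.0.1";  // placeholder sensor IP
  const int port = 2112;             // placeholder port

  int fd = -1;
  while ((fd = try_connect(host, port)) < 0)
  {
    std::fprintf(stderr, "Device not ready, retrying in 1 s...\n");
    sleep(1);
  }

  std::printf("Connected to sensor, driver can start streaming.\n");
  close(fd);
  return 0;
}
```

In the real driver, this loop would wrap the existing device-initialization call and should probably also check for shutdown requests (and maybe a retry limit) so the node can still exit cleanly.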
An obvious way to reproduce this would be to launch the node and wait five seconds before plugging in or powering on the sensor. This simulates the driver "beating" the sensor's own boot-up when both are powered on together.
I think he doesn't have access to a device any more.
I have encountered an issue where the driver comes up before the device is ready, causing the driver to crash. In this scenario, the computer and sensor (a TIM 551) are powered on at the same time from the same power source. The log reports:
[FATAL] [1423596153.512016982]: Failed to init device: 1
[ERROR] [1423596153.512052302]: sendSOPASCommand: socket not open
SOPAS - Error stopping streaming scan data!
sick_tim driver exiting.
I can restart the node manually without any problem, and the issue does not occur if the device is allowed to power up first, followed by the computer. Setting respawn=true in the launch file lets the node come up on a cold boot, but this is not an ideal workaround.
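For reference, the workaround is just the standard respawn attribute on the node entry in the launch file; the node name, package, and type below are placeholders rather than my exact configuration:

```xml
<launch>
  <!-- Workaround: let roslaunch restart the driver if it exits because
       the sensor was not ready yet. Names below are placeholders. -->
  <node name="sick_tim" pkg="sick_tim" type="sick_tim551_2050001"
        respawn="true" output="screen" />
</launch>
```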