The delay between the reception of a response and the next request is at your application level. Also, your CAN log shows ISO-TP frames, not CAN messages. They have a length of 1024 bytes, so the transmission time must be included in the timestamp.
I assume your bus runs at 1 Mbps. A classic CAN frame contains about 112 bits for 64 bits of payload. 1000000 * 64/112 = 571428 bps (~571 kbps)
1024 * 8 / 571428 = 14.3 ms (at best)
If you want to go faster than that, you'll need a CAN FD bus.
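That arithmetic as a quick sketch (the 112-bit frame size is a common approximation for a classic CAN frame carrying 8 data bytes, ignoring bit stuffing):

```python
bus_bitrate = 1_000_000      # bits/s, assumed 1 Mbps
bits_per_frame = 112         # approx. wire bits for a classic CAN frame
payload_bits_per_frame = 64  # 8 data bytes

# Effective payload throughput of the bus
effective_bps = bus_bitrate * payload_bits_per_frame / bits_per_frame
print(f"{effective_bps:.0f} bps")  # ~571428 bps

# Best-case time to move one 1024-byte ISO-TP message
print(f"{1024 * 8 / effective_bps * 1000:.1f} ms")  # ~14.3 ms
```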
I'm running at 500 kbps classic CAN. The application logic is pretty straightforward and just sends bytes continuously:
```python
def transfer_chunk(self):
    if self.chunk_index < self.total_chunks:
        # print(bytes(self.hex_info["data"][self.chunk_index]).hex())
        # Block sequence counter starts at 1 and wraps from 0xFF back to 0x00
        self.uds.client.transfer_data(
            (self.chunk_index + 1) % (0xFF + 1),
            bytes(self.hex_info["data"][self.chunk_index]),
        )
        self.chunk_index += 1
    else:
        self.transfer_done = True
```
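(As an aside, a quick way to see whether the gap is in the application or on the bus is to timestamp each call; a minimal sketch, assuming the transfer_chunk above:)

```python
import time

def transfer_all_timed(self):
    # Print the application-level gap between consecutive chunks; compare
    # against the CAN log timestamps to locate where the ~150 ms is spent.
    last = time.perf_counter()
    while not self.transfer_done:
        self.transfer_chunk()
        now = time.perf_counter()
        print(f"chunk {self.chunk_index}: {(now - last) * 1000:.1f} ms")
        last = now
```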
The entire image is ~2.6Mb and it takes 8 min 😢
Hmmm, some numbers don't match. Can you share the CAN log of each individual message?
And is that Mb or MB?
It's MB, ~2500000 bytes. I'll get the logs.
@pylessard I have attached the CAN log. It looks like it's taking ~150 ms just to send one chunk. Is this timing as good as it gets? can_log.txt
No, something is wrong. You should get at least 4x faster. As a rule of thumb, at 500 kbps, you can send just over 4 messages per millisecond. You send 1 message per millisecond. The reason is that the device is requesting that delay in the first flow control message.
The 3rd byte is the requested delay between each message (stmin). 01 = 1 ms.
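(For reference, a sketch of how that flow control frame decodes per ISO 15765-2:)

```python
def decode_flow_control(data: bytes):
    # ISO-TP flow control frame: [0x3X, blocksize, STmin]
    flow_status = data[0] & 0x0F  # 0 = continue to send, 1 = wait, 2 = overflow
    blocksize = data[1]           # 0 = no limit
    stmin = data[2]
    if stmin <= 0x7F:
        stmin_seconds = stmin / 1000            # 0x00-0x7F: whole milliseconds
    elif 0xF1 <= stmin <= 0xF9:
        stmin_seconds = (stmin - 0xF0) / 10000  # 100 us steps
    else:
        stmin_seconds = 0.127                   # reserved values: treat as max
    return flow_status, blocksize, stmin_seconds

print(decode_flow_control(bytes([0x30, 0x00, 0x01])))  # (0, 0, 0.001) -> 1 ms
```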
If the device had an stmin of 0, it would go much faster.
You can ignore that input from the server by setting override_receiver_stmin to 0. See this: https://can-isotp.readthedocs.io/en/latest/isotp/implementation.html#override_receiver_stmin
You may overflow the device if you do so, though.
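(A sketch of where that parameter goes, assuming python-can-isotp's NotifierBasedCanStack and hypothetical CAN IDs:)

```python
import can
import isotp

bus = can.Bus(interface='socketcan', channel='can0')
notifier = can.Notifier(bus, [])
addr = isotp.Address(isotp.AddressingMode.Normal_11bits, txid=0x7E0, rxid=0x7E8)

isotp_params = {
    'blocksize': 0,                # don't pause for flow control blocks
    'override_receiver_stmin': 0,  # ignore the device's requested 1 ms gap
}
stack = isotp.NotifierBasedCanStack(bus=bus, notifier=notifier, address=addr,
                                    params=isotp_params)
```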
Also, you may want to consider some compression scheme if you have control over the device.
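(A sketch of the client side with zlib, a hypothetical choice; the device needs a matching decompressor, and the scheme would normally be signalled through RequestDownload's dataFormatIdentifier:)

```python
import zlib

with open('firmware.bin', 'rb') as f:  # hypothetical image path
    image = f.read()

compressed = zlib.compress(image, level=9)
print(f"{len(image)} -> {len(compressed)} bytes "
      f"({100 * len(compressed) / len(image):.0f}% of original)")
# Transfer `compressed` instead of `image`; fewer bytes on the bus means
# proportionally less flash time at the same effective bitrate.
```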
I did some computations with stmin=0, blocksize=0, chunks of 1024, padding activated, and a payload of 2.5 MB. The theoretical minimum flash time should be:

- At 500 kbps, 8 bytes of data: 76.3 sec
- At 1000 kbps, 8 bytes of data: 38.1 sec
- At 500 kbps, CAN FD, 64 bytes of data: 50 sec
- At 1000 kbps, CAN FD, 64 bytes of data: 25 sec

That is assuming a perfect 100% usage of the CAN bus, which won't be the case. But still, it gives you a ballpark.
I think 90 sec is reasonably achievable with your current setup. It makes sense with the 4x factor, since your actual flashing time must be: 2500000/1024 * 0.150 = 366 sec. Divide by 4 and you get 91.5 sec.
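(The same ballpark in a sketch; this simple model ignores flow control frames, UDS responses and bit stuffing, so it lands a few seconds above the figures quoted above:)

```python
def flash_time(payload_bytes, bitrate_bps, bytes_per_frame=7, bits_per_frame=112):
    # Each ISO-TP consecutive frame carries 7 payload bytes in a ~112-bit
    # classic CAN frame.
    frames = payload_bytes / bytes_per_frame
    return frames * bits_per_frame / bitrate_bps

print(f"{flash_time(2_500_000, 500_000):.1f} s")    # ~80 s at 500 kbps
print(f"{flash_time(2_500_000, 1_000_000):.1f} s")  # ~40 s at 1 Mbps
```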
As mentioned above, to get even better performance, you'd need to compress the binary.
Thank you for the calculation! The timing did improve by reducing the stmin, but the device can't handle it that fast. At least I was able to understand more about this. I will try to investigate the server-side limitations.
I will close this ticket.
Understood. One last thing: stmin can have a 100-microsecond resolution. Values between F1 and F9 correspond to 100, 200, ..., 900 us.
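(If you end up tuning the server's stmin, a sketch of the wire encoding, with a hypothetical helper name:)

```python
def encode_stmin(seconds: float) -> int:
    # 0x00-0x7F encode 0-127 whole milliseconds;
    # 0xF1-0xF9 encode 100-900 microseconds in 100 us steps.
    us = round(seconds * 1_000_000)
    if us % 1000 == 0 and 0 <= us <= 127_000:
        return us // 1000
    if us % 100 == 0 and 100 <= us <= 900:
        return 0xF0 + us // 100
    raise ValueError("STmin value not representable on the wire")

assert encode_stmin(0.0001) == 0xF1  # 100 us
assert encode_stmin(0.001) == 0x01   # 1 ms
```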
Cheers
Hello, I'm using the latest version of the udsoncan lib with the NotifierBasedCanStack and I'm facing slow response times for the transfer data. After a positive response, it takes almost ~150 ms to send the next chunk of transfer data. Environment:
I tried adding a custom wait_func, but the fastest I could get is still this 150 ms between a positive response and the next frame. Enabling debugging shows that there is a ~118 ms difference between the timestamp from CANoe and the debug print. Is this expected, or is there a way to reduce this gap? (Overall there is a 150 ms delay.)
config: