Hi! Yes, I see. My guess is that your server requests a stmin and/or a blocksize greater than 0.
When that happens, the stack goes to sleep whenever it needs to wait. Unfortunately, on Windows the thread resolution is ~16 ms and this implementation runs in user space. So if your server asks for a 0.5 ms wait between messages, you may be unlucky and get 16 ms between each message. Likewise, if the stack needs to wait for a flow control message every 8 consecutive frames, for example, you may hit the same unwanted delay.
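To make the effect concrete, here is a small stand-alone sketch (not part of the library, just standard-library calls) showing how a sub-millisecond sleep behaves on a default Windows configuration:

import time

requested_stmin = 0.0005  # server asks for 0.5 ms between consecutive frames
start = time.perf_counter()
time.sleep(requested_stmin)
elapsed = time.perf_counter() - start
print(f"requested {requested_stmin * 1000:.1f} ms, actually slept {elapsed * 1000:.1f} ms")
# On a default Windows setup this typically reports ~15-16 ms.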
Your options are:
- The squash_stmin_requirement parameter (see the sketch below). Setting this to True makes the stack ignore the stmin requirement from the server and send everything it can as fast as possible. This avoids some user-space delay and lets the driver handle the transmission speed.
- If your vendor has an ISO 15765 DLL that can do better timing by talking directly to the driver, I am open to supporting it in this package.
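For reference, enabling that parameter looks roughly like this (a minimal sketch; bus and addr stand for the same kind of objects shown further down in this thread):

import isotp

# Ignore the server's stmin and transmit consecutive frames as fast as possible,
# letting the driver pace the bus instead of the user-space stack.
stack = isotp.CanStack(bus, address=addr,
                       params={'squash_stmin_requirement': True})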
In any case, I suggest you make a CAN log of your transmission and see what really goes on.
Regards
Thank you for your feedback!
A few findings that might help narrow it down:
Hi, sorry for the delay. Here's what I think. I initially did not review your time calculation. You probably did: 700000 * 8 / 500000 = 11.2 sec. A CAN frame has a lot of overhead: roughly 112 bits for 64 bits of data, and that counts against your bitrate. On a 500 kbps channel you get about 4 messages per millisecond at best, so your 0.24 ms delay shows that you are at full speed; this is good.
Then, IsoTP overhead chops off at least 1 byte per CAN message, leaving 7 effective bytes per CAN message and yielding a rough maximum of 28 bytes per millisecond, assuming all of your data fits inside a single IsoTP frame. This means a best-case scenario of ~700000/28000 = 25 sec. (I am ignoring the flow control frames in this calculation.)
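Spelled out, the ideal-case arithmetic above looks like this (all numbers taken from this thread):

payload_bytes = 700_000                     # total download size
bitrate = 500_000                           # classic CAN bitrate in bits/s

raw_time_s = payload_bytes * 8 / bitrate    # 11.2 s, ignoring all framing overhead
frames_per_ms = 4                           # ~500000 bps / ~112 bits per frame, rounded
useful_bytes_per_ms = 7 * frames_per_ms     # 28 payload bytes per ms after IsoTP overhead
best_case_s = payload_bytes / (useful_bytes_per_ms * 1000)   # 25.0 s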
Now, looking at your CAN log, I see that you send your data in chunks of 566 bytes. We are far from the ideal scenario of all the data in a single IsoTP frame. And looking at the timing, there's a 16 ms delay between your data chunks. This means that your software goes to sleep and then has to wait for Windows to wake it back up, with its thread resolution of 16 ms. If we count 16 ms per data block, we get 700000/566 * 0.016 = 19.8 sec.
25+19.8 sec = 44.8 sec, roughly what you get.
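Continuing the sketch above with the same numbers, the sleep-time accounting is:

chunk_size = 566                          # bytes per data chunk observed in the CAN log
windows_tick_s = 0.016                    # Windows thread/timer resolution in seconds

n_chunks = 700_000 / chunk_size           # ~1237 chunks
sleep_time_s = n_chunks * windows_tick_s  # ~19.8 s spent waiting to be woken up
total_s = 25 + sleep_time_s               # ~44.8 s, close to the observed ~40 s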
In other words, you are at full speed but have a total of 19.8 sec of sleep time. The best you can do is:
Hope that helps.
This seems to be unavoidable given Windows' limitations.
As suggested, the only way to increase performance was to send 64-byte frames with bit rate switching, at whatever your maximum CAN FD speed may be.
Example settings for max speed for me:
import can
import isotp

# CAN FD bus: 500 kbps arbitration rate, 2 Mbps data rate, bit rate switch enabled
self.bus = can.interface.Bus(bustype='vector', channel=0, bitrate=500000, app_name="CANoe",
                             fd=True, data_bitrate=2000000, bitrate_switch=True)
addr = isotp.Address(isotp.AddressingMode.Normal_11bits, rxid=0x104, txid=0x204, is_fd=True)
self.stack = isotp.CanStack(self.bus, address=addr, error_handler=self.my_error_handler,
                            params={'tx_padding': 0,
                                    'can_fd': True,
                                    'bitrate_switch': True,
                                    'tx_data_length': 64,
                                    'tx_data_min_length': 8})
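For completeness, a minimal send loop for this stack, following the library's documented send()/process() pattern (my_payload is a placeholder for the data to transfer):

import time

self.stack.send(my_payload)              # queue one IsoTP multi-frame message
while self.stack.transmitting():         # drive the state machine until the transfer completes
    self.stack.process()
    time.sleep(self.stack.sleep_time())  # sleep_time() returns the stack's recommended idle delay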
Hello @pylessard,
I implemented the software download using this lib (Vector interface, Windows 10), but it looks like the multi-message mechanism isn't terribly performant. Any insights here? I'm on a 500 kbps bus and the download of 700 kB of data takes ~40 s. That's at least 3 times too slow.
Why is it that slow? Was anyone able to achieve faster transfer speeds? I'm pretty much just sending one multi-frame message after another, with minimal downtime in between to grab new data from the bytearray.
I'm not sure if I'm doing something wrong or if Windows is hitting us this hard due to a ton of small timeouts in the transport layer...
Thanks in advance!
MK