pylessard / python-udsoncan

Python implementation of UDS (ISO-14229) standard.
MIT License

Slow response on Windows for transfer data #237

Closed: AkhilTThomas closed this issue 5 months ago

AkhilTThomas commented 5 months ago

Hello, I'm using the latest version of the udsoncan lib with the NotifierBasedCanStack, and I'm seeing slow response times for transfer data. After a positive response it takes ~150 ms to send the next chunk of transfer data. Environment:

I tried adding a custom wait_func, but the fastest I could get is still ~150 ms between a positive response and the next frame.

Enabling debugging shows that there is a ~118 ms difference between the timestamp from CANoe and the debug print. Is this expected, or is there a way to reduce this gap? (Overall there is a 150 ms delay.)

[screenshots: CANoe trace and udsoncan debug output timestamps]

config:

    "stmin": 0,
    # Request the sender to send 8 consecutives frames before sending a new flow control message
    "blocksize": 32,
    # Number of wait frame allowed before triggering an error
    "wftmax": 0,
    # 8 byte payload (CAN 2.0)
    "tx_data_length": 8,
    # Minimum length of CAN messages. When different from None, messages are padded to meet this length
    "tx_data_min_length": None,
    # Will pad all transmitted CAN messages with byte 0x00
    "tx_padding": 0,
    # Triggers a timeout if a flow control is awaited for more than 1000ms
    "rx_flowcontrol_timeout": 1000,
    # Triggers a timeout if a consecutive frame is awaited for more than 1000ms
    "rx_consecutive_frame_timeout": 1000,
    # When sending, respect the stmin requirement of the receiver
    "override_receiver_stmin": None,
    # Limit the size of receive frame.
    "max_frame_size": 2048,
    "wait_func": busy_wait_seconds,
pylessard commented 5 months ago

The delay between the reception of a response and the next request is at your application level. Also, your CAN log shows ISOTP frames, not CAN messages. They have a length of 1024, so the transmission time must be included in the timestamp.

pylessard commented 5 months ago

I assume your bus runs at 1 Mbps. A CAN frame contains 112 bits for 64 bits of payload. 1000000*64/112 = 571,428 bps of effective payload throughput.

1024*8/571428 ≈ 14.3 ms (at best)
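
For reference, evaluating that formula directly (plain Python, no library calls; the 112-bit frame overhead is the approximation used above):

    payload_bits_per_frame = 64                                   # 8 data bytes per classic CAN frame
    frame_bits = 112                                              # approximate bits on the wire per frame
    effective_bps = 1_000_000 * payload_bits_per_frame / frame_bits   # ~571428 bit/s of payload at 1 Mbps
    chunk_seconds = 1024 * 8 / effective_bps                          # ~0.0143 s per 1024-byte ISOTP frame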

pylessard commented 5 months ago

If you want to go faster than that, you'll need a CAN FD bus

AkhilTThomas commented 5 months ago

I'm running at 500 kbps classic CAN. The application logic is pretty straightforward and just sends bytes continuously:

    def transfer_chunk(self):
        if self.chunk_index < self.total_chunks:
            # Block sequence counter is 1 byte; it wraps to 0x00 after 0xFF
            self.uds.client.transfer_data(
                (self.chunk_index + 1) % (0xFF + 1),
                bytes(self.hex_info["data"][self.chunk_index]),
            )
            self.chunk_index += 1
        else:
            self.transfer_done = True

The entire image is ~2.6Mb and it takes 8 min 😢

pylessard commented 5 months ago

Hmmm, some numbers do not match. Can you share the CAN log with each individual message?

And is that Mb or MB?

AkhilTThomas commented 5 months ago

It's MB, ~2,500,000 bytes. I'll get the logs.

AkhilTThomas commented 5 months ago

@pylessard I have attached the CAN log. It looks like it's taking ~150 ms just to send one chunk. Is this timing as good as it gets? can_log.txt

pylessard commented 5 months ago

No, something is wrong. You should be at least 4x faster. As a rule of thumb, at 500 kbps you can send just over 4 messages per millisecond. You send 1 message per millisecond. The reason is that the device is requesting that in the first flow control message:

[screenshot: first flow control frame from the CAN log]

The 3rd byte is the requested delay between each message. 01 = 1ms.
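
For reference, a minimal annotated sketch of the flow control bytes being described; only the 0x01 STmin comes from the log above, the other values are illustrative:

    # ISO-TP flow control frame layout (normal addressing, classic CAN):
    #   byte 0: 0x3S -> flow status (0x30 = ContinueToSend, 0x31 = Wait, 0x32 = Overflow)
    #   byte 1: block size (consecutive frames allowed before the next flow control, 0 = no limit)
    #   byte 2: STmin (minimum gap between consecutive frames, 0x01 = 1 ms)
    flow_control = bytes([0x30, 0x00, 0x01])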

If the device had a stmin of 0, it would go much faster. You can ignore that input from the server by setting override_receiver_stmin to 0. See this: https://can-isotp.readthedocs.io/en/latest/isotp/implementation.html#override_receiver_stmin

You may overflow the device if you do so though.

Also, you may want to consider some compression scheme if you have the control over the device.
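
A minimal sketch of that change against the isotp params from the original post (only the one key shown here changes; 0 means "wait 0 s between consecutive frames, ignoring the receiver's advertised STmin"):

    isotp_params = {
        # ... other params as above ...
        # Override the receiver's requested STmin and send consecutive frames with no delay
        "override_receiver_stmin": 0,
    }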

pylessard commented 5 months ago

I did some computations with stmin=0, blocksize=0, chunks of 1024, padding activated, and a payload of 2.5 MB. The theoretical minimum flash time should be:

At 500 kbps, 8 bytes of data: 76.3 sec
At 1000 kbps, 8 bytes of data: 38.1 sec
At 500 kbps, CAN FD, 64 bytes of data: 50 sec
At 1000 kbps, CAN FD, 64 bytes of data: 25 sec

That is assuming a perfect 100% usage of the CAN bus, which won't be the case. But still, it gives you a ballpark.

I think 90 sec is reasonably achievable with your current setup. It makes sense with the 4x factor, since your actual flashing time must be: 2500000/1024*0.150 = 366 sec. Divide by 4 and you get 91.5 sec.

Like mentioned above, to get even better performance, you'd need to do compression on the binary.
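
A rough sketch of this kind of ballpark estimate. The overhead figures below (112 bits on the wire per classic CAN frame, 7 ISO-TP payload bytes per consecutive frame, 100% bus usage) are assumptions, so the results land near, but not exactly on, the numbers quoted above:

    import math

    def classic_can_flash_time(payload_bytes, bitrate_bps, bits_per_frame=112, bytes_per_frame=7):
        # Number of CAN frames needed, then total transmission time at 100% bus usage
        frames = math.ceil(payload_bytes / bytes_per_frame)
        return frames * bits_per_frame / bitrate_bps

    print(classic_can_flash_time(2_500_000, 500_000))    # ~80 s at 500 kbps
    print(classic_can_flash_time(2_500_000, 1_000_000))  # ~40 s at 1 Mbps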

AkhilTThomas commented 5 months ago

Thank you for the calculation! The timing did improve by reducing the stmin, but the device can't handle it that fast. At least I was able to understand more about this. I will try to investigate the server-side limitations. I will close this ticket.

pylessard commented 5 months ago

Understood. One last thing: stmin can have a 100 microsecond resolution. Values between 0xF1 and 0xF9 correspond to 100, 200, ..., 900 us.
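
For reference, a small sketch of the STmin byte encoding from ISO 15765-2 (the helper name is made up):

    def stmin_byte_to_seconds(value):
        # 0x00-0x7F encode 0-127 milliseconds
        if 0x00 <= value <= 0x7F:
            return value / 1000.0
        # 0xF1-0xF9 encode 100-900 microseconds
        if 0xF1 <= value <= 0xF9:
            return (value - 0xF0) * 100e-6
        raise ValueError("Reserved STmin value: 0x%02X" % value)

    print(stmin_byte_to_seconds(0x01))  # 0.001 s = 1 ms
    print(stmin_byte_to_seconds(0xF1))  # 0.0001 s = 100 us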

Cheers