mraineri opened this issue 1 year ago
While testing a device update using rf_update.py on a small IoT device, I noticed that for large files (over 50MB), the multipart request times out after only 70% of the file is uploaded. To resolve this, I had to manually adjust the timeout in the script. Could a timeout argument be added, or perhaps the timeout logic adjusted based on the total file size?
Sure, that's certainly something we can add as an option.
Out of curiosity so I can better understand the scale, about how fast is the transfer to this device? We do try to make a "best guess" calculation specifically for the update script based on the file size (I think a 50MB file should have a timeout of 100 seconds).
I ran some tests with dummy files, and here are the results:
Recent additions started specifying timeouts, applied either to a specific request or globally at the script entry point.
Original timeouts were a bit stricter (5 seconds for all scripts, and approximately 1 second for every 3 MB for a push update).
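A size-based timeout along these lines could look like the sketch below. This is illustrative only: the helper name and the constants (2 seconds per MB, matching the "50 MB file should have a timeout of 100 seconds" guess above, with a 5-second floor) are assumptions, not the script's actual code.

```python
# Hypothetical size-based timeout heuristic; constants are assumptions
# based on the figures discussed in this thread, not rf_update.py itself.
import math
import os

BASE_TIMEOUT = 5      # seconds; floor applied to every request
SECONDS_PER_MB = 2    # scales so that a 50 MB image yields ~100 seconds

def estimate_timeout(image_path):
    """Return a request timeout (seconds) scaled to the image size."""
    size_mb = math.ceil(os.path.getsize(image_path) / (1024 * 1024))
    return max(BASE_TIMEOUT, size_mb * SECONDS_PER_MB)
```

With these constants, a 50 MB image gets 100 seconds while tiny payloads never drop below the 5-second floor.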
15 seconds could be a bit aggressive for most usage, and we could likely bring it back down to 5 seconds. However, reading log entries could easily exceed that in some cases, so maybe we could scope the 30-second timeout to the log entry retrieval itself.
I don't have a good sense of the "right" answer for the multipart update timeout; file sizes can be large, and should we penalize fast networks to accommodate slower ones? Is there a better solution? In Ansible, the user has to specify the timeout, but I'd prefer to avoid adding more options.
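One possible middle ground: keep the size-based guess as the default and let an optional flag override it, so only users on constrained devices need to set anything. A minimal sketch, where the `--timeout` flag name and both helpers are hypothetical, not the script's actual interface:

```python
# Hypothetical CLI: automatic size-based timeout with an optional override.
import argparse
import math
import os

def guess_timeout(image_path, seconds_per_mb=2, floor=5):
    """Size-based default: ~2 s per MB, never below the floor (assumed constants)."""
    size_mb = math.ceil(os.path.getsize(image_path) / (1024 * 1024))
    return max(floor, size_mb * seconds_per_mb)

def choose_timeout(argv=None):
    """Parse options and pick the effective timeout for the push update."""
    parser = argparse.ArgumentParser(description="Push a firmware update image")
    parser.add_argument("image", help="path to the update image")
    parser.add_argument("--timeout", type=int, default=None,
                        help="override the size-based request timeout (seconds)")
    args = parser.parse_args(argv)
    if args.timeout is not None:
        return args.timeout
    # Fall back to the automatic guess; the result would then be handed
    # to the HTTP layer, e.g. requests.post(url, files=..., timeout=...)
    return guess_timeout(args.image)
```

This keeps the common case zero-configuration while giving slow IoT targets an escape hatch, at the cost of one more option.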