AlexT-38 / OpenGEET_ECU_firmware

Open GEET Engine Control Unit firmware
GNU General Public License v3.0

SDCard access times are too long #7

Closed AlexT-38 closed 1 year ago

AlexT-38 commented 1 year ago

This issue was originally "Log output is messy" (see #14, "JSON Log Output"), but is now "SDCard access times are too long".

Problem

At present, the longest SD card write time seen was 53840 µs (~53.8 ms). This is longer than the 49 ms available...

AlexT-38 commented 1 year ago

At present, the longest SD card write time seen was 53840 µs (~53.8 ms). This is longer than the 49 ms available, and some amount of optimisation may be needed. For instance, the text-formatted log output is the same regardless of destination, and the available RAM is enough to buffer an entire record. It may be better to generate the string in a buffer, and then output sections of it at a size that reduces peak SD card access time.

To test this, we will need to fill a buffer with data and then measure access times with steadily increasing amounts of data. We need to do this both with and without closing the file between each write.

From what I understand, the SD card has an internal buffer, and writes only occur when that buffer is full. Writing data up to that point takes only the time needed to transfer the data via SPI. Buffer sizes may be larger for newer, larger and faster cards, resulting in longer write delays. If we knew the buffer size, we could optimise by keeping two buffers of that size, and writing each buffer at the designated time once it is full.

Ideally, we need a version of the SD card library that does not block while waiting for the card's buffer to complete the write; this would probably also require a custom SPI library. Screen updates, touch reads and EGT sampling would not be usable until the write completes. Touch and redraw do not need good timekeeping, and the EGT sensors have a low sample rate and can also tolerate late timing to some extent.
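The two-buffer idea could be sketched roughly as below. This is an illustration only, not existing project code: `DoubleBuffer` and its method names are hypothetical, and the 512-byte block size is an assumption (confirmed by the measurements further down).

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Assumed SD block size; measurements below suggest 512 bytes.
constexpr size_t BLOCK_SIZE = 512;

// Hypothetical double buffer: log data accumulates in the active buffer,
// and when it reaches BLOCK_SIZE the buffers swap and the full one is
// handed back to the caller to be written to the card.
struct DoubleBuffer {
    uint8_t buf[2][BLOCK_SIZE];
    size_t  fill = 0;    // bytes in the active buffer
    int     active = 0;  // index of the buffer being filled

    // Append up to len bytes; *consumed reports how many were taken.
    // Returns a pointer to a full block ready to write, or nullptr if
    // the active buffer is not yet full.
    const uint8_t* append(const uint8_t* data, size_t len, size_t* consumed) {
        size_t space = BLOCK_SIZE - fill;
        size_t n = len < space ? len : space;
        memcpy(buf[active] + fill, data, n);
        fill += n;
        *consumed = n;
        if (fill == BLOCK_SIZE) {
            const uint8_t* full = buf[active];
            active ^= 1;  // swap: keep filling the other buffer
            fill = 0;
            return full;  // caller writes this whole block to the card
        }
        return nullptr;
    }
};
```

The caller would loop on `append()` until all record bytes are consumed, writing each returned block in its allotted time slot.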

Rewriting the SPI and SD card libraries is beyond the scope of this project (for now).

To test the current SD card behaviour, a test script is needed.

AlexT-38 commented 1 year ago

https://github.com/arduino-libraries/SD/blob/master/examples/NonBlockingWrite/NonBlockingWrite.ino shows how to perform non-blocking SD card writes.

AlexT-38 commented 1 year ago

Testing open/print/close with steadily increasing write sizes shows the following: the buffer size is 512 bytes, and the fastest write times can be achieved by writing multiples of this size.
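That finding suggests keeping writes block-aligned. A minimal helper (hypothetical name `writableNow`, not project code) would hold back any partial block for the next slot:

```cpp
#include <cstddef>

// SD block size as measured above.
constexpr size_t SD_BLOCK = 512;

// Given the number of bytes pending in the log buffer, return how many
// should be written now so the card only ever sees whole 512-byte blocks.
// The remainder stays buffered until enough data accumulates.
size_t writableNow(size_t pending) {
    return (pending / SD_BLOCK) * SD_BLOCK;
}
```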

The code performs 10 identical writes. After each group of 10, a larger section of the source string is copied into the write buffer. Each write cycle is separated by a 500ms delay. In the interests of expediency, after length=416, the length was increased by 16 instead of 1, and continued up to 1736.

After the first couple of blocks of 10 writes (and sizes >= 12 B), the time taken for the first write of each block is consistently higher than the subsequent writes. Occasionally one other write in that block will also be higher, and occasionally the first write is significantly higher. I suspect the slightly higher times are the result of updating the file system.

The significantly higher write times (and write sizes) were 133496 (49 B), 133244 (116 B), 142884 (183 B), 135484 (250 B), 137644 (317 B), 139776 (632 B), 142068 (1168 B), and 146844 (1704 B) µs.

Typical and extended write times were (approximate, in µs):

| bytes | typical | extended |
| --- | --- | --- |
| <512 | 14350 + bytes | 22550 + bytes |
| =512 | 12300 | 15564 |
| 512-1024 | 20000 + bytes | 27000 + bytes |
| =1024 | 15740 | 19040 |
| 1024-1536 | 22000 + bytes | 29000 + bytes |
| =1536 | 19190 | 22436 |
| 1536-1736 | 26700 + bytes | 33800 + bytes |

Note that, aside from the 100+ ms times, these write times are all much shorter than the 40-55 ms times currently being seen with many separate print calls per record.

Repeating the test but without closing and reopening the file, writes take ~36 µs plus the byte count to complete; every 512 bytes there is a write taking ~3100 µs plus the byte count, and every 8 × 512 bytes there is a ~14000 µs write.

The very first write after opening a file is ~34000 µs.

Approximately every 12 × 8 × 512 bytes, there is a much larger write time, ranging from 30000 to 300000 µs.

This is with a steadily increasing write load. What happens if the write load is always 512 bytes?

Single writes are typically around 3000-4000 µs. Approximately every 90 writes, there is one write longer than 50 ms, so far between 67 and 113 ms. Later in the test, the interval between long writes increased, and the long writes grew to around 300-350 ms.

Closing and reopening the file each time, with the same 512-byte data load each time, the extended write times are consistently around 70 ms and occur almost exactly every 96 writes.

While this is longer than the 49 ms maximum possible slot time, it is at least consistent, and so far has not caused catastrophically long write times of 300+ ms.

What happens if the write buffer is not exactly a multiple of 512? What if it also varies slightly, by say up to ±128 bytes?

First, with writes of 511 bytes, behaviour is the same: every 96 writes there's a ~70 ms write.

Now with slightly more: 520 bytes (and with a 100 ms delay between writes - I don't have all day).

Now the longer writes are ~80 ms and occur every 48 writes.

Consistent, with no surprise 300 ms writes.

Now with random sizes from 256-512 bytes. This time we also log the total number of bytes transmitted, since it cannot be inferred from the number of writes.

Behaviour is approximately the same: long writes occur every ~35000 to ~38000 bytes (70-74 × 512). Regular writes are around 15 ms; long writes around 73 ms.

And if the writes are large and varying? Say 1500-2000 bytes.

Behaviour is approximately the same: long writes occur every ~42000 to ~44000 bytes (~84 × 512). Regular writes are around 30 ms; long writes around 88 ms.

As we are currently getting times of about 47 ms for normal logging writes and 60 ms for longer writes, yet log file records are only ~400 bytes long, it seems likely that much of the time spent "writing" to the SD card is actually text formatting and print call overhead.

By formatting the text into a string buffer first, typical writes should be around 15 ms for the same data load, or around 30 ms for much larger loads.

To tackle the longer writes, either: 1) make all time-critical tasks into interrupts (which would make receiving from serial at 1 Mbaud difficult, due to the length of the PID process), or 2) implement non-blocking access as per the linked example.

AlexT-38 commented 1 year ago

The above linked non-blocking write example doesn't seem to work: File::availableForWrite() always returns zero, so nothing is written.

AlexT-38 commented 1 year ago

Setting the block size to 512 seems to mostly work. After around 47 records, with a block being written on roughly 5 out of 6 records and taking ~4 ms each, there's a jump up to 15 ms and then 28 ms. This is followed by ~30 records where writes take around 9 ms, then a drop back to 4 ms.

Approximately 1138 block writes later, we get 14, 61, 6, 63, 10, 10, 59, 27, 9 and 9 ms before dropping down to ~4 ms again.

Nothing has really been solved here. Occasionally the file system must be updated as the file grows, and this does not happen expediently. I doubt that File::availableForWrite() returning a valid value would have any impact on these occasional holdups.

Examining https://github.com/arduino-libraries/SD/blob/master/src/utility/SdFile.cpp confirms this: it simply returns the number of bytes left until the end of the current block.

However, perusing the source code may reveal an alternative strategy - perhaps volume updates can occur in the 4th unused 40ms slot.

AlexT-38 commented 1 year ago

Yes, that seems to be doing it. With the current record size of ~400 bytes, writes take ~4 ms. Flushing usually takes ~3 ms, but occasionally ~11 ms.
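The scheme above could be sketched as a simple slot scheduler. To be clear, this is a hedged illustration: the four-slot cycle (data writes in slots 0-2, flush in the otherwise unused 4th slot) is inferred from the comments above, and `taskForSlot` is not project code.

```cpp
// Assumed: four time slots per cycle; slots 0-2 carry record block writes,
// slot 3 is otherwise unused and absorbs the file-system flush, so flush
// latency can never collide with a data write.
enum class SdTask { WriteBlock, Flush };

// Hypothetical helper: pick the SD task for the current slot index.
SdTask taskForSlot(int slot) {
    return (slot % 4 == 3) ? SdTask::Flush : SdTask::WriteBlock;
}
```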

Now all that remains is to change the output format.

Also, if binary output is still something that is needed, it too needs to be buffered and transmitted in 512-byte chunks. The only problem with this is that the anticipated record text size is up to ~2000 bytes. The buffer needs to be double this, plus a temporary buffer for the record prior to appending to the main buffer. That's ~6000 bytes, which is all the remaining SRAM.

That being said, the main buffer would only be needed if serial output is enabled, and serial will always print the entire buffer content, so there wouldn't be more than one record in it. So, probably not a problem. The binary format is much smaller and shouldn't be more than 1000 bytes, so we can have 4 kB for the text buffer, or 2 kB for text and 2 kB for binary.
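A compile-time sanity check could pin this budget down. The sizes here are illustrative, taken from the figures above (~6000 bytes of SRAM available for logging buffers); the constant names are assumptions, not project code.

```cpp
#include <cstddef>

// Assumed free SRAM available for logging buffers, per the estimate above.
constexpr size_t SRAM_FOR_BUFFERS = 6000;

constexpr size_t TEXT_BUF = 2048;  // or 4096 if binary output is dropped
constexpr size_t BIN_BUF  = 2048;  // binary record expected <= ~1000 B
constexpr size_t TEMP_BUF = 1900;  // one record, formatted before appending

// Fails the build if the chosen split no longer fits.
static_assert(TEXT_BUF + BIN_BUF + TEMP_BUF <= SRAM_FOR_BUFFERS,
              "log buffers exceed free SRAM");
```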

One last consideration: when logging stops, the final incomplete block of data needs to be written before closing the file, padded with nulls to make up the required size.
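That padding step can be sketched as a small helper (hypothetical name `padFinalBlock`; the caller's buffer is assumed to be allocated in whole 512-byte blocks so there is room for the padding):

```cpp
#include <cstddef>
#include <cstring>

constexpr size_t SD_BLOCK = 512;

// Pad the final partial block with NULs so the last write is still a whole
// 512-byte block. buf must have capacity for the rounded-up size.
// Returns the padded length to pass to the final write.
size_t padFinalBlock(char* buf, size_t len) {
    size_t padded = ((len + SD_BLOCK - 1) / SD_BLOCK) * SD_BLOCK;
    memset(buf + len, 0, padded - len);
    return padded;
}
```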

AlexT-38 commented 1 year ago

> if binary output is still something that is needed

I'm inclined to drop support for that entirely, at least as originally envisioned, with the Records structure being packeted and written to the output streams. If the need does arise, then use something akin to BSON.