greiman / SdFat-beta

Beta SdFat for test of new features
MIT License

Sync Required for SDIO writes? #24

Open custeve opened 5 years ago

custeve commented 5 years ago

Bill, is a sync() (or flush()) still required with this BETA version when using the SDIO FIFO mode? Our current code base occasionally (every 100 writes) performs a sync in place of a write. There was a mention in a post (sorry, I can't find that post again) suggesting that the write command will automatically sync when the block size is reached. If a sync is required, would it be better to base it on total bytes written rather than a write count, to better match the 32 KB block size?

Thanks!

greiman commented 5 years ago

A sync is only needed if you want to ensure no data is lost in a crash and the file will not be closed.

This has always been true for SdFat. If you properly close a file, no sync is needed.
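For reference, here is a minimal sketch (not from this exchange) of the pattern under discussion: write records as they arrive and sync based on bytes written rather than a write count. The SdFs/FsFile names follow the SdFat-beta v2 API; the file name and the SYNC_INTERVAL_BYTES value are placeholders.

```cpp
#include "SdFat.h"

SdFs sd;
FsFile file;

// Sync roughly every 32 KB written; this is an assumed value, tune to taste.
const uint32_t SYNC_INTERVAL_BYTES = 32768;
uint32_t bytesSinceSync = 0;

void setup() {
  Serial.begin(9600);
  if (!sd.begin(SdioConfig(FIFO_SDIO))) {   // Teensy SDIO in FIFO mode
    Serial.println("sd.begin failed");
    while (true) {}
  }
  file.open("log.bin", O_WRITE | O_CREAT | O_TRUNC);
}

// Call this from the acquisition code for each record.
void logRecord(const uint8_t* buf, size_t n) {
  file.write(buf, n);
  bytesSinceSync += n;
  if (bytesSinceSync >= SYNC_INTERVAL_BYTES) {
    file.sync();            // flush the cache and update the directory entry
    bytesSinceSync = 0;
  }
}

void loop() {
  // ... acquire data and call logRecord(); close the file before power down.
}
```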

custeve commented 5 years ago

Bill, thanks for the response. Our application writes a single file indefinitely until power down, and we want to lose as little data as possible. The problem I'm having right now is that occasionally a 150-500 ms blocking delay occurs, causing a hiccup in the higher-rate sensor data (we have several high-rate sensors via serial, I2C and SPI). The yield function is set up so that the sensors are polled during each yield, averaging about 300 cycles per yield.

My impression is that some blocking occurs on occasion during a sync. I am trying different combinations of block size and sync delay to see if I can mitigate the problem. Any suggestions you have would be helpful. This is on a Teensy 3.5.

greiman commented 5 years ago

There will be occasional longer delays during a sync. A sync causes the data cache to be written to the SD and then an update of the directory entry for the file.

Updating the directory entry causes the entry to be read, updated and written. Each read or write to the SD requires transfer of a 512 byte sector. The cache block may need to be reread.

The basic size of a flash block on an SD is not 512 bytes but may be as large as 512 KB. If a flash wear leveling operation occurs in the SD, an entire 512 KB flash block may be moved in the SD. There is no way to control when the SD does these operations.

Also, the Teensy 3.5/3.6 SDIO controller has a hardware fault that causes occasional additional delays.
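One way to see how often these longer delays occur is simply to time each sync() call. This is a rough sketch, not from the thread, and it assumes an FsFile named file that is opened elsewhere:

```cpp
#include "SdFat.h"

extern FsFile file;          // assumed to be opened elsewhere
uint32_t maxSyncUsec = 0;    // longest sync observed so far

void timedSync() {
  uint32_t t0 = micros();
  file.sync();
  uint32_t dt = micros() - t0;
  if (dt > maxSyncUsec) {
    maxSyncUsec = dt;
    Serial.print("longest sync so far (usec): ");
    Serial.println(maxSyncUsec);
  }
}
```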

Preallocating a file can reduce problems. The ExFatLogger and the older LowLatencyLogger use this method.

It is necessary to truncate and properly close the file at the end of a logging run.
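A rough sketch of that preallocate / truncate / close pattern follows; the file name and size are placeholders, and preAllocate is most effective on exFAT, where the preallocated file is contiguous:

```cpp
#include "SdFat.h"

SdFs sd;
FsFile file;

const uint64_t PREALLOCATE_SIZE = 512ULL * 1024 * 1024;  // e.g. 512 MB

bool startLog() {
  if (!file.open("log.bin", O_RDWR | O_CREAT | O_TRUNC)) {
    return false;
  }
  // Reserve space up front so cluster allocation does not happen during the run.
  return file.preAllocate(PREALLOCATE_SIZE);
}

void endLog() {
  file.truncate();   // discard the unused preallocated space at the current position
  file.close();      // write the final directory entry so the data is valid
}
```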

custeve commented 4 years ago

@greiman, I'd be interested in more information regarding the hardware fault in the Teensy SDIO controller. Is there anything on that I can read up on? The concern I'm currently having is that when these periodic delays happen, it appears that ALL functions on the processor may cease, including the yield function (which processes incoming sensors) and even serial buffering.

In general, the write speed is fast enough for our application, which is only about 80 KB/s, but the delays are causing considerable gaps in the data of 100-500 ms. I've been running numerous tests and have been able to vary the results with different SD cards, all formatted with the official SD formatter. The worst is an 8 GB SanDisk Ultra Class 10. The 16 GB Class 10 is only slightly better. A SanDisk Extreme 32 GB UHS-3 did OK after being formatted, but wasn't great before that. The Samsung 32 GB EVO Select UHS-1 has had the fewest issues so far. Today, I plan to run some similar tests using exFAT.

greiman commented 4 years ago

Some of the SDHC controller errata are described here.

I doubt that you will get reliable results polling high speed sensors in a yield function. There is no time guarantee for calls to a yield function.

Here are three ways that users are logging data from high speed sensors.

The ExFatLogger example uses a buffer queue and only writes when the SD is not busy. I have used these definitions to log data at very high rates, 5 kHz:

const uint32_t LOG_INTERVAL_USEC = 200;

#define SD_CONFIG SdioConfig(FIFO_SDIO)

Several users acquire data using timer interrupts to queue data to lower-priority write code. This allows writes to overlap data acquisition.

I have used ChibiOS to log data from a high-priority thread.
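A rough sketch combining the first two approaches above: a timer interrupt queues fixed-size records into a ring buffer, and the main loop writes only when the card reports it is not busy. IntervalTimer is Teensy-specific; the record layout, buffer depth, and file name are assumptions, and the real ExFatLogger example writes full 512-byte sectors rather than single records.

```cpp
#include "SdFat.h"

SdFs sd;
FsFile file;
IntervalTimer timer;                       // Teensy hardware timer

const uint32_t LOG_INTERVAL_USEC = 200;    // 200 usec -> 5 kHz

struct record_t { uint32_t usec; uint16_t adc[6]; };  // example record layout
const size_t FIFO_DIM = 512;               // ring buffer depth (assumed)
record_t fifo[FIFO_DIM];
volatile size_t head = 0, tail = 0;

void acquireIsr() {                        // runs every LOG_INTERVAL_USEC
  size_t next = (head + 1) % FIFO_DIM;
  if (next == tail) return;                // buffer full, record dropped
  fifo[head].usec = micros();
  // ... read the sensors into fifo[head].adc here ...
  head = next;
}

void setup() {
  sd.begin(SdioConfig(FIFO_SDIO));
  file.open("log.bin", O_WRITE | O_CREAT | O_TRUNC);
  timer.begin(acquireIsr, LOG_INTERVAL_USEC);
}

void loop() {
  // Drain the queue, but only when the card is idle, so a long SD operation
  // never blocks the acquisition interrupt.
  if (tail != head && !sd.card()->isBusy()) {
    noInterrupts();
    record_t r = fifo[tail];
    tail = (tail + 1) % FIFO_DIM;
    interrupts();
    file.write(&r, sizeof(r));
  }
}
```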