Closed · FreeBear-nc closed this 5 months ago
filesize = filesize / (D_FILE_SIZE_DIVIDER/100);
With this code you lose resolution. For example:
original
#define D_FILE_SIZE_DIVIDER 1024
filesize = 16777216;                          // 16,777,216
filesize *= 100;                              // 1,677,721,600
filesize = filesize / D_FILE_SIZE_DIVIDER;    // 1,638,400
your code
#define D_FILE_SIZE_DIVIDER 1024
filesize = 16777216;                          // 16,777,216
filesize = filesize / (D_FILE_SIZE_DIVIDER/100); // 1,677,721
Noted. I was trying to avoid an overflow when multiplying by 100, which the current code will hit at (approx) 2^28. But with a 2^64 limit, it will be a very, very long time before we see file sizes in the Exbi range (that is, 2^60).
With uint64_t you can still do the *100 before dividing by D_FILE_SIZE_DIVIDER and keep the resolution.
Overflow would then indeed only be a problem for unintelligibly large numbers.
Reworked the code to use an array of suffixes and reduce the number of snprintf_P statements. This means that if TiB or other units are needed, they just need to be added to the array.
Thanks, I did a bit more refactoring to make the code even more universal, and added Terabytes. cc1d6d9458f326dcdcfc4d50d0f0e23ff1e171b1 It should also sidestep the overflow issue entirely now.
When dealing with SD cards we could be asked to print a uint64_t (for example, from SD.totalBytes()). This attempts to handle the very large numbers that might be expected from an SD card.
Yes, there is a bit of ugliness in recasting to a uint32_t before calling the snprintf function, but this avoids implicit casts where data could be truncated.