DedeHai opened 1 month ago
@DedeHai I was wondering if the tables from https://esp32.com/viewtopic.php?p=82090# are still correct, especially for the float multiply vs. float divide. The table comes from a time when FPU support for esp32 was broken. https://github.com/espressif/esp-idf/issues/96
It seems correct that "float divide" is a lot slower than multiply by inverse, and I think (please correct me) the compiler can generate this optimization automatically. However, the difference today should be more like "8-10 times slower", not a factor of almost 100x.
EDIT: there was a PR for esp-idf that corrected usage of FPU instructions in esp-idf v4. Maybe it would be useful to add a column to the table, for comparing "esp32 esp-idf v3.x" vs. "esp32 esp-idf v4.x"
https://github.com/espressif/esp-idf/commit/db6a30b446f10352fd1e2f2af2fdc814ae266f55
There is an additional thing worth mentioning:
According to C++ semantics, in an expression like "if (x > 1.0)" (with float x), x is first "promoted" to double before evaluation, which makes it SLOW. This can be avoided:

if (x > 1.0) --> if (x > 1.0f)
or x += M_PI --> x += float(M_PI)
or #define MY_LIMIT 3.14 --> constexpr float MY_LIMIT = 3.14; (notice that appending "f" is not needed here)

You can check the code for such "double promotions" by adding -Wdouble-promotion to build_flags:
https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html#index-Wdouble-promotion
constexpr
Using constexpr is a nice way to optimize without obfuscating the code too much. In contrast to const, which often calculates at runtime, constexpr is guaranteed to be evaluated by the compiler, so the calculation itself is never part of the binary.
https://en.cppreference.com/w/cpp/language/constexpr
constexpr float f = 23.0f;
constexpr float g = 33.0f;
constexpr float h = f / g; // computed by the compiler, so it needs ZERO cycles at runtime
printf("%f\n", h);
You can even create functions that are constexpr
// C++11 constexpr functions use recursion rather than iteration
constexpr int factorial(int n)
{
return n <= 1 ? 1 : (n * factorial(n - 1));
}
static constexpr unsigned getPaletteCount() { return 13 + GRADIENT_PALETTE_COUNT; } // zero-cost "getter" function
...and the classical one:
uint8_t, int16_t and friends are useful to save RAM for global data. However - in contrast to older 8-bit Arduino processors like AVR - these types are slower than the native types int or unsigned.
Update: If 8bit math (with roll-over on 255) is needed, 8bit types should be used - it's still faster than manually checking and adjusting 8bit overflows.
The reason is that ESP32 processors have 32-bit registers and 32-bit instructions, so any calculation on uint8_t requires some extra effort to correctly emulate 8-bit behaviour (especially overflow). Technically, uint8_t c = a + b; becomes something like uint8_t c = ((a & 0xFF) + (b & 0xFF)) & 0xFF; - and it's even more complicated for signed int8_t...
for more info: https://en.cppreference.com/w/cpp/types/integer
> ...and the classical one: avoid 8bit and 16bit integers for local variables

This one is tricky. The code must not rely on overflows - which it currently does in WLED.
> I was wondering if the tables from https://esp32.com/viewtopic.php?p=82090# are still correct.

They are for current WLED; I generated this yesterday by inserting the code into 0.15. I can add IDF 4 once we move there.
The 8bit/16bit situation is a bit more elaborate. In general what you write is true, but the ESP32 has some 8bit/16bit instructions too. So yes, avoid 8bit types in general, but manually checking and adjusting overflows is slower - so if 8bit roll-over math is needed, 8bit types should be used.
> I can add IDF 4 once we move there

I'm really curious to see the numbers for the newer V4 framework 😀. But yeah, it won't be better than the -S3 results.
You could use the esp32_wrover buildenv for measuring - I think it will also work with an esp32 that does not have PSRAM.
https://github.com/Aircoookie/WLED/blob/e9d2182390d43d7dd25492f6555d082280e79b3b/platformio.ini#L481
I want to collect some info here about things I have learned while writing code for the ESP32 family MCUs. Please feel free to add to this.
This is a work in progress.
Comparison of basic operations on the CPU architectures
This table was generated using code from https://esp32.com/viewtopic.php?p=82090#
Even though the ESP32 and the S3 have hardware floating point units, they still do floating point division in software, so it should be avoided in speed-critical functions.
Edit (softhack007): "Float Multiply-Add" uses a special CPU instruction that combines multiplication and addition. It's generated by the compiler for expressions like
a = a + b * C;
Why integer divisions on the C3 are so slow is unknown; the datasheet clearly states that it can do 32-bit integer division in hardware.
Bit shifts vs. division
Bit shifts are always faster than doing a division, as a shift is a single instruction. The compiler will replace divisions by bit-shifts wherever possible, so var / 256 is equivalent to var >> 8 if var is unsigned. If it is a signed integer, it is only equivalent if the value of var is positive and this fact is known to be always the case at compile time. The reason: -200/256 = 0 but -200>>8 = -1.
So when using signed integers and a bit-shift is possible, it is better to do it explicitly instead of leaving it to the compiler. (please correct me if I am wrong here)

Fixed point vs. float
Using fixed point math is less accurate, but for most operations it is accurate enough, and it runs much faster - especially when doing divisions. When doing mixed math there is a pitfall: casting a negative float to an unsigned integer is undefined and leads to problems on some CPUs. https://embeddeduse.com/2013/08/25/casting-a-negative-float-to-an-unsigned-int/ To avoid this problem, explicitly cast the float into int before assigning it to an unsigned integer.

Modulo Operator: %
The modulo operator uses several instructions. A modulo of 2^i can be replaced with a 'bitwise and' (&) operator, which is a single instruction. The rule (for unsigned values) is n % 2^i = n & (2^i - 1). For example n % 2048 = n & 2047.