osctobe opened this issue 4 years ago
I've used ccache for many years. It is very, very reliable: it does an excellent job of speeding up builds, and it doesn't miss changes. I'm sitting here watching an update roll out across my 15+ devices and wishing it were installed in the add-on. Enabling it is usually as easy as replacing the compiler command; the flags all pass through just fine.
+1 from me.
same here
Thumbs up from here. Would be a nice extension. :)
+1
With the latest esphome (2023.9.3) I see so many new object files being generated that have nothing to do with the project that firmware generation takes a lot longer than before. Maybe that's a bug that will get fixed, but either way, more efficient handling of firmware compilations would be very welcome.
The problem is that every "device" is a unique platformio project. It is possible to set a shared cache folder for all environments within one platformio project, but it might not work when that cache is shared across all projects. In any case, a separate folder would be needed for every platform & framework combination.
Agreed, every combination will need its own cache directory. But as I have up to 4 devices per combo, that would already save a lot of time on recompiles. Right now I often choose not to update the devices because of the long duration and the manual, sequential work needed for 15+ devices.
I think only a single cache directory for the whole add-on is needed. ccache by default locates a cached build by the hash of the preprocessed source. If all paths passed to the preprocessor are relative, then multiple projects could reuse the same compilation output.
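For what it's worth, ccache has knobs for exactly this. A minimal ccache.conf sketch (the base_dir path here is hypothetical, just wherever the build trees live):

# Rewrite absolute paths under base_dir to relative ones before hashing,
# so identical sources in different project directories become cache hits.
base_dir = /config/esphome
# Don't mix the current working directory into the hash.
hash_dir = false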
Would love this to be added. I have 18 Sonoff S31s flashed with almost identical ESPhome config files (I have a common S31 yaml file that is included in the yaml for each device) and compiling the same files 18x feels like a waste of resources and takes a long time.
Couldn't we just configure a new esphome container that has ccache installed and the proper environment set up to call it instead of the gcc compiler?
Users could install that container instead of the standard esphome one. That would get some users on it, give us real data, and make a better case for pushing this feature.
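If someone wants to try, a minimal Dockerfile sketch for such a derived image (untested; assumes the stock esphome image is Debian-based so apt is available):

FROM esphome/esphome:latest
# Add ccache on top of the stock image; the compiler still has to be
# pointed at it separately (see the platformio.ini discussion below).
RUN apt-get update && apt-get install -y --no-install-recommends ccache \
    && rm -rf /var/lib/apt/lists/*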
> Couldn't we just configure a new esphome container that has ccache installed and the proper environment set up to call it instead of the gcc compiler?
Looks like installing ccache in the container and adding a flag to platformio.ini is enough:
https://github.com/platformio/platformio-core/issues/4592
i.e.
board_build.cmake_extra_args = -DCCACHE_ENABLE=ON
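In the generated project that might look something like this (the environment name is just an example; with the esphome add-on the option would have to be passed through platformio_options in the device YAML):

[env:my-device]
platform = espressif32
framework = espidf
; forward the ccache switch to the ESP-IDF CMake build
board_build.cmake_extra_args = -DCCACHE_ENABLE=ON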
So, have any of you actually tried this yet? Care to report back on findings?
I managed to get it working. It was more involved than I expected: I needed to add ccache to the Docker build and modify esphome to add both board_build.cmake_extra_args = -DCCACHE_ENABLE=ON and a helper script that used env.Replace to prepend ccache to CC, CXX, and AS (roughly the sketch below). Not sure why both were needed; with only one of them in place ccache did not get invoked, but with both, ccache -s was showing statistics.
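The helper was along these lines (a simplified sketch, not my exact script; PlatformIO runs it via extra_scripts and injects the SCons environment):

# ccache_wrap.py -- hypothetical name; hooked up in platformio.ini with
#   extra_scripts = pre:ccache_wrap.py
import shutil

Import("env")  # provided by PlatformIO's SCons runner

if shutil.which("ccache"):  # only wrap if ccache is actually installed
    for tool in ("CC", "CXX", "AS"):
        original = env[tool]
        # skip tools that are already wrapped
        if not str(original).startswith("ccache"):
            env.Replace(**{tool: f"ccache {original}"})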
The good news is that even with my mix of random devices (Sonoff S20/S31, Atom Lite, and ESP32 dev boards) it was still able to get a little over 60% cache hits.
$ ccache -s
Cacheable calls:    590 /  604 (97.68%)
  Hits:             374 /  590 (63.39%)
    Direct:         332 /  374 (88.77%)
    Preprocessed:    42 /  374 (11.23%)
  Misses:           216 /  590 (36.61%)
Uncacheable calls:   14 /  604 ( 2.32%)
Local storage:
  Cache size (GB): 0.01 / 5.00 ( 0.11%)
However, the bad news is that either compilation on my desktop is fast enough, or my disk is slow enough, that I got at most a 30% speedup when building firmware for an identical device: the first S31 took about 30 seconds, the rest 20 seconds each. Uploading the firmware took about 30 seconds as well, so even if all my devices were mostly identical, the maximum overall speedup would have been around 15%. Since I'm running a mix, the overall improvement was disappointing to say the least. This tradeoff may end up completely different for people with a slower CPU (I have 8 cores @ 3.6 GHz) or a faster disk (spinning rust here).
But still, the overhead of downloading updated platformio components and uploading/flashing the new firmware definitely reduces the possible gains. Maybe we could think about other approaches that hide the cost of rebuilding completely: have esphome quietly rebuild outdated firmware in the background and only notify the user through Home Assistant when the build fails or new firmware is ready to be flashed to the device.
> Maybe we could think about other approaches that hide the cost of rebuilding completely: have esphome quietly rebuild outdated firmware in the background and only notify the user through Home Assistant when the build fails or new firmware is ready to be flashed to the device.
I really like this idea. I don't think I want ESPHome to actually flash my devices automatically, but I would love for it to compile the updated firmware in the background and only involve me in the actual flashing procedure.
To help with this on my end, I moved ESPHome from an add-on on my Dell micro 9040 to an LXC on my 40-core server, and that helped the compile time a lot. It's still wasting a lot of CPU cycles doing nearly identical tasks many times over, but throwing some real horsepower at it makes it much less of a hassle for me.
Looks like PIO has its own cache system: https://docs.platformio.org/en/latest/projectconf/sections/platformio/options/directory/build_cache_dir.html
This should be set to a common directory (if it isn't already) instead of using ccache.
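Something like this in the generated platformio.ini should do it (the path is just an example):

[platformio]
; share the compiled-object cache between environments
build_cache_dir = /data/cache/platformio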
Someone already explored that option in #2171 but discovered that SCons (the underlying build system) includes the path to the file in the hash, so it doesn't actually help across different build environments even when they use a common cache directory.
Describe the problem you have/What new integration you would like
The builds are slow, and even copy-and-pasted configurations take a long time. Using ccache for the builds would require only a single compilation for most of the code (since the node name is built in, there would still be one compile-and-link pass per device, but that's already far shorter than a full build for every new device).
Please describe your use case for this integration and alternatives you've tried:
It's just to spend less time waiting for the firmware builds.
Additional context
None.