Open moefear85 opened 1 month ago
Thank you for the suggestions @moefear85! Can't guarantee I don't have 0x0000DEAF ears, as you say; please excuse me if my answer doesn't sound reasonable to you.
Then in the worst case scenario, a user would only need to add a PPA, then install xtensa gcc like any other cross-compiler. The process of building software for a microcontroller is actually no different than cross-compiling
We currently don't see system-wide tools installation as a viable option, for the following reasons:
1) The cross-compiler toolchain we ship is evolving pretty quickly, with new features being added regularly.
2) Each ESP-IDF release can be used with just one specific version of the cross-compiler toolchain. Older versions typically can't be used because they miss some of the features (see point 1), and newer versions are not guaranteed to work (as they may introduce new warnings and such). Also, we don't have the capacity to perform CI with more than one toolchain version for a given IDF release at a time.
3) Installing multiple IDF copies (of different versions) and using them side by side is pretty common. The current installation approach (and the use of export scripts) ensures that none of the toolchains is available in the system PATH, and we pick the right toolchain version for the IDF version currently being used. You can build an app with IDF v5.1.1 in one terminal and another app with IDF v5.3 in another, simultaneously (see the sketch below).
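For example, a minimal sketch of that side-by-side usage, assuming two IDF copies checked out at the hypothetical paths ~/esp/idf-v5.1.1 and ~/esp/idf-v5.3:

```sh
# Terminal 1: build one app against IDF v5.1.1
. ~/esp/idf-v5.1.1/export.sh   # puts the matching toolchain on PATH for this shell only
cd ~/projects/app-a
idf.py build

# Terminal 2: build another app against IDF v5.3 at the same time
. ~/esp/idf-v5.3/export.sh     # a different toolchain version, again local to this shell
cd ~/projects/app-b
idf.py build
```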
At some point in the future, the toolchain might stabilize enough so that we won't have to push new toolchain releases regularly, and at that point it might be viable to build multiple IDF versions with one toolchain release. If that happens, it will indeed be possible to distribute the toolchain via OS package managers.
The other factor is that, although the current installation method isn't the preferred one on any of the OSes, we can support installation on Windows, Linux, and macOS with one set of tools (minus the shell-specific frontend scripts). Using the "OS package manager" installation approach would require us to manage distribution through apt, Homebrew, nuget(?), and possibly some other tools, and that would require a lot more work than we can afford right now.
To address some of your specific points:
Updates also require redownloading and installing everything, even for soc variants that a user never intends to use.
Just a note about this: you can pass the name of the SoC you intend to use to the install.sh script. That will result in downloading only the tools required by that one target.
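For example (assuming ESP32-S3 is the only target you care about; the target name here is just an illustration):

```sh
cd $IDF_PATH
./install.sh esp32s3   # downloads only the tools needed for the ESP32-S3 target
. ./export.sh          # environment setup works the same as with a full install
```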
But for esp-idf, two different socs have two different compilers even when they are the same architecture and even processor family and core-count.
I guess you mean ESP32 and ESP32-S3? This is no longer the case in recent toolchain releases. We ship just one toolchain for RISC-V and one for Xtensa. Xtensa toolchain still has named executables like xtensa-esp32-elf-gcc for compatibility, but they are small wrappers around the actual compiler — xtensa-esp-elf-gcc.
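If you want to verify this on an installed toolchain, a rough check looks like the following (the exact --version banner differs between releases):

```sh
# After sourcing export.sh, both executable names are on PATH.
which xtensa-esp32-elf-gcc       # per-chip name, kept for compatibility
which xtensa-esp-elf-gcc         # the actual unified Xtensa compiler
xtensa-esp32-elf-gcc --version   # reports the same underlying GCC as xtensa-esp-elf-gcc
```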
Updating esp-idf is also not straightforward, since it requires deleting previous folders, pulling git updates, then re-installing.
If you are updating an existing IDF copy to a new release, deleting the previous folders (IDF_PATH and IDF_TOOLS_PATH) is generally not required. Once you have checked out the new version in git and updated the submodules, you can run the install script to install the missing tools. This will download only the tools which are missing (i.e. those for which the newer IDF release requires a different version).
After the installation is done, there is a command printed to remove unused versions of the tools, if you wish to optimize disk usage.
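In shell terms, the update path is roughly the following (the release tag is just an example):

```sh
cd $IDF_PATH
git fetch
git checkout v5.3                         # example: the release you are moving to
git submodule update --init --recursive
./install.sh                              # downloads only the tools that are missing
```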
Is your feature request related to a problem?
ESP-IDF currently uses a non-standard installation method on Ubuntu: downloading a zip, extracting it, then running shell scripts. This is problematic in some automation cases (OS/software re-/installs).
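For reference, the scripted flow in question is roughly the following (per the official Get Started guide; details vary between IDF versions):

```sh
git clone --recursive https://github.com/espressif/esp-idf.git
cd esp-idf
./install.sh all   # fetches toolchains and Python packages into IDF_TOOLS_PATH
. ./export.sh      # must be sourced again in every new shell before building
```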
In the past, there were only 1-2 SoC variants, so having to download multiple compilers wasn't much of a problem, nor was the non-standard way in which they are invoked by sourcing files and setting environment variables. Currently, there are about 10 and counting. It is becoming a burden for backup/restore, even if only due to size. Usually a single compiler supports one or more architectures, covering entire families and vendors. But for ESP-IDF, two different SoCs have two different compilers even when they share the same architecture, and even the same processor family and core count. The burden is multiplied for people who need to maintain several versions of the SDK on disk. Using containers does not resolve the size problem.
Updating ESP-IDF is also not straightforward, since it requires deleting previous folders, pulling git updates, then re-installing. The required download bandwidth is immense, and the entire process fails if the download/install of a single compiler fails. Updates also require redownloading and installing everything, even for SoC variants that a user never intends to use.
Docker/Snap/Flatpak/AppImage and all their equivalents are a 0xbaadedea, because not only do they reinvent the wheel, they babelize it too. You then have 10 different standards of wheels to learn, defeating the purpose of standardization.

Describe the solution you'd like.
It would be nice if the installation process of xtensa gcc followed standard Unix/Ubuntu paradigms, so that it could be installed automatically using apt-get. Apt has recently taken control of Python package management too, so it would be nice if Python package requirements were satisfied through that channel as well.
Then, in the worst-case scenario, a user would only need to add a PPA and install xtensa gcc like any other cross-compiler. The process of building software for a microcontroller is actually no different than cross-compiling, so it would be nice if the same methodology applied: installing library headers using apt-get (a source or dev package), setting the CROSS and ARCH environment variables, then running cmake/make as usual and expecting the project to be properly compiled for the target system.
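A purely hypothetical sketch of what that could look like (the PPA name, package names, and variable values below do not exist today and are only illustrative):

```sh
# Hypothetical PPA and package names, for illustration only
sudo add-apt-repository ppa:espressif/toolchains
sudo apt-get install gcc-xtensa-esp-elf libesp-idf-dev

# Then build like any other cross-compiled project
export CROSS=xtensa-esp-elf-
export ARCH=xtensa
cmake -B build && cmake --build build
```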
Likewise, simulation & debugging would become more unified with general cross-compiling too, including how qemu is installed and set up. The only difference then is that for debugging, openocd runs in between.
It would be nice if all xtensa sources & headers installed to default locations within /usr like usual packages, and if fewer custom cmake scripts were involved; instead, more mainstream CMakeLists could be used to find and pull the right source/header files into a project, just like other C/C++ software on a system.
As for containers, Linux has had a simple solution since the epoch, one so simple and effective that it was deemed a problem needing a solution (the solution being shared libraries). The "problem" is static libraries. That is effectively what containers try to recreate, except they use shared libraries internally. Why not just build something statically linked? You then also get a monolithic executable that is simpler, runs faster, and is even smaller in size. The only reason to use shared libraries is when one has no control over what library types are produced by certain dependencies. This is rare though, as in most cases the build process involves building those dependencies and then generating the container.
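As a trivial illustration of the static-linking point (a generic sketch, not specific to ESP-IDF):

```sh
# One self-contained binary instead of a container full of shared libraries
gcc -static -O2 -o mytool mytool.c
file mytool   # reports "statically linked"
ldd mytool    # reports "not a dynamic executable"
```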
As for distro package manager babelization, just treat each distro as its own OS. If tomorrow there were suddenly 1000+ OSes, would esp-idf strive to support all of them? Just select a few mainstream ones (including distros) and officially support them. apt and dnf are enough. Users on non-mainstream distros are on their own and can build from the sources themselves.
I could go on about another major improvement to the build process, perhaps the most important, but I'll keep it to myself, as my suggestions usually fall on 0x0000DEAF ears.
Describe alternatives you've considered.
n/a
Additional context.
Update: I forgot to mention, apt-based installs not only make (rolling) updates easy, they also make them automatic, as users get notifications when new software is available.