zephyrproject-rtos / zephyr

Primary Git Repository for the Zephyr Project. Zephyr is a new generation, scalable, optimized, secure RTOS for multiple hardware architectures.
https://docs.zephyrproject.org
Apache License 2.0

Proposal: Multi-platform package-based Zephyr SDK #37255

Open stephanosio opened 3 years ago

stephanosio commented 3 years ago

Introduction (non-technical overview)

The Zephyr SDK, also known as sdk-ng, has several limitations that affect its usability as well as overall developer experience.

Problem description

  1. Limited OS support
    • Only Linux host is fully supported.
    • Preliminary macOS host support has been added, but its scope is limited to the toolchains (i.e. "host tools" such as Zephyr-patched QEMU and OpenOCD are not available in macOS).
    • Windows host is not supported at all.
    • Supporting all major operating systems (Linux, macOS, Windows) is critical for expanding user base as well as improving overall developer experience.
    • Windows support is especially important for corporate customers, who are mostly Windows users.
  2. Inconvenient distribution format
    • Zephyr SDK is currently distributed as a self-extracting executable that must be manually downloaded and installed whenever there is a new release.
    • The entire Zephyr SDK, including the toolchains for all supported targets and the host tools, needs to be downloaded and installed for hassle-free integration with the Zephyr build system. This can amount to several GBs.
    • Using a more managed distribution method (e.g. package management system) alongside component-level distribution (e.g. a package per component) would result in improved developer experience overall.
  3. Broad release scope and long release cycle
    • All SDK components, including the toolchains for all supported targets and host tools, are currently released at once as a single very large "SDK" release.
    • This forces a new release whenever there is a slight change in just one of the components, yet releasing a new SDK version for small changes is impractical for the reasons described in "2. Inconvenient distribution format" above.
    • This leads to a long release cycle, which slows down main Zephyr development because PRs that depend on an SDK update get stuck until a new SDK release is made and mainlined in the CI.
    • Each SDK component should be released and installed separately in order to expedite main Zephyr development and improve overall developer experience (nobody wants their PR to be stuck for months due to delayed SDK release).

Proposed change

  1. Support all major operating systems
    • Support Linux on AArch64 and x86-64
    • Support macOS on AArch64 and x86-64 (the new "Apple Silicon"-based Macs are running AArch64)
    • Support Windows on x86-64 (Windows on ARM never really took off, so don't bother with AArch64 for now)
  2. Reduce release scope
    • Release individual toolchain components separately (e.g. ARM and x86-64 toolchains should be released separately)
    • Release individual host tool components separately (e.g. QEMU and OpenOCD should be released separately)
  3. Use package management system
    • Use package management systems such as Snap, APT and Homebrew for distributing SDK components.
    • Create packages for individual toolchain and host tool components (e.g. zephyr-crosstool-arm, zephyr-crosstool-x86, zephyr-qemu, zephyr-openocd)
    • Provide automatic and manual updates through package management system.
    • Support side-by-side installation of multiple versions of the same package through package management system.
    • Simple archive-based distribution (i.e. tarball) should still be available alongside the package-based distribution to support a more traditional workflow.

Detailed proposal (technical)

A proof-of-concept was implemented last year (2020) to assess the feasibility of supporting Linux, macOS and Windows through Snap/APT, Homebrew and Chocolatey, respectively.

Definitions

Problem description (technical)

  1. Linux-only Yocto-based host tool build process
    • Yocto provides a self-contained library system for the host tools in order to ensure cross-distro compatibility.
    • Only Linux is supported by Yocto.
    • This is the reason why the current preliminary macOS host support cannot provide the host tools for macOS.
    • Linux cross-distro compatibility should be implemented through other means (e.g. cross-distro package management system such as Snap, and/or per-distro build and release).
  2. Collective versioning scheme
    • All components that make up the "Zephyr SDK" are versioned at once and released together.
    • This forces a new release to be produced whenever there is a slight change in just one of the components.
    • For instance, if there is a small update in the ARM toolchain, a new version of Zephyr SDK, which includes the toolchains for all the other targets as well as the host tools such as QEMU, will have to be released.
    • Each component should be versioned and released individually to reduce unnecessary re-download as well as to shorten release cycle.
  3. Lack of component-level Zephyr build system integration
    • The Zephyr SDK, which consists of multiple toolchain and host tool components, provides a single CMake package that is used by the Zephyr build system to detect a compatible version of the Zephyr SDK installed on the system.
    • This effectively forces users to download and install the all-in-one SDK file for every new release.
    • Each component should have its own CMake package that identifies the component-specific version, so the Zephyr build system can discover and use the required version of each component.

Proposed change (technical)

  1. Rework Zephyr build system to use per-component CMake packages
    1. Each component making up the "Zephyr SDK" shall provide its own CMake package (e.g. each of the following shall have its own CMake package: zephyr-crosstool-x86, zephyr-crosstool-arm, zephyr-qemu, zephyr-openocd)
    2. Zephyr build system shall discover each required SDK component of the required version through the CMake find_package command. For example:
      • When Zephyr ARM toolchain v10.0.1.3 is required, the build system will find it through find_package(zephyr-crosstool-arm 10.0.1.3 EXACT ...).
      • When Zephyr QEMU v5.1.0.15 is required, the build system shall find it through find_package(zephyr-qemu 5.1.0.15 EXACT ...).
    3. The SDK component discovery implementation using the CMake find_package command shall allow manually specifying the CMake package search path in order to allow using the SDK component installations that are not registered in the CMake package registry.
      • The CMake package search path can be manually specified using the HINTS option. For example, find_package(zephyr-crosstool-arm 10.0.1.3 EXACT ... HINTS $ENV{ZEPHYR_SDK_INSTALL_DIR}).
      • This will make it easier to use archive-based SDK component distributions (i.e. tarballs).
    4. A preliminary implementation shall be provided in the current Zephyr SDK (sdk-ng) to ease the transition into the "multi-platform package-based Zephyr SDK."
      • As far as the Zephyr build system is concerned, there should be no difference between the current Zephyr SDK (sdk-ng) and the new package-based Zephyr SDK once this is implemented.
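To illustrate the per-component discovery described above, here is a minimal sketch of what the Zephyr build system side could look like. The package names come from this proposal; the exact option set and error handling are assumptions, not the actual implementation:

```cmake
# Sketch only: per-component SDK discovery in the Zephyr build system.
# Package names follow the proposal; everything else is illustrative.

# Each component is looked up independently, at its own required version.
# HINTS allows archive-based installations that are not registered in the
# CMake package registry (ZEPHYR_SDK_INSTALL_DIR points at the extracted tree).
find_package(zephyr-crosstool-arm 10.0.1.3 EXACT
             HINTS $ENV{ZEPHYR_SDK_INSTALL_DIR})
find_package(zephyr-qemu 5.1.0.15 EXACT
             HINTS $ENV{ZEPHYR_SDK_INSTALL_DIR})

if(NOT zephyr-crosstool-arm_FOUND)
  message(FATAL_ERROR
          "zephyr-crosstool-arm 10.0.1.3 not found; install the package or "
          "set ZEPHYR_SDK_INSTALL_DIR to an extracted SDK archive")
endif()
```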
  2. Rework versioning scheme
    1. Host tools
      • Use the upstream version as base and append the Zephyr-specific patch version at the end.
      • For example, for Zephyr-patched QEMU that is based on the QEMU 5.0.1, use the version starting from 5.0.1.0, 5.0.1.1 and so on.
    2. Toolchains
      • While a toolchain consists of multiple components (i.e. binutils, gcc and gdb), gcc is of primary interest -- for this reason,
      • Toolchain version shall use the gccmajor.gccminor.zcommon.ztarget format where:
        • gccmajor and gccminor are the base gcc major and minor version numbers.
        • zcommon is a version number that is incremented when a Zephyr-specific change that is common to all targets is made.
        • ztarget is a version number that is incremented when a Zephyr-specific change that is limited to a specific target is made.
      • For instance:
        • all toolchain releases based on gcc 10.1 shall be versioned 10.1.x.x, starting at 10.1.0.0.
        • when a Zephyr-specific change that is only applicable to the ARM target is made, a new ARM toolchain release shall be made with the version of 10.1.0.1 (note that the toolchains for the rest of the targets need not be newly released, and they remain at 10.1.0.0).
        • when a Zephyr-specific change that is common to all targets is made, new toolchain releases for all targets shall be made with the version of 10.1.1.0 (this ensures that all target toolchains with the version number 10.1.1.x have the same base feature and makes it easier to track feature updates).
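The versioning rules above can be expressed as a small sketch (Python; the function names are illustrative, not part of any actual tooling):

```python
# Minimal sketch of the proposed toolchain versioning scheme
# (gccmajor.gccminor.zcommon.ztarget). Function names are illustrative.

def parse_version(v: str) -> tuple[int, int, int, int]:
    """Split e.g. '10.1.0.1' into (gccmajor, gccminor, zcommon, ztarget)."""
    major, minor, zcommon, ztarget = (int(x) for x in v.split("."))
    return major, minor, zcommon, ztarget

def bump_target(v: str) -> str:
    """A Zephyr change limited to one target bumps only that target's ztarget."""
    major, minor, zcommon, ztarget = parse_version(v)
    return f"{major}.{minor}.{zcommon}.{ztarget + 1}"

def bump_common(v: str) -> str:
    """A change common to all targets bumps zcommon and resets ztarget,
    so all target toolchains at 10.1.1.x share the same base feature set."""
    major, minor, zcommon, _ = parse_version(v)
    return f"{major}.{minor}.{zcommon + 1}.0"

# Following the gcc 10.1 example in the proposal:
assert bump_target("10.1.0.0") == "10.1.0.1"   # ARM-only change
assert bump_common("10.1.0.1") == "10.1.1.0"   # common change, all targets re-released
```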
  3. Re-organise SDK repositories and rework build process
    1. Drop Yocto-based host tool build process
      • Yocto only works with Linux hosts, so using it for macOS and Windows is not an option.
      • Linux cross-distro capability will be provided by Snap, so the Yocto base system layer is no longer required.
    2. Rename sdk-ng repository to zephyr-crosstool and move host tool components out to the relevant fork repositories
      • Rework and re-purpose the current sdk-ng repository such that its sole purpose is to build the cross toolchain components using the crosstool-ng.
      • This can be relatively simply achieved by removing the Yocto-based host tool build files (i.e. meta-zephyr-sdk directory).
      • The host tool components, such as QEMU and OpenOCD, shall be moved out to the Zephyr-specific branches in the relevant fork repositories (e.g. zephyrproject-rtos/qemu for QEMU and zephyrproject-rtos/openocd for OpenOCD).
    3. Create host tool repositories
      • For QEMU and OpenOCD, use the existing Zephyr QEMU and OpenOCD fork repositories.
      • For BOSSA, create a Zephyr fork repository based on the upstream repository.
      • For the above fork repositories, create a branch per upstream release on which the Zephyr-specific patches, including the build and release workflows, will be applied (e.g. for QEMU 5.1.0, create a branch named zephyr-v5.1.0 based on the v5.1.0 tag with the Zephyr-specific patches applied on top).
      • The Zephyr-specific issues, pull requests and releases for the aforementioned host tool components shall be handled in the corresponding fork repositories on GitHub.
    4. Implement multi-platform build process in the zephyrproject-rtos/zephyr-crosstool repository (old sdk-ng repository).
      • Implement Linux cross toolchain build process (AArch64 and x86-64).
        • For x86-64 Linux host, build on x86-64 Linux build machine (already implemented).
        • For AArch64 Linux host, build on AArch64 Linux build machine (already implemented); or alternatively, consider Canadian Cross build from x86-64 Linux build machine to reduce the CI resource pool requirements.
      • Implement macOS cross toolchain build process (AArch64 and x86-64).
        • For x86-64 macOS host, build on x86-64 macOS build machine.
        • For AArch64 macOS host, Canadian Cross build on x86-64 macOS build machine.
      • Implement Windows cross toolchain build process (x86-64).
        • For x86-64 Windows host, Canadian Cross build from x86-64 Linux build machine (host type shall be mingw-w64).
      • The release build artifact (release format) for the zephyr-crosstool repository shall be "distribution archive" (.tar.gz for Linux and .zip for Windows), which is used as source for building the distribution packages, or used as-is by the users that prefer to not use package management system-based distribution.
    5. Implement multi-platform build process in the zephyrproject-rtos/qemu repository.
      • Implement Linux, macOS and Windows host QEMU build process.
      • The release build artifact for the qemu repository shall be "distribution archive."
    6. Implement multi-platform build process in the zephyrproject-rtos/openocd repository.
      • Implement Linux, macOS and Windows host OpenOCD build process.
      • The release build artifact for the openocd repository shall be "distribution archive."
    7. Implement multi-platform build process in the zephyrproject-rtos/bossa repository.
      • Implement Linux, macOS and Windows host BOSSA build process.
      • The release build artifact for the bossa repository shall be "distribution archive."
  4. Create distribution packages
    1. Snap packages for Linux cross-distro support
      • Snap provides a known base system (Ubuntu Core) on which programs can run (similar to how Yocto is currently used).
        • Snap supports all major Linux distributions (e.g. Ubuntu, Debian, Fedora, Linux Mint).
        • Not a virtualisation-based solution.
      • Refer to stephanosio/snap-zephyr for preliminary implementation.
      • Create a Snap package source repository.
        • Create zephyrproject-rtos/packages-snap package source repository.
        • Add zephyr-crosstool and host tool package definitions.
      • Implement package build and release process in Snapcraft platform.
        • When building packages, the distribution archives provided by the releases in each component repository shall be used.
        • Users will be able to install the Zephyr SDK packages using the default (upstream) source.
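For reference, a per-component Snap package definition might look roughly like the following snapcraft.yaml fragment (the name, base, and source URL are placeholders, not the actual packaging):

```yaml
# Illustrative snapcraft.yaml fragment for a per-component package.
# All names and options here are assumptions, not the real package definition.
name: zephyr-qemu
base: core20          # Ubuntu Core 20 base system provided by Snap
version: '5.1.0.15'
summary: Zephyr-patched QEMU
grade: stable
confinement: classic  # host tools need unrestricted file system access

parts:
  zephyr-qemu:
    plugin: dump
    # Repack the distribution archive published by the component repository
    source: https://github.com/zephyrproject-rtos/qemu/releases/download/...
```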
    2. APT packages for native Ubuntu support
      • APT is a standard package manager for Debian-based Linux distributions (notably, Ubuntu).
        • Ubuntu is arguably the most commonly used Linux distribution and having native package support will result in better developer experience overall.
        • APT package can be useful for the WSL users since most of them run Ubuntu.
      • Refer to stephanosio/deb-zephyr for preliminary implementation.
      • Create an APT (deb) package source repository.
        • Create zephyrproject-rtos/packages-deb package source repository.
        • Add zephyr-crosstool and host tool package definitions.
      • Implement package build and release process in Launchpad platform.
        • When building packages, the distribution archives provided by the releases in each component repository shall be used.
      • Create Zephyr PPA (personal package archive) which users can use to download and install Zephyr SDK packages.
    3. Homebrew packages for macOS support
      • Homebrew is a de facto standard package management system for macOS.
        • It supports both AArch64 (Apple Silicon/M1) and x86-64 (Intel) ecosystems.
        • The ecosystem provides most of the required library dependencies on macOS.
      • Refer to stephanosio/homebrew-zephyr for preliminary implementation.
      • Create a Homebrew package source repository/tap.
        • Create zephyrproject-rtos/packages-homebrew package source repository.
        • Add zephyr-crosstool and host tool package definitions.
        • This repository will function as both package source and Homebrew "tap."
      • Implement Homebrew bottle build and release process in the package source repository.
        • When building Homebrew bottles, the distribution archives provided by the releases in each component repository shall be used.
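A formula in the proposed tap might look roughly like the following sketch (the class name, URL, and checksum are placeholders; it repacks a pre-built distribution archive rather than building from source):

```ruby
# Illustrative Homebrew formula sketch for the proposed tap.
# URL, sha256 and install logic are assumptions, not a real formula.
class ZephyrQemu < Formula
  desc "Zephyr-patched QEMU"
  homepage "https://github.com/zephyrproject-rtos/qemu"
  url "https://github.com/zephyrproject-rtos/qemu/releases/download/v5.1.0.15/zephyr-qemu-5.1.0.15-macos.tar.gz"
  sha256 "0000000000000000000000000000000000000000000000000000000000000000"
  version "5.1.0.15"

  def install
    # Install the pre-built distribution archive contents as-is
    prefix.install Dir["*"]
  end
end
```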
    4. Chocolatey packages for Windows support
      • Chocolatey offers an experience akin to that of the package management systems found on Linux.
        • It is arguably the best package management system available in Windows and has a fairly large user base.
        • There are no other practical alternatives (see "Alternatives" below) for this purpose.
      • Refer to stephanosio/chocolatey-zephyr for preliminary implementation.
      • Create Chocolatey package source repository.
        • Create zephyrproject-rtos/packages-chocolatey package source repository.
        • Add zephyr-crosstool and host tool package definitions.
      • Implement package build and release process in Chocolatey platform.
        • When building packages, the distribution archives provided by the releases in each component repository shall be used.
        • Users will be able to install the Zephyr SDK packages using the default (upstream) source.
  5. Introduce new release cadence and strategy
    1. Multiple package release channels
      • As per the problem described in the "Broad release scope and long release cycle" bullet under "Problem description," a new SDK component release should be made as soon as a new update is available (tested and merged), in order to expedite Zephyr main development (basically, as often as possible).
      • Of course, most downstream developers who work with the stable Zephyr releases (and not on the main branch) should not be getting these "bleeding-edge" update releases. To address that, multiple package release channels (e.g. stable, candidate, beta, edge) should be available to which developers can subscribe as they need.
      • Snap, for instance, supports four release channels: stable, candidate, beta, edge. For example, the "stable" channel can be kept up to date to support the latest Zephyr release, the "candidate" channel to support the latest Zephyr release candidate, and the "edge" channel to support the development on the main branch.
      • For APT, Homebrew and Chocolatey, which do not support release channels but do support external package distribution points, multiple package distribution points (e.g. a PPA per channel for APT, a tap per channel for Homebrew) can be made available corresponding to the release channels.
    2. Cumulative distribution archive release (Zephyr SDK bundle)
      • For the users that prefer to not take advantage of the new package management system-based distribution and want the current Zephyr SDK (sdk-ng)-like experience, a "Zephyr SDK bundle" release can be made for every Zephyr release.
      • "Zephyr SDK bundle" is essentially a snapshot of the SDK components at the time of a Zephyr release, distributed as a tarball that can be extracted and "installed" anywhere (yes, that is basically the same as the sdk-ng we have right now).
      • For example, at the time of the Zephyr v2.7 release, if the version of zephyr-crosstool was 10.1.1.0 and zephyr-qemu was 5.1.1.0, the "Zephyr SDK bundle for Zephyr v2.7" will contain those versions of the SDK components.
      • A "Zephyr SDK bundle" can be used either by registering the CMake packages provided by it in the CMake package registry, or setting the environment variable that provides the HINTS to the CMake find_package command (basically same idea as ZEPHYR_SDK_INSTALL_DIR now).
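The first option, registering bundle components in the CMake user package registry, can be sketched as follows. On Linux/macOS the registry is simply a directory of files under ~/.cmake/packages, each containing the path to a directory holding the component's CMake package config (the paths and the zephyr-qemu name here are placeholders):

```shell
# Illustrative: "installing" an extracted Zephyr SDK bundle by registering a
# component's CMake package in the CMake user package registry.
# The bundle location and package name are placeholders from the proposal.
mkdir -p "$HOME/.cmake/packages/zephyr-qemu"
echo "$HOME/zephyr-sdk/cmake" > "$HOME/.cmake/packages/zephyr-qemu/bundle"
```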

Concerns and FAQ

Alternatives

stephanosio commented 3 years ago

cc @galak @nashif @tejlmand @carlescufi @ioannisg

keith-zephyr commented 3 years ago

I'm in favor of this proposal, especially the flexibility of releasing new versions of the host tools and toolchain independently.

One additional thought for helping brand-new users get up and running: what are people's thoughts on bundling a Linux virtual machine that already has all the necessary host tools, toolchains, and even the Zephyr source installed? The distributed source could be tied to the latest LTS release.

marc-hb commented 3 years ago

what are people's thoughts on bundling a Linux virtual machine that already has all the necessary host tools, toolchains, and even the Zephyr source installed?

We've been using that docker image for quite some time now, mostly Just Works: https://github.com/thesofproject/sof/commit/40c9bb2e304d0 https://github.com/thesofproject/sof/blob/40c9bb2e30/zephyr/docker-build.sh

stephanosio commented 3 years ago

what are people's thoughts on bundling a Linux virtual machine that already has all the necessary host tools, toolchains, and even the Zephyr source installed?

We are already doing something similar with the Docker image, but the main problem with Docker is that it is painfully slow. There is an ongoing discussion about this.

marc-hb commented 3 years ago

There is an ongoing discussion about this.

I haven't seen much about docker performance at that link.

Isn't the whole point of containers to be leaner and faster than virtual machines? Can you elaborate?

stephanosio commented 3 years ago

I haven't seen much about docker performance at that link.

I meant there is an ongoing discussion about container-based (including Docker) approach there (sorry for the confusion, I could have phrased it better).

Isn't the whole point of containers to be leaner and faster than virtual machines? Can you elaborate?

In theory, yes; but I (and many others I work with) have seen Docker-based build processes taking almost 2x as long as native or VM-based builds. I assume the performance issues are mostly I/O-bound.

Outside Docker:

$ scripts/twister -v -N -p mps2_an521 -T tests/lib/cmsis_dsp
ZEPHYR_BASE unset, using "/home/stephanos/Dev/zephyrproject/zephyr"
Renaming output directory to /home/stephanos/Dev/zephyrproject/zephyr/twister-out.17
INFO    - Zephyr version: zephyr-v2.6.0-1471-gedb78537f9cf
INFO    - JOBS: 60
INFO    - Using 'zephyr' toolchain.
INFO    - Building initial testcase list...
INFO    - 29 test scenarios (29 configurations) selected, 9 configurations discarded due to filters.
INFO    - Adding tasks to the queue...
INFO    - Added initial list of jobs to queue
INFO    -  1/20 mps2_an521                tests/lib/cmsis_dsp/matrix/libraries.cmsis_dsp.matrix.unary_f32 PASSED (qemu 2.332s)
INFO    -  2/20 mps2_an521                tests/lib/cmsis_dsp/complexmath/libraries.cmsis_dsp.complexmath PASSED (qemu 2.328s)
INFO    -  3/20 mps2_an521                tests/lib/cmsis_dsp/statistics/libraries.cmsis_dsp.statistics PASSED (qemu 2.413s)
INFO    -  4/20 mps2_an521                tests/lib/cmsis_dsp/transform/libraries.cmsis_dsp.transform.rq15 PASSED (qemu 2.222s)
INFO    -  5/20 mps2_an521                tests/lib/cmsis_dsp/svm/libraries.cmsis_dsp.svm    PASSED (qemu 2.285s)
INFO    -  6/20 mps2_an521                tests/lib/cmsis_dsp/bayes/libraries.cmsis_dsp.bayes PASSED (qemu 2.139s)
INFO    -  7/20 mps2_an521                tests/lib/cmsis_dsp/matrix/libraries.cmsis_dsp.matrix.unary_q31 PASSED (qemu 2.193s)
INFO    -  8/20 mps2_an521                tests/lib/cmsis_dsp/distance/libraries.cmsis_dsp.distance PASSED (qemu 2.181s)
INFO    -  9/20 mps2_an521                tests/lib/cmsis_dsp/filtering/libraries.cmsis_dsp.filtering.fir PASSED (qemu 2.185s)
INFO    - 10/20 mps2_an521                tests/lib/cmsis_dsp/matrix/libraries.cmsis_dsp.matrix.binary_q31 PASSED (qemu 2.188s)
INFO    - 11/20 mps2_an521                tests/lib/cmsis_dsp/fastmath/libraries.cmsis_dsp.fastmath PASSED (qemu 2.217s)
INFO    - 12/20 mps2_an521                tests/lib/cmsis_dsp/matrix/libraries.cmsis_dsp.matrix.unary_f64 PASSED (qemu 2.218s)
INFO    - 13/20 mps2_an521                tests/lib/cmsis_dsp/filtering/libraries.cmsis_dsp.filtering.biquad PASSED (qemu 2.214s)
INFO    - 14/20 mps2_an521                tests/lib/cmsis_dsp/basicmath/libraries.cmsis_dsp.basicmath PASSED (qemu 2.514s)
INFO    - 15/20 mps2_an521                tests/lib/cmsis_dsp/matrix/libraries.cmsis_dsp.matrix.unary_q15 PASSED (qemu 2.345s)
INFO    - 16/20 mps2_an521                tests/lib/cmsis_dsp/support/libraries.cmsis_dsp.support PASSED (qemu 2.387s)
INFO    - 17/20 mps2_an521                tests/lib/cmsis_dsp/matrix/libraries.cmsis_dsp.matrix.binary_f32 PASSED (qemu 2.356s)
INFO    - 18/20 mps2_an521                tests/lib/cmsis_dsp/transform/libraries.cmsis_dsp.transform.cq15 PASSED (qemu 2.372s)
INFO    - 19/20 mps2_an521                tests/lib/cmsis_dsp/transform/libraries.cmsis_dsp.transform.rf32 PASSED (qemu 2.466s)
INFO    - 20/20 mps2_an521                tests/lib/cmsis_dsp/filtering/libraries.cmsis_dsp.filtering.misc PASSED (qemu 2.739s)

INFO    - 20 of 20 test configurations passed (100.00%), 0 failed, 9 skipped with 0 warnings in 13.38 seconds
INFO    - In total 2165 test cases were executed, 1886 skipped on 1 out of total 381 platforms (0.26%)
INFO    - 20 test configurations executed on platforms, 0 test configurations were only built.
INFO    - Saving reports...
INFO    - Writing xunit report /home/stephanos/Dev/zephyrproject/zephyr/twister-out/twister.xml...
INFO    - Writing xunit report /home/stephanos/Dev/zephyrproject/zephyr/twister-out/twister_report.xml...
INFO    - Run completed

Inside Docker:

$ scripts/twister -v -N -p mps2_an521 -T tests/lib/cmsis_dsp
Renaming output directory to /workdir/zephyr/twister-out.15
INFO    - Zephyr version: zephyr-v2.6.0-1471-gedb78537f9cf
INFO    - JOBS: 60
INFO    - Using 'zephyr' toolchain.
INFO    - Building initial testcase list...
INFO    - 29 test scenarios (29 configurations) selected, 9 configurations discarded due to filters.
INFO    - Adding tasks to the queue...
INFO    - Added initial list of jobs to queue
INFO    -  1/20 mps2_an521                tests/lib/cmsis_dsp/transform/libraries.cmsis_dsp.transform.rq15 PASSED (qemu 2.362s)
INFO    -  2/20 mps2_an521                tests/lib/cmsis_dsp/matrix/libraries.cmsis_dsp.matrix.unary_q15 PASSED (qemu 2.262s)
INFO    -  3/20 mps2_an521                tests/lib/cmsis_dsp/statistics/libraries.cmsis_dsp.statistics PASSED (qemu 2.692s)
INFO    -  4/20 mps2_an521                tests/lib/cmsis_dsp/complexmath/libraries.cmsis_dsp.complexmath PASSED (qemu 2.399s)
INFO    -  5/20 mps2_an521                tests/lib/cmsis_dsp/transform/libraries.cmsis_dsp.transform.rf32 PASSED (qemu 2.824s)
INFO    -  6/20 mps2_an521                tests/lib/cmsis_dsp/matrix/libraries.cmsis_dsp.matrix.unary_q31 PASSED (qemu 2.302s)
INFO    -  7/20 mps2_an521                tests/lib/cmsis_dsp/matrix/libraries.cmsis_dsp.matrix.binary_f32 PASSED (qemu 2.297s)
INFO    -  8/20 mps2_an521                tests/lib/cmsis_dsp/matrix/libraries.cmsis_dsp.matrix.binary_q31 PASSED (qemu 2.369s)
INFO    -  9/20 mps2_an521                tests/lib/cmsis_dsp/filtering/libraries.cmsis_dsp.filtering.biquad PASSED (qemu 2.261s)
INFO    - 10/20 mps2_an521                tests/lib/cmsis_dsp/distance/libraries.cmsis_dsp.distance PASSED (qemu 2.247s)
INFO    - 11/20 mps2_an521                tests/lib/cmsis_dsp/basicmath/libraries.cmsis_dsp.basicmath PASSED (qemu 2.661s)
INFO    - 12/20 mps2_an521                tests/lib/cmsis_dsp/svm/libraries.cmsis_dsp.svm    PASSED (qemu 2.181s)
INFO    - 13/20 mps2_an521                tests/lib/cmsis_dsp/filtering/libraries.cmsis_dsp.filtering.misc PASSED (qemu 2.616s)
INFO    - 14/20 mps2_an521                tests/lib/cmsis_dsp/matrix/libraries.cmsis_dsp.matrix.unary_f32 PASSED (qemu 2.162s)
INFO    - 15/20 mps2_an521                tests/lib/cmsis_dsp/bayes/libraries.cmsis_dsp.bayes PASSED (qemu 2.186s)
INFO    - 16/20 mps2_an521                tests/lib/cmsis_dsp/fastmath/libraries.cmsis_dsp.fastmath PASSED (qemu 2.260s)
INFO    - 17/20 mps2_an521                tests/lib/cmsis_dsp/matrix/libraries.cmsis_dsp.matrix.unary_f64 PASSED (qemu 2.343s)
INFO    - 18/20 mps2_an521                tests/lib/cmsis_dsp/filtering/libraries.cmsis_dsp.filtering.fir PASSED (qemu 2.142s)
INFO    - 19/20 mps2_an521                tests/lib/cmsis_dsp/transform/libraries.cmsis_dsp.transform.cq15 PASSED (qemu 2.402s)
INFO    - 20/20 mps2_an521                tests/lib/cmsis_dsp/support/libraries.cmsis_dsp.support PASSED (qemu 2.356s)

INFO    - 20 of 20 test configurations passed (100.00%), 0 failed, 9 skipped with 0 warnings in 20.56 seconds
INFO    - In total 2165 test cases were executed, 1886 skipped on 1 out of total 381 platforms (0.26%)
INFO    - 20 test configurations executed on platforms, 0 test configurations were only built.
INFO    - Saving reports...
INFO    - Writing xunit report /workdir/zephyr/twister-out/twister.xml...
INFO    - Writing xunit report /workdir/zephyr/twister-out/twister_report.xml...
INFO    - Run completed

carlescufi commented 3 years ago

what are people's thoughts on bundling a Linux virtual machine that already has all the necessary host tools, toolchains, and even the Zephyr source installed?

As mentioned before, we already have a Docker image for that. While this is a good solution for some, it certainly is not for many. Most developers prefer to have the tools they need installed directly on their operating system, and Docker is not a good option on macOS or Windows anyway. So I propose we continue to offer the Docker image all the same, but we should not rely on it as the main mechanism to start developing on Zephyr.

galak commented 3 years ago

I think we should have some rpm package format and possibly dropping Snaps on Linux.

stephanosio commented 3 years ago

I think we should have some rpm package format

rpm support does sound reasonable, especially noting that there are many developers on Fedora.

and possibly dropping Snaps on Linux.

I am not sure about that. There are many different types and versions of Linux distributions with different dependencies. We might be able to (sort of) get away with the toolchains by statically linking everything, but that is not really feasible for the host tools, and we will need something like Yocto to address this -- and that's where Snap, which provides a known base system, comes into play.

I think native deb (for the latest Ubuntu LTS) and rpm (for the latest RHEL release, which should also be compatible with the relevant version of Fedora) packages + Snap (for the rest) would be a reasonable target.

galak commented 3 years ago

I am not sure about that. There are many different types and versions of Linux distributions with different dependencies. We might be able to (sort of) get away with the toolchains by statically linking everything, but that is not really feasible for the host tools, and we will need something like Yocto to address this -- and that's where Snap, which provides a known base system, comes into play.

I think native deb (for the latest Ubuntu LTS) and rpm (for the latest RHEL release, which should also be compatible with the relevant version of Fedora) packages + Snap (for the rest) would be a reasonable target.

So I think rpm, deb, and the existing tarball. I don't know how to judge the usage/acceptance of Snap. I'm not aware of many, if any, queries for Snap support, and it feels like an Ubuntu-centric solution that the .deb/Ubuntu packages would cover.

My 2 cents would be to either poll people to see if they'd use a Snap, and/or mark the support as experimental to start with. I'm not sure if we'd have a way to track downloads or access when hosting a Snap.

stephanosio commented 3 years ago

So I think rpm, deb, and existing tarball.

By "existing tarball," do you mean tarball with Yocto sysroot?

If so, in the context of per-component distribution (tarball), we will need to distribute Yocto sysroot per component (e.g. QEMU tarball containing its own sysroot, OpenOCD tarball containing its own sysroot, ...), which can be quite wasteful due to the duplicates.

I don't know how to judge the usage/acceptance of Snap. I'm not aware of many, if any, queries for Snap support and it feels like an Ubuntu-centric solution

Snap is indeed developed and maintained by Canonical Ltd, which is the company responsible for Ubuntu. Regardless of the ownership and politics, objectively speaking, Snap seemed to be our best option for cross-distro support while keeping things sane (i.e. without Yocto sysroot per component).

For those who are not familiar with Snap, it provides Ubuntu Core base environment on which packaged programs can run -- meaning, as long as you build your program targeting Ubuntu Core (which is just a stripped down version of Ubuntu), it will work on all distros supported by Snap, and that includes Arch, Debian, Fedora, Kali, openSUSE, RHEL, Solus, elementary OS, GalliumOS, Linux Mint, Raspberry Pi OS and Ubuntu.

My 2 cents would be either we try and poll people to see if they'd use a Snap and/or mark the support as experimental to start with.

I think marking it experimental would be a good starting point, as you suggest. I am sure users on distros not covered by our deb and rpm packages will appreciate the value of having Snap packages. For the majority of users who can simply install the deb or rpm packages, though, Snap support would be redundant.

Not sure if we'd have some way to track downloads or access with hosting a snap.

Snapcraft, which is the platform used for building and distributing Snap packages, provides statistics.

stephanosio commented 3 years ago

Re: Renaming "Zephyr SDK" because the term "SDK" refers to "BSP" (as discussed in the last TSC meeting)

My suggestion is to rename "Zephyr SDK" to "Zephyr Tools," because it is quite literally a collection of tools for Zephyr development.

galak commented 3 years ago

Re: Renaming "Zephyr SDK" because the term "SDK" refers to "BSP" (as discussed in the last TSC meeting)

My suggestion is to rename "Zephyr SDK" to "Zephyr Tools," because it is quite literally a collection of tools for Zephyr development.

Would we merge the nettools into this? Maybe 'buildtools'

tejlmand commented 3 years ago

Toolchain WG proposals, see individual proposals.

tejlmand commented 3 years ago

Proposal 1:

tejlmand commented 3 years ago
stephanosio commented 3 years ago

Comments from the Toolchain WG meeting today (2021-08-10)

  1. For toolchain versioning scheme, maybe consider using date instead of the base GCC version (e.g. 2021.08)
  2. For toolchains, do not make per-target releases; only make releases common to all targets, since keeping track of the per-target and common changes for all the different targets can be quite confusing and error-prone -- it probably is not worth the effort.
  3. For Linux cross-distro support, do all of the following:
    • Provide "universal" tarballs with Yocto sysroot (basically what we have in sdk-ng now)
    • Provide "native" packages for common distros (deb for Ubuntu, and rpm for Fedora)
    • Drop Snap since it comes with too many limitations (mainly sandboxing)
  4. For building SDK components:
    • For "universal" tarballs, use Yocto (basically what sdk-ng does now)
    • For "native" packages, use the build system provided by the package management system
  5. For tarballs, do not bother releasing per-component tarballs and only make "cumulative distribution archive releases" (refer to the technical proposed change 5-ii above).
  6. Implement CI test automation pipeline
    • Test "universal" tarballs, "native" packages and other forms of SDK component releases in an automated manner for quality assurance and also to reduce maintenance overhead.
    • Packages for different distros (e.g. Ubuntu and Fedora) should be automatically tested too.
    • Maybe install the SDK components and run twister to verify that they work under every supported environment type.
  7. Make a proof-of-concept implementation in the current sdk-ng repository
    • Implement per-component CMake package support in sdk-ng
    • Implement full macOS and Windows support (in tarball/zip forms)
    • Maybe CI test automation pipeline here too?
    • Maybe re-purpose this repository for building "universal" tarballs/"cumulative distribution archive releases."

Please let me know if I missed any.
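Regarding item 7 (per-component CMake package support), the idea could be sketched roughly as follows; the file name, package name, and variables below are hypothetical illustrations, not the final sdk-ng layout. Each component would install a CMake package config file that `find_package()` can discover:

```cmake
# Hypothetical ZephyrToolchainConfig.cmake shipped inside a per-component
# toolchain package; all names here are illustrative only.
get_filename_component(_toolchain_root "${CMAKE_CURRENT_LIST_DIR}/../.." ABSOLUTE)

set(ZephyrToolchain_TARGET     arm-zephyr-eabi)
set(ZephyrToolchain_C_COMPILER "${_toolchain_root}/bin/arm-zephyr-eabi-gcc")
set(ZephyrToolchain_FOUND      TRUE)
```

The Zephyr build system could then pick up whichever components happen to be installed via something like `find_package(ZephyrToolchain HINTS $ENV{ZEPHYR_SDK_INSTALL_DIR})`, instead of hard-coding a single monolithic SDK path.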

tejlmand commented 3 years ago

@stephanosio it seems brew is available for Linux. As brew is the de facto package manager on macOS and we're going to use it there, I think it would make sense to look into making brew packages for Linux as well. That should minimize the needed work, as I assume building a brew package on Linux is straightforward once we have the infrastructure in place for macOS.

And brew doesn't suffer the snap limitations. https://docs.brew.sh/Homebrew-on-Linux

Note: I have zero experience with brew on Linux, so if you have already tried and found that it's not worth the effort, please comment.

stephanosio commented 3 years ago

I think it would make sense to look into making brew packages for Linux as well.

There are quite a few problems with Homebrew on Linux (aka. Linuxbrew):

  1. While it provides a self-contained library system to some degree, it still depends on the host glibc and gcc, which can be problematic on conservative distros like RHEL (or CentOS, for that matter).
  2. Linuxbrew user base seems to be relatively small and the support for it does not seem to be that great (we will likely encounter many problems in the future if we decide to go this route).
  3. Linuxbrew does not provide an extensive set of "bottles" (prebuilt binary packages), so it tries to locally build many dependency packages from source.
    • Even for x86-64, the bottle for a common dependency like ncurses is missing, so it builds it from source locally -- which, in my case, took 20 minutes...
    • It does not provide any bottles for AArch64, so it is practically unusable for AArch64 Linux.

IMO, no. 3 is really a showstopper.

As per the last week's discussion, we can drop Snap and provide Yocto-based "universal" tarballs for the people who are not on the mainstream distros like Ubuntu and Fedora.

tejlmand commented 3 years ago

I think it would make sense to look into making brew packages for Linux as well.

There are quite a few problems with Homebrew on Linux (aka. Linuxbrew):

Thanks, completely agree, brew is not an alternative on Linux.

mkschreder commented 2 years ago

Regarding Windows support - that is absolutely not necessary, because Windows has very nice Ubuntu support through WSL2 and Zephyr builds on that without any issues.

stephanosio commented 2 years ago

Regarding Windows support - that is absolutely not necessary, because Windows has very nice Ubuntu support through WSL2 and Zephyr builds on that without any issues.

Until you want to flash/debug your targets and do various other USB-related things, or integrate directly with a native Windows IDE.

koffes commented 2 years ago

@mkschreder : Although WSL2 works nicely, it adds a whole new layer of complexity which we want to shield users from.

beriberikix commented 2 years ago

While you can use WSL2 to flash/debug (I wrote about it here), and I do, many companies still require the use of Windows and Windows tools. Even so, when we do trainings, more than two-thirds of participants are on Windows on their personal machines. Having Windows support is rather important, for better or for worse :)

cfriedt commented 1 year ago

After having spent some time trying to package, build, sell, .. the Zephyr SDK in an appealing way, I actually feel that something like this proposal is probably the best possible solution to the distribution problem.

Almost all package managers have this concept of adding an external mirror or package source.

Zephyr should simply host a repository for various package managers. The CI SDK build already does the whole matrix.

Why not, e.g., throw .rpms or .debs into the mix as well? For macOS and Windows, I would suggest just distributing binary packages through the respective repositories (via Chocolatey or Homebrew).

The reason I say this is that it is far easier for big companies to standardize on a preferred package format and approve certain external repositories than it is to keep up with what's happening in a few dozen GitHub projects.
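To make the "approve certain external repositories" point concrete: on the apt side, this would boil down to a one-line source entry. The repository URL, keyring path, and package name below are invented for illustration -- no such repository exists as of this writing:

```
# /etc/apt/sources.list.d/zephyr-sdk.list (hypothetical)
deb [signed-by=/usr/share/keyrings/zephyr-archive-keyring.gpg] https://packages.zephyrproject.org/apt stable main
```

After an `apt update`, a user (or corporate IT) could then install, say, just the ARM toolchain via a single `apt install` of a hypothetical `zephyr-sdk-toolchain-arm` package, and receive updates through the normal system upgrade path.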

marc-hb commented 1 year ago

Zephyr should "simply" host a repository for various package managers. The CI SDK build already does the whole matrix.

(quotes mine)

Perfect solution, except for the small implementation detail of the "packaging hero" capable of juggling many different VMs to deploy, test and release simultaneously across all the wildly different operating systems and packaging formats -- not forgetting some server administration skills and knowledge of how the various online repos are structured. https://en.wikipedia.org/wiki/Bus_factor

Most of this should of course be automated, but it still seems like a very large amount of work. There is probably a reason why the people packaging Linux software are most often different from the ones writing it (despite crazy amounts of automation), and why brand-new Linux packaging solutions like Snap, AppImage, Flatpak and whatnot are being aggressively explored.

Simple archive-based distribution (i.e. tarball) should still be available alongside the package-based distribution to support more traditional workflow.

Thank you. Low-tech = missing nice things but Just Working.

Most developers prefer to have the tools they need installed directly on their operating system

Having to manage not just one but two systems is significant overhead and sharing across both systems can be super painful: https://github.com/thesofproject/sof/blob/0a4b1d62d5/scripts/sudo-cwd.sh

On the other hand, this image is FANTASTIC for CI and as a reference for quickly finding configuration problems, thanks for it!

zephyrbot commented 9 months ago

Hi @stephanosio,

This issue, marked as an RFC, was opened a while ago and did not get any traction. Please confirm the issue is correctly assigned and re-assign it otherwise.

Please take a moment to review if the issue is still relevant to the project. If it is, please provide feedback and direction on how to move forward. If it is not, has already been addressed, is a duplicate, or is no longer relevant, please close it with a short comment explaining the reason.

Thanks!

stephanosio commented 9 months ago
mkschreder commented 8 months ago

I would avoid Snap. Better to provide native packages (deb, apk, etc.) if packaging support is needed.