Contributions from ghdl/docker #33

Closed · eine closed this issue 2 years ago

eine commented 5 years ago

As discussed with @oleg-nenashev after his What's new in LibreCores CI? talk at ORConf2019, I believe there is room for collaboration between ghdl/docker and librecores. I did not bring this issue up before because of librecores/ci.librecores.org#5. Now that the lack of contribution guidelines is acknowledged, and since I have gained some first-hand insight about what to expect from them, I think we can discuss technical details.

Context

ghdl is an open-source analyzer, compiler and simulator for VHDL. It also has experimental support for synthesis (which generates a VHDL netlist). Moreover, tgingold/ghdlsynth-beta allows GHDL to be used as a frontend for YosysHQ/yosys. Along with YosysHQ/SymbiYosys, this makes formal verification with VHDL possible. In fact, Open Source Formal Verification in VHDL was a talk by @pepijndevos at ORConf2019.

ghdl/docker is the repo where all the ghdl/* docker images are defined, built and published. On the one hand, GHDL is tested on Debian, Fedora, Ubuntu, Windows and macOS. On the other hand, GHDL provides multiple optional but very useful features, such as a language server with plugins for vscode/emacs, or the already mentioned ghdlsynth-beta plugin. As a result, we currently maintain ~100 images.

We are not completely happy with maintaining a subset (~6) of those images which do not contain any dependencies specific to GHDL. They exist only because the upstream projects at YosysHQ do not provide official docker images. We tried to contribute to those projects, but the maintainers do not seem to be interested in providing/maintaining docker images. See YosysHQ/yosys#1152, YosysHQ/yosys#1285, YosysHQ/yosys#1287, YosysHQ/SymbiYosys#58, cliffordwolf/icestorm#77, etc.

The images that I'd like to migrate from GHDL to librecores are the ones in that subset. The sections below discuss the technical details.

Base image

At ghdl/docker, we use Debian Buster (debian:buster-slim) as the base image for all the ghdl/synth:* images. Here, the librecores-ci image is based on ubuntu:16.04 (https://github.com/librecores/docker-images/blob/master/librecores-ci/Dockerfile#L23).

Since Ubuntu is based on Debian, the same dockerfiles can be reused with one or more --build-arg options to build multiple images with the same features/tools but different bases, as sketched below. However, this slightly increases the maintenance effort.
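
A minimal sketch of this pattern (the image names, tags and package list are hypothetical):

```dockerfile
# Hypothetical sketch: one Dockerfile, parameterized by the base image.
# Both Debian and Ubuntu use apt, so the same steps work on either base.
ARG IMAGE=debian:buster-slim
FROM $IMAGE

RUN apt-get update -qq \
 && apt-get install -y --no-install-recommends ca-certificates curl \
 && rm -rf /var/lib/apt/lists/*
```

Passing --build-arg IMAGE=ubuntu:18.04 or --build-arg IMAGE=debian:buster-slim then produces two images with the same features from the same file.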

IMHO, keeping both the latest Ubuntu LTS (Bionic, 18.04) and Debian stable (Buster, 10) is worth it: Debian Buster is used as a robust base in many companies, and the SD card images available for boards (such as PYNQ) are based on Ubuntu LTS.

monoimage vs per-tool images

The current approach in this repo is to install all the tools in a single image (see https://github.com/librecores/docker-images/blob/master/librecores-ci/Dockerfile#L23). This makes the image easier to distribute/use, since all users follow exactly the same instructions. However, on the one hand, the image is larger than required for users looking for a single tool/feature. This is especially relevant in CI environments, where images are pulled constantly. On the other hand, it is difficult to put a limit on which tools should or shouldn't be included.

To be precise, librecores-ci includes fusesoc, iverilog, verilator, yosys and cocotb, but none of gtkwave, symbiyosys, nextpnr, icestorm, GHDL or VUnit. A single image containing all of them would be too large. That's why I suggest a modular approach: one image per tool, with just the minimum dependencies for it to work. Of course, some images can be based on others; for example, symbiyosys requires yosys, as sketched below.
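
A hypothetical sketch of such layering (the base image name and the simplified, unpinned install recipe are assumptions, not the actual ghdl/docker recipes):

```dockerfile
# Hypothetical sketch: a per-tool image for SymbiYosys, layered on a yosys image.
FROM librecores/ci:yosys

RUN apt-get update -qq \
 && apt-get install -y --no-install-recommends ca-certificates git make python3 \
 && git clone https://github.com/YosysHQ/SymbiYosys /tmp/sby \
 && make -C /tmp/sby install \
 && rm -rf /tmp/sby /var/lib/apt/lists/*
```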

The mentioned images from ghdl/docker are an example of this approach. There is a snippet in https://github.com/tgingold/ghdlsynth-beta#docker which shows how to use the beta, nextpnr and icestorm images to synthesize and program an icestick.

NOTE: it is possible to program FPGA boards from Docker Desktop: https://github.com/ghdl/docker/tree/master/usbip

multi-stage builds

Docker's multi-stage builds allow images to be slimmed down by keeping build dependencies explicitly separated from runtime dependencies.

Currently, no cleanup is performed in https://github.com/librecores/docker-images/blob/master/librecores-ci/Dockerfile (ref #5). Conversely, multi-stage builds are used intensively in ghdl/docker. For example: https://github.com/ghdl/docker/blob/master/dockerfiles/cache_yosys
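
A minimal sketch of the idea (not the actual cache_yosys recipe; package names and build options are illustrative):

```dockerfile
# Hypothetical sketch: build yosys in a throwaway stage, then copy only the
# installed artifacts into a slim runtime stage.
FROM debian:buster-slim AS build
RUN apt-get update -qq \
 && apt-get install -y --no-install-recommends \
      build-essential bison flex git ca-certificates \
      libreadline-dev tcl-dev libffi-dev pkg-config python3 zlib1g-dev
RUN git clone https://github.com/YosysHQ/yosys /tmp/yosys \
 && make -C /tmp/yosys -j$(nproc) \
 && make -C /tmp/yosys install DESTDIR=/opt/yosys

FROM debian:buster-slim AS runtime
# Runtime dependencies only; the build toolchain stays in the discarded stage.
RUN apt-get update -qq \
 && apt-get install -y --no-install-recommends libreadline7 tcl libffi6 python3 \
 && rm -rf /var/lib/apt/lists/*
COPY --from=build /opt/yosys /
```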

Moreover, by enabling DOCKER_BUILDKIT when images are built, intermediate stages that the target does not require are skipped. On the one hand, this speeds up builds while allowing unrelated tools/steps to be defined in the same file. On the other hand, it is useful for sharing a single dockerfile across multiple archs/OSes.
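
Continuing the sketch above (stage and image names are hypothetical):

```sh
# With BuildKit enabled, only the stages that the requested target
# (transitively) depends on are built; unrelated stages are skipped.
DOCKER_BUILDKIT=1 docker build --target runtime -t librecores/ci:yosys .
```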

multiarch images and manifests

Combining Docker and QEMU, it is possible to build docker images for foreign architectures (e.g. arm32v7 or arm64v8). Project dbhi/qus provides a lightweight, ready-to-use image for configuring the kernel on Docker Desktop, Travis CI, GitHub Actions, etc. dbhi/docker is another project that partially overlaps with ghdl/docker, as it provides multiarch images (amd64, arm32v7 and arm64v8) based on ubuntu:bionic, including GHDL, GtkWave, Python, etc.

Multiple tools (GHDL, icestorm, yosys, verilator, etc.) are supported on amd64, armv7/aarch32 and aarch64. Therefore, it is desirable to provide images for those architectures too.
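
Per-architecture images can then be grouped under a single name with a manifest list, so users pull one tag regardless of the host. A minimal sketch, assuming per-arch tags already exist and the experimental docker manifest CLI is enabled (image names hypothetical):

```sh
# Create and push a manifest list pointing at per-architecture images;
# `docker pull librecores/ci:yosys` then resolves per host architecture.
docker manifest create librecores/ci:yosys \
  librecores/ci:yosys-amd64 \
  librecores/ci:yosys-arm32v7 \
  librecores/ci:yosys-arm64v8
docker manifest push --purge librecores/ci:yosys
```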

On the one hand, images for arm/arm64 hosts allow open source EDA tools to be used not only on devices such as the Raspberry Pi or ROCK960, but also on ZYNQ/PYNQ/MicroZED/ZEDboard/Ultra96. Indeed, images from dbhi/docker are used on RPi, PYNQ and ROCK960 boards. This is useful for building low-cost Jenkins farms, and for software-hardware co-execution on SoCs.

On the other hand, QEMU and Docker can be used on amd64 workstations/servers to avoid cross-compilation and/or for CI testing of apps for foreign architectures. For instance, binaries built in an arm32v7/ubuntu:bionic image on an amd64 workstation can be copied and executed successfully on a Xilinx board with a PYNQ SD image (versions v2.3 or v2.4). I.e., the same build scripts can be used, without any cross-compilation toolchain.
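
A minimal sketch of such a build on an amd64 host, assuming the QEMU binfmt handlers have already been registered (e.g. with dbhi/qus):

```sh
# Run a native (not cross) build inside an arm32v7 container on an amd64 host;
# QEMU transparently executes the ARM binaries.
docker run --rm -v "$PWD":/src -w /src arm32v7/ubuntu:bionic \
  bash -c 'apt-get update -qq && apt-get install -y build-essential && make'
# The resulting binaries can then be copied to e.g. a PYNQ board as-is.
```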

GitHub Actions

Although GitHub Actions is still in beta, it was announced that it will be available to all users in November. Independently of having any other (external) CI service, I think it'd be desirable to use this feature, since it provides tighter integration with the repo and the timeout is set to 6h. Furthermore, using GitHub's registry instead of, or in addition to, Docker Hub might be discussed.

Both ghdl/docker and dbhi/docker include examples of YAML workflows to build and publish docker images. However, versioning/tagging is not implemented in either of them, because the build scripts are written in bash and they are already hard enough to maintain.
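
For reference, a minimal workflow sketch (syntax as of the 2019 beta; the image name, paths and secret names are hypothetical):

```yaml
# Hypothetical sketch: build and push an image on every push to master.
name: docker
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: build
        run: docker build -t librecores/ci:yosys librecores-ci
      - name: push
        run: |
          echo "${{ secrets.DOCKER_PASS }}" | docker login -u "${{ secrets.DOCKER_USER }}" --password-stdin
          docker push librecores/ci:yosys
```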

Build toolkit

A relevant issue that I have not solved yet is that the dependency scheme of all the images built in librecores, ghdl/docker, dbhi/docker, vunit/docker, etc. is a DAG (a directed acyclic graph, not a tree). For example:

      +-------------+  +------------------+     +--------------------+
      |ubuntu:bionic|  |debian:buster-slim|     |other base images...|
      +-----+-------+  +--------+---------+     +--------------------+
            |                   |
      +-----+----+-----------------------+--------+
      |          |              |        |        |
      | +-----------+-----------+------+--------+ |
      v v        |  |                  v v      | |
build gtkwave    |  |               build yosys | |
      +          v  v                 +         v v
      |  runtime gtkwave              |     runtime yosys
      |        +                      |         +
      v        v                      v         v
    +-+--------+--------+           +-+---------+-------+
    |librecores/gui:base|           |librecores/ci:yosys|
    +--------+----------+           +---------+---------+
             |                                |
             |                                |
             |      +--------------------+    |
             +----->+librecores/gui:yosys+<---+
                    +--------------------+

Currently:

    B
  /   \
A       D
  \   /
    C

Therefore, the steps to build librecores/gui:yosys (D) on top of librecores/ci:yosys (B) are exactly the same as those required to build librecores/gui:base (C) on top of the base image (A).
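
In other words, a single parameterized dockerfile could cover both edges of the diamond. A hypothetical sketch (gui.dockerfile and the image names are assumptions):

```sh
# Build C on top of A, and D on top of B, from the same (hypothetical)
# gui.dockerfile; only the base image and the tag change.
docker build --build-arg IMAGE=ubuntu:bionic       -t librecores/gui:base  -f gui.dockerfile .
docker build --build-arg IMAGE=librecores/ci:yosys -t librecores/gui:yosys -f gui.dockerfile .
```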

There are multiple approaches to handle this complexity:


/cc @oleg-nenashev @Nancy-Chauhan @wallento @olofk

eine commented 2 years ago

Most of the proposals above are implemented in hdl/containers; hence, I'm closing this issue.