I hacked this to work in Docker by creating a bash wrapper script named `nvc` that inserts `-M` as the first command-line argument, ahead of the arguments generated by VUnit. Looking at https://github.com/VUnit/vunit/blob/master/vunit/sim_if/nvc.py, there doesn't appear to be a way to do that without changing nvc.py. A mechanism does exist for the heap size, but the heap option alone (without `-M`) didn't resolve the out-of-memory issue with my large design.
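For reference, a minimal sketch of that wrapper idea, here in Python instead of the bash script described above; the path to the real binary is an assumption and must be adjusted:

```python
#!/usr/bin/env python3
# Hypothetical "nvc" wrapper placed earlier on PATH than the real binary.
# It injects the global -M option ahead of the arguments VUnit generates.
import os
import sys

REAL_NVC = "/usr/local/bin/nvc"  # assumed install location of the real nvc

os.execv(REAL_NVC, [REAL_NVC, "-M", "64m"] + sys.argv[1:])
```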
I see where `-H 64m -e` is coming from: https://github.com/VUnit/vunit/blob/master/vunit/sim_if/nvc.py#L254-L256. But I don't see where `-M 64m` is coming from. @mschiller-nrao did you set `elab_flags` to `-M 64m`? Did you try setting `heap_size` to `64m -M 64m` instead? It might fail because it expects a list of arguments instead of them space-separated in a single string, but it's worth a try.
/cc @nickg
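Expressed as run.py lines, the two suggestions would look roughly like this (a sketch, continuing a `vu` object as in the snippets below; as noted above, the second variant may fail because the value is a single string rather than a list of arguments):

```python
# Suggestion 1: pass -M via the elaboration flags (these are appended around
# the "-e" command, so -M may not land in the global-option position):
vu.set_sim_option("nvc.elab_flags", ["-M", "64m"])

# Suggestion 2: smuggle -M through the heap-size string:
vu.set_sim_option("nvc.heap_size", "64m -M 64m")
```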
So I tried sending it into `elab_flags` as in the commented-out line below:

```python
vu.set_compile_option("ghdl.a_flags", ["-frelaxed", "-fsynopsys", "-fexplicit", "-Wno-hide"])
vu.set_compile_option("nvc.a_flags", ["--relaxed"])
vu.set_sim_option("ghdl.elab_flags", ["-frelaxed", "-fsynopsys", "-fexplicit", "--syn-binding"])
vu.set_sim_option("ghdl.sim_flags", ["--ieee-asserts=disable", "--max-stack-alloc=4096"])
vu.set_sim_option("nvc.heap_size", "64m")
#vu.set_sim_option("nvc.elab_flags", ["-M64m"])
vu.set_sim_option("disable_ieee_warnings", True)
vu.set_sim_option("modelsim.vsim_flags.gui", ["-voptargs=+acc"])
vu.main()
```
But that's an interesting point: I don't think there's any error checking on `nvc.heap_size`, so I could probably do `"128m -M64m"`.
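That trick would look like the line below; note, though, that VUnit runs the simulator without a shell, so the whole string would likely reach NVC as a single argv element rather than being split:

```python
# Hypothetical abuse of the unvalidated heap-size string; NVC would receive
# "128m -M64m" as one argument to -H, so this probably fails to parse.
vu.set_sim_option("nvc.heap_size", "128m -M64m")
```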
Perhaps we should add a `nvc.global_flags` option? Something like:
```diff
diff --git a/vunit/sim_if/nvc.py b/vunit/sim_if/nvc.py
index c3391fe05da2..de0e3ed1ecef 100644
--- a/vunit/sim_if/nvc.py
+++ b/vunit/sim_if/nvc.py
@@ -39,6 +39,7 @@ class NVCInterface(SimulatorInterface):  # pylint: disable=too-many-instance-att
     ]
 
     sim_options = [
+        ListOfStringOption("nvc.global_flags"),
         ListOfStringOption("nvc.sim_flags"),
         ListOfStringOption("nvc.elab_flags"),
         StringOption("nvc.heap_size"),
@@ -225,6 +226,8 @@ class NVCInterface(SimulatorInterface):  # pylint: disable=too-many-instance-att
             source_file.get_vhdl_standard(), source_file.library.name, source_file.library.directory
         )
 
+        cmd += source_file.compile_options.get("nvc.global_flags", [])
+
         cmd += ["-a"]
         cmd += source_file.compile_options.get("nvc.a_flags", [])
@@ -252,6 +255,7 @@ class NVCInterface(SimulatorInterface):  # pylint: disable=too-many-instance-att
         cmd = self._get_command(self._vhdl_standard, config.library_name, libdir)
         cmd += ["-H", config.sim_options.get("nvc.heap_size", "64m")]
+        cmd += config.sim_options.get("nvc.global_flags", [])
         cmd += ["-e"]
```
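Based on the diff above, the new option would then be used from a run.py like so (a sketch against the proposed patch, continuing the `vu` object from the earlier snippet):

```python
# With the proposed option, these flags are inserted before the "-e" command,
# i.e. in NVC's global-option position, which is where -M must go:
vu.set_sim_option("nvc.global_flags", ["-M", "64m"])
```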
Hi, I am hitting the same issue with the `-L` option, which makes NVC search for the `vunit` and `osvvm` libraries installed with `nvc --install`. `nvc.global_flags` sounds good.
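A sketch of that `-L` use case with the proposed option; the library path below is an assumption (a common location for libraries installed by `nvc --install`), so adjust it to your installation:

```python
from pathlib import Path

# Assumed location of the libraries installed by `nvc --install`:
nvc_lib = Path.home() / ".nvc" / "lib"

vu.set_sim_option("nvc.global_flags", ["-L", str(nvc_lib)])
```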
Should be fixed by #948.
@mschiller-nrao @Blebowski can you please confirm that the current master branch works for your use cases?
I can confirm that the latest NVC (1.10.0.r2.g2372f19d) and latest VUnit (c5edd327f5ff91f43a99aab9cb4a2ac9d1057832) work nicely for me.
Thanks @nickg and @umarcor
(Now I just need to build a Docker image with these two combined for CI... or just wait until these changes filter into the existing Docker images.)
@mschiller-nrao container image `sim/osvb` from hdl/containers includes master/main GHDL + NVC + Verilator + Icarus Verilog + VUnit + CoCoTb and pinned OSVVM. It's available in the following registries: `gcr.io/hdl-containers/sim/osvb`, `ghcr.io/hdl/sim/osvb` and `docker.io/hdlc/sim:osvb`.
There are also `sim/scipy`, `sim/octave` and `sim/gnuplot`, which include matplotlib, numpy, octave... on top of sim/osvb. See https://hdl.github.io/containers/ToolsAndImages.html.
Those images are relatively similar to the one used for CI in this repo. See https://github.com/VUnit/vunit/blob/master/.github/workflows/images.yml#L60-L70. The relation between all container images is shown graphically in https://hdl.github.io/containers/dev/Graphs.html.
Nevertheless, should you want/need something more specific, such as an `nvc/vunit` image which just installs master VUnit on top of `gcr.io/hdl-containers/nvc`, that's an easy enhancement. Please open an issue in hdl/containers.
Typically, images in hdl/containers are updated once a week. However, if I see any relevant issue, such as this one, I can manually trigger the workflows to have the desired set of images updated. In this case, NVC was not included in sim/osvb until yesterday, so all the images were generated in the last 12-24h and they include the latest NVC and VUnit.
@umarcor NICE! I wasn't aware of sim/scipy. I was using ghdl/vunit:llvm-master for GHDL and ghcr.io/vunit/dev/nvc:latest (into which I had to install VUnit in my CI script to make it work) for NVC originally, but jury-rigged this to work with both GHDL and NVC with my own Dockerfile:
```dockerfile
FROM ghdl/vunit:llvm-master
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y build-essential automake autoconf flex check llvm-dev pkg-config zlib1g-dev libdw-dev libffi-dev libzstd-dev git
RUN cd /tmp; git clone https://github.com/nickg/nvc.git; cd nvc; ./autogen.sh; mkdir build && cd build; ../configure; make; make install
RUN python3 -m pip install pytest --progress-bar off
RUN python3 -m pip install numpy --progress-bar off
RUN cd /tmp; git clone --recurse-submodules https://github.com/VUnit/vunit.git; cd vunit; pip3 install .
```
But it'll be better to use a publicly available image than my jury-rigged local one.
@mschiller-nrao for completeness, I maintain hdl/containers, ghdl/docker and the CI in this repo. To make it less cumbersome than it actually is, I use the same base image "everywhere". That is currently Debian Bullseye "slim", but I've been transitioning to Debian Bookworm "slim" over the last few weeks.
In ghdl/docker and hdl/containers, when a tool is built, it's saved as a "package" image (a `scratch` container with pre-built binaries and assets). Then, whenever that tool is to be included in an image, the package is copied instead of rebuilding it. As a result, all the images are using a single build of GHDL and a single build of NVC. VUnit can always be installed last because it does not need compilation and it depends on `colorama` only.
NOTE: This is only true since yesterday. I had not combined GHDL and NVC packages/pre-builts in a single image before, so I had not realised they were built with different versions of LLVM. Now both of them are built with LLVM 11 on Debian Bullseye and with LLVM 14 on Debian Bookworm.
So, the equivalent to your Dockerfile is:

- `ghdl/pkg:bullseye-llvm-11` (which corresponds to `ghdl/ghdl:bullseye-llvm-11`, which `ghdl/vunit:llvm-master` was based on until a few weeks ago: https://github.com/ghdl/docker/commit/2f11fa97c31e837cc20ab4254519098317a3f269#diff-d31ce0453051853c17ba2a5225b3d1bfab548e095bab0967d6acfd1b3ce1b35dL154) is used to build `gcr.io/hdl-containers/ghdl/llvm`: https://github.com/hdl/containers/blob/main/debian-bullseye/ghdl.dockerfile#L38
- `gcr.io/hdl-containers/sim` is based on `gcr.io/hdl-containers/ghdl/llvm`: https://github.com/hdl/containers/blob/main/debian-bullseye/sim.dockerfile#L31
- NVC is built there from sources (ending with `make install`), and some additional runtime dependencies are installed: https://github.com/hdl/containers/blob/main/debian-bullseye/sim.dockerfile#L39-L42. Some of those were contributed by @nickg in https://github.com/hdl/containers/blob/main/debian-bullseye/sim.dockerfile#L41-L42.

As you can see, it's almost the same. Therefore, I would recommend that you use sim/scipy, but keep your Dockerfile around. Should you need to quickly test some update to NVC, VUnit or GHDL, your Dockerfile will let you do so with a 24h delay at most since GHDL's last update. Conversely, sim/scipy might need up to one week (depending on my availability).
sim/scipy worked out of the box on my CI system, so that's pretty effective... Too bad the commercial tools don't have convenient images like this; I still have to build my own for Questasim and Vivado (and keep them internal for licensing reasons). But at least the open-source tools should be more reliable when I'm working on an open-source project and need both GitHub Actions and my internal GitLab runners to use an image.
(I tend to do first verification in Questasim manually, and then get GHDL, and now NVC, working so that my continuous integration doesn't require a Questasim license; CI might end up doing many, many runs depending on how prolifically engineers check in files. My CI is configured to allow Questa to be run, but it doesn't run automatically: an engineer has to manually run the pipeline for Questa and Vivado to avoid licensing issues. Though in the future, when I'm further along on my program, I do intend to make Vivado run automatically on production branches and such.)
Working with sim/scipy!
It appears that the nvc simulator only supports -M as a global option (i.e., before the "command" that tells nvc to analyze, elaborate or simulate). This suggests that it needs to be implemented the way "heap_size" currently is, so that -M64m (or whatever value sets the maximum elaboration size) can be set for a design.
This is necessary to support large designs in nvc.
E.g., the generated command does not work as emitted, but it would have worked if `-H 64m -e -M 64m` were `-H 64m -M 64m -e`.
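A minimal sketch of that ordering constraint, with `tb_top` as an illustrative top-level name:

```python
import subprocess

# Global options such as -H and -M must precede the command verb (-e);
# the generated command placed -M after -e, which NVC does not accept.
cmd = ["nvc", "-H", "64m", "-M", "64m"]  # global options first
cmd += ["-e", "tb_top"]                  # then the elaborate command
subprocess.run(cmd, check=True)
```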