pulp-platform / axi

AXI SystemVerilog synthesizable IP modules and verification infrastructure for high-performance on-chip communication

run_vsim.sh: Run simulations in parallel #299

Open colluca opened 1 year ago

colluca commented 1 year ago

This is a feature request to speed up simulation and CI time. Simulations for different parameterizations of an IP could be run in parallel as independent processes.

colluca commented 1 year ago

I think this would be a valuable feature (also to speed up the CI) and I would offer to implement it. The easiest approach would be to rewrite the run_vsim.sh script in Python. How do you feel about this? @micprog @thommythomaso

thommythomaso commented 1 year ago

We should rather invest time in a proper Makefile to run things in parallel. I started this effort in the VCD sim branch (to be merged).

colluca commented 1 year ago

Two comments to understand what the plan for extending your Makefile is, since you mention it's partial:

1. IMO it doesn't make sense to use Makefiles when all but the `*.log` targets are PHONY. Moreover, the only dependency of the `*.log` targets is basically the `Bender.yml` file, not the actual HDL source files themselves, so most of the time you will end up force-running these rules as if they were PHONY. In this case, IMO sticking to the Makefile language (compared to another scripting language, e.g. Python) is just more restrictive than the benefits it brings, since we are not using the useful features of Make most of the time.

2. You are still using the `run_vcs.sh` file with the loops parameterizing and simulating the MUT.

Regarding 1

I can extend your Makefile to add the source files of a specific TB and MUT as dependencies of the respective target. If I'm not mistaken, this should be easy and safe using Bender. This would really justify having the Makefile: you simply run `make all` and it automatically figures out all and only what needs to be done.

Regarding 2

I wonder whether moving the loops into the Makefile could also make sense; each loop iteration would basically get its own simulation target. When you modify an HDL source file, all loop iterations for that MUT would run anyway, so there is no benefit from this point of view. From the parallelization point of view it would make sense. However, the same could be achieved by just modifying the `run_*.sh` scripts.

I thought about it, and bringing this logic into the Makefile would probably be complex and cryptic, with no reasonable advantage over the approach we are currently using. Here is an example: https://saveman71.com/2020/makefile-recursive-rules. In our case the Cartesian product would be between the MUTs and their respective loop variables/bounds, with the additional complexity of it not being a dense product.
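To illustrate why this product is not dense: each MUT sweeps its own variables with its own bounds, so the target list is a union of per-MUT dense products. A minimal Python sketch, with hypothetical MUT names and parameter bounds (the real loop variables live in the `run_*.sh` scripts):

```python
from itertools import product

# Hypothetical parameter sweeps per module under test (MUT); the real
# loop variables and bounds live in run_vsim.sh / run_vcs.sh.
MUT_SWEEPS = {
    "axi_xbar": {"NumMst": [1, 4], "NumSlv": [1, 4]},
    "axi_dw_converter": {"DwIn": [32, 64], "DwOut": [32, 64, 128]},
}

def simulation_targets(sweeps):
    """Flatten the per-MUT sweeps into one (mut, params) pair per loop
    iteration. Each MUT has its own variables and bounds, so the result
    is a union of small dense products, not one dense product."""
    targets = []
    for mut, params in sweeps.items():
        names = sorted(params)
        for values in product(*(params[n] for n in names)):
            targets.append((mut, dict(zip(names, values))))
    return targets
```

Each entry in the returned list would then correspond to one simulation target (or one parallel job).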

So for this point I still propose to rewrite the `run_*.sh` scripts in Python and easily parallelize them there. Of course, let me know if you think differently; I'm glad to have your feedback.
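A minimal sketch of what such a Python rewrite could look like, using only the standard library. The `vsim` invocation and testbench names are illustrative assumptions (the actual flags would come from the existing `run_vsim.sh`); `-g` overrides a top-level parameter in Questa/ModelSim:

```python
import shlex
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_simulation(mut, params, dry_run=False):
    """Run one parameterization of a MUT. The vsim command line below is
    illustrative only; the real flags live in scripts/run_vsim.sh."""
    defines = " ".join(f"-g{k}={v}" for k, v in sorted(params.items()))
    cmd = f"vsim -c -do 'run -all; quit' {defines} tb_{mut}"
    if dry_run:
        return (mut, params, 0)  # report success without a simulator
    result = subprocess.run(shlex.split(cmd), capture_output=True)
    return (mut, params, result.returncode)

def run_all(targets, jobs=4, dry_run=False):
    """Launch the targets as independent simulator processes, at most
    `jobs` at a time -- roughly what `make -jN` would give for free."""
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        futures = [pool.submit(run_simulation, m, p, dry_run)
                   for m, p in targets]
        return [f.result() for f in futures]
```

Threads suffice here because each job blocks in `subprocess.run` waiting on an external simulator process, so the GIL is not a bottleneck.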

thommythomaso commented 1 year ago


As I said, it is incomplete.

To 1) Running only changed files when you call `make all` is not a requirement. The idea is to easily run everything in parallel on your machine if you want to ensure all still pass, or to run them one by one.

To 2) Of course. Move it all into a nicely constructed Makefile. I don't like the boilerplate code that you need to write in order to run something in a magical version of Python that is maybe not installed on your target system. Make is specifically constructed to run such tasks in parallel, so let's use it.