VHDL / Compliance-Tests

Tests to evaluate the support of VHDL 2008 and VHDL 2019 features
https://vhdl.github.io/Compliance-Tests/
Apache License 2.0

Skeleton 2019 #13

Closed bpadalino closed 1 year ago

bpadalino commented 1 year ago

Adding skeleton 2019 tests. Not exhaustive, but looking to cover the basics for parsing at least.

bpadalino commented 1 year ago

I'd prefer to keep the tests targeting specific LRM changes for VHDL-2019 as simple and easy as possible. In this case, it comes directly from the LRM and the example on the twiki, which I think should be the minimum viable test for this.

Paebbels commented 1 year ago

My changes check the same LRM change. The proposed test only checks whether 2 items are ordered; that's a very trivial case. To check that it's correctly implemented, chaining should also be tested.

So if we go with simple tests first, then I propose converting each test case corresponding to an LRM change into a directory with a single VHDL file. Others can then add more elaborate checks to that directory later.

bpadalino commented 1 year ago

I feel like a whole new directory is unnecessary. The name of each test is deliberately tb_<lrm-change>.vhdl so it can track one specific thing. The idea behind these tests is just to do the absolute bare minimum for parsing and testing.

I'm OK adding an extra test below this one in the file which extends the strict LRM example, and I think that having both would give us better granularity of what the issue is with the tool: basic compliance, or more complex usage?

When coming up with more complex compliance tests, I think we can utilize other filenames.

That should work, right?

LarsAsplund commented 1 year ago

I would also suggest keeping the examples as simple as possible and only focus on the new language constructs in isolation. With larger tests we risk having failures because of other unsupported features that are not the focus of the test.

I would also suggest we name tests after the feature and not the LRM reference. Someone not very familiar with the LRM references should be able to run the tests with their simulator and understand what is supported just by reading the test report.

[screenshot: example test report]

I can give an example of how an HTML report can be generated, listing the test report, the pass/fail status, and a link to the LRM.

bpadalino commented 1 year ago

I personally don't care about the name of the file; I used the LRM reference to be as concise as possible. I added a link to the twiki page, but it would probably be better as an LRM reference.

With that being said, and knowing that this is still a draft, I'd much rather prefer if the discussion revolved a little more around all the tests with TODO in them. There are 24 of them where I didn't have a quick/easy test.

I plan on taking some time this weekend to fill those in, but if you had some suggestions for code modifications there I'd appreciate that.

umarcor commented 1 year ago

I didn't have time to read this conversation thoroughly. I hope to do it later today, along with fixing the shield in the readme and having a general look at the repo. Yet, let me chime in to point out that https://github.com/VHDL/Compliance-Tests/blob/main/issues/LCS2019.yml exists. We should keep the items in the YAML file properly linked to the VHDL files introduced in this PR. E.g., we can let mwe be a list of files in the vhdl_2019 subdir, instead of a hardcoded multi-line code block.

With regard to having simple vs complex examples, I believe we should have both, but in separate files. Then, since we use VUnit, we can get detailed results about which tools support none, only basic use cases, or corner cases as well. Actually, that's something we did for the 2008 tests: split them into multiple files to get more granular results. See, for instance: https://github.com/VHDL/Compliance-Tests/commit/b5a8580af6a9cd3b0ee40bc74a7fa02b89d0500f

bpadalino commented 1 year ago

I didn't realize about the YAML file.

I can try to combine the MWE in those with the files, and change the entries to look more like:

LCS-2016-002:
    title: "Allow access and protected type parameters on function interfaces"
    body: |
      Allow access and protected type parameters on function interfaces
    mwe: "file://../vhdl_2019/tb_002.vhd"

Or would you rather the filename just be "../vhdl_2019/tb_002.vhd" instead of trying to make it a URI?

umarcor commented 1 year ago

@bpadalino, it's ok. That file is not very well known, precisely because we were lacking code examples to make it usable/useful.

We are going to parse/process the YAML file ourselves through https://github.com/VHDL/Compliance-Tests/blob/main/issues/issues.py, so whatever is easier for us. In this case, I believe it's better to just provide a list of files, assuming the "root" is the vhdl_2019 subdir of this repo. That is:

LCS-2016-002:
    title: "Allow access and protected type parameters on function interfaces"
    body: |
      Allow access and protected type parameters on function interfaces
    mwe:
      - "tb_002.vhd"

By using a list, instead of a string, we can very easily tell apart the mwe's which are a hardcoded block (string with newlines in it) from the lists of files.
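
The string-vs-list distinction is straightforward to act on when processing the parsed YAML. A minimal sketch (the helper name and the second entry are illustrative, not part of the actual issues.py):

```python
# Sketch: tell apart hardcoded MWE code blocks (multi-line strings)
# from lists of example files, as described above. The dict mirrors
# the parsed YAML structure; "LCS-2016-XXX" is a made-up placeholder.

def mwe_files(entry, root="vhdl_2019"):
    """Return file paths if mwe is a list, else None (inline code block)."""
    mwe = entry.get("mwe")
    if isinstance(mwe, list):
        return [f"{root}/{name}" for name in mwe]
    return None  # hardcoded multi-line code block (a plain string)

lcs = {
    "LCS-2016-002": {
        "title": "Allow access and protected type parameters on function interfaces",
        "mwe": ["tb_002.vhd"],
    },
    "LCS-2016-XXX": {
        "title": "Placeholder entry with an inline code block",
        "mwe": "entity e is\nend entity;\n",
    },
}

print(mwe_files(lcs["LCS-2016-002"]))  # ['vhdl_2019/tb_002.vhd']
print(mwe_files(lcs["LCS-2016-XXX"]))  # None
```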

bpadalino commented 1 year ago

I've updated the LCS2019.yml file with the ones I knew about. There were a few there that I didn't know had made it into the standard, specifically: 007a, 012, 014, 014a, 028, 059a, 070.

Can you confirm those made it into the standard and aren't just placeholders that didn't actually make it in?

LarsAsplund commented 1 year ago

I updated VUnit to a version which has better support for VHDL-2019 and for running test cases despite files not compiling. If you could rebase on that, it will be easier to show how you can make your test cases recognized by VUnit such that they become part of the pass/fail statistics. Currently VUnit will only compile them but not run them. It looks like Riviera-PRO can manage most of them.

bpadalino commented 1 year ago

I've rebased but have never used VUnit before. I get some Python errors from VUnit when I try to run, so I am sure my installation is not correct.

Can you make some code suggestions for a test that could easily be adapted to utilize VUnit in the way you want? I can work on my python installation for testing.

Also, what does it take to get VUnit to support the nvc simulator? Do you have any links to documentation for adding a new simulator?

LarsAsplund commented 1 year ago

@bpadalino Let's start getting VUnit going:

  1. Have you walked through the installation guide (https://vunit.github.io/installing.html)? I see that the list of versions we've tested with is highly outdated. I think I will remove that and just maintain known exceptions. For example, you need at least Python 3.6.
  2. What error messages do you get?

LarsAsplund commented 1 year ago

@bpadalino Regarding nvc: I've been a bit surprised that we haven't had more requests for that. Unfortunately, there is no guide to make that work, but let me try it myself to get a feel for how far we are from a working solution. There was some interaction between @kraigher and @nickg 5 years ago (https://github.com/VUnit/vunit/issues/44), but that's the last info I have and I haven't tested it myself.

LarsAsplund commented 1 year ago

@bpadalino Regarding naming of tests: I'm one of those in favor of very descriptive test names. That is a highly subjective matter, but the reason I pointed it out for this project is that I envision the compliance test suite having highly informative output, something like this matrix for SystemVerilog support in various tools (https://antmicro.com/blog/2019/11/systemverilog-test-suite):

[screenshot: SystemVerilog support matrix from the linked post]

The problem with this matrix is that it lacks commercial tools. I'm not sure of the reasons, but it could be that license agreements prevent publishing performance-related data. That is unfortunate, because keeping such information hidden will only slow down VHDL-2019 adoption. For that reason I would like a test suite that someone evaluating which commercial license to acquire can run on their own, getting a fairly readable report in return.

Anyway, that is not of highest importance right now but something to keep in mind. I will create a draft for such a report with LRM links that we can use as a start for related discussions.
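
As a rough sketch of what such a report generator could look like (all names, data, and anchors below are illustrative, not real results or the actual draft):

```python
# Sketch: render a feature-by-tool pass/fail matrix as an HTML table,
# with each feature linked to its LRM/LCS reference.
# The feature names, tool results, and link targets are made up.

def render_report(results, links):
    rows = []
    tools = sorted({tool for by_tool in results.values() for tool in by_tool})
    header = "".join(f"<th>{t}</th>" for t in tools)
    for feature, by_tool in sorted(results.items()):
        cells = "".join(
            f"<td>{'pass' if by_tool.get(t) else 'FAIL'}</td>" for t in tools
        )
        name = f'<a href="{links[feature]}">{feature}</a>'
        rows.append(f"<tr><td>{name}</td>{cells}</tr>")
    return (
        "<table><tr><th>Feature</th>" + header + "</tr>"
        + "".join(rows) + "</table>"
    )

# Illustrative data only: not actual tool results.
results = {
    "conditional_analysis": {"ghdl": True, "nvc": True},
    "interface_default": {"ghdl": False, "nvc": True},
}
links = {
    "conditional_analysis": "#conditional-analysis",
    "interface_default": "#interface-default",
}
html = render_report(results, links)
print(html.count("<tr>"))  # 3: one header row plus one row per feature
```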

nickg commented 1 year ago

@LarsAsplund I've got a VUnit fork here that works with NVC, based on the work @kraigher did a few years ago:

https://github.com/nickg/vunit/blob/nvc/vunit/sim_if/nvc.py

It can pass all the acceptance tests but needs the very latest master. I've been holding off submitting a PR as I want to fix some elaboration performance issues first.

nickg commented 1 year ago

See https://github.com/VUnit/vunit/pull/904

LarsAsplund commented 1 year ago

@nickg That's brilliant! There will be a new VUnit release shortly, so I will make sure that this PR is included.

bpadalino commented 1 year ago

@LarsAsplund I agree that a table like the SystemVerilog one you linked to is a great end goal. I think I'd prefer this branch to be as barebones as possible (hence skeleton-2019), so my main focus is simple tests for the remaining items for now. I'd hate for this PR to become the main development space, and would prefer to knock out the last TODO tests, do a quick review of the tests for minimum viability, then merge.

As I said previously, though, the end goal of this suite being like the SV one is exactly what I want to see as well!

umarcor commented 1 year ago

There were a few that were there that I didn't know made it into the standard, specifically: 007a, 012, 014, 014a, 028, 059a, 070.

Can you confirm those made it into the standard and aren't just placeholders that didn't actually make it in?

/cc @Paebbels @JimLewis can you tell?

umarcor commented 1 year ago

Regarding nvc. I've been a bit surprised that we haven't had more requests for that.

I guess nvc started to be usable for "regular" projects in the last 2y, since a lot of effort was put into it in 2021 and 2022. See https://github.com/nickg/nvc/graphs/contributors.

umarcor commented 1 year ago

FTR, the table that @LarsAsplund showed above is probably https://chipsalliance.github.io/sv-tests-results/. Tests are run in https://github.com/chipsalliance/sv-tests, and results are pushed to sv-tests-results.

I believe we could generate a similar table, since there are a bunch of open source tools for VHDL already (ghdl, nvc, rust_hdl, pyVHDLmodel, hwt...). However, testing tools other than simulators would require custom plumbing because VUnit cannot deal with those. I think we'd better focus on having HDL examples, before introducing other tools.

umarcor commented 1 year ago

@bpadalino see https://github.com/bpadalino/Compliance-Tests/compare/skeleton-2019...VHDL:Compliance-Tests:umarcor/skeleton-2019. That's copied from https://vunit.github.io/user_guide.html#vhdl-test-benches. As you can see in https://github.com/VHDL/Compliance-Tests/actions/runs/4276530136/jobs/7444686288#step:5:2182, that allows VUnit to identify each file as a testbench, and show it in the summary. Otherwise, the files are parsed by the tool (GHDL in CI), but VUnit thinks they are not tests.

LarsAsplund commented 1 year ago

@bpadalino I tested @nickg's PR for NVC support in VUnit. There were a few failing test cases in the acceptance test suite when running under Windows, and we're looking into that. However, if you rebase his PR onto the v5.0.0 branch of VUnit, it works well enough for the compliance tests, and I guess you would like to run with NVC. Is your work on improved VHDL-2019 support in NVC available somewhere so that I can build and test with it?

nickg commented 1 year ago

@LarsAsplund what VHDL-2019 support there is, is all on the master branch. At the moment it's limited to some of the STD.ENV additions (e.g. the call path API and some of the time/directory API), garbage collection, conditional compilation, and the syntax relaxations @bpadalino contributed (e.g. allowing trailing semicolons, empty records, etc.). There's a table of supported features here: https://www.nickg.me.uk/nvc/features.html

bpadalino commented 1 year ago

@bpadalino see bpadalino/Compliance-Tests@skeleton-2019...VHDL:Compliance-Tests:umarcor/skeleton-2019. That's copied from https://vunit.github.io/user_guide.html#vhdl-test-benches. As you can see in https://github.com/VHDL/Compliance-Tests/actions/runs/4276530136/jobs/7444686288#step:5:2182, that allows VUnit to identify each file as a testbench, and show it in the summary. Otherwise, the files are parsed by the tool (GHDL in CI), but VUnit thinks they are not tests.

I've marked the PR as ready for review. I am unsure whether you want to pull in the current changes as they are and add the VUnit stuff once it's in main, or whether you want me to make everything work perfectly in the branch first before merging.

I am fine with either, but would also appreciate an actual review of the tests, as I am unsure whether I actually captured the VHDL-2019 aspects versus VHDL-2008.

Marked as TODO: 023, 043. Marked as REVIEW: 030, 032, 047.

LarsAsplund commented 1 year ago

@bpadalino If you ask me, I think the tests should be VUnit-adapted before merging to main. Without that, there will not be a clearly visible result for compliance with those tests. I can help you do that if you want, but I feel it's much better if I help you manage it yourself. You mentioned some Python problems. Is that still the case? If so, can you provide some more info?

bpadalino commented 1 year ago

@LarsAsplund That sounds fine. I am still having Python issues. I've done pip3 install vunit-hdl, and when I run python3 run.py I get the following:

$ python3 run.py 
Using VHDL-2019
Traceback (most recent call last):
  File "/home/bpadalino/work/vhdl/Compliance-Tests.git/run.py", line 19, in <module>
    ui.add_vhdl_builtins()
AttributeError: 'VUnit' object has no attribute 'add_vhdl_builtins'. Did you mean: 'add_builtins'?

pip3 list shows vunit-hdl at 4.6.2.

bpadalino commented 1 year ago

Alright, I added the VUnit stuff like @umarcor did for the examples. I have no idea what I am supposed to do to run or use VUnit here.

The readme doesn't really have instructions on how this is supposed to be used; it only describes a collection of tests.

Let me know what other things you want changed to merge this.

umarcor commented 1 year ago

@bpadalino in order to run VUnit locally, as done in the CI of this repo, you need to install branch v5.0.0. See https://github.com/VHDL/Compliance-Tests/blob/main/requirements.txt. You can do pip install -U -r requirements.txt.

The error is because of a feature that changed in the master branch.

umarcor commented 1 year ago

The readme doesn't really have instructions on how this is supposed to be used, other than a collection of tests.

See https://github.com/VHDL/Compliance-Tests/blob/main/.github/workflows/Test.yml#L23-L24 and/or http://vunit.github.io/cli.html#example-session.

bpadalino commented 1 year ago

Thanks for the command. It looks like it was the v5.0.0 version requirement that allowed things to at least run.

Running these tests seems to take quite some time. Is there a way to parallelize the compilation part and serialize the running? With my Riviera-PRO license I only get 1 seat to run at a time, but I can compile without the license.

My current output is:

====================================================================================================================================
pass 133 of 135
fail 2 of 135
====================================================================================================================================
Total time was 157.2 seconds
Elapsed time was 157.2 seconds
====================================================================================================================================
Some failed!
umarcor commented 1 year ago

@bpadalino that's nice news! As you can see in CI, results for GHDL are different but the total number is the same.

This PR is good to merge on my side. We can improve the integration with VUnit, the granularity, etc. afterwards, in upcoming PRs.

Yet, I did not check the correctness of the examples with regard to the LRM. /cc @Paebbels @JimLewis

umarcor commented 1 year ago

Running these tests seem to take quite some time. Is there a way to parallelize the compilation part, and serialize the running? With my Riviera-PRO license, I only get 1 seat to run at a time, but I can compile without the license.

I'm not sure about that. Let's wait for @LarsAsplund to tell.

umarcor commented 1 year ago

Actually, you can run once with the "compile only" CLI option in parallel, and then without either of them: ./run.py --compile -p10 && ./run.py.

Yet, there might be some other approach which I'm not aware of.

bpadalino commented 1 year ago

Something I noticed: it seems the files that fail to compile don't show up as problematic in the pass/fail report at the end of the run.

Is this intentional?

EDIT: I believe I was mistaken and this is not the case. When I run with nvc as my simulator, I see:

pass 109 of 135
fail 26 of 135

... and with Riviera-PRO I see:

pass 133 of 135
fail 2 of 135

... and I am sure nvc is failing to compile a bunch of tests.

Sorry for the noise.

LarsAsplund commented 1 year ago

@bpadalino @umarcor It is interesting how the same topic tends to appear in several places at the same time. There was a Siemens post on VUnit and performance yesterday: https://github.com/VHDL/Compliance-Tests.

While that is interesting to explore, one should sort out what the critical times involved are and where optimizations are needed. When doing python run.py --clean to force everything to recompile, I see that compilation is about 6% of the overall execution, so the key to better performance is multiple simulation threads. But I'm also in a position where I only have one RPRO license, so this option is only available to me with GHDL or NVC.

When doing python run.py after a previous compile with no changes, it will only try to recompile the files that previously failed, but that takes just a second or two.

If you run a subset of the tests that do not depend on the failing files and use the -m/--minimal option, there is no need to try recompiling the failing files. For example: python run.py -m vhdl_2008.tb_bit_string_literals*

The summary will only contain passing/failing tests. If we adapt all testbenches to VUnit then those related to files not compiling will be shown as failing tests.

@umarcor I never tried ./run.py --compile -p10 && ./run.py, but does it really work on your side? Do you see a speed improvement?

Parallel compilation is something that has come up for discussion a few times in the past but never materialized, mostly because it's not obvious it would have a significant impact on overall test performance. I think the "group compilation" discussed in the Siemens post would be more interesting, as it also serves other purposes. I also imagine that parallel compilation can be a bit tricky. Some simulators tend to centralize resources in a way that complicates execution in parallel threads; in this case, the compiled library would be such a resource. Maybe we could compile libraries, rather than files, in separate threads.
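
The "compile libraries in separate threads, then simulate serially" idea could be sketched roughly like this (a toy model with stubbed compile/run steps, not VUnit internals; compile_library and run_test are stand-ins):

```python
# Sketch: one compile thread per library (so no two threads write into
# the same compiled-library directory), followed by serial simulation
# runs, e.g. for a single license seat. All names here are illustrative.
from concurrent.futures import ThreadPoolExecutor

def compile_library(name, files):
    # Stand-in for invoking the simulator's compiler on one library.
    return (name, len(files))

def run_test(test):
    # Stand-in for a serial, license-bound simulation run.
    return f"{test}: pass"

libraries = {
    "vhdl_2008": ["tb_a.vhd", "tb_b.vhd"],
    "vhdl_2019": ["tb_002.vhd"],
}

# Parallel compilation: each library compiles in its own thread.
with ThreadPoolExecutor(max_workers=len(libraries)) as pool:
    compiled = dict(pool.map(lambda kv: compile_library(*kv), libraries.items()))

# Serial simulation: one run at a time.
results = [run_test(t) for lib in sorted(libraries) for t in libraries[lib]]
print(compiled)      # {'vhdl_2008': 2, 'vhdl_2019': 1}
print(len(results))  # 3
```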

Anyway, I'll be back this evening after my "real" job to see if I can contribute more on this.

JimLewis commented 1 year ago

@umarcor WRT

There were a few that were there that I didn't know made it into the standard, specifically: 007a, 012, 014, 014a, 028, 059a, 070.

The definitive list is: http://www.eda-twiki.org/cgi-bin/view.cgi/P1076/VHDL2017

In: 12, 14, 14a (may need additional work in 202X), 59a

Not In: 007a, 28, 70

bpadalino commented 1 year ago

Added 12, 14, and 14a in 231dd7b. I wrote a quick test for 12, but 14 and 14a are maybe a little more involved from a minimal perspective?

bpadalino commented 1 year ago

The last commit was just a rebase off main.

bpadalino commented 1 year ago

Just checking, I don't think there are any other outstanding issues with this PR, right?

bpadalino commented 1 year ago

Sounds good. Once it's merged I can look at adding some more comprehensive tests from some bugs I've submitted to Aldec against Riviera-PRO.

umarcor commented 1 year ago

@bpadalino feel free to change the author of the commit assigned to me and set it to yourself. I only copy-pasted 2-3 and you did all the rest :wink:.

umarcor commented 1 year ago

@umarcor I never tried ./run.py --compile -p10 && ./run.py but does it really work on your side? Do you see a speed improvement.

I did not try it. I provided the example while on the phone, based on my knowledge of VUnit's CLI.

umarcor commented 1 year ago

@bpadalino, please ack if/when you are ready to merge.

bpadalino commented 1 year ago

I am ready to merge if you are - I don't care about the authorship on any of the commits.