VUnit / vunit

VUnit is a unit testing framework for VHDL/SystemVerilog
http://vunit.github.io/

Add support for cocotb as a supported testing methodology #651

Open ktbarrett opened 4 years ago

ktbarrett commented 4 years ago

I will use this issue to keep track of the status of my efforts to add cocotb support to VUnit and receive suggestions and feedback. I am new to VUnit (I haven't used it as a testbenching framework before creating this issue), but have a good understanding of cocotb.

Some thoughts

Incompatibility with VUnit's test runner and libraries

When using cocotb as the testing methodology, one would not use VUnit's libraries. Mixing them would probably work, but it seems like a bad idea unless you know what you're doing; this would require further investigation. Assuming all VUnit needs to detect a testbench is a single string parameter runner_cfg, we can just add that parameter to an otherwise empty testbench.

Loading cocotb with the simulator

This requires setting a few variables in the environment the simulator runs in. Assuming that VUnit does not sandbox the simulator run, this is trivial. It also requires specifying a few simulator command-line options to load libcocotb as a VPI, VHPI, or FLI library. The pli simulator argument isn't sufficient, but we can do it using each simulator's sim_args option. Initially, this could be done in the pre_config hook.
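As a rough sketch of that pre_config approach: the environment variable names MODULE and COCOTB_RESULTS_FILE are the ones cocotb reads at startup, the hook signature follows VUnit's pre_config convention, and everything else here is illustrative.

```python
import os

def make_cocotb_pre_config(test_module):
    """Build a pre_config hook that prepares the environment for a
    cocotb co-simulation (illustrative sketch, not a final design)."""
    def pre_config(output_path):
        # cocotb reads these environment variables when libcocotb loads
        os.environ["MODULE"] = test_module  # Python module containing the tests
        os.environ["COCOTB_RESULTS_FILE"] = os.path.join(output_path, "results.xml")
        # Loading libcocotb itself would still be done through the
        # simulator-specific sim_args option (e.g. a --vpi/-pli style flag).
        return True  # returning True tells VUnit the hook succeeded
    return pre_config
```

A testbench could then register this with something like `tb.set_pre_config(make_cocotb_pre_config("dff_cocotb"))`.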

Reporting results

I can see that VUnit generates a JSON report that seems to be created in Verilog. cocotb outputs an XUnit report. We could do one of two things here: 1. in post_check, read in the cocotb results and convert them to the JSON output expected by VUnit (this might not work the way I think it does: post_check is only run when tests pass, which implies VUnit checks the JSON file for failure first); 2. modify cocotb to output in the JSON format.

That raises another question: does VUnit immediately check for a JSON file to ensure the test ran correctly? If so, the report will have to be generated by cocotb.
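For option 1, reading the cocotb results back could look roughly like this. A sketch only: the exact XUnit schema emitted by cocotb may vary between versions, but in XUnit a `<failure>` or `<error>` child marks a failing testcase.

```python
import xml.etree.ElementTree as ET

def cocotb_results_passed(xml_text):
    """Return True if no testcase in an XUnit-style report failed."""
    root = ET.fromstring(xml_text)
    for case in root.iter("testcase"):
        # a <failure> or <error> child element means this testcase failed
        if case.find("failure") is not None or case.find("error") is not None:
            return False
    return True

# Hand-written sample in the XUnit shape assumed above
SAMPLE = """<testsuites>
  <testsuite name="all">
    <testcase name="test_ok"/>
    <testcase name="test_bad"><failure message="assert failed"/></testcase>
  </testsuite>
</testsuites>"""
```

A post_check hook could parse the real results.xml this way and translate the outcome into whatever VUnit expects.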

Support in run.py

This will be an issue going forward: there is nothing in VUnit (from the documentation) that I see that parallels the idea of "loading a plugin", which would inject hooks alongside the user's hook overloads. It might be a good idea to add support for multiple hook handlers to support plugins like cocotb.

Summary

Given what I have stated here, I think I could do an example project template with no code change to VUnit. Obviously, that's not the final solution, but should be a valid proof of concept.

Icarus + cocotb support

Icarus is currently not supported, and for good reason: it does not have sufficient SystemVerilog support to use the VUnit libraries. However, if cocotb is used as the testing methodology, VUnit could support Icarus, assuming all VUnit needs for testbench detection is the presence of a single string parameter.

Repo

https://github.com/ktbarrett/vunit-cocotb

LarsAsplund commented 4 years ago

@ktbarrett Have a look at the -x option. It will return an XUnit report.

ktbarrett commented 4 years ago

Took almost nothing to get a simple example running. I'm not sure if I could get cocotb's XUnit output to work with VUnit's reporter. I added my repo to the OP.

Relevant comment on pros/cons/remaining work. https://github.com/cocotb/cocotb/pull/1562#issuecomment-633619086.

ktbarrett commented 4 years ago

I am interested in proper integration of cocotb and VUnit, so I'm back working on this. This list is mostly as a reminder for myself.

My current plan of attack is to create cocotb-specific derivatives of TestRun, SameSimTestSuite, IndependentSimTestCase, and TestBench. Users will be able to create CocotbTestBenches with HDL entities and associated Python test modules. Such testbench objects will be registered with the current TestBenchList.
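A hypothetical skeleton of such a testbench object might look like this. None of these names exist in VUnit today; the class only illustrates the intended shape of pairing an HDL entity with a Python test module and splitting tests between shared and independent simulator runs.

```python
class CocotbTestBench:
    """Illustrative cocotb-specific testbench object (hypothetical).

    Pairs an HDL entity with a cocotb test module; the derivative
    classes mentioned above (TestRun, SameSimTestSuite,
    IndependentSimTestCase) would consume objects like this one.
    """

    def __init__(self, entity_name, test_module, independent_tests=()):
        self.entity_name = entity_name    # HDL top level to simulate
        self.test_module = test_module    # cocotb MODULE to load
        # tests listed here would each get their own simulator run;
        # the rest share a single run, matching typical cocotb operation
        self.independent_tests = set(independent_tests)

    def grouped(self, all_tests):
        """Split test names into (same-sim suite, independent cases)."""
        same_sim = [t for t in all_tests if t not in self.independent_tests]
        independent = [t for t in all_tests if t in self.independent_tests]
        return same_sim, independent
```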

Per typical cocotb operation, all tests in a module will be executed in the same simulator run in order of their appearance in the file. I might introduce a decorator to mark cocotb tests that should be executed independently, but considering how clunky cocotb's regression manager is, I don't think it will be pleasant; I might skip this.

The TestRun for cocotb will set appropriate values for MODULE and TESTCASE to orchestrate the regression manager to run the selected set of tests. It will also read and parse the result.xml output.

Following that basic support I might add cocotb as a Builtin with support being added via add_cocotb(), if that's appropriate.

eine commented 4 years ago

Hi @ktbarrett! Glad to see you active on this! As you might have seen, I've been discussing with @rodrigomelo9 about it. Please, let me share some thoughts:

According to https://github.com/ktbarrett/vunit-cocotb/blob/master/tests/run.py it seems that the entrypoint to a "cocotb" simulation is the regular entrypoint to any simulation. That is, the simulator CLI is executed as usual, by providing a reference to a shared library. Then, I guess that the simulator triggers the shared library, the shared library looks at the envvars, loads the Python module, and the show goes on. When the simulation is finished, the simulator terminates it like any non-cocotb simulation. Is this understanding correct?

The VHDL source is currently incomplete for being a valid VUnit testbench: https://github.com/ktbarrett/vunit-cocotb/blob/master/tests/hdl/dff.vhd. The generic was added but the startup and cleanup procedure calls are missing. See https://github.com/VUnit/vunit/blob/master/examples/vhdl/array_axis_vcs/src/test/tb_axis_loop.vhd#L63-L72. Regarding your concern about regression management, I believe that would solve all your problems, as VUnit would take care of it.

If my understanding is correct, I don't think creating derivatives of TestRun, SameSimTestSuite, etc. is required. Instead, a property/option might be added to existing testbenches, where optional Python test modules are defined. See options enable_coverage or pli in http://vunit.github.io/py/opts.html. These options might be defined per vu object, per library, per testbench, etc. Hence, it would be possible to assign the same cocotb modules to multiple testbenches, at once. That option would be the one used for setting MODULE and TESTCASE.

Regarding the fact that cocotb executes multiple tests in the same simulator in order of appearance, I believe that's the feature we need to work on. In VUnit, multiple tests are defined as shown in https://github.com/VUnit/vunit/blob/master/examples/vhdl/generate_tests/test/tb_generated.vhd#L34-L44. Hence, we'd need a mechanism for cocotb to read/get the string that identifies the test which VUnit is about to execute. Alternatively, cocotb could use "configuration" generation API options for creating tests dynamically. Based on any of those, VUnit does already handle executing multiple tests in a single instance, launching an instance for each test, and/or running multiple tests in parallel.
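A mechanism like the one described above would need cocotb to decode the runner_cfg generic to learn which tests VUnit is about to execute. The following is a rough sketch that assumes an encoding of "key : value" pairs separated by commas, with test names inside enabled_test_cases joined by a double comma; the real format is defined by VUnit's encoder and may differ, so treat this purely as illustration.

```python
def parse_runner_cfg(cfg):
    """Parse a runner_cfg-style string into a dict (assumed encoding)."""
    marker = "\x00"
    # protect the ',,' test-name separator from the single-comma split
    entries = cfg.replace(",,", marker).split(",")
    parsed = {}
    for entry in entries:
        key, _, value = entry.partition(" : ")
        parsed[key.strip()] = value.replace(marker, ",,")
    return parsed

def enabled_tests(cfg):
    """Return the list of enabled test names, split on ',,'."""
    value = parse_runner_cfg(cfg).get("enabled_test_cases", "")
    return value.split(",,") if value else []
```

With such a helper, the cocotb side could map the enabled test names onto TESTCASE before handing control to the regression manager.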

Assuming that cocotb reuses both VUnit's regression/test management features, the last point to address would be for cocotb to generate logs using VUnit's API (either HDL or Python). As a result, cocotb's info would be automatically embedded in VUnit's terminal and/or XUnit report.

ktbarrett commented 4 years ago

Is this understanding correct?

That is correct, cocotb is a VPI/VHPI/FLI cosimulation, which is unobtrusive to the regular simulation flow. This allows testbenches to partially exist in both cocotb and HDL. They can even communicate via system tasks that are registered via VPI (see here and here).

That being said, I think a large part of the rest of your comments misses another cocotb ideal: that you should be able to test a design without writing any HDL code, meaning there is no testbench HDL file. Since test cases, test suites, and testbenches don't exist in HDL in cocotb, we can't leverage the existing VUnit regression tools, which assume they are described in HDL files. If we are going to offer this to current cocotb users as an improvement over the makefiles, it should more-or-less function the same way.

Since a project may contain both cocotb and HDL-based testing methodologies, cocotb-specific overloads are the easiest way to make them transparently work together.

Regarding the fact that cocotb executes multiple tests in the same simulator in order of appearance, I believe that's the feature we need to work on.

I agree, the regression manager is a hack. I'd prefer to refactor that whole section of cocotb's infrastructure to be friendlier to other testing methodologies like VUnit, pytest (via cocotb-test), etc. I have a plan for it, but we don't have the contributors, reviewers, or consensus for making breaking changes or going back into active development.

Based on any of those, VUnit does already handle executing multiple tests in a single instance, launching an instance for each test, and/or running multiple tests in parallel.

This is what I'd hope I could accomplish by hooking into the existing infrastructure with cocotb-specific overloads. The TestRun overload wouldn't provide the configuration option, but instead set MODULE and TESTCASE.

Assuming that cocotb reuses both VUnit's regression/test management features, the last point to address would be for cocotb to generate logs using VUnit's API (either HDL or Python). As a result, cocotb's info would be automatically embedded in VUnit's terminal and/or XUnit report.

I had considered this as well. It will require a refactor of the regression manager (or just removing/replacing it like I suggested above) to be able to handle alternative test reporting formats.

eine commented 4 years ago

I'm glad to see that we agree on the mid-long term strategy. For the short term, I'm kind of arguing in favour of preserving as much of VUnit as possible, while you are trying to preserve as much of cocotb as you can. I believe that's a natural friction, since we both want to reduce the disruption for our respective user bases.

My position is that both user bases will need to adapt. This is not a technical statement only: by combining VUnit and cocotb we can provide an umbrella for VUnit, cocotb, OSVVM, UVVM, UVM, etc. to coexist, and for users of those frameworks to be mixed together. As I said somewhere else, cocotb might provide Python bindings for existing verification components (VUnit, OSVVM, UVVM), instead of reimplementing them from scratch.

That being said, I think a large part of the rest of your comments misses another cocotb ideal: that you should be able to test a design without writing any HDL code, meaning there is no testbench HDL file. Since test cases, test suites, and testbenches don't exist in HDL in cocotb, we can't leverage the existing VUnit regression tools, which assume they are described in HDL files. If we are going to offer this to current cocotb users as an improvement over the makefiles, it should more-or-less function the same way.

The point is that cocotb's and VUnit's ideals in this regard are opposite and incompatible.

Hence, if the premise is that no testbenches will exist, the outcome of this integration would be "cocotb reusing some of VUnit's simulator interfaces and source dependency scanner", but it wouldn't be VUnit any more. All of VUnit's logging and communication features are based on the core runner. If a properly managed testbench does not exist, the whole system comes down.

Moreover, preserving existing testbenches is a must for VUnit's user base to adopt cocotb non-intrusively. Being non-intrusive and picking features one by one is the design philosophy of VUnit.

Yet, I agree that we should also support users of cocotb which don't expect any meaningful HDL testbench. I believe that the testbench can be considered "a configuration file to sync VUnit and cocotb". As such, it can be autogenerated:

library vunit_lib;
context vunit_lib.vunit_context;

entity tb_example is
  generic (runner_cfg : string);
end entity;

architecture tb of tb_example is
begin
  main : process
  begin
    test_runner_setup(runner, runner_cfg);
    report "Placeholder";
    test_runner_cleanup(runner); -- Simulation ends here
  end process;
end architecture;

or

library vunit_lib;
context vunit_lib.vunit_context;

entity tb_example is
  generic (runner_cfg : string);
end entity;

architecture tb of tb_example is
begin
  main : process
  begin
    test_runner_setup(runner, runner_cfg);
    report "Placeholder";
    test_runner_cleanup(runner); -- Simulation ends here
  end process;

  uut: entity dff ...
end architecture;

Then, based on that foundation, the Python API can be used for "overloading" the test. That is, generating several configurations, customizing each of them, etc. See September 24, 2020 9:35 AM.

Note that all of this (generating the facade and overloading the test) can be achieved in the run.py file. Hence, no modification in VUnit is required. It could be designed as an external Python module (neither cocotb nor VUnit, but a bridge).

Furthermore, VUnit can set generics through the Python API. Those generics are available in the HDL testbench. Hence, those parameters should be readable by cocotb through VPI/VHPI. This recent discussion is related: September 24, 2020 1:59 AM.

I agree, the regression manager is a hack. I'd prefer to refactor that whole section of cocotb's infrastructure to be friendlier to other testing methodologies like VUnit, pytest (via cocotb-test), etc. I have a plan for it, but we don't have the contributors, reviewers, or consensus for making breaking changes or going back into active development.

My main point is that it might not be worth rewriting all the VUnit classes you mentioned, as long as you already know that it's a hack and it will need to be changed in the not-too-distant future. Chances are that it will not be merged straightaway, for the reasons I mentioned above.

We will ask VUnit users to understand that some components of the testbenches, or even the top level, may be written in Python. Even though their HDL-only testbenches will work, they will need to slightly adapt their scripts if they want to take advantage of cocotb.

By the same token, I believe we should ask cocotb users to understand that the list of tests needs to be coordinated with the HDL. Their cocotb-only testbenches will work, but they will need to slightly adapt their scripts if they want to take advantage of VUnit.

Overall, I think that the added value for cocotb users is more significant than for VUnit users. That is because VUnit is already compatible with OSVVM, UVVM and UVM, and both the build system and the regression system are more robust. This is kind of acknowledged since your proposal is based on a run.py file, which cocotb users need to understand. That's why I think that existing VUnit testbenches should work in the integration without additional modifications, but cocotb users might need to tweak the run_test function (https://github.com/ktbarrett/vunit-cocotb/blob/master/tests/tests/dff_cocotb.py#L132-L152). I said "might" because the "bridge" module might take care of it transparently. Furthermore, it might be added as run_vunit_test, so that it can coexist with a cocotb-only execution. These are implementation details that I cannot specify ATM.

My feeling is that this approach might be kept compatible with existing cocotb features by just ignoring some of them, and thus avoiding consensus issues for having those updated. At the same time, it would minimize the modifications in either cocotb's or VUnit's codebases. However, you are the one who knows cocotb's internals.

The TestRun overload wouldn't provide the configuration option, but instead set MODULE and TESTCASE.

I guess I don't see the need for overloading yet. I mean, there needs to be some other reason apart from setting MODULE and TESTCASE that justifies the overloading of the class. We'll see as you advance with it...

I had considered this as well. It will require a refactor of the regression manager (or just removing/replacing it like I suggested above) to be able to handle alternative test reporting formats.

Wouldn't it be possible for the "bridge" to overload the Scoreboard (https://github.com/ktbarrett/vunit-cocotb/blob/master/tests/tests/dff_cocotb.py#L37)? Once again, I'm quite ignorant about cocotb internals. My thought is just "don't you feel more comfortable overloading/adapting cocotb classes instead of learning VUnit internals?". There must be some reason for you to modify so many VUnit classes which I am not understanding yet.

ktbarrett commented 4 years ago

I believe that's a natural friction, since we both want to reduce the disruption for our respective user bases.

How would my proposal disrupt VUnit's user base? Like I mentioned in the OP, I don't use VUnit, so I'm not aware of the consequences.

cocotb might provide Python bindings for existing verification components (VUnit, OSVVM, UVVM), instead of reimplementing them from scratch.

I'm not sure if that's an appropriate usage of cocotb. cocotb is very much designed to be the test driver, not a slave. It can be a "co-master", but then we would have to worry about merging results and informing each testing framework of certain events that occur in the other. That sounds like a lot of work, and is definitely out of scope for a basic integration. I know there has been interest in using Python for modelling, implementation convenience, et cetera in HDL testbenches; but that's distinct from cocotb's purpose and design.

Moreover, preserving existing testbenches is a must for VUnit's user base to adopt cocotb non-intrusively. Being non-intrusive and picking features one by one is the design philosophy of VUnit.

I think I see what your concern is. I don't think cocotb and HDL verification methods should be used together. In my mind, it's one or the other; so there is no way for cocotb to intrude. What I meant by "seamlessly supporting both" was that within the same project some tests use OSVVM, and others use cocotb. I never intended to support cocotb running simultaneously with the other verification methodologies.

So there have been some implications in your responses, but maybe you can explicitly lay out what you are expecting out of this potential integration?

Hence, if the premise is that no testbenches will exist, the outcome of this integration would be "cocotb reusing some of VUnit's simulator interfaces and source dependency scanner", but it wouldn't be VUnit any more.

Overall, I think that the added value for cocotb users is more significant than for VUnit users. That is because VUnit is already compatible with OSVVM, UVVM and UVM, and both the build system and the regression system are more robust.

This is our interest in integrating cocotb and VUnit. I am not a fan of every single FOSS verification project reimplementing the same functionality (usually poorly), rather than agreeing on and contributing to one project that can service any use case. I don't think VUnit is that, but it's the closest thing right now. Others who contribute to cocotb are in the process of supporting cocotb in edalize and/or fusesoc. After looking at those projects, I can't say I was too impressed; which is why I'm putting my efforts into VUnit integration.

This is kind of acknowledged since your proposal is based on a run.py file, which cocotb users need to understand.

Our surveys have concluded that users would be thrilled to never look at a makefile again.

The "bridge" implementation you speak of could work. I'm just wondering if that's less work than overloads, considering it's a less clean solution IMO (assuming we don't intend to have cocotb and HDL tests running simultaneously).

Wouldn't it be possible for the "bridge" to overload the Scoreboard (https://github.com/ktbarrett/vunit-cocotb/blob/master/tests/tests/dff_cocotb.py#L37)?

The Scoreboard has been deprecated. cocotb (the repo) is getting out of the verification methodology game. We are focusing on cocotb as a framework to write testbenches in Python. In this guise it's more comparable to SystemVerilog or VHDL. Instead, we will be relying on third party libraries like uvm-python to handle verification.

My thought is just "don't you feel more comfortable overloading/adapting cocotb classes instead of learning VUnit internals?". There must be some reason for you to modify so many VUnit classes which I am not understanding yet.

Actually, no. Getting significant changes into cocotb is somewhere between brutal and impossible. There is also a fair amount of spaghetti in cocotb's components that make it even more difficult. By contrast VUnit seems to have fairly well defined interfaces I can take advantage of. The primary reason I had for overloading the VUnit internals was to support describing testbenches, test suites, and test cases as Python objects rather than HDL. I wouldn't be significantly modifying the VUnit internals; the overloads can sit in a separate package and should encapsulate all cocotb-specific behavior so it doesn't leak in the VUnit internals.

kraigher commented 4 years ago

Actually, the VUnit architecture is not dependent on any HDL testbench. For most systems the simulation is a black box. This is why it can support both VHDL and SystemVerilog.

Actually at a previous workplace I used the VUnit TestRunner to run board level tests without any HDL or simulator.

Thus it should be no problem architecturally to support cocotb as a third subsystem along VHDL and SystemVerilog. VUnit would then provide value in addition to vanilla cocotb with its incremental compilation, simulator support, test runner and results reporting.

Ideally, cocotb tests would be seamlessly discovered like HDL tests are today. That should be easy given that cocotb uses decorators to annotate tests, if I am not mistaken.

With this I just want to add my 2c that @ktbarrett's approach is viable, given someone has the will, knowledge, and time to implement it.

eine commented 4 years ago

First off, I'm really glad that we are starting to understand each other. Bear in mind that I'm not a native English speaker, so although I might sound too direct/strong, I'm open to changing my opinions completely should it be the best technical solution.

How would my proposal disrupt VUnit's user base? Like I mentioned in the OP, I don't use VUnit, so I'm not aware of the consequences.

From my simplistic vision, cocotb is VHPI/VPI. The fact that scripts are written in Python is kind of irrelevant because exactly the same functionality might be written in C/C++. Of course, using Python makes it unarguably easier. That's why cocotb is so popular, and that's why I/we want to take advantage of that. However, someone might come up with a Ruby or Rust equivalent to cocotb. Hence, from an HDL language perspective, cocotb can be used together with any other foreign tool (using various mechanisms: VHPI/VPI/FFI/DPI). Therefore, from VUnit's "build system" perspective, cocotb is "just" a VHPI/VPI tool.

From this perspective, my vision is for VUnit users to pick cocotb features and for cocotb users to pick VUnit features. The disruption comes from the fact that your conception is "one or the other". However, that's ok, now that I understand it.

cocotb might provide Python bindings for existing verification components (VUnit, OSVVM, UVVM), instead of reimplementing them from scratch.

I'm not sure if that's an appropriate usage of cocotb. cocotb is very much designed to be the test driver, not a slave. It can be a "co-master", but then we would have to worry about merging results and informing each testing framework of certain events that occur in the other. That sounds like a lot of work, and is definitely out of scope for a basic integration. I know there has been interest in using Python for modelling, implementation convenience, et cetera in HDL testbenches; but that's distinct from cocotb's purpose and design.

Verification Components, precisely master VCs, are typically the drivers of the tests, not the slaves. The idea is that all the signal toggling can be described in HDL, where it is natural. Then, subprograms to manage those VCs can be executed either from HDL (as is done now) or through VHPI/VPI (i.e. cocotb). Anyway, this is just my vision of where we should go in the long term; not a priority at all, especially given the current fragmentation with regard to open source VHDL verification components. I just wanted to drop the idea, to try to avoid people rewriting dozens of VCs in Python.

Regarding co-mastering, essentially VCs are modules instantiated in the design. From cocotb's perspective there should be no difference between a top-level HDL module instantiating multiple "design" submodules/subcomponents and another top-level HDL module where some of the components are VCs. Hence, I believe it is already possible to combine HDL VCs with cocotb, regardless of this discussion and/or any integration with VUnit.

Moreover, preserving existing testbenches is a must for VUnit's user base to adopt cocotb non-intrusively. Being non-intrusive and picking features one by one is the design philosophy of VUnit.

I think I see what your concern is. I don't think cocotb and HDL verification methods should be used together. In my mind, it's one or the other; so there is no way for cocotb to intrude. What I meant by "seamlessly supporting both" was that within the same project some tests use OSVVM, and others use cocotb. I never intended to support cocotb running simultaneously with the other verification methodologies.

Understood.

So there have been some implications in your responses, but maybe you can explicitly lay out what you are expecting out of this potential integration?

I'm expecting to use cocotb (Python) for (in the future as I know that some features are not possible yet):

What I'm not expecting cocotb to do:

Please, note that this is for the sake of discussion and sharing ideas, as it is biased by my conception of cocotb as "a VHPI/VPI frontend". I understood that most of those use cases are out of your scope.

I am not a fan of every single FOSS verification project reimplementing the same functionality (usually poorly), rather than agreeing on and contributing to one project that can service any use case. I don't think VUnit is that, but it's the closest thing right now. Others who contribute to cocotb are in the process of supporting cocotb in edalize and/or fusesoc. After looking at those projects, I can't say I was too impressed; which is why I'm putting my efforts into VUnit integration.

As you might have read in hdl/awesome, I agree. I think that fusesoc's YAML syntax can be a good foundation for a "universal" declarative configuration file format. However, edalize itself is not attractive compared to VUnit, tsfpga, PyFPGA, etc. Yet, neither of these tools alone can cover edalize's scope.

Our surveys have concluded that users would be thrilled to never look at a makefile again.

I didn't fill it, but I'm one of those too :D

The "bridge" implementation you speak of could work. I'm just wondering if that's less work than overloads, considering it's a less clean solution IMO (assuming we don't intend to have cocotb and HDL tests running simultaneously).

This is directly related to @kraigher's comment. As he explained, you can indeed use the TestRunner and focus on VUnit's Python-only features. So, conceive it as a kind of alternative to pytest. However, I think that the critical point is that he said "run (...) tests without any HDL or simulator" and "no problem architecturally to support cocotb as a third subsystem along VHDL and SystemVerilog". In the context of cocotb, you/we really want to use it along with VHDL and SystemVerilog, not as a separate subsystem. That is, you/we want to reuse all the existing simulator interfaces for compiling the sources and launching the simulations. Of course, @kraigher is more familiar with the codebase than me. He might see how to support a third subsystem which reuses the existing interfaces.

I'm quite confident that my proposal is the way to go for supporting HDL frameworks and cocotb peacefully coexisting together, and for users to use them interchangeably as building pieces of their tests. However, I cannot say it is easier than reusing existing decorators and feeding them to an overloaded class. For a first approach, I think that's ok. I'm not convinced about Olof's proposal, though.

The primary reason I had for overloading the VUnit internals was to support describing testbenches, test suites, and test cases as Python objects rather than HDL. I wouldn't be significantly modifying the VUnit internals; the overloads can sit in a separate package and should encapsulate all cocotb-specific behavior so it doesn't leak in the VUnit internals.

This sounds really good. Please let me know when you have something that I can try. The approach feels similar to tsfpga. Every once in a while some customizations are upstreamed here, but most of the features are kept in the project.

kraigher commented 4 years ago

To clarify, I mean all VUnit subsystems related to compiling code in the correct order and incrementally could be used to substitute the makefiles of cocotb, while still not requiring cocotb to use the VUnit HDL runner (neither VHDL nor Verilog). Just using VUnit as a better pytest would not be a strong enough value proposition.

Since cocotb already interacts with the HDL as a black box it is a completely orthogonal concern if any other VUnit libraries like verification components are used in the simulation.

However I believe that cocotb users would not want much verification code in HDL but rather keep that part in Python. I think the main value proposition to cocotb users would be getting auto compile and nicer CLI and test results.

eine commented 4 years ago

@kraigher, what's your proposal regarding the regression management? If I'm not mistaken, not using the HDL runner would imply keeping cocotb's codebase for deciding when a test passed and when it failed, wouldn't it?

kraigher commented 4 years ago

My intuition says cocotb would substitute the hdl runner and thus be responsible for reporting pass/fail within a test suite. The VUnit runner would still be an outer layer to catch uncaught exceptions etc same as today.

Cocotb could write the same test results file in the output path as the HDL runner does. It is a black box already today. Alternatively depending on the details a Python level integration or wrapper could provide some extra value that we cannot see now.

My intuition also tells me that it would be better for cocotb to steal the necessary subset of VUnit that provides value to their users rather than letting VUnit become a large superproject. If it turns out well the changes can be backported and shared. I do not think a typical cocotb user has any interest in the HDL part of VUnit and thus the value proposition of having both worlds in one tool is low. It could even create poor UX and unclear focus of the tool which scares away new users by looking too complicated.

javValverde commented 10 months ago

I would be very interested in such an integration. At the moment we have a bunch of VUnit testbenches, and I would like to integrate some cocotb testbenches into the same flow (VUnit is great for building HDL, parameterizing tests, and reporting results).

Has the situation improved in the last few years?

joshrsmith commented 10 months ago

It is not part of VUnit proper, but there is another unrelated project that aims to bring cocotb and vunit together under the same python runner: https://github.com/jwprice100/vcst

javValverde commented 10 months ago

I tried running it, but it seems to have a couple of problems: it doesn't work on Windows, and it seems to have some problems with the latest VUnit release.

Is the framework based on a public and stable VUnit API?