Closed BjornOlsson closed 9 years ago
A .wlf file is already saved into vunit_out/tests/lib.entity.test_name/msim/vsim.wlf. There is already a --gui=load flag for launching a GUI with the design loaded.
There are two ways to achieve what you want with the existing functionality:
1. Start with the --gui=load flag and do:

log -r /*
vunit_run
quit -f

After this, manually launch the ModelSim GUI to view vunit_out/tests/lib.entity.test_name/msim/vsim.wlf.
2. Launch with --gui=load and do:

log -r /*
vunit_run
dataset save sim sim.wlf
quit -sim
dataset open sim sim.wlf
We could automate this by adding a --gui=view flag to run a test case and open the .wlf with a viewer license afterwards (maybe even support GTKWave). The problem is how to specify what to log; we could add a flag for that, or a flag to point out a .do file.
We could wrap the three last TCL commands of this method, which save a dataset and re-open it without a simulation license, as a pre-loaded TCL procedure just like vunit_load and vunit_run. It could be named vunit_view.
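A minimal sketch of what such a vunit_view helper might run, expressed here as a Python string builder (the function name and parameters are assumptions for illustration; only the three TCL commands come from method 2 above):

```python
# Hypothetical helper that produces the TCL body a "vunit_view" procedure
# could execute: save the current dataset, quit the simulation (releasing
# the simulation license), then re-open the saved dataset for viewing.
def vunit_view_tcl(dataset="sim", wlf_file="sim.wlf"):
    commands = [
        f"dataset save {dataset} {wlf_file}",  # snapshot the logged waves
        "quit -sim",                           # drop the simulation license
        f"dataset open {dataset} {wlf_file}",  # re-open for viewing only
    ]
    return "\n".join(commands)

print(vunit_view_tcl())
```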
@BjornOlsson What do you think about the above? Any method you prefer?
Hi,
I think I have a similar issue as Bjorn had.
I am working on a project where I have a bunch of Modelsim projects for simulation of our new IP (written in VHDL), and I have a window of opportunity to refactor it to use something more automatic for build & test.
Our current workflow is:
I used OSVVM before, and wrote a couple of automatic tests, and it worked ok with scripting everything with TCL & occasionally visually inspecting the waveform.
That being said, I hate TCL and would very much like to replace it with Python, especially if there is a prospect of integrating some unit tests for our code.
I tried VUnit before, but my issue is that I didn't figure out a smart way to save the waveform so I can rerun test cases while looking at the same signals of interest, instead of having to add them again manually.
I would like to use VUnit to add some automatic tests, but I need to support the old way of doing things in ModelSim (10.5b), because full test automation is just not feasible at this point. But I would like to set up a full regression test suite as soon as possible.
I know Python and git, so I might be able to help if you point me in the right direction.
P.S. Should I open a new issue, or is it OK to continue this discussion here?
I reread your suggested methods 1 & 2, and I think I understood them this time. I have two problems with them:
This makes the approaches inferior to just having a ModelSim TCL file that automates everything.
@sthenc The contents of this old issue are outdated by now.
A more recent discussion is found in https://github.com/VUnit/vunit/issues/223
Your use case can most likely be solved by one of the existing modelsim TCL hooks we offer, such as modelsim.init_files.after_load, which you can read about here.
Let us know if you need more hooks and we will consider it, preferably in a new issue.
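For reference, a minimal sketch of using that hook. Since the hook file is plain TCL, it can be generated from Python; the file name, source path, and library name below are made-up examples, and the run.py part is shown as comments because it requires a VUnit installation:

```python
from pathlib import Path

# Generate a TCL hook that logs every signal once the design is loaded,
# so the vsim.wlf written under vunit_out/.../msim/ captures the full run.
# (The file name "log_all.do" is an assumption.)
Path("log_all.do").write_text("log -r /*\n")

# Sketch of the corresponding run.py (requires VUnit; not executed here):
# from vunit import VUnit
# vu = VUnit.from_argv()
# lib = vu.add_library("lib")
# lib.add_source_files("src/*.vhd")  # hypothetical path
# vu.set_sim_option("modelsim.init_files.after_load", ["log_all.do"])
# vu.main()
```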
@sthenc You can also join the Gitter Chat if you just want to discuss.
@kraigher Thanks, that works. I was already lurking in the Gitter chat; I will join.
For the benefit of others who might want to do the same thing, here is an example of what to put into run.py:
from vunit import VUnit

vu = VUnit.from_argv()
lib = vu.add_library("lib")
lib.add_source_files(... ... )
...
vu.set_sim_option("modelsim.init_file.gui", "wave.do")
vu.main()
Then run the run script like this:
python run.py --gui
It would be nice if it were possible to start a simulation and pass along a (pre-defined) waveform set, as a flag and attribute, to ModelSim, like:
python run.py --wave-gen
This would make ModelSim generate a .wlf file for the simulation and the signals defined in wave.do, and put this file under /msim in the test case lib. If needed, these waveforms could then be analyzed in the GUI after the simulation has ended (without actually tying up a ModelSim license).
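Until something like --wave-gen exists, a rough approximation (assuming the modelsim.init_files.after_load option mentioned earlier; the signal names and file names below are invented examples) is to generate a .do file that logs the signals of interest, run in batch mode, and open the per-test vsim.wlf afterwards:

```python
from pathlib import Path

# Hypothetical signals of interest; "log" lines make ModelSim record them
# into the per-test vsim.wlf even in batch (non-GUI) runs.
signals = ["/tb/clk", "/tb/reset", "/tb/dut/state"]
Path("wave.do").write_text("".join(f"log {s}\n" for s in signals))

# Batch run (no --gui), then view the saved waveform afterwards:
#   python run.py
#   vsim -view vunit_out/tests/lib.entity.test_name/msim/vsim.wlf
# where run.py sets:
#   vu.set_sim_option("modelsim.init_files.after_load", ["wave.do"])
```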