In GitLab by @podhrmic on Nov 17, 2020, 16:58
FYI @zutshi @EthanJamesLew @bauer-matthews
In GitLab by @bauer-matthews on Nov 20, 2020, 08:20
Thanks, @podhrmic!
The static tests look pretty much in line with what Aditya presented. Do you foresee big challenges in making the abstraction possible? What would the abstraction look like? A couple of other thoughts:
In GitLab by @podhrmic on Jul 13, 2021, 14:01
@zutshi is this still relevant, or can we close and re-prioritize?
Superseded by #117 and #118
In GitLab by @podhrmic on Nov 17, 2020, 16:58
Hi, this issue is to summarize what I was trying to say on Monday.
After talking with Ethan, I realized that the notion of "inheritance" doesn't really work in our case, because we want the components to stay very independent.
Static tests
For static tests, we need to augment the system-under-test (SUT) with a signal generator.
That is done in the TOML config file, such as f16_llc_analyze_config.toml.
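For context, the wiring might look roughly like the sketch below. This is a hypothetical illustration; the section and key names are assumptions, not the actual f16_llc_analyze_config.toml schema.

```toml
# Hypothetical sketch -- the real schema in f16_llc_analyze_config.toml may
# differ. The idea: declare a signal-generator component and wire its output
# into the SUT's input.
[components.signal_generator]
type = "generator"              # assumed key: produces the test stimulus
signal = "step"                 # e.g. step / ramp / sine
amplitude = 1.0

[components.sut]
type = "system"                 # the system-under-test
sub = ["signal_generator"]      # assumed key: subscribe to the generator output
```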
The next step is to create a job config file and describe your tests, such as:
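A hypothetical sketch of what such a job config might contain (all key names are assumptions, not the actual contents of f16_job_conf_static_tests.toml):

```toml
# Hypothetical f16_job_conf_static_tests.toml sketch -- key names assumed.
[job]
system = "f16_llc_analyze_config.toml"   # the augmented SUT config

[tests.step_response]
function = "test_max_overshoot"          # a test defined in src/tests_static.py
threshold = 0.1                          # allowed overshoot (10%)
```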
The available tests are in src/tests_static.py.
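For illustration, a static test in that file might look roughly like the following. This is a minimal sketch, assuming a recorded output trace and a reference value; test_max_overshoot and its arguments are hypothetical names, not the actual contents of src/tests_static.py.

```python
import numpy as np

def test_max_overshoot(trace, reference, threshold=0.1):
    """Hypothetical static test: pass if the output trace never
    overshoots the reference by more than `threshold` (default 10%)."""
    overshoot = (np.max(trace) - reference) / abs(reference)
    return overshoot <= threshold, overshoot
```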
Now you can run the tests, for example:
./run-csaf.sh -e f16-llc-analyze -f f16_job_conf_static_tests.toml
The results are currently only printed on screen and logged, but they can be visualized in a similar way to the notebook.
A library abstracting over this is certainly possible.
Dynamic tests
IMHO what we have in falsify_bopt.py is a good prototype: in short, specify an objective function and call a standard optimization algorithm. Again, a library of different objective functions can be made, and then we can easily search for and falsify different conditions.
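A minimal sketch of that pattern follows. It is not the actual falsify_bopt.py (which uses Bayesian optimization); it swaps in scipy's differential evolution as the standard optimizer, and simulate, objective, and the bounds are hypothetical stand-ins for a real SUT run and property.

```python
import numpy as np
from scipy.optimize import differential_evolution

def simulate(x0):
    # Hypothetical stand-in for a real SUT simulation (e.g. a CSAF run).
    # Returns an altitude trace for the initial condition x0.
    t = np.linspace(0.0, 10.0, 200)
    return 1000.0 + x0[0] * t - 0.5 * x0[1] * t**2

def objective(x0):
    # Robustness of the property "altitude stays above 0": the minimum
    # altitude over the trace. A negative value means x0 falsifies it.
    return float(np.min(simulate(x0)))

# Hypothetical search bounds over the initial-condition space.
bounds = [(-50.0, 50.0),  # initial climb rate
          (0.0, 30.0)]    # a drag-like parameter

result = differential_evolution(objective, bounds, seed=0, maxiter=50)
if result.fun < 0:
    print(f"Falsified: x0={result.x}, min altitude={result.fun:.2f}")
else:
    print(f"No counterexample; best robustness={result.fun:.2f}")
```

Swapping in a different objective function then lets the same driver search for different conditions, which is the library idea above.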