tangxifan opened this issue 3 years ago
We are working on it.
CI setup is up on the master branch for the 4 tools, i.e., Yosys, Vivado, Quartus, and Lattice Diamond.
All the tools run independently on the self-hosted runners and check whether the added/modified benchmark is synthesizable or not.
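For context, a minimal sketch of what such a per-tool fan-out could look like as a GitHub Actions workflow; the workflow name and the `scripts/run_synthesis.py` entry point are placeholders, not the repo's actual files:

```yaml
# Hypothetical workflow: fan the four tools out as a matrix onto
# the self-hosted runners; the script path is an assumed placeholder.
name: benchmark-synthesis
on: [pull_request]
jobs:
  synthesize:
    runs-on: self-hosted
    strategy:
      fail-fast: false   # let every tool report its result, even if one fails
      matrix:
        tool: [yosys, vivado, quartus, diamond]
    steps:
      - uses: actions/checkout@v4
      - name: Synthesize added/modified benchmarks
        run: python3 scripts/run_synthesis.py --tool ${{ matrix.tool }}
```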
@komaljaved-rs Thanks for the updates. I just took a quick look and there are several issues:

- We configured the CI to run on any pull requests to master and cga_test. Later, should we constrain it to only the master branch?
- I see ac97 is used. That makes sense to me. However, the log files are very long. I see you have Python scripts to run the tests. Can you redirect the tool output to log files? In the CI log, we just need to see if a benchmark is o.k. or not. For example: [screenshot]

@tangxifan Rules were set correctly before. I have set them again.
Yes, I'll remove the cga_test branch from the CI tests once the whole flow is set up. The cga_test branch is kept for testing purposes.
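For what it's worth, once cga_test is retired the trigger block could be narrowed to something like this (a sketch, not the repo's actual workflow):

```yaml
# Assumed trigger: run CI only on pull requests targeting master.
on:
  pull_request:
    branches:
      - master
```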
OK, I'll turn off the log display to avoid lengthy logs.
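A minimal sketch of one way to do that redirection, building on the matrix sketch above; the script name and the logs/ directory are placeholders:

```yaml
# Hypothetical step: send the verbose tool output to a per-tool log file
# and keep only a one-line PASS/FAIL in the CI console.
- name: Synthesize with ${{ matrix.tool }}
  run: |
    mkdir -p logs
    if python3 scripts/run_synthesis.py --tool ${{ matrix.tool }} > logs/${{ matrix.tool }}.log 2>&1; then
      echo "PASS: ${{ matrix.tool }}"
    else
      echo "FAIL: ${{ matrix.tool }} (see logs/${{ matrix.tool }}.log)"
      exit 1
    fi
```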
@laraibkhan-rs I think you are going in the right direction. However, I cannot see any pull request when you push changes to the master branch. Does it mean there is still someone who can directly modify the master branch without going through code review?
@tangxifan It had been deactivated while we were making some small changes to design renaming and scripting. I have set the rule again and a pull request is created. You can check it out.
@laraibkhan-rs Understood. This makes sense to me. Since we now have CI, we should accept pull requests as the only way to modify the master branch.
I think this issue is close to being addressed. The remaining work is on:

- Ensure all the benchmarks are covered in CI

After the CI setup, we should start developing the reports.
@tangxifan Can you please specify what reports you want to be generated? Also, right now, whenever a benchmark is added to the master branch, it is synthesized on the following tools:

- Yosys
- Vivado
- Quartus
- Lattice Diamond

And the update is only merged if synthesis is successful on all tools.
And secondly, by 'Ensure all the benchmarks are covered in CI', do you mean that all designs have to be synthesized once a new design is added, or only the design being added? Currently, all designs in the repo have been checked manually.
@komaljaved-rs
@tangxifan The CI setup to synthesize all the benchmarks present in the repo is up and running. It can be triggered at any time by the user, or we can set it up to run monthly.
@komaljaved-rs I have not seen any log file from the CI run over all the benchmarks on the GitHub Actions page.
Since this is an RS server, I believe we can set it to run weekly. This is to ensure that any environmental changes on the RS server will not impact the benchmarks.
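For reference, both trigger modes can coexist in one workflow; a sketch with an assumed Monday 02:00 UTC slot:

```yaml
# Assumed triggers: manual runs on demand, plus a weekly full sweep.
on:
  workflow_dispatch:        # "triggered any time by the user"
  schedule:
    - cron: '0 2 * * 1'     # every Monday at 02:00 UTC (placeholder slot)
```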
@laraibkhan-rs @komaljaved-rs I just checked your pull requests. It looks like you are addressing these issues well, following incremental changes. However, in the pull requests, you never mention which issues are being addressed. It is better to mention them so that we can keep track in this issue of what has been done, and then decide if it is enough to close this issue. Can you do this for all your previous pull requests?
Here is an example: [screenshot]
@tangxifan The screenshot shown above is for the Diamond test that is triggered on every pull request and synthesizes only added/modified test cases (if there are any). As there was no new benchmark added, it gives the "file exists but is empty" message, but this file contains the configs of added/modified test cases. I think it's better to trigger it only if there is an added/modified test case in the pull request.
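One way to get that behavior is a paths filter on the trigger, so the job only fires when benchmark files change; the benchmarks/ directory name is an assumption about the repo layout:

```yaml
# Sketch: skip the workflow entirely when no benchmark file is touched.
on:
  pull_request:
    paths:
      - 'benchmarks/**'
```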
All the benchmarks are running in this CI setup.
I have not displayed the log files; as you suggested, I redirected the output to log files and only display whether the benchmark ran successfully or not. Can you please explain where you want to see the log files?
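One common place to surface them is the workflow-run page itself, by publishing the log directory as an artifact; a sketch using the stock upload-artifact action, with the logs/ path carried over from the placeholder above:

```yaml
# Sketch: make the per-benchmark log files downloadable from the
# workflow-run page instead of printing them in the console.
- name: Upload synthesis logs
  if: always()                     # keep logs even when a benchmark fails
  uses: actions/upload-artifact@v4
  with:
    name: synthesis-logs
    path: logs/
```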
@komaljaved-rs Thanks for the updates. I understand the situation here. Here are my comments:
Problem Definition
Currently, the GitHub Action is only set up for notifying the gap analysis repo: https://github.com/RapidSilicon/Gap-Analysis. However, for the RTL benchmarks, CI is needed to certify them by ensuring:

- If any benchmark cannot be compiled as-is, CI should report an error and we can assign engineers to fix the issue.
- If any benchmark cannot pass HDL simulation, CI should report an error and we can assign engineers to fix the issue (one possible simulation step is sketched below).
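As an illustration of the second check, a simulation gate could be a step like the following, assuming an open-source simulator such as Icarus Verilog is installed on the runner; the file names are placeholders:

```yaml
# Illustrative HDL simulation gate (design.v and tb.v are placeholder names).
- name: Run HDL simulation
  run: |
    iverilog -o bench.vvp design.v tb.v   # compile design plus testbench
    vvp bench.vvp                          # a non-zero exit (e.g. via $fatal) fails the job
```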
Possible Solution
GitHub runner set-up:

- How many GitHub runners are needed? (One possible per-tool split is sketched below.)
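One possible answer, assuming one self-hosted runner per licensed tool host, is to route jobs by runner labels; the label names below are assumptions:

```yaml
# Sketch: per-tool runners selected by labels.
jobs:
  vivado:
    runs-on: [self-hosted, vivado]    # runner registered on the Vivado host
    steps:
      - run: echo "Vivado job runs here"
  quartus:
    runs-on: [self-hosted, quartus]   # runner registered on the Quartus host
    steps:
      - run: echo "Quartus job runs here"
```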