Closed: rpl-ffl closed this pull request 2 weeks ago.
- Do I understand that this script is run as part of pull requests? I guess that is not what we want; we want an extra script for end-to-end tests.

The stage was added to CI (end-to-end) and RT (targeting dynamic validator scenarios, not important for our milestones right now).
- I am not a fan of scripts in pipelines. I think pipelines should be declarative and straightforward, it should be easy to understand what they execute, and their behaviour should not change when, for example, a file is removed from the test dir; it should recognise that the file is missing.

I hear you. My current thinking would be to add a hard-coded list of baseline tests into the stage. Would this address your concern?
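For illustration only, a minimal sketch of what such a hard-coded list could look like in a scripted Jenkinsfile stage; the variable name and the commented-out entry are placeholders, not the actual implementation:

```groovy
// Hypothetical sketch: baseline scenarios listed explicitly instead of globbing the test dir.
def baselineScenarios = [
    'scenarios/test/baseline_check.yml'
    // 'scenarios/demonet/dynamic.yml'   // would be added here once enabled
]

node {
    stage('Baseline scenarios') {
        baselineScenarios.each { scenario ->
            // One explicit run per listed scenario; a missing file makes the step fail.
            sh "norma run ${scenario}"
        }
    }
}
```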
- Is just running norma run enough to see that the test has happened? How is it recognised if the test scenario does not run correctly?

Added CatchError on each baseline scenario run. Would you find it helpful if we add a scenario that fails, to see whether the pipeline would catch it?
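For context, a minimal sketch of how the catchError step is typically used in a Jenkinsfile; this is illustrative only, not the exact code in the PR:

```groovy
node {
    stage('baseline_check') {
        // catchError marks the build (and stage) as FAILURE but lets later steps continue.
        catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
            sh 'norma run scenarios/test/baseline_check.yml'
        }
        // Steps placed here would still run even if the scenario above failed.
    }
}
```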
> Do I understand that this script is run as part of pull requests? I guess that is not what we want; we want an extra script for end-to-end tests.
>
> The stage was added to CI (end-to-end) and RT (targeting dynamic validator scenarios, not important for our milestones right now).

I do not know what these pipelines are supposed to do. In other projects we have just one pipeline that is executed for pull requests, and all other pipelines are in the LaScala project. I am not saying it is the best approach, but it was somehow decided this way. To be consistent, I think we should follow this pattern. In particular, we would like to create an end-to-end test, i.e. a pipeline in LaScala.
> I am not a fan of scripts in pipelines. I think pipelines should be declarative and straightforward, it should be easy to understand what they execute, and their behaviour should not change when, for example, a file is removed from the test dir; it should recognise that the file is missing.
>
> I hear you. My current thinking would be to add a hard-coded list of baseline tests into the stage. Would this address your concern?

Yes, I think it is better to repeat the execution of Norma a few times. In the end, I guess we will run 1-2 test pipelines, right? I do not see why we would have more. We can have a test scenario that represents all possible configuration variants in one file. Only if there were configurations that contradict each other would we need more scenario files.
> Is just running norma run enough to see that the test has happened? How is it recognised if the test scenario does not run correctly?
>
> Added CatchError on each baseline scenario run. Would you find it helpful if we add a scenario that fails, to see whether the pipeline would catch it?

If I understand CatchError correctly, it catches a possible execution failure, allows for handling it, and continues execution. I think we do not need this. My question was more about this: let's say a scenario runs, everything works well, the script does not crash, but, for instance, the check of block hashes at the end of execution fails. Would this make the job fail, or succeed?
> If I understand CatchError correctly, it catches a possible execution failure, allows for handling it, and continues execution. I think we do not need this. My question was more about this: let's say a scenario runs, everything works well, the script does not crash, but, for instance, the check of block hashes at the end of execution fails. Would this make the job fail, or succeed?

The job would fail.
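To make the failure path concrete: assuming norma run reports such a failed end-of-run check through a non-zero exit code (an assumption, not something confirmed in this thread), a plain sh step without catchError is enough to fail the whole job:

```groovy
node {
    stage('baseline_check') {
        // If norma run exits non-zero (e.g. the block-hash check fails at the end),
        // the sh step throws, the stage fails, and the build is marked FAILURE.
        sh 'norma run scenarios/test/baseline_check.yml'
    }
}
```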
> I do not know what these pipelines are supposed to do. In other projects we have just one pipeline that is executed for pull requests, and all other pipelines are in the LaScala project. I am not saying it is the best approach, but it was somehow decided this way. To be consistent, I think we should follow this pattern. In particular, we would like to create an end-to-end test, i.e. a pipeline in LaScala.

CI.jenkinsfile is to be moved to LaScala. This will be the end-to-end test for Norma. This is consistent with other projects.
> Yes, I think it is better to repeat the execution of Norma a few times. In the end, I guess we will run 1-2 test pipelines, right? I do not see why we would have more. We can have a test scenario that represents all possible configuration variants in one file. Only if there were configurations that contradict each other would we need more scenario files.

Let me clarify:

- We want to execute as few Norma scenario runs as possible during the end-to-end test. Originally, we were targeting demonet/dynamic.yml, demonet/slope.yml, demonet/static.yml = 3 runs during the end-to-end test.
- Given the current state of Norma, we disabled the 3 targets above and only target test/baseline_check.yml = 1 run during the end-to-end test. We want to create a new file baseline_test that lists all the target scenarios for the end-to-end test.
- When we want to enable, say, demonet/dynamic.yml, we will then add it to baseline_test = 2 runs during the end-to-end test.

Am I missing anything?
Yes, sounds good to me. Just, is the baseline_test file the new pipeline? If yes, perhaps call it baseline.jenkinsfile.
> Yes, sounds good to me. Just, is the baseline_test file the new pipeline? If yes, perhaps call it baseline.jenkinsfile.
This is implemented in the same pipeline = LaScala's norma/CI.Jenkinsfile. baseline_test will be a variable containing the list of target scenarios, defined for the entire pipeline.

```
// after "make test" succeeded
stage("run x, y, z scenarios in parallel") {
    - create a new node for each scenario x, y, z
    - for each new node: build, gofmt check, norma run {x,y,z}
    - if the run in any node fails, return FAIL and terminate all other runs
    - the stage passes only when all nodes pass, else FAIL
}
```
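For illustration, one way such a stage could look in a scripted Jenkinsfile is a parallel block with failFast, so that one failing scenario aborts the remaining runs; the list contents and shell commands below are assumptions made for the sketch, not the actual LaScala code:

```groovy
// Hypothetical sketch of the parallel baseline stage outlined above.
def baseline_test = [
    'scenarios/test/baseline_check.yml'
    // 'scenarios/demonet/dynamic.yml'   // added once that scenario is enabled
]

stage('Run baseline scenarios in parallel') {
    def branches = [failFast: true]         // abort the other runs if one fails
    baseline_test.each { scenario ->
        branches[scenario] = {
            node {                          // a fresh node per scenario
                checkout scm
                sh 'make'                                 // build
                sh 'test -z "$(gofmt -l .)"'              // gofmt check
                sh "norma run ${scenario}"                // non-zero exit fails this branch
            }
        }
    }
    parallel branches                       // the stage passes only if all branches pass
}
```

Here failFast plays the role of "if the run in any node fails, terminate all other runs" from the outline above.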
@kjezek I believe the current state of this PR addresses your concern. CI.jenkinsfile and RT.jenkinsfile are removed from this repo and added to LaScala (now as a PR). This pipeline will be the end-to-end pipeline for Norma. Only PR.jenkinsfile will remain in this repo for PR testing purposes. This is consistent with other projects.

If the baseline scenario (scenarios/test/baseline_check.yml) fails, then the pipeline results in failure.

Furthermore, this code should go to a new pipeline, which is stored in the LaScala repo and triggered from Jenkins in the Norma folder. So I guess we are not about to merge this PR.
We now have 2 pipelines:

An additional test stage is added to all jenkinsfiles. This stage runs all scenarios under scenarios/test to make sure that the baseline tests pass.