Closed: abasau closed this 6 years ago
Critical: 0 Warning: 0 Info: 0

| | Security | Defect | API | Anomaly | Rename | Lint | Info |
|---|---|---|---|---|---|---|---|
| Critical | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Warning | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Info | 0 | 0 | 0 | 0 | 0 | 0 | 0 |

| File | Critical | Warning | Info |
|---|---|---|---|
Hi @aliaksandrbasau, I made one more commit.
Waiting for your opinion before merging this.
@nvborisenko I don't think that writing exceptions to a file is a good idea. It hides the exception from the end user if the user doesn't know where to look. And if the error happens on CI, the user will not be notified about it right away. Even a local user may not notice issues with the RP integration. And it's possible (though unlikely) that there will be no permission to write to the current directory.

Another issue is not re-throwing the exception in SafeBindingInvoker.cs. It allows tests to continue running without RP integration. In my opinion, all tests should fail in case of issues with RP. On my project it doesn't make sense to run tests without RP integration, and I would assume that's true for many other projects where RP is the main and only repository for test results. I would rather fail fast and throw the exception to propagate it to the unit testing framework, which will show the exception to the end user and write it to its results file. If it happens on CI, the user will immediately receive an email notification that the run failed.

Writing exceptions to a file and not propagating them to the unit testing framework are not traditional ways to handle exceptions, and this way we would be making assumptions about the end user's workflow.
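For illustration, here is a minimal C# sketch of the two options being argued about; the class and method names are hypothetical, not the actual SafeBindingInvoker code:

```csharp
using System;
using System.IO;

// Minimal sketch of the two strategies discussed above; ReportPortalHook and
// StartLaunch are illustrative names, not the actual agent API.
public static class ReportPortalHook
{
    public static void BeforeTestRun()
    {
        try
        {
            StartLaunch(); // talks to the ReportPortal server
        }
        catch (Exception ex)
        {
            // Option A (current behaviour): swallow the error and log it to a
            // file, so tests keep running without RP integration.
            File.AppendAllText("rp-errors.log", ex + Environment.NewLine);

            // Option B (proposed fail-fast): rethrow so the unit testing
            // framework fails the run and shows the error to the end user.
            // throw;
        }
    }

    private static void StartLaunch()
    {
        // Placeholder for the real "start launch" call to the RP server.
        throw new NotImplementedException();
    }
}
```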
@DzmitryHumianiuk we need your opinion on this issue. Basically the question comes down to the following: is having tests executed (even without RP integration) more important than having RP integration? If yes, RP has to swallow all errors (e.g. incorrect config, unavailable server, network issues) and run the tests without RP integration. If no, the client should fail fast and fail the test run.
I have a strong opinion that reporting should not affect test execution. Tests are important. I see 2 improvements here:
@aliaksandrbasau @nvborisenko
I would agree with @nvborisenko here. Reporting is secondary to testing. Tests first.
But let's follow the same concept we follow for the Java integrations.
Send a simple request to check the heartbeat at https://rp/api/v1/heartbeat:

    if (!heartbeat) {
        print("EEEERRRORORORORORREE")
    }

and continue running tests.
In case you don't use any other reporting, just add a special flag, which can be populated via the config file.
Something like failOnRPUnavailable:
    if (!heartbeat) {
        if (failOnRPUnavailable) {
            exitRun
        } else {
            print("EEEERRRORORORORORREE")
        }
    }
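A rough C# sketch of that check, for reference — the /api/v1/heartbeat path and the failOnRPUnavailable flag come from the comment above, while the class name, method shape, and how the flag is obtained are assumptions:

```csharp
using System;
using System.Net.Http;

// Sketch of the heartbeat check suggested above. Everything except the endpoint
// path and the failOnRPUnavailable flag is assumed for illustration.
public static class RpAvailability
{
    public static bool Check(Uri rpBaseUri, bool failOnRPUnavailable)
    {
        bool heartbeat = false;

        using (var client = new HttpClient { BaseAddress = rpBaseUri })
        {
            try
            {
                var response = client.GetAsync("api/v1/heartbeat").GetAwaiter().GetResult();
                heartbeat = response.IsSuccessStatusCode;
            }
            catch (HttpRequestException)
            {
                // Server unreachable: treat it the same as a failed heartbeat.
            }
        }

        if (!heartbeat)
        {
            if (failOnRPUnavailable)
                throw new InvalidOperationException("ReportPortal is unavailable, aborting the test run.");

            Console.Error.WriteLine("ReportPortal is unavailable, continuing without reporting.");
        }

        return heartbeat;
    }
}
```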
@DzmitryHumianiuk Thanks! That's good to know (surprisingly). In our case it goes beyond checking the heartbeat. The intent was to identify configuration issues as well, which means we at least need to try to create a launch. But this is not relevant to this conversation; the most important thing is to understand the general approach.
@aliaksandrbasau let's make it valid for both groups.
Add a flag failOnRPIssue,
and if it's true,
then fail the test run.
If not, then continue running at any stage.
Does it make sense?
@DzmitryHumianiuk It makes perfect sense. I would rather call it something like integrationStrategy with two possible values: tests-first and results-first (the names are subject to change).
@nvborisenko What do you think?
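As a sketch of what such a property might look like on the agent side — only the property name and the two values come from this thread; the JSON shape and parsing code are assumptions, not the agent's real config format:

```csharp
using System;
using System.IO;
using System.Text.Json;

// Hypothetical sketch of reading the proposed property from a JSON config file.
public enum IntegrationStrategy
{
    TestsFirst,   // reporting problems are logged, tests keep running
    ResultsFirst  // reporting problems fail the test run
}

public static class IntegrationStrategyConfig
{
    public static IntegrationStrategy Read(string configPath)
    {
        using (var doc = JsonDocument.Parse(File.ReadAllText(configPath)))
        {
            if (doc.RootElement.TryGetProperty("integrationStrategy", out var value)
                && string.Equals(value.GetString(), "results-first", StringComparison.OrdinalIgnoreCase))
            {
                return IntegrationStrategy.ResultsFirst;
            }
        }

        // Default to tests-first: reporting never aborts the run.
        return IntegrationStrategy.TestsFirst;
    }
}
```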
What I understand:
Are we on the same page?
PS: I don't like to introduce a new configuration property. There are many flows that could be made configurable, and everybody wants to configure everything, but this makes the integration harder to understand.
@nvborisenko I understand your reluctance to add a new config property and even share it, but on the other hand we should not decide what's important for the end user. If we do decide it, it may lead to people abandoning the official implementation of the client. It would be a game-changer for me: I wouldn't want tests to run without RP integration.

Checking the heartbeat might not be enough to make sure that the config is okay (e.g. it doesn't validate the project name). A "dummy call" might work in this case (e.g. reading the list of launches) if it's executed in the scope of a project. As I understand it, all users can create launches and report results, so if the search succeeds then the config is correct.
- Before test execution, verify RP "availability" (make a dummy call). If available, start execution; if not, consider it a configuration error and throw an exception (abort execution).
- Reporting should not affect tests: all tests should be executed and the standard report should be populated (xml file, whatever). Then we should make a decision whether to abort the CI job or not (a property in the config file?).
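A hedged sketch of that "dummy call": read the list of launches for the configured project before the run starts, so a wrong URL, token, or project name is caught early. The endpoint path and bearer-token auth are assumptions about the ReportPortal API, not code from the agent:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

// Illustrative pre-run configuration check; names and endpoint are assumed.
public static class RpConfigCheck
{
    public static void Verify(Uri server, string project, string apiToken)
    {
        using (var client = new HttpClient { BaseAddress = server })
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", apiToken);

            var response = client
                .GetAsync($"api/v1/{project}/launch?page.size=1")
                .GetAwaiter().GetResult();

            if (!response.IsSuccessStatusCode)
            {
                // A wrong URL, token, or project name all end up here.
                throw new InvalidOperationException(
                    $"ReportPortal configuration check failed: {(int)response.StatusCode} {response.ReasonPhrase}");
            }
        }
    }
}
```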
I thought we were not going to abort execution in any case, even if the configuration file is incorrect. (If we are talking about the tests-first strategy.)
Let's make a short call to discuss this situation. Generally I prefer a unified approach across all agents. Currently nunit/xunit/vstest just output reporting errors to the console without affecting test execution.
@aliaksandrbasau @nvborisenko folks, did you find a common solution that suits both?
@DzmitryHumianiuk Not yet. We didn't find time to discuss it.
@aliaksandrbasau @nvborisenko book a call for you two? :)
@DzmitryHumianiuk Working as a part-time secretary? :) I will let it slide this time :) Just don't do it in the future.
@aliaksandrbasau pushing things to be done :)
Again I forgot what this is about. I will start over from a clean slate if there is no news from @aliaksandrbasau (Alex). We decided to write out all HTTP error messages at the end of test execution and not affect the test runner.
@nvborisenko Yep, that's what was decided. And RP integration will not affect test runs. I will be able to start working on the issue in a couple of weeks.
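A minimal sketch of that decision: buffer reporting errors during the run and print them after the last test, so the test runner is never affected. Class and method names are illustrative only, not the agent's real types:

```csharp
using System;
using System.Collections.Concurrent;

// Collects RP reporting errors during the run and dumps them once at the end.
public static class ReportingErrorBuffer
{
    private static readonly ConcurrentQueue<string> Errors = new ConcurrentQueue<string>();

    // Called wherever an RP request fails during the run.
    public static void Record(Exception ex) => Errors.Enqueue(ex.ToString());

    // Intended to be called from an AfterTestRun-style hook, after the last test.
    public static void Flush()
    {
        foreach (var error in Errors)
            Console.Error.WriteLine(error);
    }
}
```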
@aliaksandrbasau please switch to the upcoming branch releases/1.3.0 for experimenting. Thanks. And it seems this is an issue on the SpecFlow side: https://github.com/techtalk/SpecFlow/issues/1269
Resolves #27. SafeBindingInvoker swallowed exceptions in BeforeTestRun, BeforeFeature, AfterFeature, and AfterTestRun hooks.