reportportal / agent-net-specflow

Report Portal agent for SpecFlow
Apache License 2.0

Resolve issue where the plugin swallowed exceptions in test-run and feature hooks #28

Closed: abasau closed this 6 years ago

abasau commented 6 years ago

Resolves #27. SafeBindingInvoker swallowed exceptions in BeforeTestRun, BeforeFeature, AfterFeature, and AfterTestRun hooks.

ghost commented 6 years ago

DeepCode found the following issues:

Critical: 0, Warning: 0, Info: 0 — no findings in any category (Security, Defect, API Anomaly, Rename, Lint, Info) and no affected files.

See detailed analysis.

nvborisenko commented 6 years ago

Hi @aliaksandrbasau, I made one more commit.

Waiting for your opinion before merging this.

abasau commented 6 years ago

@nvborisenko I don't think that writing exceptions to a file is a good idea. It hides the exception from the end user if the user doesn't know where to look. And if the error happens on CI, the user will not be notified about it right away. Even a local user may not notice issues with the RP integration. It's also possible that there will be no permission to write to the current directory (but that's unlikely).

Another issue is not re-throwing the exception in SafeBindingInvoker.cs. It allows tests to continue running without RP integration. In my opinion, all tests should fail in case of issues with RP. On my project it doesn't make sense to run tests without RP integration, and I would assume that's true for many other projects where RP is the main and only repository for test results. I would rather fail fast and throw the exception to propagate it to the unit testing framework, which will show the exception to the end user and write it to its results file. If it happens on CI, the user will immediately receive an email notification that the run failed.

Writing exceptions to a file and not propagating them to the unit testing framework are not traditional ways to handle exceptions, and this way we would be making assumptions about the end user's workflow.

abasau commented 6 years ago

@DzmitryHumianiuk we need your opinion on this issue. Basically, the question comes down to the following: is having tests executed (even without RP integration) more important than having RP integration? If yes, RP has to swallow all errors (e.g. incorrect config, unavailable server, network issues) and run the tests without RP integration. If no, the client should fail fast and fail the test run.

nvborisenko commented 6 years ago

I have a strong opinion that reporting should not affect test execution. Tests are important. I see 2 improvements here:

DzmitryHumianiuk commented 6 years ago

@aliaksandrbasau @nvborisenko

I would agree with @nvborisenko here. Reporting is secondary to testing. Tests first.

But let's follow the same concept we follow for the Java integrations: send a simple request to check the heartbeat at https://rp/api/v1/heartbeat.

if (!heartbeat) {
    print("ERROR: Report Portal is unavailable");
}

and continue running tests.

In case you don't use any other reporting, just add a special flag, which can be populated via the config file, something like failOnRPUnavailable:

if (!heartbeat) {
    if (failOnRPUnavailable) {
        exitRun();
    } else {
        print("ERROR: Report Portal is unavailable");
    }
}
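The branching above can be sketched as a small runnable function. Python is used purely for illustration here (the agent itself is C#/.NET), and the names `heartbeat_ok` and `fail_on_rp_unavailable` are hypothetical, mirroring the pseudocode:

```python
def handle_heartbeat(heartbeat_ok: bool, fail_on_rp_unavailable: bool) -> str:
    """Decide what to do after the Report Portal heartbeat check.

    Returns "continue", "abort", or "warn-and-continue".
    All names are illustrative; the real agent is C#/.NET.
    """
    if heartbeat_ok:
        return "continue"        # RP reachable: report as usual
    if fail_on_rp_unavailable:
        return "abort"           # user explicitly opted in to fail fast
    # Default "tests first" behavior: warn loudly but keep running tests.
    print("ERROR: Report Portal is unavailable; running tests without reporting")
    return "warn-and-continue"
```

The key design point is that aborting is opt-in: without the flag, an unreachable server only produces a console error.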
abasau commented 6 years ago

@DzmitryHumianiuk Thanks! That's good to know (surprisingly). In our case it goes beyond checking the heartbeat: the intent was to identify configuration issues as well, which means we at least need to try to create a launch. But this is not relevant to this conversation; the most important thing is to agree on the general approach.

DzmitryHumianiuk commented 6 years ago

@aliaksandrbasau let's make it valid for both groups.

Add a flag failOnRPIssue: if true, then fail the test run; if not, then continue running at any stage.

Does it make sense?

abasau commented 6 years ago

@DzmitryHumianiuk It makes perfect sense. I would rather call it something like integrationStrategy with two possible values: tests-first and results-first (the names are subject to change). @nvborisenko What do you think?
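As a sketch only (the thread never finalized the property name or values, and the surrounding keys are hypothetical), such a setting might look like this in the agent's config file:

```json
{
  "server": {
    "url": "https://rp/api/v1",
    "project": "my-project"
  },
  "integrationStrategy": "tests-first"
}
```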

nvborisenko commented 6 years ago

What I understand:

  1. Before test execution, verify RP "availability" (make a dummy call). If available, start execution; if not, consider it a configuration error and throw an exception (abort execution).
  2. Reporting should not affect tests: all tests should be executed and the standard report should be populated (XML file, whatever). Then we should make a decision whether to abort the CI job or not (a property in the config file?).

Are we on the same page?

PS: I don't like introducing a new configuration property. There are many flows that could be made configurable, and everybody wants to configure everything, but that makes the integration harder to understand.

abasau commented 6 years ago

@nvborisenko I understand your reluctance to add a new config property and even share it, but on the other hand, we should not decide what's important for the end user. If we do decide it, it may lead to people abandoning the official implementation of the client. It would be a game-changer for me: I wouldn't want tests to run without RP integration.

Checking the heartbeat might not be enough to make sure that the config is okay (e.g. it doesn't validate the project name). A "dummy call" might work in this case (e.g. reading the list of launches) if it's executed in the scope of a project. As I understand it, all users can create launches and report results, so if the search succeeds then the config is correct.

  1. Before test execution, verify RP "availability" (make a dummy call). If available, start execution; if not, consider it a configuration error and throw an exception (abort execution).
  2. Reporting should not affect tests: all tests should be executed and the standard report should be populated (XML file, whatever). Then we should make a decision whether to abort the CI job or not (a property in the config file?).

I thought we were not going to abort execution in any case, even if the configuration file is incorrect (if we are talking about the tests-first strategy).
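The project-scoped "dummy call" idea could be as simple as listing launches under the configured project: if that request returns a success status, both the server and the project name in the config are usable. A minimal sketch, assuming an RP-v1-style endpoint shape (Python for illustration only; the agent itself is C#/.NET, and both helper names are hypothetical):

```python
from urllib.parse import quote


def launch_list_url(base_url: str, project: str) -> str:
    """Build the project-scoped URL for the configuration 'dummy call'.

    A GET on this URL that succeeds suggests the server is reachable
    and the project name from the config is valid.
    """
    return f"{base_url.rstrip('/')}/api/v1/{quote(project)}/launch"


def config_is_valid(status_code: int) -> bool:
    # Any 2xx answer on the dummy call means the config looks usable.
    return 200 <= status_code < 300
```

Keeping URL construction separate from the HTTP call makes the validation logic easy to unit-test without a live server.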

nvborisenko commented 6 years ago

Let's make a short call to discuss this situation. Generally I prefer a unified way across all agents. Currently the nunit/xunit/vstest agents just output reporting errors to the console without affecting test execution.

DzmitryHumianiuk commented 6 years ago

@aliaksandrbasau @nvborisenko folks, did you find a common solution, suitable for both?

abasau commented 6 years ago

@DzmitryHumianiuk Not yet. We didn't find time to discuss it.

DzmitryHumianiuk commented 6 years ago

@aliaksandrbasau @nvborisenko book a call for you two? :)

abasau commented 6 years ago

@DzmitryHumianiuk Working as a part-time secretary? :) I will let it slide this time :) Just don't do it in the future.

DzmitryHumianiuk commented 6 years ago

@aliaksandrbasau pushing things to be done :)

nvborisenko commented 6 years ago

Again I forgot what this is about. I will start from scratch if there is no news from @aliaksandrbasau. We decided to write out all HTTP error messages at the end of test execution and not affect the test runner.

abasau commented 6 years ago

@nvborisenko Yep, that's what was decided, and RP integration will not affect test runs. I will be able to start working on the issue in a couple of weeks.
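The agreed behavior (collect reporting errors during the run, print them at the end, never fail the runner) could be sketched like this. Python for illustration only; the class and method names are hypothetical, not the agent's actual API:

```python
class BufferedReporter:
    """Collects reporting errors during the run and flushes them afterwards,
    so a Report Portal outage never fails or interrupts the test run."""

    def __init__(self):
        self.errors = []

    def try_report(self, send):
        """Run a reporting action; on failure, record the error and move on."""
        try:
            send()
        except Exception as exc:  # never let reporting break the runner
            self.errors.append(str(exc))

    def flush(self):
        """Print all buffered reporting errors after the test run finishes."""
        for message in self.errors:
            print(f"Report Portal error: {message}")
        return len(self.errors)
```

A hypothetical usage: wrap every call into the RP client with `try_report`, then call `flush` from an after-test-run hook so the errors are visible in the console without having affected any test outcome.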

nvborisenko commented 6 years ago

@aliaksandrbasau please switch to the upcoming branch releases/1.3.0 for experimenting. Thanks. And it seems this is an issue on the SpecFlow side: https://github.com/techtalk/SpecFlow/issues/1269