TOPdesk / dart-junitreport

An application to generate JUnit XML reports from dart test runs.
https://pub.dartlang.org/packages/junitreport
MIT License

Adding stack traces in xml_reports. #14

Open malonaz opened 5 years ago

malonaz commented 5 years ago

Hey, thank you for your help, this is a great tool! I was wondering if you could add stack trace data to the report? Thank you.

<testcase classname="vault.range" name="CreateAccount()" time="0.062">
  <failure message="'5' != 1" type="AssertionError">
    <![CDATA[Traceback (most recent call last):
      package:test_api               expect
      test/range/range_test.dart 6:5 main.<fn>]]>
  </failure>
</testcase>
rspilker commented 5 years ago

Thanks for reaching out.

I don't exactly understand the request. It would help if you could give a concrete example, preferably including source code, to demonstrate the problem you want to solve and how you would expect it to end up in the XML reports.

raubreysmith commented 5 years ago

@rspilker Looking at the `dart --reporter json` and `flutter --machine` output, there is a `stackTrace` property:

{"testID":3,"error":"Expected: <false>\n  Actual: <true>\nmy failure reason\n","stackTrace":"package:test_api         expect\ntest\\main.test.dart 5:5  main.<fn>\n","isFailure":true,"type":"error","time":1171}
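To make the request concrete: the idea is to carry that `stackTrace` field from the JSON event through into the JUnit XML `<failure>` element. Below is a minimal, hypothetical Python sketch of that mapping (the real package is Dart; `failure_element` and its output shape are illustrative assumptions, only the JSON field names `error`, `stackTrace`, and `isFailure` come from the event above):

```python
import json
from xml.sax.saxutils import quoteattr

# One `dart test --reporter json` error event, as quoted in the comment above.
event = json.loads(
    '{"testID":3,"error":"Expected: <false>\\n  Actual: <true>\\nmy failure reason\\n",'
    '"stackTrace":"package:test_api         expect\\ntest/main_test.dart 5:5  main.<fn>\\n",'
    '"isFailure":true,"type":"error","time":1171}'
)

def failure_element(event):
    # JUnit convention: a failed assertion maps to <failure>,
    # an unexpected exception maps to <error>.
    tag = "failure" if event.get("isFailure") else "error"
    # Use the first line of the error text as the message attribute.
    message = quoteattr(event["error"].splitlines()[0])
    # Embed the stack trace verbatim as CDATA so it survives XML escaping.
    trace = event.get("stackTrace", "")
    return f"<{tag} message={message}><![CDATA[{trace}]]></{tag}>"

print(failure_element(event))
```

The key point is only that the `stackTrace` string is available in the machine-readable output and could be passed through to the report.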

Version

pub -v
FINE: Pub 2.3.0
buntagonalprism commented 3 years ago

Pretty old issue, but it looks like the stack trace was being deliberately excluded from the JUnit output when a test failed an expect() call. The stack trace was only included when the test threw an unexpected exception.

I've raised a pull request to fix this behaviour. I don't see any reason why stack traces should be excluded from test failures: they provide useful information, especially in large projects with lots of tests and test helper methods.
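The behaviour described above can be sketched as a single conditional (this is an illustrative Python sketch, not the package's actual Dart code; `stack_trace_for` and its parameters are hypothetical names):

```python
def stack_trace_for(event, include_failures=True):
    """Return the stack trace to emit for a test event, or "" to suppress it.

    `event` is a dict shaped like a `dart test --reporter json` error event:
    `isFailure` is True for a failed expect(), False for an unexpected error.
    """
    is_assertion_failure = event.get("isFailure", False)
    if is_assertion_failure and not include_failures:
        # Old behaviour: traces were suppressed for failed expect() calls
        # and only emitted for unexpected exceptions.
        return ""
    # Proposed behaviour: emit the trace in both cases.
    return event.get("stackTrace", "")

failed_expect = {"isFailure": True, "stackTrace": "test/foo_test.dart 5:5 main.<fn>"}
print(stack_trace_for(failed_expect, include_failures=False))  # old: suppressed
print(stack_trace_for(failed_expect))                          # proposed: kept
```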

rspilker commented 2 years ago

I'm a bit on the fence. The reason is that in most cases, they only clutter up the output. I understand that if you have a lot of test helper classes and the actual expect is not being done in the test method itself, it might be helpful.

However, I think in that case you're doing it wrong. I'm okay with having some helper code, but the actual comparison or verification of exceptions should always be done in the test function.

So now you're asking me to add clutter to the output for those developers that are using best practices. I have a problem with that.

I am considering adding a boolean opt-in command line parameter for also adding stack traces on failures.
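For illustration, such an opt-in switch could look like the following Python `argparse` sketch (the flag name `--failure-stack-traces` is hypothetical, not an actual junitreport option; the package itself is written in Dart):

```python
import argparse

parser = argparse.ArgumentParser(prog="tojunit")
parser.add_argument(
    "--failure-stack-traces",
    action="store_true",  # defaults to False, so existing output is unchanged
    help="also include stack traces for failed expect() calls",
)

# Default run: traces on failures stay suppressed.
default_args = parser.parse_args([])
# Opt-in run: traces on failures are included.
opt_in_args = parser.parse_args(["--failure-stack-traces"])
print(default_args.failure_stack_traces, opt_in_args.failure_stack_traces)
```

A `store_true` flag keeps the current behaviour as the default, so only users who explicitly ask for the extra detail see it.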

davidmartos96 commented 2 years ago

I've also encountered this. An opt-in flag would be great for our use case. Thanks for the useful package @rspilker!