appveyor / ci

AppVeyor community support repository
https://www.appveyor.com

Tests over Multi-targeting builds report incorrect number of tests passed #1821

Open ig-sinicyn opened 6 years ago

ig-sinicyn commented 6 years ago

Link to build

Command to run tests:

 nunit3-console C:\projects\codejam\Blocks\tests\bin\Publish\net35\CodeJam.Blocks-Tests.dll C:\projects\codejam\Blocks\tests\bin\Publish\net40\CodeJam.Blocks-Tests.dll C:\projects\codejam\Blocks\tests\bin\Publish\net45\CodeJam.Blocks-Tests.dll C:\projects\codejam\Blocks\tests\bin\Publish\net461\CodeJam.Blocks-Tests.dll C:\projects\codejam\Experimental\tests\bin\Publish\net461\CodeJam.Experimental-Tests.dll C:\projects\codejam\Experimental\tests-performance\bin\Publish\net461\CodeJam.Experimental-Tests.Performance.dll C:\projects\codejam\Main\tests\bin\Publish\net35\CodeJam-Tests.dll C:\projects\codejam\Main\tests\bin\Publish\net40\CodeJam-Tests.dll C:\projects\codejam\Main\tests\bin\Publish\net45\CodeJam-Tests.dll C:\projects\codejam\Main\tests\bin\Publish\net461\CodeJam-Tests.dll C:\projects\codejam\Main\tests-performance\bin\Publish\net461\CodeJam-Tests.Performance.dll C:\projects\codejam\PerfTests\tests\bin\Publish\net461\CodeJam.PerfTests-Tests.dll --result=myresults.xml;format=AppVeyor

output:

 NUnit Console Runner 3.6.1 
 Copyright (C) 2017 Charlie Poole

 Runtime Environment
    OS Version: Microsoft Windows NT 10.0.14393.0
   CLR Version: 4.0.30319.42000

 Test Files
     C:\projects\codejam\Blocks\tests\bin\Publish\net35\CodeJam.Blocks-Tests.dll
     C:\projects\codejam\Blocks\tests\bin\Publish\net40\CodeJam.Blocks-Tests.dll
     C:\projects\codejam\Blocks\tests\bin\Publish\net45\CodeJam.Blocks-Tests.dll
     C:\projects\codejam\Blocks\tests\bin\Publish\net461\CodeJam.Blocks-Tests.dll
     C:\projects\codejam\Experimental\tests\bin\Publish\net461\CodeJam.Experimental-Tests.dll
     C:\projects\codejam\Experimental\tests-performance\bin\Publish\net461\CodeJam.Experimental-Tests.Performance.dll
     C:\projects\codejam\Main\tests\bin\Publish\net35\CodeJam-Tests.dll
     C:\projects\codejam\Main\tests\bin\Publish\net40\CodeJam-Tests.dll
     C:\projects\codejam\Main\tests\bin\Publish\net45\CodeJam-Tests.dll
     C:\projects\codejam\Main\tests\bin\Publish\net461\CodeJam-Tests.dll
     C:\projects\codejam\Main\tests-performance\bin\Publish\net461\CodeJam-Tests.Performance.dll
     C:\projects\codejam\PerfTests\tests\bin\Publish\net461\CodeJam.PerfTests-Tests.dll
...

 Test Run Summary
   Overall result: Warning
   Test Count: 6782, Passed: 6695, Failed: 0, Warnings: 0, Inconclusive: 1, Skipped: 86
     Skipped Tests - Ignored: 13, Explicit: 73, Other: 0
   Start time: 2017-09-30 09:47:24Z
     End time: 2017-09-30 09:48:34Z
     Duration: 70.013 seconds 
  Results (AppVeyor) saved as myresults.xml

The Tests page on the site lists only 3300 tests, while the report includes at least 6695 passed tests (there should actually be around 10k).

IlyaFinkelshteyn commented 6 years ago

I see that you are writing NUnit test results to XML, but I do not see you uploading this XML to AppVeyor... I see that you are uploading only the .NET Core NUnit test results. Maybe I am missing something; please clarify the scenario.

ig-sinicyn commented 6 years ago

@IlyaFinkelshteyn

I do not see you uploading this XML to AppVeyor

Yep, the results are (somehow) uploaded automatically, no idea why; I think it is because of ;format=AppVeyor. The Tests page contains output from the TestTargeting test for the full framework (Running on mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089), but only from the first run; the test is run for each target framework.

Should I add an explicit upload for the first run?

IlyaFinkelshteyn commented 6 years ago

I have to admit that, honestly, I did not look deep enough. Can you please try the explicit upload, and if the results are still not as they should be, we will look deeper, OK?

ig-sinicyn commented 6 years ago

@IlyaFinkelshteyn Sorry for the late reply, was really busy this week :)

I've tried an explicit upload, no luck. The command:

# run .NET tests ($include and $exclude are defined earlier in the script)
$logFileName = "$env:APPVEYOR_BUILD_FOLDER\_Results\net_nunit_results.xml"
$a = (gci -Include $include -Recurse | `
    where { $_.FullName -match "\\bin\\Publish\\net\d" -and $_.FullName -notmatch $exclude } | `
    select -ExpandProperty FullName)
echo "nunit3-console $a --result=$logFileName"
&"nunit3-console" $a "--result=$logFileName"
if ($LastExitCode -ne 0) { $host.SetShouldExit($LastExitCode) }

# upload the results file; UploadFile throws on failure, so use try/catch
# instead of $LastExitCode (which is only set by external commands)
$wc = New-Object System.Net.WebClient
try {
    $wc.UploadFile("https://ci.appveyor.com/api/testresults/nunit3/$env:APPVEYOR_JOB_ID", "$logFileName")
}
catch {
    echo "FAIL: UploadFile: https://ci.appveyor.com/api/testresults/nunit3/$env:APPVEYOR_JOB_ID from $logFileName"
    $host.SetShouldExit(1)
}

The results are the same; the build is here. I've added the uploaded results to the build artifacts; the NUnit one is here.

It contains four records for the CodeJam.TargetingTests.TestTargeting test, but the build's Tests page shows only one.

IlyaFinkelshteyn commented 6 years ago

@ig-sinicyn Now it is my turn to apologize :) I was underwater too.

We use the test name to create the unique key in the test collection. We currently do not leverage the test id because we need to keep the same test model abstraction for all supported test frameworks.

Those 4 CodeJam.TargetingTests.TestTargeting tests have the same name, and therefore only one was lucky enough to get into the results.
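
To illustrate the effect, here is a minimal sketch (hypothetical, not AppVeyor's actual code) of what keying results by test name does to the four same-named runs:

# Hypothetical sketch: when results are keyed by test name,
# the four same-named records collapse to a single entry.
$results = @(
    @{ Name = 'CodeJam.TargetingTests.TestTargeting'; Framework = 'net35' },
    @{ Name = 'CodeJam.TargetingTests.TestTargeting'; Framework = 'net40' },
    @{ Name = 'CodeJam.TargetingTests.TestTargeting'; Framework = 'net45' },
    @{ Name = 'CodeJam.TargetingTests.TestTargeting'; Framework = 'net461' }
)
$byName = @{}
foreach ($r in $results) {
    $byName[$r.Name] = $r   # later records with the same name overwrite earlier ones
}
echo "$($byName.Count) of $($results.Count) records survive"   # prints: 1 of 4 records survive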

What I would recommend is to use parallel testing or a build matrix (which are technically the same thing) to run the same test with different input in different build jobs.

Note that with a single concurrent job those jobs will run sequentially, but you will get all the tests reported, in a more compact and cleaner form, separated per job.
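
For illustration, a minimal appveyor.yml sketch of such a matrix (the target variable name and the test path are illustrative, assuming nunit3-console is on PATH as in the build above):

environment:
  matrix:
    - target: net35
    - target: net40
    - target: net45
    - target: net461

test_script:
  - ps: nunit3-console "$env:APPVEYOR_BUILD_FOLDER\Main\tests\bin\Publish\$env:target\CodeJam-Tests.dll" "--result=myresults.xml;format=AppVeyor"

Each matrix job then runs only that target's assemblies, so the same-named tests land in different jobs and no longer collide.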

ig-sinicyn commented 6 years ago

@IlyaFinkelshteyn Thanks, will try :) BTW, is there a way to apply the matrix only to the test scripts? Something like

environment:
  matrix:
    - target: net45
    - target: net461
    # etc.

test_script:
  # somehow run a separate test run for each target

? We need to build all configurations at once (because the build creates a NuGet package), and I do not want to rebuild the entire solution for each target framework :)

IlyaFinkelshteyn commented 6 years ago

What you can do is have two projects. The first builds the entire solution and starts the second via the API (sample) at the on_success step. The second has a build matrix or parallel testing setting; each job downloads the artifacts (basic and advanced examples) and runs the respective tests. Also, this blog post, though not directly related, can give you an idea of how to create the second project.
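
A hedged sketch of that on_success step, using AppVeyor's "start build" API (the account name, project slug, and the api_token secure variable are placeholders to substitute with your own):

# Start a build of a (hypothetical) second project via the AppVeyor API.
# $env:api_token is assumed to hold your AppVeyor API token as a secure variable.
$headers = @{ 'Authorization' = "Bearer $env:api_token" }
$body = @{
    accountName = 'your-account'    # placeholder account name
    projectSlug = 'codejam-tests'   # placeholder slug of the second project
    branch      = $env:APPVEYOR_REPO_BRANCH
} | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri 'https://ci.appveyor.com/api/builds' `
    -ContentType 'application/json' -Headers $headers -Body $body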

ig-sinicyn commented 6 years ago

Thanks a lot!

IlyaFinkelshteyn commented 6 years ago

@ig-sinicyn We have started hitting this over and over. Please watch https://github.com/appveyor/ci/issues/1894.