xiaowanmay / googletest

Automatically exported from code.google.com/p/googletest

When using "gtest_repeat" and "gtest_output=xml", output file is overwritten in each iteration #260

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Run your tests with both options: --gtest_repeat=N --gtest_output=xml
2. Observe that at each iteration the previous output file is overwritten.
3. This happens no matter how the output is configured (specifying a directory or a filename).

What is the expected output? What do you see instead?
There should be at least one output-file configuration that writes each iteration's output to a new XML file with a slightly different name. For instance, the files could be suffixed with "-iterationX.xml".

What version of the product are you using? On what operating system?
Google Test 1.4.0 running on Windows XP, built with VS2008.

Please provide any additional information below, such as a code snippet.
The only work-around I found so far was to actually run the test executable as many times as I want, changing the output file name each time.
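The work-around described above can be sketched as a small driver script. This is only an illustration, not part of Google Test: the binary name "./my_tests" and the "results-iterationN.xml" naming scheme are hypothetical stand-ins.

```python
# Sketch of the work-around: invoke the test binary once per iteration,
# pointing --gtest_output at a different XML file each time.
# "./my_tests" is a hypothetical binary name; adjust for your project.
import subprocess


def iteration_commands(binary, iterations):
    """Build one command line per iteration, each with its own XML output file."""
    return [
        [binary, "--gtest_output=xml:results-iteration%d.xml" % i]
        for i in range(1, iterations + 1)
    ]


if __name__ == "__main__":
    for cmd in iteration_commands("./my_tests", 3):
        print(" ".join(cmd))
        # subprocess.run(cmd, check=False)  # uncomment to actually run the tests
```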

Original issue reported on code.google.com by carlos.c...@gmail.com on 12 Mar 2010 at 3:23

GoogleCodeExporter commented 9 years ago
I'm not sure we want to change this.

The purpose of --gtest_repeat is debugging flaky tests, which fail sometimes but not always.  Usually the user will drop into the debugger on the first failure (using --gtest_break_on_failure).  If the first N iterations of the test program pass, there's no value in keeping the first N versions of the XML file, as we know they all say "all tests passed".

Keeping all the XML files can be useful when the user wants to keep going after a failed iteration and look at all the failures afterward.  I'm not sure whether this is useful enough to justify the added complexity, though.

I'm marking this as "won't fix" for now.  If you feel strongly that we should do it, please start a discussion on the mailing list.  Thanks!

Original comment by w...@google.com on 12 Mar 2010 at 4:37

GoogleCodeExporter commented 9 years ago
Thanks for the explanations. Let me try to explain why I think this is of use.

In this case, I'm using Google Test's timing and parametrization features to execute performance tests over combinations of diverse configurations of my system. I'd use the repeat feature to execute these tests many times, generate many XML files, and at the end parse the data to get statistics on those times.
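The post-processing step the reporter describes could look like the sketch below: pull the per-test "time" attributes out of a set of gtest XML reports and compute a mean per test. The XML snippets are hand-written stand-ins for real report files; the test and suite names are invented for illustration.

```python
# Minimal sketch: aggregate per-test durations across several gtest XML
# reports and compute a mean for each test name.
import xml.etree.ElementTree as ET
from collections import defaultdict
from statistics import mean


def collect_times(xml_texts):
    """Map test name -> list of durations (seconds) across all reports."""
    times = defaultdict(list)
    for text in xml_texts:
        root = ET.fromstring(text)
        for case in root.iter("testcase"):
            times[case.get("name")].append(float(case.get("time")))
    return times


# Hand-written stand-ins for two iteration reports of the same test binary.
reports = [
    '<testsuites><testsuite name="Perf">'
    '<testcase name="Query" classname="Perf" time="0.010"/>'
    '</testsuite></testsuites>',
    '<testsuites><testsuite name="Perf">'
    '<testcase name="Query" classname="Perf" time="0.030"/>'
    '</testsuite></testsuites>',
]

stats = {name: mean(ts) for name, ts in collect_times(reports).items()}
print(stats)  # per-test mean duration in seconds
```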

Maybe this is not exactly what Google Test was meant for, but it's certainly useful in that sense.

If that should not be reason enough, I can start this exact same discussion on the mailing list. What do you say? :)

Original comment by carlos.c...@gmail.com on 12 Mar 2010 at 5:50

GoogleCodeExporter commented 9 years ago
As you suspected, this is not what Google Test was designed for. :-)

While adding an option to generate a different XML file in each iteration makes your task easier, it makes gtest slightly more complex (not just the implementation - users will have a heavier cognitive burden, as they need to learn more options).  I don't expect this to be a common enough scenario to be worth it.

Since you need to write some script to collect/parse the XML files in the end, you might as well make your script invoke the test program inside a loop, supplying a different XML file name each time.  Thanks.

Original comment by w...@google.com on 13 Mar 2010 at 6:20

GoogleCodeExporter commented 9 years ago
This issue seems like a bug to me. What if you want to catch issues that are intermittent?  You likely want to run your tests iteratively (perhaps overnight) and have them generate an XML file that gets parsed by another tool that reports the results.  (Take your pick of third-party tool... Jenkins, Team Concert, etc.)

If the XML file generated only reports the results of the last run, it isn't really iterating on the tests.  The alternative is to manually scan through (perhaps dozens of) console logs to see if you can spot a failure.
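The "parse afterwards" step in the overnight scenario could be sketched as below: scan one XML report per iteration (as the proposed "-iterationX.xml" naming would produce) and list which iterations contained failures. The report contents are inlined stand-ins for real files, and the test names are invented.

```python
# Sketch: find which iteration reports contain at least one <failure>
# element, instead of scanning console logs by hand.
import xml.etree.ElementTree as ET


def failed_iterations(reports):
    """Return 1-based indices of reports containing at least one <failure>."""
    failed = []
    for i, text in enumerate(reports, start=1):
        root = ET.fromstring(text)
        if any(case.find("failure") is not None for case in root.iter("testcase")):
            failed.append(i)
    return failed


# Stand-ins for three per-iteration report files of a flaky test.
passing = ('<testsuites><testsuite name="S">'
           '<testcase name="Flaky" classname="S"/>'
           '</testsuite></testsuites>')
failing = ('<testsuites><testsuite name="S">'
           '<testcase name="Flaky" classname="S">'
           '<failure message="expected 1, got 2"/>'
           '</testcase></testsuite></testsuites>')

reports = [passing, failing, passing]
print(failed_iterations(reports))  # [2]
```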

Original comment by cynical...@gmail.com on 5 Jun 2014 at 10:08