google / open-vcdiff

An encoder/decoder for the VCDIFF (RFC3284) format
Apache License 2.0

Unit tests pass with errors #5

Closed Steelskin closed 9 years ago

Steelskin commented 9 years ago

Original issue 5 created by stas@sl.iae.nsk.su on 2008-09-08T11:01:04.000Z:

The following unit tests report errors to stderr: vcencoder_test, addrcache_test, vcdecoder_test, and codetable_test.

However, all these tests output "[ PASSED ]" to stdout

The project was built from source files with VS2008, under WinXP SP2.

To reproduce the problem:

  1. Open open-vcdiff.sln with VS2008
  2. Have the project converted by VS
  3. Re-build the solution
  4. Run the unit tests (found in the Release sub-directory) one by one (they are also run automatically as part of the build process)

I see "ERROR" lines of output. See the attached stderr files.

Platform: Microsoft Visual Studio 2008, Version 9.0.21022.8 RTM
OS: Microsoft Windows XP Professional, Version 5.1.2600, Service Pack 2, Build 2600

The vcdecoder_test error output mentions a "VCDIFF delta file"; no such file is provided, nor is one generated by another test.

Steelskin commented 9 years ago

Old comments:

The ERROR lines of output are expected as part of the unit tests.

When the open-vcdiff encoder or decoder encounters an error, it logs an error message to stderr and returns false from the interface -- for example, from VCDiffStreamingDecoder::DecodeChunk(), defined in vcdecoder.h.
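As a rough sketch of what that looks like from the caller's side (assuming the streaming decoder interface declared in google/vcdecoder.h, which can decode into a std::string), the calling code only sees the boolean result; the ERROR line on stderr is written by the library itself:

```cpp
#include <iostream>
#include <string>

#include <google/vcdecoder.h>

// Decode `delta` against `dictionary` into `*target`. Returns false if the
// delta is malformed; by that point open-vcdiff has already written its
// "ERROR: VCDiff: ..." message to stderr.
bool DecodeDelta(const std::string& dictionary,
                 const std::string& delta,
                 std::string* target) {
  open_vcdiff::VCDiffStreamingDecoder decoder;
  decoder.StartDecoding(dictionary.data(), dictionary.size());
  if (!decoder.DecodeChunk(delta.data(), delta.size(), target)) {
    std::cerr << "could not decode delta\n";  // caller-side handling
    return false;
  }
  return decoder.FinishDecoding();
}
```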

As an example, in the following three lines of output from the build:

[ RUN      ] CodeTableTest.MissingAdd
ERROR: VCDiff: Bad code table; there is no opcode for inst ADD, size 0,  mode 0
[       OK ] CodeTableTest.MissingAdd

... the first and third lines are produced by the unit test framework, while the middle ERROR line is the error message reported by open-vcdiff. The unit test CodeTableTest.MissingAdd makes sure that if open-vcdiff is presented with an invalid code table, it reports an error condition. So the error is the correct, expected result of the test.

If a unit test fails, the unit test executable will return a non-zero exit code, which will cause Visual Studio to report a failure condition and stop the build. For open-vcdiff 0.2, if none of the unit tests fails, you will see the following line at the end of a complete rebuild:

========== Build: 18 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========

This is enough to assure you that none of the unit tests has failed.

If you are integrating open-vcdiff into an environment in which you don't want any error logging information sent to stderr, then you can change the definition of LogMessage in logging.cc (part of the vcdcom project) to take some other action when a message is logged. Your replacement definition must return a class that has the << operator defined for strings, integers, etc., because the LOG macro has the same syntax as a C++ output stream like cerr.
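For illustration only, a replacement could return an object of roughly this shape; the names NullLogStream and SilentLogMessage are hypothetical, and the real hook is the LogMessage definition in logging.cc:

```cpp
// Hypothetical sketch of a sink a replacement LogMessage could return: an
// object whose operator<< accepts strings, integers, etc. and silently
// discards them, so existing LOG(...) << "message" << value; call sites
// still compile but produce no output on stderr.
class NullLogStream {
 public:
  template <typename T>
  NullLogStream& operator<<(const T&) {
    return *this;  // swallow the value instead of writing it anywhere
  }
};

NullLogStream& SilentLogMessage() {
  static NullLogStream stream;
  return stream;
}
```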

The vcdecoder_test error output mentions a "VCDIFF delta file"; no such file is provided, nor is one generated by another test.

The term "delta file" has a particular meaning in the context of VCDIFF: it refers to the output of the encoder, or the input of the decoder. It does not necessarily refer to a file on disk. See Section 4 of RFC 3284 (http://www.ietf.org/rfc/rfc3284.txt), which describes the expected contents of a "delta file". In the case of vcdecoder_test, the "delta file" is actually a static series of hard-coded bytes, which are used to confirm that the decoder produces the correct output (or the proper error message) given the delta file as input.
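As a sketch of that point, assuming the simple (non-streaming) VCDiffEncoder and VCDiffDecoder interfaces from google/vcencoder.h and google/vcdecoder.h, a "delta file" can live entirely in a std::string that is passed from the encoder to the decoder:

```cpp
#include <cassert>
#include <string>

#include <google/vcdecoder.h>
#include <google/vcencoder.h>

int main() {
  const std::string dictionary = "The quick brown fox jumps over the lazy dog";
  const std::string target = "The quick brown fox jumps over the lazy cat";

  std::string delta;  // the "delta file", held entirely in memory
  open_vcdiff::VCDiffEncoder encoder(dictionary.data(), dictionary.size());
  if (!encoder.Encode(target.data(), target.size(), &delta)) {
    return 1;  // encoder logged an ERROR line to stderr
  }

  std::string decoded;
  open_vcdiff::VCDiffDecoder decoder;
  if (!decoder.Decode(dictionary.data(), dictionary.size(), delta, &decoded)) {
    return 1;  // decoder logged an ERROR line to stderr
  }
  assert(decoded == target);
  return 0;
}
```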

P.S. I thought about suppressing the ERROR messages to make the unit test output cleaner, but in most cases it is important to verify not just that the encoder or decoder produces some sort of error, but that the error message is the correct one.

Answer from reporter:

All you say is perfectly sound. Agreed. Unit tests must exercise as many execution paths as possible, including error handling. I think that makes the task less automated, because it involves looking through the output in order to verify the error messages, which somewhat runs counter to the philosophy of automated batch testing. The strictest approach would be to have the test environment capture and verify the error messages, thus fully encapsulating the tested code. And this would make the output cleaner, too. Though I do not know whether googletest facilitates that.

Reply:

I agree with you. Verifying the test output by hand is not ideal and can cause regressions to go unnoticed. Currently we only have automated checks for the output of the death tests (those that are expected to crash or exit). I'll change this defect to an enhancement request and keep it open. The goal will be to print encoder/decoder errors only if they differ from the expected output.
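One possible shape for such a check, sketched with googletest's stderr-capture helpers (note that CaptureStderr()/GetCapturedStderr() live in the testing::internal namespace, so they are not part of the documented public API); the invalid delta bytes here are arbitrary:

```cpp
#include <string>

#include <gtest/gtest.h>
#include <google/vcdecoder.h>

// Hypothetical test: capture what the decoder writes to stderr while it
// rejects invalid input, then assert on the captured message instead of
// letting it spill into the test log.
TEST(ErrorOutput, DecoderReportsExpectedMessage) {
  const std::string dictionary = "dictionary contents";
  const std::string bad_delta = "not a VCDIFF delta";  // arbitrary invalid bytes

  testing::internal::CaptureStderr();
  open_vcdiff::VCDiffStreamingDecoder decoder;
  std::string output;
  decoder.StartDecoding(dictionary.data(), dictionary.size());
  EXPECT_FALSE(decoder.DecodeChunk(bad_delta.data(), bad_delta.size(), &output));
  const std::string captured = testing::internal::GetCapturedStderr();

  // Only complain if the logged message is not the expected one.
  EXPECT_NE(captured.find("VCDiff"), std::string::npos) << captured;
}
```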

This issue seems to be a duplicate of #33, as the root cause is the same: we need to change the logging mechanism. Closing.